
Leaderboard


Popular Content

Showing content with the highest reputation since 08/05/2019 in Posts

  1. 9 points
    So I wasn't there, but there was a public announcement at GDevCon about a new edition of LabVIEW called Community Edition, which is the LabVIEW Professional version (I read that as application builder included) and completely free, with no watermarks, for non-commercial use. NI hasn't made any post about timelines or other details yet, but I hear those are in the works. This is obviously a huge thing for LabVIEW, as any monetary barrier to entry will discourage new developers from experimenting with LabVIEW. And then there is the fact that those who are familiar with LabVIEW can keep up with the newest version outside of their company, or when they are between jobs.
  2. 7 points
    The core of our business has changed. Fewer users are developing their own test applications; instead, they're buying something off the shelf like TestStand. Fewer users are developing their own data acquisition software; instead, they're buying something off the shelf like FlexLogger. This trend significantly alters the role of LabVIEW (CG and NXG) in the NI ecosystem -- it becomes far less important to support whole application development (though, of course, we still do and will) and far more important to support "just a bit of customization" when the pre-built tools fail. A lot of software has an endless array of switches and options, but LV provides the ability for a user to write a custom routine to specify the behavior they want in some corner niche of a product. Think of something like Signal Express, able to generate a sine wave, square wave, triangle wave or "pick a VI that generates the wave that you need" wave. What's funny about this is that although the app devs are growing rarer, they're also individually growing more profitable for NI as a whole, because the companies still paying to develop custom software are generally the ones buying a lot of hardware to do something unique in the world (or not in the world, in the case of SpaceX, Blue Origin, Ad Astra, etc.). So I don't expect the big-scale parts of LabVIEW to vanish, but I do expect them to be driven by specific requests from megascale customers rather than from the massed collective. The massed collective will be driving more of the IDE developments. At least, that's my suspicion at this time based on the presentations I've seen.
  3. 7 points
    I've exported the OpenG sources from Sourceforge SVN to Github. It's located here: https://github.com/Open-G I'm hoping this will encourage collaboration and modernization of the OpenG project. Pull requests are a thing with Git, so contributions can be encouraged and actually used instead of dying on the vine.
  4. 6 points
    Thanks AQ, you are the first to actually spell this out in words that make sense to engineers. Not sure too many here are going to like it though! ps: I liked your post due to its honesty and absence of marketing weasel words, not because I think this is a particularly good strategy for NI. Maybe I have just had a weird career, but in the 20 years or so I have been developing LabVIEW-based solutions, virtually never would an off-the-shelf piece of software like Signal Express or similar come anywhere close to doing what I needed it to, and it would require so much customisation that the benefit would be marginal. To me LabVIEW is a programming language or RAD tool, and the responsibility of NI is to deliver first-class hardware with amazing software to help me bring the two together, and that is it.
  5. 6 points
    I don't mind the new green on the landing page of ni.com, but elsewhere on the site the new theme is a bit too much. I wanted to fix the near-invisible links that @LogMAN ran into, but got a bit carried away: If anyone is interested in using the blue style, you can download it from here. Be warned it's not perfect; there are still lots of green bits on mouse-over etc., but I find overall it makes the site much more readable. If blue isn't your thing, the primary color can be changed by setting the root --forrest-green color to something else.
  6. 6 points
    The more I look at the center logo, the more I believe it captures exactly the kind of excitement generated by the whole operation.
  7. 6 points
    Why are so many things just that little bit harder or weirder in NXG? I am trying to use it to make my first "real" application, in this case a relatively simple WebVI. I put this list down in the hope someone can tell me I am being dumb and there is a sensible way to do these things:
    - Why can I not easily branch off a wire by clicking on it somewhere? Now I have to right-click and select the option to create a wire branch.
    - Why can I not right-click on a primitive to open the sub-palette for that thing to give me similar items? I can right-click and replace or right-click and insert... For example, I have an existing 2D array wire I want to get the size of; there is no way for me to right-click the wire to quickly open the array palette and then drop down a Size primitive.
    - I have to relearn the whole palette structure as all the icons have changed. OK, that is fine, so let me explore a bit and poke around, but I cannot keep a palette open by pinning it? (It turns out I can do this if I start the browse from the left-hand palette and then weirdly click the << arrow, but I am so used to opening the palette by right-clicking on the diagram.) Arg, then the pop-up help covers over the next item in the list 😞
    - The Align menu is so much less usable in that drab gray and single line. There was nothing wrong with the way it is implemented in Current Gen, why change this?
    - The GUI is so dull in general. The colours are washed out and grey everywhere is just depressing. It sounds silly but it makes me not want to use it.
    - Sorry, but MDI is not a suitable technique for anything other than the most trivial of applications.
    - I really like the zoom, but please let us pan with the middle mouse button or something similar.
    - Please pop open menu items as soon as I browse into them, rather than forcing me to click (looking at you, Case Structure cases and Align menus).
    - Why are the icons so confusing? Please can someone explain how the picture below conveys any information that this is array concatenation.
    - Why can I not run a SubVI in a WebVI? In order to test the correctness of a piece of code I have to move it out of the .gcomp to run in isolation, and this actually moves the code on disk.
    - What was fundamentally wrong with the Project Window in Current Gen? I have a vertical monitor that I use exclusively for displaying the Project window and it is amazing. I don't particularly like the new implementation, but at least let me undock it! I am also not really filled with confidence that as my project grows in size it will not become overwhelming (yet another reason to keep Virtual Folders).
    This is just a small subset of the items I am currently struggling with. In general I am quite forgiving of new software, but I think NXG has been baking in the oven for something like 8 years! I appreciate that NXG has not been designed for me; rather, I suspect it is targeted at a whole new audience of LabVIEW developers. As such I know my muscle memory is going to be really detrimental in getting me up to speed with this new way of doing things, so I am trying really hard to not let that get in the way of my journey. Something deep down just makes me worry that the essence of what makes LabVIEW (current gen) so special has been lost in translation. It just feels like too many decisions have been made by people who are not actually very familiar with LabVIEW.
    This makes me a bit sad, as I have no doubt that a ridiculous amount of engineering effort and love has gone into NXG (and I am under no illusions at the scale of the task of rewriting current gen). All in all my experience trying to develop a non-trivial (not by much though) application in NXG has further cemented my thoughts that I am going to have to stick with current gen for the foreseeable future. That said, strength and courage to NI. I will check back again in a few years. ps: I am really excited for the WebVI technology. Please port it to Current Gen so I can actually use it πŸ™‚
  8. 6 points
    Thanks for putting down all your thoughts and providing examples, Neil. I agree with every point you've made. Have you used the Shared Library Interface editor yet? That's some next level UI inconsistency. I wrote a couple of blog posts on my experience converting a small (< 100 VIs, < 10 classes) LabVIEW project to NXG (see Let's Convert A LabVIEW Project to LabVIEW NXG! Part 1 and Part 2). During the process I made a lengthy list of issues and came to the same conclusions many people have voiced in this thread. Of the issues uncovered during the conversion, some were due to missing features or bugs, some a lack of understanding on my part, but a surprising number were due to interesting design choices. The TL;DR of the blog is there is nothing in NXG for me to want to continue using it, let alone switch to it from LabVIEW. Which is sad because I was really hoping to find something to look forward to. Here's hoping for a LabVIEW NXG: Despecialized Edition!
  9. 6 points
    The New Data Value Ref and Delete Data Value Ref nodes will be able to be in inline VIs (and thus malleable VIs) in LV 2020.
  10. 6 points
    View File: Hooovahh's Tremendous TDMS Toolkit
    This toolkit combines a variety of TDMS functions and tools into a single package. The initial release has a variety of features:
    - Classes for Circular, Periodic, Size, and Time of Day TDMS generation, with examples of using each
    - Reading and writing clusters into TDMS channels
    - XLSX conversion example
    - File operations for combining files, renaming, moving, and saving in memory to zip
    - Basic function for splitting a TDMS file into segments (useful for corrupt files)
    - Reorder TDMS Channel with demo
    There is plenty of room for improvements, but I wanted to get this out there and gauge interest. The variety of classes for doing things, along with VIMs and class adaptation, makes using them easier. If I get time I plan on making some blog posts explaining some of the benefits of TDMS, along with best practices.
    Submitter: hooovahh | Submitted: 12/12/2019 | Category: *Uncertified* | LabVIEW Version: 2018 | License Type: BSD (Most common)
  11. 6 points
    All of the presentations are now on the LabVIEW Wiki. You can find them at: https://labviewwiki.org/wiki/Americas_CLA_Summit_2019 Thanks to Kevin Shirey and Mark Balla for producing the videos, and to all those who volunteered to run the cameras. This is an awesome resource to be able to go re-watch and review these great presentations again, or for those that couldn't join us in person to be able to view them as well.
  12. 6 points
    @Jim Kring, it seems to me that the export of the code has gotten a positive response from the community. However, I may be wrong. If anyone has any opinion either way, please come forward. As you can see in this thread, it appears the community has rallied around this effort. This is why I emailed you to come here and share your thoughts. In the past, OpenG was a great venue to showcase how a bunch of passionate LabVIEW users can come together and collaborate on something useful. The passion is clearly still there, as shown by the numerous discussions here. The general coding community has moved to Git, with GitHub being the hub. This seems like the logical next step. Who knows what this initiative will lead to. However, I'm expecting that placing OpenG in a neutral GitHub repo will provide the spark and the tools to facilitate open collaboration; then the community can drive the future. The community is full of smart people who have a desire for clean, tested code. And if issues come up, LAVA discussions (or GitHub issues) are there to hash things out. When LAVA offered to host all OpenG discussions back in 2011, it was clear that the community wanted to help. When @jgcode put his standards together for how code should be discussed at that time, it was an exciting time. Since then, many people have come forward with offers to add new code into OpenG and fix bugs. For example, @drjdpowell offered his awesome SQLite toolkit for inclusion into OpenG. He got no response either way. It's a shame to have a platform and forums to allow people to post and discuss OpenG code and then ignore it. If you have ideas on what the future of OpenG is, I'm hoping it's to be more transparent and inclusive. Providing the tools, resources and some safety checks along the way is the best way to facilitate passionate individuals to dive in. Do you think keeping the status quo of the past 10 years makes sense? It seems to me that the community disagrees. What do you think?
  13. 5 points
    @Aristos Queue, I was part of the private preview event, and afterwards there were several comments basically saying "I watched all of this and have no idea what NI is announcing", and multiple requests that NI make it clear what they are trying to announce. I thought maybe the public event would be more clear. Nope, dozens of comments were flying in asking what, if anything, was changing as the event went on. After the event ended my favorite comment was "That was a great introduction, but when does the actual event start?". Threads on Reddit, LAVA, and NI all have had various amounts of "What does this mean?" other than a new logo and color scheme. After reading and listening to NI's feedback, only your post made it clear to me what NI was trying to say. So while NI marketing may think they are making it loud and clear, the community has also been pretty loud themselves with their statements that they aren't sure what NI was trying to say.
  14. 5 points
    The logo is pretty uninspired and looks lifted from this company. It's going to take some time to get used to the green theme too - in my mind NI = blue + white. I wonder if NXG will get a green coat of paint. I'll reserve judgement on the content until I've seen the webinar, but it's heavy with cringe-worthy marketing speak. Also, a moment of silence for Nigel the NI eagle. Soar Ambitiouslyβ„’, N πŸ¦…
  15. 5 points
    My experiment with NXG is now over. A simple web page has taken about 5x longer than I had planned for. Some of this is due to me underestimating the nuances of the web module, but most of it has been me fighting the new IDE. The other night, instead of happily diving into some after-dinner software development fun, I was actually filled with dread at the thought of having to open NXG and finish what I started; it really is that unpleasant to use. For me, NXG is nowhere near usable in a real project that I expect to have to develop, maintain and make money off. Some stuff seems to work, but everything has this toy feel about it. It is ugly, sluggish, unintuitive and absolutely repulsive to develop with. Sorry that sounds harsh, but it has been in development for over 8 years and has an incredibly strong pedigree to compare against. NI have taken almost everything that made current gen so special and thrown it in the bin. NXG is clearly being managed and developed by people who have never actually become intimately familiar with LabVIEW.
    I will check back in a few years' time, but at this point I am extremely disappointed and now need to think very strongly about where my professional systems development career is going. Current Gen is going to be sunsetted at some point and will fade into irrelevance due to its closed-source nature (not that open sourcing something of its complexity would help now; it is too late for that). I could wait a few years if I had confidence that the ship was sailing in the right direction, but apart from AQ, who consistently has the courage to actually reply to these threads, there is virtually nothing coming back from NI, and I feel that the HMS NXG-itanic is sailing full steam ahead towards its doom.
    NI is run by extremely clever people who have no doubt done their sums and analyses and are charting the course for NXG that they think will bring them the most success in the long run. I have a strong appreciation for just how big an undertaking something like NXG is, but given where it is after 8 years of development, it just seems that I am not the target market and there is not too much I can do about it. Happily, given how robust NI hardware and current gen LabVIEW actually are, I suspect there will be quite a bit of work supporting old systems for at least another decade (perhaps more).
  16. 5 points
    For a final case: sadly, there aren't any non-deprecated items to replace that VI, which makes this work for Clusterzilla. ArrayToCluster.vim
  17. 5 points
    Hey LAVA friends. I'm going to be doing a live-stream on YouTube next Tuesday, April 28 (10 AM Pacific) to go over LabVIEW Community Edition. I'd love to see you guys there. It'll be interactive, with chat for your questions, and I will be making an attempt to talk to a Raspberry Pi and Arduino. If you're curious about low-cost hardware or just want to find out what's new in the latest LabVIEW, join me here: https://youtu.be/4HLVqYXpxIo. Edit: If any of you have done any projects with the supported hardware, let me know and I can mention you or pull you into the discussion. - Thanks.
  18. 5 points
    The main difference between LabVIEW and a C compiled file is that the compiled code of each VI is contained in that VI, and the LabVIEW Runtime then links these code chunks together when it loads the VIs. In C, the code chunks are per C source file, put into object files, and all those object files are then linked together when building the final LIB, DLL or EXE. Such an executable image still has relocation tables that the loader has to adjust when the code is loaded at a different memory address than the preferred address defined at link time, but that is a pretty simple step. The LabVIEW runtime linker has to do a bit more work, which the linker part of the C compiler has mostly already done. For the rest, LabVIEW's execution of code is much more like a C compiled executable than any virtual machine language like Java or .NET's IL bytecode, as the compiled code in the VIs is fully native machine code. Also, bytecode is by nature address-independent, while machine code, although it can use location-independent addressing, usually has some absolute addresses in there. It's very easy to jump to conclusions from looking at a bit of assembly code in the LabVIEW runtime engine, but that does not usually mean that those conclusions are correct. In this case the code chunks in each VI are real compiled machine code targeted directly at the CPU.
    In the past this was done through a proprietary compiler engine that created the final machine code in several stages. It already included the separation where the diagram was first translated into a directed graph, which was then optimized in several steps, and the final result was then put through a target-specific compiler stage that created the actual machine code. This was, however, initially done in such a way that it wasn't too easy to switch the target-specific compiler stage on the fly, so cross-compiling wasn't very easy to add when they developed the Real-Time addition to LabVIEW. They eventually improved that with a unified API to the compiler stages so that they could be switched on the fly to allow cross-compilation for the real-time targets, which eventually appeared in LabVIEW 7.
    LabVIEW 2009 finally introduced the DFIR (Dataflow Intermediate Representation) by formalizing the directed graph representation further so that more optimizations could be performed on it, and in LabVIEW 2010 it could be used as an input to the LLVM (Low-Level Virtual Machine) compiler infrastructure. While this would theoretically allow leaving the code in an intermediate-language form that is only compiled on the actual target at runtime, this is not what NI chose to do in LabVIEW, for several reasons. LLVM creates fully compiled machine code for the target, which is then stored (in the VI for a built executable or if code separation is not enabled, otherwise in the compile cache). When you load a VI hierarchy into memory, all the code chunks for each VI are loaded into memory and, based on linker information created at compile time and also stored in the VI, the linker in the LabVIEW runtime makes several modifications to each code chunk to make it executable at the location where it is loaded and to call into the correct other code chunks that each VI consists of. This is indeed a bit more than what the PE loader in Windows needs to do when loading an EXE or DLL, but it isn't really very different.
    The only real difference is that the linking of the COFF object modules into one bigger image has already been done by the C compiler when compiling the executable image, and that LabVIEW isn't really using COFF or OMF to store its executables, as it does all the loading and linking of the compiled code itself and doesn't need to rely on an OS-specific binary image loader.
  19. 5 points
    Found a fix for this. It should be fixed in LV 2020. The bug ONLY affects copying from a 1-element cluster of variant to a variant. Or a cluster of cluster of variant to a variant. Or... you get the idea... "any number of cluster-shells all containing 1 element, culminating in a variant" being copied to a variant.
    This was a fun bug... consider this: The memory layout for a byte-size integer is { 8 bits }. The memory layout for a cluster of 1 byte-size integer is { 8 bits }. They are identical. "Cluster" doesn't add any bits to the data. That's just the type descriptor for the data in that location. This is true for any 1-element cluster: the memory layout of the cluster is the same as the memory layout for the element by itself. This is true even if the 1 element is a complex type such as a nested cluster of many elements or an array.
    When a VI is compiling, if data needs to copy (say, when a wire forks), LabVIEW generates a copy procedure in assembly code. For trivial types such as integers, the copy is just a MOV instruction in assembly code. But for compound types, we may need to generate a whole block of code. At some point, the complexity is such that we would rather generate the copy procedure once and have the wire fork call that procedure. We want to generate as few of those as we have to -- it keeps the code segment small, which minimizes page faults, among other advantages. We also generate copy procedures for compound coercions (like copying a cluster of 5 doubles into a cluster of 5 integers).
    Given all that, LabVIEW has some code that says, "I assume that type propagation has done its job and is only asking me to generate valid copy procs. So if I am asked to copy X to Y, I will remove all the 1-element shells from X and all the 1-element shells from Y, and then I will check to see if I have an existing copy proc." Nowhere in LabVIEW will we ever allow you to wire a 1-element cluster of an int32 directly to an int32. So the generator code never gets that case. In fact, the only time that we allow a 1-element cluster of X to coerce directly to X is... variant. The bug was that we were asking for a copy proc for this coercion, and the code was saying, "Oh, I have one of those already... just re-use copy-variant-to-variant." That will never crash, but it is also definitely not the right result! We had to add a check to handle variant specially, because variant can eat all the other types. So if the destination is variant, we have to be more precise about the copy proc re-use. I thought this was a neat corner case.
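    Not LabVIEW code, but an easy way to convince yourself of the "a 1-element shell adds no bits" point above: in C-style terms, a struct with a single field is the same size as the field itself. A minimal Python/ctypes sketch (my own illustration, unrelated to LabVIEW's actual implementation):

    import ctypes

    class OneByteShell(ctypes.Structure):
        # analogue of a 1-element cluster holding a single 8-bit integer
        _fields_ = [("x", ctypes.c_int8)]

    class NestedShell(ctypes.Structure):
        # a shell around a shell still adds no bits
        _fields_ = [("inner", OneByteShell)]

    print(ctypes.sizeof(ctypes.c_int8))  # 1
    print(ctypes.sizeof(OneByteShell))   # 1
    print(ctypes.sizeof(NestedShell))    # 1

    Only the type descriptor distinguishes the three; the bytes in memory are laid out identically, which is exactly why the copy-proc re-use described above looked safe at first glance.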
  20. 5 points
    At the 2019 Americas Certified LabVIEW Architect Summit, GCentral was introduced to the LabVIEW Community. GCentral is a non-profit organization (incorporated September 2019) composed of G community leaders creating a platform for programmers to find/use, contribute, and co-develop G code packages and collaboration resources. While GCentral is leading the charge to solve these problems, we will closely align with the community's needs. This forum is designed to connect GCentral's efforts with the community's needs. Some links to be aware of:
    - GCentral.org
    - LabVIEW Wiki
    - Twitter (@GCentralOrg)
    - LinkedIn
    - Instagram (gcentralorg)
    - Facebook
    - Website
    - GitHub Repo
  21. 4 points
    Hi Everyone, I was just alerted to this discussion (thanks @drjdpowell), so I wanted to be sure I heard all the feedback, to make sure we're staying on top of it. Before I dive in, I'll mention there is a version 2020.1 in beta right now (if you can't access this, please be sure you sign up for the beta and/or send me a PM). This addresses many of the points raised here, so please check it out. Also, it's important to mention that VIPM 2020 had a LOT of work (and love) put into it, and the beta+launch was in the middle of COVID-19, so things didn't get as many eyes (i.e. beta testers) as usual. That's unfortunate and we're working hard on the new 2020.1 build. Any feedback/issues you have are important, so please do post them and know we're listening. It's hard to keep tabs on conversations that happen in various LAVA threads, so if you'd like to see something improved/resolved, please do post it in the VIPM forum or PM me. I'll try hard to respond to the good points everyone raised.
    First, I'll mention that VIPM 2020.1 no longer requires a sign-in when installing packages from the VIPM Community Repository. In 2020.0, this was causing issues for some users due to their enterprise IT/networking configuration. And, as you've all mentioned, some users really didn't like it, which is fair. There are still some features that use sign-in, like starring packages, and there will be a prompt when those features are invoked.
    @LogMAN Actually, nothing changed with how VIPM installs itself in 2020, as compared to 2019 (and older versions). The issue was that the VIPM 2020.0 (and older) installer framework (e.g. Advanced Installer) needed to be updated for newer versions of Windows. In VIPM 2020.1 (now in beta -- see link above) we've addressed this issue and it should install without issues. That said, there were some bugs in NI's LabVIEW 2020 installer that caused it to fail to correctly install VIPM 2020 in some cases -- e.g. the issue where it sometimes would fail to start. NI has been working with JKI to fix this.
    @Neil Pate That's fair. We added this feature to make VIPM much more responsive when users are opening packages -- VIPM sometimes runs as a background task, so that it doesn't have to reload itself for each of these operations. This can be disabled in the VIPM settings file, here: "C:\ProgramData\JKI\VIPM\Settings.ini"
    [General]
    Start VIPM when computer starts?="FALSE"
    Start VIPM when LabVIEW starts?="FALSE"
    @Michael Aivaliotis Thanks for helping everyone out. Older versions of VIPM are available to users -- we have a link on the vipm.io/download page for users. However, since older versions of VIPM use outdated LV runtime engines that are no longer supported by NI and don't work well on newer OSes, we don't encourage users to use them -- it often creates more problems for them, and a support burden for JKI and NI. As such, we ask that people do not post older downloads and instead direct people to get them from either the NI or the VIPM websites. Again, thanks for helping people out.
    Also, I appreciate everyone's feedback -- I know when things don't work well, it's super frustrating. VIPM 2020 had some bugs and left room for improvement, because of all the new features that had to get out the door in time for the LabVIEW 2020 launch date, and we didn't have the typical level of beta testing. I hope 2020.1 resolves those, and if it's still missing things or not working right, let me know and I'll take responsibility for those issues. Kind Regards, -Jim
  22. 4 points
    In an attempt to standardize my handling of formatting timestamps as text, I have added functions to "JDP Science Common Utilities" (a VI support package, on the Tools Network). This is used by SQLite Library (version just released) and JSONtext (next release), but they can also be used by themselves (LabVIEW 2013+). Follows RFC3339, and supports local-time offsets.
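    For anyone unfamiliar with the format, RFC 3339 timestamps with a local-time offset look like the strings produced below; a quick Python sketch of the same formatting (just to show the text format, not the package's implementation):

    from datetime import datetime, timezone

    # Local wall-clock time with its UTC offset, RFC 3339 style,
    # e.g. "2019-08-05T14:03:12.123456+02:00"
    local_now = datetime.now(timezone.utc).astimezone()
    print(local_now.isoformat())

    # The same instant expressed in UTC, e.g. "2019-08-05T12:03:12.123456+00:00"
    print(local_now.astimezone(timezone.utc).isoformat())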
  23. 4 points
    I discussed it with @Mark Balla and we figured out a way to get all the old videos that used to be on the Tecnova site up to YouTube. It will take a few days, but this is in progress. Probably within a week all the videos should be up. I will update this thread with progress.
  24. 4 points
    Hi all, friendly LAVA moderator here. I'd just like to gently remind everyone we are all human, and are at times emotional, and at times frustrated with colleagues we interact with. Let's all take a deep breath and try to continue to give criticism in a form that will be most helpful. I know I've at times flown off the handle online, especially on the subject of NXG. I personally don't think I've shared code between projects for any real project anytime recently. But I can remember times that I did it and didn't have any real problem, likely because I was mindful of what affected what. X, do you have some recent examples of code you shared between projects? What made you make that decision, and why were the other options seen as less desirable? Also, it sounds like this isn't explicitly forbidden in NXG.
  25. 4 points
    LabVIEW Community Edition rocks! In order to help kick off this momentous occasion, I've put together an example alarm clock. It is broken down into 6 lessons (so far), taking you from blinking an LED through creating an alarm clock with a state machine. To download or learn about LabVIEW Community Edition, check out GCentral.org. Check out the alarm clock here! <-(http://bit.ly/ChrisCilino_LabVIEWCommunityAndRP)
  26. 4 points
    So first I want to acknowledge some areas we could have done better. I have been involved in a number of discussions around what our migration strategy looks like, and the biggest gap we immediately identified is a lack of clear external messaging, so that is something we are looking to address. I have talked to all different kinds of users, and in a relatively short discussion we are able to align on whether or not NXG is ready for their use case. That is great, but you should be able to make that determination yourself by looking at public documentation; it should not require a call with me or a frustrating session of attempting to migrate an application. NI has tried to provide this in the past with the LabVIEW roadmap, but it doesn't have enough detail for you to make a high-confidence decision. For example, it is not possible to differentiate between functionality that is not complete yet versus functionality which was intentionally omitted or intentionally changed. We have also not done a very good job of explaining the background of specific decisions - which leads to some of the feedback in this thread where it seems like we have changed everything for no reason. Certainly I can point to some changes which were mistakes, and generally speaking we have the flexibility to undo those changes, but many of the bigger changes were intentional, designed, tested changes which we believe are an improvement. We intend to do a better job of publicly documenting those decisions.
    It is hard to overstate the reorganization efforts that have happened within NI over the last couple of years. Last NIWeek Eric Starkloff talked about how we were organizing the company around business units instead of around products, and that has had broad-reaching impact, but we were making major shifts in the way we built products in the last couple of years anyway. Like many of the large software companies, we have been shifting to a user-centric development model where we actively try to bring the user into the development process instead of thinking we know what they need and developing in secret. A good example of this shift is the introduction of the product owner role in NI R&D, a role focused on ensuring we are delivering the right value to our users. Both the product owners and product planners have long histories of working with LabVIEW, so you should not feel like the decision makers working on LabVIEW NXG are completely detached from LabVIEW - in many cases the decision makers for the two products are the same. There have definitely been teething pains with this shift, but we are getting better at it.
    I saw several comments about feeling left out of the decision process, and there are certainly some valid concerns, but I would also point to the level of engagement over the last few years where the product owners and product planners have attended and solicited input at the CLA summits, GDevCon and NIWeek. We also have quite a few targeted user engagements when we are working on defining features and workflows. We can absolutely do more, and we plan to, but many significant product decisions have been made as a result of those engagements. Remember that there are a lot of LabVIEW users out there, and we can't talk to all of them. A light-hearted analogy would be seeing the results of a national poll and saying - 'well nobody asked me!' That being said, I do want to increase my engagement with this community, and there is clearly a lot of passion about making LabVIEW NXG the best it can be.
    I would love to set up some 1x1 interviews with those of you who are interested so I can better understand how you are using LabVIEW today. I will start a different thread about that.
    Back to the main point - it is important to understand what LabVIEW NXG is today versus what it will become. LabVIEW NXG today is not ready for most of the applications of this community. You are some of the most advanced LabVIEW users around, and are collectively using nearly every feature in the product. As Stephen said early in this thread - NXG has many nice things, it just isn't ready for him (or most of you) yet. We are trying hard to get there and have made substantial progress, but there are still functionality gaps. We expect that you will continue to use LabVIEW for at least a few more years until NXG is more complete for your workflows. I saw a comment about not wanting to develop an application of thousands of files in NXG, and I agree that I don't consider NXG ready for that either. Similarly - converting a large project from LabVIEW to LabVIEW NXG is not something I would recommend yet either. The Conversion Utility and associated tooling is more effective for converting instrument drivers and libraries. To be honest I was surprised that no one in this thread pointed out that there is currently no way to probe classes, and no way to make custom probes.
    Yes, we are already at version 5.0 and we still haven't built a full replacement for LabVIEW. That is a reflection of the incredible array of features in LabVIEW and the diversity of users and use cases that this community contains. However, version 1.0 was not intended as a full replacement for LabVIEW, and neither is version 5.0. For a subset of our user base who are building less complex applications, NXG is ready for them and they are using it. For example, a lot of work went into the workflow of helping a simple user take and process their first measurement, and we are building out from that foundation. When I talked about our reorganization and change in philosophy - that also translates into how we prioritize features and workflows. We are not just racing to recreate every last piece of LabVIEW in LabVIEW NXG. We are trying to understand the problems you were using those features to solve so we can determine if that same solution is the best choice for NXG. I plan on also addressing some of the specific points of feedback in this thread, but this post turned out much longer than I had intended! Hopefully that provides a bit of framing around the current state of LabVIEW NXG. Thanks, Jeff
  27. 4 points
  28. 4 points
    So I just discovered this this morning, and I think it will help out in making VIMs when dealing with supporting a scalar or 1D array data type. I have an example, which is my Filter 1D Array VIM, posted here, which is heavily inspired by OpenG's implementation. In it the developer can filter out a scalar, or a 1D array of something, from a 1D array. I did this by adding a Type Specialization structure at the start which checks to see if, after building the array, the data type matches the incoming array. If so, it is a scalar and should be used. I then have another case where the data just goes straight through, assuming it must already be a 1D array. But what I realized today is that this is unnecessary. If in the VIM my input is set to a 1D array, and we add a Build Array with only 1 terminal, and that Build Array is set to Concatenate, then that whole structure isn't needed. A scalar will become a 1D array with one element, and a 1D array will have no items added to it after the Build Array. In this example the code simplification isn't much, but someone may have had two cases in a type specialization structure which handle scalar and 1D array separately, and using this they could be combined into one. And one other minor thing: I don't think I will actually be updating the Filter 1D Array VIM to use this, just because knowing the input is a scalar means other sorting work not shown can be ignored, helping performance.
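    The behavior being relied on here, spelled out in text form: a one-terminal Build Array in Concatenate mode turns a scalar into a one-element array and passes an existing 1D array through unchanged. A rough Python analogue of that normalization (my own illustration, not LabVIEW code):

    def to_1d(value):
        # Analogue of a one-terminal Build Array in Concatenate mode:
        # a scalar becomes a one-element list, an existing list passes through as-is.
        return list(value) if isinstance(value, (list, tuple)) else [value]

    print(to_1d(5))          # [5]
    print(to_1d([1, 2, 3]))  # [1, 2, 3]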
  29. 4 points
    I found this tonight while working on a project: https://remixicon.com/ Really good icon library with modern-looking icons where you can customize the color and size of the icons, then download them as PNG files. I then import them into a LabVIEW pict ring and it's off to the races.
  30. 4 points
    It is not a bug. It should break for any unsigned integers because that's how the "negate" method works.
  31. 4 points
    I'm working on a personal project (more information will be shared about this later) that needs Message Queue Telemetry Transport (MQTT). While searching for LabVIEW libraries for MQTT I found one on VIPM, two in the NI Forums, and one through Google on GitHub, as follows:
    - WireQueue-MQTT Driver for LabVIEW by WireFlow AB (this one costs $550)
    - MQTT Client API in native LabVIEW by Peter - daq.io (also on GitHub as LVMQTT)
    - MQTT-LabVIEW by Michal Radziwon
    - Quaxo MQTT LabVIEW by Stefan May
    This is not unusual for just about anything you might be looking for. In fact, searching on GitHub there are 13 results for LabVIEW+MQTT. What was weird is that two of them were almost completely the same, yet neither attributes the other. I don't know which came first. I ended up forking from one of them, but I guess I'll attribute both to be safe if I end up using it. However, talking about code confidence, I just found this one: LV-MQTT-Broker by @Francois Normandin. I know Francois, he is a LabVIEW Champion. He has included unit tests. It has full documentation as well as an NIWeek presentation by him and Sarah Zalusky, both of whom are Certified LabVIEW Architects (CLAs). From GitHub I can see he has been actively contributing to it, and it's open source (which most of them were). Honestly, I wish I had found this one first. Just some words for thought...
  32. 4 points
    Here you go. Set Icon.vi Use it like this: To get back to the original icon just call it with an empty path.
  33. 4 points
  34. 4 points
    I assume you meant this video? There is this older video of Dr. T and Jeff K. introducing a LabVIEW Basics Interactive CD-ROM (~LabVIEW 4), but it's not as exciting as the LabVIEW 5 promo.
  35. 4 points
    Add SuperSecretListboxStuff=True to your labview.ini, reload LabVIEW, and new menu items will show up when you right-click on an MLC control. Read this thread.
  36. 4 points
    As someone contributing code on LAVA, I would like to see the certified LAVA repository packages made available through the GCentral package search tool.
  37. 4 points
    As a company that uses LabVIEW and has its own existing internal repository for reuse code, I would like a way for my developers to discover packages in GCentral and in our private reuse repository, all from a single portal.
  38. 3 points
    https://www.tiobe.com/tiobe-index/ LabVIEW dropped below 50... Might it be that its rank follows its average user age?
  39. 3 points
    I think the longer my relationship with NI carries on, the more the message seems to be that I'm not considered part of the direction NI is concentrating on at all. In a way, the new announcement is a bit like "this isn't designed for you, do you hear us loud and clear?"
  40. 3 points
    I threw this together, and maybe someone will find it useful. I needed to be able to interact with cmd.exe a bit more than the native System Exec.vi primitive offers. I used .NET to get the job done. Some notable capabilities:
    - User can see standard output and standard error in real-time
    - User can write a command to standard input
    - User can query if the process has completed
    - User can abort the process by sending a Ctrl-C command
    Aborting the process was the trickiest part. I found a solution at the following article: http://stanislavs.org/stopping-command-line-applications-programatically-with-ctrl-c-events-from-net/#comment-2880 The ping demo illustrates this capability. In order to abort ping.exe from the command line, the user needs to send a Ctrl-C command. We achieve this by invoking KERNEL32 to attach a console to the process ID and then sending a Ctrl-C command to the process. This is a clean solution that safely aborts ping.exe. The best part about this solution is that it doesn't require any console prompts to be visible. An alternate solution was to start the cmd.exe process with a visible window and then issue a MainWindowClose command, but this required a window to be visible. I put this code together to allow me to better interact with HandbrakeCLI and FFMPEG. Enjoi NET_Proc.zip
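    For anyone who wants the same Ctrl-C trick outside of .NET/LabVIEW, here is a rough, Windows-only Python sketch of the approach described in that article (attach to the target's console, suppress Ctrl-C in our own process, raise the event, detach). Treat it as an illustration of the technique, not the attached library's code:

    import ctypes
    import subprocess
    import time

    kernel32 = ctypes.windll.kernel32   # Windows only
    CTRL_C_EVENT = 0

    def send_ctrl_c(pid):
        """Deliver Ctrl-C to a console process without showing any window."""
        kernel32.FreeConsole()                      # detach from our own console, if any
        if not kernel32.AttachConsole(pid):         # attach to the target's console
            raise RuntimeError("AttachConsole failed")
        kernel32.SetConsoleCtrlHandler(None, True)  # don't let the Ctrl-C kill us too
        kernel32.GenerateConsoleCtrlEvent(CTRL_C_EVENT, 0)  # 0 = every process on that console
        time.sleep(1)                               # give the target a moment to react
        kernel32.FreeConsole()
        kernel32.SetConsoleCtrlHandler(None, False)

    # Example: start a never-ending ping with no visible window, then abort it cleanly.
    proc = subprocess.Popen(["ping", "-t", "127.0.0.1"],
                            stdout=subprocess.PIPE,
                            creationflags=subprocess.CREATE_NO_WINDOW)
    time.sleep(3)
    send_ctrl_c(proc.pid)
    print("ping exited with", proc.wait())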
  41. 3 points
    Personally, I don't mind the logo or the colour scheme. I prefer the old colours and logo, but no one likes change, do they? What I resented a little bit is that the webcast has been promoted for about a month, promising 'something extraordinary'. In my opinion what was delivered wasn't extraordinary; it was 50 minutes of my life that I won't get back. It offered no value to NI's customers, brought no new knowledge to the table and just felt like being repeatedly hit in the face with a PR and marketing hammer. There isn't a problem promoting the amazing things people do with LabVIEW, but it doesn't need such a song and dance. I would have preferred to have seen an open discussion about this 100-year plan that NI mentioned, what areas they are going to focus on in the coming years, and how they are planning the transition between current gen and NXG LV. A lot of our careers are pretty heavily tied to NI; we aren't impressed by marketing hot air, we just want to be efficiently kept in the loop regarding what NI is doing and how it is going to affect our companies. (I say 'we', that's my opinion, but I would be surprised if I was the only one who thought this)
  42. 3 points
  43. 3 points
    Hi everyone, Apologies for the slow response - this thread kicked off a number of internal discussions within NI and I was waiting for some of those discussions to shake out before setting up these engagements.
    First of all - I have set up a calendly link for scheduling time slots for 1x1 discussions. This is my first time trying calendly, so I have set up 30 minute time slots for three days later this week. Let me know if you have any suggestions or tweaks to make this more effective. Thank you in advance to those of you willing to take the time to talk to me, I appreciate it! In the link I specified two different types of discussions; please specify which kind of discussion you would like to have and I will send you a Microsoft Teams link. The first type is a user interview where we discuss how you use LabVIEW today. This is not focused on NXG; it is a way for me to better understand your workflows and the pain points that you encounter today. This helps me better inform design decisions we make in NXG. The second type is a feedback discussion focused on NXG - this is a more open format where I will do my best to answer your questions and discuss any feedback you have on NXG.
    Second - I want to address some of the points in this thread about prior engagements with NI. For the last few years we have been very focused on closing the functionality gaps between LabVIEW and LabVIEW NXG - this means we have not done as much iteration on usability and existing functionality as we would have liked. Big gaps like dynamically loading a VI, sub panels, and hardware support were prioritized higher. As we are getting closer and the gaps are getting smaller, we are re-evaluating some of these longstanding complaints and looking to address them. For example, Neil mentioned he had a lengthy discussion a few years ago and didn't see any product changes as a result. I discussed this with the product owner that conducted that discussion, and it (along with several other user interviews that expressed similar concerns) led to several significant design changes - including a major redesign of how we handle tearing tabs out of the editor. In that specific case the redesign represented a significant development effort and was deemed lower priority than closing some of the gaps we mentioned earlier, but as we get through some of those higher priority items I would expect us to look at it again. There were several other areas of feedback which did lead to product changes - such as more color on diagram constants and changes to how the palettes work. We know we need to do better at closing the loop so you can see the impact that these discussions have, and we are working on finding the right venue for having those discussions with the broadest possible audience.
    Regarding Dr Powell's question about areas which I consider significant improvements in NXG - there are a few which immediately come to mind:
    1. The workflow of detecting hardware and getting a first measurement has been significantly improved. We have heard consistent feedback from new users that the getting started experience in LabVIEW is difficult - the so-called 'blank VI syndrome'. We tried a few different ways of addressing this in LabVIEW, but in NXG automatic detection of hardware is improved and reflected directly in the project, and measurement panels let users configure their measurement and basic signal processing before being translated to G code. That is not a workflow that impacts many of the more advanced users on this forum, but it makes LabVIEW more approachable to new users, which is important to grow the user base.
    2. In my opinion the changes we have made to the project are a huge improvement. Requiring a project, using the project and components to manage the dependencies between files, and mirroring the project file structure on disk (which I realize not everyone is in favor of) has eliminated cross linking. Personally I find it speeds up my development for the editor to be picking the disk location of the files I am creating - especially when I am working with classes and creating a lot of files.
    3. Components in general are a big improvement over the options in LabVIEW. They simplify dependency management and encourage modular code architecture. They solve many of the limitations of PPLs. Each component translates to an exe or gll when built, which simplifies build specifications. We have also enforced a strict rule of editor run behavior being the same as built application behavior, which should improve difficult-to-track-down bugs where it works fine in the editor but breaks in a built application.
    4. Moving to an XML file format will enable us to eventually have a significantly improved integration with git and other third-party tools. We are not there today, but that is where we intend to get to.
    5. This is less user-visible, but the underlying architecture of NXG is more flexible than LabVIEW and has been designed for extensions and plugins from the beginning. This is easiest with C#, but it means that community members who know C# are able to extend NXG in a myriad of ways that are not possible with LabVIEW.
    As I said in my previous post - LabVIEW NXG is still a work in progress. It is not ready for complex projects yet, but we are getting closer and moving quickly. Similar to what Stephen said - I believe the vector is good, and community engagement helps us refine our direction. Please grab a slot for a 1x1 discussion if you want to discuss this further or are willing to talk with me about your current workflows. Thanks!
  44. 3 points
    The IDE was in fact pretty much here when NI launched LabVIEW Web UI: https://www.youtube.com/watch?v=RiY35znIdUg which used Microsoft Silverlight. Since that must have been in the making for several years before it was released, this is more like 10+ years old. Does the IDE look 10 years old or older? The fact that pretty much every traditional LV developer feels horrible pain at the mere look of the IDE says either we are all hopelessly outdated or indeed the IDE is a step backwards in terms of nimbleness (agility?). The IDE should be modular, since the underlying code itself is not. In fact, it should be open, offering third-party developers the ability to hook in their tools and customizations. I am not particularly depressed anymore at the unfolding of this slow-motion catastrophe, and in fact I am even willing to believe NI that indeed there is a new crop of developers who are going to prove us all wrong. But right now, this is not the direction we (academia) are going. We want openness (for reproducibility purposes), we want flexibility and customizability, and we are certainly not going to have our users pay runtime licenses for toolkits simply implementing public domain algorithms. I am still much more efficient at programming in LV, but there is zero incentive for me to choose NXG over Python at this point, based on my 25 years of dealing with NI.
  45. 3 points
    NXG actually seems quite good at figuring out what I am looking for with quick drop; I just type the name of the primitive and it appears. This is good as I cannot identify anything because the new icons all look totally weird to me...
  46. 3 points
    As a user collaborating on an idea for un-created code, I want to be able to see what other users are thinking about as well. For example, let's say, in the back of my mind, I have this idea for an open source barbecue thermometer that runs on a Raspberry Pi using LabVIEW. That said, I have several open source project ideas, all of which I find equally interesting, so I randomly pick one of them to start working on. But, maybe there's another community member that wants such a BBQ thermometer as well. If I knew that, I might choose to prioritize the project that someone else is interested in vs. another project that nobody (but me) cares about. Similarly, let's say I have the idea, but don't have the bandwidth to really pull it off in any reasonable timeframe. Someone else sees the idea, however, and wants to help. Knowing that I'm not alone, I might choose to begin work on the idea. Finally, let's say that I have this really cool idea, but I'm a software guy and it would probably require a custom Pi HAT. That's way outside my comfort zone, but maybe someone else is willing to help with that, and make the project a reality. This could even allow for voting, similar to the kudos system on the NI Idea Exchange. Of course, there's no obligation to do anything if your idea gets a bunch of kudos, but it could be incentive and/or drive innovation by others.
  47. 3 points
    Due to bitbucket discontinuing support for Mercurial in 2020, I have migrated the source code to Github. From now on, if there is any development on this project, it will be in the LabVIEW Open Source project: https://github.com/LabVIEW-Open-Source/ui-tools You can use both LAVAG or Github's issue tracking to report any issue you might encounter with this package. (previous repo: https://bitbucket.org/normandinf/ui-tools/)
  48. 3 points
    Thought I'd show an example of "complexity" of a framework, according to my way of thinking, by comparing the priority messages of the NI Actor Framework and the DQMH: Looks the same from an API level (they even use similar icons). Let's look inside; here is the relevant code section for priority messages for the AF: Yikes! And here is the same for DQMH: Ah, much simpler. Now we can see which framework involves more complexity: the DQMH. Wait, what, you say? Doesn't the obvious complexity of the AF implementation mean the AF involves more complexity? Well, no, because I, as a user of a framework, care nothing about the implementation; I care about the application I am building with these APIs. So let's consider the task of sending three high-priority messages, A then B then C. In what order will the three messages be received and acted on? With the Actor Framework the order will be A, then B, then C; ABC, always. With the DQMH we have:
    1) if the receiver can execute them faster than they are sent, the order will be ABC
    2) if the receiver is handling another message, the order will be CBA (as we place on the front of the queue)
    3) if the receiver is idle, but executing A takes time (allowing C to get in before B), the order will be ACB
    4) if busy, but it finishes after B is sent and before C is sent, the order is BCA
    5) as (4), but B finishes executing before C is sent, the order is BAC
    Thus with the DQMH there are 5 possible orderings of execution, with the probability of the various orderings highly dependent on the timing of not-directly-related bits of code (other messages being sent). At best, this is counter-intuitive and potentially confusing during debugging. At worst, one combination is a rare race condition that doesn't show up during testing and causes near-impossible-to-debug errors in deployed code. So that is an example of complexity, and it is certainly accidental, as the DQMH designers did not intend unpredictability of message-handling order when they used enqueue-in-front as message "priority".
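    If it helps to see the timing dependence spelled out, here is a minimal Python sketch (my own illustration, not DQMH or AF code) of an enqueue-at-front "priority" queue, showing how the handling order of A, B, C depends entirely on when the consumer gets around to taking messages:

    from collections import deque

    def run(events):
        """Simulate enqueue-in-front 'priority' messages: senders put messages at
        the FRONT of the queue; the consumer takes from the front when it is free.
        events is a sequence of ("send", msg) and ("handle",) steps; the return
        value is the order in which messages were actually handled."""
        queue, handled = deque(), []
        for event in events:
            if event[0] == "send":
                queue.appendleft(event[1])       # enqueue-in-front "priority"
            elif queue:                          # ("handle",): consumer takes one
                handled.append(queue.popleft())
        return handled

    # Consumer keeps up with the sender -> A, B, C
    print(run([("send", "A"), ("handle",), ("send", "B"), ("handle",),
               ("send", "C"), ("handle",)]))              # ['A', 'B', 'C']

    # Consumer busy until all three have been sent -> C, B, A
    print(run([("send", "A"), ("send", "B"), ("send", "C"),
               ("handle",), ("handle",), ("handle",)]))   # ['C', 'B', 'A']

    # Consumer takes A immediately, but handling it is slow -> A, C, B
    print(run([("send", "A"), ("handle",), ("send", "B"), ("send", "C"),
               ("handle",), ("handle",)]))                # ['A', 'C', 'B']

    A queue that preserves FIFO order among messages of the same priority (which, per the post above, is what the Actor Framework's priority messages give you) would yield A, B, C in all three scenarios.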
  49. 3 points
    It is good to finally see some movement in OpenG. Git and GitHub are also the right choices (Bitbucket would probably also work). Even novice programmers will be able to participate this way. πŸ‘ That said, the current repository has a few problems:
    - No tags
    - No branches
    - All projects in one repository
    - Changed commit messages (the links in the commit messages are non-functional)
    It is possible to transform an SVN repository into a Git repository while maintaining all tags and branches and updating the committers (because Git uses email addresses and SVN doesn't). Here are some instructions I used in the past for such jobs (instructions are for Linux of course): https://epicserve-docs.readthedocs.io/en/latest/git/svn_to_git.html For OpenG this process is a bit more complex because of the way the repository is structured (i.e. tags inside folders for each project), so the scripts must be adjusted to take this into account. I also suggest splitting the project into multiple repositories during this process to improve maintainability (unless there is a reason why it needs to be one repository). I could prepare the scripts to automate this process if you are interested.
  50. 3 points
    As a package consumer I would like to be able to subscribe to packages so that I get notified when a new version is available.

