
Leaderboard


Popular Content

Showing content with the highest reputation since 08/05/2019 in all areas

  1. 9 points
    So I wasn't there, but there was a public announcement at GDevCon about a new edition of LabVIEW called Community Edition, which is the LabVIEW Professional version (I read that as application builder included), completely free and with no watermarks for non-commercial use. NI hasn't made any post about timelines or other details yet, but I hear those are in the works. This is obviously a huge thing for LabVIEW, as any monetary barrier to entry will discourage new developers from experimenting with LabVIEW. And then there is the fact that those who are familiar with LabVIEW can keep up with the newest version outside of their company, or when they are between jobs.
  2. 7 points
    The core of our business has changed. Fewer users are developing their own test applications; instead, they're buying something off the shelf like TestStand. Fewer users are developing their own data acquisition software; instead, they're buying something off the shelf like FlexLogger. This trend significantly alters the role of LabVIEW (CG and NXG) in the NI ecosystem -- it becomes far less important to support whole application development (though, of course, we still do and will) and far more important to support "just a bit of customization" when the pre-built tools fail. A lot of software has an endless array of switches and options, but LV provides the ability for a user to write a custom routine to specify the behavior they want in some corner niche of a product. Think of something like Signal Express, able to generate a sine wave, square wave, triangle wave, or "pick a VI that generates the wave that you need" wave.
    What's funny about this is that although the app devs are growing rarer, they're also individually growing more profitable for NI as a whole, because the companies still paying to develop custom software are generally the ones buying a lot of hardware to do something unique in the world (or not in the world, in the case of SpaceX, Blue Origin, Ad Astra, etc.). So I don't expect the big-scale parts of LabVIEW to vanish, but I do expect them to be driven by specific requests from megascale customers rather than from the massed collective. The massed collective will be driving more of the IDE developments. At least, that's my suspicion at this time based on the presentations I've seen.
  3. 7 points
    I've exported the OpenG sources from Sourceforge SVN to Github. It's located here: https://github.com/Open-G I'm hoping this will encourage collaboration and modernization of the OpenG project. Pull requests are a thing with Git, so contributions can be encouraged and actually used instead of dying on the vine.
  4. 6 points
    Thanks AQ, you are the first to actually spell this out in words that make sense to engineers. Not sure too many here are going to like it though!
    ps: I liked your post due to its honesty and absence of marketing weasel words, not because I think this is a particularly good strategy for NI. Maybe I have just had a weird career, but in the 20 years or so I have been developing LabVIEW-based solutions, virtually never would an off-the-shelf piece of software like Signal Express or similar come anywhere close to doing what I needed it to; it would require so much customisation that the benefit would be negligible. To me LabVIEW is a programming language or RAD tool, and the responsibility of NI is to deliver first-class hardware with amazing software to help me bring the two together, and that is it.
  5. 6 points
    I don't mind the new green on the landing page of ni.com, but elsewhere on the site the new theme is a bit too much. I wanted to fix the near-invisible links that @LogMAN ran into, but got a bit carried away: if anyone is interested in using the blue style, you can download it from here. Be warned it's not perfect -- there are still lots of green bits on mouse-over etc. -- but I find that overall it makes the site much more readable. If blue isn't your thing, the primary color can be changed by setting the root --forrest-green color to something else.
  6. 6 points
    The more I look at the center logo, the more I believe it captures exactly the kind of excitement generated by the whole operation.
  7. 6 points
    Why are so many things just that little bit harder or weirder in NXG? I am trying to use it to make my first "real" application, in this case a relatively simple WebVI. I put this list down in the hope someone can tell me I am being dumb and there is a sensible way to do these things:
    - Why can I not easily branch off a wire by clicking on it somewhere? Now I have to right-click and select the option to create a wire branch.
    - Why can I not right-click on a primitive to open the sub-palette for that thing to give me similar items? I can right-click and replace, or right-click and insert... For example, I have an existing 2D array wire I want to get the size of, and there is no way for me to right-click the wire to quickly open the array palette and then drop down a Size primitive.
    - I have to relearn the whole palette structure as all the icons have changed. OK, that is fine, so let me explore a bit and poke around, but I cannot keep a palette open by pinning it? (It turns out I can do this if I start the browse from the left-hand palette and then weirdly click the << arrow, but I am so used to opening the palette by right-clicking on the diagram.) Arg, then the pop-up help covers over the next item in the list 😞
    - The Align menu is so much less usable in that drab gray and single line. There was nothing wrong with the way it is implemented in Current Gen, why change this?
    - The GUI is so dull in general. The colours are washed out and the grey everywhere is just depressing. It sounds silly, but it makes me not want to use it.
    - Sorry, but MDI is not a suitable technique for anything other than the most trivial of applications.
    - I really like the zoom, but please let us pan with the middle mouse button or something similar.
    - Please pop open menu items as soon as I browse into them, rather than forcing me to click (looking at you, Case Structure cases and Align menus).
    - Why are the icons so confusing? Please can someone explain how the picture below conveys any information that this is array concatenation.
    - Why can I not run a subVI in a WebVI? In order to test the correctness of a piece of code I have to move it out of the .gcomp to run in isolation, and this actually moves the code on disk.
    - What was fundamentally wrong with the Project Window in Current Gen? I have a vertical monitor that I use exclusively for displaying the Project window and it is amazing. I don't particularly like the new implementation, but at least let me undock it! I am also not really filled with confidence that as my project grows in size it will not become overwhelming (yet another reason to keep Virtual Folders).
    This is just a small subset of the items I am currently struggling with. In general I am quite forgiving of new software, but I think NXG has been baking in the oven for something like 8 years! I appreciate that NXG has not been designed for me; rather, I suspect it is targeted at a whole new audience of LabVIEW developer. As such, I know my muscle memory is going to be really detrimental in getting me up to speed with this new way of doing things, so I am trying really hard to not let that get in the way of my journey. Something deep down just makes me worry that the essence of what makes LabVIEW (current gen) so special has been lost in translation. It just feels like too many decisions have been made by people who are not actually very familiar with LabVIEW.
    This makes me a bit sad, as I have no doubt that a ridiculous amount of engineering effort and love has gone into NXG (and I am under no illusions about the scale of the task of rewriting current gen). All in all, my experience trying to develop a non-trivial (not by much, though) application in NXG has further cemented my thoughts that I am going to have to stick with current gen for the foreseeable future. That said, strength and courage to NI. I will check back again in a few years.
    ps: I am really excited for the WebVI technology. Please port it to Current Gen so I can actually use it 🙂
  8. 6 points
    Thanks for putting down all your thoughts and providing examples, Neil. I agree with every point you've made. Have you used the Shared Library Interface editor yet? That's some next-level UI inconsistency. I wrote a couple of blog posts on my experience converting a small (< 100 VIs, < 10 classes) LabVIEW project to NXG (see Let's Convert A LabVIEW Project to LabVIEW NXG! Part 1 and Part 2). During the process I made a lengthy list of issues and came to the same conclusions many people have voiced in this thread. Of the issues uncovered during the conversion, some were due to missing features or bugs, some to a lack of understanding on my part, but a surprising number were due to interesting design choices. The TL;DR of the blog is that there is nothing in NXG that makes me want to continue using it, let alone switch to it from LabVIEW. Which is sad, because I was really hoping to find something to look forward to. Here's hoping for a LabVIEW NXG: Despecialized Edition!
  9. 6 points
    The New Data Value Ref and Delete Data Value Ref nodes will be usable in inline VIs (and thus in malleable VIs) in LV 2020.
  10. 6 points
    View File Hooovahh's Tremendous TDMS Toolkit
    This toolkit combines a variety of TDMS functions and tools into a single package. The initial release has a variety of features:
    - Classes for Circular, Periodic, Size, and Time of Day TDMS generation, with examples of using each
    - Reading and Writing Clusters into TDMS Channels
    - XLSX Conversion example
    - File operations for combining files, renaming, moving, and saving in memory to zip
    - Basic function for splitting a TDMS file into segments (useful for corrupt files)
    - Reorder TDMS Channel with Demo
    There is plenty of room for improvements, but I wanted to get this out there and gauge interest. The variety of classes for doing things, along with VIMs and class adaptation, makes using them easier. If I get time I plan on making some blog posts explaining some of the benefits of TDMS, along with best practices.
    Submitter: hooovahh
    Submitted: 12/12/2019
    Category: *Uncertified*
    LabVIEW Version: 2018
    License Type: BSD (Most common)
  11. 6 points
    All of the presentations are now on the LabVIEW Wiki. You can find them at: https://labviewwiki.org/wiki/Americas_CLA_Summit_2019 Thanks to Kevin Shirey and Mark Balla for producing the videos, and to all those who volunteered to run the cameras. This is an awesome resource for re-watching and reviewing these great presentations, and for those who couldn't join us in person to be able to view them as well.
  12. 6 points
    @Jim Kring, it seems to me that the export of the code has gotten a positive response from the community. However, I may be wrong. If anyone has any opinion either way, please come forward. As you can see in this thread, it appears the community has rallied around this effort. This is why I emailed you to come here and share your thoughts.
    In the past, OpenG was a great venue to showcase how a bunch of passionate LabVIEW users can come together and collaborate on something useful. The passion is clearly still there, as shown by the numerous discussions here. The general coding community has moved to Git, with GitHub being the hub. This seems like the logical next step. Who knows what this initiative will lead to. However, I'm expecting that placing OpenG in a neutral GitHub repo will provide the spark and the tools to facilitate open collaboration; then the community can drive the future. The community is full of smart people who have a desire for clean, tested code. And if issues come up, LAVA discussions (or GitHub issues) are there to hash things out.
    When LAVA offered to host all OpenG discussions back in 2011, it was clear that the community wanted to help. When @jgcode put his standards together for how code should be discussed, it was an exciting time. Since then, many people have come forward with offers to add new code into OpenG and fix bugs. For example, @drjdpowell offered his awesome SQLite toolkit for inclusion into OpenG. He got no response either way. It's a shame to have a platform and forums that allow people to post and discuss OpenG code and then ignore it.
    If you have ideas on what the future of OpenG is, I'm hoping it's to be more transparent and inclusive. Providing the tools, resources and some safety checks along the way is the best way to facilitate passionate individuals to dive in. Do you think keeping the status quo of the past 10 years makes sense? It seems to me that the community disagrees. What do you think?
  13. 5 points
    @Aristos Queue, I was part of the private preview event, and afterwards there were several comments basically saying "I watched all of this and have no idea what NI is announcing", along with multiple requests that NI make it clear what they are trying to announce. I thought maybe the public event would be clearer. Nope, dozens of comments were flying in asking what, if anything, was changing as the event went on. After the event ended, my favorite comment was "That was a great introduction, but when does the actual event start?". Threads on Reddit, LAVA, and NI have all had various amounts of "What does this mean?", other than a new logo and color scheme. After reading and listening to NI's feedback, only your post made it clear to me what NI was trying to say. So while NI marketing may think they are making it loud and clear, the community has also been pretty loud themselves with their statements that they aren't sure what NI was trying to say.
  14. 5 points
    The logo is pretty uninspired and looks lifted from this company. It's going to take some time to get used to the green theme too - in my mind NI = blue + white. I wonder if NXG will get a green coat of paint. I'll reserve judgement on the content until I've seen the webinar, but it's heavy with cringe-worthy marketing speak. Also, a moment of silence for Nigel the NI eagle. Soar Ambitiously™, N 🦅
  15. 5 points
    My experiment with NXG is now over. A simple web page has taken about 5x longer than I had planned for. Some of this is due to me underestimating the nuances of the web module, but most of it has been me fighting the new IDE. The other night, instead of happily diving into some after-dinner software development fun, I was actually filled with dread at the thought of having to open NXG and finish what I started; it really is that unpleasant to use.
    For me, NXG is nowhere near usable in a real project that I expect to have to develop, maintain and make money off. Some stuff seems to work, but everything has this toy feel about it. It is ugly, sluggish, unintuitive and absolutely repulsive to develop with. Sorry, that sounds harsh, but it has been in development for over 8 years and has an incredibly strong pedigree to compare against. NI have taken almost everything that made current gen so special and thrown it in the bin. NXG is clearly being managed and developed by people who have never actually become intimately familiar with LabVIEW.
    I will check back in a few years' time, but at this point I am extremely disappointed and now need to think very hard about where my professional systems development career is going. Current Gen is going to be sunsetted at some point and will fade into irrelevance due to its closed-source nature (not that open sourcing something of its complexity would help now; it is too late for that). I could wait a few years if I had confidence that the ship was sailing in the right direction, but apart from AQ, who consistently has the courage to actually reply to these threads, there is virtually nothing coming back from NI, and I feel that the HMS NXG-itanic is sailing full steam ahead towards its doom.
    NI is run by extremely clever people who have no doubt done their sums and analyses and are charting the course for NXG that they think will bring them the most success in the long run. I have a strong appreciation for just how big an undertaking something like NXG is, but given where it is after 8 years of development, it just seems that I am not the target market and there is not too much I can do about it. Happily, given how robust NI hardware and current gen LabVIEW actually are, I suspect there will be quite a bit of work supporting old systems for at least another decade (perhaps more).
  16. 5 points
    For a final Case: sadly there aren't any non-deprecated items to replace that VI, which makes this work for Clusterzilla. ArrayToCluster.vim
  17. 5 points
    Hey LAVA friends. I'm going to be doing a live stream on YouTube next Tuesday, April 28 (10 AM Pacific) to go over LabVIEW Community Edition. I'd love to see you guys there. It'll be interactive, with chat for your questions, and I will be making an attempt to talk to a Raspberry Pi and Arduino. If you're curious about low-cost hardware or just want to find out what's new in the latest LabVIEW, join me here: https://youtu.be/4HLVqYXpxIo. Edit: If any of you have done any projects with the supported hardware, let me know and I can mention you or pull you into the discussion. - Thanks.
  18. 5 points
    The main difference between LabVIEW and a compiled C file is that the compiled code of each VI is contained in that VI, and the LabVIEW Runtime then links together these code chunks when it loads the VIs. In C the code chunks are per C source file, put into object files, and all those object files are then linked together when building the final LIB, DLL or EXE. Such an executable image still has relocation tables that the loader will have to adjust when the code is loaded into a different memory address than the preferred memory address defined at link time, but that is a pretty simple step. The LabVIEW runtime linker has to do a bit more work that the linker part of the C compiler has mostly already done.
    For the rest, the LabVIEW execution of code is much more like a compiled C executable than any virtual machine language like Java or .Net's IL bytecode, as the compiled code in the VIs is fully native machine code. Also, bytecode is by nature address independent, while machine code, although it can use position-independent addressing, usually has some absolute addresses in there. It's very easy to jump to conclusions from looking at a bit of assembly code in the LabVIEW runtime engine, but that does not usually mean those conclusions are correct. In this case the code chunks in each VI are real compiled machine code, targeted directly at the CPU.
    In the past this was done through a proprietary compiler engine that created the final machine code in several stages. It already included the separation where the diagram was first translated into a directed graph, which was then optimized in several steps, and the final result was then put through a target-specific compiler stage that created the actual machine code. This was however done in such a way that it wasn't too easy to switch the target-specific compiler stage on the fly, so cross compiling wasn't very easy to add when they developed the Real-Time addition to LabVIEW. They eventually improved that with a unified API to the compiler stages so that they could be switched on the fly to allow cross compilation for the real-time targets, which eventually appeared in LabVIEW 7. LabVIEW 2009 finally introduced the DFIR (Dataflow Intermediate Representation) by formalizing the directed graph representation further so that more optimizations could be performed on it, and it could eventually be used in LabVIEW 2010 as an input to the LLVM (Low-Level Virtual Machine) compiler infrastructure. While this would theoretically allow leaving the code in an intermediate language form that is only evaluated on the actual target at runtime, this is not what NI chose to do in LabVIEW, for several reasons. LLVM creates fully compiled machine code for the target, which is then stored (in the VI for a built executable or if code separation is not enabled, otherwise in the compile cache).
    When you load a VI hierarchy into memory, all the code chunks for each VI are loaded into memory, and based on linker information created at compile time and also stored in the VI, the linker in the LabVIEW runtime makes several modifications to each code chunk to make it executable at the location it is loaded at and calling into the correct other code chunks. This is indeed a bit more than what the PE loader in Windows needs to do when loading an EXE or DLL, but it isn't really very different. The only real difference is that the linking of the COFF object modules into one bigger image has already been done by the C compiler when compiling the executable image, and that LabVIEW isn't really using COFF or OMF to store its executables, as it does all the loading and linking of the compiled code itself and doesn't need to rely on an OS-specific binary image loader.
  19. 5 points
    Found a fix for this. It should be fixed in LV 2020. The bug ONLY affects copying from a 1-element cluster of variant to a variant. Or a cluster of cluster of variant to a variant. Or... you get the idea... "any number of cluster-shells all containing 1 element, culminating in a variant" being copied to a variant.
    This was a fun bug... consider this: The memory layout for a byte-size integer is { 8 bits }. The memory layout for a cluster of 1 byte-size integer is { 8 bits }. They are identical. "Cluster" doesn't add any bits to the data. That's just the type descriptor for the data in that location. This is true for any 1-element cluster: the memory layout of the cluster is the same as the memory layout for the element by itself. This is true even if the 1 element is a complex type such as a nested cluster of many elements or an array.
    When a VI is compiling, if data needs to copy (say, when a wire forks), LabVIEW generates a copy procedure in assembly code. For trivial types such as integers, the copy is just a MOV instruction in assembly code. But for compound types, we may need to generate a whole block of code. At some point, the complexity is such that we would rather generate the copy procedure once and have the wire fork call that procedure. We want to generate as few of those as we have to -- keeps the code segment small, which minimizes page faults, among other advantages. We also generate copy procedures for compound coercions (like copying a cluster of 5 doubles into a cluster of 5 integers).
    Given all that, LabVIEW has some code that says, "I assume that type propagation has done its job and is only asking me to generate valid copy procs. So if I am asked to copy X to Y, I will remove all the 1-element shells from X and all the 1-element shells from Y, and then I will check to see if I have an existing copy proc." Nowhere in LabVIEW will we ever allow you to wire a 1-element cluster of an int32 directly to an int32. So the generator code never gets that case. In fact, the only time that we allow a 1-element cluster of X to coerce directly to X is... variant. The bug was that we were asking for a copy proc for this coercion, and the code was saying, "Oh, I have one of those already... just re-use copy-variant-to-variant." That will never crash, but it is also definitely not the right result! We had to add a check to handle variant specially, because variant can eat all the other types. So if the destination is variant, we have to be more precise about the copy proc re-use. I thought this was a neat corner case.
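    The "a 1-element shell adds no bits" point is easy to demonstrate outside LabVIEW. Here is a minimal Python ctypes sketch (an analogy only, not LabVIEW's actual data representation): a structure wrapping a single byte occupies exactly the same number of bytes as the bare byte.

```python
import ctypes

# Hypothetical "cluster" containing a single byte-sized integer.
class OneElementCluster(ctypes.Structure):
    _fields_ = [("value", ctypes.c_int8)]

# The wrapper adds no bits: both occupy exactly one byte.
assert ctypes.sizeof(ctypes.c_int8) == 1
assert ctypes.sizeof(OneElementCluster) == 1
print("Layouts match:", ctypes.sizeof(OneElementCluster) == ctypes.sizeof(ctypes.c_int8))
```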
  20. 5 points
    At the 2019 Americas Certified LabVIEW Architect Summit, GCentral was introduced to the LabVIEW Community. GCentral is a non-profit organization (incorporated September 2019) composed of G community leaders creating a platform for programmers to find/use, contribute, and co-develop G code packages and collaboration resources. While GCentral is leading the charge to solve these problems, we will closely align with the community's needs. This forum is designed to connect GCentral's efforts with the community's needs. Some links to be aware of:
    - GCentral.org
    - LabVIEW Wiki
    - Twitter (@GCentralOrg)
    - LinkedIn
    - Instagram (gcentralorg)
    - Facebook
    - Website
    - GitHub Repo
  21. 4 points
    Hi Everyone, I was just alerted to this discussion (thanks @drjdpowell), so I wanted to be sure I heard all the feedback, to make sure we're staying on top of it. Before I dive in, I'll mention there is a version 2020.1 in beta right now (if you can't access this, please be sure you sign up for the beta and/or send me a PM). This addresses many of the points raised here, so please check it out. Also, it's important to mention that VIPM 2020 had a LOT of work (and love) put into it, and the beta+launch was in the middle of COVID-19, so things didn't get as many eyes (i.e. beta testers) as usual. That's unfortunate and we're working hard on the new 2020.1 build. Any feedback/issues you have are important, so please do post them and know we're listening. It's hard to keep tabs on conversations that happen in various LAVA threads, so if you'd like to see something improved/resolved, please do post it in the VIPM forum or PM me. I'll try hard to respond to the good points everyone raised.
    First, I'll mention that VIPM 2020.1 no longer requires a sign-in when installing packages from the VIPM Community Repository. In 2020.0, this was causing issues for some users due to their enterprise IT/networking configuration. And, as you've all mentioned, some users really didn't like it, which is fair. There are still some features that use sign-in, like starring packages, and there will be a prompt when those features are invoked.
    @LogMAN Actually, nothing changed with how VIPM installs itself in 2020, as compared to 2019 (and older versions). The issue was that the VIPM 2020.0 (and older) installer framework (e.g. Advanced Installer) needed to be updated for newer versions of Windows. In VIPM 2020.1 (now in beta -- see link above) we've addressed this issue and it should install without issues. That said, there were some bugs in NI's LabVIEW 2020 installer that caused it to fail to correctly install VIPM 2020 in some cases -- e.g. the issue where it sometimes would fail to start. NI has been working with JKI to fix this.
    @Neil Pate That's fair. We added this feature to make VIPM much more responsive when users are opening packages -- VIPM sometimes runs as a background task, so that it doesn't have to reload itself for each of these operations. This can be disabled in the VIPM settings file, here: "C:\ProgramData\JKI\VIPM\Settings.ini"
    [General]
    Start VIPM when computer starts?="FALSE"
    Start VIPM when LabVIEW starts?="FALSE"
    @Michael Aivaliotis Thanks for helping everyone out. Older versions of VIPM are available to users -- we have a link on the vipm.io/download page for users. However, since older versions of VIPM use outdated LV runtime engines that are no longer supported by NI and don't work well on newer OSes, we don't encourage users to use them -- it often creates more problems for them, and a support burden for JKI and NI. As such, we ask that people do not post older downloads and instead direct people to get them from either the NI or VIPM websites. Again, thanks for helping people out.
    Also, I appreciate everyone's feedback -- I know when things don't work well, it's super frustrating. VIPM 2020 had some bugs and left room for improvement, because of all the new features that had to get out the door in time for the LabVIEW 2020 launch date, and we didn't have the typical level of beta testing. I hope 2020.1 resolves those, and if it's still missing things or not working right, let me know and I'll take responsibility for those issues. Kind Regards, -Jim
  22. 4 points
    In an attempt to standardize my handling of formatting timestamps as text, I have added functions to "JDP Science Common Utilities" (a VI support package, on the Tools Network). These are used by SQLite Library (version just released) and JSONtext (next release), but they can also be used by themselves (LabVIEW 2013+). They follow RFC 3339 and support local-time offsets.
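    For readers unfamiliar with RFC 3339, here is a rough Python sketch of the kind of output such a formatter produces when it includes a local-time offset (this only illustrates the text format, not the LabVIEW package's VIs):

```python
from datetime import datetime, timezone

# Current local time, timezone-aware, rendered with a numeric UTC offset,
# e.g. "2019-08-05T14:30:00.123456+02:00" (RFC 3339 / ISO 8601 style).
local_now = datetime.now(timezone.utc).astimezone()
print(local_now.isoformat())
```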
  23. 4 points
    I discussed this with @Mark Balla and we figured out a way to get all the old videos that used to be on the Tecnova site up to YouTube. It will take a few days, but this is in progress. Probably within a week all the videos should be up. I will update this thread with progress.
  24. 4 points
    Hi all, friendly LAVA moderator here. I'd just like to gently remind everyone we are all human, and are at times emotional, and at times frustrated with colleagues we interact with. Let's all take a deep breath and try to continue to give criticism in a form that will be most helpful. I know I've at times flown off the handle online, especially on the subject of NXG. I personally don't think I've shared code between projects for any real project anytime recently. But I can remember times that I did it and didn't have any real problem, likely because I was mindful of what affected what. X, do you have some recent examples of code you shared between projects? What made you make that decision, and why were the other options seen as less desirable? Also, it sounds like this isn't explicitly forbidden in NXG.
  25. 4 points
    LabVIEW Community Edition rocks! In order to help kick off this momentous occasion, I've put together an example alarm clock. It is broken down into 6 lessons (so far), taking you from blinking an LED through creating an alarm clock with a state machine. To download or learn about LabVIEW Community Edition, check out GCentral.org. Check out the alarm clock here! <-(http://bit.ly/ChrisCilino_LabVIEWCommunityAndRP)
  26. 4 points
    So first I want to acknowledge some areas we could have done better. I have been involved in a number of discussions around what our migration strategy looks like, and the biggest gap we immediately identified is a lack of clear external messaging, so that is something we are looking to address. I have talked to all different kinds of users, and in a relatively short discussion we are able to align on whether or not NXG is ready for their use case. That is great, but you should be able to make that determination yourself by looking at public documentation; it should not require a call with me or a frustrating session of attempting to migrate an application. NI has tried to provide this in the past with the LabVIEW roadmap, but it doesn't have enough detail for you to make a high-confidence decision. For example, it is not possible to differentiate between functionality that is not complete yet versus functionality which was intentionally omitted or intentionally changed. We have also not done a very good job of explaining the background of specific decisions -- which leads to some of the feedback in this thread where it seems like we have changed everything for no reason. Certainly I can point to some changes which were mistakes, and generally speaking we have the flexibility to undo those changes, but many of the bigger changes were intentional, designed, tested changes which we believe are an improvement. We intend to do a better job of publicly documenting those decisions.
    It is hard to overstate the reorganization efforts that have happened within NI over the last couple of years. Last NIWeek Eric Starkloff talked about how we were organizing the company around business units instead of around products, and that has had broad-reaching impact, but we were making major shifts in the way we built products in the last couple of years anyway. Like many of the large software companies, we have been shifting to a user-centric development model where we actively try to bring the user into the development process instead of thinking we know what they need and developing in secret. A good example of this shift is the introduction of the product owner role in NI R&D, a role focused on ensuring we are delivering the right value to our users. Both the product owners and product planners have long histories of working with LabVIEW, so you should not feel like the decision makers working on LabVIEW NXG are completely detached from LabVIEW -- in many cases the decision makers for the two products are the same. There have definitely been teething pains with this shift, but we are getting better at it.
    I saw several comments about feeling left out of the decision process, and there are certainly some valid concerns, but I would also point to the level of engagement over the last few years where the product owners and product planners have attended and solicited input at the CLA summits, GDevCon and NIWeek. We also have quite a few targeted user engagements when we are working on defining features and workflows. We can absolutely do more, and we plan to, but many significant product decisions have been made as a result of those engagements. Remember that there are a lot of LabVIEW users out there, and we can't talk to all of them. A light-hearted analogy would be seeing the results of a national poll and saying - 'well, nobody asked me!' That being said, I do want to increase my engagement with this community, and there is clearly a lot of passion about making LabVIEW NXG the best it can be. I would love to set up some 1x1 interviews with those of you who are interested so I can better understand how you are using LabVIEW today. I will start a different thread about that.
    Back to the main point - it is important to understand what LabVIEW NXG is today versus what it will become. LabVIEW NXG today is not ready for most of the applications of this community. You are some of the most advanced LabVIEW users around, and are collectively using nearly every feature in the product. As Stephen said early in this thread - NXG has many nice things, it just isn't ready for him (or most of you) yet. We are trying hard to get there and have made substantial progress, but there are still functionality gaps. We expect that you will continue to use LabVIEW for at least a few more years until NXG is more complete for your workflows. I saw a comment about not wanting to develop an application of thousands of files in NXG, and I agree that I don't consider NXG ready for that either. Similarly, converting a large project from LabVIEW to LabVIEW NXG is not something I would recommend yet either. The Conversion Utility and associated tooling is more effective for converting instrument drivers and libraries. To be honest, I was surprised that no one in this thread pointed out that there is currently no way to probe classes, and no way to make custom probes.
    Yes, we are already at version 5.0 and we still haven't built a full replacement for LabVIEW. That is a reflection of the incredible array of features in LabVIEW and the diversity of users and use cases that this community contains. However, version 1.0 was not intended as a full replacement for LabVIEW, and neither is version 5.0. For a subset of our user base who are building less complex applications, NXG is ready and they are using it. For example, a lot of work went into the workflow of helping a simple user take and process their first measurement, and we are building out from that foundation. When I talked about our reorganization and change in philosophy - that also translates into how we prioritize features and workflows. We are not just racing to recreate every last piece of LabVIEW in LabVIEW NXG. We are trying to understand the problems you were using those features to solve so we can determine if that same solution is the best choice for NXG.
    I plan on also addressing some of the specific points of feedback in this thread, but this post turned out much longer than I had intended! Hopefully that provides a bit of framing around the current state of LabVIEW NXG. Thanks, Jeff
  27. 4 points
  28. 4 points
    So I just discovered this this morning, and I think it will help out in making VIMs when dealing with supporting a scalar or 1D array data type. I have an example, my Filter 1D Array VIM, posted here, which is heavily inspired by OpenG's implementation. In it the developer can filter out a scalar, or a 1D array of something, from a 1D array. I did this by adding a Type Specialized structure at the start which checks to see if, after building the array, the data type matches the incoming array. If so, it is a scalar and should be used. I then have another case where the data just goes straight through, on the assumption that it must already be a 1D array. But what I realized today is that this is unnecessary. If in the VIM my input is set to a 1D array, and we add a Build Array with only one terminal, and that Build Array is set to Concatenate, then that whole structure isn't needed. A scalar will become a 1D array with one element, and a 1D array will have no items added to it after the Build Array. In this example the code simplification isn't much, but someone may have had two cases in a Type Specialized structure which handle scalar and 1D array separately, and using this they could be combined into one. And one other minor thing: I don't think I will actually be updating the Filter 1D Array VIM to use this, just because knowing that the input is a scalar means other sorting work not shown can be skipped, which helps performance.
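    A loose text-language analogy of the trick, assuming NumPy (this is obviously not the malleable VI itself): a scalar gets promoted to a one-element array while a 1D input keeps its elements, so a single code path handles both without a separate scalar case.

```python
import numpy as np

def normalize_to_1d(x):
    # Mirrors the concatenating Build Array: a scalar becomes a 1-element
    # array, while an existing 1D input passes through with its elements intact.
    return np.atleast_1d(x)

print(normalize_to_1d(5))          # -> [5]
print(normalize_to_1d([1, 2, 3]))  # -> [1 2 3]
```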
  29. 4 points
    I found this tonight while working on a project: https://remixicon.com/ Really good icon library with modern-looking icons where you can customize the color and size of the icons, then download them as PNG files. I then import them into a LabVIEW pict ring and it's off to the races.
  30. 4 points
    It is not a bug. It should break for any unsigned integers because that's how the "negate" method works.
  31. 4 points
    I'm working on a personal project (more information will be shared about this later) that needs Message Queue Telemetry Transport (MQTT). While searching for LabVIEW libraries for MQTT I found one on VIPM, two in the NI Forums, and one through Google on GitHub, as follows:
    - WireQueue-MQTT Driver for LabVIEW by WireFlow AB (this one costs $550)
    - MQTT Client API in native LabVIEW by Peter - daq.io (also on GitHub as LVMQTT)
    - MQTT-LabVIEW by Michal Radziwon
    - Quaxo MQTT LabVIEW by Stefan May
    This is not unusual for just about anything you might be looking for. In fact, searching on GitHub there are 13 results for LabVIEW+MQTT. What was weird is that two of them were almost completely the same, yet neither attributes the other. I don't know which came first. I ended up forking from one of them, but I guess I'll attribute both to be safe if I end up using it. However, talking about code confidence, I just found this one: LV-MQTT-Broker by @Francois Normandin. I know Francois, he is a LabVIEW Champion. He has included unit tests. It has full documentation as well as an NIWeek presentation by him and Sarah Zalusky, both of whom are Certified LabVIEW Architects (CLAs). From GitHub I can see he has been actively contributing to it, and it is open source (which most of them were). Honestly, I wish I had found this one first. Just some food for thought...
  32. 4 points
    Here you go: Set Icon.vi. Use it like this. To get back to the original icon, just call it with an empty path.
  33. 4 points
  34. 4 points
    I assume you meant this video? There is this older video of Dr. T and Jeff K. introducing a LabVIEW Basics Interactive CD-ROM (~LabVIEW 4), but it's not as exciting as the LabVIEW 5 promo.
  35. 4 points
    Add SuperSecretListboxStuff=True to your labview.ini, reload LabVIEW, and new menu items will show up when you right-click on an MLC control. Read this thread.
  36. 4 points
    As someone contributing code on LAVA, I would like to see the certified LAVA repository packages made available through the GCentral package search tool.
  37. 4 points
    As a company that uses LabVIEW and has its own existing internal repository for reuse code, I would like a way for my developers to discover packages in GCentral and in our private reuse repository, all from a single portal.
  38. 3 points
    TL;DR: This is NOT a bug. It is all explainable by the normal behavior of the memory management mechanisms used by LabVIEW, including a memory allocator layer provided by SmartHeap (from MicroQuill).
    Details: Actually, the original bug report in Dec 2013 by Mr Mike (bonjour, Mike!) was pretty accurately analyzed and documented by Ryan P in 2014, and the bug was closed then. Mike's post from today did manage to gain the attention of someone else at NI, who asked me to take a look. I reviewed the VIs from this page and decided I could explain all the behavior with actual numbers. See the enclosed picture of Process Explorer's trace of LabVIEW 2019 (64-bit) memory usage during a session looking at these VIs. The labels I've placed in the picture attempt to explain it all, but I'll summarize by saying that all the "lost" memory (around 422 MB) can be explained by the 10M master pointers managed by LV, plus 10M _freed_ small blocks sitting in pages managed by SmartHeap. These _freed_ blocks are not "lost". SmartHeap knows where they are and will let us use them again, although because they are small, SH keeps them in special low-overhead pools that are used _only_ for small allocations, and who knows when LV will need 10M small blocks again.
    These _freed_ blocks were formerly the 10M strings with one "space" character in them, each of which actually takes up probably 40 or 48 bytes. Each string block has a LV-managed 32-byte header, plus a 4-byte length, plus 1 byte for the "space" char. LV asks SmartHeap for this 37-byte block, and it probably gives us a 40-byte (rounded up to a multiple of 8?) or 48-byte (rounded up to a multiple of 16?) small block to contain our request. Small blocks in SH are low overhead because they only require a single bit to represent their inUse state and require no header of their own.
    The numbers don't all add up exactly, but they are sufficiently in the ballpark that any slop is explainable by the fact that a lot of other stuff is going on in the memory management arena in LV. It's complicated. There could be significant amounts of fragmented small free blocks already available. Process Explorer could be reporting in mebibytes vs. megabytes. etc. Hope this helps. Rob
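    The "rounded up to a multiple of 8 or 16" arithmetic above is easy to check with a quick Python sketch (illustrative only; this is not SmartHeap's actual algorithm, just the standard round-up formula applied to the 37-byte request described in the post):

```python
def round_up(size, granularity):
    # Round size up to the next multiple of granularity.
    return ((size + granularity - 1) // granularity) * granularity

# A 37-byte request (32-byte header + 4-byte length + 1 character):
print(round_up(37, 8))   # 40
print(round_up(37, 16))  # 48
```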
  39. 3 points
    That's a mighty fine VM you got yourself there. Almost like having a VM of this Linux RT target is a super useful tool that helps troubleshoot and debug features of the embedded UI that are at times "inconsistent", as you put it. For anyone else that finds this useful, you should go vote on the idea and/or contribute to the conversation.
  40. 3 points
    I think the longer my relationship with NI carries on, the more the message seems to be that I'm not considered part of the direction NI is concentrating on at all. In a way, the new announcement is a bit like "this isn't designed for you, do you hear us loud and clear?"
  41. 3 points
    It is growing on me too (with the exceptions mentioned before). It actually feels refreshing in some sense, which is probably what they intended. It seems to me that they have totally forgotten about their existing customers. I actually haven't received any invitation, message or notification from NI about any of this (did anyone?). We are the ones that are most excited to use their products now, and that doesn't seem to be worth anything. We are also the ones who are passionate about sharing our knowledge and excitement with the next generation of engineers. VIWeek, LAVA, LabVIEW Wiki, OpenG, the Idea Exchange and many more initiatives are prime examples of this. It is very easy for excitement to turn into frustration if you don't know what is coming next. Don't get me wrong, I'm a strong supporter of NI, LabVIEW and anything that comes with it, and I sincerely hope that I can continue to be so for the next decades. I'm just frustrated that so many exciting new things are "dumped" in a way that makes me feel left out.
  42. 3 points
    NI Week keynote. Pure marketing BS. Exactly what I expected from it.
  43. 3 points
  44. 3 points
    There are multiple considerations:
    - Public IP address: Your mobile carrier (or Internet service provider) assigns you a public IP address.
    - STATIC public IP address: Be aware that this is an increasingly rare commodity. I don't know which country you live in, but I'd be very surprised if your consumer mobile carrier provides static public IP addresses anymore. You might find a commercial/enterprise provider that still sells static IP addresses, or you can use a Dynamic DNS (DDNS) service like https://www.noip.com/ -- DDNS allows you to connect to an address like neilpate.ddns.net which stays static even if your IP address is dynamic.
    - Unique public IP address PER DEVICE: Unfortunately, if you have 1 SIM card, you will get 1 public IP address to be shared between your Windows PC and all of your cRIOs. This is the same as your home Internet: all the PCs, laptops, tablets, phones, and other smart devices that connect to your home Wi-Fi share a single public IP address. This is Network Address Translation (NAT) in action. If you really want multiple unique public addresses, you'll need multiple SIM cards.
    - Unique public IP address per SIM card???: Nowadays, you also need to double-check whether your carrier even provides you with a unique public IP address at all! Carriers around the world have started implementing Carrier-Grade NAT (CG-NAT) for both mobile and home Internet users. This means your SIM card might share a public IP address with many other SIM cards. If this is the case, then DDNS won't work!
    Suppose you have 1 public IP address, and each of your devices hosts a web service at port 443. You can assign a unique port per device on your modem and do port forwarding as you mentioned:
    Dev PC --> neilpate.ddns.net:54430 (modem) --> 192.168.1.200:443 (Windows PC)
    Dev PC --> neilpate.ddns.net:54431 (modem) --> 192.168.1.100:443 (cRIO 1)
    Dev PC --> neilpate.ddns.net:54432 (modem) --> 192.168.1.101:443 (cRIO 2)
    This means the client program on the Dev PC needs to know to use a non-standard port. You can do this easily in a web browser or a terminal emulator, but I'm not sure that LabVIEW can use a custom port to connect to/deploy to a cRIO.
    Alternative solutions: You don't necessarily need a public IP address for remote access. Some modems can be configured to automatically connect to a Virtual Private Network (VPN). If you enable VPN access to your office and you ask your modem to connect to that VPN, your devices will be on the same (local) subnet as the Dev PC in your office -- we have done this for a cRIO that's deployed in the middle of a desert. If your modem doesn't support this, you could configure each device to individually connect to the VPN instead. Or your provider might offer enterprise-level solutions that connect multiple sites to the same VPN. For example, they could offer SIM cards that provide a direct connection to your corporate VPN without the need to configure your modem or devices.
    Yes, these problems are commonly solved. The issue is that there are so many possible solutions, so you need to figure out which one works best for your use case.
  45. 3 points
    Your argument is inconsistent. If it's not a priority, then making a change to remove it is allocating resources to "the least important". Leaving it in would be the least impactful. However, if you are going to change it, then you might as well make it a "Preference", since that is clearly what it is. You don't seem to have a preference or, at least, are indifferent. So why advocate taking away a feature that other people obviously feel strongly about?
  46. 3 points
    For comment, here is a beta version of the next SQLite Library release (1.11). It has a significant new feature: a "Parameter(s)" input to the "Execute SQL" functions. This can be a single parameter or a cluster of multiple parameters. It uses Variant functions and will not be as performant as explicitly preparing and binding a Statement object, but it should be easier to code. drjdpowell_lib_sqlite_labview-1.11.0.86.vip
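    For anyone new to parameterized queries, the underlying idea is the same in any SQLite binding: the values travel as bound parameters rather than being formatted into the SQL text. A rough Python sqlite3 analogy (not the LabVIEW API itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (channel TEXT, value REAL)")

# The values are bound to the "?" placeholders instead of being
# concatenated into the SQL string.
conn.execute("INSERT INTO log VALUES (?, ?)", ("temperature", 23.5))
rows = conn.execute("SELECT * FROM log WHERE channel = ?", ("temperature",)).fetchall()
print(rows)  # [('temperature', 23.5)]
```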
  47. 3 points
    @Neil Pate Thank you so much for sharing your thoughts. I have also been playing with NXG (working through the NI online courses) for the past few days, and my impression is very similar to yours. NXG has a lot of interesting and useful features that I really want to use as soon as possible, but at the same time there are so many little things that either don't work, are missing, or are very annoying to use. At this point I'm still interested in learning about all the features of NXG, without any intention to use it for any serious application in the foreseeable future (3-5 years). Nevertheless, this is a chance for me to give feedback to NI on all those little things. With NXG 5.0 around the corner I hope they address many of the "obvious" problems in 4.0. In any case, I intend to treat it as an early-access platform rather than a released product. In my mind NXG 4.0 is really NXG 0.4. One thing that really frustrates me is that there is no platform to suggest ideas and vote on them. The feedback system in NXG is a one-way ticket.
    As for quickly opening the array palette from a wire: you can do it from an open-ended wire branch, which is one of those annoying things not intended by NI. Create an open-ended wire branch, right-click on the end of that wire and create the primitive you desire. You can see in the screenshot that the menu allows you to access the array palette. Not very user-friendly imo, but still better than surfing the palettes on the left.
  48. 3 points
    I already have that superpower. I once used lots of letters so that if you read the for loops top to bottom, the letters spelt out my name and a message. I also once heard that whether you use "a" or "i" depends on whether you came from a mathematical or an engineering background. What's really strange (for me) is that in C and PHP I use "i", but in Pascal and Python I use "a". I know that to a certain extent it is muscle memory, since if I use "i" in Pascal, I nearly always leave out the colon before the equals sign. Maybe it's a coping mechanism because I switch between languages so much.
  49. 3 points
    As a package consumer I would like to be able to subscribe to packages so that I get notified when a new version is available.
  50. 3 points
    We use a variety of frameworks/templates/patterns for our architecture:
    - The Actor Framework is used for asynchronous UI operations and long-running data processing tasks.
    - Queued Message Handlers (not using the DQMH or QMH template) handle simple asynchronous tasks.
    - Action Engines encapsulate privately shared data for our translation and general I/O libraries.
    - (Queued) State Machines ensure that everything runs in order.
    - The proxy pattern is used to interface most customer libraries.
    - We have our own frameworks for the test execution engine and test libraries (message based).
    I find the publicly available frameworks and templates (DQMH, Messenger Library, NI templates, etc.) very valuable for learning and to get things started quickly. More advanced projects, however, require a deeper understanding of the underlying patterns in order to develop your own architecture (which may or may not utilize these frameworks/templates). In your case these are synonymous 😋 "Any sufficiently advanced technology is indistinguishable from magic." (Arthur C. Clarke) For some reason that just popped into my mind... If your entire architecture and thought process is fundamentally based on actors, any small project will of course have to depend on it as well. That is, unless you are willing to rethink (and probably reimplement) the fundamental architecture. Then again, why reinvent the wheel?

