Leaderboard

Popular Content

Showing content with the highest reputation since 09/25/2020 in all areas

  1. Dear Santa NI, I am now in my 40s with youngish kids, so despite the fact that all I got for Christmas this year was a Pickle Rick cushion, I am not actually complaining. However, I would like to get my order in to the Elves as early as possible. This is my wishlist, in no particular order. I expect this list will not be to everyone's taste; that is OK, this is just my opinion.
- Make LabVIEW free forever. The war is over, Python has won. If you want to be relevant in 5 to 10 years you need to embrace this. The community edition is a great start but it is probably not enough. Note: I accept it might be necessary to charge for FPGA stuff, where I presume you license Xilinx tools. NI is and has always been a hardware company.
- Make all toolkits free. See the above point.
- Remove all third-party licensing stuff. Nobody makes any money from this anyway. Encourage completely open sharing of code and lead by example.
- Take all the software engineering knowledge gained during the NXG experiment and start a deep refactor of the current-gen IDE. Small changes here please though... we should not have to wait 10 years.
- Listen to the feedback of your most passionate users during this refactor. NXG failed because you ignored us and just assumed we would consume whatever was placed in front of us. I am talking about people like those reading this post on Christmas Day, in their spare time, because they are so deeply committed to LabVIEW.
- My eyes are not what they used to be, so please bring in the NXG-style vector graphics support so I can adjust the zoom of my block diagram and front panel to suit.
- As part of the deep refactor, the run-time GUI needs to be modernised. We need proper support for resizable GUIs that react sensibly to high-DPI environments.
- Bring the best bits of NXG over to current gen, for example the dockable properties pane. (Sorry, not much else comes to mind.)
- Remove support for Linux and Mac and start to prune this cross-compatibility from the codebase. I know this is going to get me flamed for eternity by 0.1% of the users. (You pretty much made this decision for NXG already.) Windows 10 is a great OS and has won the war here.
- Get rid of the 32-bit version and make RT 64-bit compatible. You are a decade overdue here.
- Add Unicode support. I have only needed this a few times, but it is mandatory for a multicultural language in 2021 and going forward.
- Port the Web Module to current gen. All the news I have heard is that the Web Module is going to become a standalone product. Please bring this into current gen. This has so much potential.
- Stop adding features for a few years. Spend the engineering effort polishing.
- Fix the random weirdness we get when deploying to RT.
- Open-source as many toolkits as you can.
- Move the Vision toolkit over to OpenCV and make it open source.
- Sell your hardware a bit cheaper. We love your hardware and the integration with LabVIEW, but when you are a big multiple more expensive than a competitor it is very hard to justify the cost.
- Allow people to source NI hardware through whatever channel makes most sense to them. Currently the rules on hardware purchasing across regions are ridiculous.
- Bring ni.com into the 21st century. The website is a dinosaur and makes me sad whenever I have to use it.
- Re-engage with universities to inspire the next generation of young engineers and makers. This will be much easier if the price is zero.
- Re-engage with the community of your most passionate supporters. Lately it has felt like there is a black hole when communicating with you.
- "Engineer ambitiously"? What does this even mean? The people using your products are doing their best; please don't patronise us with this slogan.
- Take the hard work done in NXG and make VIs into a non-binary, human-readable format so that we can diff and merge with our choice of SCC tools. Remove all hurdles to hand-editing of these files (no more pointless hashes for "protection" of .lvlibs and VIs etc.). Openly publish the file formats to allow advanced users to make toolkits. We have some seriously skilled users here who already know how to edit the binary versions! Embrace this; it can only help you.
- Introduce some kind of virtualenv à la Python, i.e. allow libraries and toolkits to be installed on a per-project basis. (I think this is something JKI are investigating with their new Project Dragon thing.)
- For the love of all that is holy, do not integrate Git deeply into LabVIEW. Nobody wants to be locked into someone else's choice of SCC. (That said, I do think everyone should use Git anyway; this is another war that has been won.)
That is about it for now. All I want is for you guys to succeed so my career of nearly 20 years does not need to be flushed down the toilet like 2020. Love you, Neil. (Edited: added a few more bullets)
    14 points
  2. “There was this fence where we pressed our faces and felt the wind turn warm and held to the fence and forgot who we were or where we came from but dreamed of who we might be and where we might go...” -- Opening lines of “R is for Rocket” by Ray Bradbury I spent 20 years building this G language of ours. It’s time for me to go enjoy the fruits of that labor as a user! I will still be employed by NI, but I will be working full time for Blue Origin. As part of the NI “Engineer in Residence” program, I will be on loan to Blue Origin to revise their engine and support test systems. They wanted a Certified LabVIEW Architect with deep knowledge of LVOOP, multiple years of experience with Actor Framework, and deep knowledge of cRIO and PXI. I asked, “Can we negotiate on that last part?” They said, “Yes, yes we can.” Turns out, based on the interview, I know more than I thought – apparently some hardware knowledge does rub off just by sitting near it for a couple decades. This new job runs for six months, but it is extensible through the end of 2021 at the discretion of myself and Blue Origin. When I come back, I do not know if I will be returning to LabVIEW. Spaceflight has long been a passion of mine. Over my 20 years with LabVIEW R&D, I have had the chance to help out with various Mars rovers, large telescopes, and rocket launches. It has been awesome, and I’m proud of the language advances I brought to LabVIEW that helped so many users along the way. Now, I am going to focus on just one customer for a while... a customer with very large rocket engines! My last day will be Friday, October 23. I will not have the same availability to respond to posts as I have in the past, but Aristos Queue will still be around on the forums. “And, walking, I went beyond the fence.” -- ending of “R is for Rocket”
    12 points
  3. I did not know. That possibility was not even on my radar. Even though the drumbeat of bad news had been going for a while, most corporations refuse to change direction on a bad decision. NI showed more sentience than I usually expect from massed humans: the sunk cost fallacy is a trap that is very hard to get out of. I figured the very good engineers on NXG would either surge through it and make it fly, or we would bankrupt the company trying. That's the pattern established by plenty of other companies.

Mixed. I spent 4.5 years directly working on NXG (2011 to 2016) and countless hours in later years working with the NXG team to design a future G. I really wanted it to fly. There is so much good in that IDE, including some amazing things that I just don't see how we ever do in the LabVIEW codebase without just shattering compatibility. But at the same time, I was watching good friends toil on something that the market just wasn't adopting. The software had some problems that were going to take a long time to solve. The issues were all solvable, but the time needed to fix them... that was harder and harder to justify.

NXG gave us a GREAT platform for other software: VeriStand, FlexLogger, etc. That code is extremely modular and can be repurposed for all sorts of tools. We also learned a heck of a lot by building NXG -- some things that I thought we could never do in LabVIEW now seem possible. NXG gave us a sandbox to learn a whole lot about modern software engineering without putting the delivery schedule for mature software at risk, and those practices [have been|are being] brought back and applied to LabVIEW -- that will decrease the cost of maintaining older code. All in all, NXG was valuable -- the expenditure was not a complete loss.

I am very sorry to the few customers who did migrate to NXG. We don't have a reverse migration tool, and building one would be absurdly expensive. Leaving those folks stranded is going to hurt -- I hate letting our customers down and just saying, "We have no solution to help you." There aren't many of those folks (that's essentially the problem), but they do exist, and they are basically stuck having to rewrite their NXG apps in some other tool. I can only hope that they pick LabVIEW. I don't know if this will help us or hurt us with customers in the future... on one hand, people may say, "Well, you let us down on NXG, why should we trust you will be there for us on any new products?" On the other hand, this decision was clearly made listening to customer feedback, and it takes a lot of humility to swallow a loss that big, which may make customers trust our judgement more in the future. And, really, there's nothing to compare with the scale of NXG -- an entire computing platform -- so this does seem like something that needs to be judged in isolation.

I really like programming in G. I like being able to expand G to make it more powerful. I wanted NXG to succeed because it had the potential to be a better G. It failed. Its failure means more resources for the existing LabVIEW platform, which will directly help our customers in the short run. It leaves open some big questions for the long run. So, in summary: I think it was a decision that had to be made, and I'm happy to work for a company that can learn from new data, admit a mistake, and then figure out how to correct it.
    12 points
  4. Two questions about this: 1. Does something like this already exist? 2. Is this something that could be useful? Every once in a while I need dynamic UI components that can be generated at runtime. One nice thing to use for this is a picture control; however, it doesn't lend itself as well to keeping other pieces of functionality, such as mouse-click events. I put together a mini library of UI functions for this that can be extended. The UI can be generated dynamically at runtime and can be anything you can draw in a picture control, using standard layout techniques that you might find in other GUI libraries. The hierarchy generation can always be simplified by using some type of templating string. Example1.vi Front Panel on Pgui2.lvproj_My Computer _ 2021-07-02 14-03-54.mp4
    6 points
  5. So I managed to find the underlying issue and at least one solution to it - sharing the information here. The Icon Editor enumerates the font list in Linux with the command 'fc-list'. With the OpenSUSE 43.2 / LV 2016 combo, the listed fonts are essentially a list of the font files with full paths, and that does not work well with the font tool LabVIEW uses. If your font list looks similar and the fonts are not looking pretty, I have a solution for you - read on. To fix this, the command 'fc-list : family' should be used instead, which yields a list of plain family names. There are two solutions (and I'm sure there are more) - you can decide which one to pursue depending on your expertise. As the Icon Editor in LV2016 (starting with LV2011, I think) is in a packed library for execution-optimizing purposes, the Icon Editor code can't directly be altered to fix this. One can come up with a solution where the command 'fc-list' is overridden in Linux so it always uses the (proper for LabVIEW) format 'fc-list : family'. This may have some unwanted side effects if other programs use the command in a similar fashion, so it may not be the best solution, but it is an easy way to assess whether this is your problem too (see the sketch after this post). There are multiple trivial ways of implementing this, so I won't be giving an exact solution - here is a list of them: https://lmgtfy.app/?q=override+command+in+linux Darren Nattinger has provided the source code for the Enhanced Icon Editor in https://forums.ni.com/t5/Enhanced-Icon-Editor/Icon-Editor-Source-Files-for-LabVIEW-2016/m-p/3538808 - you can replace the packed library LabVIEW uses as the editor with this source. There are easy 3-step instructions on the site - even I managed to do that. Please give kudos to Darren should you go this way. When you have the shipped Icon Editor replaced with the source, you can directly edit the file /usr/local/natinst/LabVIEW-2016/resource/plugins/NIIconEditor/Miscellaneous/Font/Linux.vi so it uses the correct form.
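A minimal sketch of the override approach mentioned above, assuming a small C shim compiled as 'fc-list' and placed in a directory that comes before /usr/bin in PATH (this is just one of the "multiple trivial ways" the post alludes to; the paths are assumptions):

    /* fc-list shim: forwards to the real binary with the arguments LabVIEW needs */
    #include <unistd.h>

    int main(void)
    {
        /* call the real fc-list by absolute path, in the ': family' form */
        execl("/usr/bin/fc-list", "fc-list", ":", "family", (char *)0);
        return 1; /* only reached if execl fails */
    }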
    5 points
  6. Does it help to re-ask the question as "where should LabVIEW have a future?" It is not difficult to name a number of capabilities (some already stated here) that are extremely useful to anyone collecting or analyzing data, and that are either unique to LabVIEW or much simpler in it. They're often taken for granted, and we forget how significant they are and how much power they unlock. For example (and others can add more):
- FPGA - much easier than any text-based FPGA programming, and so powerful to have deterministic computational access to the raw data stream
- Machine vision - especially combined with a card like the 1473R, though it's falling behind without CoaXPress
- Units - yes, no-one uses them, but they can extend strict programming to validation of correct algorithm implementation
- Parallel and multi-threaded programming - is there any language as simple for constructing parallel code? Not to mention natural array computations
- Real-time programming
- Data-flow - a coherent way of treating data as the main object of interest; fundamental, yet a near-unique programming paradigm with many advantages
All of this is integrated into a single programming environment where the compilation and optimization is handled under the hood (with almost enough ability to tweak that). Unfortunately NI appear to be backing away from many of these strengths, and other features have been vastly overtaken (image processing has hardly been developed in the last 10 years; GUI design got sidetracked into NXG, unfortunately). But the combination of low-level control in a very high-level language seems far too powerful and useful to have no future at all.
    5 points
  7. I've just implemented this and posted a beta: https://forums.ni.com/t5/JDP-Science-Tools/BETA-version-of-JSONtext-1-5-0/m-p/4136116 Handles comments like this:

    // Supports Comments
    {
        "a":1 // like this
        "b":2 /*or this style
        of multiline comment*/
        "c": 3 /*oh, and notice I'm forgetting some commas
        A new line will serve as a comma, similar to HJSON*/
        "d":4, // except I've foolishly left a trailing one at the end
    }
    5 points
  8. View File: BlueSerialization - Serialize and deserialize LabVIEW classes using either JSONtext or TOML. Official documentation: https://github.com/justiceb/BlueSerialization Submitter: bjustice Submitted: 08/05/2021 Category: General LabVIEW Version: 2018 License Type: BSD (Most common)
    4 points
  9. Are Italian LV developers more prone to producing spaghetti code? 🤨
    4 points
  10. VISA is ok but not everyone uses it so...
    4 points
  11. Version 1.5.4

    947 downloads

    Package for working with JSON. Uses high-speed text parsing, rather than building an intermediate representation as prior LabVIEW JSON libraries do (this is much faster). Allows easy working with "subitems" in JSON format, or one can convert to/from LabVIEW types. Uses the new "malleable VIs" of LabVIEW 2017 to convert to any LabVIEW type directly. JSONtext makes use of a form of JSON Path notation, allowing easy and rapid access to the subitems of interest. Requires LabVIEW 2017 and installs via VIPM 2017 or later. Original conversation about JSONtext. On the LabVIEW Tools Network. JDP Science Tools group on NI.com. Copyright 2017 JDP Science Limited
    4 points
  12. I have been coming round to supporting comments in JSONtext, at least ignoring comments on reading (which is quite simple to implement, I think), and possibly features to be more forgiving of common user errors, such as missing commas.
    4 points
  13. Coming from my personal experience, I still lean towards no. I had a discussion with Nancy Hanson about this and we came to the conclusion that the CLA was not a destination, but the opening of doors to learn (yes, this was alluding to the CLA Summits). Personally, I had zero experience using OOP when I got my CLA. But after my second CLA Summit, I found an application that deserved a very basic OOP architecture. The CLA Summit opened that door for me. Now I would say ~50% of what I do is OOP. There is still A LOT you can do effectively without OOP. And keep in mind that part of being a CLA is making architectures that your less experienced developers can use and understand. If they can't use OOP, then your OOP architectures will not be effective. So should it be REQUIRED? No. Highly recommended? Absolutely.
    4 points
  14. I have made public a document detailing an old internal feature of LabVIEW that will be of great interest to those of you deploying Packed Project Libraries. Until recent conversations with a customer, I never considered that this would have much utility. The problem this solves: First, you build a packed project library (PPL) from source. Then, you write a VI that calls that PPL. It works fine. But now you load the caller VI under a different target in your project. The caller VI breaks because it tries to load the PPL, and the PPL refuses because it isn't built for the new target. Packed project libraries are compiled for one and only one specific target. How can you write ONE caller VI that will load DIFFERENT libraries depending upon the target without adding Conditional Disable structure complications? https://forums.ni.com/t5/Community-Documents/Resolution-of-Pseudopaths-in-LabVIEW-Per-Target-Invocation-of/ta-p/4087124
    4 points
  15. So this was posted on the NI forums: https://forums.ni.com/t5/LabVIEW/Our-Commitment-to-LabVIEW-as-we-Expand-our-Software-Portfolio/td-p/4101878?profile.language=en
    3 points
  16. I like resizable UIs, and that often means custom code to handle some of the operations in the UI. One such feature I like is a vertical array that shows more rows as the window gets larger and can show more. I've used code like this a few times and figured the community might like it too. When doing this, there are times when I want a user to be able to add more rows, and sometimes it needs to either be static or be set by some other outside variable. So this demo also includes a way to link a separate vertical scrollbar control and have it move and get set so that it looks like it is linked to the array control. But it coerces, not allowing more rows to be seen than there are elements in the array. At the moment this only supports 1D arrays, and only in the vertical direction. Code is saved in 2020. Array Resize and Scrollbar Link.zip
    3 points
  17. Or even better: replace "Write to Text File"/"Read from Text File" with "Write to Binary File"/"Read from Binary File". The output of "Flatten To String" is not text. (String != Text)
    3 points
  18. Right-Click on the "Read From Text File" method and unselect "Convert EOL". The number "13" (0xD or Carriage Return) gives it away. If you look at the HEX representation of what you save and what you read from file, you'll notice the 0xD gets converted to 0xA unless you uncheck that box. Tip: If you deal exclusively with HEX/binary data, you might want to save it using byte arrays instead. The result will be the same in your file, but you won't get bitten by hidden config parameters in your node...
    3 points
  19. LabVIEW Software Engineers, here's your opportunity to work with us at SpaceX! I'm hiring for a Sr. Software Engineer on the Ground Software team. We build control and monitoring systems running the Falcon launchpads, mission systems launching astronauts to the International Space Station onboard Dragon, and a fleet of sea-faring recovery vessels bringing spacecraft and rockets back to shore. The screens you see in Mission Control are the UIs we build. Read more about the role and apply here: https://boards.greenhouse.io/spacex/jobs/5480997002?gh_jid=5480997002
    3 points
  20. Yes, I don't assume you use many new features of LabVIEW from the last 10 years if you still develop in LabVIEW 2009.
    3 points
  21. NI didn't say they would be porting NXG features to 2021, but to future versions of LabVIEW. Technically, a promise to put them in 2021 would have been unfulfillable, since at the time the NXG demise was announced, LabVIEW 2021 was basically in a state where anything that was to be included had to be more or less fully finished and tested. A release of a product like LabVIEW is not like your typical LabVIEW project, where you might make last-minute changes to the program while testing your application at the customer's site. For a software package like LabVIEW, there is a complete code freeze except for showstopper bug fixes; then there is a testing, packaging, and testing-again cycle for the Beta release, which typically takes a month or two alone; then the Beta phase of about 3 to 4 months; and finally the release. So about 6 months before the projected release date, anything that is not considered ready for prime time is simply not included in the product, or sometimes hidden behind an undocumented ini file setting. Considering that, the expectation to see any significant NXG features in LabVIEW 2021 was simply naive and irrational.

I agree with you that LabVIEW is a unique programming environment with some features that are simply unmatched by anything else. And there are areas where its age is clearly showing, such as the lack of proper Unicode support and, related to that, the lack of support for long path names. Personally, I feel I could tackle the lower-level part of full Unicode support in LabVIEW, including full Unicode path support, quite easily if I were part of the development team, but I have to admit that the higher-level integration into front panels and various interfaces is a very daunting task that I have no idea how I would solve. Still, reworking the lower-level string and path management in LabVIEW to fully support Unicode would be a first and fundamental step to allow the other task of making this available to the UI in a later stage. This low-level manager can exist in LabVIEW even if the UI and higher-level parts don't yet make use of it. The opposite is not possible.

That is just one of many things that need some serious investment to make the whole LabVIEW platform viable for further development into the future. This example also shows that some of the work needed to port NXG features back to LabVIEW first requires significant effort that will not immediately be visible in a new LabVIEW version. While a change like the one described above could definitely be done within a few months, the whole task of making LabVIEW fully Unicode-capable without breaking fundamental backwards compatibility is definitely something that will take more than one LabVIEW version to fully materialize. There are a few lower-hanging fruits that could help prepare for that and should have been done years ago, but they were discarded as "already fixed in NXG". The full functionality just for Unicode support in LabVIEW is going to be a herculean task to pull off without going the NXG path of reinventing LabVIEW from scratch (which eventually proved to be an unreachable feat).

My personal feelings about the future of LabVIEW are mixed. Not so much because LabVIEW couldn't have a future, but because of the path NI as a company is heading down. They have changed considerably over the last few years, from an engineering-driven to a management-driven company. While in the past engineers had some real say in what NI was going to do, nowadays it's mostly managers who treat Excel charts, sales numbers, and the stock market as the main decision-making inputs for NI. Anything else is subordinated to the bigger picture of a guaranteed minimum yearly growth percentage and stock price. The traditional Test & Measurement market NI has served for much of its existence is not able to support those growth numbers anymore. So they are making heavy inroads into different markets and seem by now to consider the traditional T&M market a legacy rather than a significant contributor to their bottom line.
    3 points
  22. Working on the next JSONtext functionality: features to improve support for JSON config files. See https://forums.ni.com/t5/JDP-Science-Tools/BETA-version-of-JSONtext-1-6/td-p/4146235
    3 points
  23. I've encountered a black IMAQ image display in EXEs, solved by unchecking the box that allows running in a later runtime version. I don't know if that is related to your problem.
    3 points
  24. I've been toying with the idea of implementing a new TOML library for LabVIEW. I've been using the OpenG variant config for years, but I would prefer a more standardized format for my ini config files, and TOML is the best candidate. Erdosmiller's library is pretty good, but as the author points out, it is no longer maintained, and it didn't gracefully handle all of the datatypes I wanted to use. It would be great to have the flexibility of JSONtext but for the TOML format. I'll post back here if I manage to get the project off the ground.
    3 points
  25. Update: I used the dll call from the link @dadreamer provided, and made a Messenger-Library "actor" that I can use for debugging. Already found a couple of bugs with it.
    3 points
  26. Shaddap-a you face!
    3 points
  27. I agree with all your points. Definitely on making LabVIEW free for all purposes, if not even open source. NI may hang on to the mega-customers for a while with its current business model. But eventually it'll get marked as legacy software and slowly replaced by younger people with newer ideas and experience in different, more accessible languages. The idea that a company can sell a programming language these days is ridiculous when there are so many free alternatives. I am not counting the community edition. It needs to be free for any purpose.
    3 points
  28. I don't really expect many new language features or UX improvements in LabVIEW just because they stopped working on NXG. From what we know, there are only a few knowledgeable people at NI who are intimately familiar with the codebase and some of the intricate details which fundamentally drive LabVIEW. There are also many customers who rely on that technology for their own business. Because of that, NI can't just throw more developers at it and change LabVIEW fundamentally unless they find a way to stay compatible, or take a bold step and make breaking changes (which are inevitable in my opinion). LabVIEW will probably stay what it is today and only receive (arguably exciting) new features that NI will leverage from the NXG codebase to drive their business. Unfortunately NI hasn't explained their long-term strategy (I'll assume for now that they are still debating it). In particular, what will LabVIEW/G be in the future? Will it be community-driven? Will it be a language that anyone can use to do anything? Will it be the means to drive hardware sales for NI and partners? Will it be a separate product altogether, independent of NI hardware and technology? There are also a lot of technology-related topics they need to address:
- Does LabVIEW Support Unicode? - National Instruments
- Comparing Two VIs in LabVIEW - National Instruments (ni.com)
- Error 1316 While Using .NET Methods in LabVIEW - National Instruments (ni.com)
- Using NULL Values or Pointers in LabVIEW - National Instruments (ni.com)
Not to forget UX. The list is endless and entirely different for any one of us. If and when these will be addressed is unknown. Don't get me wrong, I'm very excited and enthusiastic about LabVIEW and what we can do with it. My applications are driven by technology that other programming languages simply can't compete with. Scalability is through the roof. Need to write some data to a text file? Sure, no problem. Drive the next space rocket, land rover, turbine engine, etc.? Here is your VI. The clarity of code is exceptional (unless you favor spaghetti). The only problem I have with it is the fact that it is tied to a company that wants to drive hardware sales.
    3 points
  29. The first time you mentioned this I thought it was a nice gesture, now I think you are just desperate for friends...or an alcoholic. I'm down.
    3 points
  30. I want to remind once again that all this information is just for having fun playing with LabVIEW and is not intended for use in real projects. I believe you all understand that. 🙂 Not that big an opening here - maybe not an opening at all for some - but I found this interesting enough to make a thread.

As you may already know, when some library is called using a CLF Node, LabVIEW enters ExtFuncWrapper first to do some guard checks to prevent itself from silently crashing, outputting an error message to the user instead. I always considered that wrapper too boring to study, so I never looked inside. But when once again I found that I couldn't call some function through a CLFN and had to write my own wrapper library, I asked myself why we cannot call code by its pointer, as in almost any well-known text language... So I decided to find out how ExtFuncWrapper calls the function. It turns out that ExtFuncWrapper receives the function's pointer (along with the parameters struct pointer) and later calls that pointer as-is (i.e., without any manipulation). So we can use it to call any function - even any code chunk - directly from the diagram! After further research I found ExtFuncWrapper not very convenient to use, because to bypass some checks the parameters struct has to be prepared accordingly before the call. But there are many ExtFunc... wrappers in labview.exe, and ExtFuncCBWrapper is much easier to use. It has the following prototype:

    int32_t ExtFuncCBWrapper(uintptr_t CodeChunk, int32_t UseTLS, void *CodeParams);

Here CodeChunk is our function/code pointer, UseTLS is 0 as we don't use LabVIEW's Thread Local Storage, and CodeParams is our parameters struct. When called, ExtFuncCBWrapper runs that CodeChunk, passing CodeParams to it, so we can freely use it to do what we want (see the C sketch after this post). Nuff said - here are the samples.

This one increments a Numeric: ExtFuncCBWrapper-Increment.vi As you can see, I'm passing a Numeric value as the CodeParams pointer into ExtFuncCBWrapper, and in the assembly I have to pull that pointer out to deal with my parameters. I'm not that fluent in asm, so I used one of the many online x86 compilers-decompilers out there. It's even simpler in the 64-bit IDE, as the first parameter is already written into RCX.

Okay, here goes a more advanced example - it calculates the sum of two input Numerics: ExtFuncCBWrapper-SumOfTwo.vi Here I'm passing a cluster of three parameters as the CodeParams pointer (two Numerics and the resulting Sum), and in the asm I'm grabbing the first parameter, adding it to the second one, and writing the result into the third one. Pretty simple operations.

Now let's do some really wild asm on the diagram! 😉 The last example calls the MessageBox function from user32.dll: ExtFuncCBWrapper-MsgBox.vi This is what Rolf calls diagram voodoo. 😃 I have to provide 4 parameters to MessageBox, 2 of which are string pointers. Thus I'm getting these pointers and writing them into my params cluster (along with the panel handle and the dialog type). When ExtFuncCBWrapper is called, in the asm code I have to use prologue and epilogue pieces to prevent stack corruption, as I'm pushing 4 parameters onto the stack to pass them to MessageBox. After the call, I'm writing the result into the function's return parameter. In the 64-bit IDE the prologue/epilogue is somewhat simpler. Maybe you already noticed that I'm doing VirtualProtect before calling ExtFuncCBWrapper. This is done to get past Windows Data Execution Prevention (DEP): I'm setting execute and read/write access for the memory page with my code, otherwise LabVIEW refuses to run my code and throws an exception. Surprisingly, the exception is thrown only in the 64-bit IDE; in 32-bit LV I can just wire the U8 array to ExtFuncCBWrapper without going through the DSNewPtr-MoveBlock-VirtualProtect-DSDisposePtr chain. I did not try to figure out this behaviour.

Well, to be honest, I doubt that anyone will find these samples really useful for his/her tasks, because these are very low-level operations and it's much easier to use a common CLFN or a helper DLL. They are here just to show that the things described are definitely doable from an ordinary diagram, without writing any libraries. With good asm skills it's even possible to implement callback functions or call some exotic functions (like C++ class methods). Some things might be improved as well, e.g. embedding a generic assembly compiler to allow writing mnemonics instead of raw bytes on the diagram. Ideally it's even possible to implement an Inline Assembly Node. Unfortunately, I have neither the time nor much desire to do it myself.
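To make the mechanism above concrete, here is a hedged C sketch - my own reading of the post, not NI source code - of what ExtFuncCBWrapper effectively does with the pointer it receives, plus the parameter block the sum-of-two example implies (the int32 field types are assumptions):

    #include <stdint.h>

    /* A code chunk takes the CodeParams pointer and returns an int32. */
    typedef int32_t (*CodeChunkFn)(void *params);

    /* Conceptual behaviour of ExtFuncCBWrapper: call the pointer as-is. */
    int32_t ExtFuncCBWrapper_sketch(uintptr_t codeChunk, int32_t useTLS, void *codeParams)
    {
        (void)useTLS;                           /* 0 here: no LabVIEW TLS */
        return ((CodeChunkFn)codeChunk)(codeParams);
    }

    /* What the sum-of-two asm does, expressed in C: a cluster of two
       Numerics and the resulting Sum, passed by pointer. */
    typedef struct { int32_t a, b, sum; } SumParams;

    int32_t sum_chunk(void *p)
    {
        SumParams *sp = (SumParams *)p;
        sp->sum = sp->a + sp->b;                /* third field gets the result */
        return 0;
    }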
    3 points
  31. Fortunately Dataflow G have released this toolkit - dataflowg/g-audio: An audio library for LabVIEW. (github.com) It satisfies all the requirements 🤩!!!
    2 points
  32. I came across this topic whilst trying to figure out the text width for UTF-16 strings. The solution we used was to write the text/font to the caption of a string control with 'size to text' enabled and then read the Caption.Area Width property. I think the only issue is that LabVIEW adds a few pixels of spacing around the caption - but that should be a constant that can be subtracted. (A non-LabVIEW alternative is sketched below.)
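As an aside, outside LabVIEW the same measurement can be made directly against a Windows device context. This is a swapped-in Win32 technique, not the caption trick from the post; a hedged C sketch assuming a valid HDC with the desired font already selected:

    #include <windows.h>

    /* Measure the pixel width of a UTF-16 string under the font
       currently selected into hdc; returns -1 on failure. */
    LONG text_width_px(HDC hdc, const wchar_t *text, int len)
    {
        SIZE size = {0};
        if (!GetTextExtentPoint32W(hdc, text, len, &size))
            return -1;
        return size.cx;  /* width in pixels; size.cy holds the height */
    }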
    2 points
  33. I take it you don't have a dedicated connection between the two devices. Things that come to mind that can break the system are routing problems and duplicate IP addresses on the network.
    2 points
  34. There are many links on the Internet that tell you how to configure git to use custom tools for VIs. Many are wrong. Yesterday, I and another developer outside NI worked through the sequence and got it working repeatably on both of our machines. Here is the process.

Save both of the attached files someplace permanent on your hard drive that is outside of any particular git repo. We used C:\Users\<<username>>\AppData\Local\Programs\GIT\bin

_LVCompareWrapper.sh _LVMergeWrapper.sh

Modify your global git config file. It is saved at C:\Users\<<username>>\.gitconfig You need to add the following lines:

    [mergetool "sourcetree"]
        cmd = 'C:/Users/smercer/AppData/Local/Programs/GIT/bin/_LVMergeWrapper.sh' \"$BASE\" \"$LOCAL\" \"$REMOTE\" \"$MERGED\"
        trustExitCode = true
    [difftool "sourcetree"]
        cmd = 'C:/Users/smercer/AppData/Local/Programs/GIT/bin/_LVCompareWrapper.sh' \"$REMOTE\" \"$LOCAL\"
    [merge]
        tool = sourcetree
    [diff]
        tool = sourcetree

That's it. There are lots of ways to edit the .gitconfig from the command line or by using SourceTree's UI... if you know those ways, go ahead and use them.
    2 points
  35. You didn't and can't. Unless you are interviewing, there's not a lot you can do. When I interview I get them to do a 30 minute test. It's a test that they cannot complete in 30 mins. I tell them that they are not expected to finish, it's not expected to work, there is no right answer and I just want to see their thought process when tackling a task - just make sure it's tidy enough so I can read it. This is all true. After, we discuss the task. What they thought, what they were and weren't happy with, what issues they think their code has and why they did certain things. Additionally I ask things like how would they improve *my* test. Did it make them too uncomfortable, would they have preferred a specific result/correct answer etc. A good engineer will be able to articulate problems with interpretation of the instructions, the assumptions and decisions they had to make and what they would change now they know the task and we have discussed it with my input. What code they actually write is almost irrelevant and only serves as a common talking point but you'll spot the fakers straight away. You very quickly find out who is an engineer and who is a code monkey.
    2 points
  36. There is, and it is called interapplication communication. Which one to use depends on the platform and your familiarity with a specific method:
- File IO (multiplatform, trivial to set up, difficult to manage as you have to deal with concurrency of two applications trying to access the same file)
- DDE (Windows only, ultra ancient and much older than legacy; don't ever use it if you can help it)
- Shared memory (super high throughput, complicated to use, pretty different APIs and methods on each platform)
- (D)COM (Windows only, complicated to use from anything other than C(++))
- Pipes (the standard method on Unix; LabVIEW supports them on Unix platforms only; Windows has pipes too, but they are a fairly obscure and seldom-used feature there)
- TCP/IP (multiplatform, native support in LabVIEW, almost as high throughput as shared memory when both sides are on the same machine, since the network socket internally uses shared memory to transfer the data between the two endpoints; can also work over the network where client and server are on different machines)
- If both sides are LabVIEW based, you can also use the VI Server interface. That goes over TCP/IP too under the hood, but gives you a fairly easy method to directly access VIs in the other process.
The TCP/IP method is by far the standard method nowadays (a minimal sketch of the model follows below). It is ubiquitous on pretty much every platform, from tiny embedded controllers up to super duper ultra high performance computers. It has a few minor challenges, such as proper network setup and the very distinctive server/client model; and if you do not want to have to configure the client and server for their IP address and port number, you need additional complicated discovery services to find the according resources on the network. Most of those cause more trouble than they solve, so for your home-grown kitchen-sink automation the easiest is to simply leave these things configurable in an ini file or something.
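Since the post recommends the TCP/IP route, here is a minimal C sketch of the client side of that model (POSIX sockets, not LabVIEW; the port number 5000 and the message are arbitrary assumptions). The matching server would listen/accept on the same port; in LabVIEW, either side would use the native TCP functions instead:

    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);        /* TCP socket */
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(5000);                   /* assumed, configurable port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* loopback: same machine */

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect"); close(fd); return 1;
        }

        const char msg[] = "hello from another process\n";
        write(fd, msg, sizeof msg - 1);                  /* send data to the server */
        close(fd);
        return 0;
    }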
    2 points
  37. I'm left to assume you are referring to VI Week, which was put together in a week or two, completely by the community.
    2 points
  38. Here's my tuppence. Get an indication from your IT dept of when they expect the issue to be resolved, then get your project manager to bill the time against IT's budget. Add the time they indicated to your project plan, with a note that the project delivery date will be delayed by at least that amount. Keep doing that if/when their indicated time expires, until they resolve the issue. If you have weekly project meetings, make sure it is in the minutes, get an action on IT to resolve the issue, and require status updates from IT so it never drops off the agenda. Ensure the IT issue is elevated to a high project-risk category. Then. Download the VISA executable installer from "http://download.ni.com/evaluation/labview/ekit/other/downloader/" and tell your project manager you *may* have a temporary, sub-optimal solution which you are investigating for this particular package, but that you fully expect continuing issues that may not have similar solutions even if you are successful this time, which is not guaranteed.
    2 points
  39. I updated the Bug report to mention that this should be in the help. Also, I talk a lot about use cases for the Error Ring (including mentioning several of the points I brought up in this post) in my What To Expect When You're Expecting an Error presentation here: http://bit.ly/dnatterrors
    2 points
  40. Yes it is, both ways. I'm glad you like it. There is a new release which adds support for composition of maps and sets. Here is an example of what you can do with it. This is actually a very smart way of doing it. My library takes the keys and values apart into individual arrays; a key-value pair is certainly easier to work with...
    2 points
  41. Well, it seems I should have said that LabVIEW should also behave as the RTE does - but it does not, for some obscure reason. So, my bad in phrasing there. It's not so easy to answer without knowing how Vision's internals work. I suppose it has something to do with the way Vision's memory manager allocates memory. Perhaps it's more optimized for working in LabVIEW and less (or not) optimized for EXEs. I noticed that in the IDE, IMAQ Create takes nearly the same amount of time on each call (0.03 to 0.06 ms), while in the RTE that amount starts at 0.03 ms and rises on each iteration (see the IDE and RTE shots). Maybe someone from NI could elaborate on these differences?.. By the way, I found two more similar issues [1, 2] and the reason behind each one was never clarified.
    2 points
  42. I checked with LabVIEW R&D, they said there is no way to determine this information in G code.
    2 points
  43. In the world of C it is up to us to declare variables such as those Refnums as 'volatile' so the compiler knows they may change despite appearing to be loop invariants (see the sketch below). I would say it is a bug that the LV compiler does not treat Refnums as volatile. I'd say most of your workarounds would work, but they are in (slight) danger of being optimized away as the compiler improves. Personally, I'd drop an 'Always Copy' node instead of the IPES.
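A small C illustration of the 'volatile' point above (hypothetical names; nothing LabVIEW-specific):

    #include <stdint.h>

    /* A handle that another thread or callback may clear at any time.
       'volatile' forces a fresh read on every loop pass, so the
       compiler cannot hoist the test out of the loop as an invariant. */
    static volatile int32_t refnum;

    void poll_until_closed(void)
    {
        while (refnum != 0) {
            /* ... use the current value of refnum ... */
        }
    }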
    2 points
  44. The other day, I wrote up a lengthy response to a thread about NXG on the LabVIEW Champions Forum. Fortunately, my computer blue-screened before I could post it--I kind of had the feeling that I was saying too much and it was turning into a "drunk history of NXG" story. Buy me some drinks at the next in-person NIWeek/CLA/GLA Summit/GDevCon, and I'll tell you what I really think! First, I'll say that abandoning NXG was a brave move and I laud NI for making it. I'm biased; I've been advocating this position for many years, well before I left NI in 2014. I called it "The Brian Powell Plan". I'm hopeful, but realistic, about LabVIEW's future. I think it will take a year or more for NI to figure out how to unify the R&D worlds of CurrentGen and NXG--how to modify teams, product planning, code fragments, and everything else. I believe the CurrentGen team was successful because it was small and people left them alone (for the most part). Will the "new world without NXG" return us to the days of a giant software team where everyone inside NI has an opinion about every feature and how it's on the hook for creating 20-40% revenue growth? I sure hope not. That's what I think will take some time for NI to figure out. Will the best of NXG show up easily in CurrentGen? No! But I think the CurrentGen LabVIEW R&D team might finally get the resources to improve some of the big architectural challenges inside the codebase. Also, NI has enormously better product planning capability than they did when I left. I am optimistic about LabVIEW's future.
    2 points
  45. I think this is a great decision. Admitting they made a mistake is a bold and courageous step. Onwards and upwards from here.
    2 points
  46. There is something strange about how this library is managed. For some reason it seems to work if the VI is part of a project, but not as a standalone VI (see the standalone vs. in-project screenshots). I did not reproduce the entire example, so it might fail. At least you can try adding your VI to a project and see if it works (make sure the assembly is listed under Dependencies).
    2 points
  47. We modify code while LV is running all the time 🙂 We even have a Quick Drop shortcut that toggles a Sub-VI's Loading Option between "Load with callers" and "Reload for each call".
    2 points
  48. Just as an extra reference point, this post contains a redone winutil.llb file in which all the references to the private winutil DLL have been removed and, as far as possible, replaced with compatible direct calls to the corresponding Windows APIs. In addition, the Call Library Nodes have been updated to be compatible with 32-bit and 64-bit operation, and I also added one or two functions from this example to the library. In order to support seamless 32-bit/64-bit operation, one has to make sure to use the new window control VIs contained in that lib for everything, and modify any other Call Library Nodes your project may use accordingly. https://forums.ni.com/t5/LabVIEW/How-to-run-an-exe-as-a-window-inside-a-VI/m-p/4096356#M1179928
    2 points
  49. Remember that you are a user, and you only look at the user interface. When you "open a class", a cluster UI widget is loaded to represent the class data. But that widget is not the actual cluster. When you "open a subVI", the corresponding front panel is loaded with UI widgets that correspond to the block diagram terminals. But for most subVIs, that front panel is never loaded if it is not opened. This is unfortunately counter-intuitive: because every subVI has a user interface (front panel), a new programmer will think it is significant, while an experienced LabVIEW programmer intuitively understands that the front panel of most subVIs doesn't really exist at runtime.
    2 points