Everything posted by Rolf Kalbermatter

  1. Ok, you should have specified that you were comparing it with tools written in C 🙂 The typical test engineer definitely has no idea about all the possible ways C code can be made to trip over its own feet, and back then it was even less understood, and frameworks that could help alleviate the issue were few and far between. What I could not wrap my head around was your claim that LabVIEW would never crash. That is very much contrary to my own experience. 😁 Especially if you make it sound like it is worse nowadays. It's definitely not, but your typical use cases are for sure different nowadays than they were back then. And that is almost certainly the real reason you may feel LabVIEW crashes more today than it did back then.
  2. One BBF (Big Beautiful F*cking) Global Namespace may sound like a great feature but is a major source of all kinds of problems. Beyond a certain system size it gets very difficult for any normal human to maintain and extend, even for the original developer after a short period. When I read this I was wondering what might cause the clear misalignment between the experience described here and my own memory: 1) it was meant ironically and you forgot the smiley, 2) a case of rosy retrospection, or 3) we are living in different universes with different physical laws for computers. LabVIEW 2.5 and 3 were a continuous stream of GPFs (General Protection Faults), at times so bad that you could barely get any work done in them. LabVIEW 4 got somewhat better but was still far from easy sailing. 5, and especially 5.1.1, was my first long-term development platform. Not perfect for sure, but pretty usable. Still, things like certain video drivers could frequently send LabVIEW belly up, as could more complicated applications with external hardware (from NI). 6i was a gimmick, mainly to appease the internet hype; not really bad, but far from stable. 7.1.1 ended up being my next long-term development platform. I never touched 8.0 and only briefly used 8.2.1, which was required for some specific real-time hardware. 8.6.1 was the next version that got some use from me. But saying that LabVIEW never crashed on me in the '90s, even leaving my own external code experiments aside, would be a gross misrepresentation. And working in technical support at NI from 1992 to 1996 certainly made me see many, many more crashes in that time.
  3. True, there is no active license checking in LabVIEW until 7.1. And as you say, using LabVIEW 5 or 6 as a productive tool is not wise, and neither is blabbing about Russian hack sites here. What someone installs on his own computer is his own business, but expecting such hacks to be done out of pure love for humanity is very naive. If someone is able to circumvent the serial check somehow (not a difficult task), they are also easily able to add some extra payload to the executable that does things you would rather not have done on your computer.
  4. I know it runs (mostly); installation is a slightly different story. But that's still no justification to promote pirated software, no matter how old.
  5. LabVIEW 5 is almost 30 years old! It won't run on any modern computer very well, if at all. Besides, offering software like this, even if it is that old, is not just maybe illegal but definitely so. So keep browsing your Russian crack sites, but keep your offerings away from this site, please!
  6. Wow, over 2 hours of build time sounds excessive. My own packages are of course not nearly as complex, but with my simplistic clone of the OpenG Package Builder it takes me seconds to build the package, and a little longer when I run the OpenG Builder relinking step beforehand for pre/postfixing VI names and building everything into a target distribution hierarchy. I had been planning for a long time to integrate that Builder relink step directly into the Package Builder, but it's a non-trivial task and would need some serious love to do it right. I agree that we were not exactly talking about the same reason for lots of VI wrappers, although it is very much related. Making direct calls through Call Library Nodes into a library like OpenSSL, which really is a collection of several rather different paradigms that have grown over the course of more than 30 years of development, is not just a pain in the a* but a royal suffering. And it still stands for me: solving that once in C code, to provide a much simpler and more uniform API across platforms to call from LabVIEW, is not easy, but it eases a lot of that pain. It's a tradeoff in the end, of course: suffering in the LabVIEW layer to create lots of complex wrappers that often end up being different per platform (calling convention, subtle differences in parameter types, etc.), or writing fairly complex multi-platform C code and having to compile it into a shared library for every platform/bitness you want to support. Both are hard, and it's about which hard you choose. And depending on personal preferences, one hard may feel harder than the other.
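
     To make that tradeoff concrete, here is a minimal sketch of such an intermediate C layer (assuming OpenSSL 1.1+; the function name lv_sha256 and the error convention are my own inventions, not code from any actual toolkit): one flat, uniform entry point that LabVIEW can call through a single Call Library Node with the same configuration on every platform:

        /* Hypothetical wrapper, compiled into a shared library per platform.
         * It hides OpenSSL's paradigms behind one flat, LabVIEW-friendly call.
         * Build (Linux example): gcc -shared -fPIC lvwrap.c -lcrypto -o lvwrap.so */
        #include <openssl/evp.h>
        #include <stdint.h>

        #if defined(_WIN32)
         #define LVAPI __declspec(dllexport)
        #else
         #define LVAPI __attribute__((visibility("default")))
        #endif

        /* Returns 0 on success, -1 on failure; 'digest' must point to 32 bytes.
         * One C calling convention, identical on Windows, Linux and macOS. */
        LVAPI int32_t lv_sha256(const uint8_t *data, int32_t len, uint8_t *digest)
        {
            unsigned int mdLen = 0;
            if (!data || len < 0 || !digest)
                return -1;
            /* EVP_Digest is OpenSSL's one-shot convenience digest call */
            return EVP_Digest(data, (size_t)len, digest, &mdLen,
                              EVP_sha256(), NULL) == 1 ? 0 : -1;
        }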
  7. Actually it can be done, but it requires undocumented features. Using things like EDVRs or Variants directly in the C code can immensely reduce the number of DLL wrappers you need to make. Yes, it makes the C wrapper code more complicated and is a serious effort to develop, but that is a one-time effort. The main concern is that, since it is undocumented, it may break in future LabVIEW versions for a number of reasons, including NI trying to sabotage your toolkit (which I have no reason to believe they would want to do, but it is a risk nevertheless).
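
     Even leaving the undocumented EDVR/Variant features aside, the documented half of this approach can be sketched: LabVIEW's manager API from extcode.h (shipped in LabVIEW's cintools directory) lets C code operate on LabVIEW's own data types directly. The function name lv_fill_string below is my own example, not part of any real toolkit:

        /* Sketch using only LabVIEW's documented manager API (extcode.h and
         * the cintools directory must be on the include/link path). */
        #include "extcode.h"
        #include <string.h>

        /* Resizes a LabVIEW string handle and copies C text into it.
         * Configure the Call Library Node parameter as a LabVIEW string
         * passed as "String Handle". Returns mgNoErr (0) on success. */
        MgErr lv_fill_string(LStrHandle *strH, const char *text)
        {
            size_t len = strlen(text);
            /* NumericArrayResize with type code uB (unsigned byte)
             * (re)allocates the handle including the length header */
            MgErr err = NumericArrayResize(uB, 1, (UHandle *)strH, len);
            if (err != mgNoErr)
                return err;
            memcpy(LStrBuf(**strH), text, len);
            LStrLen(**strH) = (int32)len;
            return mgNoErr;
        }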
  8. Well, if you stick to strict OOP principles, modularizing it through a plugin mechanism or similar should be fairly easy to do! It takes a bit of time to create the necessary plugin mechanisms, and it usually takes at least 3 iterations before you end up with something that really works, but that is still orders of magnitude easier than waiting for an 18k VI project to load every time and falling asleep between edit operations. That's one more reason why I usually have a wrapper shared library that adapts the original shared library interface impedance to the LabVIEW Call Library Interface impedance. 😀
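
     For illustration, the skeleton of such a plugin mechanism, sketched in C with dlopen rather than LabVIEW classes (LoadLibrary/GetProcAddress would be the Windows equivalent; the names plugin_main and run_plugin are entirely hypothetical), looks roughly like this:

        /* Minimal plugin-loader sketch: each plugin lives in its own
         * shared library and exports one well-known entry point. */
        #include <dlfcn.h>
        #include <stdio.h>

        /* every plugin exports a function with this signature */
        typedef int (*plugin_entry_t)(const char *config);

        int run_plugin(const char *path, const char *config)
        {
            void *lib = dlopen(path, RTLD_NOW | RTLD_LOCAL);
            if (!lib) {
                fprintf(stderr, "load failed: %s\n", dlerror());
                return -1;
            }
            plugin_entry_t entry = (plugin_entry_t)dlsym(lib, "plugin_main");
            int result = entry ? entry(config) : -1;
            dlclose(lib);
            return result;
        }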
  9. One thing I have seen in the past really wreaking havoc with the LabVIEW editor and/or compiler is circular dependencies. They are very easy to end up with, even in moderately sized projects, if one uses globals, and in large projects they are almost unavoidable without a proper design that avoids globals almost entirely, except in very carefully chosen places. The LabVIEW editor/precompiler does pretty much a full pass over the internal data graph for every edit operation. With circular dependencies the graph gets effectively infinite in length, and while the system has checks in place to detect such circular references and abort the parsing at some point, it seems unable to do that safely on the first occurrence without missing some paths, so it goes on longer than is usually necessary. The first sign usually shows up as a frequent inability to build the project without obscure errors, especially for real-time targets. Things go OK much longer for builds on Windows, but drop the project code onto a real-time target and builds and/or deploys will cause all kinds of hard-to-explain errors.

     An 18k VI project! That's definitely a project that has grown into a mega pronto dinosaur monster. I can't imagine even considering creating such a beast. My biggest projects were probably somewhere around 5000 VIs, and that was already getting very painful to do any work on. It eventually caused me to modularize the project, with parts moved into real-time targets. The cost of the additional hardware was actually smaller than the time lost trying to get the monster to build and work, even though NI real-time hardware is anything but cheap.

     But long ago I inherited a project that consisted of only about 100 VIs. However, it had a main VI that was something like 15 MB in size (the other VIs were mostly just simple accessors to drivers and ... shudder ... several dozen global variables), with the main VI being one huge loop with sequence structures inside case structures, inside loops, inside more sequence structures, inside even more case structures and loops, and this continued for a few more levels like that. Not one shift register; everything was put in globals and written and read back hundreds of times. Editing that VI was a painful exercise: select a wire or node, wait 5 seconds, move the wire or node, wait 5 seconds ... . I have no idea how the original developer ever got it to that point without going insane, but more likely he was insane to begin with. 😀

     I was busy for several days just getting the diagram a bit cleaned up: adding some shift registers to manage the actual data more efficiently, identifying common code constructs that appeared over and over all over the place and putting them into subVIs, and getting everything to a state that was reasonably workable before I could really go and refactor the application. In the end I had maybe 500 or so VIs and a main VI that was well below 1 MB, with a proper state machine and almost no sequence structures anymore. And it ran reliably, and when you pushed the stop button you did not have to wait half an eternity before the application was able to detect that. The biggest irony was that the application actually was working with an enum state of some 100 or more values, maintained in a global, and almost every sequence frame contained a case structure with one or a few cases for a specific state and a default case that did pretty much nothing. It was a state machine turned inside out and then put into a cascade of sequences!
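
     For readers who haven't seen the pattern, this is roughly the shape of a proper state machine, sketched in C for brevity (state names are made up; in LabVIEW it would be a while loop with an enum in a shift register feeding a case structure):

        /* State lives in one place and is dispatched in one loop,
         * instead of being scattered over globals and sequence frames. */
        #include <stdbool.h>

        typedef enum { ST_INIT, ST_MEASURE, ST_REPORT, ST_STOP } State;

        void run(void)
        {
            State state = ST_INIT;      /* the shift-register equivalent */
            bool stopRequested = false; /* would be set by a stop-button check */

            while (state != ST_STOP) {
                switch (state) {
                case ST_INIT:    state = ST_MEASURE;                          break;
                case ST_MEASURE: state = stopRequested ? ST_STOP : ST_REPORT; break;
                case ST_REPORT:  state = ST_MEASURE;                          break;
                default:         state = ST_STOP;                             break;
                }
                /* the stop condition is evaluated every iteration, so the
                 * application reacts immediately instead of after an
                 * eternity of nested sequence frames */
            }
        }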
  10. Libre Office/Open Office is an open-source implementation of an Office suite of applications. It delivers similar applications to the Microsoft Office package, and they work in many ways similarly, but they are not the same. If you are a seasoned Excel, Word or whatever user, they require a little bit of relearning and getting used to. They support a 3rd-party interface similar to the Microsoft Office ActiveX interface, but it's not the same and will not work with the Report Generation Toolkit (RGT). One would need to write extra RGT plugins for that, except that while NI made the RGT plugin-based, they kind of borked the instantiation of plugins by not making it easily extendable. The available plugins are hardwired in the implementation, and extending that would require modifications to the NI implementation, which most developers consider a no-go, as you would have to modify the Toolkit on every installation and reapply the modification any time the Toolkit is reinstalled or updated by NI, which might or might not break your modifications too.
  11. There is a good chance that Microsoft eventually dropped support in Office for 32-bit client applications. What I wrote above was true from 2015 to 2020. I haven't tried to use Excel from LabVIEW since, and I generally use Libre Office anyhow if I need an Office application.
  12. It could be made to work in the past. Basically, the Office interfaces are all ActiveX-based, and ActiveX is very well able to invoke an ActiveX Automation Server out of process through an ActiveX proxy server process in the background. If the ActiveX Automation Server is properly registered, this happens transparently without extra user interaction. Unfortunately, the so-called Click-to-Run MS Office installers that are used nowadays either forget to do the 32-bit registration of their Automation Server component or somehow bork it up. In the past I have been able to fix that on different machines by running a Repair Install from the Windows Apps control panel.
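
     As an illustration of the mechanism (a plain C/Win32 COM sketch, with Excel.Application merely as an example ProgID; this is not how LabVIEW does it internally): asking COM for CLSCTX_LOCAL_SERVER is what triggers the transparent out-of-process launch, and it is exactly this step that fails when the 32-bit registration is missing or broken:

        /* Out-of-process ActiveX/COM activation in plain C (Win32).
         * If the ProgID's 32-bit registration is broken, CLSIDFromProgID
         * or CoCreateInstance fails here, matching the symptom above.
         * Link with ole32.lib, oleaut32.lib and uuid.lib. */
        #define COBJMACROS
        #include <windows.h>
        #include <objbase.h>

        int main(void)
        {
            CLSID clsid;
            IDispatch *disp = NULL;
            HRESULT hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
            if (FAILED(hr)) return 1;

            hr = CLSIDFromProgID(L"Excel.Application", &clsid);
            if (SUCCEEDED(hr))
                /* CLSCTX_LOCAL_SERVER asks COM to launch the server out of
                 * process and marshal calls through a proxy transparently */
                hr = CoCreateInstance(&clsid, NULL, CLSCTX_LOCAL_SERVER,
                                      &IID_IDispatch, (void **)&disp);
            if (disp)
                IDispatch_Release(disp);
            CoUninitialize();
            return FAILED(hr) ? 1 : 0;
        }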
  13. I've got the same HA Yellow. I still need to actually set it up properly. I got it installed and started up, but not yet integrated with the devices in the house. So many other things to do too. I like that the CM4 module uses real flash memory rather than the SD card of the normal RPi. Much more reliable for a box that is typically put in some corner or in your metering cabinet and then left on its own pretty much all of the time.
  14. They still have something akin to the Alliance Member program. I'm not sure if it is still called that. I used to work at one too, but am now in academia.

     As to running LabVIEW directly on a HA box, that is not currently possible. Well, it may be possible with some emulation if you get an x86_64 emulator running on your ARM HA hardware, but that is: 1) a major project to get running, with lots of obstacles, many tricks, and a huge chance that what worked yesterday suddenly fails for unexplainable reasons, and 2) a taxing solution for the poor ARM CPU in your typical HA box.

     The current Hobbyist Toolkit is maybe the most promising solution at this point. It can deploy compiled VIs to a Raspberry Pi and run them headless there. But as it is now, it's a bit of a PITA. It requires its own chroot environment to provide an ARM environment compatible with the ARM CPU in the low-cost NI RIO hardware. This is distinctly different from the ARM binary mode typically running on your Raspberry Pi or any other modern ARM hardware: it is 32-bit and uses the so-called soft-FPU mode, where FPU instructions are emulated on the ARM core itself rather than using NEON or similar FPU hardware integrated in all modern ARM chips. And newer Raspberry Pi OSes, including what HA is using (when running on Raspi hardware), have all changed to 64-bit nowadays, which is still a bit of a hurdle with the current Hobbyist Toolkit but can be worked around if you know what you are doing.

     There is some talk from NI that they may eventually support a native Raspberry Pi version of this, where the LabVIEW program is deployed to the actual Raspi itself rather than into a chroot container on the Raspi. Whether that will ever see the light of day, and how and in what form, is completely unclear. There are several challenges, some technical, such as making sure the LabVIEW runtime can properly interact with the window manager on the Raspi (that should be fairly trivial, as it is pretty much the same as what LabVIEW for Linux needs), but also more economic/marketing ones: how do you justify the effort and cost of developing and especially maintaining such a solution without any tangible income in the form of hardware sales? Making it an extra licensed feature is not very feasible either; people are usually not willing to pay hundreds of bucks for a software license for something that runs on a 50 to 100 bucks hardware platform.

     And even with that, your development would still happen on a Windows box, in the same way as you develop code for the NI RIO hardware, where you have that hardware as a target in your LabVIEW project. Writing VIs and debugging them happens on your Windows box, and when you are confident that it works, you deploy the resulting compiled VI code to the target and let it run there. This so far only works under Windows, and porting it to a Linux host is a major undertaking that I'm not sure NI has invested any real effort into so far. Directly running the LabVIEW IDE on the Raspberry Pi is probably even more unlikely to happen any time soon.
  15. LabVIEW DSC does this with an internal tag name in the Control, and the corresponding configuration dialog allows you to configure that tag name.
  16. I assume that support for the old *.cdf NI-MAX format for installation onto pre-LabVIEW 2020 RT targets is not a topic anymore, and you guys rather expect *.ipk files? Maybe add a download feed to the OpenG GitHub project for this? 🙂
  17. I come across them regularly here at the university. There are quite a few setups that are fairly old already and just need some modifications or updates, and they work and work and work and work like a clock once set up properly. Developing for them gets harder though, as you have to use LabVIEW <= 2019 for that.
  18. Actually, I do vaguely remember that there was an issue with append mode when opening a ZIP file, but that is so long ago that I'm not really sure it was around the 4.2 days rather than a lot earlier. I'll have to get back to the 5.0.x version and finally get the two RT Linux versions compiled. Or are you looking at old legacy RT targets here?
  19. You may also want to tell people where one can actually download or at least buy this. Although if you want to sell it, do not expect too many reactions. It is already hard to get people to use such toolkits when you offer them as a free download.
  20. Reformatting the post so that it is easier on the eyes, uses a normal font, and doesn't require a 200" monitor to read properly would already help a lot. Pretty much all of that Dynamic Signal stuff with Express VI complexity could be done in a simple VI that uses a lot less code and screen space. And you are not using an Excel file but a tab-separated text file, which is of course more than capable of doing what you need, and can be read by any application that can read text files.
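
     To underline that last point, a throwaway sketch (standard C only; data.tsv is a made-up file name) is all it takes to parse such a file:

        /* Tiny TSV reader: splits each line of a tab-separated file into
         * fields using only the C standard library. Note that strtok
         * collapses consecutive tabs, i.e. it skips empty fields. */
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            char line[1024];
            FILE *f = fopen("data.tsv", "r");
            if (!f) return 1;
            while (fgets(line, sizeof line, f)) {
                line[strcspn(line, "\r\n")] = '\0';   /* strip line ending */
                for (char *field = strtok(line, "\t"); field;
                     field = strtok(NULL, "\t"))
                    printf("[%s] ", field);
                putchar('\n');
            }
            fclose(f);
            return 0;
        }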
  21. That question seems very unspecific. LabVIEW Base package? What has that to do with PXIe? Why high-end (and high-priced) hardware on one side and the lowest-cost LabVIEW license on the other? LabVIEW Base is simply the IDE: no application builder, no source code control helper tools, no analysis libraries, no toolkits, nothing. You can still get pretty far with it, especially if you use VIPM and the OpenG libraries, but yes, it is limited.
  22. If your system is not resource-constrained, I kind of doubt that the 550 kB VI size is the issue. It seems more likely to be a somehow corrupted VI. While it may still load for simple execution, LVCompare tries to access a lot more information in the VI resources to make a proper comparison, and if some of that is corrupted in one of the two VIs, it may simply crash over such inconsistent data.
  23. System-style controls adhere to the actual system control settings and adapt to whatever your platform's currently defined visual style is. This includes color and just about any other visual aspect aside from the size of the control. If you customize existing controls by adding elements, you have to be very careful about the Z order of the individual parts. If you put a glyph on top of a sub-part that has a user interaction, you basically shield that sub-part from receiving the user interaction, since the added glyph gets the event and, not knowing what to do with it, simply discards it.
  24. It's funny that this one found the actual error almost completely, and then, as you try to improve on it, it goes pretty much off into the woods. I have to say that I still haven't really used any of the AIs out there. I did try in the beginning, as I was curious, and I was almost blown away by how eloquent the answers sounded. This was text with complete sentences, correct sentence structure, and word usage way above the average text you read nowadays, even in many books. But reading the answers again and again, I could not help a feeling of fluffiness, cozy and comfortable on top of that eloquent structure, saying very little of substance with a lot of expensive-looking words. My son tried out a few things, such as letting it create a haiku, a specific Japanese style of poem, and it consistently messed that up by not adhering to the required syllable scheme; despite being pointed at the error, it apologized, stated the right scheme, and then went on to make the same error again. One thing I recently found useful is when I try to look for a specific word and don't exactly know what it was (I blame this on my age): when searching on Google with a short description, they now often produce an AI-generated text at the beginning, which surprisingly often names the exact word I was looking for. So if you know what you are looking for but can't quite remember the exact word, it can be quite useful. But using it to research things I have no knowledge about is a very bad idea. Equally, letting it find errors in programming can be useful; vibe coding your software, however, is going to be a guaranteed mess.
  25. I'm pretty sure that exists, at least the loading of a VI from a memory buffer, if my memory doesn't completely fail me. How to save a VI (or control) into a memory buffer might be more tricky. Most likely the VI Server methods for that are hidden behind one of the SuperSecretPrivateSpecialStuff ini tokens. Edit: It appears it's just the opposite of what I thought. There is a private VI method Save:To Buffer that seems to write the binary data into a string buffer. But I wasn't able to find any method that could turn that back into a VI reference.