Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. Actually there are ZLIB Inflate and ZLIB Deflate, and Extended variants of both, that take in a string buffer and output another one. The Extended variants allow you to specify which header format to use in front of the actual compressed stream. But yes, I did not expose the lower level functions with Init, Add, and End. Not that it would be very difficult, other than having to consider a reasonable control type to represent the "session". A refnum would work best, I guess.
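For reference, that Init, Add and End style of lower level functions maps onto zlib's C streaming API. A minimal sketch, with made-up buffer handling and no chunked processing (a real implementation would loop over the input and grow the output as needed):

    #include <string.h>
    #include "zlib.h"

    /* Decompress a complete compressed stream into a caller-provided buffer.
       windowBits selects the header format: 15 = zlib header, 15 + 16 = gzip
       header, -15 = raw deflate data (what the Extended variant exposes). */
    int inflate_buffer(const unsigned char *in, size_t inLen,
                       unsigned char *out, size_t outLen)
    {
        z_stream strm;
        memset(&strm, 0, sizeof(strm));
        if (inflateInit2(&strm, 15) != Z_OK)          /* the "Init" step */
            return -1;
        strm.next_in   = (unsigned char *)in;
        strm.avail_in  = (uInt)inLen;
        strm.next_out  = out;
        strm.avail_out = (uInt)outLen;
        int ret = inflate(&strm, Z_FINISH);           /* the "Add"/process step */
        inflateEnd(&strm);                            /* the "End" step */
        return (ret == Z_STREAM_END) ? (int)strm.total_out : -2;
    }

A session refnum in a LabVIEW API would essentially carry the z_stream between those three calls.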
  2. I can understand that sentiment. I'm also just doing some shit that I can barely understand.🤫
  3. You seem to have done all the pre-research already. Are you really not wanting to volunteer? 😁
  4. They absolutely do! The current ZIP file support they have is basically just the zlib library AND the additional ZIP/UNZIP example provided in the contrib folder of the ZLIB distribution. It used to be quite an old version and I'm not sure if and when they ever really upgraded it to later ZLIB versions. I stumbled over that fact when I tried to create shared libraries for realtime targets. When creating one for the VxWorks OS I never managed to load it at all on a target. Debugging that directly would have required an installation of the Diab compiler toolchain, which was part of the VxWorks development SDK and WAAAAYYY too expensive to even spend a single thought on.
After some back and forth with an NI engineer, he suggested I look at the export table of the NI-RT VxWorks runtime binary, since VxWorks had the rather huge limitation of having only one single global symbol table where all dynamic modules get their symbols registered, so you could not have two modules exporting even one single function with the same name without causing the second module to fail to load. And lo and behold, pretty much all of the zlib zip/unzip functions were in that list, and the zlib functions themselves as well. After I changed the export symbol names of all the functions I wanted to call from my OpenG ZIP library to carry an extra prefix (see the sketch below), I could suddenly load my module and call the functions.
Why not use the functions in the LabVIEW kernel directly then?
1) Many of those symbols are not publicly exported. Under VxWorks there does not seem to be a distinction between local and exported functions; they all end up in the symbol table. Under Linux ELF, symbols are kept per module in a symbol table but marked as to whether they are visible outside the module or not. Under Windows, only explicitly exported functions appear in the export table. So under Windows you simply can't call those other functions at all, since they are not in the LabVIEW kernel export table unless NI adds them explicitly, which they did only for a few that are used by the ZIP library functions.
2) I have no idea which version NI is using and no control over when they change anything or whether they modify any of those APIs. Relying on such an unstable interface is simply suicide.
Last but not least: LabVIEW uses the deflate and inflate functions to compress and decompress various binary streams in its binary file formats. So those functions are there, but not exported to be accessed from a LabVIEW program. I know that they did explicit benchmarks about this, and the results back then showed clearly that reducing the size of the data that had to be read and written to disk by compressing it resulted in a performance gain despite the extra CPU processing for the compression/decompression. I'm not sure if this would still hold with modern SSD drives connected through NVMe, but why change it now. And it gave them an extra marketing bullet point in the LabVIEW release notes about reduced file sizes. 😁
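The prefixing can be done with plain preprocessor renames before pulling in the zlib headers, with the same defines used when compiling the zlib sources themselves. A minimal sketch; the prefix "ogz_" and the selection of symbols shown here are made up for illustration, not what the OpenG ZIP library actually uses:

    /* Rename the symbols this module compiles and exports, so they do not
       collide with the copies already registered in the VxWorks global symbol
       table by the LabVIEW runtime. */
    #define deflate        ogz_deflate
    #define deflateInit_   ogz_deflateInit_
    #define deflateInit2_  ogz_deflateInit2_
    #define deflateEnd     ogz_deflateEnd
    #define inflate        ogz_inflate
    #define inflateInit_   ogz_inflateInit_
    #define inflateInit2_  ogz_inflateInit2_
    #define inflateEnd     ogz_inflateEnd

    #include "zlib.h"

zlib itself ships a similar mechanism (Z_PREFIX in zconf.h) that renames its public symbols to z_*.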
  5. You make it sound trivial when you list it like that. 😁
  6. Great effort. I always wondered about that, but looking at the zlib library it was clear that the full functionality was very complex and would take a lot of time to get working. And the biggest problem I saw was the testing. Bit level stuff in LabVIEW is very possible but it is also extremely easy to make errors (that's independent of LabVIEW btw), so getting that right is extremely difficult and just as difficult to prove consistently.
Performance is of course another issue. LabVIEW allows a lot of optimizations, but when you work on bit level, the individual overhead of each LabVIEW function starts to add up, even if it is in itself just tiny fractions of microseconds. LabVIEW functions do more consistency checking to make sure nothing will ever crash because of out of bounds access and more. That's a nice thing and makes debugging LabVIEW code a lot easier, but it also eats performance, especially if these operations are done in inner loops millions of times. Cryptography is another area that has similar challenges, except that the security requirements are even higher. Assumed security is worse than no security.
I have in the past written a collection of libraries to read and write image formats for TIFF, GIF and BMP, and even implemented the somewhat easier LZW algorithm used in some TIFF and GIF files. At its base it consists of a collection of stream libraries to access files and binary data buffers as a stream of bytes or bits (a small sketch of such a bit reader follows at the end of this post). It was never intended to be optimized for performance but for interoperability and complete platform independence. One partial regret I have is that I did not implement the compression and decompression layer as a stream based interface. This kind of breaks the easy interchangeability of the various formats by just changing the according stream interface or layering an additional stream interface in the stack. But development of a consistent stream architecture is one of the more tricky things in object oriented programming. And implementing a decompressor or compressor as a stream interface basically turns the whole processing inside out. Not impossible to do, but even more complex than a "simple" block oriented (de)compressor. And also a lot harder to debug.
Last but not least, it is very incomplete. TIFF support covers only a limited number of sub-formats; the decoding interface is somewhat more complete while the encoding part only supports basic formats. GIF is similar and BMP is just a very rudimentary skeleton. Another inconsistency is that some interfaces support input and output to and from IMAQ while others support the 2D LabVIEW Pixmap, and the TIFF output supports both for some of the formats. So it's very sketchy.
I did use that library recently in a project where we were reading black/white images from IMAQ, which only supports 8 bit greyscale images, but the output needed to be 1-bit TIFF data to transfer to an inkjet print head. The previous approach was to save a TIFF file in IMAQ, which was stored as 8-bit greyscale with really only two different values, and then invoke an external command to convert the file to 1-bit bi-level TIFF and transfer that to the printer. But that took quite a bit of time and did not allow processing the required 6 to 10 images per second. With this library I could do the full IMAQ to 1-bit TIFF conversion consistently in less than 50 ms per image, including writing the file to disk.
And I always wondered what would be needed to extend the compressor/decompressor with a ZLIB inflate/deflate version, which is another compression format used in TIFF (and in PNG, but I haven't considered that yet). The main issue is that adding native JPEG support would be a real hassle, and many TIFF files internally use a form of JPEG compression for real-life images.
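A minimal sketch of the kind of bit-level reader such a stream library is built around; the names and struct layout are made up for illustration and are not the actual library interface. TIFF LZW packs codes MSB-first while GIF packs them LSB-first; only the MSB-first order is shown:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        const uint8_t *buf;   /* underlying byte buffer */
        size_t         len;   /* total number of bytes */
        size_t         pos;   /* current byte index */
        int            bit;   /* bits already consumed in the current byte, 0..7 */
    } BitReader;

    /* Read 'count' bits (1..24) MSB-first; returns -1 past the end of the buffer. */
    static int32_t br_read(BitReader *br, int count)
    {
        int32_t value = 0;
        while (count-- > 0) {
            if (br->pos >= br->len)
                return -1;
            value = (value << 1) | ((br->buf[br->pos] >> (7 - br->bit)) & 1);
            if (++br->bit == 8) {
                br->bit = 0;
                br->pos++;
            }
        }
        return value;
    }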
  7. Ok, you should have specified that you were comparing it with tools written in C 🙂 The typical test engineer definitely has no idea about all the possible ways C code can be made to trip over its own feet, and back then it was even less understood and frameworks that could help alleviate the issue were few and far between. What I could not wrap my head around was your claim that LabVIEW would never crash. That very much contradicts my own experience. 😁 Especially if you make it sound like it is worse nowadays. It's definitely not, but your typical use cases are for sure different nowadays than they were back then. And that is almost certainly the real reason you may feel LabVIEW crashes more today than it did back then.
  8. One BBF (Big Beautiful F*cking) Global Namespace may sound like a great feature but is a major source of all kinds of problems. From a certain system size it gets very difficult to maintain and extend for any normal human, even the original developer, after a short period. When I read this I was wondering what might cause the clear misalignment between your experience and my memory:
1) It was meant ironically and you forgot the smiley
2) A case of rosy retrospection
3) Or we are living in different universes with different physical laws for computers
LabVIEW 2.5 and 3 were a continuous stream of GPFs, at times so bad that you could barely do any work in them. LabVIEW 4 got somewhat better but was still far from smooth sailing. 5 and especially 5.1.1 was my first long term development platform. Not perfect for sure, but pretty usable. But things like certain video drivers could for sure frequently send LabVIEW belly up, as could more complicated applications with external hardware (from NI). 6i was a gimmick, mainly to appease the internet hype, not really bad but far from stable. 7.1.1 ended up being my next long term development platform. I never touched 8.0 and only briefly 8.2.1, which was required for some specific realtime hardware. 8.6.1 was the next version that got some use from me.
But saying that LabVIEW never crashed on me in the 90ies, even leaving my own external code experiments aside, would be a gross misrepresentation. And working in NI technical support from 1992 to 1996 for sure made me see many, many more crashes in that time.
  9. True, there is no active license checking in LabVIEW until 7.1. And as you say, using LabVIEW 5 or 6 as a productive tool is not wise, and neither is blabbing about Russian hack sites here. What someone installs on his own computer is his own business, but expecting such hacks to be done out of pure love for humanity is very naive. If someone is able to circumvent the serial check somehow (not a difficult task), they are also easily able to add some extra payload into the executable that does things you would rather not have done on your computer.
  10. I know it runs (mostly); installation is a slightly different story. But that's still no justification to promote pirated software, no matter how old it is.
  11. LabVIEW 5 is almost 30 years old! It won't run very well on any modern computer, if at all. Besides, offering software like this, no matter how old, is not just maybe illegal but definitely illegal. So keep browsing your Russian crack sites, but keep your offerings away from this site, please!
  12. Wow, over 2 hours of build time sounds excessive. My own packages are of course not nearly as complex, but with my simplistic clone of the OpenG Package Builder it takes me seconds to build the package, and a little longer when I run the OpenG Builder relinking step beforehand for pre/postfixing VI names and building everything into a target distribution hierarchy. I have been planning for a long time to integrate that Builder relink step directly into the Package Builder, but it's a non-trivial task and would need some serious love to do it right.
I agree that we were not exactly talking about the same reason for lots of VI wrappers, although it is very much related to it. Making direct calls into a library like OpenSSL through Call Library Nodes, which really is a collection of several rather different paradigms that have grown over the course of more than 30 years of development, is not just a pain in the a* but a royal suffering. And it still stands for me: solving that once in C code to provide a much simpler and more uniform API across platforms to call from LabVIEW is not easy, but it eases a lot of that pain (see the sketch below).
It's in the end a tradeoff of course: suffering in the LabVIEW layer to create lots of complex wrappers that often end up being different per platform (calling convention, subtle differences in parameter types, etc.), or writing fairly complex multiplatform C code and having to compile it into a shared library for every platform/bitness you want to support. Both are hard, and it's about which hard you choose. And depending on personal preferences, one hard may feel harder than the other.
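To make the wrapper idea concrete, a minimal sketch of the C side: one flat function with simple parameter types, identical on every platform, instead of a chain of Call Library Nodes against the raw OpenSSL EVP API. The name lvwrap_sha256 and the error codes are made up for illustration; a real wrapper would cover much more of the library:

    #include <stddef.h>
    #include <stdint.h>
    #include <openssl/evp.h>

    #if defined(_WIN32)
      #define LV_EXPORT __declspec(dllexport)
    #else
      #define LV_EXPORT __attribute__((visibility("default")))
    #endif

    /* digest must point to at least 32 bytes; returns 0 on success. */
    LV_EXPORT int32_t lvwrap_sha256(const uint8_t *data, int32_t len, uint8_t *digest)
    {
        unsigned int outLen = 0;
        if (len < 0 || data == NULL || digest == NULL)
            return -1;
        return EVP_Digest(data, (size_t)len, digest, &outLen,
                          EVP_sha256(), NULL) == 1 ? 0 : -2;
    }

On the LabVIEW side this would then be a single Call Library Node with a U8 array, an I32 length and a pre-allocated 32-byte U8 array.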
  13. Actually it can be. But it requires undocumented features. Using things like EDVRs or Variants directly in the C code can immensely reduce the number of DLL wrappers you need to make. Yes, it makes the C code wrapper more complicated and is a serious effort to develop, but that is a one time effort. The main concern is that since it is undocumented, it may break in future LabVIEW versions for a number of reasons, including NI trying to sabotage your toolkit (which I have no reason to believe they would want to do, but it is a risk nevertheless).
  14. Well, if you stick to strict OOP principles, modularizing it through a plugin mechanism and similar should be fairly easy to do! It takes a bit of time to create the necessary plugin mechanisms and requires at least about 3 iterations before you end up with something that really works, but that is still magnitudes easier than waiting for an 18k VI project to load every time and falling asleep between edit operations. That's one more reason why I usually have a wrapper shared library that adapts the original shared library interface impedance to the LabVIEW Call Library Interface impedance. 😀
  15. One thing I have seen in the past really wreaking havoc with the LabVIEW editor and/or compiler is circular dependencies. They are very easy to end up with even in moderately sized projects if one uses globals, and for large projects they are absolutely unavoidable without a proper design that avoids globals almost entirely, except in very carefully chosen places. The LabVIEW editor/precompiler does pretty much a full pass over the internal data graph for every edit operation. With circular dependencies the graph gets effectively infinite in length, and while the system has checks in place to detect such circular references and abort the parsing at some point, it does not seem able to do that safely on the first occurrence without missing some paths, so it goes on longer than is necessary most of the time. The first sign usually shows up as frequent inability to build the project without obscure errors, especially for realtime targets. Things go ok much longer for builds on Windows, but drop the project code into a realtime target and builds and/or deploys will cause all kinds of hard to explain errors.
An 18k VI project! That's definitely a project that has grown into a mega pronto dinosaur monster. I can't imagine even considering creating such a beast. My biggest projects were probably somewhere around 5000 VIs and that was already getting very painful to work on. It eventually caused me to modularize it, with parts moved into realtime targets. The cost of the additional hardware was actually smaller than the time lost trying to get the monster to build and work, even though NI realtime hardware is anything but cheap.
But long ago I inherited a project that consisted of maybe only 100 VIs. However, its main VI was something like 15MB in size (the other VIs were mostly just simple accessors to drivers and ... shudder ... several dozen global variables), with the main VI being one huge loop with sequence structures inside case structures, inside loops, inside more sequence structures, inside even more case structures and loops, and this continued for a few more levels like that. Not one shift register; everything was put in globals and written and read back hundreds of times. Editing that VI was a painful exercise: select a wire or node, wait 5 seconds, move the wire or node, wait 5 seconds ... . I have no idea how the original developer ever got it to that point without going insane, but more likely he was insane to begin with. 😀
I was busy for several days just getting the diagram a bit cleaned up by adding some shift registers to manage the actual data more efficiently, identifying common code constructs that appeared over and over all over the place and putting them into subVIs, and getting everything to a state that was reasonably workable before I could really go and refactor that application. In the end I had maybe 500 or so VIs and a main VI that was well below 1MB, with a proper state machine (roughly the pattern sketched below) and almost no sequence structures anymore. And it ran reliably, and when you pushed the stop button you did not have to wait half an eternity before the application was able to detect that.
The biggest irony was that the application actually was working with an enum state with some 100 or more states, maintained in a global, and in almost every sequence frame there was a case structure that had one or a few cases for a specific state and a default that did pretty much nothing. It was a state machine turned inside out and then put into a cascade of sequences!
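For what it's worth, the target architecture in text form (the real thing is of course a LabVIEW diagram with a shift register carrying the state): one loop, one state value carried from iteration to iteration, one case per state. The state names here are made up:

    typedef enum { ST_IDLE, ST_MEASURE, ST_REPORT, ST_STOP } State;

    static State handle_idle(void)    { /* wait for a start condition */ return ST_MEASURE; }
    static State handle_measure(void) { /* acquire the data */           return ST_REPORT;  }
    static State handle_report(void)  { /* write the results */          return ST_STOP;    }

    int main(void)
    {
        State state = ST_IDLE;                 /* the shift register equivalent */
        while (state != ST_STOP) {
            switch (state) {                   /* one case structure, one frame per state */
            case ST_IDLE:    state = handle_idle();    break;
            case ST_MEASURE: state = handle_measure(); break;
            case ST_REPORT:  state = handle_report();  break;
            default:         state = ST_STOP;          break;
            }
        }
        return 0;
    }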
  16. Libre Office/Open Office is an open source implementation of an office suite of applications. It delivers similar applications to the Microsoft Office package, and they work in many ways similarly, but they are not the same. If you are a seasoned Excel, Word or whatever user, they require a little bit of relearning and getting used to. They support a 3rd party interface similar to the Microsoft Office ActiveX interface, but it's not the same and will not work with the Report Generation Toolkit (RGT). One would need to write extra RGT plugins for that, except that while NI made the RGT plugin based, they kind of borked the instantiation of plugins by not making it easily extendable. The available plugins are hardwired in the implementation, and extending that would require modifications to the NI implementation, which most developers consider a no go, as you have to modify the Toolkit on every installation and reapply the modification any time the Toolkit is reinstalled/updated by NI, which might or might not break your modifications too.
  17. There is a good chance that Microsoft eventually dropped support in Office for automation from 32-bit applications. What I wrote above was true between 2015 and 2020. I haven't tried to use Excel from LabVIEW since, and generally use Libre Office anyhow if I need an office application.
  18. It could be made to work in the past. Basically, the Office interfaces are all ActiveX based, and ActiveX is very well able to invoke an ActiveX Automation Server out of process through an ActiveX proxy server process in the background. If the ActiveX Automation Server is properly registered, this happens transparently in the background without extra user interaction. Unfortunately, the so-called Click-to-Run MS Office installers that are used nowadays either forget to do the 32-bit registration of their Automation Server component or somehow bork it up. In the past I have been able to fix that on different machines by running a Repair Install from the Windows Applications control panel.
  19. I've got the same HA Yellow. I still need to actually set it up properly. I got it installed and started up, but not yet integrated with the devices in the house. So many other things to do too. I like that the CM4 module uses real flash memory rather than the SD card of the normal RPi. That is much more reliable for a box that is typically put in some corner or in your metering cabinet and then left on its own pretty much all of the time.
  20. They still have something akin to the Alliance Member program. I'm not sure if it is still called that. I used to work at one too, but am now in academia.
As to running LabVIEW directly on a HA box, that is not currently possible. Well, it may be possible with some emulation if you get an x86_64 emulator running on your ARM HA hardware, but that is:
1) a major project to get running, with lots of obstacles, many tricks and a huge chance that what worked yesterday suddenly fails for unexplainable reasons
2) a taxing solution for the poor ARM CPU in your typical HA box
The current Hobbyist Toolkit is maybe the most promising solution at this point. It can deploy compiled VIs to a Raspberry Pi and run them headless on the Raspi. But as it is now, it's a bit of a PITA. It requires its own chroot environment to provide an ARM environment that is compatible with the ARM CPU in the low cost NI RIO hardware. This is distinctly different from the ARM binary mode typically running on your Raspberry Pi or any other modern ARM hardware: it is 32-bit, and uses the so-called soft-FPU mode where floating point instructions are emulated on the ARM core itself, rather than using Neon or similar FPU hardware integrated in all modern ARM chips. And new Raspberry Pi OSes, including what HA is using (when running on Raspi hardware), have all changed to 64-bit nowadays, which is still a bit of a hurdle with the current Hobbyist Toolkit but can be worked around if you know what you are doing.
There is some talk from NI that they may support a native Raspberry Pi version of this, where the LabVIEW program is deployed to the actual Raspi itself rather than into a chroot container on the Raspi. If, how and in what form that is ever going to see the light of day is completely unclear. There are several challenges, some technical, such as making sure the LabVIEW runtime can properly interact with the window manager on the Raspi (that should be fairly trivial as it is pretty much the same as what LabVIEW for Linux needs), but also more economical/marketing: how to justify the effort and cost of developing and especially maintaining such a solution without any tangible income in the form of hardware sales? Making it an extra licensed feature is also not very feasible; people are usually not willing to pay hundreds of bucks for a software license for something that runs on a 50 to 100 buck hardware platform.
And even with that, your development would still be on a Windows box, in the same way as you develop code for the NI RIO hardware, where you have that hardware as a target in your LabVIEW project. Writing VIs and debugging them happens on your Windows box, and when you are confident that it works, you deploy the resulting compiled VI code to the target and let it run there. This so far only works under Windows, and porting it to a Linux host is a major undertaking that I'm not sure NI has invested any real effort into so far. Directly running the LabVIEW IDE on the Raspberry Pi is probably even more unlikely to happen any time soon.
  21. LabVIEW DSC does this with an internal tag name in the control, and the corresponding configuration dialog allows you to configure that tag name.
  22. I assume that support for the old *.cdf NI-MAX format for installation onto pre-LabVIEW 2020 RT targets is not a topic anymore and you guys expect *.ipk files instead? Maybe add a download feed to the OpenG Github project for this? 🙂
  23. I come across them regularly here at the university. There are quite a few setups that are fairly old already and just need some modifications or updates, and they work and work and work and work like a clock once set up properly. Developing for them gets harder though, as you have to use LabVIEW <= 2019 for that.
  24. Actually I do vaguely remember that there was an issue with append mode when opening a ZIP file, but that is so long ago that I'm not really sure whether it was around the 4.2 era or a lot earlier. I'll have to get back to the 5.0.x version and finally get the two RT Linux versions compiled. Or are you looking at old legacy RT targets here?
  25. You may also want to tell people where they can actually download or at least buy this. Although if you want to sell it, do not expect too many reactions. It is already hard to get people to use such toolkits when you offer them for free download.