Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. VxWorks is quite special. On many fronts it looks like a POSIX platform, but that is only a thin and incomplete layer above the lower-level and very specialized APIs. Programming to that lower-level interface is sometimes required for specific operations, but documentation was only available as part of the very expensive developer platform with its accompanying compiler. It is of academic interest now, since VxWorks has been deprioritized by WindRiver in favor of their own Linux-based RT platform, and NI stopped using it long ago and never moved beyond version 6.3 of the OS. It was anyhow only intended for the PowerPC hardware, since NI moved to that platform at a time when power-efficient embedded targets were not really an option on x86-based hardware. But with the PowerPC losing pretty much all its markets, it was a dead end (at some point it was the most-used embedded CPU solution; many printers and other devices, whose users never saw anything of the internal hardware, were running on PowerPC). It was hard to port any reasonably sized code to VxWorks: the higher-level APIs were often very similar to those of other POSIX platforms like Linux, but did not always work exactly the same way, or did not provide certain functionality at that level. Accessing the lower-level API was very difficult because of the very limited documentation that could be found without investing an arm and a leg into the developer platform from WindRiver. But once that porting was done, fairly little maintenance was required, both because the API stayed fairly consistent and because NI didn't move to a different version (except from VxWorks 6.1 to 6.3 between LabVIEW 8.2 and 8.5).
  2. Unfortunately, Apple manages to almost consistently break backwards compatibility with earlier versions for anything but the most basic "Hello World" application. And yes, that is only a mild exaggeration of the current state of affairs. For an application like LabVIEW there is almost no hope of staying compatible across multiple OS versions without some tweaks. Partly this is caused by legacy code in LabVIEW that uses OS functions in a way that Apple declared deprecated versions ago; partly it is simply because that is considered quite normal among Apple application developers. For someone used to programming against the Windows API, this situation is nothing short of mind-boggling.
  3. It seems they are going to make normal ordering of perpetual licenses possible again. While the official stance was that perpetual licenses were gone, the reality was that you could still order them, but you had to be VERY insistent, and be lucky enough to know the right local NI sales person, to be able to order them. That will of course not help with a current Macintosh version of LabVIEW. Still, maybe some powers that be might decide that reviving that is also an option. I kind of doubt it, as I have experience with trying to support Mac versions of LabVIEW toolkits that contain external compiled components, and the experience is nothing short of "dramatic". But if a client teased NI convincingly about ordering a few thousand seats of LabVIEW if a Mac version were available, I'm sure they would think very hard about it. 😁
  4. It's Open Source (on SourceForge) and I started developing it more than 25 years ago. There never was any license involved, but yes, at that time Python 2.2 or thereabouts was the current version. I did some updates to also make it work in 2.3 and 2.5, and minor attempts to support 2.7, but had by that time lost interest in tinkering with it, as I was getting more involved with Lua for LabVIEW, and two scripting solutions next to each other seemed a bit excessive to work with. The shared library necessary to interface Python with LabVIEW definitely won't work out of the box with Python 3. There were simply too many changes in Python 3 to the internals as well as the datatype system for it to work without some changes to the shared library interface code (the change to Unicode strings instead of ASCII is only one of them, but a quite far-reaching one). Also, there is absolutely no support for Python environments such as those offered by Anaconda and the like. The main reason for starting with LabPython was actually that I had been trying to reverse engineer the script host interface that LabVIEW had introduced to interface to HiQ, and later Matlab. When searching for an existing scripting language with an embedding interface for integration into other applications, to use as a test case, I came across a project called Python, which was still somewhat obscure at that time. I didn't particularly like Python, and the fact that its inventor Guido van Rossum was actually Dutch did not affect my choice. And when I reached out to the Python community about how to embed Python in another application, I was frankly told that while there was an embedding API available in Python, there was little to no interest in supporting it, and I was pretty much on my own trying to use it. It still seemed the most promising option, as it was Open Source and actually had a real embedding API.
I did not even come across Lua at that time, although before version 5.0 Lua anyway had fairly limited capabilities for integration into other applications. So I developed a Python script server for that interface to allow integration of Python, and even got help from someone inside NI who was so friendly as to give me the function prototype declarations that such an interface needed to support in order for LabVIEW to recognize the server and not crash when trying to load it. After it was done and started to work, I was less than thrilled by the fact that the script was actually a compile-time resource, so it could not be altered by the user of a LabVIEW application, only by its developer. More as an afterthought, I added a programmatic interface to the already existing shared library, and the main functionality of LabPython was born. As those old LabVIEW script nodes were deprecated by NI several years ago, it would definitely not be a wise choice to try to build anything new on that tech. I'm not even sure if LabVIEW 2023 and newer would even allow LabPython to be loaded as a script server. But its programmatic interface should still be usable, although for quite a few reasons not with Python 3, without some serious tinkering in the C code of the shared library interface.
  5. Duplicate post from here: https://forums.ni.com/t5/LabVIEW/Read-from-INI-file-to-application-cluster/td-p/4369322
  6. I am actually working on it, but it is a bit more involved than I had anticipated at first. There is a certain impedance mismatch between what a library like open62541 offers as an interface and what LabVIEW needs to interface to it properly. I can currently connect to a server and query the available nodes, but querying the node values is quite a bit of work, to adapt the strict LabVIEW type system to a more dynamic data type interface like the one OPC UA offers. More advanced things like publish-subscribe are even more involved to solve in a LabVIEW-friendly way. And I haven't even started interfacing to the server side of the library!
  7. That's it! It didn't work for our use case, as it can't really work around LabVIEW being unable to have two different platforms loaded at the same time. As such it had no really significant advantages over the MGI Solution Builder in the way we had started using it.
  8. Not sure about 2011, to be honest, but no, you do not have to have all dependencies included in a PPL. You can have a PPL depend on other PPLs and configure the build to exclude that dependency from your PPL build, so that it remains external. This of course has to be done from the bottom up, which is quite some work. Only PPL dependencies and other binary dependencies can be excluded from being pulled into a PPL. So if you have code that must be shared between your PPL and other PPLs or your exe, that code needs to be in its own PPL, so each of those can refer to it. Yes, it is not trivial, and you need to plan before you start programming. You need a clear hierarchy overview and must be able to cleanly modularize your code into different PPLs. Tools like the MGI Solution Builder definitely help with that, as you can script the creation of a whole hierarchy of PPLs to be compiled in the correct order. Someone from NI was busy creating another solution that could build PPLs and, in the process of building them, also relink any dependencies on lvlib's into dependencies on lvlibp's, but that never quite got finished.
  9. Well, basically your program never reads anything from the serial port, so sending anything like *IDN? to it is totally superfluous and even wrong. As soon as the device starts up, it starts to spew out a line of text every 500 ms, no matter if anyone is listening or not. Basically, you want to:
- start up your LabVIEW program
- initialize the serial port with the correct parameters, leaving the Enable Termination Character on as you do now
- do one read of at least 100 bytes or more, possibly even multiple times, to make sure the serial port input buffer is cleared of any partial or old data
- do NOT send anything to the device
- then do a VISA Read of something like 100 bytes at least, every 500 ms; DO NOT USE Bytes at Serial Port!
You should see a string like "Temperature: <number> °C | Humidity: <number> % | Air Quality: <number>". The degree sign ° and the pipe symbol | might however pose a problem. Not sure what locale your Arduino device uses, but it may not be the same as your Windows computer uses, and then those characters will not match what you expect.
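Once the raw line has been read, extracting the three values can be sketched as follows (a Python sketch, not LabVIEW; the line format is taken from the expected string above, and the regex deliberately skips over the locale-sensitive ° and | characters so their exact encoding doesn't matter):

```python
import re

# Expected line, per the post (exact formatting is an assumption):
# "Temperature: 23.5 °C | Humidity: 45.2 % | Air Quality: 87"
LINE_RE = re.compile(
    r"Temperature:\s*(?P<temp>[-+]?\d+(?:\.\d+)?).*?"
    r"Humidity:\s*(?P<hum>[-+]?\d+(?:\.\d+)?).*?"
    r"Air Quality:\s*(?P<aq>[-+]?\d+(?:\.\d+)?)"
)

def parse_sensor_line(raw):
    """Decode one raw line from the serial port and extract the values.

    errors='replace' sidesteps the degree-sign locale issue mentioned
    above; the regex then simply skips over the mangled character.
    """
    text = raw.decode("utf-8", errors="replace")
    m = LINE_RE.search(text)
    if m is None:
        return None  # partial or garbled line - just read again
    return {
        "temperature": float(m.group("temp")),
        "humidity": float(m.group("hum")),
        "air_quality": float(m.group("aq")),
    }
```

Returning None on a non-matching line lets the caller discard partial reads from the cleared buffer and simply try the next 500 ms read.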
  10. There are several alternatives for the NI GPU Toolkit that are considerably more up to date and actually still maintained. https://www.ngene.co/gpu-toolkit-for-labview https://www.g2cpu.com/
  11. I can definitely echo that. PPLs work fairly well when you only use one platform (Windows x86 and x64 are two different platforms in that respect). Basically a PPL is quite similar to a DLL in that respect: it is binary compiled code and only works on the LabVIEW platform it was created on. In addition you also have to watch out for LabVIEW versions, although with the feature, available since about 2017 or so, to make a PPL loadable in a newer LabVIEW version, this is slightly less of a problem, but not entirely. There are possible issues with executing a PPL in a newer LabVIEW version than the one it was created in. Where things really get wonky is when you want to support multiple platforms in LabVIEW. Different platform versions of PPLs in the same project are absolutely out of the question. You can't have a project that references a PPL under your My Computer target and the same PPL in a Realtime target in that project (the same in name only; they obviously need to have been recompiled for each target). LabVIEW will get itself into a tangle over that and render both targets as broken, since it will try to match the two incompatible PPLs to both targets. But it is even worse than that! Even if you separate the two targets into their own projects, you have to be extremely careful to never load both at the same time. For some reason the context isolation between LabVIEW targets (including targets in different project files that should in theory be fully isolated) simply doesn't work for PPLs. It seems that LabVIEW only maintains one global list of loaded PPLs across all possible contexts, and that of course messes royally with the system. Instead, PPLs should be managed based on the context they are referenced in, and there should be no sharing at all between them.
There is also an unfinished feature in LabVIEW that allows installing PPLs and other support files in target-specific subdirectories, so that you could theoretically have PPLs for all the different targets on disk and reference them with the same symbolic path, which then resolves to the target-specific PPL. But it has many bugs and doesn't quite work as intended on some platforms, and as long as PPLs are not managed per context it is of limited usefulness even if it did fully work.
  12. The change to "librarize" all OpenG functions is a real change in the sense that it requires each and every caller to be recompiled. This can't be avoided, so I'm afraid you will either have to bite the bullet and make a massive commit, or keep using the last OpenG version that was not moved to libraries (which in the long run is of course not a solution).
  13. And what is the program on your ESP32 doing? Does it even listen on the according serial port? Does it know what it should do when seeing an *IDN?<new line> on that port? What does it send back when seeing that command? The ESP32 is a capable microcontroller board but it needs a program that can implement the reading of your sensors and react to commands from your LabVIEW program and send something back. And that program needs to be implemented by you in one of the supported programming languages for the ESP32. Most likely you will want to use ESP-IDF as a plugin in either Eclipse or VSCode.
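To make that concrete, here is a rough sketch of the kind of command handling the ESP32 firmware needs to implement before *IDN? can ever produce an answer. It is written in Python purely for illustration; the real firmware would be C/C++ under ESP-IDF or Arduino, and the command names and reply format here are assumptions, not an existing protocol:

```python
# Hypothetical host-language sketch of an SCPI-style command dispatcher,
# as the ESP32 firmware would need to implement it over its serial port.

def make_dispatcher(read_sensors):
    """Return a function mapping one received command line to a reply.

    read_sensors is a callable returning (temperature, humidity); in the
    real firmware this would read the attached sensor hardware.
    """
    def handle(line):
        cmd = line.strip().upper()
        if cmd == "*IDN?":
            # SCPI-style identification: maker, model, serial, firmware
            return "DIY,ESP32-SENSOR,0001,0.1"
        if cmd == "MEAS?":
            t, h = read_sensors()
            return "{:.1f},{:.1f}".format(t, h)
        return "ERROR: unknown command"
    return handle
```

The firmware's main loop would then read a line from the UART, feed it to such a dispatcher, and write the reply (plus a termination character) back, so the LabVIEW side actually gets a response to query.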
  14. The only thing I have found to work is to maintain separate projects for 32-bit and 64-bit and have each of them build into a separate location on disk. Anything else is going to mess up your projects almost beyond repair. That applies both to the projects that build your PPLs (possibly one project per PPL, as in the case I worked on) and to the applications using those PPLs. Using symlinks to map the build locations of the PPLs to a common path that is referenced by all the projects making use of PPL dependencies (including PPL build projects) helps with maintenance, as the projects then only need to reference a general location for dependencies rather than an architecture-specific one.
  15. 5.0.1 and in the meantime 5.0.2 have since been released. One issue, though not really new, as it existed before: don't disable the mass compile after install. It may take some time, but it reliably fixes stale shared library paths in the VIs, and I have so far not found a way to make those paths fix up automatically at package creation, since the path seems to need to be absolute. The two possible approaches I'm currently considering:
1) Use a so-called symbolic path (/<user.lib>/_OpenG.lib/lvzip/lvzip.*). Disadvantage: only works if installed into the default location.
2) Use Specify Library Name on the diagram for the Call Library Node and calculate its path at runtime. Disadvantages: the shared library is no longer a visible dependency of the VIs, so it needs to be added explicitly to every application/shared library/assembly/source distribution build in order to be included; and there is extra execution time for the dynamic calculation of the path.
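The runtime path calculation behind option 2 essentially boils down to picking the platform-specific file name, mirroring what the lvzip.* wildcard in the symbolic path does. A minimal sketch in Python, assuming the same base name; the per-platform suffix conventions shown here are assumptions:

```python
import sys

def shared_library_name(base="lvzip", platform=None):
    """Pick the platform-specific shared library file name at runtime.

    Mirrors the lvzip.* wildcard of the symbolic path approach; the
    macOS .framework suffix in particular is an assumption here.
    """
    platform = platform or sys.platform
    if platform.startswith("win"):
        return base + ".dll"
    if platform == "darwin":
        return base + ".framework"
    return base + ".so"  # Linux and other Unix-like targets
```

In the LabVIEW diagram the equivalent logic would build the full path from the VI's own location plus this file name and feed it to the Call Library Node's path input.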
  16. If you use the chroot trick that NI/Digilent did for the Linx/Hobbyist Toolkit, it is theoretically doable, but far from easy peasy. And you still have to sit down with the NI/Emerson lawyers, as I told you before. However, I doubt you want to run your Haibal library in a specially compiled Debian kernel running in a chroot inside your Jetson Linux distro. That is necessary because the entire NI library tree and LabVIEW runtime are compiled for the ARM soft-float EABI that NI uses for their Xilinx XC7000-based realtime targets. And yes, you can NOT run a LabVIEW VI without the LabVIEW runtime on the target! Period! And the fact that NI put the NI Linux RT sources online has nothing to do with wanting to be nice to you or letting you build your own LabVIEW realtime hardware; it is simply the easiest way to comply with the GPL license requirements for the Linux kernel. But those license requirements do not extend to any software they create to run on that Linux kernel, since the kernel license has an explicit exemption for that. Without that exemption there would simply not be any commercial software running on Linux. And I understand what you want, but that does not make it feasible. I WANT to win the jackpot in the lottery too, but so far it never happened and quite certainly never will. 😀
  17. If and how the DLL uses exceptions is completely up to the DLL, and there are actually several ways this could work, but only the DLL writer can tell you (if it is not explained in the documentation). Windows has a low-level exception handling mechanism that can be used from the Win32 API. It is however not very likely that a DLL would make use of that. Then you have structured exception handling, or its namesakes from different C++ compilers. And here things get interesting, since each compiler vendor was traditionally very protective of their own implementation and very trigger-happy about suing anyone trying to infringe on the related patents. It meant that GCC for a very long time could not use the Microsoft SEH mechanism and therefore developed their own method that avoided Microsoft patents. So if your DLL uses exceptions and doesn't handle them before returning from a function call to the caller, you might actually be in a bit of a bind, as you really need to know what sort of exception handling was used. And if you use a different compiler than the one used for the DLL, you will most likely be unable to properly catch those exceptions anyhow, causing even more problems. Basically, a DLL interface is either C++ oriented, and then needs to be interfaced with the same compiler that was used for the DLL itself anyhow, or it is a standard C interface, and then it should NEVER pass unhandled exceptions to the caller, since that caller potentially has no way to properly catch them. One exception is Win32 exceptions, which the LabVIEW Call Library Node is actually prepared to catch and turn into the well-feared 1097 error everybody likes so much, unless you disable the error handling level completely in the Call Library Node configuration dialog. 😁 Your example code, while probably condensed and not the whole thing, does however ignore very basic error handling that comes into play long before you even get into potential exceptions.
There is no check that Load_Extern_DLL() returns a valid HANDLE. Neither do you check that the function pointers you get from that DLL are all valid. p2_CallbackFunction_t is rather superfluous and a bit misleading. The _t ending indicates a type definition, but in fact you simply declare a stack variable and assign the reference to the callback function to it. Luckily you then pass the contents of that variable to the function, so the fact that that storage is on the stack and will disappear from memory as soon as your function terminates is of no further consequence. But you could just as well pass the callback function itself to that function and completely forget about the p2_CallbackFunction_t variable declaration. Once your function returns, you indeed have no way to detect exceptions from the DLL anymore, as there is no stack or call chain on which such an exception could be passed up. The way this should be done is for the DLL to handle all exceptions internally and pass an according error indication through the callback function in an error variable. It can't use the callback function to pass up exceptions either, since the callback function is called by the DLL, so exception handling can't go from the callback to LabVIEW but only from the callback to the caller, which is indeed, yes ... big drumroll ... the actual DLL. If your DLL doesn't catch all exceptions properly and handle them by translating them to an error code of some sort and passing that through the callback, or some other means, to the calling application, then it is not finished in terms of asynchronous operation through callbacks. Exceptions can only pass up through the call chain, but if there is no call chain, such as with a callback function, there is no path to pass up exceptions either.
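The pattern just described, where the DLL side catches everything internally and reports failures through an error argument of the callback instead of letting exceptions escape into a context with no caller on the stack, can be sketched language-neutrally like this (Python standing in for the DLL-side C code; all names are hypothetical):

```python
def dll_side_worker(do_work, callback):
    """Stands in for the DLL's asynchronous worker routine.

    Nothing escapes as an exception: every failure is translated into
    an error indication that travels through the callback's error
    argument, which is the only channel back to the application.
    """
    try:
        result = do_work()
    except Exception as exc:            # catch *everything* on the DLL side
        callback(None, error=str(exc))  # translate into an error indication
    else:
        callback(result, error=None)
```

The application-side callback then only has to inspect the error argument; it never needs to catch a foreign compiler's exceptions.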
  18. Hmm, not trying to criticize you, but having 100 (or even 25) little windows that all display data and allow control too seems to me to be a pretty difficult UX. It's definitely not something I would immediately turn to. I would probably rather have something like a list box that shows the information for each device, possibly in a tree structure, and let the user select one, then make the controls for that device available in a separate section of the screen, where the controls are specific to the selected device. Shaun's example, while technically nice, shows the difficulty of that approach very well, even without much user control. The graph in there is pretty much way too small for any usable feedback.
  19. I would likely use a Table or MultiColumn List Control.
  20. The whole ADS library overhead in an application adds about 0.0-something seconds to the whole build time of any project. As long as you have a linear hierarchy in object class dependencies, there is virtually no overhead at all beyond what you would have for the involved number of VIs anyhow. Once you happen to create circular dependencies, the app builder will work overtime to resolve everything properly, and your build times start to skyrocket. At some point you can get all kinds of weird build errors that are sometimes almost impossible to understand. Untangling class hierarchies is in that case a very good (and usually also extremely time-intensive) refactoring step to get everything to build nicely again.
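Before handing a class hierarchy to the app builder, it can be worth checking it for such circular dependencies yourself. A small sketch in Python, assuming the dependencies are available as a simple name-to-dependencies map (the graph format and function name are assumptions for illustration):

```python
def find_cycle(deps):
    """deps maps a name to the list of names it depends on.

    Returns one cycle as a list of names (first name repeated at the
    end), or None if the dependency graph is acyclic. Plain iterative
    depth-first search with three node colors.
    """
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / finished
    color = {}
    stack = []                    # the current depth-first path

    def visit(n):
        color[n] = GRAY
        stack.append(n)
        for m in deps.get(n, ()):
            state = color.get(m, WHITE)
            if state == GRAY:     # back edge: m is on the current path
                return stack[stack.index(m):] + [m]
            if state == WHITE:
                cycle = visit(m)
                if cycle:
                    return cycle
        color[n] = BLACK
        stack.pop()
        return None

    for n in list(deps):
        if color.get(n, WHITE) == WHITE:
            cycle = visit(n)
            if cycle:
                return cycle
    return None
```

Running this over, say, the lvclass dependency lists extracted from a project would point directly at the cycle that needs to be untangled.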
  21. For this type of functionality it is absolutely not Booh. 😀 It is OOP in the sense that it uses LabVIEW classes as an easy means of instantiating different driver implementations at runtime. One interface (or a thin interface class in pre-LabVIEW 2020), and then one child implementation to call the DLL, one to call VISA USB Raw, and possibly others. On top of that, one driver as a class to do MPSSE I2C and another for MPSSE SPI. That way it is very easy to plug in a different low-level driver depending on what interface you want to use: the D2XX DLL interface on Windows, the D2XX .so interface or D2XX VISA USB Raw on Linux and LabVIEW RT. With a little extra effort in the base class implementation for a proper factory pattern, the choice of which actual driver to load can easily be made at runtime. I did the same with other interfaces such as Beckhoff ADS: one driver using LabVIEW native TCP, another interfacing the Beckhoff ADS DLL, one for Beckhoff ADS .Net, and one for Beckhoff ADS ActiveX, although the ActiveX one has a bug in the type library that tells LabVIEW to use a single reference byte as the data buffer for the read function, and there is no way to override the type library information except by patching the ActiveX DLL itself. The type library should declare that as an array datatype but instead simply declares it as a byte passed by reference. That is the same thing for a C compiler, but two very different things for a high-level type description. The base class implements the runtime instantiation of the desired driver and the common higher-level functionality to enumerate resources and read and write data IO elements using the low-level driver implementation. For platforms that don't support a specific "plugin" you simply don't install that class.
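The factory arrangement described above, one abstract driver interface, several concrete implementations, and a runtime choice by name, can be sketched like this (in Python for brevity; all class and function names are hypothetical, not the actual library's API):

```python
class LowLevelDriver:
    """Abstract interface, like the LabVIEW interface/base class."""
    def read(self, address, count):
        raise NotImplementedError

class D2xxDllDriver(LowLevelDriver):
    """Stand-in for the child class calling the D2XX shared library."""
    def read(self, address, count):
        return "D2XX read of {} bytes at {}".format(count, address)

class VisaUsbRawDriver(LowLevelDriver):
    """Stand-in for the child class using VISA USB Raw."""
    def read(self, address, count):
        return "VISA USB Raw read of {} bytes at {}".format(count, address)

# Registry of installed "plugins"; on a platform that doesn't support a
# driver, its entry simply isn't installed.
_DRIVERS = {"d2xx": D2xxDllDriver, "usbraw": VisaUsbRawDriver}

def create_driver(name):
    """Factory: pick and instantiate the concrete driver at runtime."""
    try:
        return _DRIVERS[name]()
    except KeyError:
        raise ValueError("no driver installed for {!r}".format(name))
```

The caller only ever sees the LowLevelDriver interface, so higher-level code (resource enumeration, IO element read/write) stays identical regardless of which concrete driver the factory hands back.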
  22. USB Raw will of course require your LabVIEW code to use the right USB control endpoints, pipes and binary bit patterns, and if the new chip is not 100% compatible with the old one, that means a different driver (or one with conditional code based on the chip as detected at communication initialization). Without a detailed specification of the low-level USB protocol for the new chip, that is not gonna work. Using the FTDI-provided D2XX driver should fix that, as that driver somehow makes sure to talk to each chip as needed, as long as FTDI supports that chip with this driver. Some of the FTDI chips need a different driver, such as D3XX. It's definitely easier than trying to do USB Raw communication, but it requires a bit of C programming knowledge to be able to figure out the proper Call Library Node configuration for each function. The existing LabVIEW wrapper as provided by FTDI is outdated and only supports 32-bit, possibly even with a bug or two in 32-bit, but it is a starting point. With a little love it can definitely be made to work for modern LabVIEW in 32-bit and 64-bit and for Windows and Linux. I may revisit my earlier attempts at writing a fully LabVIEW OOP based solution for this, with a pluggable low-level driver using USB Raw or FTDI D2XX and a pluggable high-level driver for the I2C and SPI MPSSE modes of the chips that have such an engine built in, but I currently have no use case for this, and that tends to push this project towards the end of the "projects that are nice to develop" queue.
  23. It may be possible, but a few hundred subpanels somehow makes my alarm nerve tingle quite strongly! Subpanels are not a lightweight thing like Windows windows, where each control in a classic Win32 application was its own sub-window. And they moved away from that idea in more modern UI frameworks too, and likely for a reason. You really may be stressing LabVIEW's window management capabilities beyond reasonable limits with so many subpanels present at the same time.
  24. Not just for Linux. In Windows it is the same. But I'm not sure, I never used the D2XX driver on Linux until now. My understanding is that as long as you do not open a VCP interface for a device, the hardware is not really in use and should be accessible by the D2XX driver. In the worst case you may have to configure udev or whatever is used on your system to manage automatic mounting of USB devices, to not install a particular device as tty device, for instance based on the serial number or similar. By default most Linux systems are nowadays configured to automatically install any device they have a standard driver for as the according device. In the past this was often the opposite where you had to mount the device manually or add an according rule to udev for automatic mounting at boot time.
  25. That's not how those drivers work. The device drivers for the FTDI chips (the RS-232/422/485 ones at least) install on Linux as serial port drivers. That means they get installed as serial ports under /dev, usually as /dev/ttyUSB* with an increasing number, and can be accessed through NI-VISA just like any other serial port. There is no exported API in these drivers that you can interface to with a Call Library Node. If you want to access the MPSSE engine in those chips (to do bit bang, I2C or SPI for instance) instead of the Virtual COM Port interface the standard Linux driver provides, you'll have to hunt down the D2XX driver for your Linux distribution and install that instead. https://www.ftdichip.com/old2020/Support/Documents/AppNotes/AN_220_FTDI_Drivers_Installation_Guide_for_Linux.pdf This document is from 2017. Your Linux distribution may have changed things considerably in the meantime, so your mileage may vary, but the general distinction between the standard VCP interface and FTDI's proprietary interfaces to the chip through the D2XX driver remains. There exist LabVIEW libraries to interface to the D2XX shared library, and also to access the two FTDI-provided wrapper libraries to do MPSSE SPI and MPSSE I2C, but they are heavily Windows-minded, meaning that they try to link to the .dll version instead of the .so, and there might be other API-specific details that differ between Windows and Linux. Also, the last time I checked, the two MPSSE VI interfaces here on LAVA were not cleanly prepared to handle 64-bit shared libraries, as they were developed when 64-bit LabVIEW was still in its early days and nobody bothered to install and use it.