Rolf Kalbermatter

Rolf Kalbermatter last won the day on November 23

Rolf Kalbermatter's Achievements

  1. If you upgraded the cRIO base system to LabVIEW 2018, I do not see how the old rtexe could still be running. It means the chassis got restarted several times, much of it was probably wiped for good, and even if the old rtexe were still there, it could not load into the new 2018 runtime that is now on the system. So yes, for first tests it should be as simple as opening the main VI and pressing the Run button. Even if you had an old version of the application deployed to the target and set to run automatically (which would have to have been done in LabVIEW 2018), LabVIEW will tell you that there is already a deployed application running on the target and that it needs to terminate it in order to upload the new VI code and then run it.
  2. That's the Unix way of creating interapplication mutexes. There are functions in Unix that allow an atomic check for this file, creating it if it doesn't exist, and returning a status that indicates whether the file already existed (another process got the lock first) or was just created (we are allowed to use the locked resource). If that process then doesn't remove the lock file when it shuts down, for instance because it crashed unexpectedly, the file remains on disk and prevents other instances of the process from starting. If you are sure that no other process that could have created this lock is currently running, you can indeed delete it yourself.
  3. But those ASM source code files are just some glue code that was initially needed for linking CINs. CINs originally used platform-specific methods to use object files as loadable modules. They then had to be able to link back to the LabVIEW kernel to call those manager functions, and there was no easy method to do that in standard C. CallLVRT was the calling gate through which each of those functions had to go when trying to call a LabVIEW manager function. Referencing the internal LV function table required assembly code at that point, and there were also some initialization stubs that were required. These assembly files normally did not really need to be compiled, as there were link libraries that took care of this, but I guess the 2.5 release was a bit rushed, and there were definitely files released in the cintools directory that were a bit broader than strictly needed. With LabVIEW 5 for the 32-bit Windows platforms (2000/95/98/NT), NI replaced this custom call gate to the LabVIEW kernel with a normal module export table. On these platforms the labview.lib link library replaced the CallLVRT call gate with a simple LoadLibrary/GetProcAddress interface. With LabVIEW 6i, which dropped Watcom C and the corresponding Windows 3.1 release, the need for any special assembly files, at least on the Windows platform, was definitely history. Strictly speaking there were 3 types of external code resources: CINs, which could be loaded into a CIN node; LSBs, which were a special form of object file that CINs could load and link to (a method to have global resources shared by multiple CINs); and DRVRs, which were similar to CINs but specifically meant to be used by the Device interface nodes. DRVRs were never documented for people outside NI but were an attempt to implement an interface for non-Macintosh platforms that resembled the Macintosh Device Manager API. In hindsight that interface was notoriously troubled.
Most Macintosh APIs were notorious for exposing implementation-private details to the caller: great for hackers, but a total pain for application developers, as you had to concern yourself with many details of structures that you had to pass around between APIs. And when Apple changed something internal in those APIs, it required all kinds of hacks to prevent older applications from crashing; sometimes they succeeded in that and sometimes they did not! 😀 serpdrv was a DRVR external code resource that translated the LabVIEW Device Interface node calls to the Windows COMM API.
  4. Never heard anything about the complete source code being accidentally released, and I doubt that actually happened. They did include many of the headers for all kinds of APIs in 2.5 and even 3.0, but those are just the headers, nothing more. Lots of those APIs were and still are considered undocumented for public consumption, and in fact a lot of them have changed or were completely removed, or at least removed from any export table that you could access. Basically, what was documented in the Code Interface Reference Manual was and is written in stone, and there have been many efforts to make sure those functions don't change. Anything else can be, and often has been, changed, moved, or even completely removed. The main reason not to document more than the absolutely necessary exported APIs is twofold. 1) Documenting such things is a LOT of work. Why do it for things that you consider not useful, or even harmful, for average users to call? 2) Anything officially documented is basically written in stone. No matter how much insight you gain later about how useless or wrong the API was, there is no way to change it without risking crashes from customer code expecting the old API or behavior. Those .lib libraries are only the import libraries for those APIs. On Linux systems the ELF loader tries to search all the already loaded modules (including the process executable) for public functions in their export tables, and only if that does not work will it search for the shared library image with the name defined in the linker hints and then try to link to that. On Windows there is no automatic way to have imported functions link to already loaded modules just by function name. Instead the DLL has to be loaded explicitly, and at that point Windows checks whether that module is already loaded and simply returns the handle to the loaded module if it is in memory. The functions are always resolved against a specific module name.
The import library does something along these lines and can be generated automatically by Microsoft compilers when compiling the binary modules.

```c
HMODULE gLib = NULL;

static MgErr GetLabVIEWHandle(void)
{
    if (!gLib)
    {
        gLib = LoadLibraryA("LabVIEW.exe");
        if (!gLib)
        {
            gLib = LoadLibraryA("lvrt.dll");
            if (!gLib)
            {
                /* trying to load a few other possible images */
            }
        }
    }
    if (gLib)
        return noErr;
    return loadErr;
}

MgErr DSSetHandleSize(UHandle h, size_t size)
{
    MgErr (*pFunc)(UHandle h, size_t size);
    MgErr err = GetLabVIEWHandle();
    if (!err)
    {
        pFunc = (MgErr (*)(UHandle, size_t))GetProcAddress(gLib, "DSSetHandleSize");
        if (pFunc)
            return pFunc(h, size);
    }
    return err;
}
```

This is basically more or less what is in the labview.lib file. It's not exactly like this, but it gives a good idea. For each LabVIEW API (here the DSSetHandleSize function) a separate obj file is generated, and they are then all put into the labview.lib file. Really not much to be seen in there. In addition, the source code for 3.0 only compiled with the Apple CC, Metrowerks for Apple, Watcom C 9.x, and the bundled C compiler for SunOS. None of them had ever heard anything about 64-bit CPUs, which were still some 10 years in the future, and none was even remotely able to compile C89-conformant C code. The LabVIEW source code went to a lot of effort to be cross-platform, but the 32-bit pointer size was deeply ingrained in a lot of code and required substantial refactoring to make 64-bit compilation possible for LabVIEW 2009. The code as is from 3.0 would never compile in any recent C compiler.
  5. Interesting word 😀. I learned a new thing, and that it was only invented in 2003. As for knowing English better than you do, you definitely give me more praise than I deserve.
  6. Of course it looks familiar. When not programming LabVIEW (or the occasional Python app), I program mainly in C. It's easy to typecast and easy to go very much astray. C tries to be strictly typed and then offers typecasting, where you can typecast a lizard into an elephant and back without any compiler complaints 😀. Runtime behavior, however, is an entirely different topic 🤠
  7. I was just recently trying to find out where the cutoff point is. In my memory, 2016 was the version that stopped with 32-bit support. But looking at the NI download page, it claims that 2016 and 2017 are both 32-bit and 64-bit. That download page is riddled with inconsistencies, though. For 2017 SP1 for Linux you can download a Full and a Pro installer, which are only 64-bit. Oddly enough, the Full installer is larger than the Pro one. The 2017 SP1 Runtime installer claims to support both 32-bit and 64-bit! For 2017 and 2016, only the Runtime installer is downloadable, plus a Patch installer for the 2017 non-SP1 IDE, supposedly because they had no License Manager integration before 2017 SP1 on non-Windows platforms. Here too it claims the installer has 32-bit and 64-bit support. So if the information on that download page is somewhat accurate, the latest version for Linux that still supported 32-bit installation was 2017, and 2017 SP1 was apparently only 64-bit. Of course, it could also be that the download page is just showing garbage; it's not the only thing on that page that seems inconsistent.
  8. Easy typecasting in C is the main reason for a lot of memory corruption problems, after buffer overflow errors, which can also be caused by typecasting. Yes, I like the ability to typecast, but it is the evil in the room that languages like Rust try to avoid in order to create more secure code. Typecasting and secure code are virtually mutually exclusive.
  9. It's the same approach that LabVIEW adopted many years ago when they added the PCRE functions to the string palette in addition to their own older Match Pattern function, which was obviously Perl-inspired (and likely generated with Bison/Yacc) but more limited (not necessarily a bad thing; PCRE is a real beast to tame 😀). Not sure about the exact version used in LabVIEW, but since it appeared around LabVIEW 8.0, I think it would likely be version 6.0 or maybe 5.0, although I'm sure they have upgraded it in the meantime to a newer version and maybe even PCRE2.
  10. You actually will need the NI VC2015 Runtime or another compatible Microsoft Visual C Runtime. Since the Visual C Runtime 2015 and higher is backwards compatible, it is indeed not strictly needed on Windows 10 or 11 based systems, since they generally come with a newer one. However, that is not guaranteed. If your Windows 10 system is not meticulously updated, it may have an older version of this runtime library installed than your current LabVIEW 2021 installation requires, and that will not work. It's only backwards compatible, meaning a newer installed version can be used by an application compiled against an older version, not the other way around. Your mentioning that the system will be an embedded system makes me think that it is neither a state-of-the-art latest release nor likely to be regularly updated.
  11. LabPython needs to know which Python engine to use. It has some heuristics to try to find one, but generally that is not enough. But as Neil says: for some things, old-timers can have their charm, but in software you are more and more challenging the gods by using software that is actually 20 years or more old!!!
  12. That's because I don't have a solution that I would feel comfortable sharing. It either ends up as a not-so-complicated one-off solution for a specific case, or as some very complicated, more generic solution that nobody in their sane mind would ever consider touching, even with a 10-foot pole.
  13. Unfortunately, one of the problems with letting an external application invoke LabVIEW code is the fundamentally different context both operate in. LabVIEW is entirely stackless as far as diagram execution goes; C, on the other hand, is nothing but stack. This makes passing control between the two pretty hard, and the possibility of turnarounds, where one environment calls into the other only to be called back again, is a real nightmare to handle. In LabVIEW for Lua there is actually code in the interface layer that explicitly checks for this and disallows it if it detects that the original call chain originates in the Lua interface, since there is no good way to yield across such boundaries more than once. It's not entirely impossible, but it starts to get so complex to manage that it simply is not worth it. It's also why .Net callbacks result in locked proxy callers in the background once they have been invoked. LabVIEW handles this by creating a proxy caller in memory that looks and smells like a hidden VI but is really a .Net function, in which it handles the management of the .Net event and then invokes the VI. This proxy needs to be protected from .Net garbage collection, so LabVIEW reserves it, but that makes it stick in a locked state that also keeps the according callback VI locked. The VI also effectively runs outside the normal LabVIEW context. There probably would have been other ways to handle this, but none of them without one or more pretty invasive drawbacks. There are some undocumented LabVIEW manager functions that would allow calling a VI from C code, but they all have one or more difficulties that make them not a good choice for normal LabVIEW users, even if carefully hidden in a library.
  14. And that is almost certainly the main reason why it hasn't been done so far. Together with the extra complication that filter events, as LabVIEW calls them, are principally synchronous: a perfect way to make your application block, and possibly even deadlock, if you start to mix such filter events back and forth. Should LabVIEW try to prevent its users from shooting themselves in the foot? No, of course not; there are already enough cases where you can do that, so one more would not be a catastrophe. But that does not mean that it MUST be done, especially when the implementation is also complex and requires quite a bit of effort.
  15. It's a general programming problem that exists in textual languages too, although without the additional problem of tying components together in the same strict manner as LabVIEW strict typedefs used in code inside different PPLs do. But generally you need a strict hierarchical organization of typedefs in programming languages like C too, or you end up, rather sooner than later, in header dependency hell as well. More modern programming languages have tried to solve that with dynamic loading and linking of code at runtime, which has its own difficulties. LabVIEW actually does that too, but at the same time it also has a strict type system that it enforces at compile time, and in that way it mixes some of the difficulties of both worlds. One possible solution in LabVIEW is to make classes for all your data and pass them around like that. That's very flexible but also very heavy-handed, and it easily destroys performance if you aren't very careful about how you do it. One of the reasons LabVIEW can be really fast is that it works with native data types, not some more or less complex data type wrappers.