
Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. While I think that the remark in itself wasn't helpful, I do understand where it comes from. In many open source projects, trying to interface to them from other software is like trying to keep a continuously moving target in focus. Granted, maintaining backwards compatibility can be a painful process and there is something to be said for starting with a clean slate at some point. And of course the open source programmer is often dedicating his own free time to the cause, so it really is his decision whether he rather spends it keeping the software compatible or developing exciting new features, changing whatever needs to change along the way without considering the possible consequences. Still, I think a bit more discipline wouldn't normally hurt. It's sometimes the difference between a cool but, for many applications, pretty unusable solution and a really helpful and useful piece of software. A different matter is changes made on purpose for the sake of disallowing their use from certain types of clients. I have pretty ambivalent feelings about that. It seldom prevents what they try to block, but it causes lots of mischief for the users. The IMAQdx link you provided refers to a forward compatibility issue. That is something that is very difficult to provide. There are techniques to help with that somewhat, but they more often than not take up more code and complexity than the entire rest of the library, so in short they are basically never worth the effort. Working in regulated industries might be an exception here.
  2. Well, Python 2.3 should indeed be ok, although I never tested with numpy and similar in that version. But that is so old, it's like requiring people to work with Linux 2.2 kernels or Windows 2000. Right! You can spend many man-hours getting LabPython to work correctly with current versions, and quite a few more man-hours getting PostLVUserEvent() to work as well (it's an asynchronous operation and, while no rocket science really, involved enough that I have to wrap my mind around it again every time I try to implement it somewhere). Or you implement a client-server RPC scheme in LabVIEW and Python and just pass the information around that way. The second is a lot easier, easily extendable by other people with absolutely no C knowledge, and much easier to debug too.
  3. Of course. I never said otherwise. But we were not really discussing LabPython at this point, since it has quite a few issues that would require some serious investment into the code. The solutions we were discussing were more along the lines of running Python in its own process and communicating between Python and LabVIEW through some means of inter-application communication like nanomsg, ZeroMQ or a custom-made TCP/IP or UDP server-client communication scheme. Refer to this post for a list of problems that I'm aware of for the current version of LabPython.
  4. I have not moved anything to github and am very unlikely to do so. Besides the fact that I don't find git very easy to use, I have come across way too many projects that were taken from somewhere, put on github and then abandoned. The 4.0.0.4 version of LabPython is on the old CVS repository for the LabPython project on sourceforge, but I did add the LabPython project, with some initial improvements to the shared library, to the newer SVN repository of the OpenG Toolkit project on sourceforge. As far as I'm concerned that is the current canonical version of LabPython, although no new release package has been created, for a few reasons:
     - The changes I made to the C code are only a few minimal improvements to make LabPython compile with the Python 2.7 headers. Only very brief testing has been done with that. More changes to the C code and a lot more testing would be needed to make LabPython compatible with Python 3.x.
     - More changes need to be made to the code to allow it to work properly in a 64-bit environment. Currently the pointer to the LabPython private management structure, which also maintains the interpreter state, is passed directly to LabVIEW and then treated as a typed log file refnum. LabVIEW refnums however are 32-bit integers, so a 64-bit pointer will not fit. The quick and dirty fix is to change the refnum to a 64-bit integer and configure all CLNs to pass it as a pointer-sized variable to the shared library. But that will only work from LabVIEW 2009 onwards, which probably isn't a big issue anymore. The bigger issue is that a simple integer will not prevent a newbie user from wiring just about anything to the control and causing the shared library to crash hard when it tries to access the invalid pointer.
     - There is currently a serious problem when trying to use non-thread-safe Python modules like numpy and similar from within LabPython. These modules assume that their functions are always executed from within the same OS thread and context. LabPython doesn't enforce that, and LabVIEW will happily call it from multiple threads if possible, which makes those modules simply fail to work. LabPython tries to use the interpreter lock that the Python API provides, but either that is not enough, or something changed in this respect between Python 2.3/2.4 and later versions that makes LabPython use this lock incorrectly. Getting this part debugged will be a major investment. Documentation about the interpreter lock and thread safety of the Python interpreter is scarce and inconsistent.
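For illustration, here is a minimal sketch of that pointer-sized-integer approach, with a cheap sanity check in front of every access. The names and the magic-value idea are mine, purely for illustration, not LabPython's actual code; and such a check only catches stale or zeroed values, it still cannot protect against arbitrary garbage being wired in, which is exactly the weakness described above.

      #include <stdint.h>
      #include <stdlib.h>

      #define SESSION_MAGIC 0x4C505953u   /* arbitrary marker value */

      typedef struct {
          uint32_t magic;                 /* checked before every use */
          /* ... interpreter state and other private data ... */
      } PySession;

      /* Returned to LabVIEW through a CLN return value configured as an
         unsigned pointer-sized integer. */
      uintptr_t SessionOpen(void)
      {
          PySession *s = (PySession *)calloc(1, sizeof(PySession));
          if (s)
              s->magic = SESSION_MAGIC;
          return (uintptr_t)s;
      }

      /* Every other entry point validates the value before touching it. */
      static PySession *SessionCheck(uintptr_t session)
      {
          PySession *s = (PySession *)session;
          return (s && s->magic == SESSION_MAGIC) ? s : NULL;
      }

      int SessionClose(uintptr_t session)
      {
          PySession *s = SessionCheck(session);
          if (!s)
              return -1;                  /* invalid value wired in */
          s->magic = 0;
          free(s);
          return 0;
      }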
  5. I can only echo ned's remarks. Calling any of the LabVIEW manager functions from a different process than LabVIEW itself is doomed to fail. If you wanted to call this function through the Python ctypes interface, the corresponding Python interpreter would have to run inside the LabVIEW process, which is exactly what LabPython attempts to do. Trying to do it from a separate Python interpreter process is doomed without proper interprocess communication such as nanomsg, ZeroMQ or your own TCP/IP or UDP daemon. This is no fault of LabVIEW or Python, but simply a consequence of proper process separation through protected-mode memory and similarly involved techniques, fully in effect since Windows NT.
  6. I'm not sure I understand you well here. If the library offers to install semaphore callbacks, that is of course preferable from a performance viewpoint, but you can still choose to protect it on the calling side with a semaphore instead of wrapping each CLN in an obtain-semaphore/release-semaphore pair (and you could even use implicit serialization by packing all CLNs into the same VI with an extra function selector and setting that VI to be non-reentrant). A library offering semaphore callback installation is pretty likely to only use them around critical code sections, so yes, there might be many function calls that don't invoke a semaphore lock at all because it is not needed there. Even when it is needed, the library may choose to lock only around critical accesses, freeing the semaphore during (relatively) lengthy calculations so that other parallel calls are not locked out, which can result in quite a performance gain when called from a true multitasking system like LabVIEW.
  7. As has already been pointed out, there are a number of possible reasons why a library might not be thread safe. The most common is the use of global variables in the library. One solution here is to always call the library from the same thread. Since a thread can't magically split into two threads, that is a safe method of calling such a library. Theoretically a library developer could categorize each function according to whether it makes use of any global state, and sort the library APIs into safe functions that don't touch any global state and unsafe functions that need to be called in a protected way. Another way is to use a semaphore. That can be done explicitly by the caller (which is what drjdpowell describes) or in the library itself, but the latter has the potential to lock up if the library uses multiple global resources that are each protected by their own semaphore. OpenSSL, which Shaun probably refers to, requires the caller to install callback functions that provide the semaphore functionality, which OpenSSL then uses to protect access to its internal global variables. Without those callbacks installed, OpenSSL is not thread safe and dies catastrophically sooner rather than later when called from LabVIEW in multithreaded mode.
An entirely different issue is thread local storage (TLS). That is memory that the OS reserves and associates with every thread. When you call a library that uses TLS from a multithreaded environment, you have to make sure that the current thread has the library-specific TLS slots initialized to the correct values. The OpenGL library is such a library, and if you check out the LabVIEW examples you will see that each C function wrapper copies the TLS values from the current refnum into TLS on entry and restores them from TLS back into the refnum on exit. In a way it's another form of global storage, but it requires a completely different approach. For all of these issues, though, guaranteeing that all library functions are always called from the same thread solves the problem too.
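To make the OpenSSL case concrete, this is roughly what installing those locking callbacks looks like on a pthreads platform. Consider it a sketch based on the classic OpenSSL 1.0.x threading API (CRYPTO_num_locks(), CRYPTO_set_locking_callback()); OpenSSL 1.1.0 and later handle locking internally, so there these callbacks are no longer needed.

      #include <openssl/crypto.h>
      #include <pthread.h>
      #include <stdlib.h>

      static pthread_mutex_t *ssl_locks;

      /* Called by OpenSSL whenever it needs to lock or unlock one of its
         internal global resources. */
      static void ssl_locking_cb(int mode, int n, const char *file, int line)
      {
          (void)file; (void)line;
          if (mode & CRYPTO_LOCK)
              pthread_mutex_lock(&ssl_locks[n]);
          else
              pthread_mutex_unlock(&ssl_locks[n]);
      }

      /* Tells OpenSSL which thread is currently calling it. */
      static unsigned long ssl_id_cb(void)
      {
          return (unsigned long)pthread_self();
      }

      /* Install the callbacks once, before OpenSSL is used from multiple threads. */
      int init_openssl_locking(void)
      {
          int i, n = CRYPTO_num_locks();
          ssl_locks = malloc(n * sizeof(pthread_mutex_t));
          if (!ssl_locks)
              return -1;
          for (i = 0; i < n; i++)
              pthread_mutex_init(&ssl_locks[i], NULL);
          CRYPTO_set_id_callback(ssl_id_cb);
          CRYPTO_set_locking_callback(ssl_locking_cb);
          return 0;
      }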
  8. Well, Lua for LabVIEW would give you a lot of the things you hope for but it is not free. So that is the main reason I didn't really push it as a viable option.
  9. While finding the root cause is of course always a good thing, networking is definitely not something you can simply rely on to always work uninterrupted. Any stable networking library will have to implement some kind of retry scheme at some point. HTTP traditionally did this by reopening a connection for every new request. Wasteful, but very stable! Newer HTTP communication supports a keep-alive feature, but with the additional provision that the connection is closed on any error anyway, and that the client reconnects on every possible error, including when the server forcefully closes the connection despite being asked to please keep it alive. Most networks, and especially TCP/IP, were never designed to guarantee uninterrupted connections. What TCP guarantees is a clear success or failure on any packet transmission, and that successful packets arrive in the same order in which they were sent, but nothing more. UDP, on the other hand, doesn't even guarantee that much.
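As a trivial illustration of such a retry scheme (my own sketch, not taken from any particular library, and assuming a hypothetical connect_to_server() helper that does socket()/connect()): drop the connection on any send error, reconnect, and try once more.

      #include <sys/types.h>
      #include <sys/socket.h>
      #include <unistd.h>

      int connect_to_server(const char *host, int port);  /* assumed helper */

      /* Send a request, reopening the connection once if the send fails. */
      ssize_t send_with_retry(int *sock, const char *host, int port,
                              const void *buf, size_t len)
      {
          ssize_t sent = send(*sock, buf, len, 0);
          if (sent < 0) {
              close(*sock);                     /* drop the broken connection */
              *sock = connect_to_server(host, port);
              if (*sock < 0)
                  return -1;                    /* reconnect failed, give up */
              sent = send(*sock, buf, len, 0);  /* one retry on the new socket */
          }
          return sent;
      }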
  10. It's no magic really, although I haven't used it myself yet. I make use of other features related to so-called UserDataRefnums which, although not really documented, are a bit more powerful and flexible than the (IMHO misnamed) "DLLs Callbacks". Basically, each Call Library Node instance has its own copy of an InstanceDataPointer. This is simply a pointer-sized variable that is associated with a specific Call Library Node. You have the three "callback functions" Reserve(), Unreserve() and Abort(), each with the same prototype MgErr (*proc)(InstanceDataPtr *instanceState); so each of them gets a reference to the pointer-sized variable location specific to that Call Library Node instance. You could store any 32-bit information directly in there (it's of course 64-bit on 64-bit LabVIEW, but you do not want to store more than 32 bits in there for compatibility reasons, for the case where you might need to support 32-bit LabVIEW and OSes such as Pharlap, VxWorks and NI Linux ARM targets), but more likely you will allocate a memory block in Reserve() and return the pointer to that memory block in this parameter. In addition you should make sure the memory is initialized in a meaningful way for your other functions to work properly. The Unreserve() callback is called before LabVIEW unloads the VI containing the CLN, in order to deallocate anything that might have been allocated or opened by the other functions in the InstanceDataPointer, including the InstanceDataPointer itself. Abort() obviously will be called by LabVIEW when the user aborts the VI hierarchy.
Now these three functions are not very helpful on their own, but where it gets really useful is when you add the special function parameter "InstanceDataPointer" to the parameter list in the Call Library Node configuration. This parameter will not be visible on the diagram for that Call Library Node. Instead, LabVIEW will pass the same InstanceDataPointer to the library function as what is passed to the three callback functions. Your function can then store extra information in that InstanceDataPointer during its execution, which Abort() can use to properly abort any operation that the function itself might have started in the background: closing files, aborting any asynchronous operation it started, etc. Depending on the complexity, you can probably even get away with not implementing the Reserve() function specifically, and instead have each function invocation check whether the InstanceDataPointer is NULL and allocate the necessary resources at that point. It may be a performance optimization not to allocate an InstanceDataPointer on load of the VI but only on first execution, so if someone only loads the code without ever running it, you won't allocate it unnecessarily. If you ever had the "joy" of using Windows API functions with asynchronous operation, you will recognize this scheme from the LPOVERLAPPED data pointer those functions use.
It remains to stress the fact that every Call Library Node instance has its own private InstanceDataPointer. So if you have 10 Call Library Nodes on your diagram, all calling the same library function, you still end up with at least 10 InstanceDataPointers. I say "at least" because this gets multiplied by the number of clones that exist for the VI if it is reentrant. As to providing ready-made sample code, that is the crux with this kind of advanced functionality.
As it involves asynchronous programming, it really is a rather advanced topic. Anyone who understands the explanation above will be able to apply it to their specific application fairly readily, and others who don't won't be helped much by an example that doesn't match their specific use case almost perfectly. Even I regularly get lost in pointer nirvana, where an asynchronous task is accessing the wrong pointer somewhere that the debugger has a hard time reaching into.
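That said, here is a rough sketch of what the three callbacks plus the extra InstanceDataPointer parameter might look like. It assumes LabVIEW's extcode.h from the cintools directory; MyData and the function names are purely illustrative, and a real implementation would of course do actual work and proper error reporting.

      #include "extcode.h"   /* LabVIEW cintools: MgErr, InstanceDataPtr, ... */
      #include <stdlib.h>

      typedef struct {
          volatile int aborted;   /* set by Abort(), polled by the worker function */
          /* ... any other per-CLN-instance state ... */
      } MyData;

      /* Reserve callback: allocate and initialize the per-instance data. */
      MgErr MyReserve(InstanceDataPtr *instanceState)
      {
          MyData *data = (MyData *)calloc(1, sizeof(MyData));
          if (!data)
              return mFullErr;
          *instanceState = data;
          return mgNoErr;
      }

      /* Unreserve callback: release whatever Reserve() or the function allocated. */
      MgErr MyUnreserve(InstanceDataPtr *instanceState)
      {
          free(*instanceState);
          *instanceState = NULL;
          return mgNoErr;
      }

      /* Abort callback: flag the running function so it can bail out cleanly. */
      MgErr MyAbort(InstanceDataPtr *instanceState)
      {
          MyData *data = (MyData *)*instanceState;
          if (data)
              data->aborted = 1;
          return mgNoErr;
      }

      /* The exported function. The trailing InstanceDataPtr parameter is the one
         added through the "InstanceDataPointer" entry in the CLN parameter list;
         it never shows up on the diagram. */
      MgErr MyLengthyOperation(double *result, InstanceDataPtr *instanceState)
      {
          MyData *data = (MyData *)*instanceState;
          int step;
          for (step = 0; step < 1000; step++) {
              if (data && data->aborted)
                  return mgArgErr;   /* or whatever error code fits your scheme */
              /* ... perform one chunk of the lengthy work here ... */
          }
          *result = 42.0;            /* placeholder result */
          return mgNoErr;
      }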
  11. I would guess that it has to do with dynamic dispatch. Most likely dynamic dispatch would get significantly slower (and I'm talking about more than a few hundred nanoseconds here, which some people already found an insurmountable problem when NI changed something in the dynamic dispatch code between LabVIEW 2014 and 2015) if there were the possibility that a VI is not already loaded!
  12. While the middle layer is indeed an extra hassle, since you have to compile a shared library for every platform you want to support, in many cases it is still a lot easier than trying to play C compiler yourself on the LabVIEW diagram. Especially since not all LabVIEW platforms are equal in that respect (with 32-bit versus 64-bit being one, but by far not the only, possible obstacle). Yes, you can use conditional compile structures in LabVIEW to overcome this problem too, but at that point it really feels like using duct tape to hold the Eiffel Tower together. Maintenance of such a VI library is a nightmare in the long run. Not to forget about performance: if you use a middle-layer shared library, you can often pass the LabVIEW datatype buffers directly to the lower-level library functions, whereas with MoveBlock you often end up copying each and every piece of data back and forth multiple times. And smithd points out another advantage of a middle layer: you can make sure that all the created objects are properly deallocated on a LabVIEW abort. Without that, the whole shebang keeps lingering in memory until you close LabVIEW completely, possibly also keeping things like file locks, named OS pipes, OS events and semaphores alive that prevent you from rerunning the software.
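As a tiny example of that "pass the LabVIEW buffer directly" point: a hypothetical wrapper in which the CLN parameter is configured as an Array Data Pointer, so the third-party function (the assumed lowlevel_process() here) reads straight from LabVIEW's own buffer and no MoveBlock copies are needed on the diagram.

      #include <stdint.h>

      /* Assumed third-party API being wrapped. */
      int lowlevel_process(const double *samples, int count);

      /* Exported to LabVIEW; the first CLN parameter is configured as an
         Array Data Pointer (1D array of double) and the second as its length,
         so the data is handed over without any intermediate copy. */
      int32_t LV_Process(const double *samples, int32_t count)
      {
          return lowlevel_process(samples, (int)count);
      }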
  13. There is no way to directly access LabVIEW controls from a Python script. You would have to somehow write a Python module and an accompanying LabVIEW module that can communicate with each other. But I'm not sure that is the approach I would choose. It requires the Python script to know your LabVIEW user interface exactly in order to be able to reference controls on it, which I find a rather brittle setup. Technically, LabPython is best suited to situations where you can write a library of Python routines that you then call from your LabVIEW code. That way your LabVIEW program provides all the information the script needs by passing it as parameters to the routines. Calling back from a Python script into LabVIEW was never really the main intention when I developed LabPython back in the old days :-). We eventually did Lua for LabVIEW, which does support some limited calling back into LabVIEW (limited in that it only works from LabVIEW to Lua or Lua to LabVIEW, but not in a recursive loop back and forth), but that is in fact one of the most complicated (and brittle) parts of Lua for LabVIEW. From the initial introduction of Lua for LabVIEW in LabVIEW 6 or so until the latest LabVIEW version, almost all problems that arose with a new LabVIEW release were related to this part of the package. As to support for LabPython, the most likely place to get any feedback at all is probably here, but there are not many people using it nowadays and I haven't written any Python script in at least 10 years. I did a few minor updates to the LabPython shared library in the past to fix some minor quirks, but making it work properly with Python 3.0 and newer would require some real work, also on the C side of the code. It was developed for Python 2.3 or so and works pretty ok up to Python 2.7.x, but 3.0 introduced several changes that also affect the C code behind LabPython.
  14. My experience with this is that under Windows it is pretty easy and non-problematic, but if you end up having numerous class hierarchy levels that depend on each other and build each of the various classes into its own PPL, you have to be careful when you do this for Linux real-time targets. For reasons known only to I don't know whom, if you rebuild one of the base-class packed libraries for whatever reason, you absolutely have to rebuild every dependent class's packed library, or LabVIEW will start to complain that the dependent classes can't be loaded. I have no idea what the reason is. I assumed that a packed library is an isolated container that only exports its public interface to the outside world, so as long as nothing in the signature of the public methods changes this should be a no-brainer, but that doesn't seem to be the case for NI Linux RT targets. I didn't seem to have these problems on Windows or VxWorks real-time targets.
  15. If your packed library is really just a wrapper around your child class implementation, a better way would most likely be to employ a default naming scheme in which the PPL name follows the class name. Then, using "Get Default Class from Path", you simply load the class into memory at runtime and cast it to the base class; from there you can call all the base class methods and properties, and dynamic dispatch will make sure the child methods are invoked.
  16. Did you redistribute the non-debug version of your DLL? The debug version links to a different C runtime library that is not redistributable to other computers and only works on PCs where the Visual C compiler is installed.
  17. Actually, you should not really need to change anything code-wise. The Linux kernel sources can be compiled for just about any architecture that is out there, even CPUs for which you would nowadays be hard pressed to find hardware to run them on. Of course, depending on where you got your kernel sources they might not contain support for all possible architectures, but the kernel project supports a myriad of target architectures, provided you can feed the compiler toolchain with the correct defines. Now, figuring out all the necessary defines for a specific piece of hardware is a real challenge. For many of them the documentation really exists mostly in the source code only. This is where various build systems come into play that promise to make this configuration easier by letting you pick from predefined settings and then generating the necessary build scripts to drive the C toolchain. The real challenge is the configuration needed to tell the make toolchain which target architecture you want to compile for, which hardware modules to link statically, and which modules to compile as dynamic kernel modules, if any. Without a thorough understanding of the various hardware components that are specific to your target, that can be a very daunting task. Obviously, for certain popular targets you will find sample configuration scripts more readily than for others.
To make matters even more interesting, there isn't just one configuration/build system. Yocto, which is what NI uses, used to be a pretty popular one for embedded systems a few years ago but then lost a bit of traction. It seems to be getting active again, but the latest version is not backwards compatible with the version NI used for their NI Linux RT system. And NI probably does not see any reason to upgrade to the newest version as long as the old one works for what they need. It uses various tools from other projects, such as OpenEmbedded and BitBake, internally. Buildroot is another such build system for creating recipe-based builds for embedded Linux.
The real challenge is not changing the C code of the kernel to suit your specific hardware (that should basically not be necessary, except for adding driver modules for hardware components that the standard kernel does not support out of the box). It is getting the entire build toolchain installed correctly so that you can actually start a build successfully, and once you have that, selecting the correct configuration settings so that the compiled kernel will run on your hardware target and not just panic right away. This last part should be fairly simple for a VirtualBox VM, since the emulated hardware is very standard and shouldn't be hard to configure correctly.
  18. The first problem here is that a cRIO application on most platforms has, by definition, no UI. Even on the few cRIOs that support a display output, it is far from a fully featured GUI. What you describe is true for most libraries (.so files) but definitely not for the Linux kernel image. The chances that the zImage, or whatever kernel image file NI Linux uses on a cRIO, will run in a VM are pretty small. The kernel uses many conditional compile statements that include and exclude specific hardware components. The corresponding conditional compile defines are set in a specific configuration file that needs to be set up according to the target hardware the compiled image is supposed to run on. This configuration file is then read by the make script used to compile the kernel, and causes the make script to invoke the gcc compiler with a shedload of compiler defines on the command line for every C module file that needs to be compiled. It's not just things like the CPU architecture that are hard-compiled into the kernel, but also things about additional peripheral devices, including the memory management unit, floating point support and umpteen other things. While Linux also supports a dynamic module loader for kernel module drivers and uses it extensively for things like USB, network, SATA and the like, there needs to be a minimum basic set of drivers available very early during the boot process, to provide the necessary infrastructure for the Linux kernel to pull itself out of the swamp by its own hair. These hardware drivers need to be statically linked into the kernel. But whoever compiles a Linux kernel can decide to compile additional modules statically into the kernel as well, supposedly for faster performance, but it works just as well for tying a kernel more tightly to a specific hardware platform. So even if you can retrieve the bootable image of a cDAQ or cRIO device and install it in a VM, the loading of the actual Linux kernel will very likely fail during the initial boot procedure of the kernel itself. If you get a kernel panic on the terminal output, you at least know that it did attempt to load the kernel, but it could just as well fail before the kernel even gets a chance to initialize the terminal, assuming the bootloader finds the kernel image at all. I seem to remember that NI uses busybox as the initial boot environment, so that would be the first thing one would need to get into in order to debug the actual loading of the real kernel image.
  19. The problem isn't even Linux. Even if you get NI Linux RT compiled and running, you aren't even halfway there. That is the OS kernel, but it does not include the LabVIEW runtime, NI-VISA, NI-DAQmx, NI-this and NI-that. Basically it is a nice target, with the promise of additional real-time capabilities, for running all your favorite open source tools like Python etc. Yes, you have all the other libraries like libcurl, libssl, libz, libthisandthat, each with its own license again, but they are completely irrelevant when you want to look at this as a LabVIEW real-time target. Without the LabVIEW runtime library, and at least a dozen other NI libraries, such a target remains simply another embedded Linux system, even if you manage to install onto it every possible open source library that exists on this planet. Technically it may be possible to take all that additional stuff from an existing x86 NI Linux target and copy it over to your new bare NI Linux target. But there are likely pitfalls, with some of these components requiring specific hardware drivers in the system to work properly. And in terms of licensing, once you go beyond the GPL-covered Linux kernel that NI Linux in itself is, and the other open source libraries, you are definitely outside any legal boundaries without a specific written agreement with the NI legal department.
  20. But on which hardware? You can't run an ARM virtual machine on a PC without some ARM emulation somewhere. Your PC uses an x86/x64 CPU that is architecturally very different from ARM, so there needs to be some kind of emulation somewhere: either an ARM VM inside an ARM-on-x86 emulator, or an ARM emulator inside the x86 VM. There might be ways to achieve that with things like QEMU, ARMware and the like, but it is anything but trivial and is going to add even more complexity to the already difficult task of getting the NI Linux RT system running under a VM environment. Personally, I wonder whether downloading the sources for NI Linux RT and recompiling it for your favorite virtual machine environment wouldn't be easier! And no, I don't mean to imply that that is easy at all, just easier than also adding an emulation layer to the whole picture and getting that to work as well.
  21. That's not the idea. Getting an ARM emulator to run inside an x86 or x64 VM is probably a pipe dream. However the higher end cRIOs (903x and 908x) and several of the cDAQ RT modules use an Atom, Celeron or better x86/64 compatible CPU with an x64 version of NI-Linux. That one should theoretically be possible to run in a VM on your host PC, provided you can extract the image.
  22. I'm pretty convinced that the Notifiers, Queues, Semaphores and such all internally use the occurrence mechanism for their asynchronous operation. The Wait on Occurrence node is likely a more complex wrapper around the internal primitive that waits on the actual occurrence and that those objects use internally, but there might be a fundamental implementation detail in how the OnOccurrence() function, which is what Wait on Occurrence (and all those other nodes when they need to wait) ultimately ends up calling, is implemented in LabVIEW that takes this much time.
  23. That would seem to me to be posted to the wrong thread. You probably meant to reply to the thread about the Timestring function, and yes, that is not as easy as changing the datatype. The manager functions are documented and can't simply change at will. Anything that was officially documented in any manual has to stay the same type-wise, or CINs and DLLs compiled against an older version of the LabVIEW manager functions will suddenly start to misbehave when run in a LabVIEW version with the new, changed datatype, and vice versa! The only allowable change would be to create a new function CStr DateCStringU64(uInt64 secs, ConstCStr fmt); implement it, test that it does the right thing, and then use it from the DateTime node. However, a timestamp in LabVIEW 7 and newer is not a U64 but in fact more like a 64.64-bit fixed-point value, with a 64-bit signed integer indicating the seconds and a 64-bit unsigned integer indicating the fractional seconds. So this function had better use that datatype instead. But the whole DateCString function and friends are from very old LabVIEW days; the fact that it returns a CStr rather than an LStrHandle is already an indication. And to make things even worse, to make such a function actually work with dates beyond 2040, one really has to rewrite large parts of it, and there is no good way to guarantee that such a rewritten function would produce exactly the same string in every possible situation. That sounds to me like a lot of work to provide functionality with little to no real benefit, especially if you consider that the newer formatting functions already work with a much larger calendar range, although they do not cover the entire theoretically possible range of the 128-bit LabVIEW timestamp. In fact, that timestamp can cover a range of +-8'000'000'000'000'000'000 seconds relative to January 1, 1904 GMT (something like +-250'000'000'000 years, which is way beyond the expected lifetime of our universe) with a resolution of 2^-64 or ~10^-19 seconds, which is less than a tenth of an attosecond. However, I do believe that the least significant 32 bits of the fractional part are not used by LabVIEW at all, which still gives it a resolution of 2^-32, less than 10^-9 seconds, or about 0.23 nanoseconds.
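For reference, a small sketch of that 128-bit fixed-point layout (my own struct and function names, not NI's) and the fraction-to-seconds arithmetic behind the resolution numbers above:

      #include <stdint.h>

      /* Signed 64-bit whole seconds relative to 1-Jan-1904 UTC, plus an
         unsigned 64-bit binary fraction of a second in units of 2^-64 s. */
      typedef struct {
          int64_t  seconds;
          uint64_t fraction;
      } TimestampSketch;

      /* Convert the fractional part to seconds: one unit is 2^-64 s (~5.4e-20 s);
         if only the upper 32 bits are used, the effective resolution is 2^-32 s,
         or about 0.23 ns. */
      static double FractionToSeconds(uint64_t fraction)
      {
          return (double)fraction / 18446744073709551616.0;  /* 2^64 */
      }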
  24. Your guess is very accurate! This node has existed since the beginnings of multiplatform LabVIEW, and so have the functions you mention. It is logical that this node internally uses these functions. Now, why didn't they just change the node to use the more modern timestamp functions that Format Into String uses? Well, it's all about compatibility. While it is theoretically possible to just call the corresponding format functions with the default timestamp format of %<%x %X>T to achieve the same, there is a good chance that the explicit code used in the two LabVIEW manager functions you mentioned might generate a slightly different string format in some locales or OS versions, since that function queries all kinds of Windows settings to generate a locale-specific date and time string the very hard way. The formatting function was completely rewritten from scratch somewhere around LabVIEW 6 to handle the many more possibilities of the Format Into String format codes, including the new timestamps. So if they changed the primitive to internally just call a Format Into String with the corresponding format string, there would have been a very good chance that existing code using that primitive would have failed if it was too narrow-minded when parsing some aspects of the generated string (it is a very bad idea anyhow to try to parse a string containing a locale-specific time or date, but I have often seen that in inherited code!). One principle that LabVIEW has always tried to follow is that existing code upgraded to a new version simply continues to work as before, or in the worst case gives you an explicit warning at load time that something may possibly change for a specific functionality. Testing all the possible incompatibilities with all the possible variations of OS version, language variants, etc. is a big nightmare, and you still have no guarantee that you caught everything, since many of those locale settings can be customized by the user. The format you want to use is more likely %<%X>T, as that node produces a locale-specific string, whereas your format string specifies a locale-independent fixed format.
  25. That might be true in the current version of that VI, but at some point that was probably not the case, and then the typecast would have been necessary. Although recent versions of LabVIEW don't automatically revert to floating point if you remove any explicit type specification in the path of a shift register, they will still do so whenever you edit something that forces LabVIEW to decide between incompatible datatypes. An explicit typecast somewhere makes sure the code is forced to that type and causes a broken arrow if something becomes incompatible with it. The alternative is to have a case like "initialize" or similar where you explicitly set the shift register to a certain default value through a constant.