Everything posted by Rolf Kalbermatter

  1. I would guess that it has to do with dynamic dispatch. Most likely dynamic dispatch would get significantly slower if there were the possibility that a VI is not already loaded, and I'm talking about more than a few hundred nanoseconds here; some people already considered it an insurmountable problem when NI changed something in the dynamic dispatch code between LabVIEW 2014 and 2015!
  2. While the middle layer is indeed an extra hassle, since you have to compile a shared library for every platform you want to support, in many cases it is still a lot easier than trying to play C compiler yourself on the LabVIEW diagram. Especially since not all LabVIEW platforms are equal in that respect (with 32-bit versus 64-bit being one, but by far not the only, possible obstacle). Yes, you can use conditional compile structures in LabVIEW to overcome this problem too, but at that point it really feels like using duct tape to hold the Eiffel Tower together. Maintenance of such a VI library is a nightmare in the long run. And don't forget performance: with a middle-layer shared library you can often pass the LabVIEW datatype buffers directly to the lower-layer library functions, while with MoveBlock you often end up copying each and every piece of data back and forth multiple times (see the sketch below). And smithd points out another advantage of a middle layer: you can make sure that all the created objects are properly deallocated on a LabVIEW abort. Without that, the whole shebang stays lingering in memory until you close LabVIEW completely, possibly also keeping things like file locks, named OS pipes, OS events and semaphores alive that prevent you from rerunning the software.
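     A minimal sketch of that direct buffer pass-through, assuming a hypothetical lower-level function lib_process() and the LabVIEW manager types from extcode.h (shipped in LabVIEW's cintools directory):

        /* middle-layer wrapper, called via a Call Library Function Node */
        #include "extcode.h"

        extern int lib_process(unsigned char *buf, int len); /* hypothetical lower-level API */

        MgErr LVProcessString(LStrHandle data)
        {
            if (!data || !*data)
                return mgArgErr;
            /* hand LabVIEW's own string buffer straight to the library,
               so no MoveBlock round trips or extra copies are needed */
            return lib_process(LStrBuf(*data), (int)LStrLen(*data)) ? mgNotSupported : noErr;
        }

     The same wrapper layer is also the natural place to register a cleanup routine that releases whatever objects the lower-level library created when the VI hierarchy is aborted.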
  3. There is no way to directly access LabVIEW controls from a Python script. You would have to somehow write a Python module and an accompanying LabVIEW module that can communicate with each other. But I'm not sure that is the approach I would choose. It requires the Python script to know your LabVIEW user interface exactly in order to be able to reference controls on it, which I find to be a rather brittle setup. Technically, LabPython is best suited to situations where you can write a library of Python routines that you then call from your LabVIEW code. That way your LabVIEW program provides all the information the script needs by passing it as parameters to the routines. Calling back from a Python script into LabVIEW was never really the main intention when I developed LabPython back in the old days :-). We eventually did Lua for LabVIEW, which does support some limited calling back into LabVIEW (limited in that it only works from LabVIEW to Lua or Lua to LabVIEW, but not in a recursive loop back and forth), but that is in fact one of the most complicated (and brittle) parts of Lua for LabVIEW. From the initial introduction of Lua for LabVIEW in LabVIEW 6 or so until the latest LabVIEW version, almost all problems that arose with a new LabVIEW release were related to this part of the package. As for support for LabPython, the most likely place to get any feedback at all is probably here, but there are not many people using it nowadays and I haven't written any Python script in at least 10 years. I did a few minor updates to the LabPython shared library in the past to fix some minor quirks, but making it work properly with Python 3.0 and newer would require some real work, also on the C side of the code. It was developed for Python 2.3 or so and works pretty well up to Python 2.7.x, but 3.0 introduced several changes that also affect the C code behind LabPython.
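     For the curious, the core of what such an embedding layer does on the C side boils down to the standard CPython embedding API; this is an illustration of the general approach, not LabPython's actual code:

        /* minimal embedded-interpreter sketch; compiles against Python 2.x or 3.x */
        #include <Python.h>

        int run_script(const char *script)
        {
            Py_Initialize();                      /* bring up the interpreter */
            int rc = PyRun_SimpleString(script);  /* 0 on success, -1 on error */
            Py_Finalize();
            return rc;
        }

     The Python 3 changes mentioned above (Unicode strings, reworked module and object APIs) sit one level below this, in the code that converts LabVIEW data to and from Python objects.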
  4. My experience with this is that under Windows it is pretty easy and unproblematic, but if you end up having numerous class hierarchy levels that depend on each other and build each of the various classes into its own PPL, you have to be careful when you do this for Linux real-time targets. For reasons known only to I-don't-know-whom, if you rebuild one of the base-class packed libraries for whatever reason, you absolutely have to rebuild every dependent class packed library, or LabVIEW will start to complain that the dependent classes can't be loaded. I have no idea what the reason is; I had assumed that a packed library is an isolated container that only exports its public interface to the outside world, so as long as nothing in the signature of the public methods changes this should be a no-brainer, but that doesn't seem to be the case for NI Linux RT targets. I didn't see these problems on Windows or VxWorks real-time targets.
  5. If your packed library is really just a wrapper around your child class implementation, a better way would most likely be to employ a default naming scheme where the PPL name follows the class name. Then, using "Get Default Class from Path", you simply load the class into memory at runtime and cast it to the base class, after which you can call all the base class methods and properties on it, and dynamic dispatch will make sure that the child methods are invoked.
  6. Did you redistribute the non-debug (release) version of your DLL? The debug version links to a different C runtime library that is not redistributable to other computers and only works on PCs where you have the Visual C compiler installed.
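     With the Microsoft compiler, the difference comes down to a build flag (illustrative command lines; the flag is what matters, not the file names):

        cl /MD  /LD mylib.c    (release DLL, links the redistributable CRT)
        cl /MDd /LD mylib.c    (debug DLL, links the debug CRT, not redistributable)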
  7. Actually, you should not really need to change anything code-wise. The Linux kernel sources can be compiled for just about any architecture out there, even CPUs for which you would nowadays be hard-pressed to find hardware to run them on. Of course, depending on where you got your kernel sources they might not contain support for all possible architectures, but the kernel project supports a myriad of target architectures, provided you can feed the compiler toolchain the correct defines. Now, figuring out all the necessary defines for a specific piece of hardware is a real challenge. For many of them the documentation really exists mostly in the source code only. This is where the various build systems come into play, which promise to make this configuration easier by letting you pick settings from a selection and then generating the necessary build scripts to drive the C toolchain. The real challenge is the configuration that tells the make toolchain which target architecture you want to compile for, which hardware modules to link statically, and which modules, if any, to compile as dynamic kernel modules. Without a thorough understanding of the various hardware components that are specific to your target, that can be a very daunting task. Obviously there are certain popular targets for which you will more readily find sample configuration scripts than for others. To make matters even more interesting, there isn't just one configuration/build system. Yocto, which is what NI uses, used to be a pretty popular one for embedded systems a few years ago but then lost a bit of traction; it seems to be active again, but the latest version is not backwards compatible with the version NI used for their NI Linux RT system, and NI probably does not see any reason to upgrade to the newest version as long as the old one works for what they need. It uses various tools from other projects such as OpenEmbedded and BitBake internally. Buildroot is another such build system for creating recipe-based builds of embedded Linux. The real challenge is not changing the C code of the kernel to suit your specific hardware (that should basically not be necessary, except for adding driver modules for hardware components that the standard kernel does not support out of the box). It is getting the entire build toolchain installed correctly so that you can actually start a build successfully and, once you have that, selecting the correct configuration settings so that the compiled kernel will run on your hardware target instead of panicking right away. This last part should be fairly simple for a VirtualBox VM, since the hardware it emulates is very standard and shouldn't be hard to configure correctly.
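     For a standard PC-like target such as a VM, the stock upstream kernel build flow shows where that configuration enters the picture (generic kernel make targets, not NI's Yocto recipes):

        make ARCH=x86_64 defconfig     (start from a sane default configuration)
        make menuconfig                (tweak drivers: built-in, module, or off)
        make -j8 bzImage modules       (compile the kernel image and loadable modules)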
  8. The first problem here is that a cRIO application on most platforms has, by definition, no UI. Even on the few cRIOs that support a display output, it is far from a fully featured GUI. What you describe is true for most libraries (.so files) but definitely not for the Linux kernel image. The chances that the zImage, or whatever file the NI Linux kernel uses on a cRIO, will run in a VM are pretty small. The kernel uses many conditional compile statements that include and exclude specific hardware components. The corresponding conditional compile defines are made in a specific configuration file that needs to be set up according to the target hardware the compiled image is supposed to run on. This configuration file is then read by the make script used when compiling the kernel and causes the make script to invoke the gcc compiler with a shedload of compiler defines on the command line for every C module file that needs to be compiled (see the sketch below). It's not just things like the CPU architecture that get compiled into the kernel this way, but also details about additional peripheral devices, including the memory management unit, floating point support and umpteen other things. While Linux also supports a dynamic module loader for kernel driver modules and uses it extensively for things like USB, network, SATA and similar subsystems, there needs to be a minimal basic set of drivers available very early in the boot process to provide the infrastructure the Linux kernel needs to pull itself out of the swamp by its own hair. These hardware drivers have to be statically linked into the kernel. But whoever compiles a Linux kernel can decide to compile additional modules statically into the kernel as well, usually for faster performance, but it works just as well for tying a kernel more tightly to a specific hardware platform. So even if you can retrieve the bootable image of a cDAQ or cRIO device and install it in a VM, the loading of the actual Linux kernel will most likely fail during the initial boot procedure of the kernel itself. If you get a kernel panic on the terminal output, you at least know that it did attempt to load the kernel, but it could just as well fail before the kernel even gets a chance to initialize the terminal, if the bootloader finds the kernel image at all. I seem to remember that NI uses busybox in the initial boot stage, so that would be the first thing one would need to get into in order to debug the actual loading of the real kernel image.
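     The conditional compilation pattern itself is plain C; here is a toy illustration (not actual kernel code; CONFIG_MY_BOARD_UART is a made-up define standing in for the generated CONFIG_ macros):

        #include <stdio.h>

        /* the kernel build passes defines like this to gcc for every file,
           e.g. gcc -DCONFIG_MY_BOARD_UART ... */
        int main(void)
        {
        #ifdef CONFIG_MY_BOARD_UART
            printf("UART driver statically compiled in\n");
        #else
            printf("UART driver left out (or built as a loadable module)\n");
        #endif
            return 0;
        }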
  9. The problem isn't even Linux. Even if you get NI Linux RT compiled and running, you aren't even halfway there. That is the OS kernel, but it does not include the LabVIEW runtime, NI-VISA, NI-DAQmx, NI-this and NI-that. Basically, it's a nice target with promises of additional real-time capabilities on which to run all your favorite open source tools like Python etc. Yes, you have all the other libraries like libcurl, libssl, libz, libthisandthat, each again with its own license, but they are completely irrelevant when you want to look at this as a LabVIEW real-time target. Without the LabVIEW runtime library and at least a dozen other NI libraries, such a target remains simply another embedded Linux system, even if you manage to install onto it every possible open source library that exists on this planet. Technically it may be possible to take all that additional stuff from an existing x86 NI Linux target and copy it over to your new bare NI Linux target. But there are likely pitfalls, with some of these components requiring specific hardware drivers in the system to work properly. And in terms of licensing, once you go beyond the GPL-covered Linux kernel that NI Linux itself is, and the other open source libraries, you are definitely outside any legal boundaries without a specific written agreement with the NI legal department.
  10. But on which hardware? You can't run an ARM virtual machine on a PC without some ARM emulation somewhere. Your PC uses an x86/64 CPU that is architecturally very different from ARM, so there needs to be some kind of emulation somewhere, either an ARM VM inside an ARM-on-x86 emulator, or an ARM emulator inside the x86 VM. There might be ways to achieve that with things like QEMU, ARMware and the like, but it is anything but trivial and is going to add even more complexity to the already difficult task of getting the NI Linux RT system running in a VM environment. Personally, I wonder if downloading the sources for NI Linux RT and recompiling it for your favorite virtual machine environment isn't going to be easier! And no, I don't mean to imply that that is easy at all, just easier than also adding an emulation layer to the whole picture and getting that to work as well.
  11. That's not the idea. Getting an ARM emulator to run inside an x86 or x64 VM is probably a pipe dream. However, the higher-end cRIOs (903x and 908x) and several of the cDAQ RT modules use an Atom, Celeron or better x86/64-compatible CPU with an x64 version of NI Linux. That one should theoretically be possible to run in a VM on your host PC, provided you can extract the image.
  12. I'm pretty convinced that the Notifiers, Queues, Semaphores and such all use the occurrence mechanism internally for their asynchronous operation. The Wait on Occurrence node is likely a more complex wrapper around the internal primitive that waits on the actual occurrence, and that primitive is what those objects use internally. But there might be a fundamental implementation detail in how the OnOccurrence() function, which is what Wait on Occurrence (and all those other nodes when they need to wait) ultimately ends up calling, is implemented in LabVIEW that accounts for this much time.
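     Occurrences are also reachable from external code; the LabVIEW manager documents an Occur() call for firing one from a DLL. A sketch, assuming the occurrence refnum is passed in through a Call Library Function Node parameter:

        #include "extcode.h"

        /* fires the occurrence so that any Wait on Occurrence node
           waiting on this refnum wakes up */
        MgErr FireWhenDone(Occurrence occ)
        {
            /* ... finish the asynchronous work here ... */
            return Occur(occ);
        }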
  13. That seems to have been posted to the wrong thread; you probably meant to reply to the thread about the Timestring function, and no, that is not as easy as changing the datatype. The manager functions are documented and can't simply change at will. Anything that was officially documented in any manual has to stay the same type-wise, or CINs and DLLs compiled against an older version of the LabVIEW manager functions would suddenly start to misbehave when run in a LabVIEW version with the new, changed datatype, and vice versa! The only allowable change would be to create a new function CStr DateCStringU64(uInt64 secs, ConstCStr fmt); implement it, test it to do the right thing, and then use it from the DateTime node. However, a timestamp in LabVIEW 7 and newer is not a U64 but in fact more like a fixed-point 64.64-bit value, with a signed 64-bit integer indicating the seconds and an unsigned 64-bit integer indicating the fractional seconds, so this function would better use that datatype instead (a sketch of that layout follows below). But DateCString and friends are functions from very old LabVIEW days; the fact that they return a CStr rather than an LStrHandle is already an indication. And to make things even worse, to make such a function actually work with dates beyond 2040 one really has to rewrite it in large parts, and there is no good way to guarantee that such a rewritten function would produce exactly the same string in every possible situation. That sounds to me like a lot of work to provide functionality with little to no real benefit, especially if you consider that the newer formatting functions already work with a much larger calendar range, although they do not work for the entire theoretically possible range of the 128-bit LabVIEW timestamp. In fact, that timestamp can cover a range of +-8'000'000'000'000'000'000 seconds relative to January 1, 1904 GMT (something like +-250'000'000'000 years, which is way beyond the expected lifetime of our universe) with a resolution of 2^-64 or ~10^-19 seconds, which is less than a tenth of an attosecond. However, I do believe that the least significant 32 bits of the fractional part are not used by LabVIEW at all, which still gives it a resolution of 2^-32, or less than 10^-9 seconds, about 0.23 nanoseconds.
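     The layout described above, expressed as a plain C struct (field names are illustrative, not taken from LabVIEW's headers):

        #include <stdint.h>

        typedef struct {
            int64_t  seconds;   /* signed seconds relative to Jan 1, 1904 GMT */
            uint64_t fraction;  /* fractional seconds, in units of 2^-64 s */
        } LVTimestamp128;

        /* full fraction resolution:          2^-64 s ~ 5.4e-20 s (< 0.1 attosecond)
           upper 32 bits only (as used):      2^-32 s ~ 2.3e-10 s (~ 0.23 ns) */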
  14. Your guess is very accurate! This node has existed since the beginnings of multiplatform LabVIEW, and so have the functions you mention, so it is logical that the node uses these functions internally. Now, why didn't they just change the node to use the more modern timestamp functions that Format into String uses? Well, it's all about compatibility. While it is theoretically possible to just call the newer format functions with the default timestamp format of %<%x %X>T to achieve the same result, there is a good chance that the explicit code in the two LabVIEW manager functions you mentioned might generate a slightly different string in some locales or OS versions, since those functions query all kinds of Windows settings to generate a locale-specific date and time string the very hard way. The formatting function was completely rewritten from scratch somewhere around LabVIEW 6 to handle the many more possibilities of the Format into String format codes, including the new timestamps. So if they had changed the primitive to internally just call Format into String with the corresponding format string, there would have been a very good chance that existing code using that primitive would have failed if it was too narrow-minded when parsing the generated string (a very bad idea anyhow to try to parse a string containing a locale-specific time or date, but I have seen it often in inherited code!). One principle that LabVIEW has always tried to follow is that existing code upgraded to a new version simply continues to work as before, or in the worst case gives you an explicit warning at load time that something is possibly going to change for a specific functionality. Testing all the possible incompatibilities against all the possible variations of OS version, language variants, etc. is a big nightmare, and you still have no guarantee that you caught everything, since many of those locale settings can be customized by the user. The format you want is more likely %<%X>T, as that node produces a locale-specific string, whereas your format string specifies a locale-independent fixed format.
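     For comparison, the %x and %X codes in LabVIEW's %<...>T syntax mirror the C strftime() codes of the same names, which is exactly what makes the default output locale dependent:

        #include <stdio.h>
        #include <time.h>
        #include <locale.h>

        int main(void)
        {
            char buf[64];
            time_t now = time(NULL);
            setlocale(LC_TIME, "");  /* adopt the user's locale settings */
            strftime(buf, sizeof buf, "%x %X", localtime(&now));
            printf("%s\n", buf);     /* e.g. "02/21/17 14:05:59" in an en_US locale */
            return 0;
        }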
  15. That might be true in the current version of that VI, but at some point it probably wasn't, and then the typecast can be necessary. Although recent versions of LabVIEW don't automatically turn a shift register into floating point when you remove every explicit type specification in its path, they will still do so whenever you edit something that forces LabVIEW to decide between incompatible datatypes. An explicit typecast somewhere makes sure the code is forced to that type and causes a broken arrow if something becomes incompatible with it. The alternative is to have a case like Initialize or similar where you explicitly set the shift register to a certain default value through a constant.
  16. Of course not! ADO stands for ActiveX Data Objects, and ActiveX is a Windows-only technology. Depending on the actual database server you want to access there are several possibilities, but not all of them are readily doable in LabVIEW for Linux. If your database driver implements the whole communication at the LabVIEW VI level, such as the MySQL driver here, which accesses the MySQL server directly through TCP/IP communication, then you are fine. Accessing the unixODBC driver is another possibility, which keeps the LabVIEW part independent of the actual database driver implementation; this project provides such a LabVIEW library (a minimal connection sketch follows below). However, it is not always easy to get a working ODBC driver for a specific database server. Microsoft officially supports Linux clients with their latest SQL Server, but I have not tried that at all. And if you are talking about the NI Linux real-time targets, an additional problem is the architecture (ARM-based for the low-cost targets and x64-based for the high-end targets) and the fact that NI Linux RT isn't a normal standard Linux system, but in several respects a slimmed-down Linux kernel that some precompiled binaries may not work on; and expecting Microsoft to give you the source code of their SQL Server libraries so you can compile your own binaries for a specific target is of course pretty hopeless.
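     On the C side, the unixODBC route looks roughly like this (placeholder DSN and credentials; the DSN must be configured in odbc.ini with a driver for your database server):

        #include <sql.h>
        #include <sqlext.h>

        int connect_dsn(void)
        {
            SQLHENV env;
            SQLHDBC dbc;
            SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
            SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
            SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
            /* "mydsn", "user" and "password" are placeholders */
            SQLRETURN rc = SQLConnect(dbc, (SQLCHAR *)"mydsn", SQL_NTS,
                                      (SQLCHAR *)"user", SQL_NTS,
                                      (SQLCHAR *)"password", SQL_NTS);
            return SQL_SUCCEEDED(rc) ? 0 : -1;
        }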
  17. Well, to claim that it could not have any effect on cRIO sales is rather bold. But my point was not that you cannot do that; it's part of any competition that your product may have a negative effect on the bottom line of another product. My point was that NI has a license deal with Xilinx which might contain wording about which hardware the Xilinx tools that NI bundles with their LabVIEW FPGA Toolkit can be used with, and that might exclude non-NI hardware. I'm not sure such limitations exist, but it would not surprise me if they do, and if they do, NI might be obligated to prevent use of the Xilinx tools included in the LabVIEW FPGA Toolkit with non-NI hardware, both technically and legally, independent of whether they want to or not.
  18. Right! I did check, and when using shared reentrant clone VIs it also works in LabVIEW 2009. In my initial tests I had used the default preallocated reentrancy of those VIs, and that of course can't work, as LabVIEW would then have to preallocate an indefinite number of clones due to the recursion, and that would crash for sure! So LabVIEW 2009 it will stay!
  19. Well, recursion worked before, but only if you opened a reference to the VI explicitly. Since LabVIEW 2012 you can place a reentrant VI directly on its own diagram.
  20. A colleague recently tried to use the OpenG Variant Configuration File Library and found that the loading and saving of more complex structures was pretty slow. A little debugging quickly identified the culprit: the way recursion in that library is resolved, by opening a VI reference to itself to call the VI recursively. In LabVIEW 2012 and later the solution to this problem is pretty quick and painless: just replace the Open VI Reference, Call VI by Reference and Close VI Reference with the actual VI itself. Works like a charm, and loading and saving times are then pretty much on par with explicitly programmed VIs using the normal INI file routines (cutting down from 50 seconds to about 500 ms for a configuration containing several hundred clustered items). Now I was wondering: would anyone consider it a problem to update this library to require LabVIEW 2012 or later?
  21. Definitely! NI licensed the Xilinx toolchain from Xilinx to be distributed as part of the FPGA Toolkit, and there will certainly be some limitations in the fine print that Xilinx requires NI to follow as part of that license deal. They do not want ANY customer to be able to rip the toolchain out of a LabVIEW FPGA installation and program arbitrary Xilinx FPGA hardware with it instead of buying the toolchain from Xilinx, which starts at $2995 for a node-locked Vivado Design HL license, which I would assume to be similar to what NI bundles, except that NI also bundles the older version for use with older cRIO systems. So while NI certainly won't like such hardware offerings, as they hurt cRIO sales to some extent, they may be contractually obligated to act against such attempts to circumvent the Xilinx/NI license deal, whether they want to or not.
  22. Hard to say anything conclusive without the ability to debug the libraries in source (and no, I don't volunteer to do that; that would be the original developer's task). Generally, .Net only looks in the GAC and the current process's executable directory when trying to load assemblies. This was done on purpose, since the old way of locating DLLs all over the place in various default and not-so-default locations created more trouble than it solved. An application can then explicitly register additional directories for a .Net context. LabVIEW seems to maintain separate .Net contexts per application instance, and a project is an application instance in LabVIEW, isolating almost everything from any other application instance even though it runs in the same LabVIEW IDE process. For project application instances, LabVIEW also registers the directory in which the project file resides as a .Net assembly location. This may or may not have anything to do with your issue, but from your description it could be that one of your assemblies is trying to load some other assembly and not properly catching the exception when that fails. This is really all guesswork without a deeper look into the actual .Net components involved, though. If you can't get the original developer of the .Net component to look into this issue for you with a source code debugger, I don't see much chance of getting this working.
  23. Brian (Hoovah) has explained it all very well, including the option to use the loop conditional terminal to avoid the usually unnecessary looping through the remaining iterations.
  24. There is definitely a difference depending on whether or not you use a shift register for the error cluster:

                                         with shift register                 without shift register
       error before loop (n >= 1)        n iterations do nothing             n iterations do nothing
       error before loop (n = 0)         error is visible after the loop     error has magically disappeared
       error in iteration x of n         only iterations 0..x-1 execute      all n iterations execute
                                         first error in loop is passed out   only the last error of the loop is
                                                                             passed out (unless you create an
                                                                             auto-indexing error array)

      Generally, only the "all n iterations execute" behavior of the loop without a shift register is sometimes preferable to what the shift register would cause. The disappearing error and the discarded earlier errors are definitely not desirable in any code that you do not intend to throw away immediately.
  25. The page you link to is not very detailed, but it says under Network Protocol: TCP/IP, HTTP, DHCP, DNS. You can forget the last two; they do not mean anything for the actual accessibility. HTTP, however, most likely means that you can access a periodically refreshed still image (JPEG format) from the camera if you can figure out the right URL path (see the sketch below). TCP/IP with H.264 most likely hints at support for live streaming with the right driver; you will need to look for an IP camera driver for your OS that supports H.264 compression. Alternatively, you can most likely install something like https://ip-webcam.appspot.com/ to access at least the JPEG interface on your camera. That driver translates the JPEG images into a DirectShow interface, which you can then access with the IMAQdx driver software from NI to get the images into LabVIEW IMAQ.
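     Grabbing such a still image is a one-shot HTTP request; a libcurl sketch (the URL path is a guess and differs per camera model, so check the camera's documentation):

        #include <stdio.h>
        #include <curl/curl.h>

        int main(void)
        {
            CURL *curl = curl_easy_init();
            if (!curl) return 1;
            FILE *out = fopen("snapshot.jpg", "wb");
            /* hypothetical snapshot URL; many cameras use a path like this */
            curl_easy_setopt(curl, CURLOPT_URL, "http://192.168.1.50/snapshot.jpg");
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, out); /* default callback writes to the FILE* */
            CURLcode rc = curl_easy_perform(curl);
            fclose(out);
            curl_easy_cleanup(curl);
            return (rc == CURLE_OK) ? 0 : 1;
        }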