Everything posted by Rolf Kalbermatter
-
The first problem here is that a cRIO application on most platforms has by definition no UI. Even on the few cRIO models that support a display output, it is far from a fully featured GUI.

What you describe is true for most libraries (.so files) but definitely not for the Linux kernel image. The chances that the zImage, or whatever file the NI Linux kernel uses on a cRIO, will run in a VM are pretty small. The kernel uses many conditional compile statements that include and exclude specific hardware components. The corresponding conditional compile defines are made in a specific configuration file, which needs to be set up according to the target hardware the compiled image is supposed to run on. This configuration file is then read by the make script that you use when compiling the kernel and causes the make script to invoke the gcc compiler with a shed-load of compiler defines on the command line for every C module that needs to be compiled. It's not just things like the CPU architecture that are hard-compiled into the kernel, but also support for additional peripheral devices, including the memory management unit, floating point support and umpteen other things.

While Linux also supports a dynamic module loader for kernel drivers and uses it extensively for things like USB, network, SATA and similar subsystems, there needs to be a minimal basic set of drivers available very early in the boot process, in order to provide the necessary infrastructure for the Linux kernel to lift itself out of the swamp by its own hair. These hardware drivers have to be statically linked into the kernel. A kernel builder can also decide to compile additional modules statically into the kernel, supposedly for faster performance, but it works just as well for tying a kernel more tightly to a specific hardware platform.

So most likely, even if you can retrieve the bootable image of a cDAQ or cRIO device and install it in a VM, the loading of the actual Linux kernel will very likely fail during the initial boot procedure of the kernel itself. If you get a kernel panic on the terminal output, you at least know that it attempted to load the kernel, but it could just as well fail before the kernel even gets a chance to initialize the terminal, if the bootloader finds the kernel image at all. I seem to remember that NI uses busybox as the initial bootloader, so that would be the first thing one would need to get into in order to debug the actual loading of the real kernel image.
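To make the conditional compilation point concrete, here is a small, self-contained C sketch. It is not actual kernel code; the CONFIG_DEMO_* symbols are made up for illustration. It just shows how defines passed on the gcc command line, the way the kernel's configuration and make machinery does it, include or exclude whole code paths from the compiled binary:

/* build_config_demo.c -- illustrative sketch only, not actual kernel code.
 * The CONFIG_DEMO_* symbols are invented for this example. They show how
 * defines passed by the build system on the gcc command line decide at
 * compile time which hardware support ends up in the binary at all.
 *
 * Build examples:
 *   gcc -DCONFIG_DEMO_MMU -DCONFIG_DEMO_HARD_FPU build_config_demo.c -o demo
 *   gcc build_config_demo.c -o demo
 */
#include <stdio.h>

int main(void)
{
#ifdef CONFIG_DEMO_MMU
    puts("memory management unit support compiled in");
#else
    puts("MMU support compiled out -- that code simply does not exist in this binary");
#endif

#ifdef CONFIG_DEMO_HARD_FPU
    puts("hardware floating point support compiled in");
#else
    puts("software floating point emulation compiled in");
#endif
    return 0;
}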
-
The problem isn't even Linux. Even if you get NI Linux RT compiled and running, you aren't even halfway there. That is the OS kernel, but it does not include the LabVIEW runtime, NI-VISA, NI-DAQmx, NI-this and NI-that. Basically a nice target with promises of additional real-time capabilities to run all your favorite open source tools like Python on. Yes, you have all the other libraries like libcurl, libssl, libz, libthisandthat, each with their own license again, but they are completely irrelevant when you want to look at this as a LabVIEW real-time target. Without the LabVIEW runtime library, and at least a dozen other NI libraries, such a target remains simply another embedded Linux system, even if you manage to install onto it every possible open source library that exists on this planet. Technically it may be possible to take all that additional stuff from an existing x86 NI Linux target and copy it over to your new bare NI Linux target, but there are likely pitfalls, with some of these components requiring specific hardware drivers in the system to work properly. And in terms of licensing, once you go beyond the GPL-covered Linux kernel that NI Linux in itself is, and the other open source libraries, you are definitely outside any legal borders without a specific written agreement with the NI legal department.
-
Futures - An alternative to synchronous messaging
Rolf Kalbermatter replied to Daklu's topic in Object-Oriented Programming
But on which hardware? You can't run an ARM virtual machine on a PC without some ARM emulation somewhere. Your PC uses an x86/x64 CPU that is architecturally very different from ARM, so there needs to be some kind of emulation somewhere: either an ARM VM inside an ARM-on-x86 emulator, or the ARM emulator inside the x86 VM. There might be ways to achieve that with things like QEMU, ARMware and the likes, but it is anything but trivial and is going to add even more complexity to the already difficult task of getting the NI Linux RT system running in a VM environment. Personally I wonder if downloading the sources for NI Linux RT and recompiling it for your favorite virtual machine environment wouldn't be easier! And no, I don't mean to imply that that is easy at all, just easier than also adding an emulation layer to the whole picture and getting that to work as well. -
Futures - An alternative to synchronous messaging
Rolf Kalbermatter replied to Daklu's topic in Object-Oriented Programming
That's not the idea. Getting an ARM emulator to run inside an x86 or x64 VM is probably a pipe dream. However, the higher end cRIOs (903x and 908x) and several of the cDAQ RT modules use an Atom, Celeron or better x86/64-compatible CPU with an x64 version of NI Linux. That one should theoretically be possible to run in a VM on your host PC, provided you can extract the image. -
Boolean Tag Channel vs Notifier Performance
Rolf Kalbermatter replied to infinitenothing's topic in LabVIEW General
I'm pretty convinced that Notifiers, Queues, Semaphores and such all use the occurrence mechanism internally for their asynchronous operation. The Wait on Occurrence node is likely a more complex wrapper around the internal primitive that waits on the actual occurrence and that those objects use internally. But there might be a fundamental detail in how the OnOccurrence() function, which is what Wait on Occurrence (and all those other nodes when they need to wait) ultimately ends up calling, is implemented in LabVIEW that takes this much time. -
That would seem to me to be posted to the wrong thread. You probably meant to reply to the thread about the Timestring function, and yes, that is not as easy as changing the datatype. The manager functions are documented and can't simply change at will. Anything that was officially documented in any manual has to stay the same type-wise, or a CIN or DLL compiled with an older version of the LabVIEW manager functions will suddenly start to misbehave when run in a LabVIEW version with the new, changed datatype, and vice versa!! The only allowable change would be to create a new function CStr DateCStringU64(uInt64 secs, ConstCStr fmt);, implement and test it to do the right thing, and then use it from the DateTime node.

However, a timestamp in LabVIEW 7 and newer is not a U64 but in fact more like a fixed point 64.64 bit value, with a 64 bit signed integer that indicates the seconds and a 64 bit unsigned integer indicating the fractional seconds. So this function would better use that datatype instead. But the whole DateCString family of functions is from very old LabVIEW days; the fact that it returns a CStr rather than an LStrHandle is already an indication. And to make things even worse, to make such a function actually work with dates beyond 2040 one really has to rewrite it in large parts, and there is no good way to guarantee that such a rewritten function would produce exactly the same string in every possible situation. That sounds to me like a lot of work to provide functionality with little to no real benefit, especially if you consider that the newer formatting functions already work with a much larger calendar range, although they do not work for the entire theoretically possible range of the 128 bit LabVIEW timestamp. In fact that timestamp can cover a range of +-8'000'000'000'000'000'000 seconds relative to January 1, 1904 GMT (something like +-250'000'000'000 years, which is way beyond the expected lifetime of our universe) with a resolution of 2^-64 or ~10^-19 seconds, which is less than a tenth of an attosecond. However, I do believe that the least significant 32 bits of the fractional part are not used by LabVIEW at all, which still gives it a resolution of 2^-32 or less than 10^-9 seconds, about 0.25 nanoseconds.
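For illustration, here is a minimal C sketch of the fixed point 64.64 timestamp layout described above and of what an extended function could look like. These are not the actual LabVIEW extcode.h declarations; the type and function names are made up for this example:

/* timestamp_sketch.c -- illustrative only; NOT the actual extcode.h declarations.
 * It just models the 64.64 fixed point layout described above. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    int64_t  seconds;    /* signed seconds relative to January 1, 1904 GMT */
    uint64_t fraction;   /* unsigned fractional seconds in units of 2^-64 s */
} Timestamp128;          /* hypothetical name */

/* Hypothetical successor to the legacy DateCString style function, taking the
 * full 128 bit timestamp instead of a 32 bit seconds value:
 *
 *   char *DateTimeString128(Timestamp128 ts, const char *fmt);
 */

int main(void)
{
    Timestamp128 epoch = { 0, 0 };   /* January 1, 1904 00:00:00 GMT */

    /* resolution if only the upper 32 bits of the fraction are used: 2^-32 s */
    printf("2^-32 s = %.3e s (roughly a quarter of a nanosecond)\n", 1.0 / 4294967296.0);
    (void)epoch;
    return 0;
}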
-
Your guess is very accurate! This node has existed since the beginnings of multiplatform LabVIEW, and the functions you mention do so too, so it is logical that the node internally uses them. Now, why didn't they just change the node to use the more modern timestamp functions that Format into String uses? Well, it's all about compatibility. While it is theoretically possible to just call the according format functions with the default timestamp format of %<%x %X>T to achieve the same, there is a good chance that the explicit code used in the two LabVIEW manager functions you mentioned might generate a slightly different string in some locales or OS versions, since that function queries all kinds of Windows settings to generate a locale-specific date and time string the very hard way. The formatting function was completely rewritten from scratch to handle the many more possibilities of the Format into String format codes somewhere around LabVIEW 6, including the new timestamps. So if they had changed the primitive to internally just call Format into String with the according format string, there would have been a very good chance that existing code using that primitive would have failed if it was too narrow-minded when parsing the generated string (a very bad idea anyhow to try to parse a string containing a locale-specific time or date, but I have seen that often in inherited code!). One principle that LabVIEW has always tried to follow is that existing code that gets upgraded to a new version simply continues to work as before, or in the worst case gives you an explicit warning at load time that something is possibly going to change for a specific functionality. Testing all the possible incompatibilities with all the possible variations of OS version, language variants, etc. is a big nightmare, and you still have no guarantee that you caught everything, since many of those locale settings can be customized by the user. The format you want to use is more likely %<%X>T, as that produces a locale-specific string, whereas your format string specifies a locale-independent fixed format.
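As a side note, the %x and %X codes correspond to the C library's strftime codes of the same name (locale-specific date and time respectively). A small C program, where the locale names are just examples and may not be installed on every system, shows why parsing the resulting string is so fragile:

/* locale_demo.c -- shows how the same %x %X format produces different strings
 * depending on the active locale, which is why parsing such strings is fragile.
 * The locale names below are examples and may not be installed on every system.
 * Build: gcc locale_demo.c -o locale_demo */
#include <stdio.h>
#include <time.h>
#include <locale.h>

static void print_local(const char *locale_name)
{
    char buf[64];
    time_t now = time(NULL);

    if (!setlocale(LC_TIME, locale_name)) {
        printf("%-12s (locale not available on this system)\n", locale_name);
        return;
    }
    strftime(buf, sizeof buf, "%x %X", localtime(&now));
    printf("%-12s -> %s\n", locale_name, buf);
}

int main(void)
{
    print_local("C");
    print_local("en_US.UTF-8");
    print_local("de_DE.UTF-8");
    return 0;
}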
-
That might be true in the current version of that VI, but at some point that was probably not the case, and then the typecast can be necessary. Although recent versions of LabVIEW don't automatically turn a shift register into floating point if you remove every explicit type specification along its path, they will still do so whenever you edit something that forces LabVIEW to decide between incompatible datatypes. An explicit typecast somewhere makes sure the code is forced to that type and causes a broken arrow if something becomes incompatible with it. The alternative is to have a case like initialize, or similar, where you explicitly set the shift register to a certain default value through a constant.
-
Database Connectivity Toolkit Multi Row Insert
Rolf Kalbermatter replied to lisam's topic in Database and File IO
Of course not! ADO stands for ActiveX Data Objects, and ActiveX is a Windows-only technology. Depending on the actual database server you want to access there are several possibilities, but not all of them are readily doable in LabVIEW for Linux. If your database driver implements the whole communication at the LabVIEW VI level, such as the MySQL driver here, which accesses the MySQL server directly through TCP/IP communication, then you are fine. Accessing the unixODBC driver is another possibility, which keeps the LabVIEW part independent of the actual database driver implementation. This project does provide such a LabVIEW library; however, it is not always easy to get a working ODBC driver for a specific database server. Microsoft officially supports Linux clients with their latest SQL Server, but I have not tried that at all. And if you talk about the NI Linux real-time targets, an additional problem is the architecture (ARM based for the low cost targets and x64 based for the high end targets) and the fact that NI Linux RT isn't a normal standard Linux system but in several aspects a slimmed-down Linux kernel that some precompiled binaries may not work on; and to expect Microsoft to give you the source code of their SQL Server libraries to compile your own binaries for a specific target is of course pretty hopeless.
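To give an idea of the unixODBC route mentioned above, here is a minimal C client sketch; the DSN name and credentials are placeholders for whatever your odbc.ini defines for the actual database server:

/* odbc_demo.c -- minimal unixODBC client sketch. "MyDSN", "user" and "password"
 * are placeholders for an entry in your odbc.ini for the actual database server.
 * Build: gcc odbc_demo.c -lodbc */
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV env;
    SQLHDBC dbc;
    SQLRETURN ret;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    /* connect through whatever driver odbc.ini configures for this DSN */
    ret = SQLConnect(dbc, (SQLCHAR *)"MyDSN", SQL_NTS,
                     (SQLCHAR *)"user", SQL_NTS,
                     (SQLCHAR *)"password", SQL_NTS);
    if (SQL_SUCCEEDED(ret)) {
        printf("connected through unixODBC\n");
        SQLDisconnect(dbc);
    } else {
        printf("connection failed -- check the driver entry for this DSN\n");
    }

    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}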
-
Well, to claim that it could not have any effect on cRIO sales is rather bold. But my point was not that you cannot do that; it's part of any competition that your product could have a negative effect on the bottom line of another product. My point was that NI has a license deal with Xilinx which might contain wording about the hardware with which the Xilinx tools that NI bundles with their LabVIEW FPGA Toolkit can be used, and that might exclude non-NI hardware. I'm not sure such limitations exist, but it would not surprise me if they do, and if they do, NI might be obligated to prevent use of the Xilinx tools included in the LabVIEW FPGA Toolkit with non-NI hardware, both technically and legally, whether they want to or not.
-
OpenG Variant Configuration file library
Rolf Kalbermatter replied to Rolf Kalbermatter's topic in OpenG General Discussions
Right! I did check, and when using shared reentrant clone VIs, it also works in LabVIEW 2009. In my initial tests I used the default preallocated reentrancy of those VIs, and that of course can't work, as LabVIEW would then have to preallocate an indefinite number of clones due to the recursion, and that would crash for sure! So LabVIEW 2009 it will stay! -
OpenG Variant Configuration file library
Rolf Kalbermatter replied to Rolf Kalbermatter's topic in OpenG General Discussions
Well, recursion worked before, but only if you opened a reference to the VI explicitly. Since LabVIEW 2012 you can place a reentrant VI directly onto its own diagram. -
A colleague recently tried to use the OpenG Variant Configuration File Library and found that the loading and saving of more complex structures was pretty slow. A little debugging quickly showed the culprit, which is the way the recursion in that library is resolved: by opening a VI reference to itself to call the VI recursively. In LabVIEW 2012 and later the solution to this problem is pretty quick and painless: just replace the Open VI Reference, Call VI by Reference and Close VI Reference with the actual VI itself. Works like a charm, and loading and saving times are then pretty much on par with explicitly programmed VIs using the normal INI file routines (cutting down from 50 seconds to about 500 ms for a configuration containing several hundred clustered items). Now I was wondering if there is anyone who would think that making this library require LabVIEW 2012 or later would be a problem?
-
Definitely! NI licensed the Xilinx toolchain from Xilinx to be distributed as part of the FPGA toolkit, and there will certainly be some limitations in the fine print that Xilinx requires NI to follow as part of that license deal. They do not want ANY customer to be able to rip the toolchain out of a LabVIEW FPGA installation to program ANY Xilinx FPGA hardware with, without having to buy the toolchain from Xilinx instead, which starts at $2995 for a node-locked Vivado Design HL license, which I would assume to be similar to what NI bundles, except that NI also bundles the older version for use with older cRIO systems. So while NI certainly won't like such hardware offerings, as they hurt their cRIO sales to some extent, they may be contractually obligated to proceed against such attempts to circumvent the Xilinx/NI license deal, whether they want to or not.
-
Callin a mixed mode (c#,managed c++) dll in Labview
Rolf Kalbermatter replied to demess's topic in Calling External Code
Hard to say anything conclusive without the ability to debug the libraries in source (and no, I don't volunteer to do that; that would be the original developer's task). Generally, .Net only looks at the GAC and the current process' executable directory when trying to load assemblies. This has been done on purpose, since the old way of locating DLLs all over the place in various default and not so default locations created more trouble than it actually solved. An application can then register additional directories explicitly for a .Net context. LabVIEW seems to maintain separate .Net contexts per application instance, and a project is an application instance in LabVIEW, isolating almost everything from any other application instance even though you run it in the same LabVIEW IDE process. For project application instances LabVIEW also registers the directory in which the project file resides as a .Net assembly location. This may or may not have anything to do with your issue, but from your description it could be that one of your assemblies is trying to load some other assembly and not properly catching the exception when that fails. But this is really all guesswork without a deeper look into the actual .Net components involved. If you can't get the original developer of the .Net component to look into this issue for you with a source code debugger, I don't see a lot of chances to get this working. -
There is definitely a change depending on whether you use a shift register or not for the error cluster:

error before loop (n >= 1):
- with shift register: n times do nothing
- without shift register: n times do nothing

error before loop (n = 0):
- with shift register: error is visible after the loop
- without shift register: error has magically disappeared

error in loop execution x of n:
- with shift register: iterations 0 .. x-1 execute; the first error in the loop is passed out
- without shift register: iterations 0 .. n execute; unless you create an autoindexing error array, only the last error of the loop execution is passed out

Generally only the purple situation (in the loop without shift register for the error cluster) is sometimes preferable to what the shift register would cause. The red ones are definitely not desirable in any code that you do not intend to throw away immediately.
-
IP Camera Surveillance TCP-IP
Rolf Kalbermatter replied to edupezz's topic in Machine Vision and Imaging
The page you link to is not very detailed. But it says under Network Protocol: TCP/IP, HTTP, DHCP, DNS. You can forget the last two, they do not mean anything for the actual accessibility, but HTTP most likely means that you can retrieve a periodically refreshed still image (in JPEG format) from it if you can figure out the right URL path. TCP/IP with H.264 most likely hints at support for live streaming with the right driver. You will need to look for an IP camera driver for your OS that supports the H.264 compression. Alternatively you can most likely install something like https://ip-webcam.appspot.com/ to access at least the JPEG interface on your camera. This driver translates the JPEG images into a DirectX interface that you can then access with the IMAQdx driver software from NI to get the images into LabVIEW IMAQ.
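If the JPEG route works, grabbing a still image outside of LabVIEW is simple enough. Here is a small C sketch using libcurl, where the URL path is purely a placeholder since every camera vendor uses its own path:

/* jpeg_grab.c -- fetch one still JPEG from an IP camera's HTTP interface with
 * libcurl. The URL below is a placeholder; the real path depends on the camera.
 * Build: gcc jpeg_grab.c -lcurl */
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl;
    FILE *out = fopen("snapshot.jpg", "wb");
    if (!out)
        return 1;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if (curl) {
        /* hypothetical still image URL -- consult the camera documentation */
        curl_easy_setopt(curl, CURLOPT_URL, "http://192.168.1.100/snapshot.jpg");
        /* libcurl's default write callback fwrite()s into this FILE pointer */
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
        if (curl_easy_perform(curl) != CURLE_OK)
            fprintf(stderr, "download failed\n");
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    fclose(out);
    return 0;
}
-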
Callin a mixed mode (c#,managed c++) dll in Labview
Rolf Kalbermatter replied to demess's topic in Calling External Code
Probably something with the .Net assembly search path in combination with some badly implemented dynamic linking. .Net by default only searches the GAC and the directory of the current executable for .Net assemblies. LabVIEW adds to that the directory in which the current project file is located, if you run the VIs from within a project. -
1) is bad form. It will cause an invalid refnum (and empty arrays and strings, and 0 for integers and floats) after the loop if the loop iterates 0 times. And yes, even if you think it will never be possible to execute 0 times, you always get surprised when the code is running somewhere on the other side of the globe and you only have a very slow remote connection that makes live debugging impossible.

2) is the preferred way for me for any refnum and most other data. Anecdotally, LabVIEW had some pretty smart optimization algorithms in its compiler that specifically targeted shift registers, so for large data like arrays and long strings it could make a tremendous difference if you put this data into a shift register instead of just wiring it through. The latest versions of LabVIEW have many more optimization strategies, so the shift register is not strictly necessary for LabVIEW to optimize many more things than before. However, the shift register won't harm performance either and is still a good hint to LabVIEW that this data can in fact be treated in place.

3) is simply ugly, although it doesn't have the problem of invalid data after a 0-iteration loop (but it can potentially prevent LabVIEW from doing deeper optimization, which for large data can make a real difference).

For error clusters, 1) can be a possible approach, although you still have the potential to lose error propagation on 0-iteration loops that way. You do need to watch out for additional possible problems when an error occurs. One possibility is to simply abort the loop immediately, which the abort terminal for For Loops is very handy for, or to handle the error and clear it so the next iterations still do something. The least preferable solution is to just keep looping anyhow and do nothing in the subsequent iterations (through the internal error handling in the VIs that you call), BUT ONLY for For Loops!! Never ever use this approach in While Loops without making 200% sure that the loop will abort on some criterion anyhow! I've debugged way too many LabVIEW applications where an error in a while loop simply made the loop spin forever doing nothing.
-
To my knowledge not beyond what you can get from the OpenG library. When I tried to do that I was looking for some further documentation, but couldn't find anything beyond some tidbits in remotely related knowledge base articles. So I simply gleaned through the different packages and deduced the necessary format for the .cdf (component definition file). It being an XML format file made it somewhat easy to guess the relevant parts. There is in newer LabVIEW versions an option to create an installation package for RT from an RT project, which uses the same cdf file, but the creation of that file is of course hidden inside the according package builder plugin. If the purpose is only to create an installer component for a shared library, then the files from the OpenG ZIP library should give some head start. One problem you might have to solve is to get the files into the RT Images subdirectory. Since it is inside Program Files, your installer needs elevated privileges in order to put it there. If you do this as part of a VIPM or OGP package file, VIPM often is not started with such rights by default. I solved that by packing all the components into a setup file made with Inno Setup, which requests the privilege elevation. Then I start that setup file at the end of the package installation, and it installs the components into the RT Images subdirectory.
-
LabVIEW doesn't deploy shared libraries to the embedded targets itself (except for the old Pharlap-based systems). So in my experience you always need to find a way to bring the necessary shared libraries to those targets yourself. One option is to just copy them manually into the correct location for the system in question (/usr/local/lib for the NI Linux RT based systems). Another one is to create a distribution package that the user can then install to the target through the NI MAX software installation section for that specific target. The creation of such a package isn't too difficult; it is mostly a simple configuration file that needs to be created and copied into a subdirectory of the "Program Files/National Instruments/RT Images" folder, together with the shared library file(s) and any other necessary files. The OpenG ZIP library uses this last method to install support for the various NI RT targets.
-
Right, it only works for byte arrays, not other integer arrays. And they forgot the UDP Write, which is actually at least as likely to be used with binary byte streams. Someone had a good idea but only executed it halfway (well really a quarter of the way, if you think about the Read). Too bad that the FlexTCP functions they added under the hood around 7.x never were finished.
-
Wait, are you sure? I totally missed that! Ok, well:
2014: no byte stream on TCP Write
2015: no byte stream on TCP Write
2016: haven't installed that currently
-
I didn't mean to use shared libraries for endianness issues, that would be madness. But data passed to a shared library is always in native format; anything else would be madness too. Strings being used as a standard any-type datatype is sort of ok in a language that only uses (extended) ASCII characters for strings. Even in LabVIEW that is only sort of true if you use a western language version. Asian versions use multibyte character encoding, where a byte is not equal to a character at all. So I consider a byte stream a more appropriate datatype for the network and VISA interfaces than a string. Of course the damage has been done already and you can't take away the string variant now, at least not in current LabVIEW. Still, I think it would be more accurate to introduce byte stream versions of those functions and drop them by default on the diagram, with an option to switch to the (borked) string version they have now. I would expect a fundamentally new version of LabVIEW to switch to byte streams throughout for these interfaces. It's the right format, since technically these interfaces work with bytes, not with strings.
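A tiny C illustration of the byte-versus-character point (it assumes a UTF-8 locale is available on the system):

/* multibyte_demo.c -- shows that in a multibyte encoding a byte is not a
 * character: the same text has different byte and character counts.
 * Assumes a UTF-8 locale is installed. Build: gcc multibyte_demo.c */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <locale.h>

int main(void)
{
    /* German "Gruesse" with umlaut and sharp s in UTF-8: 5 characters, 7 bytes */
    const char *text = "Gr\xc3\xbc\xc3\x9f" "e";

    setlocale(LC_ALL, "");  /* use the environment's locale, expected to be UTF-8 */
    printf("bytes:      %zu\n", strlen(text));             /* 7 */
    printf("characters: %zu\n", mbstowcs(NULL, text, 0));  /* 5 if the locale is UTF-8 */
    return 0;
}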