Everything posted by Rolf Kalbermatter
-
Even if you make the string fixed size, LabVIEW most likely won't embed it into the cluster like a C compiler does. That simply was not a design criterion when fixed-size arrays (and strings) were implemented. They are mostly meant for FPGA, where variable-sized elements have a very high implementation overhead. The fixed-size option has been documented in the data type documents at least since LabVIEW 3.0, and likely before, but there was no public functionality to enable or access it, and it only really got used with the introduction of FPGA support around LabVIEW 7.1. Outside of FPGA targets it's not an exercised feature at all, and for strings it isn't really used anywhere. It is more a byproduct of strings being in fact just a special type of array with their own wire color and datatype; the underlying implementation is basically just an array of bytes. Could LabVIEW inline fixed-size arrays in clusters? Sure! But that was never an initial design criterion. The support for interfacing to external libraries was added long after this datatype system was implemented, so changing it then to support C-style fixed-size arrays inside structs would have meant backwards incompatibility or a lot of extra work in the Call Library Node configuration.
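To make the difference concrete, here is a minimal sketch (the type names CDeviceInfo/LVDeviceInfo are made up for the example): a C API typically expects the fixed-size characters embedded directly in the struct, while a LabVIEW cluster containing a string hands external code a separately allocated handle instead.

#include <stdint.h>
#include "extcode.h"   /* LabVIEW external code header, defines LStrHandle */

/* What a C header usually declares: the characters live inside the struct. */
typedef struct {
    int32_t id;
    char    name[32];      /* fixed-size array embedded inline */
} CDeviceInfo;

/* What a LabVIEW cluster of {I32, string} looks like to external code:
   the string is a handle (pointer to pointer) to a separately allocated
   {length, bytes...} block, not inline data. */
typedef struct {
    int32_t    id;
    LStrHandle name;
} LVDeviceInfo;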
-
Sharing serial device from Raspberry Pi to VISA?
Rolf Kalbermatter replied to Sparkette's topic in LabVIEW Community Edition
py-visa is a pure VISA client implementation. While the VISA API is sort of documented in the VXIpnp documents, the internal workings of VISA are not. That includes the VISA Server network protocol and all that stuff. I think the cost-benefit analysis of trying to reverse engineer that is pretty bad, as even under Windows the VISA Server isn't used that often. The only platform I have ever used it with was NI real-time controllers, to access their serial ports from Windows. But that was all NI hardware with NI drivers installed. -
LabVIEW Community Edition Announced
Rolf Kalbermatter replied to hooovahh's topic in LabVIEW Community Edition
Seems definitely doable. A bit of a shame that they have a separate API for each hardware board. -
LabVIEW Community Edition Announced
Rolf Kalbermatter replied to hooovahh's topic in LabVIEW Community Edition
The Pi is very easy to get. I have a BeagleBone Black and a myRIO available here, plus a RIOTboard and an older Atmel SAM7X embedded controller board, but those last two are not supported by Linx. The MCC HAT needs to have some form of binary module interface, in the form of a shared library, to be accessible from C. Interfacing that with the LabVIEW Call Library Node definitely must be possible; it's just some busy work to do, along the lines of the sketch below.
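Purely as a hypothetical sketch (the mccboard_* name below is a placeholder I made up, not the real MCC API): the busy work is mostly writing thin C shims with plain C types so each vendor function maps cleanly onto a Call Library Node.

#include <stdint.h>

/* Placeholder for a function exported by the vendor's shared library. */
extern int mccboard_read_channel(uint8_t address, uint8_t channel, double *value);

/* Thin shim: plain integer/double parameters and an integer return value
   that the LabVIEW wrapper VI can translate into an error cluster. */
int32_t ReadChannel(uint8_t address, uint8_t channel, double *value)
{
    return (int32_t)mccboard_read_channel(address, channel, value);
}

-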
LabVIEW Community Edition Announced
Rolf Kalbermatter replied to hooovahh's topic in LabVIEW Community Edition
Send one my way and I'll have a preliminary library done in a few days. 🙂 -
What selection do you propose? So far the question is simply too broad. Quick processing in terms of easy to set up, or quick in terms of performance? Using any specific hardware or not? With LabVIEW Vision or something else? What are your requirements?
-
Things are not that simple! LabVIEW on Linux RT comes in two flavors: one for the ARM-based targets and one for the x86-based targets. The 64-bit Fedora/CentOS/Red Hat binaries MIGHT work for the x86-based targets, but in my work with these I always compiled the shared libraries from source for those targets. For the ARM-based targets you will almost certainly need to recompile them from source. Just because it says ARM does not by any means make it all the same, even though many SmartTVs and similar hardware probably run on some ARM CPU. The NI Linux RT used on the ARM targets needs binaries compiled specifically for its ARM Cortex-A implementation, and they should be compiled with softfp enabled, otherwise floating point performance won't be very good. There are basically two ways to get a valid binary for the ARM cRIOs (this works for the x86_64-based cRIOs too):
1) cross compilation with a GNU C cross compiler that is prepared for the specific target, as explained here: http://www.ni.com/tutorial/14625/en/
2) installing the C development tools on your actual target using the opkg feeds from NI and compiling the sources there on the command line. This is however only for those who are not faint of heart and know how configure, make, make install works. 🙂 And you shouldn't be afraid to tweak the generated Makefile a little to fit your system exactly.
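As a minimal sketch of option 1: a trivial shared library source, with assumed cross compiler invocations in the comment. The exact compiler prefix is an assumption on my part and depends on which GNU toolchain version you install per the tutorial above.

/* mylib.c - minimal shared library example for an NI Linux RT target.
 *
 * Assumed cross compiler names; adjust to your installed toolchain:
 *   ARM cRIO:    arm-nilrt-linux-gnueabi-gcc -shared -fPIC -o libmylib.so mylib.c
 *   x86_64 cRIO: x86_64-nilrt-linux-gcc      -shared -fPIC -o libmylib.so mylib.c
 */
#include <stdint.h>

int32_t AddTwo(int32_t a, int32_t b)
{
    return a + b;
}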
-
There is no need to go through external code for this. There have been many attempts at crypto libraries written natively in LabVIEW, and they didn't fail because it is impossible, but because nobody is interested in spending some time searching for them or, what a bad word, forking over a few dollars for them. That way authors have put out libraries in the past only to have them forgotten by the public, and that is the surest way to discourage any maintenance work and improvement. Probably the first one was Enrico Vargas, who wrote a pretty versatile crypto library entirely in LabVIEW somewhere around 2000. It contained many hash and even symmetric algorithms that were pretty well tested, and he was an expert in that subject. And yes, he charged something for that library, which I found reasonable; I bought a license too and collaborated with him a little on some algorithms and on testing them. I doubt he made much money with it though, as with most toolkit providers. Eventually it died off, partly because he pursued other career options and maybe also partly because providing support for something that many were asking for but very few were willing to pay for is a frustrating exercise. A little googling delivers the following solutions currently available:
https://github.com/gb119/LabVIEW-Bits/tree/master/Cryptographic Services/SHA256
https://lvs-tools.co.uk/software/encryption-compendium-labview-library/
https://gpackage.io/packages/@mgi/hash
Interesting disclaimer in the last link! 🙂 I would say whoever understands the implications of this is already aware of the limits of using such functions, and whoever doesn't won't be bothered by it. There are several aspects to this, such that calling the same function in .Net or the WinAPI (also a possibility) is not necessarily safer, as the actual string is still possibly somewhere in LabVIEW memory after the function is called, no matter how diligent the external library is about clearing any buffer it uses. Also, many hashes are mostly used for hashing known sources, which does not have the problem that the original string or byte stream needs to stay secret at all, as it is already in memory elsewhere anyway. So for such applications the use of these functions in LabVIEW would not cause any extra concerns about lingering memory buffers that might contain "the secret" after the function has finished.
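To illustrate that last point with a sketch (using OpenSSL's SHA256() and OPENSSL_cleanse() merely as an example of an external library, an assumption on my part): even a wrapper that diligently scrubs its own working buffer cannot remove the copy of the secret that still sits in the LabVIEW string that was passed in.

#include <string.h>
#include <openssl/sha.h>
#include <openssl/crypto.h>

/* "Diligent" external hash wrapper: hashes the input and scrubs its own copy.
   The original bytes still live in LabVIEW's memory, which is the point above. */
void HashSecret(const unsigned char *secret, size_t len,
                unsigned char digest[SHA256_DIGEST_LENGTH])
{
    unsigned char local[256];
    size_t n = len < sizeof(local) ? len : sizeof(local);

    memcpy(local, secret, n);              /* working copy inside the DLL     */
    SHA256(local, n, digest);              /* compute the hash                */
    OPENSSL_cleanse(local, sizeof(local)); /* scrubs only the DLL's own copy  */
}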
-
I definitely saw discussions about this somewhere that were not LabVIEW 2020 related. The solution was to create (or edit) an environment variable for the MKL library itself, where some thread configuration was forced explicitly rather than letting MKL detect the right configuration automatically. See this thread for some discussion of the problem and possible solutions. It seems to be related to the latest AMD Ryzen CPUs with a specific SSE architecture. That it falls back to trying to load from the penguin path is however rather strange. Generally those paths only exist in the executable as debug messages related to the source module the specific code was compiled from.
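As a sketch of what such a forced configuration can look like: the names below are standard MKL environment variables, but whether these are exactly the ones the linked thread recommends is an assumption. They have to be in place before lvanlys.dll, and thus the MKL, initializes, for example set system-wide or by a small launcher like this.

#include <stdlib.h>

/* Force the MKL threading configuration through environment variables before
   the library loads; the values here are just examples. */
int main(void)
{
#ifdef _WIN32
    _putenv("MKL_NUM_THREADS=4");
    _putenv("MKL_DYNAMIC=FALSE");
#else
    setenv("MKL_NUM_THREADS", "4", 1);
    setenv("MKL_DYNAMIC", "FALSE", 1);
#endif
    /* ...start the application / load lvanlys.dll after this point... */
    return 0;
}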
-
lvanlys.dll has for many moons been for the most part a thin wrapper around the Intel Math Kernel Library (MKL). It used to be a fully NI-private implementation of the analysis functions, but at some point NI realized that they can never beat the guys from Intel at making hyper-optimized versions of those functions that work on virtually any CPU, from early Pentiums to the latest iCore monsters, with optimal performance (and also work on AMD CPUs). So if the Intel MKL for some reason refuses to load, such an error could occur. Incidentally, I do remember some thread over on the blue side that mentions problems with the MKL on more modern CPUs with many cores (8+). There is a possibility to set up a configuration file, in the form of an ini file I believe, to tell the MKL to initialize in a specific way that allows it to load on such CPUs too. Maybe that is the problem here at hand?
-
Is LabVIEW a programming environment, vs Doom
Rolf Kalbermatter replied to Mefistotelis's topic in LabVIEW General
The main difference between LabVIEW and a C compiled file is that the compiled code of each VI is contained in that VI, and the LabVIEW Runtime then links these code chunks together when it loads the VIs. In C the code chunks are per C source file, put into object files, and all those object files are then linked together when building the final LIB, DLL or EXE. Such an executable image still has relocation tables that the loader has to adjust when the code is loaded at a different memory address than the preferred address defined at link time, but that is a pretty simple step. The LabVIEW runtime linker has to do a bit more work, which the linker part of the C compiler has mostly already done. For the rest, LabVIEW's code execution is much more like a C compiled executable than any virtual machine language like Java or .Net's IL bytecode, as the compiled code in the VIs is fully native machine code. Also, bytecode is by nature defined to be address independent, while machine code, although it can use position-independent addressing, usually has some absolute addresses in it. It's very easy to jump to conclusions from looking at a bit of assembly code in the LabVIEW runtime engine, but those conclusions are not usually correct. In this case the code chunks in each VI are real compiled machine code, directly targeted at the CPU.

In the past this was done through a proprietary compiler engine that created the final machine code in several stages. It already included the separation where the diagram was first translated into a directed graph, which was then optimized in several steps, and the final result was then put through a target-specific compiler stage that created the actual machine code. This was however done in such a way that it initially wasn't too easy to switch the target-specific compiler stage on the fly, so cross compiling wasn't very easy to add when they developed the Real-Time addition to LabVIEW. They eventually improved that with a unified API to the compiler stages so that they could be switched on the fly to allow cross compilation for the real-time targets, which eventually appeared in LabVIEW 7. LabVIEW 2009 finally introduced the DFIR (Dataflow Intermediate Representation) by formalizing the directed graph representation further, so that more optimizations could be performed on it, and it could eventually be used in LabVIEW 2010 as input to the LLVM (Low-Level Virtual Machine) compiler infrastructure. While this would theoretically allow leaving the code in an intermediate language form that is only translated on the actual target at runtime, this is not what NI chose to do in LabVIEW, for several reasons. The LLVM creates fully compiled machine code for the target, which is then stored (in the VI for a built executable or when code separation is not enabled, otherwise in the compile cache).

When you load a VI hierarchy into memory, all the code chunks for each VI are loaded, and based on linker information created at compile time and also stored in the VI, the linker in the LabVIEW runtime makes several modifications to each code chunk to make it executable at the location where it is loaded and to make it call into the correct other code chunks that the VIs consist of. This is indeed a bit more than what the PE loader in Windows needs to do when loading an EXE or DLL, but it isn't really very different. The only real difference is that the linking of the COFF object modules into one bigger image has already been done by the C compiler when compiling the executable image, and that LabVIEW isn't really using COFF or OMF to store its executables, as it does all the loading and linking of the compiled code itself and doesn't need to rely on an OS-specific binary image loader. -
Beckhoff TwinCAT vs LabVIEW + compactRIO compare and contrast
Rolf Kalbermatter replied to MarkCG's topic in LabVIEW General
We did indeed have some FPGA operation in there, including some quadrature encoding and debounce circuitry. The main reason it was done with the NI chassis, however, was that they were already there from an earlier setup, and it was considered cheaper to use them rather than to buy Beckhoff IO terminals. -
Direct read/write of Transfer Buffer possible?..
Rolf Kalbermatter replied to dadreamer's topic in LabVIEW General
That's a terrible hack. LabVIEW controls are not pointers. They are complex objects whose data elements can be dynamically allocated, resized and deallocated. It may work for a while, but it is utterly susceptible to LabVIEW versions, platforms, and what else! The DCO/DDO concept is quite something else than the actual memory layout. It's related, but there is no hard rule that says the DCO or DDO layout in memory can't change, and it has changed in the past. CIN support is utterly legacy. There are no tools to create CINs for any of the new platforms released since around LabVIEW 8.2. That includes all the 64-bit platforms as well as the NI Linux RT platforms (both ARM and x64). One very bad aspect of CINs is that the code resource is platform specific and needs to be inside the VI, and there is only space for one code resource per CIN routine per VI. So if you want to support multiple platforms, you have to create a copy of the VI for each platform, put the according code resource in there, and then somehow place the correct VI on the system when installing your library to a particular platform. An utter maintenance nightmare! -
Direct read/write of Transfer Buffer possible?..
Rolf Kalbermatter replied to dadreamer's topic in LabVIEW General
That makes no sense. But I'm not going to tell you you can't do that. 🙂 The control value is completely separated from the data value in the wire. Assuming they are the same is utterly useless. There might be circumstances where they appear to be the same, but treating them as if they were would simply cause potential trouble. You can NOT control LabVIEW's memory management on such a level. LabVIEW reserves the right to reschedule code execution and memory reallocations depending on seemingly minimal changes, and that also can and will change between versions. The only thing you can say is that in a particular version, without any change to the diagram, you SHOULD get the same result, but anything else is simply not safe to assume. Trying to use this for a (possibly perceived) performance gain is utterly asking for a lot of pain in the long run. So don't do it! -
Direct read/write of Transfer Buffer possible?..
Rolf Kalbermatter replied to dadreamer's topic in LabVIEW General
My point is that for retrieving the pointer that a handle is, the two SPrintf() calls are utterly Rube Goldberg. They simply return the pointer that a handle is; if you passed the array as Array Handle, Pass Handle by Value to the first MoveBlock() function, you would achieve the same! Yes, I mean the data parameter. In your diagram you pass the U64 value of the control/indicator to it and declare the parameter as a pointer-sized integer (passed by reference?). But it should be the data type of the control passed by reference (so a U64 passed by reference). For more complex values like clusters and arrays it is probably best to configure this parameter simply as Adapt to Type (pass handles by reference). CINs always passed data by reference; there was no way to configure this differently. There was also no way to tell LabVIEW whether a parameter was const, other than by whether the output (right) terminal was connected. For the Call Library Node you have a Constant checkbox in the parameter declaration. This is a hint to LabVIEW that the DLL function will NOT modify (stomp on) the data, and LabVIEW is then free to schedule code in such a way that this CLN executes before other functions that may want to modify the data in place. If only one sink on a wire is marked as a stomper (wanting to modify the data), LabVIEW can avoid copying the data passed to the different nodes by making sure to schedule the non-stomping nodes first (and even in parallel if possible) before finally executing the one stomper node. Even if there are multiple stomper sinks on a single wire, it will schedule them such that all non-stomping nodes are executed first, if the dataflow allows it, and then create n - 1 data copies to pass to the different stomper nodes (with n being the number of nodes indicating that they MIGHT stomp on the data). Yes, this might be overzealous if a node decides not to stomp on the data after all, but LabVIEW always tries to work by the principle of better safe than sorry in this respect (and has sometimes failed at that in the past, but those are bugs the LabVIEW team wants to have squashed ASAP once they become aware of them). A small sketch of the handle-by-value point follows below.
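Here is that sketch (the typedef is written out for clarity; normally you would use the types and macros from extcode.h): when the CLN parameter is configured as Array Handle, passed by value, the function already receives the same pointer the SPrintf()/MoveBlock() detour was reconstructing.

#include "extcode.h"

/* Layout of a LabVIEW 1D U8 array as seen through its handle. */
typedef struct {
    int32 dimSize;
    uInt8 data[1];
} LV1DU8Arr, **LV1DU8ArrHdl;

/* CLN parameter: Array Handle, pass handle by value.
   *arr is already the data pointer; no SPrintf()/MoveBlock() detour needed. */
MgErr SumU8Array(LV1DU8ArrHdl arr, uInt32 *sum)
{
    *sum = 0;
    if (arr && *arr)
    {
        for (int32 i = 0; i < (*arr)->dimSize; i++)
            *sum += (*arr)->data[i];
    }
    return mgNoErr;
}

-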
Beckhoff TwinCAT vs LabVIEW + compactRIO compare and contrast
Rolf Kalbermatter replied to MarkCG's topic in LabVIEW General
I have only done minimal coding in TwinCAT. I've been involved on the side with such a project, where we had to provide support to get a few NI EtherCAT chassis to work in a system with Beckhoff controllers, Bronkhorst flow controllers and several other devices. While the Bronkhorst, Festo and NI IO chassis were on EtherCAT, there were other devices using Modbus and Ethernet communication, and that turned out to be more complex to get working in TwinCAT than initially anticipated. I was happy to sit on the side and watch them eventually get things solved rather than getting my own hands dirty with TwinCAT programming. 🙂 -
Beckhoff TwinCAT vs LabVIEW + compactRIO compare and contrast
Rolf Kalbermatter replied to MarkCG's topic in LabVIEW General
If your external hardware is EtherCAT based, then Beckhoff will be quite a bit easier to use. If it is heterogeneous, then IMHO LabVIEW tends to work better, but that is probably also due to my significantly greater experience with LabVIEW and all kinds of weird external hardware compared to Beckhoff TwinCAT. -
Direct read/write of Transfer Buffer possible?..
Rolf Kalbermatter replied to dadreamer's topic in LabVIEW General
That's all nice and pretty until you place an Array Resize node on the array wire to increase the array size. Et voilà, the internal pointer will most likely (though not necessarily always) change, as LabVIEW has to allocate a new memory area, copy the old content into it and deallocate the original memory. So while this is no real news to people familiar with the LabVIEW memory manager C function interface, it is at best a brittle approach to rely on. When you write a C DLL that receives handles from LabVIEW, the handle is only guaranteed to exist for the duration of the function call, and after you return control to the LabVIEW diagram, LabVIEW reserves the right to resize, delete, or reuse that array handle for other things as it sees fit. Your array_test.vi is a very Rube Goldberg solution for something that can be solved with a simple Resize Array node. What you basically do is format the handle value (which is a pointer to the actual memory pointer) into text, then convert that text back into a pointer (handle), then resize it with NumericArrayResize() and finally copy the data into that resized handle. It's equivalent to doing a Resize Array on the original array and then copying into that resized array, although in LabVIEW you wouldn't resize the handle for this but simply create a new one with that data, most easily by branching the wire; if you really want to, you could also use an autoindexing loop to make it a little more Rube Goldberg. I also kind of question the datatype of the third parameter of your ReadDCOTransferData and WriteDCOTransferData. As you have set it up, you treat it as a pointer (most likely passed by reference), but it should probably be the datatype of the control, passed by reference (in this specific case the easiest would be to simply configure it as Adapt to Type).
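For comparison, a sketch of doing the resize properly on the C side (the 1D U8 array typedef is spelled out here for clarity): after NumericArrayResize() the dereferenced data pointer may sit at a different address than before, which is exactly why caching that pointer on the diagram is brittle.

#include "extcode.h"

typedef struct {
    int32 dimSize;
    uInt8 data[1];
} LV1DU8Arr, **LV1DU8ArrHdl;

/* CLN parameter: Adapt to Type, handles passed by reference. Appends len bytes.
   After the resize, (**arr)->data can live at a new address. */
MgErr AppendBytes(LV1DU8ArrHdl *arr, const uInt8 *src, int32 len)
{
    int32 oldLen = (*arr && **arr) ? (**arr)->dimSize : 0;
    MgErr err = NumericArrayResize(uB, 1, (UHandle*)arr, oldLen + len);
    if (!err)
    {
        MoveBlock(src, (**arr)->data + oldLen, len);
        (**arr)->dimSize = oldLen + len;
    }
    return err;
}

-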
I wouldn't expect real problems nor any specific advantages in doing so for these functions. The Call Library Nodes are already configured to run in any thread, which allows the VI to execute the C function in whatever thread the VI is currently executing in. As such it should not give you any serious performance improvements unless you intend to call these functions from many different locations in parallel, and even then only if you do this for pretty large string buffers. For short string buffers the actual execution time of these functions should be so short that the likelihood of one call having to wait for another is pretty small. The obvious disadvantage is that you will have to redo this for every new version of the library. 🙂
-
Most likely when you pass in -1, it does a stat() (or possibly the equivalent internal FSGetSize() function) to determine the size of the "file" and reads that many bytes. Since it is a VFS, it can't return a valid size for the "file" (most likely it fills in 0), so LabVIEW concludes that it is an empty file and returns that.
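A quick way to see the effect outside of LabVIEW (plain C, on Linux): stat() on a /proc entry typically reports a size of 0, so a read sized from stat() yields nothing, while simply reading until EOF returns the real content.

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    char buf[4096];
    size_t n, total = 0;
    FILE *fp = fopen("/proc/cpuinfo", "rb");

    if (!fp)
        return 1;
    stat("/proc/cpuinfo", &st);
    printf("stat() size: %lld\n", (long long)st.st_size);  /* usually 0 on a VFS */

    while ((n = fread(buf, 1, sizeof(buf), fp)) > 0)        /* EOF-driven read */
        total += n;
    printf("bytes actually read: %zu\n", total);
    fclose(fp);
    return 0;
}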
-
TCP_NODELAY.llb library
Rolf Kalbermatter replied to Mark Yedinak's topic in Remote Control, Monitoring and the Internet
Recently? OpenSSL does that all the time. There have been incompatible API changes all along: some between 0.9 and 1.0, a few more serious ones between 1.0 and 1.1, including renaming the shared library itself to "mitigate" the version incompatibility problem. And expect some more when they go to 3.0. Wait, 3.0? What happened to 2.x? 🙂 And when you look at their examples, they are riddled with
#if OpenSSLVer <= 0968
    call this API
#else
    call this other API
#endif
(a concrete sketch with the real version macro follows at the end of this post). That's definitely not going to work well unless you can always control exactly which version of OpenSSL is installed on a computer for your specific app, and then you are stuck doing all the maintenance yourself when a new version is released. I sort of managed to make the code of my library auto-detect the changes between 0.9 and 1.0 and adapt dynamically, but I definitely gave up when they started 1.1. That, together with the fact that IPv6 and TLS support beyond what the LabVIEW HTTP VIs offer wasn't a priority anywhere. Now with 1.0.2 definitely gone into obsolete mode, my library wouldn't even compile properly for the time being. 🙂

PS: I just recently read somewhere that the IPv4 address range has now officially been depleted, so no new IPv4 address ranges can be given out anymore. This still isn't the end of the internet, as many internet providers use dynamic IP address assignment and nobody should even be thinking about connecting their coffee machine or fridge directly to the internet 🙂, but it definitely shows that IPv6 support by internet providers should be something they care about. But while we here in the Netherlands have one of the highest internet connectivity rates in the world, the majority of internet providers still don't provide "working" IPv6 connectivity themselves. You have to use tunnels to test that!
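To pick one concrete instance of that version juggling (a real rename between 1.0.x and 1.1.0): EVP_MD_CTX_create() became EVP_MD_CTX_new(), so portable code ends up wrapping even trivial calls like this sketch.

#include <openssl/evp.h>
#include <openssl/opensslv.h>

/* Allocate a message digest context across OpenSSL 1.0.x and 1.1.x+. */
EVP_MD_CTX *make_md_ctx(void)
{
#if OPENSSL_VERSION_NUMBER < 0x10100000L
    return EVP_MD_CTX_create();   /* 1.0.x name */
#else
    return EVP_MD_CTX_new();      /* renamed in 1.1.0 */
#endif
}

-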
TCP_NODELAY.llb library
Rolf Kalbermatter replied to Mark Yedinak's topic in Remote Control, Monitoring and the Internet
You are aware that there is a LabVIEW Idea Exchange entry about SSL/TLS support in LabVIEW that has been marked "in development" for about 6 months? Most likely not something that will appear in LabVIEW 2020, though. I was considering reviving my library, but when I saw that I abandoned the idea. -
TCP_NODELAY.llb library
Rolf Kalbermatter replied to Mark Yedinak's topic in Remote Control, Monitoring and the Internet
It depends on how you implement them, but supporting asynchronous behaviour is a little nasty, as you basically need to create a temporary structure that gets initialized on the first call, and then you need to call repeatedly into the shared library function, which calls poll() or select() to check the status of the socket and updates the structure, until the operation completes either because an error occurred, the timeout elapsed, or the requested data has been sent/received. Without that, your call will be synchronous, consuming the calling LabVIEW thread and limiting potential parallelization, which the native node supports out of the box. Synchronous operation is not a problem as long as you only process one or two connections at the same time, but it will certainly cause problems for server applications or clients that need to process many separate connections quasi in parallel. Why do I know that? I did the asynchronous implementation in this library:
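Roughly, the pattern looks like this sketch (just an illustration of the repeated-poll idea, not code lifted from that library): the caller keeps the small state structure alive and calls the function again and again until it stops reporting "still pending".

#include <errno.h>
#include <poll.h>
#include <sys/socket.h>

/* State kept across calls; fd is a non-blocking socket with connect() started. */
typedef struct {
    int fd;
} AsyncConn;

/* Returns 0 when connected, EAGAIN while still pending, or an error code. */
int PollConnect(AsyncConn *c, int timeoutMs)
{
    struct pollfd pfd = { .fd = c->fd, .events = POLLOUT };
    int soerr = 0;
    socklen_t len = sizeof(soerr);

    int r = poll(&pfd, 1, timeoutMs);
    if (r == 0)
        return EAGAIN;      /* not ready yet, call again from the calling loop */
    if (r < 0)
        return errno;       /* poll() itself failed */
    getsockopt(c->fd, SOL_SOCKET, SO_ERROR, &soerr, &len);
    return soerr;           /* 0 = connected, otherwise the connect error */
}

-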
TCP_NODELAY.llb library
Rolf Kalbermatter replied to Mark Yedinak's topic in Remote Control, Monitoring and the Internet
That's not a publicly exported API, unfortunately. This is an internal function called by the NCConnect(), NCCreateListener() and NCWaitOnListener() functions when a socket needs to be wrapped into a network refnum. And no, the according APIs with these names that are exported are just stubs that return an error. The real implementation is in non-exported functions too, as someone back in the LabVIEW 4.0 or 5.0 days decided that the Network Manager API should not be exported from the LabVIEW kernel for some reason. Most probably a leftover from the LabVIEW for Mac days, when the TCP/IP library was an external library implemented with CINs. Rather than just removing the functions from the export table, they renamed them internally, all network functionality uses those new names, and empty stubs returning a "Manager Call not Supported" status were exported instead. -
Sounds like a homework assignment. And not a very complicated one once you have done a basic LabVIEW programming course. You will want to look into loops with shift registers for this.