Everything posted by Rolf Kalbermatter
-
A colleague recently tried to use the OpenG Variant Configuration File Library and found that loading and saving of more complex structures was pretty slow. A little debugging quickly showed the culprit, which is the way recursion in that library is resolved: by opening a VI reference to itself to call the VI recursively. In LabVIEW 2012 and later the solution to this problem is quick and painless: just replace the Open VI Reference, Call VI by Reference and Close VI Reference with the actual VI itself. Works like a charm, and loading and saving times are then pretty much on par with explicitly programmed VIs using the normal INI file routines (cutting down from 50 seconds to about 500 ms for a configuration containing several hundred clustered items). Now I was wondering if there is anyone who would think that updating this library to require LabVIEW 2012 or later would be a problem?
-
Definitely! NI licensed the Xilinx toolchain from Xilinx to be distributed as part of the FPGA toolkit, and there will certainly be some limitations in the fine print that Xilinx requires NI to follow as part of that license deal. They do not want ANY customer to be able to rip the toolchain out of a LabVIEW FPGA installation and use it to program ANY Xilinx FPGA hardware instead of buying the toolchain from Xilinx, which starts at $2995 for a node-locked Vivado Design HL license. I would assume that to be similar to what NI bundles, except that NI also bundles the older version for use with older cRIO systems. So while NI certainly won't like such hardware offerings, as they hurt cRIO sales to some extent, NI may be contractually obligated to act against such attempts to circumvent the Xilinx/NI license deal, whether they want to or not.
-
Callin a mixed mode (c#,managed c++) dll in Labview
Rolf Kalbermatter replied to demess's topic in Calling External Code
Hard to say anything conclusive without the ability to debug the libraries in source (and no, I don't volunteer to do that, that would be the original developer's task). Generally .Net only looks at the GAC and the current process' executable directory when trying to load assemblies. This has been done on purpose, since the old way of locating DLLs all over the place in various default and not-so-default places created more trouble than it actually solved. An application can then register additional directories explicitly for a .Net context. LabVIEW seems to maintain separate .Net contexts per application instance, and a project is an application instance in LabVIEW, isolating almost everything from any other application instance even though you run it in the same LabVIEW IDE process. For project application instances LabVIEW also registers the directory in which the project file resides as a .Net assembly location. This may or may not have anything to do with your issue, but from the description of your issues, it could be that one of your assemblies is trying to load some other assembly and not properly catching the exception when that fails. But this is really all guesswork without a deeper look into the actual .Net components involved. If you can't get the original developer of the .Net component to look into this issue for you with a source code debugger, I don't see a lot of chances to get this working. -
There is definitely a difference depending on whether you use a shift register for the error cluster or not:

- error before loop (n >= 1): with shift register the loop does nothing n times; without shift register it also does nothing n times.
- error before loop (n = 0): with shift register the error is still visible after the loop; without shift register the error has magically disappeared.
- error occurs in loop iteration x of n: with shift register only iterations 0 .. x-1 execute; without shift register all n iterations execute.
- error passed out of the loop: with shift register the first error in the loop is passed out; without shift register only the last error of the loop execution is passed out, unless you create an auto-indexing error array.

Generally only the case without a shift register where all iterations still execute after an error is sometimes preferable over what the shift register would cause. Losing the error after a zero-iteration loop or passing out only the last error is definitely not desirable in any code that you do not intend to throw away immediately.
-
IP Camera Surveillance TCP-IP
Rolf Kalbermatter replied to edupezz's topic in Machine Vision and Imaging
The page you link to is not very detailed. But under Network Protocol it says: TCP/IP, HTTP, DHCP, DNS. You can forget the last two, they do not mean anything for the actual accessibility, but HTTP most likely means that you can access a periodically refreshed still image (JPEG format) from it if you can figure out the right URL path. TCP/IP with H.264 most likely hints at support for live streaming with the right driver. You will need to look for an IP camera driver for your OS that supports the H.264 compression. Alternatively you can most likely install something like https://ip-webcam.appspot.com/ to access at least the JPEG interface on your camera. This driver translates the JPEG images into a DirectX interface that you can then interface to with the IMAQdx driver software from NI to get the images into LabVIEW IMAQ. -
Callin a mixed mode (c#,managed c++) dll in Labview
Rolf Kalbermatter replied to demess's topic in Calling External Code
Probably something with the .Net assembly search path in combination with some badly implemented dynamic linking. .Net by default only searches the GAC and the directory of the current executable for any .Net assemblies. LabVIEW adds to that the directory in which the current project file is located, if you run the VIs from within a project. -
1) is bad form. It will cause an invalid refnum (and empty arrays and strings, and 0 for integers and floats) after the loop if the loop iterates 0 times. And yes, even if you think it will never be possible to execute 0 times, you always get surprised when the code is running somewhere on the other side of the globe and you only have a very slow remote connection that makes live debugging impossible. 2) is the preferred way for me for any refnum and most other data. Anecdotally, LabVIEW had some pretty smart optimization algorithms in its compiler that were specifically targeting shift registers, so for large data like arrays and long strings it could make a tremendous difference if you put this data into a shift register instead of just wiring it through. The latest versions of LabVIEW have many more optimization strategies, so the shift register is not strictly necessary for LabVIEW to optimize many more things than before. However, the shift register won't harm performance either and is still a good hint to LabVIEW that this data can in fact be treated in place. 3) is simply ugly, although it doesn't have the problem of invalid data after a 0-iteration loop (but it can potentially prevent LabVIEW from doing deeper optimization, which for large data can make a real difference). For error clusters 1) can be a possible approach, although you still have the potential to lose error propagation on 0-iteration loops that way. You do need to watch out for additional possible problems when an error occurs. One possibility is to simply abort the loop immediately, which the conditional terminal for For Loops is very handy for; another is to handle the error and clear it so the next iterations still do something. The least preferable solution is to just keep looping anyway and do nothing in the subsequent iterations (through the internal error handling in the VIs that you call), BUT ONLY for For Loops!! Never ever use this approach in While Loops without making 200% sure that the loop will abort on some criterion anyhow! I've debugged way too many LabVIEW applications where an error in a while loop simply made the loop spin forever doing nothing.
-
To my knowledge not beyond what you can get from the OpenG library. When I tried to do that I was looking for some further documentation but couldn't find anything beyond some tidbits in remotely related knowledge base articles. So I simply skimmed through the different packages and deduced the necessary format for the .cdf (component definition file). Being an XML format file, it was somewhat easy to guess the relevant parts. In newer LabVIEW versions there is an option to create an installation package for RT from an RT project, which uses the same cdf file, but the creation of that file is of course hidden inside the according package builder plugin. If the purpose is only to create an installer component for a shared library, then the files from the OpenG ZIP library should give you a head start. One problem you might have to solve is getting the files into the RT Images subdirectory. Since it is inside Program Files, your installer app needs elevated privileges in order to put it there. If you do this as part of a VIPM or OGP package file, VIPM is often not started with such rights by default. I solved that by packing all the components into a setup file made with Inno Setup, which will request the privilege elevation. I then start that setup file at the end of the package installation and it installs the components into the RT Images subdirectory.
-
LabVIEW doesn't deploy shared libraries to the embedded targets itself (except for the old Pharlap based systems). So in my experience you always need to find a way to bring the necessary shared libraries to those targets yourself. One option is to just copy them manually into the correct location for the system in question (/usr/local/lib for the NI Linux RT based systems). Another is to create a distribution package that the user can then install to the target through the software installation section for the specific target in NI MAX. The creation of such a package isn't too difficult; it is mostly a simple configuration file that needs to be made and copied into a subdirectory of the "Program Files/National Instruments/RT Images" folder, together with the shared library file(s) and any other necessary files. The OpenG ZIP library uses this last method to install support for the various NI RT targets.
-
Right, it only works for byte arrays, no other integer arrays. And they forgot the UDP Write, which is actually at least as likely to be used with binary byte streams. Someone had a good idea but only executed it halfway (well, really a quarter of the way, if you think about the Read). Too bad that the FlexTCP functions they added under the hood around 7.x were never finished.
-
Wait, are you sure? I totally missed that! OK well:
2014: no byte stream on TCP Write
2015: no byte stream on TCP Write
2016: haven't installed that currently
-
I didn't mean to use shared libraries for endianness issues, that would be madness. But data passed to a shared library is always in native format; anything else would be madness too. Strings being used as the standard catch-all data type is sort of OK in a language that only uses (extended) ASCII characters for strings. Even in LabVIEW that is only sort of true if you use a western language version. Asian versions use multibyte character encoding, where a byte is not equal to a character at all. So I consider a byte stream a more appropriate data type for the network and VISA interfaces than a string. Of course the damage has been done already and you can't take away the string variant now, at least not in current LabVIEW. Still, I think it would be more accurate to introduce byte stream versions of those functions and drop them by default on the diagram, with an option to switch to the (borked) string version they have now. I would expect a fundamentally new version of LabVIEW to switch to byte streams throughout for these interfaces. It's the right format, since technically these interfaces work with bytes, not with strings.
-
That's because you don't write shared libraries! Only the flattened LabVIEW formats use Big Endian by default; absolutely everything else is in native byte order. And the only places where LabVIEW flattens data are in its own internal VI Server protocol, when using the Flatten, Unflatten and Typecast functions, or when writing or reading binary data to/from disk. Still waiting for the FlexTCP Read and Write that do not use strings as data input but directly the LabVIEW binary data types (and definitely a byte array instead of a string!! Same for VISA, the byte array I mean; strings simply do not cover the meaning of what is transferred anymore in a world of Unicode and Klingon language support on every embedded OS!!!).
-
It's nitpicking a bit, but the options for the Flatten (and Unflatten) functions are Big Endian (or network byte order, which is the same), Little Endian and native. Big and Little Endian should be clear; native is whatever the current architecture uses, so currently Little Endian on all LabVIEW platforms except when you run the code on an older PowerPC based cRIO. LabVIEW internally uses whatever endianness the native architecture uses, but its default flattened format is Big Endian. Those two are very distinct things: if LabVIEW used Big Endian internally, it would need to convert every number every time it is passed to the CPU for processing.
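To make the distinction concrete, here is a small C sketch (not LabVIEW code, purely an illustration) of what "native in memory" versus "flattened to big endian" means for a 32-bit integer on a little-endian x86 machine; htonl() is the classic host-to-network-order conversion and is a no-op on big-endian hosts:

/* Illustration only: native byte order in memory versus the big endian
   (network) byte order used for flattened data. */
#include <stdint.h>
#include <stdio.h>
#ifdef _WIN32
#include <winsock2.h>   /* htonl() on Windows */
#else
#include <arpa/inet.h>  /* htonl() on POSIX */
#endif

int main(void)
{
    uint32_t value = 0x12345678;    /* what the CPU computes with */
    uint32_t flat  = htonl(value);  /* same value in big endian (network) order */

    const uint8_t *n = (const uint8_t *)&value;
    const uint8_t *b = (const uint8_t *)&flat;
    /* On x86/x64 this prints 78 56 34 12 for native and 12 34 56 78 for flattened. */
    printf("native    : %02X %02X %02X %02X\n", n[0], n[1], n[2], n[3]);
    printf("flattened : %02X %02X %02X %02X\n", b[0], b[1], b[2], b[3]);
    return 0;
}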
-
Allocating Host (PC) Memory to do a DMA Transfer
Rolf Kalbermatter replied to PJM_labview's topic in Hardware
Well, I'm pretty sure that for DMA transfer you do need a physical memory address. The LabVIEW DSNewPtr() function allocates a chunk on the process heap, which is a virtual memory area for each process. Between this virtual address space and the actual physical address sits the CPU's MMU (memory management unit), which translates every address from virtual memory to the actual physical memory address (and in the case of already cached memory locations this step is skipped and the address is translated directly to the cache memory location). The DMA controller can only transfer between physical memory addresses, which means that for a DMA transfer all the cache lines currently caching any memory that belongs to the DMA transfer block need to be invalidated. So you first need to lock the virtual address block (the DMA controller would get pretty panicky if the virtual memory suddenly was moved or paged out), which will also invalidate the cache for any area in that block that is currently cached, and retrieve its real physical address. Then you do the DMA transfer on the physical address, and afterwards you unlock the memory area again, which also invalidates the physical address you previously got. Incidentally, the services you need to do physical address translation and locking are all kernel space APIs. VISA comes with its own internal kernel driver, which exports functions for the VISA layer to do these things, but performance suffers considerably if you do it this way. The alternative, however, is to have to write a kernel driver for your specific hardware. Only in a kernel driver do you have direct access to physical memory translation and such things, since these APIs are NOT accessible from the ring 3 user space that normal Windows processes run in. And yes, Windows limits the size of blocks you can lock. A locked memory area is a considerable weight on the leg of the OS, and in order to limit the possibility for a kernel driver to drown the OS for good, this limitation is necessary. Otherwise any misbehaving driver could simply DOS the OS by requesting an unreasonably big block to be locked.
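Purely as an illustration of those kernel space steps, here is a minimal C sketch assuming a Windows WDM driver that pins a user buffer the device will write into; the function names of the surrounding driver plumbing are omitted and nothing here is taken from any specific NI driver:

#include <ntddk.h>

NTSTATUS LockForDma(PVOID userBuffer, ULONG length, PMDL *outMdl)
{
    PMDL mdl;
    PPFN_NUMBER pfns;

    mdl = IoAllocateMdl(userBuffer, length, FALSE, FALSE, NULL);
    if (mdl == NULL)
        return STATUS_INSUFFICIENT_RESOURCES;

    __try {
        /* Pin the pages so the memory manager cannot page them out or move
           them while the DMA controller works on the physical pages behind
           them. IoWriteAccess because the device will write into the buffer. */
        MmProbeAndLockPages(mdl, UserMode, IoWriteAccess);
    } __except (EXCEPTION_EXECUTE_HANDLER) {
        IoFreeMdl(mdl);
        return GetExceptionCode();
    }

    /* The MDL now describes the physical page frame numbers of the buffer;
       these are what ultimately get programmed into the DMA controller
       (normally through the HAL/DMA adapter routines, not directly). */
    pfns = MmGetMdlPfnArray(mdl);
    (void)pfns;

    *outMdl = mdl;
    return STATUS_SUCCESS;
}

VOID UnlockAfterDma(PMDL mdl)
{
    /* Unpinning invalidates the physical addresses obtained above. */
    MmUnlockPages(mdl);
    IoFreeMdl(mdl);
}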
-
What Database Toolkit do you use?
Rolf Kalbermatter replied to drjdpowell's topic in Database and File IO
Most of the database work we do is SQL Server, occasionally Oracle for specific customers, and we normally use our own database toolkit. Differences to the NI database toolkit are Express VI based configuration wizards for the queries and transparent support for multiple database drivers such as MS SQL Server, Oracle and MySQL (MS Access too, but that hasn't been used in ages, so I wouldn't vouch for its spotless operation at this point). In addition I have my own ODBC based API that I have used in the past. I'm still considering incorporating everything into a unified interface, likely based on a LabVIEW class interface, but the priority for that never seems to make it into the top 5. -
Actually btowc() is the single byte version of mbtowc(). Both convert a single character, but the first only works for single byte characters, while the second will use as many bytes from a multibyte character sequence (MBCS) as are needed (and return an error if the byte stream in the mbcs input starts with an invalid byte code or is not long enough to describe a complete MBCS character for the current locale). mbstowcs() then works on whole MBCS strings, while mbtowc() only processes a single character at a time. Please note that a character is generally not a single byte, although here in the western hemisphere you get quite far with assuming that it is, even though that is not quite safe to work from. Definitely on *nix systems, which nowadays often use UTF-8 as the default locale, you automatically end up with multibyte characters for the umlaut, accent and other characters many European languages use. Windows solves it differently by using codepages for the non-Unicode environment, which for western locales simply means that for extended characters the same byte means something different depending on the codepage you have configured. But even there you need MBCS encoding for most non-western languages anyhow. UTF-8 to UTF-16 conversion is a fairly straightforward conversion, although the simple approach of doing only some bit shifting could end up with invalid UTF-16 characters. A fully compliant conversion is somewhat tricky to get right yourself, as there are some corner cases that need to be taken care of.
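For reference, a minimal C sketch of how these functions relate to each other; the example string and buffer size are of course arbitrary:

#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <wchar.h>

int main(void)
{
    /* Use the environment's locale; on most current Linux systems that is a UTF-8 locale. */
    setlocale(LC_ALL, "");

    const char *mbcs = "K\xC3\xA4se";   /* "Kaese" with an a-umlaut, encoded as UTF-8 */
    wchar_t wide[16];

    /* mbstowcs() converts the whole multibyte string, consuming as many bytes
       per character as the current locale requires. */
    size_t chars = mbstowcs(wide, mbcs, sizeof wide / sizeof wide[0]);
    if (chars == (size_t)-1) {
        fprintf(stderr, "invalid multibyte sequence for this locale\n");
        return 1;
    }
    printf("%zu bytes decoded into %zu wide characters\n", strlen(mbcs), chars);

    /* mbtowc() does the same for a single character and reports how many bytes
       it consumed, which is how you step through an MBCS string manually. */
    wchar_t wc;
    int consumed = mbtowc(&wc, mbcs, MB_CUR_MAX);
    printf("first character occupies %d byte(s)\n", consumed);
    return 0;
}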
-
Well, Windows 3.1 was not fully protected mode anyhow. Your LabVIEW process could quite easily kill your File Manager (the predecessor of your good old File Explorer, for those young sports knowing Windows 3.1 only from fairy tales of others). And multitasking was fully cooperative: if an application forgot to call the GetMessage() API (or at least the PeekMessage() call) in its message loop (the root loop in LabVIEW, and the thread 0 or UI thread too), then all Windows applications were hanging for good and only some kernel driver stuff would still be working in the background. To be fair, though, MacOS Classic was about the same. That bit about sharing globals between LabVIEW executables does sound a bit strange to me though. What you could do is reference VI files (and globals) in an executable as if they were VIs in an LLB (which they actually were back in those days). But that wouldn't really share the dataspace, only create a copy of the VI and its dataspace in the other application. There is no safe way to abort a thread that has been locked by an external DLL and resume operation from just after calling that DLL. That DLL could have been suspended in a kernel call at that point (and quite often that is exactly where the DLL is actually waiting), and yanking the floor out from under its feet at that point could leave the kernel driver in a very unstable state that could simply crash Windows.
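For those who never saw it, this is roughly what that cooperative message loop looked like in C (a minimal sketch, with the window registration and creation omitted); under Windows 3.1, GetMessage() was also the point where an application yielded the CPU to every other program:

#include <windows.h>

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR lpCmdLine, int nCmdShow)
{
    MSG msg;
    (void)hInst; (void)hPrev; (void)lpCmdLine; (void)nCmdShow;
    /* ... RegisterClass() and CreateWindow() omitted ... */

    /* A program that stopped spinning this loop froze the whole desktop,
       because no other application got scheduled cooperatively anymore. */
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);   /* turn raw key messages into WM_CHAR */
        DispatchMessage(&msg);    /* hand the message to the window procedure */
    }
    return (int)msg.wParam;
}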
-
Callin a mixed mode (c#,managed c++) dll in Labview
Rolf Kalbermatter replied to demess's topic in Calling External Code
Well, not necessarily exactly the same CLR version, but there has been a change somewhere between 3.5 and 4.0 of the CLR which usually makes this break. Both the calling application, here LabVIEW, and your callees need to be on the same side of this limit. LabVIEW until 2012 or so loaded the CLR for .Net 3.5 or lower by default, and newer LabVIEW versions load CLR 4.0 or higher by default. If your assemblies are created for a different CLR version, you either have to recompile them to use the version LabVIEW is using, or you have to create a manifest file for LabVIEW to make it load a different CLR when initializing the .Net environment. The exact procedure for creating that manifest file is detailed here. Things get hairy when you have to incorporate multiple mixed mode assemblies that use CLRs from both sides of the 4.0 version limit. -
Callin a mixed mode (c#,managed c++) dll in Labview
Rolf Kalbermatter replied to demess's topic in Calling External Code
If the library is written and compiled as a managed .Net assembly, you do not use the Call Library Node to call its methods but the .Net palette instead. While a .Net assembly has a .dll file extension, it is not a classical DLL at all, and only the .Net CLR has any idea how to load and reference it. -
Can you elaborate on what you mean by sysconfig? For me that is a configuration file under /etc/sysconfig on most *nix systems, not a DLL or shared library I would know of. From the name of it, if it is a DLL, it is probably a highly UI centric interface meant to be used from a configuration utility, which could explain the synchronous nature of the API, since users are generally not able to handle umpteen configuration tasks in parallel. But then I wonder what in such an API would need to be called continuously in normal operation and not just during reconfiguration of some hard- or software components. As to LabVIEW allocating threads dynamically, that is a pretty tricky thing. I'm pretty sure it could be done, but not without a complete overhaul of the current thread handling. And something like this is an area where even small changes can have far-reaching and sometimes very surprising effects, so I can understand that it's not at the top of the priorities to work on when you take the advantages and the risks into account. Besides, a simple programming error in your LabVIEW code could easily create a thread collector, and although Windows is pretty good at managing threads, they do consume quite a bit of memory and the management does take some CPU horsepower too, and once you exhaust the Windows kernel resources you get a hard crash, not a DOS of your own application only. So personally I would prefer my application to run into thread starvation at some point rather than the whole Windows system crashing hard when doing something that uses up too many threads. As to whether it is the task of LabVIEW to make our life easier, I would generally of course agree. However, calling DLLs is a pretty advanced topic already anyhow, so I would really think that someone working on that level can be bothered to use asynchronous APIs if there is a chance that the synchronous ones might block threads for long periods.
-
But LabVIEW strings are in the system encoding (codepage on Windows)!
-
Not sure about the .Net details really. But .Net is somehow executed in a special subsystem of LabVIEW, and communication with the rest of LabVIEW is indeed done through some form of queues, I would assume. No such voodoo is necessary for normal DLL calls though. They just execute in whatever thread and on whatever stack space that thread has assigned at the moment LabVIEW invokes the function. The invocation is a simple "call" assembly instruction after setting up the parameters on the stack according to the Call Library Node configuration. It's as direct and immediate as you can imagine. And of course for the duration of the function call, the thread LabVIEW used to invoke the function is completely consumed and unavailable to LabVIEW in any way. The only thing LabVIEW could do is abort the thread altogether, but that has a high chance of leading to unrecoverable complications that only a process kill can clean up.
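Roughly, and with purely illustrative names (MyFunction and its signature are made up here), the Call Library Node boils down to something like this in C: the function pointer is resolved when the library is loaded, and each execution is just a plain indirect call on the current thread:

#include <stdint.h>
#include <windows.h>

/* Hypothetical signature matching a Call Library Node configuration:
   int32_t MyFunction(double input, double *output), cdecl calling convention. */
typedef int32_t (__cdecl *MyFunction_t)(double input, double *output);

int32_t CallThroughCln(HMODULE lib, double input, double *output)
{
    /* In LabVIEW the lookup happens once at load time, not on every call. */
    MyFunction_t fn = (MyFunction_t)GetProcAddress(lib, "MyFunction");
    if (fn == NULL)
        return -1;

    /* This line is the whole "magic": the parameters go on the stack (or in
       registers) and the calling thread blocks inside fn() until it returns. */
    return fn(input, output);
}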
-
DLL calls will block the thread they are executing in for the duration of the DLL function call. So yes, if you make many different DLL calls in parallel that all take nastily long to execute, then you can of course use up all the preallocated threads in a LabVIEW execution system, even if all the Call Library Nodes are configured to run in the calling thread. However, if your DLL consists of many long running, synchronous calls, you already have trouble before you get to that point, since such a DLL is basically totally unusable from non-LabVIEW programming environments, which generally are not multi-threading out of the box without explicit measures taken by the application programmer. So I would guess that if you call such DLL functions, you either didn't understand the proper programming model of that DLL, or took the super duper easy approach of only calling into the uppermost, super easy dummy mode API that only exists to demo the capability of the DLL, not to use it for real! .Net has in addition some extra complications, since LabVIEW has to provide a specific .Net context to run any .Net method call safely. So there it is quite easily possible to run into thread starvation situations if you tend to just call into the fully synchronous beginner API level of those .Net assemblies. But please note that this is not a limitation of LabVIEW; in fact, if you call lengthy synchronous APIs in most other environments you run into serious problems at the second such call in parallel already, if you don't explicitly delegate those calls to other threads in your application (which of course have to be created explicitly in the first place). The problem with LabVIEW is that it allows you to call more than one of these functions in parallel easily, and it doesn't break down immediately, but only after you have exhausted the preallocated threads in a specific execution system. By using lower level asynchronous APIs instead you can completely prevent these issues and do the arbitration at the LabVIEW cooperative multithreading level, at the cost of somewhat more complex programming, but with proper library design that can be fully abstracted away into a LabVIEW VI library or class so that the end user only sees the API that you want them to use.
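As an illustration of the difference (all names below are hypothetical, not from any real driver), here is a synchronous export next to the lower level asynchronous start/poll style that a LabVIEW wrapper can service from a loop with a small wait, without tying up an execution-system thread:

#include <stdint.h>

/* Synchronous style: the call blocks the calling thread (and therefore the
   LabVIEW execution-system thread it runs in) until the operation finishes. */
__declspec(dllexport) int32_t MeasureBlocking(double *result);

/* Asynchronous style: start the operation, return immediately, then poll for
   completion. The arbitration now happens at the LabVIEW diagram level. */
typedef void *MeasureHandle;
__declspec(dllexport) int32_t MeasureStart(MeasureHandle *handle);
__declspec(dllexport) int32_t MeasurePoll(MeasureHandle handle, int32_t *done, double *result);
__declspec(dllexport) int32_t MeasureAbort(MeasureHandle handle);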