Everything posted by Rolf Kalbermatter
-
labview queue from native windows dll
Rolf Kalbermatter replied to Mark Zvilius's topic in Calling External Code
Right! No! NumericArrayResize() does not update the length value in an array. Read the documentation for that function, which states so explicitly. One likely reason for this is that you usually want to update the length after you have added the elements to the array, not before. Otherwise, leaving the routine early because of an error or exception would leave uninitialized array elements in the buffer that, according to the length parameter, should be there. And LStrLen(ptr) is simply a macro, not a function. It translates to something like #define LStrLen(ptr) (((LStrPtr)(ptr))->cnt), so it can be used both as lvalue and rvalue in a statement.
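The resize-then-set-length pattern described above can be sketched like this. Note this is a simplified stand-alone model: the LStr typedef, the two macros, and the resize function are mocks so the pattern compiles and runs outside LabVIEW; in a real DLL you would use the declarations from extcode.h and link against the LabVIEW manager.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-alone model of the LStr layout (an int32 length count
 * followed by the raw bytes). Mocked here; real code uses extcode.h. */
typedef struct { int32_t cnt; uint8_t str[1]; } LStr, *LStrPtr, **LStrHandle;

#define LStrLen(p) ((p)->cnt)   /* a macro: usable as lvalue and rvalue */
#define LStrBuf(p) ((p)->str)

/* Mock of NumericArrayResize(uB, 1, ...): (re)allocates storage for 'len'
 * bytes but, just like the real function, does NOT touch the count field. */
static int MockNumericArrayResize(LStrHandle h, int32_t len)
{
    LStrPtr p = (LStrPtr)realloc(*h, sizeof(int32_t) + (size_t)len);
    if (!p)
        return 2;   /* mFullErr */
    *h = p;
    return 0;       /* noErr */
}

/* The recommended order: resize first, copy the data, set the length last. */
static int FillLVString(LStrHandle h, const char *src, int32_t len)
{
    int err = MockNumericArrayResize(h, len);
    if (err == 0) {
        memcpy(LStrBuf(*h), src, (size_t)len);
        LStrLen(*h) = len;   /* LStrLen() used as lvalue */
    }
    return err;
}

static int lstr_demo(void)
{
    LStrPtr p = NULL;
    int ok = FillLVString(&p, "hello", 5) == 0
          && LStrLen(p) == 5
          && memcmp(LStrBuf(p), "hello", 5) == 0;
    free(p);
    return ok;
}
```

If an error aborts the routine before the memcpy, the count field still describes only initialized data, which is exactly the point of updating the length last.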
DLL, Call Library Function and IntanceDataPtr questions
Rolf Kalbermatter replied to jbone's topic in Calling External Code
The InstanceDataPtr is basically what the GetDSStorage() and SetDSStorage() functions provided for CINs. It is a pointer value LabVIEW maintains for each Call Library Node in a diagram and, in the case of reentrant VIs with Call Library Nodes, for each instance of such a Call Library Node. Your DLL can specify the three callback functions (I still think the name callback function is a mistake for this functionality) that all accept a reference to this InstanceDataPtr. These three functions Reserve(), Unreserve(), and Abort() correspond somewhat to the CIN functions CINInit(), CINDispose(), and CINAbort(). Usually you would check in Reserve() if the InstanceDataPtr is NULL and create it then, or reuse the non-NULL value. In Unreserve() you should deallocate any resources that you created for this InstanceDataPtr. In Abort() you could for instance cancel any pending I/O or other operations associated with this InstanceDataPtr, to prevent the dreaded Resetting... dialog when your DLL function hangs in an I/O driver call.

The actual function associated with the Call Library Node can be configured to have an extra InstanceDataPtr parameter that will not be visible as a terminal on the diagram. LabVIEW will pass the instance data pointer stored for the current CLN instance to this parameter. Please note, this is strictly for managing a specific CLN instance. It is not meant to be passed around as a token between different CLNs, or even between different instances of the same CLN in case the CLN resides in a reentrant VI that is instantiated multiple times.

It seems what you want is more a kind of token or handle you can pass between different CLNs to identify some resource. This has to be done by your library, for instance by creating a pointer where all the necessary information is stored. You can treat this as an opaque value as far as LabVIEW is concerned.
Basically you configure the corresponding parameter to be a pointer-sized integer and pass this "handle" around between the different CLNs in LabVIEW. If you need to access content inside this handle from a LabVIEW diagram, your library should export corresponding accessor functions that you can import with CLNs.
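A minimal sketch of both patterns (per-CLN instance data via Reserve()/Unreserve()/Abort(), and an opaque session handle with exported accessor functions) might look like this. All type definitions and names here are mocked or hypothetical so the sketch compiles outside LabVIEW; in a real DLL, MgErr and InstanceDataPtr come from extcode.h and the three callback names are entered in the Call Library Node configuration dialog.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Mocked LabVIEW types; real code includes extcode.h instead. */
typedef int32_t MgErr;
typedef void *InstanceDataPtr;
enum { noErr = 0, mFullErr = 2 };

/* Per-CLN-instance state, created lazily in Reserve(). */
typedef struct { int pendingIO; } MyState;

MgErr MyReserve(InstanceDataPtr *data)
{
    if (*data == NULL)                       /* first reservation */
        *data = calloc(1, sizeof(MyState));
    return *data ? noErr : mFullErr;
}

MgErr MyUnreserve(InstanceDataPtr *data)
{
    free(*data);                             /* release what Reserve made */
    *data = NULL;
    return noErr;
}

MgErr MyAbort(InstanceDataPtr *data)
{
    ((MyState *)*data)->pendingIO = 0;       /* cancel pending I/O here */
    return noErr;
}

/* Separately: an opaque session handle passed between different CLNs as a
 * pointer-sized integer, with exported accessors. Names are hypothetical. */
typedef struct { int32_t value; } Session;

uintptr_t SessionCreate(int32_t v)
{
    Session *s = (Session *)malloc(sizeof *s);
    if (s) s->value = v;
    return (uintptr_t)s;
}

int32_t SessionGetValue(uintptr_t h) { return ((Session *)h)->value; }
void SessionDispose(uintptr_t h)     { free((Session *)h); }

static int instance_demo(void)
{
    InstanceDataPtr d = NULL;
    uintptr_t h;
    int ok = MyReserve(&d) == noErr && d != NULL;
    ok = ok && MyUnreserve(&d) == noErr && d == NULL;
    h = SessionCreate(42);
    ok = ok && h != 0 && SessionGetValue(h) == 42;
    SessionDispose(h);
    return ok;
}
```

On the diagram side, the uintptr_t handle would be wired around as a pointer-sized integer between the CLNs that call SessionCreate, SessionGetValue, and SessionDispose.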
Well, I didn't feel I had to defend myself, but wanted to explain the issues as they apply to other use cases too. The work on this library actually prompted me to write some articles about external code use in LabVIEW on the Expression Flow blog, which touch on all these issues and some more. Yes, it supports any combination of memory and file based archive and archive content data streams (and I'm fairly proud of that accomplishment). This means that both for compressing and uncompressing, the source and target can be any combination of memory or file based data stream. Accessing the memory stream of archive components was a requirement to allow copying password protected archives (for instance when adding a new component or removing an existing one). And no, it doesn't compromise password protection, since the entire encrypted stream is copied and to unpack it you still need to know the original password. You could easily create archives in this way that are hard to open in some normal unarchiving tools, since different internal components can end up having different passwords, but that is simply a usage problem, not a fundamental problem of the ZIP archive or its operation.
-
The intermediary DLL is not so much a separate DLL as the combination of the original zlib source code with the additional zip support that is now an official part of the contribution directory (it used to be just an optional component by someone else), and on top of that some utility functions to help with LabVIEW integration. I could have left all that in separate DLLs, but since I had to modify some zlib header files anyhow (to make the functions cdecl instead of stdcall as they used to be in older versions on Windows, and later on to decorate the exported functions with my own prefix to work around naming clashes on cRIO systems with empty stubs included in the system image), I decided to depart from the concept of providing fully original DLLs. And since the entire build process was already required anyhow for the additional LabVIEW specific part, it wasn't really much extra work. In fact it makes building the shared library easier, and also the packaging and distribution, since only one shared library is required. The drawback is that merging in new zlib/zip versions is a little bit more complicated, but the last time I added a new zlib version, the actual merging was done in less than an hour, since I still only have to manually check and adapt two of the header files.
-
I think the main reason SVN is used so often is that it is easy to set up and to use. And with many LabVIEW programmers working in small teams or even single user scenarios, and having to install the version control tool themselves, this is simply a huge incentive. It's not because SVN is superior to those other version control systems (it definitely is not) but simply because it is easy to use (and easy to use wrong, too). It's definitely better than no version control at all, and for the little cost it takes to get it working, an ideal tool for many. It has limitations, such as not really supporting branching and merging well. I have run many times into the situation of copying a controlled module to a different computer and making some modifications there (you usually can't install arbitrary software on a customer computer, and you can't debug system hardware on your own development computer when it needs some specific interface hardware), and afterwards trying to get this back into the controlled module on my system. Hg seems to support that much better, and while git on a command line is a nightmare to use, it certainly has very powerful features.
-
Well, from the picture there is no obvious reason why it shouldn't work. But a picture only tells us so much. It tells us nothing about the actual Call Library Node configuration, and even less about what the function expects to do with the buffer. This is, however, very crucial information. The problem you pose to us is a bit like presenting us a picture of a car in seemingly perfect condition and asking us why the engine doesn't work. Is there gas in the tank? Is the engine even in there? You see?
-
Well, changing only the offsets into 64 bit integers won't work. You also need to change the calls to use the 64 bit versions of the functions. If they had simply changed the offsets to support 64 bits, backwards compatibility would have been broken. A 64 bit integer is passed to a function as two 32 bit values on the stack, so if you only changed all 32 bit offsets into 64 bits, it would seem strange that you could even run an entire chain of VIs, as this mismatch should mess up the stack alignment. Although, thinking about it, the functions are all cdecl, which means the caller cleans up the stack afterwards, so it may work ok anyhow if there are no other parameters after the offset in the parameter list. Like all OpenG libraries it's all on SourceForge.
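To illustrate why only a dedicated 64 bit variant can carry the wide offsets, here is a hedged sketch with hypothetical names modeled on the zlib/minizip convention of adding a 64 suffix (these are not the actual library functions):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical pair of seek-style functions: the original keeps its 32 bit
 * offset for backwards compatibility, the ...64 variant takes the wide one. */
static int64_t g_pos;   /* stand-in for a file position */

static int32_t my_seek(int32_t offset)   { g_pos = offset; return offset; }
static int64_t my_seek64(int64_t offset) { g_pos = offset; return offset; }

static int offset_demo(void)
{
    int64_t big = (int64_t)1 << 32;   /* 4 GiB: does not fit in 32 bits */

    my_seek((int32_t)big);            /* 32 bit variant: offset truncates */
    int truncated = (g_pos != big);

    my_seek64(big);                   /* 64 bit variant: offset survives */
    return truncated && g_pos == big;
}
```

The truncation in the 32 bit call happens silently, which is exactly why simply widening the LabVIEW-side integers, without switching to the 64-suffixed calls, cannot give >2GB support.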
-
Actually, 64 bit support and support for >2GB ZIP archives are two distinct issues. In the current HEAD I have integrated a new version of zlib which supports 64 bit offsets internally, which is what is required to support archives containing components of more than 2GB. The same version also SHOULD be compilable as 64 bit code. While these two things happen to be incorporated in a working way in the current zlib library, they are not equivalent and not even really related. I have come across a snag in compiling the LVZIP DLL for 64 bit and haven't really spent much more time on this until now. I can compile the code, but it crashes, and since I don't have a 64 bit system at hand with a compatible source level debugger and LabVIEW installation (they both also need to be 64 bit), I can't debug it at the moment. Apart from that crash, the code as it is in the repository would be fully 64 bit capable and should also be able to handle >2GB components, although that is nothing I have ever tested, nor have thought to test so far. So the OpenG library as it is in the repository SHOULD be able to handle >2GB files, but that will also need refactoring of some VIs to allow for 64 bit integers wherever offsets are passed to and from the DLL functions (those 64 bit capable functions have a special 64 postfix).
-
call library function node error 13
Rolf Kalbermatter replied to Rammer's topic in Calling External Code
One possible problem might be that the DLL is a 32 bit DLL and you were using it in the 32 bit version of LabVIEW and have now upgraded to LabVIEW 2010 64 bit. LabVIEW can only call DLLs compiled for the same environment as LabVIEW itself. If this is the case you have two options:

1) Install LabVIEW 2010 32 bit, which will work fine on a 64 bit Windows version.

2) Recompile your DLL for Windows 64 bit (and review it for any bitness issues).

This review would encompass things such as checking that the Call Library Node is configured right: Windows 64 bit obviously uses different pointer sizes, which could be a problem if you have configured pointer parameters to be treated as integers; using the pointer-sized integer type available since LabVIEW 8.5 would then be the right thing, instead of explicit 32 bit or 64 bit integers. Also, there is a difference in the size of long: Windows always treats long as 32 bit, while everyone else treats a long as 32 bit on 32 bit platforms and 64 bit on 64 bit platforms. Using long long instead in the C code avoids that problem, as it is always 64 bit, but it is a C99 feature, so it may cause problems with older compilers, and also with C++ compilers that don't support the upcoming C++0x standard.
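The integer size issue can be checked directly in C. A small sketch (the helper names are mine, not from any library): on 64 bit Windows (LLP64) long stays 32 bit, while on most 64 bit Unix systems (LP64) it grows to 64 bit; long long and the fixed-width types don't have that ambiguity, and intptr_t matches the pointer-sized integer type of the Call Library Node.

```c
#include <assert.h>
#include <stdint.h>

/* sizeof(long) differs per ABI: 4 on Win64, 8 on 64 bit Linux/macOS. */
static int long_is_platform_dependent(void)
{
    return sizeof(long) == 4 || sizeof(long) == 8;
}

/* long long and int64_t are always 64 bit; intptr_t always matches the
 * platform pointer size, which is what the CLN pointer-sized integer is. */
static int wide_types_are_fixed(void)
{
    return sizeof(long long) == 8
        && sizeof(int64_t) == 8
        && sizeof(intptr_t) == sizeof(void *);
}
```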
I fully agree! And for me VB has always been a nightmare to deal with. Apart from the syntax, which I always found fuzzy and unclear (coming from Pascal and Modula, which we learned in school), I HATED the separation of the code into separate methods for every event and whatnot. If I wanted to get an overview of a program, I had to either print it out or click through many methods to get even an idea of what the code does and how. I'm pretty sure the more modern versions of VB at least allow a different, fuller view of the code, but this, together with the syntax that always seemed rather unstructured and ad hoc from a design viewpoint, made me avoid VB like the devil. C# is a bit better in that respect, but after having dabbled in Java a bit I prefer it over C#. The only thing I don't like about Java is the fact that it doesn't support unsigned integers and other "low level" stuff. It makes interfacing to binary protocols, for instance, a bit more of a challenge, since you often have to deal with proper sign extension and that sort of thing.
-
How to Implement Callback function using LabVIEW
Rolf Kalbermatter replied to Vivek Bhojan's topic in Calling External Code
You will need to learn some C AND LabVIEW for sure. The example in the link from vugie provides all that is necessary, with only one difference, and that is the prototype of your function itself. If your knowledge is so limited that this is too much to do yourself, you will run into much more trouble along the way, and you should consider either trying to find a non-callback solution or hiring someone who does this part for you.
Sorry, I had been working on another post and for some reason the link to the clipboard library must have stayed in my clipboard (how is that about the clipboard causing a problem about a link to a post about the clipboard ) despite my different intentions. It should be fixed now.
-
Move 3D picture control to back...
Rolf Kalbermatter replied to Jonathan Borduas's topic in User Interface
No! Because ActiveX controls have traditionally not allowed overlapping at all. Maybe it would be theoretically possible nowadays, but most ActiveX controls simply assume that they are the one and only control drawing to the screen area they have been confined to, and won't even bother otherwise, simply because at least in older ActiveX standards there was no good way to do overlapping. Maybe later versions added support for that, but since ActiveX is already an obsolete technology again, since MS came out with the next übercool technology called .Net, probably nobody bothered to look into that very much anyhow. The ActiveX container in LabVIEW certainly won't support overlapping controls, so even if ActiveX has provisions for that in its latest released version (which I actually still doubt) and you could find a control that supports it, it would make no difference when that control is used in LabVIEW. I don't know your VI sensor mapping example, but if overlapping controls are used there, then they most likely make use of the Picture Control with the 3D OpenGL based drawing functionality. Look in your LabVIEW palette under Graphics & Sound->3D Picture Control. It's not high performance 3D scene creation and rendering, and not at all a 3D wireframe graph, but it may be able to do what you want.
Well, the OpenG pipe library has a System Exec equivalent that, besides the pipes for the standard I/O and optionally stderr, also returns the resulting task ID. And there is another function which can be used to kill that task. Works like a charm for me for starting up command line tools that have no built-in remote control to quit them. The OpenG Pipe library has not been released as a package, but can be found here as a built OGP package.
-
I have to agree with Shaun. While the text may seem to indicate that the driver is MOSTLY thread safe, it isn't completely. The three mentioned calls need to be made from the same thread, and the only way to easily guarantee that in LabVIEW is by calling them in the UI thread. And incidentally, I would say those are probably also the only functions where you could possibly gain some performance if you could call them multithreaded. So I would not bother to change the VI library at all.
-
If the VI already crashes when it is opened, then something has been corrupted. It could be one of the subVIs it uses or the actual VI itself. Corruptions, while not very common, do happen, and the only way to deal with them is by backing up your work regularly. If the VI crashes as soon as you start it, then it may be calling some invalid method, or calling into a driver which has gotten corrupted. In the driver case, reinstalling the driver software might help.
-
Mostly C and a little C++. Did some Python and Lua programming in the past. Having gotten an Android mobile device now I'm looking into some Java programming too :-)
-
How to identiy DLL created version?
Rolf Kalbermatter replied to Karthik888's topic in LabVIEW General
LabVIEW uses, for many version specific things, a 4 byte integer which encodes the version in nibbles. And newer LabVIEW versions compress almost everything in the VI to save space and load time, but this also makes any internal information very hard to read.
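As an illustration of decoding a nibble-packed version word, here is a sketch; the concrete major/minor/fix/build split is a made-up example of the technique, not LabVIEW's actual internal field mapping.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layout: two decimal digits of major version in the top two
 * nibbles, one nibble minor, one nibble fix, low 16 bits for the build. */
typedef struct { int major, minor, fix, build; } Version;

static Version decode_version(uint32_t v)
{
    Version r;
    r.major = (int)((v >> 28) & 0xF) * 10 + (int)((v >> 24) & 0xF);
    r.minor = (int)((v >> 20) & 0xF);
    r.fix   = (int)((v >> 16) & 0xF);
    r.build = (int)(v & 0xFFFF);
    return r;
}

static int version_demo(void)
{
    /* 0x12018003 is an arbitrary test value, not a real version word. */
    Version r = decode_version(0x12018003u);
    return r.major == 12 && r.minor == 0 && r.fix == 1 && r.build == 0x8003;
}
```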
LabVIEW read Unicode INI file
Rolf Kalbermatter replied to MViControl's topic in Database and File IO
Yep, those are the nodes (note: not VIs). I'm aware that they won't help with UI display, but only with reading and writing UTF8 files or any other UTF8 data stream in or out of LabVIEW. Display is quite a different beast, and I'm sure there are some people in the LabVIEW development department biting nails and ripping out their hair, trying to get that working fine.
Not sure why you say that. Just because it is not all obfuscated or even binarised doesn't mean it is a joke. The OGP format was the first format devised, and Jim, me, and I think someone else whose name I can't come up with right now, came up with it after I had looked at zlib and the accompanying ZIP library and decided to create a LabVIEW library to deal with ZIP formats. The idea for the spec file format I did in fact derive from the old application builder configuration format, and it proved flexible, yet simple enough to deal with the task. Jim came up with the idea to put the spec file inside the archive under a specific name, similar to how some Linux package managers did it back then. The rest was mostly plumbing and making it work, and it is still in principle the same format as in the beginning. The VIPC is just another similar format to track packages more easily. JKI goes to great lengths to obfuscate the software in VIPM, but they are to be applauded for not having gone down the path of obfuscating the actual file formats.
-
LabVIEW read Unicode INI file
Rolf Kalbermatter replied to MViControl's topic in Database and File IO
While the nodes I spoke about probably make calls to the Windows API functions under Windows, they are native nodes (light yellow) and supposedly call, on other platforms, the corresponding platform API for dealing with Unicode (UTF8, I believe) to ANSI conversion and vice versa. The only platforms where I'm pretty sure they either won't even load, or if they do will likely be no-ops, are some of the RT and embedded platforms.

Possible fun can arise from the fact that the Unicode tables used on Windows are not exactly the same as on other platforms, since Windows has slightly diverged from the current Unicode tables. This is mostly apparent in collation, which influences things like the sort order of characters, but it might not be a problem in the pure conversion.

This, however, makes one more difficulty with full LabVIEW support visible. It's not just about displaying and storing Unicode strings, UTF8 or otherwise, but also about many internal functions such as sort, search etc., which would have to have proper Unicode support too. Because of the differences in Unicode tables, they would either end up having slightly different behavior on different platforms, or NI would need to incorporate a full blown Unicode library such as ICU into LabVIEW to make sure all LabVIEW versions behave the same; but then they would behave differently from the native libraries on some systems.
labview queue from native windows dll
Rolf Kalbermatter replied to Mark Zvilius's topic in Calling External Code
Yes, you can't avoid a buffer copy, but I wasn't sure if your buffer copy VIs did just a single buffer copy or if your first VI does some copying too. But the main reason to do it in this way is simply to keep as much of the low level stuff in the DLL, especially since you have that DLL already anyhow. And to be honest, I believe your original solution could have a bug. You pass in a structure that is on the stack. The user event, however, is processed asynchronously, so at the time the event is read, the stack location may long since have been invalidated and possibly overwritten with other stuff. Bad, bad! Especially since both pieces of information relate to a memory buffer. And declaring the variable static is not a solution either, since a new event could be generated before the previous one is processed. Note: it may work if PostLVUserEvent makes a copy of the data, leaving the ownership of the data object to its caller. It could do that, since the event refnum internally stores the typecode of the data object associated with it, but I'm not 100% sure it does so.
labview queue from native windows dll
Rolf Kalbermatter replied to Mark Zvilius's topic in Calling External Code
If you changed the user event to be of a string instead of the cluster, and changed the code in the callback like this:

    void iCubeCallback(char *pBuffer, int lBufferSize, LVUserEventRef *eventRef)
    {
        LStrHandle hBuf = NULL;
        MgErr err = NumericArrayResize(uB, 1, (UHandle*)&hBuf, lBufferSize);
        if (noErr == err)
        {
            MoveBlock(pBuffer, LStrBuf(*hBuf), lBufferSize);
            LStrLen(*hBuf) = lBufferSize;
            PostLVUserEvent(*eventRef, (void *)&hBuf);
            /* PostLVUserEvent posts a copy of the data, so we still own
               and must dispose our local handle */
            DSDisposeHandle((UHandle)hBuf);
        }
    }

This way you receive the data string directly in the event structure and don't need to invoke buffer copy functions there. No, not really. There are supposedly some functions in the latest LabVIEW versions, but without at least some form of header file available it's simply too much trial and crash.
No, you need LabVIEW 2010 for this! But I wouldn't expect too much of it. Creating a DLL in LabVIEW and using it from the C environment is certainly going to be an easier solution.
-
While this would likely work, I consider it a rather complicated solution which makes the whole deployment of the library more complicated. Writing a wrapper interface for the DLL (to add the extra error cluster parameter to all functions) is indeed not a trivial exercise (mainly in terms of time, not so much in terms of complication). Especially with open source it also poses the question of whether you want to modify the original sources or add a separate wrapper. However, this is work that has to be done once, at creation time, and won't bother the end user with complicated setups and non-intuitive usage limitations. If it is only a solution for a build-once-and-forget application then it's feasible; otherwise I would definitely try to invest a little more time upfront in order to make it more usable in the long term.