
Help on building a DLL in LabVIEW and then calling the same DLL in LabVIEW


Fab

Recommended Posts

Hello,

 

Before going any further, let me answer the question I know you will ask: "Why would you want to do THAT?"

 
We are creating a LabVIEW instrument driver for a new instrument. The instrument manufacturer also wants to provide an instrument driver in the form of a DLL. They are aware that if the DLL is built with LabVIEW, end users will need the LabVIEW Run-Time Engine, and they are OK with this (at least for now).
 
So, LabVIEW users will get a LabVIEW palette API installed in their palettes that will let them communicate with the instrument.
Other developers would get the DLL directly.
 
We figured that it would be best to "eat our own dog food" and use the same DLL we will create for the C developers as the basis for our instrument driver. This way we can test the DLL as we go, and if there is pushback from the instrument manufacturer's customers about requiring the LabVIEW Run-Time Engine, we could replace the DLL built in LabVIEW with one created in C and the rest of the LabVIEW palette API code would stay the same.
 
Cool, now, what do you need from the great LAVA community?
 
We have been looking for documentation on how to do this and there is not much we can use. Our main question: what's the best way to configure the function prototype for a DLL built in LabVIEW so that a string (or U8 array) output of unknown size does not require an arbitrarily sized pre-allocation by the caller? Is this even possible?
 
 The main problems we have encountered so far:
 
1) We were getting only the first two bytes of a string output from the DLL function. Solution: even though the DLL function has its string inputs and outputs defined as "C String Pointers", when we call it in LabVIEW we change the Call Library Function Node parameter definition to expect an array of 8-bit unsigned integers. This lets us initialize a uint8 array of the size we expect and then use Byte Array To String on the output. However, it requires that we pre-allocate the array by initializing the uint8 array.
Question: Would defining the string as a "Pascal String Pointer" remove the need to know in advance how large the string needs to be? We haven't been able to make this work. Is the use of a Pascal String Pointer recommended? If it is, how should we handle the DLL source code and the Call Library Function Node parameters?
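For reference, the style of prototype this workaround implies looks roughly like the following (a sketch only; the function and parameter names are made up, not our actual API):

```c
/* Sketch of a caller-allocated-buffer prototype (hypothetical names).
   The caller owns the memory; the DLL only fills it. */
#include <stdint.h>

/* Exported from the LabVIEW-built DLL; returns 0 on success,
   nonzero if the buffer was too small or another error occurred. */
int32_t ReadInstrumentStatus(char *statusBuffer, int32_t bufferSize);

/* Typical C caller:
       char status[256];                                  // pre-allocated by the caller
       int32_t err = ReadInstrumentStatus(status, sizeof status);
   In LabVIEW, the same parameter is the pre-initialized U8 array described above. */
```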
 
2) We have found references in several NI forum posts to LabVIEW.dll calls that could make our life easier by giving us access to the LabVIEW memory manager (for example DSNewPtr()). These functions are documented in some places; we even found one of them in the LabVIEW 2012 manual (http://zone.ni.com/reference/en-XX/help/371361J-01/lvexcode/aznewptr_dsnewptr/).
 
Question: Do we need these functions? If we do, would they be used inside the DLL source code or to manage the inputs/outputs of the Call Library Function Node? And where can we find more documentation about their use?
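For reference, the declarations we keep running into look roughly like this (paraphrased from LabVIEW's extcode.h; the exact typedefs and integer widths vary with LabVIEW version and bitness, so treat this only as an orientation sketch):

```c
/* Orientation sketch of LabVIEW memory manager declarations (paraphrased from
   extcode.h; not authoritative - exact types differ per LabVIEW version/bitness). */
#include <stddef.h>
#include <stdint.h>

typedef unsigned char  uChar;
typedef uChar         *UPtr;      /* pointer-based (non-relocatable) block */
typedef uChar        **UHandle;   /* handle-based (relocatable) block      */
typedef int32_t        MgErr;     /* LabVIEW manager error code            */

UPtr    DSNewPtr(size_t size);                    /* allocate a pointer block */
MgErr   DSDisposePtr(UPtr p);                     /* free a pointer block     */
UHandle DSNewHandle(size_t size);                 /* allocate a handle        */
MgErr   DSSetHandleSize(UHandle h, size_t size);  /* resize a handle          */
MgErr   DSDisposeHandle(UHandle h);               /* free a handle            */
void    MoveBlock(const void *src, void *dst, size_t numBytes); /* memcpy-like copy */
```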
 
3) We are defining the VI prototypes for our DLL to use "C Calling Conventions"; the other option is "Standard Calling Conventions". The help says:
  • Standard Calling Conventions—Sets the function prototype to use standard calling conventions. 
  • C Calling Conventions—Sets the function prototype to use C calling conventions. This radio button is enabled by default.

Question:  Despite the accuracy of the help description ... well ... are we doing the right thing by using the C Calling Conventions?
 
Any help will be appreciated.

 

Thanks in advance for your time,

 

Fab

Link to comment

No, you can't avoid having the caller preallocate the strings and arrays if you want to do everything the way you imagine. There is no way the LabVIEW DLL can allocate C string or array pointers and return them to the caller without limiting the caller to very specific deallocator functions provided by your library, or requiring the caller to never link with a different C runtime library than the one you used (that doesn't just mean a specific type of compiler but even a specific C runtime version, down to the last version digit when using side-by-side (SxS) libraries, which all Visual C versions since 2005 do). This is the old problem of managed versus unmanaged code. C is normally completely unmanaged! There is no universally accepted convention in C that allows memory to be allocated in one place and deallocated in another without knowing exactly how it was allocated. That requires full control of both the place where it gets allocated and the place where it gets deallocated, and if those are not both in the caller, you seldom have that control, and it usually defeats the idea of a library almost completely.

 

The only way to avoid having the caller preallocate the arrays (and strings) is a very strict contract (basically one main aspect of what managed code means) between caller and callee about how memory is allocated and deallocated. This happens, for instance, in DLLs that are specifically written to handle LabVIEW native datatypes: LabVIEW as the caller does not have to preallocate buffers of unknown size, and the DLL can allocate and/or resize them as needed and pass them back to LabVIEW. In this case the contract is that any variable-sized buffer is allocated and deallocated exclusively by LabVIEW memory manager functions. This works as long as you make sure there is only one LabVIEW kernel mapped into the process doing this. I'm not entirely sure how they solved that, but there must be a lot of trickery when loading a LabVIEW DLL created in one version of LabVIEW into a different version of LabVIEW to make sure buffers are allocated by the same memory manager when using native datatypes.
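To illustrate what such a contract looks like on the DLL side, here is a minimal sketch of a "native datatypes" entry point in C (the function name and the string it returns are made up; LStrHandle, NumericArrayResize, MoveBlock, uB and mgNoErr come from LabVIEW's extcode.h and need the LabVIEW manager import library):

```c
/* Minimal sketch (hypothetical function) of a managed entry point that takes a
   LabVIEW string handle and resizes it itself, so the LabVIEW caller never has
   to pre-allocate. Requires extcode.h from cintools and the LabVIEW manager. */
#include "extcode.h"
#include <string.h>

MgErr FillStatusString(LStrHandle status)
{
    const char *text = "READY";                 /* stand-in for real status data */
    int32 len = (int32)strlen(text);

    /* Ask the LabVIEW memory manager to grow the handle to the needed size. */
    MgErr err = NumericArrayResize(uB, 1, (UHandle *)&status, len);
    if (err != mgNoErr)
        return err;

    MoveBlock((ConstUPtr)text, LStrBuf(*status), len);  /* copy the payload */
    LStrLen(*status) = len;                             /* LabVIEW strings carry an explicit length */
    return mgNoErr;
}
```

From a LabVIEW caller that parameter would be configured as a LabVIEW string handle; from plain C it is exactly the kind of contract described below that you cannot reasonably impose on non-LabVIEW users.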

 

But forcing your C users to use LabVIEW manager functions so you can pass LabVIEW native datatypes as parameters is not an option either, since there is no officially sanctioned way to call into the runtime system used by the LabVIEW DLL from a non-LabVIEW process. Also, your C programmers would likely spew poison and worse if you told them they have to call such-and-such functions in exactly such a way to prepare and later deallocate the needed buffers, using some (to them) obscure memory manager API.

 

This is not so much bad intention by NI and the LabVIEW developers as simply how programming works. The only universally safe way of calling functions with buffers is to both allocate and deallocate them in the caller. Anything else requires a very strict regime about which memory manager calls to use. That can work if it is designed into the programming framework from scratch (C#, for instance), but C and C++ existed long before any programming framework cared about such things. Many programmers have tried to add something like that to C and C++ later, but each came up with a different interface, and each remains an isolated solution not accepted by the large majority of other C and C++ users.

 

Basically, if you want to go down the path you described, you will have to bite the sour apple and use C pointers for arrays and strings, and require the caller to preallocate those buffers properly.

Link to comment

Thanks Rolf for the detailed reply. I will then use the approach that has worked for us so far:

 

Define the VI prototype for the DLL function to use C string pointers, and then in the wrapper VI that calls the DLL, change the function call definition to use a byte array initialized to a certain size as an input, and use Byte Array To String on the output.

 

Thanks again,

Fab

Link to comment

A couple of additional comments:

1) One common solution is to make two function calls: one that returns the size of the array that will need to be allocated, and a second that then fills the pre-allocated array with data. The second function should still accept a size parameter so it can confirm that the data will fit and return an error if it will not. Some Windows functions combine these into one function, by passing the size parameter by pointer. If the array pointer is null, or the value pointed to by the size parameter is 0, then the function returns the needed size in the size parameter; otherwise, it fills the array.
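A C-flavored sketch of that combined form (illustrative names only, not any real API):

```c
/* Hypothetical example of the size-query / fill pattern: pass NULL (or *size == 0)
   to ask for the required size, then call again with a big-enough buffer. */
#include <stdint.h>
#include <string.h>

#define ERR_NONE              0
#define ERR_BUFFER_TOO_SMALL -1

int32_t GetSerialNumber(char *buffer, int32_t *size)
{
    const char *serial = "SN-12345";               /* stand-in for data produced internally */
    int32_t needed = (int32_t)strlen(serial) + 1;  /* +1 for the terminating 0 byte         */

    if (buffer == NULL || *size == 0) {            /* size query: report how much is needed */
        *size = needed;
        return ERR_NONE;
    }
    if (*size < needed)                            /* caller's buffer is too small           */
        return ERR_BUFFER_TOO_SMALL;

    strcpy(buffer, serial);                        /* fill the caller's pre-allocated buffer */
    *size = needed;
    return ERR_NONE;
}
```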

 

You should not use Pascal strings; it will be a pain for anyone calling the library from C. If you're using a string and not an array of bytes, then you need to keep in mind that C strings are null-terminated (the last byte in them is 0) and the size needs to include that terminating byte. When you call a DLL from within LabVIEW and you pass the parameter as a string, LabVIEW automatically adds the null termination. However, if you pass the string as an array of U8, then LabVIEW doesn't know it's a string and will NOT add that null terminator. The DLL, having been built in LabVIEW, will expect to find that null terminator to determine the appropriate string length and will likely generate an error if it doesn't find a null byte before the end of the string (a null before the end of the string is fine although you may lose data that follows it; no null at all in the string is not fine).
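The terminator point is worth spelling out, because it is easy to get wrong when the string travels as a U8 array. A small sketch (purely illustrative) of what the DLL side effectively does with a "C String Pointer" input:

```c
/* Why the trailing 0 matters: a C-string parameter's length is found the way
   strlen() finds it, by scanning for the first 0 byte. Illustration only. */
#include <stdio.h>
#include <string.h>

static void dll_side_view(const char *input)
{
    size_t len = strlen(input);   /* stops at the first 0 byte; with no 0 byte in the
                                     buffer this scans past the end (undefined behavior) */
    printf("DLL sees %zu bytes\n", len);
}

int main(void)
{
    char terminated[]   = { 'A', 'B', 'C', 0 };  /* what LabVIEW sends for a string parameter */
    char unterminated[] = { 'A', 'B', 'C' };     /* what a raw U8 array would send unless you
                                                    append the 0 byte yourself               */
    dll_side_view(terminated);
    (void)unterminated;                          /* deliberately not scanned here */
    return 0;
}
```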

 

2) Yes, you should use C calling conventions, mostly because it's more common (with the important exception of almost all DLLs that are part of Windows).

Link to comment
Would defining the string as a "Pascal String Pointer"  remove the need to know in advance how large the string needs to be

 

No, well, not really. It depends: if you are not going to have nulls in your data, then you could use the C string and not worry about it. However, I'm guessing that because you are looking at string-length bytes (Pascal-style strings can hold no more than 255 bytes, by the way) you are intending to pass arbitrary-length binary data that just happens to be a string.


There are two ways of transferring variable length data to/from a library.

  1. Memory is allocated by LabVIEW and the library populates this memory with the data (the library needs to know the size, and the resultant data must be no more than what was passed in; create and pass an array, like the ol' for-loop of bytes).
  2. Memory is allocated by the library and LabVIEW accesses this memory (LabVIEW needs to know the size, and the resultant data can be any size; MoveBlock).

 

Either way, one or the other needs to know the size of the allocated memory.

 

The general method is case no. 2, since this does not require pre-allocation, is unlikely to crash because the data is too big, and only requires one call for allocation and size. You call the function and get the size as one of the returned parameters and a pointer (uintptr_t) as the other, then use MoveBlock to get the data (the size is known at that point from the size parameter). You will also need a separate function to release the memory. This also happens to be the fastest :)
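A sketch of the API shape that case no. 2 implies (names are made up; a LabVIEW-built DLL would implement the same contract with its own allocation). From the LabVIEW side the pointer parameter would be configured as a pointer-sized integer, the data copied out with MoveBlock from LabVIEW.dll, and the release function called afterwards:

```c
/* Hypothetical "library allocates, caller copies and releases" API. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Returns 0 on success; *data points at a library-owned buffer of *size bytes. */
int32_t AcquireWaveform(uint8_t **data, int32_t *size)
{
    int32_t n = 1024;                            /* size known only at acquisition time */
    uint8_t *buf = (uint8_t *)malloc((size_t)n);
    if (buf == NULL)
        return -1;
    memset(buf, 0xAB, (size_t)n);                /* stand-in for real acquisition       */
    *data = buf;
    *size = n;
    return 0;
}

/* Must be called to free the buffer, and must live in the same library (same
   C runtime heap) that allocated it. */
void ReleaseWaveform(uint8_t *data)
{
    free(data);
}
```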

 

The CDECL calling convention is the one of choice, as STDCALL is Windows-specific (you are intending to port this to other platforms... right?)

Edited by ShaunR
Link to comment

Unless your string can have embedded NULL bytes that should not terminate it, there should be no need to pass string parameters as byte arrays.

 

In fact, when you configure a CLN parameter to be a C string pointer, LabVIEW will, on return, explicitly scan the string for a NULL byte (unless you configured the parameter to be constant) and terminate it there. This is usually highly desirable for true strings.

 

If the buffer is binary data that can contain 0 bytes, however, you should indeed pass it as a byte array pointer to avoid having LabVIEW scan it for a NULL character on return.

Link to comment
The general method is case no. 2, since this does not require pre-allocation, is unlikely to crash because the data is too big, and only requires one call for allocation and size. You call the function and get the size as one of the returned parameters and a pointer (uintptr_t) as the other, then use MoveBlock to get the data (the size is known at that point from the size parameter). You will also need a separate function to release the memory. This also happens to be the fastest :)

 

Actually, using fully managed mode is even faster, as it will often avoid the additional memory copy involved with the MoveBlock() call. But at least in C and C++ it's only an option if you can control both the caller and the callee, or, in the case of a LabVIEW caller, know exactly how to use the LabVIEW memory manager functions in the shared library.

Link to comment
Eventually the DLL will do serial calls, so yes we will be dealing with byte arrays that might have null in between. 

 

Make sure you specify to whoever is writing it that it must be "Thread-safe". Otherwise you will have to run it in the UI thread.

 

Actually, using fully managed mode is even faster, as it will often avoid the additional memory copy involved with the MoveBlock() call. But at least in C and C++ it's only an option if you can control both the caller and the callee, or, in the case of a LabVIEW caller, know exactly how to use the LabVIEW memory manager functions in the shared library.

 

That probably limits it to just you then :D

Link to comment
That probably limits it to just you then :D

 

In the scenario of this thread it's not an option. But for a C DLL to be called by a LabVIEW program, it is an option for anyone who gets to use my DLLs in LabVIEW, ideally with an accompanying LabVIEW VI library! :D

 

Also, the pointer variant, while indeed a possible option, is in LabVIEW in fact seldom significantly faster and quite often slower. If there is any chance for the caller to know the size of the buffer beforehand (maybe by calling a second API, or because it defines anyway what data needs to be returned: data acquisition, for instance), the use of caller-allocated buffers passed as C pointers into the function is at least as fast or faster, since the DLL can copy the data directly into the buffer. With the DLL-allocated buffer you end up in most cases with a double data copy: once in the DLL when it allocates the buffer and copies its data into it, and once with the MoveBlock() in the caller.

 

So claiming that it is always faster is not correct. At least inside LabVIEW it is usually about the same speed, only with the data copy happening in one case inside the DLL and in the other in the caller. Only when the DLL can determine the buffer size only during the actual retrieval itself can it be an advantage to use DLL-allocated buffers, as that avoids the problem of potentially having to allocate a hugely over-sized buffer.

 

If the potential user of the DLL is a C program then this is different. In that case returning DLL-allocated buffers is indeed usually faster, as you do not need the extra MoveBlock()/memcpy() call afterwards. But it is in any case a disadvantage that the API gets complicated to a level that stretches the knowledge limits of many potential DLL users, and not just LabVIEW Call Library Node users, as it is non-standard and makes it easy to introduce resource-management bugs, because it can be unclear who eventually needs to deallocate the buffers. The returned pointer could also be a statically allocated buffer inside the DLL (often the case for strings) that would be fatal to try to free(). And another issue is that your library absolutely needs to provide a corresponding dispose() function, as the free() the caller links to might operate on a different heap than the malloc() the DLL used.

 

The only real speed improvement is when the data-producing entity can directly create the managed buffers the final caller will eventually use. But C pointers don't count as such in LabVIEW, since you have to do the MoveBlock() trick eventually.

 

One more comment in respect to the calling convention. If you ever intend to create the shared library for non-Windows platforms as well, C calling convention is really the only option. If you start out with stdcall now and eventually decide to create a Linux or MacOS version of your shared library, you would have to either distribute different VI libraries for Windows and non-Windows platforms, or bite the bullet and change the entire VI library to C calling convention for all users, likely introducing lots of crash problems for users who find it normal to grab a DLL copy from somewhere and copy it into the system or application directory to "fix" all kinds of real and imagined problems. At least there are tons of questionable links in the top Google hits offering DLL downloads to fix and improve the system whenever I google a particular DLL name. That, and so-called system performance scanners that offer to scan my system for DLL problems! :lol: I've never tried them, but I suspect 99% of them do nothing really useful, either containing viruses and trojans or trying to scare the user into downloading the "improved" program that can also fix the many "found" issues, of course for a fee in hard currency.

Link to comment
C calling convention it is and we won't try to get fancy with memory management.

Thank you guys, I think we are going to learn a lot (probably more than what we wanted/expected) about DLLs ;)

And make sure they supply you with both the 32-bit and 64-bit versions. Most suppliers think that 32-bit alone is sufficient since 32-bit software can be used in Windows. However, LabVIEW 64-bit cannot load 32-bit libraries!

Edited by ShaunR
Link to comment
And make sure they supply you with both the 32-bit and 64-bit versions. Most suppliers think that 32-bit alone is sufficient since 32-bit software can be used in Windows. However, LabVIEW 64-bit cannot load 32-bit libraries!

 

Wait, I am building the DLL, does this mean that I have to build two versions of the DLL, one in a LabVIEW 32 bit version and one in a LabVIEW 64 bit version?

Link to comment
Wait, I am building the DLL, does this mean that I have to build two versions of the DLL, one in a LabVIEW 32 bit version and one in a LabVIEW 64 bit version?

Well, you don't have to. But if you don't, then those with LabVIEW 64-bit won't be able to use it (that's assuming you are only supporting Windows ;) ).

 

You are much better off leaving the LabVIEW code as LabVIEW code for LabVIEW users (then it will work on all platforms, including Linux, Mac, etc.), just compiling a 32-bit DLL for the non-LabVIEW people, and worrying about 64-bit when someone (non-LabVIEW) asks/pays for it.

Edited by ShaunR
Link to comment
Well, you don't have to. But if you don't, then those with LabVIEW 64-bit won't be able to use it (that's assuming you are only supporting Windows ;) ).

 

You are much better off leaving the LabVIEW code as LabVIEW code for LabVIEW users (then it will work on all platforms, including Linux, Mac, etc.), just compiling a 32-bit DLL for the non-LabVIEW people, and worrying about 64-bit when someone (non-LabVIEW) asks/pays for it.

 

How can I test the DLL? We wanted to use it in our LabVIEW code so we would have a single DLL for everyone and like I said earlier, to make it easier to replace in the future if they decided to build a DLL in C. But it seems, from what I am reading, that this will be more pain than gain, right?

Link to comment
How can I test the DLL? We wanted to use it in our LabVIEW code so we would have a single DLL for everyone and like I said earlier, to make it easier to replace in the future if they decided to build a DLL in C. But it seems, from what I am reading, that this will be more pain than gain, right?

 

There is no free ride! A DLL/shared library is always platform-specific, and that means CPU architecture, OS, and bitness. All three have to match for the shared library to even be loadable. That is why distributing a LabVIEW-written driver as a shared library is probably one of the worse ideas one can have. You get the same effect as when distributing VIs without diagrams, because that is basically what is inside the shared library. And no, unfortunately you can't leave the diagrams intact inside the DLL and hope it will still work when loaded into a different version of LabVIEW even though the bitness or OS doesn't match. The DLL still executes in the context of the run-time engine, which has no compiler or even the possibility of loading the diagram into memory.

 

The most user-friendly approach is to distribute the instrument driver as LabVIEW source (I personally consider instrument drivers distributed as a DLL/shared library a compromise at best, and loathe it), create a shared library from it for the non-LabVIEW users, and worry about OS/bitness versions and such as requests come in. There won't be any way around creating special versions of your test program that access the DLL instead of the native driver, for testing the shared library version. The upside is that debugging any driver-related issues during testing is MUCH easier when you leave everything as diagrams, and only check after the final build that it also works as a DLL.

 

 

For everyone? Including Linux, Mac, Pharlap and VxWorks? If you are going to support all the platforms that LabVIEW supports, then you will need 6 dynamic libraries, and you can't use LabVIEW to create some of them. Two are just for Windows.

 

Fortunately the only one that cannot be created by LabVIEW is the VxWorks shared library! But I really echo Shaun's comments. If you have any chance to avoid the shared library for your LabVIEW users, you save yourself a lot of pain and sweat, and make your LabVIEW users much happier too. Building multiple shared libraries after every modification of your LabVIEW code is no fun at all. And LabVIEW only creates shared libraries for the platform it is running on, so you need as many (virtual) OS/LabVIEW installations as the platforms you want to support, and you have to test each and every one of them after each build.

Link to comment
How can I test the DLL? We wanted to use it in our LabVIEW code so we would have a single DLL for everyone and like I said earlier, to make it easier to replace in the future if they decided to build a DLL in C. But it seems, from what I am reading, that this will be more pain than gain, right?

For everyone? Including Linux, Mac, Pharlap and VxWorks? If you are going to support all the platforms that LabVIEW supports, then you will need 6 dynamic libraries, and you can't use LabVIEW to create some of them. Two are just for Windows.

 

However, if you are committed to making the LabVIEW driver dependent on a dynamic library (which is the case if you plan to replace it later with a native C implementation), then you are unavoidably making a rod for your own back. Avoid dynamic libraries in LabVIEW if you can; here be monsters (at least you didn't say you wanted .NET or ActiveX... :D ).

Edited by ShaunR
Link to comment

We were planning on wrapping the DLL with the VIs and not exposing our users to the pain... but basically what you are telling me, is that from now on, I would have to keep building a new version of the LabVIEW driver for each version of LabVIEW, because the DLL would be version specific.

 

We will continue to develop the driver in LabVIEW, and hopefully we get to work soon with the C developer who will be writing the examples. Maybe when they start working with our DLL built in LabVIEW, they will decide to make their own DLL in C...

 

I think trying to have a single source was a nice thought but it will be more pain than gain.

 

Thanks for all your help.

Fab

Link to comment
We were planning on wrapping the DLL with the VIs and not exposing our users to the pain... but basically what you are telling me, is that from now on, I would have to keep building a new version of the LabVIEW driver for each version of LabVIEW, because the DLL would be version specific.

 

I think I ought to clarify this. I assume you came to this conclusion from Rolf's comparison with diagram-removed VIs. It's not actually as bad as that: dynamic libraries in themselves aren't so much version-specific, but they are platform-specific.

 

A dynamic library can be loaded in any version of LabVIEW with a caveat.

 

IF the library was written purely in C, you could load it in any version of LabVIEW, and you wouldn't need the LV run-time for non-LabVIEW users (this you know).

 

If you create the library using the LabVIEW build process, then the user should have that version of the run-time. The main reason for this, however, has more to do with the supported features that you may put in the dynamic library than with the compilation process (although NI tends to use different compilers with each version, so it is also a safer bet for that reason). Therefore it is possible to load a LabVIEW 2009 (32-bit) built library in an executable built in LabVIEW 2012 (32-bit) with the 2012 run-time, but it will not be possible to load a 2012-built one in 2009 with the 2009 run-time if you have used features that are unavailable. This just pushes the maintenance overhead downstream to the installer. Similarly, a dynamic library built in 2009 can be loaded in the IDE of, say, 2012. If you do this, however, you need to test, test, and test some more.

Edited by ShaunR
Link to comment
If you create the library using the LabVIEW build process, then the user should have that version of the run-time. The main reason for this, however, has more to do with the supported features that you may put in the dynamic library than with the compilation process (although NI tends to use different compilers with each version, so it is also a safer bet for that reason). Therefore it is possible to load a LabVIEW 2009 (32-bit) built library in an executable built in LabVIEW 2012 (32-bit) with the 2012 run-time, but it will not be possible to load a 2012-built one in 2009 with the 2009 run-time if you have used features that are unavailable. This just pushes the maintenance overhead downstream to the installer. Similarly, a dynamic library built in 2009 can be loaded in the IDE of, say, 2012. If you do this, however, you need to test, test, and test some more.

 

This is not entirely correct.  Let's say you have a LV 2009 32-bit built library (i.e. a .dll).  You always need a 2009 32-bit run-time engine to load this!  The one exception is if you're in the LV 2009 editor on Windows, but in that case you should have the run-time engine installed anyway so it's a moot point.

 

It is not the case that a LV 2012 run-time engine can load a LV 2009 built library, no matter what features the built library uses. The same is true vice versa - the versions have to match for it to be able to load. (although 2009 SP1 and 2009 count as the same version for these purposes)

Link to comment
This is not entirely correct.  Let's say you have a LV 2009 32-bit built library (i.e. a .dll).  You always need a 2009 32-bit run-time engine to load this!  The one exception is if you're in the LV 2009 editor on Windows, but in that case you should have the run-time engine installed anyway so it's a moot point.

 

It is not the case that a LV 2012 run-time engine can load a LV 2009 built library, no matter what features the built library uses. The same is true vice versa - the versions have to match for it to be able to load. (although 2009 SP1 and 2009 count as the same version for these purposes)

Can you expand on that since that has not been my experience.

 

Are we talking about MSVC dependency linking being the reason or is there something else.

 

......later, after playing a bit......

 

 

So that's pretty definitive. It looks like it checks. But I would still like to understand what the issues are i.e. what makes a LabVIEW dll different from a C dll apart from feature support

Edited by ShaunR
Link to comment
This is not entirely correct.  Let's say you have a LV 2009 32-bit built library (i.e. a .dll).  You always need a 2009 32-bit run-time engine to load this!  The one exception is if you're in the LV 2009 editor on Windows, but in that case you should have the run-time engine installed anyway so it's a moot point.

 

It is not the case that a LV 2012 run-time engine can load a LV 2009 built library, no matter what features the built library uses. The same is true vice versa - the versions have to match for it to be able to load. (although 2009 SP1 and 2009 count as the same version for these purposes)

 

Thanks for clarifying, Greg. I was pretty sure this was the case, but started to wonder after Shaun's reply.

 

Other than that, however, I fully agree with Shaun. DLLs are not evil, but they are more complicated in terms of maintenance and distribution, since you need one DLL/shared library for every potential target OS, and if the DLL is a LabVIEW DLL it gets even a little more involved. For that reason, distributing LabVIEW-created DLLs for LabVIEW users is mostly a pain in the ass and will likely annoy the LabVIEW users too, as they can't look into the driver and debug it if the need arises. Distributing a LabVIEW DLL driver for non-LabVIEW users is, however, a possibility, although the fact that one needs the correct LabVIEW runtime installed is probably going to cause some sputtering by some C/C++/whatever users.

 

 

Can you expand on that since that has not been my experience.

 

Are we talking about MSVC dependency linking being the reason or is there something else.

 

......later, after playing a bit......

 

[attached screenshot: Untitled.png]

 

So that's pretty definitive. It looks like it checks. But I would still like to understand what the issues are i.e. what makes a LabVIEW dll different from a C dll apart from feature support

 

 

 

Hmm, could you clarify a bit what you were trying to do there? It's not easy to guess what you are trying to demonstrate from this message box, and that makes it very hard to come up with ideas to explain the behavior you see. To me this looks like a LabVIEW 2012 generated shared library that you tried to load on a computer that does not have the 2012 runtime engine installed.

Link to comment
So that's pretty definitive. It looks like it checks. But I would still like to understand what the issues are i.e. what makes a LabVIEW dll different from a C dll apart from feature support

 

I wasn't around when we made this decision, but I would guess the rationale was something like the following:

 

Backwards compatibility would be a big burden for us. Every time we made a change to the execution system, we would have to consider how older code would behave with the change.  It would increase our testing burden and make execution bugs (the worst kind of bugs!) more likely.  It would make some kinds of big architectural changes (like the DFIR compiler rewrite) even more scary, and we'd be less likely to take on that risk.  It would make the run-time engine bigger.

 

Now the C runtime is backwards compatible (I think?), but I'd imagine they aren't adding as many new features as we are.  The pain is also eased because you get a C runtime installed with the operating system.

Link to comment
Now the C runtime is backwards compatible (I think?), but I'd imagine they aren't adding as many new features as we are.  The pain is also eased because you get a C runtime installed with the operating system.

 

The C runtime is mostly backwards compatible. There have been hiccups in the past with both the MS VC C runtime and the GCC C library. MS "solved" the problem by making the MS C runtime an SxS dependency, which loads whatever version of the C runtime library was used to compile each executable module (DLL and EXE), and in that way created a huge new problem. If a module was compiled in a different VC version, it will load a different C runtime version into the process, and havoc begins if you start passing any C runtime objects between those modules. This includes heap pointers, which cannot be freed in a different C runtime library than the one they were created in, and shows that even when using just the C runtime, you have to make sure to always allocate and destroy memory objects from the same C runtime scope.

More fundamental things like file descriptors are a problem too. Basically, anything that has a file descriptor in the function interface will completely fail if made to operate on objects that were created in a different C runtime library. Exception handling is another area that changes significantly with every Visual C version and can have nasty effects if it gets mixed.

 

This is all rather painful, and it also seems mostly unnecessary when you consider that almost everything in the MS C runtime library ultimately maps to WinAPI calls at some point. For themselves, they avoided this problem by making all of their Windows tools link to the original msvcrt.dll, while declaring it private after Visual C 6. The only way to use msvcrt.dll instead of msvcrtxx.dll is to use either Visual C 6 or the older WinDDK compiler toolchains.

Link to comment
I wasn't around when we made this decision, but I would guess the rationale was something like the following:

 

Backwards compatibility would be a big burden for us. Every time we made a change to the execution system, we would have to consider how older code would behave with the change.  It would increase our testing burden and make execution bugs (the worst kind of bugs!) more likely.  It would make some kinds of big architectural changes (like the DFIR compiler rewrite) even more scary, and we'd be less likely to take on that risk.  It would make the run-time engine bigger.

 

Now the C runtime is backwards compatible (I think?), but I'd imagine they aren't adding as many new features as we are.  The pain is also eased because you get a C runtime installed with the operating system.

 

OK. I played a bit more with yours and Rolf's comments in mind.

 

I will modify my original statement to

 

If you create the library using the LabVIEW build process, then the user should have that version of the run-time. The main reason for this, however, has more to do with the supported features that you may put in the dynamic library than with the compilation process (although NI tends to use different compilers with each version, so it is also a safer bet for that reason). Therefore it is possible to load a LabVIEW 2009 (32-bit) built library in an executable built in LabVIEW 2012 (32-bit) with the 2012 run-time, as long as you have BOTH run-time engines installed, but it will not be possible to load a 2012-built one in 2009 with the 2009 run-time if you have used features that are unavailable. This just pushes the maintenance overhead downstream to the installer. Similarly, a dynamic library built in 2009 can be loaded in the IDE of, say, 2012, with the appropriate run-times. If you do this, however, you need to test, test, and test some more.

 

 

Dynamic libraries, however, are still nowhere near as bad as diagram-less VIs (LabVIEW dynamic libraries being slightly worse from a deployment perspective than C ones, it seems).

Link to comment
