
LabVIEW memory management: different from C?



Hi everybody,

In our quest to optimize our code, we are trying to interface variables declared in LabVIEW with C/C++ processing code (a DLL).

Copy_adr_BD.png

 

1 - In my example, we had fun declaring a U32 variable in LabVIEW, then creating a pointer in C and assigning it the value we wanted (a copy), and finally returning the value to LabVIEW.
In this case everything works correctly.

Here is the C code:

Quote

DLL_EXPORT unsigned int *create_copy_adress_Uint(unsigned int val)
{
    unsigned int *adr;
    adr = (unsigned int*)malloc(10);
    *adr = val;
    return adr;
}


Hence my question: am I racking my brain unnecessarily? Does this set of functions already exist in the LabVIEW DLL? (I have a feeling one of you will tell me...)

Copy_adr_fp.png

 

 

Get_adress.png

2 - In our second experiment (more interesting this time), we assign the address of the U32 variable declared in LabVIEW to our pointer; the idea now is to act directly from C on the variable declared in LabVIEW.
We read this address, then we try to manipulate the value of this variable via the pointer in C, and it does not work!

Why? Or did I make a mistake in my reasoning?

 

Get_adress_fp.png

 

 

 

This experiment aims to understand, at the C level, how memory is managed for data declared in LabVIEW. The idea would then be to do the same thing with U32 or SGL arrays.

3 - When I declare a variable in LabVIEW, how is it managed in memory? Is it done like in C/C++?

4 - Last question: the MoveBlock function gives me the value at a pointer (a read); which function allows me to write to a pointed-to cell?

 

Quote

 

DLL_EXPORT unsigned int *get_adress_Uint(unsigned int val)
{
    unsigned int *adr;
    adr = (unsigned int*)malloc(10);
    adr = &val;
    return adr;
}

DLL_EXPORT unsigned int adress_to_int(unsigned int *adr)
{
    return *adr;
}

DLL_EXPORT void set_int_to_adress(unsigned int *adr, unsigned int val)
{
    *adr = val;
}

 

 

I've attached the source code as a zip file.

 

 

 

DLL pointeur.zip

Edited by Youssef Menjour
Link to comment

You are playing with fire.

Ownership is key. DO NOT manipulate pointers in LabVIEW, period!

You either manipulate data by passing it to a DLL (like an array, where LabVIEW owns the data) or you provide functions to manipulate data (where the DLL owns the data; where is your freeing of the pointer allocated inside the DLL?). LabVIEW has no ability to know what a DLL is doing with memory, and vice versa.

You must also take the pointer size into account (32-bit LabVIEW or 64-bit LabVIEW). For some types this is handled for you (arrays, for example); for others you will want to use the Unsigned/Signed Pointer-sized Integer (for opaque pointers) and pass that BY VALUE to other functions.

Look at the Function Prototype in the dialog. You will see the C equivalent of the call. Note that you do not seem to be able to do things like int32_t myfunc(&val). Instead you have to use "Pointer to Value" and it will look like int32_t myfunc(int32_t *val).
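For illustration, a minimal sketch of that ownership pattern (the Session type and function names are hypothetical, not an existing API): the DLL allocates and frees everything itself, and LabVIEW only ever carries the pointer around as an opaque pointer-sized integer, passed by value.

#include <stdlib.h>

typedef struct { unsigned int value; } Session;

/* CLFN: return type configured as Unsigned Pointer-sized Integer */
DLL_EXPORT Session* session_create(unsigned int initial)
{
    Session* s = (Session*)malloc(sizeof(Session));
    if (s != NULL)
        s->value = initial;
    return s;
}

/* CLFN: the integer is passed BY VALUE; the diagram never dereferences it */
DLL_EXPORT unsigned int session_get(Session* s)
{
    return (s != NULL) ? s->value : 0;
}

/* The DLL that allocated the memory is the one that frees it */
DLL_EXPORT void session_destroy(Session* s)
{
    free(s);
}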

If you are trying to manipulate pointers, you are doing it wrong, and it will crash at some point, taking the whole IDE with it.

  • Like 1
Link to comment

Hello ShaunR,

First of all, thank you for taking the time to answer me.
In view of your answer I will recontextualize the question.

First of all, I agree that you have to initialize and free a pointer correctly. This is an example meant to help understand memory management under LabVIEW.

I want to understand how it works. I don't agree with your answer forbidding pointer manipulation. Once the subject is mastered, there is no reason to be afraid of it.

What I want to understand is how LabVIEW stores a variable in memory when it is declared.
Is it strictly like in C/C++? Let's take an example with an array of U8.

Because in that case, by manipulating the pointers properly, it becomes interesting to declare an array variable in LabVIEW, transmit its address to a DLL (first difficulty), manipulate it as needed in C/C++, and then return to LabVIEW to continue the flow.

Why do I want to do this?
Because it seems (I say seems, because it's probably not necessary) that LabVIEW operations are slow, too slow for my application!

As you know, we are working on the development of a deep-learning library, and it is computation-hungry, so we need to accelerate it with multithreaded C/C++ libraries (unless an equivalent exists in LabVIEW, but I doubt that for the moment).

Just to give you a comparison: if we settle for using LabVIEW normally, we are 10 times as slow as Python!

Is it possible to pipeline a loop in LabVIEW? Is it possible to merge nested loops in LabVIEW?

Finally, about data transfer: I understand perfectly that, in terms of safety, copying data to the DLL, using it, and then returning it to LabVIEW is tempting, but the worry is the data-transfer delay. That is what we want to avoid!

I think it's stupid to copy data that already exists in memory; why not use it directly (provided one masters the subject)? The copies and transfers cost us time.

Can you please give me some answers?

Thank you very much 

 

Link to comment

You have some serious undefined behaviour in your C code.

In create_copy_adress_Uint you dereference an uninitialized pointer, writing to a random location. In get_adress_Uint you return the address of a stack variable, which becomes invalid as soon as the function returns. You are going to experience lots of crashes.

Have you looked at the configuration options for the Call Library node? You can just pass parameters by pointer. Passing an array by "array data pointer" will let you manipulate the data as in C (but do not try to free that memory). You do not need to make a copy. Be mindful of the lifetime: that pointer is only valid during the function call and might be invalidated later, so don't keep it around after your function returns.
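As a sketch of that approach (the function name is made up for the example): the array parameter is configured as an Array Data Pointer, the length is passed separately, and the data is modified in place only for the duration of the call.

#include <stdint.h>

/* CLFN: arr configured as "Array Data Pointer", len wired from Array Size.
   The pointer is only valid until this function returns: work in place,
   do not free it, do not keep it. */
DLL_EXPORT void scale_u32_array(uint32_t *arr, int32_t len, uint32_t factor)
{
    for (int32_t i = 0; i < len; i++)
        arr[i] *= factor;
}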

If you also want to resize LabVIEW data structures, there are memory manager functions to do that. Pass the array by handle and use DSSetHandleSize or NumericArrayResize.

Examples for interfacing with DLLs are here: examples\Connectivity\Libraries and Executables\External Code (DLL) Execution.vi

 

Edited by cordm
  • Like 1
Link to comment
4 hours ago, Youssef Menjour said:

I want to understand how it works. I don't agree with your answer forbidding pointer manipulation. Once the subject is mastered, there is no reason to be afraid of it.

Be afraid; be very afraid :D 

4 hours ago, Youssef Menjour said:

What I want to understand is how LabVIEW stores a variable in memory when it is declared.

Generally, there is no concept of a pointer in LabVIEW. LabVIEW is a managed environment, so it is more like .NET. You don't know where a value is stored or even how much memory is used to store it.

4 hours ago, Youssef Menjour said:

Because in that case, by manipulating the pointers properly, it becomes interesting to declare an array variable in LabVIEW, transmit its address to a DLL (first difficulty), manipulate it as needed in C/C++, and then return to LabVIEW to continue the flow.

The CLFN will do that out-of-the-box

4 hours ago, Youssef Menjour said:

Is it possible to pipeline a loop in LabVIEW?

Yes.

4 hours ago, Youssef Menjour said:

I think it's stupid to copy data that already exists in memory; why not use it directly?

Because you don't know where it is for the lifetime of the variable.

  • Thanks 2
Link to comment

You need to understand what managed code means. In .NET that is a very clear and well-defined term and has huge implications. LabVIEW is a fully managed environment too, and all the same basic rules apply. C, on the other hand, is completely unmanaged. Who owns a pointer, who is allowed to do anything with it (even just reading from it), and when, is completely up to contracts that each API designer defines himself. And if you as caller don't adhere to that contract to the letter, no matter how brain-damaged or undocumented it is, you are in VERY DEEEEEEEP trouble.

LabVIEW (and .NET and (D)COM and others like them) all have a very well-defined management contract. Well-defined doesn't necessarily mean that it is simple to understand, or that there is lengthy documentation spelling out every detail; not even .NET has exhaustive documentation. Much of it is based on a few basic rules and a set of APIs to use, which together guarantee that the management of memory objects is fully consistent and protected throughout the lifetime of each of those objects. Mixing and matching those ideas between environments is a guaranteed recipe for disaster. So is not understanding them as you pass data around!

For other platforms such as Linux and macOS there also exist certain management rules, and they are typically specific to the API or group of APIs used. For instance, it makes a huge difference whether you use the old (and mostly deprecated) Carbon APIs or the modern Cocoa APIs. They share some common concepts, and some of their data types are even transferable between the two without invoking costly environmental conversions, but that is where the common ground ends. Linux is, true to its heritage, a collection of very different ideas and concepts; each API tends to follow its own specific rules.

Much of it is very logical once you understand the principles of safe, managed memory. Until then it all looks like incomprehensible magic, and you are much better off staying away from trying to optimize memory copies and similar things to squeeze out a little more performance.

One of the strengths of LabVIEW is that it is very difficult to write code that crashes your program. That is, until you venture into accessing external code. Once you do that, your program is VERY likely to crash, randomly or not so randomly, unless you fully understand all the implications and intricacies of working that way.

Quote

I think it's stupid to copy data that already exists in memory; why not use it directly?

The pointer from a LabVIEW array or string passed to the Call Library Node is only guaranteed to exist for the time your function runs. Once your function returns control back to LabVIEW, it reserves the right to reallocate, resize, delete, or reuse that memory buffer for anything it deems necessary. This part is VERY important in allowing LabVIEW to optimize memory copies of large buffers.

If you want a buffer that you control yourself, you have to allocate it yourself explicitly and pass its reference around to wherever it is needed. But do not expect LabVIEW to deallocate it for you. As far as LabVIEW is concerned, it does not know that that variable is a memory buffer, nor when it is no longer needed, nor which heap management routines it should use to properly deallocate it. And don't expect LabVIEW to be able to directly dereference the data in that buffer to display it in a graph, for instance. As far as LabVIEW is concerned, that buffer is simply a scalar integer, nothing more than a magic number that could mean how many kilometers away the moon is, how many seconds the universe has existed, how many atoms fit in a cup of tea, or anything else you fancy.
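A minimal sketch of that self-allocated buffer pattern (all names hypothetical): the DLL owns the buffer, the diagram treats the return value as nothing more than that magic number, and a dedicated function does the "write to a pointed cell" asked about in question 4.

#include <stdlib.h>

/* LabVIEW sees the return value only as an opaque pointer-sized integer.
   LabVIEW will never free this for you: buffer_free() must be called. */
DLL_EXPORT unsigned int* buffer_alloc(size_t count)
{
    return (unsigned int*)calloc(count, sizeof(unsigned int));
}

/* Write one element through the pointer (no bounds check: the caller
   must stay below the count used at allocation) */
DLL_EXPORT void buffer_write(unsigned int *buf, size_t index, unsigned int val)
{
    buf[index] = val;
}

DLL_EXPORT unsigned int buffer_read(unsigned int *buf, size_t index)
{
    return buf[index];
}

DLL_EXPORT void buffer_free(unsigned int *buf)
{
    free(buf);
}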

Or you pass the native LabVIEW buffer handle into the Call Library Node and use the LabVIEW memory manager functions if you have to resize or deallocate it. That way you can use LabVIEW buffers and adhere to the LabVIEW management contract. But it means that that part of your external code can only run when called from LabVIEW: other environments do not know about these memory management functions and consequently cannot provide compatible memory buffers to pass into your functions. And definitely don't ever store such handles somewhere in your external code to access them asynchronously from elsewhere once your function has returned control to LabVIEW. That handle is only guaranteed to exist for the duration of your function call, as mentioned above. LabVIEW remains in control of it and will do with it whatever it pleases once you return control from your function call to the LabVIEW diagram. It could reuse it for something entirely different and your asynchronous access will destroy its contents, or it could simply deallocate it and your asynchronous access will reach into nirvana and send your LabVIEW process into "An Access Violation has occurred in your program. Save any data you may need and restart the program! Do it now, don't wait and don't linger, your computer may start to blow up otherwise!" 😀

And yes, one more piece of advice. Once you start to deal with external code anywhere and in any way, don't come here or to the NI forum and ask why your program crashes or starts to behave very strangely, and whether there is a known LabVIEW bug causing it. Chances are about 99.25678% that the reason for that behaviour is your external code or the interface you created for it with Call Library Nodes. If your external code tries to be fancy and deals with memory buffers, that chance increases by several magnitudes! So be warned!

Quote

Just to give you a comparison: if we settle for using LabVIEW normally, we are 10 times as slow as Python!

In that case you are doing something fundamentally wrong. Python is notoriously slow, due to its interpreted nature and its concept of everything being an object. There are no native arrays; an array is represented as a list of objects. To get around that, NumPy uses wrapper objects around externally managed memory buffers, which allow a consecutive representation of an array in one single memory object and fast indexing into it. That is what lets NumPy routines be relatively fast when operating on arrays; without it, any array-like manipulation tends to be dog slow. LabVIEW is fully compiled and uses many optimizations that let it beat Python performance with both hands tied behind its back. If your code runs so much slower in LabVIEW, you have obviously done something wrong, and not just tied its hands behind its back but gagged and hogtied it too. Things that can cause this are, for instance, Build Array nodes inside large loops (if we are talking about LabVIEW diagram code) or bad external code management (if you pass large arrays between LabVIEW and your external code).

The experiments you show in your post may be interesting exercises, but they definitely go astray in trying to solve such issues.

Edited by Rolf Kalbermatter
  • Like 2
Link to comment
  • 2 weeks later...

Rolf, if I understood you correctly, if I do this:

DLL_EXPORT unsigned int* Tab1D_int_Ptr(unsigned int* ptr)
{
    return ptr;
}

with the data coming from LabVIEW, it means the memory address could be released at any time by LabVIEW? (That's logical.)

--> Method 2 is a solution in that case (pointer created in LabVIEW with DSNewPtr + MoveBlock).

 

On 5/4/2022 at 1:15 PM, cordm said:

 

If you also want to resize LabVIEW data structures, there are memory manager functions to do that. Pass the array by handle and use DSSetHandleSize or NumericArrayResize.

Examples for interfacing with DLLs are here: examples\Connectivity\Libraries and Executables\External Code (DLL) Execution.vi

 


I have another question: for arrays, what is the difference between passing by pointer and passing by handle?

I mean, with the handle method a struct implicitly gives the array length to the C/C++ side, but is there another difference? (Ugly structure syntax 🥵)

(Many thanks for the example, cordm! 👍)

 

The image is a VI snippet.

 

On 5/4/2022 at 2:04 PM, ShaunR said:

Be afraid; be very afraid :D 

ShaunR, I'm not far from doing what I want 😉

 

Pointeur.dll

memory management.png

Edited by Youssef Menjour
Link to comment

If you want to manipulate array pointers directly on the diagram, why not use the convenient memory manager functions such as DSNewPtr / DSNewPClr? You may call them through the same CLF Nodes you already use for the MoveBlock function. Just allocate enough memory with DSNewPtr, copy the array contents there with MoveBlock, and then do what you want. Or, if you prefer hacky ways, you could use the internal ArrayMemInfo node and process arrays "in place", without getting extra copies on data transfers. In the latter case it's necessary to realize that the pointer will stay 'alive' only as long as you pull the array wire through the structures. The moment you decide to drop that wire somewhere, LabVIEW will want to optimize your code, and the pointer becomes invalid or occupied by something else in the following structures.
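For reference, the same allocate/copy/dispose sequence written in C against the LabVIEW memory manager (declarations per extcode.h from LabVIEW's cintools directory; treat the exact signatures as an assumption and check your header) would look roughly like this:

#include "extcode.h"   /* UPtr, MgErr, DSNewPtr, DSDisposePtr, MoveBlock */

/* Allocate, copy in, (use), copy out, dispose: the same sequence the
   diagram performs with CLF Nodes calling into LabVIEW itself */
MgErr copy_through_pointer(const uInt32 *src, uInt32 *dst, int32 n)
{
    UPtr p = DSNewPtr(n * sizeof(uInt32));
    if (p == NULL)
        return mFullErr;                                    /* out of memory */
    MoveBlock((ConstUPtr)src, p, n * sizeof(uInt32));       /* src -> buffer */
    /* ... operate on the buffer here ... */
    MoveBlock((ConstUPtr)p, (UPtr)dst, n * sizeof(uInt32)); /* buffer -> dst */
    return DSDisposePtr(p);
}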

But, as has been said above, the native yellow LV nodes should already be optimized and should satisfy your needs in most cases. If not, then process your array entirely in the DLL and give the array back to LV only when it's completely done. As long as you are in the DLL doing something with the data, the array pointer should be fine.

upd: You changed your snippets and seem to use Memory Manager allocation functions now. Still I don't get the grand design of these experiments. Maybe I should wait a little. 😀

Edited by dadreamer
Link to comment
3 hours ago, Youssef Menjour said:

Pointer created in LabVIEW with DSNewPtr + MoveBlock

Yes. LabVIEW owns it, not the DLL.

3 hours ago, Youssef Menjour said:

ShaunR, I'm not far from doing what I want 😉

Ready to take the training wheels off (orange nodes) and run in any thread (yellow nodes)? :)

2 hours ago, dadreamer said:

Still I don't get the grand design of these experiments.

Me neither. I can understand wanting to figure out how to interface to external DLLs, but this seems a lot of effort for no reward. Passing a LabVIEW array to a DLL would be more useful as a learning exercise, and a lot easier. Maybe it's just that he is coming from C/C++ and so is trying to do things the way he always has. I think we (as LabVIEW programmers) often forget what an enormous paradigm shift it actually is.

Edited by ShaunR
Link to comment

Pointers are pointers. Whether you use DSNewPtr() and DSDisposePtr() or malloc() and free() doesn't matter too much, as long as you stay consistent. A pointer allocated with malloc() has to be deallocated with free(), a pointer from DSNewPtr() must be deallocated with DSDisposePtr(), and a pointer allocated with HeapAlloc() must be deallocated with HeapFree(), etc. They may in the end all come from the same heap (likely the Windows heap), but you do not know that, and even if they do, the pointer itself may be, and often is, different, since each memory manager layer adds a little bookkeeping of its own to manage its pointers better.

To make matters worse, if you resolve to use malloc() and free(), you always have to do the corresponding operations in the same compilation unit. Your DLL may be linked with gcc's C library 6.4 and the calling application with the MS C Runtime 14.0, and while both have a malloc() and a free() function, they absolutely and certainly will not operate on the same heap.

Pointers are non-relocatable as far as LabVIEW is concerned, and LabVIEW only uses them for clusters and internal data structures. All variable-sized data on the diagram, such as arrays and strings, is ALWAYS allocated as a handle. A handle is a pointer to a pointer, and the first N int32 elements in the data buffer are the dimension sizes, followed directly by the data (memory-aligned if necessary), N being the number of dimensions. Handles can be resized with DSSetHandleSize() or NumericArrayResize(), but the size of the handle does not have to be the same as the size elements in the array that indicate how many elements the array holds. Obviously the handle must always be big enough to hold all the data, but if you change the size element in an array to indicate that it holds fewer elements than before, you do not necessarily have to resize the handle to that smaller size. If the change is big you absolutely should do it anyhow, but if you reduce the array by only a few elements you can forgo the resize call.
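In C, the layout just described is conventionally declared like this (a sketch: the type names are made up, though they follow the usual extcode.h idiom, and platform alignment of the data is glossed over):

#include "extcode.h"

/* 1D array of int32: handle -> pointer -> { dimSize, data... } */
typedef struct {
    int32 dimSize;      /* number of elements the array logically holds */
    int32 elt[1];       /* first element; the rest follow contiguously */
} I32Array, *I32ArrayPtr, **I32ArrayHdl;

/* A 2D array simply has one size element per dimension before the data */
typedef struct {
    int32 dimSizes[2];  /* rows, columns */
    float64 elt[1];     /* data follows, aligned per platform rules */
} Dbl2DArray, **Dbl2DArrayHdl;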

There is NO way to return pointers from your DLL and have LabVIEW use them as arrays or strings, NONE whatsoever! If you want to return such data to LabVIEW, it has to be in a handle, and that handle has to be allocated, resized, and deallocated with the LabVIEW memory manager functions. No exception, no passing Go and collecting your salary, nada, niente, nothing! If you do it this way, LabVIEW can directly use that handle as an array or string, but of course what you do in C in terms of the data type in it, and the corresponding size element(s) in front of it, must match exactly. LabVIEW absolutely trusts that a handle is constructed the way it wants it, and makes painstakingly sure to always do it like that itself, so you had better do so too.

One speciality in that respect: LabVIEW explicitly allows for a NULL handle. This is equivalent to an "empty" handle with the size elements set to 0, and exists for performance reasons; there is little sense in invoking the memory manager and allocating a handle just to store in it that there is no data to access. So if you pass handle data types from your diagram to your C function, your C function should be prepared to deal with an incoming NULL handle. If you just blindly call DSSetHandleSize() on such a handle, it can crash, as LabVIEW may have passed in a NULL handle rather than a valid empty handle.

Personally I prefer to use NumericArrayResize() at all times, as it deals with this speciality properly and also accounts for the actual bytes needed to store the size elements, as well as any platform-specific alignment. A 1D array of 10 double values requires 84 bytes on Win32, but 88 bytes on Win64, since under Win64 the array data elements are aligned to their natural size of 8 bytes. When you use DSSetHandleSize() or DSNewHandle(), you have to account for the int32 size element and the possible alignment yourself. If you use

err = NumericArrayResize(fD, 1, (UHandle*)&handle, 10)

you simply specify in its first parameter that it is an fD (floatDouble) data type array, that there is 1 dimension, pass the handle by reference, and give the number of array elements it should have. If the array was a NULL handle, the function allocates a new handle of the necessary size. If the handle was a valid handle instead, it resizes it to be big enough to hold the necessary data.

You still have to fill in the actual size of the array after you have copied the actual data into it, but at least the complication of calculating how big that handle should be is taken out of your hands.
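Putting that together, a function that hands a freshly filled 1D array of doubles back to LabVIEW through a handle might look like this (a sketch following the description above, with minimal error handling; the parameter is assumed to be configured so that a pointer to the array handle is passed, letting a NULL handle be replaced):

#include "extcode.h"

typedef struct {
    int32 dimSize;
    float64 elt[1];
} DblArray, **DblArrayHdl;

DLL_EXPORT MgErr fill_ramp(DblArrayHdl *arr, int32 n)
{
    /* Safe for an incoming NULL handle: NumericArrayResize allocates it,
       and it accounts for the size element and platform alignment too */
    MgErr err = NumericArrayResize(fD, 1, (UHandle*)arr, n);
    if (err != mgNoErr)
        return err;
    for (int32 i = 0; i < n; i++)
        (**arr)->elt[i] = (float64)i;
    (**arr)->dimSize = n;   /* set the logical size only after filling */
    return mgNoErr;
}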

Of course you can also always go the traditional C way. The caller MUST allocate a memory buffer big enough for the callee to work with and pass its pointer down to the callee, which then writes something into it; after the function returns, the data is in that buffer. The way that works in LabVIEW is that you MUST make sure to allocate the array or string prior to calling the function. Initialize Array is a good function for that, but you can also use the Minimum Size configuration in the Call Library Node for array and string parameters.

LabVIEW allocates a handle, but when you configure the parameter in the Call Library Node as a data pointer, LabVIEW passes the pointer portion of that handle to the DLL. For the duration of that function, LabVIEW guarantees that the pointer stays put in memory and won't be reused anywhere else, moved, deallocated, or anything like that (unless you checked the constant checkbox in the Call Library Node for that parameter, in which case LabVIEW takes that as a hint that it can also pass the handle in parallel to other functions that are likewise marked as not modifying it). LabVIEW has no way to prevent you from writing into that pointer in your C function anyhow, but that is a clear violation of the contract you yourself set up when configuring the Call Library Node and telling LabVIEW that this parameter is constant. Once the function returns control to the LabVIEW diagram, that handle can get reused, resized, or deallocated at absolutely any time, and you should therefore NEVER EVER hold onto such a pointer beyond the point where you return control back to LabVIEW!
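And the traditional C way from the previous paragraph, as a sketch (the function is hypothetical): LabVIEW pre-allocates the array on the diagram (Initialize Array, or the Minimum Size setting of the parameter), and the DLL merely fills the caller's buffer.

#include <stdint.h>

/* CLFN: out configured as "Array Data Pointer"; the diagram must wire in
   an array already sized to at least n elements, because C cannot grow it */
DLL_EXPORT void fill_squares(uint32_t *out, int32_t n)
{
    for (int32_t i = 0; i < n; i++)
        out[i] = (uint32_t)i * (uint32_t)i;
}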

That's pretty much it. Simple as that, yet most people fail at it anyhow, repeatedly.

Edited by Rolf Kalbermatter
  • Like 2
Link to comment

Thanks Rolf for the explanation (I still need to digest it all).

14 hours ago, dadreamer said:

Still I don't get the grand design of these experiments. Maybe I should wait a little. 😀

The best way to acquire experience is to experiment! Reading some of you, you would think we were operating a nuclear plant 😆 --> worst case, LabVIEW crashes (we'll survive, really!!)

I am working on a project where I need fast array operations on the CPU (read, calculate, write):

Good news - the arrays are of fixed size (no pointer reallocation and no resizing).
Bad news - the arrays can be 1D, 2D, 3D or 4D (the access times with the native LabVIEW function palette are not satisfactory for our application --> we need to find a better solution).

By analogy, we assume that array access on a PC is limited in the same way as on an FPGA (there, the physical limit for accessing array data in read/write is two read and write ports per clock cycle, whatever the size of the array).

There is also the O(N) rule, which says that the access time (read/write) for array data is proportional to its size N --> I may be wrong here.

In any case, to improve the access time (read/write) of an array, a simple solution is to organize the data ourselves: an array is split into several arrays (pointers) to multiply the access speed --> O(N) becomes, in theory, O(N/n), and the number of 'ports' is multiplied by n.

We navigate in this "array" by addressing the right part (the right pointer).

Some will say: why not just split your array in LabVIEW and be done with it? --> Simply because navigating with pointers avoids unnecessary data copies at every level, and those copies cost us processing time.
We tested it, and we saw a noticeable difference!

In theory this is much more complex to manage, but it has the advantage of being faster for reading/writing the data, which is in fact the main problem.

 

1915049504_2Darray.png.b6179d1fa965eebfe3b9af7f2f727238.png

 

Now, why am I having fun with C/C++?
Simply in case we can't go fast enough on some operations: in that case we transfer the data via pointers (as I said, a well-managed pointer is the best solution, no copy) and use C/C++ libraries like Boost, which are optimized for certain operations.

MoveBlock is a very interesting function!
So the next step is to code and test 3D/4D arrays, and to be able, with only the primary pointer address, to navigate very quickly inside the arrays (recode Replace, recode Index, code the construction of the final array).

 

I found some old documentation and topics about memory management, and they helped me a lot. Thank you again, Rolf; I have seen many of your posts help a great deal.

Edited by Youssef Menjour
Link to comment

Generally, if you use an external library to do something because that library does things that LabVIEW can't: go right ahead!

If you write an external library to operate on multidimensional arrays, doing things on them that LabVIEW has native functions for:

You are totally and truly wasting your time. Your compiled C code may in some corner cases be a little faster, especially if you really know what you are doing at the C level, and I do mean REALLY knowing, not just hacking around until something works.

So sit back, relax, and think about where you actually need to pass data to your external Haibal library, and where you are simply wasting your time on premature optimization. So far your experiments look fine as pure educational exercises, but they serve very little purpose in optimizing the interface to a massive numerical library like Haibal is supposed to become.

What you need to do is design the interfaces between your library and LabVIEW in a way that passes data around by following as few rules as possible, but all of them VERY strictly. You cannot change how LabVIEW memory management works, and you likely cannot change how your external code wants its data buffers allocated and managed either. There is almost always some impedance mismatch between the two for any but the most simple libraries. The LabVIEW Call Library Node allows you to support some common C scenarios in the form of data pointers. In addition, it allows you to pass its native data to your C code, which every standard library out there simply has no idea what to do with. Here comes your wrapper shared library interface: it needs to manage this impedance mismatch in a way that is both logical throughout and still performant. Allocating pointers in your C code to pass back and forth across LabVIEW is a possibility, but you want to avoid it as much as possible. Such a pointer is an anachronism in terms of LabVIEW diagram code. It exposes internals of your library to the LabVIEW diagram, and in that way makes access possible that 99% of your users have no business attempting, nor are they able to understand what they are doing. And no, saying "don't do that" usually only helps with those who are professionals in software development. All the others very quickly believe they know better, and then the reports about your software misbehaving and being a piece of junk start pouring in.

Edited by Rolf Kalbermatter
  • Like 1
Link to comment
1 hour ago, Rolf Kalbermatter said:

If you write an external library to operate on multidimensional arrays, doing things on them that LabVIEW has native functions for:

You are totally and truly wasting your time. Your compiled C code may in some corner cases be a little faster, especially if you really know what you are doing at the C level, and I do mean REALLY knowing, not just hacking around until something works.

That's not strictly true. There are some standard situations where LabVIEW is extremely slow, and one of those is computational functions. As an example: hashing, encryption, et al. are orders of magnitude faster using OpenSSL than native LabVIEW.

image.png.51047a5864023fdbb316e9d125558a37.png

I don't know much about AI but I do know it is computationally intensive.

2 hours ago, Rolf Kalbermatter said:

All the others very quickly believe they know better, and then the reports about your software misbehaving and being a piece of junk start pouring in.

Amen to that :lol:

Link to comment

Cryptography and compression/decompression are algorithms that can be significantly faster when done in C, if you really know what you are doing. However, doing that usually requires a level of knowledge that simply rules out tinkering with it yourself (OK, I'm not talking about a relatively simple MD5 or SHA-something hash here, but real cryptography or big-number arithmetic).

And then you are in the realm of using existing libraries rather than developing your own (see your OpenSSL example 🙂). These libraries typically have specific and often fairly well-defined buffer-in/buffer-out interfaces, which is what we are currently talking about. If you can use such an interface, things are trivial, as they work on standard C principles: the caller controls all allocations and has to provide all buffers, and that's it. Simple and quick, but not always convenient, as the caller may not even remotely know in advance what size of output buffer is needed. If you need to go there, things start to get hairy and you have to think about real memory management and who does what and when. One thing is clear: LabVIEW's management contract is very specific, and there is no way you can change that.

Either you follow the standard C route where the caller provides all necessary buffers, or you start to work with LabVIEW memory handles and follow their contract to the letter. The only other alternative is to develop your own memory management scheme and wrap each of these memory objects in a LabVIEW object whose diagram is password-locked, to keep out anyone insane enough to think they know better and tinker with it.

The fourth variant is to use semi-documented LabVIEW features such as external DVRs (there is some sort of minimalistic documentation in the CUDA examples that could be downloaded from the NI site many moons ago) or, even more obscure, truly undocumented features such as user refnums and similar things. Venturing into these is really only for the very brave. Implementing C callback functions properly is a piece of cake in comparison 🙂

Edited by Rolf Kalbermatter
Link to comment
7 minutes ago, Rolf Kalbermatter said:

Cryptography and compression/decompression are algorithms that can be significantly faster when done in C, if you really know what you are doing. [...]

We don't know what the OP's competence is in C/C++, only in LabVIEW. Additionally, he may have access to very competent C/C++ programmers within the company. All I'm saying is that, generally, LabVIEW is pretty slow when it comes to computational functions compared to C/C++. So if he thinks he can get better performance from a DLL, it's valid to try things out, or to figure out how to tell another C/C++ programmer what interface to their code would work best for him.

If I had 10 minutes in a room with the developers of OpenSSL 3, there would be a few choice words and ringing ears at the end of it :lol:

Link to comment
