bna08 Posted December 1, 2022 Report Share Posted December 1, 2022 (edited) In my application I need to get a few hundred bytes from a .NET C++/CLI DLL as fast as possible. This would be achievable by initializing an array in LabVIEW and then passing its pointer to the DLL so it could copy the data into the LabVIEW array directly. As far as I know this cannot be done. Or is there another way? Currently, I am achieving this behavior by using IMAQ GetImagePixelPtr.vi, which allocates a small IMAQ image (an array of pixels) and returns a pointer to this image. Afterwards, I pass the pointer to the DLL, write my data to it, and read the values back in LabVIEW. Is this too much of a hack? It seems to work OK. Edited December 1, 2022 by bna08 Quote Link to comment
ShaunR Posted December 1, 2022 Report Share Posted December 1, 2022 (edited) DSNewPtr can create a pointer to an array. It's in vi.lib\Utility\importsl. You can then use the GetValueByPointer XNode to retrieve the array (or MoveBlock if you need speed). Edited December 1, 2022 by ShaunR 1 Quote Link to comment
Rolf Kalbermatter Posted December 2, 2022 Report Share Posted December 2, 2022 (edited) 17 hours ago, bna08 said: In my application I need to get a few hundred bytes from a .NET C++/CLI DLL as fast as possible. This would be achievable by initializing an array in LabVIEW and then passing its pointer to the DLL so it can copy the data to the LabVIEW array directly. As far as I know this cannot be done. Or is there another way? Currently, I am achieving this behavior by using IMAQ GetImagePixelPtr.vi which allocates a small IMAQ image (array of pixels) and returns pointer to this image. Afterwards, I pass the pointer to the DLL, write my data to it and read the values back in LabVIEW. Is this too much of a hack? It seems to work OK. It's definitely a hack. But if it works it works; it may just be a really nasty surprise for anyone having to maintain that code after you move on. It would figure very high on my list of obscure coding. Shaun's solution is definitely a lot cleaner, without abusing an IMAQ image to achieve your goal. But!!! Is this pointer passed inside a structure (cluster)? If it is directly passed as a function parameter, there really is no reason to try to outsmart LabVIEW. Simply allocate an array of the correct size and pass it as an Array (correct data type), Pass as: Array Data Pointer, and you are done. If you want to keep this array in memory to avoid having LabVIEW allocate and deallocate it repeatedly, just keep it in a shift register (or feedback node) and loop it through the Call Library Node. The LabVIEW optimizer will then always attempt to reuse that buffer whenever possible (and if you don't branch that wire anywhere out of the VI or to functions that want to modify it, this is ALWAYS). Edited December 2, 2022 by Rolf Kalbermatter 1 Quote Link to comment
bna08 Posted December 5, 2022 Author Report Share Posted December 5, 2022 On 12/2/2022 at 3:37 PM, Rolf Kalbermatter said: Simply allocate an array of the correct size and pass it as an Array (correct data type), Pass as: Array Data Pointer and you are done. Thank you, Rolf. However, what do you mean by "pass it as an Array as Array Data Pointer"? Pass where? Into which function? I am calling .NET DLL functions with calls via .NET Invoke Node in LabVIEW. Passing as Array Data Pointer is possible when calling unmanaged DLL functions via Call Library Function which I cannot use in my case, or can I? Quote Link to comment
LogMAN Posted December 5, 2022 Report Share Posted December 5, 2022 It sounds as if you want to pass a .NET Array type by ref to your method. This should be possible by constructing an array in LabVIEW, for example, by using the To .NET Object function, and passing the instance by reference to your method (assuming that your method signature is by ref). If you want to avoid generics, you can also initialize your own array as illustrated below. 1 Quote Link to comment
Rolf Kalbermatter Posted December 5, 2022 Report Share Posted December 5, 2022 1 hour ago, bna08 said: Thank you, Rolf. However, what do you mean by "pass it as an Array as Array Data Pointer"? Pass where? Into which function? I am calling .NET DLL functions with calls via .NET Invoke Node in LabVIEW. Passing as Array Data Pointer is possible when calling unmanaged DLL functions via Call Library Function which I cannot use in my case, or can I? So far it's all guessing. You haven't shown us an example of what you want to do nor the corresponding C# code that would do the same. It depends a lot on how this mysterious array data pointer by reference is actually defined in the .NET method. Is it a full .NET Object, or an IntPtr? 1 Quote Link to comment
bna08 Posted December 5, 2022 Author Report Share Posted December 5, 2022 10 minutes ago, Rolf Kalbermatter said: So far it's all guessing. You haven't shown us an example of what you want to do nor the according C# code that would do the same. It depends a lot on how this mysterious array data pointer by reference is actually defined in the .Net method. Is it a full .Net Object, or an IntPtr? I don't work with a DLL in C#. The DLL is in C++/CLI, which allows working with managed and unmanaged data at the same time. Therefore, in this case I pass a 64-bit pointer to a data buffer allocated with DSNewPtr (as suggested by ShaunR above), passed as a UInt64 value to one of the DLL functions, and in the DLL I use memcpy to fill the memory pointed to by the LabVIEW pointer. Basically:
1) allocate a buffer in LabVIEW with DSNewPtr
2) pass this pointer to the .NET DLL by calling my .NET DLL function SetExternalDataPointer(UInt64 lvPointer)
3) use memcpy in the DLL to copy the data to the lvPointer address
4) read the data in LabVIEW by calling GetValueByPointer.xnode
Quote Link to comment
Rolf Kalbermatter Posted December 5, 2022 Report Share Posted December 5, 2022 (edited) 1 hour ago, bna08 said: I don't work with a DLL in C#. The DLL is in C++/CLI which allows working with managed and unmanaged data at the same time. Therefore, in this case I pass a 64-bit pointer to data buffer allocated with DSNewPtr (as suggested by ShaunR above) by calling one of the DLL functions as UInt64 value to the DLL where I use memcpy to fill the memory pointed to by the LabVIEW pointer. Basically: allocate a buffer in LabVIEW with DSNewPtr pass this pointer to the .NET DLL by calling my .NET DLL function SetExternalDataPointer(UInt64 lvPointer) use memcpy in the DLL to copy the data to lvPointer address read the data in LabVIEW by calling GetValueByPointer.xnode That's hardly efficient as you actually copy the memory buffer at least twice (but most likely three times), likely once in the .Net function you call, then with memcpy() in your C++/CLI wrapper and then again with your GetValueByPointer.xnode. Basically you created a complicated solution to supposedly make something performant, but made it anything but performant. If your C++/CLI DLL instead provides a function where the caller can pass in the pre-allocated array as an actual array (of bytes, integers, doubles, apples or whatever) and request to have the data copied into it, you are already done. Without pointer voodoo on the LabVIEW diagram and at least one memory copy less. Edited December 5, 2022 by Rolf Kalbermatter 1 Quote Link to comment
bna08 Posted December 5, 2022 Author Report Share Posted December 5, 2022 (edited) 2 hours ago, Rolf Kalbermatter said: If your C++/CLI DLL instead provides a function where the caller can pass in the pre-allocated array as an actual array (of bytes, integers, doubles, apples or whatever) and request to have the data copied into it, you are already done. Without pointer voodoo on the LabVIEW diagram and at least one memory copy less. Well, I was trying to keep things simple, so I omitted a "detail": I am actually not passing just a single pointer into the DLL, but an array of pointers, as my DLL is generating a few hundred bytes every 2 ms that I need to have available in LabVIEW in a buffer of its own. mdAddress below is therefore a UInt64[] array. I pass the whole array of pointers to the DLL only once when the application starts - and then in every iteration of my DataGenerator method (e.g. DataGenerator::GenerateMetadata) a new set of metadata (e.g. eight different variables) is written to an mdAddress address. Basically, I am following the pattern of IMAQ GetImagePixelPtr.vi, which I use to get an array of image pointers (an array of some complex data types) that I pass to the .NET DLL, which writes data to it. My example below with DSNewPtr follows the same logic, but instead of a pixel array I am creating byte arrays and passing their pointers. After the DLL is done generating the data, I access it by dereferencing each pointer with GetValueByPointer.xnode (and since each pointer points to a buffer with multiple 64-bit variables, I add 8-byte offsets to the original pointer address to access a particular variable). I realize this is pointer voodoo and I would like to do it in a nicer way, although at the moment I am happy I found a way to make it work at all. 
Initially, I created an array of LabVIEW clusters which I wanted to pass to .NET via an Invoke Node, but passing an array of clusters, or clusters in general, from LabVIEW to .NET is not possible if I am not mistaken. Therefore, I was looking for other ways to get data from the .NET DLL into LabVIEW fast. Ultimately, I would like to use an array of clusters (structs) in LabVIEW, pass them into the .NET DLL, and have the DLL write data to each cluster every 2 ms. Edited December 5, 2022 by bna08 Quote Link to comment
LogMAN Posted December 5, 2022 Report Share Posted December 5, 2022 (edited) 3 hours ago, bna08 said: Initially, I created an array of LabVIEW clusters which I wanted to pass to the .NET via an Invoke node, but passing array of clusters or clusters from LabVIEW to .NET is not possible if I am not mistaken. LabVIEW clusters can actually be passed by value, given that the values are structs. For classes, you need to construct the class before you pass it to the method. Edited December 5, 2022 by LogMAN 1 Quote Link to comment
ShaunR Posted December 5, 2022 Report Share Posted December 5, 2022 (edited) You could do this. It may be a lot easier handling a contiguous array in your DLL than a LabVIEW array of clusters (but that is possible too). Instead of creating 1000 pointer arrays of 256 (uint64s?), just pass a single pointer to a 1D array of 256,000 elements and parse it out as above. Edited December 5, 2022 by ShaunR 1 Quote Link to comment
bna08 Posted December 5, 2022 Author Report Share Posted December 5, 2022 2 hours ago, ShaunR said: Instead of creating 1000 pointer arrays of 256 (uint64s?) just pass a single 1D pointer of an array of 256,000 elements and pars it out as above. Yes, uint64s. How do I get a pointer to a 1D array - do you mean a pointer to a buffer created with DSNewPtr? And how does your parsing work? Where did you get the array? All I have is a pointer which I need to dereference with GetValueByPointer.xnode, while you already have an array of uint64s which you split into groups of 4 elements...? Sorry, I don't understand what you are doing. Quote Link to comment
ShaunR Posted December 6, 2022 Report Share Posted December 6, 2022 A la.. cluster casting with moveblock.vi 1 Quote Link to comment
Rolf Kalbermatter Posted December 6, 2022 Report Share Posted December 6, 2022 (edited) 2 hours ago, ShaunR said: A la.. cluster casting with moveblock.vi You can forget about that comment about endianness. MoveBlock is not endianness-aware and operates directly on native memory. Only if you incorporate the LabVIEW Typecast do you have to consider the LabVIEW Big Endian preference. For Flatten and Unflatten you can nowadays choose what endianness LabVIEW should use, and the same applies to the Binary File IO. TCP used to have an unreleased FlexTCP interface that worked like the Binary File IO, but they never released it, most likely figuring that using Flatten and Unflatten together with TCP Read and Write actually does the same. PS: A little nitpick here: the size parameter for MoveBlock is defined to be size_t. This is a 32-bit unsigned integer on 32-bit LabVIEW and a 64-bit unsigned integer on 64-bit LabVIEW. Edited December 6, 2022 by Rolf Kalbermatter 1 Quote Link to comment
ShaunR Posted December 6, 2022 Report Share Posted December 6, 2022 (edited) 25 minutes ago, Rolf Kalbermatter said: You can forget about that comment about endianness. MoveBlock is not endianness-aware and operates directly on native memory. Yes. But he will be populating the array data inside the DLL, so if he mem-copies u64s into the array it's likely little endian (on Intel). When we MoveBlock out, the bytes may need manipulation to get them into big endian for the Typecast. Ideally, the DLL should handle the endianness internally so that we don't have to manipulate it in LabVIEW. If I'm wrong on this then that's a bonus. I think this can also be done directly by MoveBlock using Adapt to Type (for a CLFN) instead of the Typecast, but I think you'd need to guarantee the big endian and use a for loop to create the cluster array (speed?). Edited December 6, 2022 by ShaunR Quote Link to comment
Rolf Kalbermatter Posted December 6, 2022 Report Share Posted December 6, 2022 (edited) 2 hours ago, ShaunR said: Yes. But he will be populating the array data inside the DLL, so if he mem-copies u64s into the array it's likely little endian (on Intel). When we MoveBlock out, the bytes may need manipulation to get them into big endian for the Typecast. Ideally, the DLL should handle the endianness internally so that we don't have to manipulate it in LabVIEW. If I'm wrong on this then that's a bonus. No! The DLL also operates on native memory, just as LabVIEW itself does. There is no endianness disparity between the two. Only when you start to work with flattened data (with the LabVIEW Typecast, but not a C typecast) do you have to worry about LabVIEW's Big Endian preferred format. The issue is in the LabVIEW Typecast specifically (and in the old flatten functions that did not let you choose the endianness). LabVIEW started on Big Endian platforms and hence the flattened format is Big Endian. That is needed so LabVIEW can read and write flattened binary data independent of the platform it runs on. All flattened data is endian-denormalized on importing, which means it is changed to whatever endianness the current platform has, so that LabVIEW can work on the data in memory without having to worry about the original byte order. And it is normalized again on exporting the data to a flattened format. But all the numbers that you have on your diagram are always in the native format for that platform! Your assumption that LabVIEW somehow always operates in Big Endian format would be a performance nightmare, as LabVIEW would need to convert every numeric value every time it wants to do arithmetic or similar on it. That would be terribly slow! Instead it defines an external flattened format for data (which happens to be Big Endian) and only does the swapping whenever that data crosses the boundary of the currently running system. 
That means when streaming data over some byte channel, be it file IO, network, or a memory byte stream. And yes, when writing a VI to disk (or streaming it over the network to download it to a real-time system, for instance), all numeric data in it is in fact normalized to Big Endian, but when loading it into memory everything is reversed to whatever endianness is appropriate for the current platform. And even if you use Typecast, it will only swap elements if the element size on the input side doesn't match the element size on the output, for instance a Byte Array (or String, which unfortunately is still just syntactic sugar for a Byte Array) to something else. Try a Typecast from a (u)int32 to a single precision float: LabVIEW won't swap bytes, since the element size on both sides is the same! That even applies to arrays of (u)int32 to arrays of single precision (or between (u)int64 and double precision floats). Yes, it may seem unintuitive when swapping happens and when it doesn't, but it is actually very sane and logical. Quote I think this can also be done directly by Moveblock using the Adapt to Type (for a CLFN) instead of the type cast but I think you'd need to guarantee the big endian and using a for loop to create the cluster array (speed?). Indeed, and no, there is no problem with endianness here at all. The only thing you need to make sure is that the array of clusters is pre-allocated to the size needed to copy the elements into, and that you have in fact three different sizes here: 1) the size of the uint64 array, let's call it n 2) the size of the cluster array, which must be at least (n + e - 1) / e, with e being the number of u64 elements in the cluster 3) the number of bytes to copy, which will be n * 8 Edited December 6, 2022 by Rolf Kalbermatter Quote Link to comment
ShaunR Posted December 6, 2022 Report Share Posted December 6, 2022 2 hours ago, Rolf Kalbermatter said: Indeed, and no there is no problem about Endianness here at all. Yup, this works too. Quote Link to comment
Rolf Kalbermatter Posted December 6, 2022 Report Share Posted December 6, 2022 (edited) 3 hours ago, ShaunR said: Yup. this works too. That loop looks nice, but I prefer to use Initialize Array. 😀 But I'm pretty sure the generated code is in both cases pretty similar in performance. 😁 Edited December 6, 2022 by Rolf Kalbermatter Quote Link to comment
ShaunR Posted December 6, 2022 Report Share Posted December 6, 2022 1 hour ago, Rolf Kalbermatter said: but I prefer to use Initialize Array So do I. Quote Link to comment