
Calling a DLL that returns a buffer full of data



I'm calling a DLL that returns a buffer full of data. If I ask for up to 255 data points it returns the correct data. If I ask for more than 255 points, LabVIEW gives an error saying that memory may be corrupted. I preload the input to the DLL with an array of the correct size. Any ideas what might be going on? Why the 255-point limit?

I've attached the VI, but I don't seem to be able to upload the DLL.

Download File:post-2786-1207325057.vi

George

I should add that the DLL is supposed to return up to 65536 data points.


QUOTE (george seifert @ Apr 4 2008, 11:08 AM)

I doubt very much that your DLL understands LabVIEW datatypes, but that is what it is going to see if you use Adapt to Type. With that setting you tell LabVIEW to pass its array just as it is in memory, which is a LabVIEW data handle and not an array data pointer.
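To illustrate the difference (a rough C sketch; the function and type names here are made up, and the handle layout follows LabVIEW's documented conventions rather than anything from this particular DLL):

#include <stdint.h>

/* What a typical C DLL expects for an array argument: a plain pointer
   to the first element, with the element count passed separately. */
long Vendor_GetData(int32_t *buffer, long count);   /* hypothetical prototype */

/* What LabVIEW hands over for an array under "Adapt to Type": a handle,
   i.e. a pointer to a pointer to a block that starts with the element
   count, followed immediately by the data itself. */
typedef struct {
    int32_t dimSize;    /* number of elements */
    int32_t elt[1];     /* first data element; the rest follow in memory */
} LVI32Array, **LVI32ArrayHdl;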

Since it is an array of structs there is no trivial way to make LabVIEW pass it as a pointer. You will have to typecast the cluster array to a byte array (selecting little endian), then pass it as an array data pointer, and on return decode the byte stream. The only other real option is to write a wrapper DLL in C that does that translation for you.

Rolf Kalbermatter


QUOTE (rolfk @ Apr 7 2008, 02:21 AM)

Rolf,

I suppose you're probably right that the DLL isn't understanding what's being sent, but I still don't understand how I can get 255 elements of the array to work perfectly. I tried a zillion different ways to pass the data and they all crashed the system when I passed more than one element, so I figured I had to be close.

I have no way to do any C code, so I was wondering if something else might work. Basically I'm expecting two 32-bit elements, one U32 and one Single, in each array element. Could I just send it an array of U32 that's twice as big as what I normally expect? I'm not sure, though, whether I can convert the U32 to Single.

I zipped the VI and the DLL. They're available here: Download File: http://lavag.org/old_files/post-2786-1207571374.zip

Here's the data definition and prototype code from a Visual Basic example.

Public Enum LKIF_FLOATRESULT
    LKIF_FLOATRESULT_VALID          ' valid data
    LKIF_FLOATRESULT_RANGEOVER_P    ' over range at positive (+) side
    LKIF_FLOATRESULT_RANGEOVER_N    ' over range at negative (-) side
    LKIF_FLOATRESULT_WAITING        ' comparator result
End Enum

Public Type LKIF_FLOATVALUE
    FloatResult As LKIF_FLOATRESULT ' valid or invalid data
    Value As Single                 ' measurement value during LKIF_FLOATRESULT_VALID
End Type

Public Declare Function LKIF_DataStorageGetData Lib "LkIF.dll" (ByVal OutNo As Long, ByVal NumOutBuffer As Long, ByRef OutBuffer As LKIF_FLOATVALUE, ByRef NumReceived As Long) As Long
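In C terms (a sketch inferred from the VB declaration above, not the vendor's actual header, so the exact type names may differ), the structure and prototype look roughly like this; the point is that each array element is 8 bytes, a 32-bit enum followed by a 32-bit float:

#include <stdint.h>

typedef enum {
    LKIF_FLOATRESULT_VALID = 0,     /* valid data */
    LKIF_FLOATRESULT_RANGEOVER_P,   /* over range at positive (+) side */
    LKIF_FLOATRESULT_RANGEOVER_N,   /* over range at negative (-) side */
    LKIF_FLOATRESULT_WAITING        /* comparator result */
} LKIF_FLOATRESULT;                 /* stored as a 32-bit integer */

typedef struct {
    LKIF_FLOATRESULT FloatResult;   /* 4 bytes: valid or invalid data */
    float            Value;         /* 4 bytes: measurement value */
} LKIF_FLOATVALUE;                  /* 8 bytes per array element */

long LKIF_DataStorageGetData(long OutNo, long NumOutBuffer,
                             LKIF_FLOATVALUE *OutBuffer, long *NumReceived);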


QUOTE (george seifert @ Apr 7 2008, 07:53 AM)


Yes, treating it as an array of int32 of double the size should work quite well. You can then typecast that back into an array of your cluster type, although you may have to byte-swap and word-swap the whole array first to correct for endianness issues, or maybe just swap the bytes and words of the integer part. That is always something best determined by trial and error.
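At the byte level that decoding amounts to the following (a C sketch with made-up names, just to show the reinterpretation involved; the swapping Rolf mentions is an artifact of LabVIEW's Typecast, which assumes big-endian data, and would not be needed in plain C on a little-endian PC):

#include <stdint.h>
#include <string.h>

typedef struct {
    int32_t result;   /* the LKIF_FLOATRESULT enum as a plain int32 */
    float   value;    /* the measurement value */
} DecodedPoint;

/* raw holds numPoints pairs of int32s as returned by the DLL */
void decode_pairs(const int32_t *raw, int32_t numPoints, DecodedPoint *out)
{
    for (int32_t i = 0; i < numPoints; i++) {
        out[i].result = raw[2 * i];
        /* reinterpret the second int32's bit pattern as a float (no numeric conversion) */
        memcpy(&out[i].value, &raw[2 * i + 1], sizeof(float));
    }
}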

Why it seemed to work for smaller arrays is probably because the DLL was in fact writing the first enum value into the int32 that tells LabVIEW how many elements are in the array. As such, you should have seen the float and enum swapped compared to what the VB code would indicate. With smaller arrays the overwriting did not cause too much trouble, but with longer arrays it apparently set something off.

Rolf Kalbermatter


QUOTE (rolfk @ Apr 7 2008, 08:19 AM)


Rolf,

First of all, thanks so much for all your help. Can I bug you for a little more help in setting up the call? I've been trying to get it to work with the int32 array, but I'm not having any luck. I got one combination to sort of work (Type = Adapt to Type, Data format = Array Data Pointer), but it crashed after I asked for 127 data points, which translates to 254 elements in the array. There's that same upper limit again. I also tried Type = Array, Data type = int32, Dimensions = 1, Array format = Array Data Pointer, Minimum size = <None>, with the same results.

I did just manage to get one thing to work. I used the above settings, but with an int64 element, and just asked for the right number of elements (instead of twice the number). Now I can get back all the values in the instrument (12500). Glancing at the received numbers, I think they make sense. One of the 32-bit numbers should be constant at 0, which it is. Now maybe I just have to convert the other 32-bit number like you said.

Just to beat a dead horse a bit more: I wasn't seeing any swapping of the float and enum in my original code. It was easy to tell because the enum was 0 (indicating a valid value was sent) and the float value agreed with the value returned by the company's sample code.

George

Success! A simple typecast to Single after separating out the upper 32 bits worked.
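For reference, here is what that splitting amounts to, sketched in C with made-up names (on a little-endian PC the Single's bit pattern sits in the upper 32 bits of each int64 read from the buffer, with the enum in the lower 32 bits):

#include <stdint.h>
#include <string.h>

/* Split one raw int64 element into the enum and the Single value. */
void split_element(uint64_t raw, int32_t *floatResult, float *value)
{
    uint32_t lo = (uint32_t)(raw & 0xFFFFFFFFu);   /* FloatResult enum */
    uint32_t hi = (uint32_t)(raw >> 32);           /* bit pattern of the Single */
    *floatResult = (int32_t)lo;
    memcpy(value, &hi, sizeof(float));             /* reinterpret, don't convert */
}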

It would be nice to know why sending an array of int32 of twice the size didn't work. I'm lucky the data fit in 64 bits. I don't know what I'll do if a situation comes up where I need more bits.

George

  • 4 years later...

Hi George,
 
I'm having the same problem, but unfortunately I couldn't resolve it with these tips. Could you post your working VI?
 
Thanks,
Alexandre Marcondes.
