
LV to DLL endian conversion question



Hi there,

I'm currently trying to access some functions written in plain old C, compiled on Windows under Visual Studio 2005, from LabVIEW (8.2).

The C code is compiled into static libraries, which I call from a DLL.

While debugging some strange behaviour of the DLL, and after noting comments in posts on LAVA and elsewhere, I have run into a problem passing an array of LabVIEW type DBL to a DLL via the array handle type, after defining the type in a header used by the DLL with code typically like this:

typedef struct {
    int size;
    double elt[1];
} dblArray;

typedef dblArray **dblArrayHdl;

I find that double values passed from LabVIEW via the inner array (using a C pointer hooked up to the elt pointer) come out as completely incorrect values inside the C program. The same is true in the opposite direction for numbers originating in C and propagating out to LabVIEW via DLL outputs.
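For concreteness, the access pattern inside the DLL looks roughly like this (a sketch only; the function and variable names are illustrative, not my actual code):

/* Illustrative only: reads doubles out of the LabVIEW array handle
   declared above. The values read here come out garbled. */
__declspec(dllexport) double SumElements(dblArrayHdl arr)
{
    double sum = 0.0;
    double *p = (*arr)->elt;   /* C pointer hooked up to elt */
    int i;
    for (i = 0; i < (*arr)->size; i++)
        sum += p[i];
    return sum;
}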

The standard explanation given for this nightmare is that LabVIEW uses big endian while Windows (and hence the C) tries to interpret the bytes as little endian.

Fine, all understood so far. But if this is true, can anyone explain why integer types are not similarly mangled? When the array type is, say, int32 (LabVIEW) mapped to int or long (inside the C), I find integer values entering and leaving the DLL are correctly represented inside the C code, with no attempt needed to correct the endianness from LabVIEW to C or vice versa.

If opposite endianness is the explanation for doubles then, unless I accidentally chose integers which happen to be endianness palindromes (i.e. they look the same with their bytes in either order), the effect should surely create problems for integers as well?
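To make that concrete (a small sketch, assuming 32-bit ints on x86): the value 0x01020304 is stored little endian as the bytes 04 03 02 01, so a big endian reader of the same four bytes would see 0x04030201, which is nothing like the original:

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int v = 0x01020304u;
    unsigned char b[4];
    memcpy(b, &v, sizeof b);
    /* On little-endian x86 this prints: 04 03 02 01.
       Read big endian, those bytes mean 0x04030201. */
    printf("%02x %02x %02x %02x\n", b[0], b[1], b[2], b[3]);
    return 0;
}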

Doesn't anyone else find this bizarre?

It was the very fact that integers survive the passage to and from DLLs that originally led me to naively believe the DLL interface (and user configuration settings) were doing the heavy lifting for me, converting the LabVIEW types to C types and back, at least for integers, and so I naively assumed the same would happen for doubles as well.

Can anyone explain the apparent inconsistency?


QUOTE (Paulus @ Oct 22 2008, 06:56 PM)

...the array handle type, after defining the type in a header used by the DLL with code typically like this:

typedef struct {
    int size;
    double elt[1];
} dblArray;

typedef dblArray **dblArrayHdl;

I have passed arrays, both 1D and 2D, between a C DLL and LabVIEW (6, 7, and 8) and had no trouble. A couple of things I kept in mind to make things work out:

1. My C DLL never, ever resized my LabVIEW array. This point is, from what I can tell, the most critical. If needed, I would create a function, or a mode of a function, that returned how big LabVIEW needed to make the array it was going to provide for the C DLL to fill up.

2. Don't use Array Handle types; use Array Data Pointer. Pass the array bounds to the C DLL if needed.

I have found that LabVIEW performs any endian conversions that are needed for you, which, as far as I can tell, is an undocumented (or at least rarely mentioned) feature. I've used the process of having a C DLL fill up LabVIEW arrays pretty extensively and had no problems, as long as I follow the two rules mentioned above.
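To illustrate rule 2 (a sketch only; the function name is made up): with the Call Library Function Node parameter configured as Array Data Pointer, the prototype reduces to a plain pointer plus an explicitly passed length:

/* LabVIEW allocates the array in advance; the DLL just fills it in place. */
__declspec(dllexport) void FillArray(double *data, int len)
{
    int i;
    for (i = 0; i < len; i++)
        data[i] = (double)i;
}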

Good Luck.

Chris


QUOTE (Chris Davis @ Oct 23 2008, 04:11 AM)


Ok, thanks for that.

I agree with the idea of not trying to resize arrays inside the C code. I used to do it in CINs by calling the NI functions provided for that, but it is a hairy operation, best avoided if one can lazily get LabVIEW to provide the arrays with space already allocated in advance.

As for passing arrays by pointer: with a 2D array, what gets passed? I assume it is a pointer to the start of a 1D array which only looks 2D to LabVIEW; in other words, I assume that the elt pointer gets passed over to the C code.
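If that assumption is right, the C side would look something like this (a sketch under that assumption; LabVIEW stores a 2D array as one flat row-major block):

/* Both dimension sizes are passed separately; element (i,j)
   of the flat block lives at data[i * cols + j]. */
__declspec(dllexport) void Scale2D(double *data, int rows, int cols, double k)
{
    int i, j;
    for (i = 0; i < rows; i++)
        for (j = 0; j < cols; j++)
            data[i * cols + j] *= k;
}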

After my original post I did a bit more investigation into the way the 8 bytes representing the DBL were arranged.

If the bytes in one representation were arranged

8 7 6 5 4 3 2 1

then I found the other representation ordered them

4 3 2 1 8 7 6 5

In other words, they were reordered at the level of 4-byte (32-bit) words,

and using a bit of extra code along these lines:

double *d_ptr = (double *)((*arrayHdl)->elt);
int *a = (int *)d_ptr;   /* a -> first 4-byte word of the double */
int *b = a + 1;          /* b -> second 4-byte word */
int temp = a[0];         /* swap the two words in place */
a[0] = b[0];
b[0] = temp;

I could convert the double between LabVIEW-readable format and C-readable format, and, as one might expect from the symmetry of the situation, the same operation converts back.

This explains why the 4-byte integers were not troubled by entering and exiting DLLs via the array handle.

I think your method of access will be better (for 8-byte doubles), as I do not plan to run every array coming into and going out of a DLL through such a word-swapping procedure, element by element.

Thanks for the info


QUOTE (Paulus @ Oct 23 2008, 12:08 PM)

...it is a hairy operation best avoided...

Why not? The NumericArrayResize function is documented, and NI also provides an example of how to use it.
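A minimal sketch of that call (assuming extcode.h is included, labviewv.lib is linked, and the dblArrayHdl typedef from the first post):

#include "extcode.h"

/* Grow a 1D float64 array handle to newLen elements.
   fD is extcode.h's type code for 64-bit floats. */
MgErr GrowDblArray(dblArrayHdl *hdl, int32 newLen)
{
    MgErr err = NumericArrayResize(fD, 1, (UHandle *)hdl, newLen);
    if (!err)
        (**hdl)->size = newLen;   /* the resize call does not update the size field */
    return err;
}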

QUOTE (Paulus @ Oct 23 2008, 12:08 PM)

After my original post I did a bit more investigation into the way the 8 bytes representing the DBL were arranged.

If the bytes in one representation were arranged

8 7 6 5 4 3 2 1

then I found the other representation ordered them

4 3 2 1 8 7 6 5

As far as I can tell, you have a problem with alignment, not with endian conversion. As a result you have a gap between dimSize and the elements.

Try adding #pragma pack(1) before the declaration

typedef struct {
    int32 dimSize;
    float64 elt[1];
} TD2;

typedef TD2 **TD2Hdl;

and all should be OK.

see attached example.
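A slightly more defensive variant of the same declaration (a sketch; the push/pop form is supported by MSVC and most other compilers, and int32/float64 come from extcode.h) restores the compiler's previous packing afterwards:

#pragma pack(push, 1)   /* match LabVIEW's byte packing on x86 */
typedef struct {
    int32 dimSize;
    float64 elt[1];
} TD2;
typedef TD2 **TD2Hdl;
#pragma pack(pop)       /* restore the previous alignment setting */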

best regards,

Andrey.


QUOTE (Paulus @ Oct 22 2008, 06:56 PM)


Internal to the diagram, LabVIEW uses whatever endianness the CPU prefers. Only when flattening data to a byte stream or unflattening it from one (Flatten, Unflatten, Typecast, Binary File Read and Write) does it apply endianness conversion (in LabVIEW >= 8.2 by default, before that always) to make sure the data is in big endian format on the byte stream side.

So the data passed to your DLL is really in whatever format the CPU on which LabVIEW is running desires, which in your case is the same little endian format your DLL expects.
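You can see this for yourself (a quick sketch): dump the raw bytes of a double inside the DLL, and on x86 they come out little endian, exactly as the DLL receives them from LabVIEW:

#include <stdio.h>
#include <string.h>

int main(void)
{
    double d = 1.0;   /* IEEE 754: 0x3FF0000000000000 */
    unsigned char b[8];
    size_t i;
    memcpy(b, &d, sizeof b);
    /* On x86 this prints: 00 00 00 00 00 00 f0 3f (little endian).
       A flattened big endian stream would carry: 3f f0 00 00 00 00 00 00. */
    for (i = 0; i < sizeof b; i++)
        printf("%02x ", b[i]);
    printf("\n");
    return 0;
}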

My only explanation is that you are interpreting the data structure wrongly. While elt may look like a pointer, it is not a pointer inside the main structure at all; the entire array is inlined into the structure. This means you see the start of the data at byte offset 4 relative to the start of the structure. On some obsolete platforms (SPARC) the offset would actually be 8, since LabVIEW has to align floating point values to their natural byte boundary for the SPARC CPU to access them without a huge performance penalty, but on Windows LabVIEW uses byte packing for all structures, since the x86 architecture has little or no penalty for such access.

Did you include extcode.h in your program? If not, you have to add an explicit #pragma pack(1) before defining your structure, and it is best to reset it back to the previous setting with a #pragma pack() afterwards. Microsoft VC uses 8-byte alignment by default.
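One way to confirm which layout your compiler actually produced (a small sketch using the dblArray typedef from the first post):

#include <stddef.h>
#include <stdio.h>

typedef struct {
    int size;
    double elt[1];
} dblArray;

int main(void)
{
    /* MSVC's default 8-byte alignment prints 8; with #pragma pack(1)
       before the typedef (as LabVIEW expects on x86) it prints 4. */
    printf("elt starts at offset %u\n", (unsigned)offsetof(dblArray, elt));
    return 0;
}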

Rolf Kalbermatter

  • 5 weeks later...

Dear all

Thanks for the helpful comments and advice.

To answer the last few questions:

I haven't been including extcode.h, although I recognise it from older projects when I used CINs,

nor have I been making any effort to set 1-byte alignment, either with a pack pragma (which I would not have known about) or with the equivalent setting in the Visual Studio project options under Project > Project Settings > C/C++ > Code Generation > Struct Member Alignment.

I found that using function parameters like double *an_array, int dim_size_passed_separately, rather than the CIN-style array handle, worked without any issues or any alterations to byte alignment, packing or anything else, and therefore I opted to pass arrays using pointers.

No problems detected so far in that regard.
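For the record, the shape of prototype I ended up with (names illustrative):

/* Array Data Pointer style: LabVIEW passes the buffer and its size. */
__declspec(dllexport) void ProcessArray(double *an_array, int dim_size_passed_separately)
{
    int i;
    for (i = 0; i < dim_size_passed_separately; i++)
        an_array[i] *= 2.0;   /* operate in place on LabVIEW's buffer */
}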


QUOTE (Paulus @ Nov 24 2008, 01:12 PM)


That obviously avoids the alignment problem.

Rolf Kalbermatter

