
LabVIEW Call Library Function Node - Union Pointer handling



12 minutes ago, Rolf Kalbermatter said:

size_t is usually according to the platform bitness since it makes little sense to have size parameters that would span more than 4GB on a 32-bit platform. If you need a 64-bit value regardless you need to use the explicit uint64_t. time_t is another type that can be 64-bit even on 32-bit platforms, but it doesn't have to be. It often depends on the platform libraries such as the libc version used or similar things.

Yeah. Didn't think I'd get a definitive answer since size_t is only guaranteed to be greater than 15 bits.

For MS it is UINT_MAX, which could be anything. The only way to know for certain is to test the compiled environment, but even then it depends on things like whether functions use 32-bit modular arithmetic. I don't think it can be resolved conclusively in LabVIEW; the only choices are an unsigned pointer-sized integer or a conditional case for target bitness. I guess you are alluding to the conditional as "most likely not to fall over in most circumstances".

25 minutes ago, Rolf Kalbermatter said:

it makes little sense to have size parameters that would span more than 4GB on a 32-bit platform

That's what they said about disk drives and look what happened :D

2 hours ago, ShaunR said:

That's what they said about disk drives and look what happened :D

Definitely not the same. A 32-bit process simply can't address more than 4GB in total so it makes no sense to have a variable that can specify a size of more than that. Usually the maximum amount of memory that can be allocated as a single block is much smaller than that too.

Quote

Yeah. Didn't think I'd get a definitive answer since size_t is only guaranteed to be greater than 15 bits.

According to the 1999 ISO C standard (C99) it is at least 16 bits, but it is meant to represent the size of an object. Library functions that take or return sizes expect them to be of type size_t. As such it is most commonly defined to be the address size for the current platform, which in most modern compilers is the same as the bitness of that platform.

Quote

For MS it is UINT_MAX, which could be anything. The only way to know for certain is to test the compiled environment, but even then it depends on things like whether functions use 32-bit modular arithmetic. I don't think it can be resolved conclusively in LabVIEW; the only choices are an unsigned pointer-sized integer or a conditional case for target bitness. I guess you are alluding to the conditional as "most likely not to fall over in most circumstances".

Your reasoning would be correct if you wanted to write a general, completely target-platform-independent library. But we work within the confines of LabVIEW here, and that supports several platforms, but no very exotic ones. Under Windows size_t is defined to be the same as the unsigned pointer-sized integer, and under Linux the same applies, as can easily be verified in the stddef.h file. So in terms of LabVIEW it is safe to assume that size_t is actually always the same as the unsigned pointer-sized integer. Of course, once we have 128-bit CPUs (or, God help us, 64/64 segmented addresses) and LabVIEW chooses to support them, this assumption may not hold anymore.

Also the specification only says that size_t must be able to represent at least SIZE_MAX (which is probably the same as UINT_MAX under Windows) elements, but not how big it actually is. Pretty much all current implementations, except maybe some limited embedded platforms, use a size_t that can address a lot more than that. And that is the nature of standards: they use a lot of "at least", "must", "can", "should" and "at most", and those are usually all constraints rather than exact values. C is especially open in that sense, as the C designers tried to be as flexible as possible so as not to put too many constraints on C compiler implementations for specific hardware.

Edited by Rolf Kalbermatter

I'm at a loss as to why one works and one doesn't.

7 hours ago, Rolf Kalbermatter said:

This doesn't make much sense. His example points nowhere, it simply takes the pointer and interprets it as a Zero terminated C string and converts it to a LabVIEW String/Byte array. If this doesn't return the right value, your v value is not what you think it is. And the Get Value Pointer xnode does in principle nothing else for a string type. But without an example to look at, that we can ideally test on our own machines, we won't be able to tell you more.

The code I have for the source of the Union is in my first post on this topic.

It seems to be a \00 terminated C string pointer. 🤷‍♂️

41 minutes ago, Ryan Vallieu said:

I'm at a loss as to why one works and one doesn't.

The code I have for the source of the Union is in my first post on this topic.

It seems to be a \00 terminated C string pointer. 🤷‍♂️

That code is in itself neither complete nor very clear. And it is not clear to me why you would even need to interface to the .so library at all. It seems to simply read data from a socket and put it into this artificial variant. Rather than using a .so that does such low-level stuff only to call it from LabVIEW at an only slightly higher level, it would seem to me much more logical to simply do the entire reading and writing directly on a LabVIEW TCP refnum.

The problem likely is in the readString() function, whose implementation is not shown anywhere. It supposedly should allocate a buffer, assign it to the c_ptr variable and then fill that buffer. But that also means that you need to call the correct memory dispose function afterwards to deallocate that buffer, or you will create nice memory leaks every time you call the getValue() function for a string value.

Doing the whole thing directly with LabVIEW TCP functions would solve a lot of problems without a lot more work needed.

- You don't depend on a shared library that needs to be compiled for every platform you use.

- You don't run into the `long` size difference between Windows and Unix. Most likely this library was originally written for Windows and never tested under Unix. What he calls long in this library is likely meant to always be an int32. Alternatively, it was originally written for Linux and that value is really meant to always be an int64. As it is programmed now it is not platform interoperable, but that doesn't need to bother you if you write it all in LabVIEW. You simply need to know what the intended type is, implement that in LabVIEW accordingly, and leave the library programmer to deal with his ignorance.

- You have one LabVIEW library that works on all LabVIEW platforms equally, without the need to distribute additional compiled code in the form of shared libraries.

- You don't have to worry about buffers the C library may or may not allocate for string values and deallocate them yourself.

- You don't have to worry about including the LabVIEW lvimpltsl.so/dll file in a build application.

Edited by Rolf Kalbermatter

This is the xnode code that is generated for a string.

[image: GetValueByPointer.png]

 

I wondered why the other system owner wanted to do things that way myself. I guess because this is the API they provide to all the other systems' developers. I certainly am talking to the remote system in another process just using the TCP refnum, building the packets and interpreting the incoming message packets. Maybe I will rework this piece in the future, but it is working with the lvimptsl.so included in my support folders.

Edited by Ryan Vallieu

I think I may know what's going on here.

char *s;      /* If a string user is responsible for 
                       * allocating memory. */

The "v" in your cluster is actually a pointer to char so v in your cluster should be an array of bytes which *you* will initialise with n bytes.

[image]

i.e. You have to create the pointer. The function obviously copies data into your string (byte array), which is why you have responsibility for allocating the memory, rather than the function allocating the memory and you copying the data into LabVIEW. So you are going to do either something like this:

[image]

or this

[image]

 

What are you doing? The second one?


I'm dumb 😒 and was looking at the output in the wrong spot in my code when initially trying to validate the StrLen and MoveBlock output you sent.  ShaunR your version is working as well as the GetValueByPointer.

 

I am just feeding in a blank string to the GetValueByPointer to define the string type for the return and it is returning the whole string including the \00 at the end.

 

2 minutes ago, Ryan Vallieu said:

I'm dumb 😒 and was looking at the output in the wrong spot in my code when initially trying to validate the StrLen and MoveBlock output you sent.  ShaunR your version is working as well as the GetValueByPointer.

 

I am just feeding in a blank string to the GetValueByPointer to define the string type for the return and it is returning the whole string including the \00 at the end.

 

But someone has to allocate the pointer. If it is not readString(), it has to be done by you as shown in Shaun's last picture. Otherwise it may seem to work but is in fact corrupting memory.


Maybe I am completely misunderstanding here....the 'v' in the case of the switch detecting the string type is just a pointer to the first element of the string in memory.  The nice thing about the GetValueByPointer.vi is that you don't need to wire anything in or know the size of the string, you just wire a blank string type.

 

When I make the call into the getValue function, which is upstream of trying to dereference the pointer it returns if the switch detects string type, I am feeding getValue the Cluster of elements, in which 'v' is a U64, as that is the widest the data can be based on the types switched into v.  When valuetype is used to drive the decoding and states the data in the U64 v is a Pointer I use that pointer to read the characters.  GetValueByPointer handles the memory allocation for dereferencing the string, or in the case of StrLen and MoveBlock - the StrLen is used to allocate the memory for the String.

After that I am not using the original struct in the LabVIEW code; I have converted the data into a variant so I can convert it back in the calling LabVIEW code based on valueType. I am not trying to reuse the Value structure from the C call.

Am I missing something?

getValue and dereference.png

The function within getValue that gets the datatype does perform the malloc.  In the case of the valuetype being charString it calls a readString() function that performs a memory allocation to hold the string.

Edited by Ryan Vallieu
Adding png of the getValue call
1 hour ago, Ryan Vallieu said:

Maybe I am completely misunderstanding here....the 'v' in the case of the switch detecting the string type is just a pointer to the first element in the string in memory.  The nice thing about the GetValueByPointer.vi is that you don't need to wire anything in or know the size of the string, you just wire a blank string type.

 

When I make the call into the getValue function, which is upstream of trying to dereference the pointer it returns if the switch detects string type, I am feeding getValue the Cluster of elements, in which 'v' is a U64, as that is the widest the data can be based on the types switched into v.  When valuetype is used to drive the decoding and states the data in the U64 v is a Pointer I use that pointer to read the characters.

Strictly speaking, the v is nothing in itself. It's the union's name and does not really occupy any memory of its own. It's just a container for the real data, which is one of the union elements: the l, ul, d, or s. And yes, the s is a string pointer, so according to the comment in the declaration, if you request a string value you have to preallocate a buffer and assign it to this s pointer. What size the memory block needs to have you had better also know beforehand! And when you don't need it anymore you have to deallocate it too, or you cause a memory leak!

Now LabVIEW does not know unions, so we can't really model it like that. Since the union contains a double, its memory size is always 8 bytes. But!!

So if the type is Double, we need to typecast the uint64 into a Double. But the LabVIEW Typecast does Big Endian swapping, so we rather need to use Unflatten From String and specify Native Byte Order.

If the type is Long or ULong, we need to know if it was sent by a Linux or Windows remote side. If it was Linux we just convert it to the signed or unsigned int64; otherwise we need to split the uint64 into two uint32 and take the least significant one.

If it is (or rather really is going to be) a string, we first need to allocate a pointer and assign it to the uint64. That is only straightforward in 64-bit LabVIEW; otherwise we would strictly speaking need to convert the LabVIEW-created pointer, which is always a (U)Int64 on the diagram, into a (U)Int32, combine it with a dummy (U)Int32 into a uint64, and assign that to the union value. Then we pass that to the readValue() function, retrieve the data from the string pointer, and deallocate the pointer again. Since we nowadays always run on Little Endian, unless you want to also support VxWorks targets, you can however forget the (U)Int64 -> (U)Int32 -> (U)Int64 conversion voodoo and simply assign the LabVIEW (U)Int64 pointer directly to the union uint64, even for the 32-bit version.

And if that all sounds involved, then yes, it is. That is not the fault of LabVIEW but simply how C programming works. Memory management in C is hard, complicated, usually underdocumented when using a library, and sometimes simply stupidly implemented.

Doing the same fully in LabVIEW will almost certainly be less work, much less chance for memory corruptions and leaks (pretty much zero chance) and generally a much more portable solution.

The only real drawback is that if the library implementer decides to change how his communication data structures are constructed on the wire, you would have to change your LabVIEW library too. However, seeing how thin the library implementation really is, it is almost certain that such a change would also change the functional interface, so you would have to change the LabVIEW library anyhow.

Quote

The most important take from all this is that you need to know before you call the readValue() function that you will receive a string value and you also need to know the maximum size that string can have, then allocate the according pointer, assign it to the union, pass it to readValue(), retrieve the string from the pointer and then deallocate the pointer again.

That is unless the readString() function allocates the pointer, which would be more logical. But then the comment in the Value typedef would be wrong and misleading, and your library would also need to export a function that deallocates that pointer, which you would have to call from LabVIEW after you are done converting it into a LabVIEW string.

Edited by Rolf Kalbermatter
2 hours ago, Ryan Vallieu said:

StrLen is used to allocate the memory for the String.

StrLen doesn't allocate anything. It just scans the string until it reaches \00 and returns the length minus the \00 char. It does the same as your match pattern above if you were to look at the Offset Past Match (-1).

With StrLen + Moveblock, you are finding how many bytes you need to allocate to hold a *copy*, then copying from s[0] to another location (the initialise array in the VI I posted). The memory of the source string you are copying must already exist, somehow, otherwise bad things happen.

Edited by ShaunR
13 hours ago, Ryan Vallieu said:

The function within getValue that gets the datatype does perform the malloc.  In the case of the valuetype being charString it calls a readString() function that performs a memory allocation to hold the string.

Then you need to deallocate this pointer after you are done with it! And you cannot just call whatever deallocation function you would like; it MUST be the function matching the one that was used to allocate the buffer. Windows knows several dozen different functions to do something like that, and while they all ultimately rely on the Windows kernel to do the difficult task of managing the memory, they all do some intermediate bookkeeping of their own too.

To make matters worse, if your library calls malloc() you can't just call free() from your program. The malloc() the library calls may not operate on the same heap as the free() you call in your calling program. This is because there is generally not a single C runtime library on a computer but different versions, depending on what compiler was used to compile the code, and a DLL (or shared library) can be compiled by a completely different C compiler (version) than the application that calls it.

On Linux things are a little less grave, since libc has, since about 1998, normally always been accessed through the libc.so.6 symbolic link name, which resolves to the platform-installed libc.x.y.so shared library. But even here you have no guarantee that it will always remain like this. The fact that the libc.so soname link is already at version number 6 clearly shows that there is a potential for this library to change in binary-incompatible ways, which would cause a new soname version to appear. It's not very likely to happen, as it would break a lot of programs that just assume there is nothing else than this version on a Linux computer, but Linux developers have been known to break backward compatibility in the past for technical and even just perceived esthetic reasons.

The only safe way is to let the library export an additional function that calls free() and use that from all callers of that library.

And to make matters worse, this comment here:

typedef struct Value
{
    union        /* This holds the value. */ 
    {
        long l;
        unsigned long ul;
        double d;
        char *s;      /* If a string user is responsible for  <<<<<------------
                       * allocating memory. */
    } v;

    long order;
    valueType type;    /* Int,float, or string */
    valueLabel label;  /* Description of characteristic -- what is it? */
} Value;

would be in that case totally and completely false and misleading!!

It would be more logical to let readString() allocate the memory as it is the one which knows that a string value is actually to be retrieved and it also can determine how big that string will need to be, but adding such misleading comments to the library interface description is at best negligent and possibly even malicious.

Edited by Rolf Kalbermatter

I think they meant that if you read out the string into your program you are responsible for allocating the memory.

 

The readString() function called in the union code does allocate memory for the pointer, but I agree that I don't see a memory deallocation when it's done.

 

What I meant by the StrLen being used to allocate was using it with the Build Array and feeding that in to the MoveBlock.

I will definitely bring this up with the developer that gave me the library.  

Moving to a LabVIEW only solution shouldn't be too difficult for the communications.  Just costs time at this point.

7 minutes ago, Ryan Vallieu said:

Moving to a LabVIEW only solution shouldn't be too difficult for the communications.  Just costs time at this point.

Since you already have experience with network communication in LabVIEW, I would strongly suspect that this is, in the short and medium term, less work than what you will have to put up with to make this current external shared library work fully reliably. Basically, writing the readValue() function in LabVIEW is pretty straightforward, and the underlying readLong(), readDouble() and readString() should be even simpler.

As long as you have the C source code of all of these available, I would think it is maybe one or two days of work.

The only tricky things I see are:

- we haven't seen the sendCHARhandle() function nor what the CharacteristicHandle datatype is. It's likely just an integer of some size, so shouldn't be really difficult.

- it's not clear how the protocol sends a string. Ideally it would first send a byte or integer that indicates the number of bytes that follow; then everything is very easy. If it instead uses a zero-termination character, you would need to go into a loop and read one character after the other until you see the zero character. That's more of a hassle and slows down communication a bit, but it would be similar in the C code.

  • 1 month later...

Since I had time while waiting for NI to figure out why the same code called by the Embedded Runtime on Linux is not working the same as the normal LabVIEW runtime EXE, I tackled the replacement of the C code in the .so called by the LabVIEW CLFN for the implementation of this driver.

Removes the need for me to compile the code on the target, removes the extra issues of everything discussed above.  Thanks for the impetus to complete that. 

The reads from the TCP were much easier to implement and maintain and won't lock up my code if there is an error on the server side.


Thanks for the feedback. It's very much as I suspected. Network communication in LabVIEW is not really that complicated; in fact it's easier than doing the same in C. The problem is when you have to do higher-level protocols such as HTTP or something even more specialized. Here you have the choice to implement the underlying protocol entirely in LabVIEW, which tends to be a major project in itself, or to rely on some intermediate library like the LabVIEW HTTP Client Library, which is in essence simply a thin wrapper around libcurl.

The first is a lot of effort since HTTP is quite a complex protocol in itself with many variants depending on server version, authentication level and such. The second hides most of those details entirely to the point where you can't access them anymore.

As a case in point, I recently had to do some X Windows configuration for an RT target. Possible options:

- call the xrandr command line interface tool

- call the X11lib shared library to do the relevant things directly

- call the xcb shared library instead to do the relevant things

- implement the X protocol directly in LabVIEW

I ended up using option 1, simply because it was the quickest, but just for lolz I also tried the last option and got some experimental code running. Now the X Windows protocol is extensive, and it would be a really serious effort to make something that is reasonably functional. Another complication is authentication, because that always involves some more or less obscure cryptographic functions. Even the xcb shared library, while implementing everything else from scratch (and it is nowadays normally used as the backend for X11lib), relies on the original auth code from X11lib for this functionality rather than trying to reimplement it itself.

Edited by Rolf Kalbermatter
