mhy

Python client to unflatten received string from TCP Labview server

Recommended Posts

How do you decode/unpack a flattened string in Python that was sent by a LabVIEW TCP server?

I want to exchange data via loopback. Therefore I take a sine wave, flatten it to a string, and send it over the network (Simple TCP - ServerSINE.vi). Then I have to decode the incoming data so that I get the correct numerical values, the same as when I plot them in LabVIEW. I did this so far in python_client.py, but the values are wrong.

Does anyone know how to work with the transferred data in Python?


The only real answer is "the reverse of sending", but the data has to be something reasonable for Python to parse. If you are flattening the data in LabVIEW to binary rather than a more standard interchange format (I didn't look at the code), you should make sure you understand how LabVIEW stores data in memory. Also be careful: Flatten To String defaults to big endian and to prepending lengths to everything.

Might be easiest to look at this example:
https://decibel.ni.com/content/docs/DOC-47034
or https://github.com/ni/python_labview_automation

and this may be useful too:
https://decibel.ni.com/content/docs/DOC-46761
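A minimal sketch of that default layout in Python, assuming the server sends a 1-D array of DBLs flattened with the defaults (big endian, array size prepended as an i32); the function name is illustrative:

```python
import struct

def unflatten_dbl_array(payload: bytes) -> list:
    """Unflatten a LabVIEW 1-D DBL array: a big-endian i32 element
    count followed by that many big-endian 8-byte doubles."""
    (count,) = struct.unpack_from(">i", payload, 0)
    return list(struct.unpack_from(">%dd" % count, payload, 4))

# Round-trip check: build the same layout by hand, then unflatten it.
raw = struct.pack(">i", 2) + struct.pack(">2d", 1.5, -2.0)
print(unflatten_dbl_array(raw))  # [1.5, -2.0]
```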

Edited by smithd


Yes, one will have to choose a more standard format to send over the TCP network interface. It kind of works now: if you plot the readable values from the Python console, the data matches. But there will still be some trial and error to find the best data format, or the 'adding size headers to everything' approach (a self-made protocol might just do the trick) - it will eventually work.

A 'sort-of solution' is in the attached files. Thanks for the GitHub link!

python_client2.txt

Sine.vi

Simple TCP - ServerSINE.vi

18 hours ago, mhy said:

Yes, one will have to choose a more standard format to send over the TCP network interface. It kind of works now: if you plot the readable values from the Python console, the data matches. But there will still be some trial and error to find the best data format, or the 'adding size headers to everything' approach (a self-made protocol might just do the trick) - it will eventually work.

A 'sort-of solution' is in the attached files. Thanks for the GitHub link!

python_client2.txt

Sine.vi

Simple TCP - ServerSINE.vi

The first example you posted..............

There are size headers already. The flatten function adds a size for arrays and strings unless the prepend-size flag is FALSE. The thing that may be confusing is that the server reverses the string, and therefore the byte array, before sending. Not sure why they would do that, but if the intent was to turn it into a little-endian array of bytes then it is a bug. I don't see you catering for that in the Python script, and since the extraction of the msg length (4 bytes) is correct, Python is expecting big endian when it unpacks bytes using struct.unpack, but the numeric bytes are reversed.
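In Python terms, one way to cope with such a reversed payload (a sketch; it assumes the entire flattened block of doubles was reversed, and the function name is illustrative):

```python
import struct

def decode_reversed_doubles(reversed_payload: bytes) -> list:
    """Undo a whole-string reverse, then unpack big-endian doubles."""
    payload = reversed_payload[::-1]   # reversing is its own inverse
    count = len(payload) // 8          # 8 bytes per DBL
    return list(struct.unpack(">%dd" % count, payload))

# A server that reversed struct.pack(">2d", 1.0, 2.5) decodes back cleanly:
print(decode_reversed_doubles(struct.pack(">2d", 1.0, 2.5)[::-1]))  # [1.0, 2.5]
```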

The second example has reversed the bytes for the length and appended (rather than prepended) it to the message. I think this is why you have said 100 is enough, since it's pretty unusable if you have to receive the entire message of unknown length in order to know its length :P.
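That is exactly why a prepended length is the usable layout: the client can frame each message before reading the payload. A sketch of the receive side in Python (function names are illustrative):

```python
import struct

def recv_exact(sock, n: int) -> bytes:
    """Read exactly n bytes; TCP may deliver fewer per recv() call."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_message(sock) -> bytes:
    """One length-prefixed message: big-endian u32 length, then payload."""
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```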

If you go back to the first example, remove the reverse string, set the "prepend size" flag to FALSE, and then sort out the endianness, you will be good to go. The flatten primitive even has an option for big endian/network byte order and little endian, so you can match the endianness to Python before you transmit (don't forget to put a note about that in your script and the LabVIEW code for six months' time when you have forgotten all this :D )

If you need to change the endianness of the length bytes, you will have to use the "Split Number" and "Join Number" functions if you are not going to cater for it in the Python script. All LabVIEW numerics are big endian.
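For the length bytes specifically, the Python-side alternative to Split Number / Join Number is just to name the byte order when converting (a short sketch):

```python
# The same value, 0x00000400 == 1024, laid out in each byte order:
n_big    = int.from_bytes(b"\x00\x00\x04\x00", "big")
n_little = int.from_bytes(b"\x00\x04\x00\x00", "little")
print(n_big, n_little)  # 1024 1024
```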

Edited by ShaunR

10 hours ago, ShaunR said:

The first example you posted..............

There are size headers already. The flatten function adds a size for arrays and strings unless the prepend-size flag is FALSE. The thing that may be confusing is that the server reverses the string, and therefore the byte array, before sending. Not sure why they would do that, but if the intent was to turn it into a little-endian array of bytes then it is a bug. I don't see you catering for that in the Python script, and since the extraction of the msg length (4 bytes) is correct, Python is expecting big endian when it unpacks bytes using struct.unpack, but the numeric bytes are reversed.

The second example has reversed the bytes for the length and appended (rather than prepended) it to the message. I think this is why you have said 100 is enough, since it's pretty unusable if you have to receive the entire message of unknown length in order to know its length :P.

If you go back to the first example, remove the reverse string, set the "prepend size" flag to FALSE, and then sort out the endianness, you will be good to go. The flatten primitive even has an option for big endian/network byte order and little endian, so you can match the endianness to Python before you transmit (don't forget to put a note about that in your script and the LabVIEW code for six months' time when you have forgotten all this :D )

If you need to change the endianness of the length bytes, you will have to use the "Split Number" and "Join Number" functions if you are not going to cater for it in the Python script. All LabVIEW numerics are big endian.

 

It's nitpicking a bit, but the options for the Flatten (and Unflatten) functions are Big Endian (or network byte order, which is the same), Little Endian, and native. Big and Little Endian should be clear; native is whatever the current architecture uses, so currently Little Endian on all LabVIEW platforms except when you run the code on an older PowerPC-based cRIO.

And LabVIEW internally uses whatever endianness is used by the native architecture, but its default flattened format is Big Endian. Those two are very distinct things. If it used Big Endian internally, it would need to convert every number every time it is passed to the CPU for processing.
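The distinction is easy to see from Python's struct module, which exposes the same three notions (big, little, native):

```python
import struct, sys

x = 1.0
big    = struct.pack(">d", x)  # LabVIEW's default flattened format
little = struct.pack("<d", x)  # native on current x86/x64 platforms
native = struct.pack("=d", x)  # whatever this machine uses

assert big == little[::-1]     # same 8 bytes, opposite order
assert native == (little if sys.byteorder == "little" else big)
print(big.hex(), little.hex())  # 3ff0000000000000 000000000000f03f
```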

Edited by rolfk

13 hours ago, rolfk said:

And LabVIEW internally uses whatever endianness is used by the native architecture, but its default flattened format is Big Endian. Those two are very distinct things. If it used Big Endian internally, it would need to convert every number every time it is passed to the CPU for processing.

While I accept your point of order, how LabVIEW represents numerics under the hood is moot. Whenever you interact with numerics in LabVIEW as bytes or bits, it is always big endian (only one or two primitives allow different representations), whether that be writing to a file, flattening/casting, Split/Join Number, or TCP/IP, serial, et al. As a pure programmer you are correct, and as an applied programmer, I don't care :P:lol:

22 minutes ago, ShaunR said:

As a pure programmer you are correct, and as an applied programmer, I don't care :P:lol:

That's because you don't write shared libraries! :D Only the flattened LabVIEW formats use Big Endian by default; absolutely anything else is native byte order.

And the only places where LabVIEW flattens data are in its own internal VI Server protocol, when using the Flatten, Unflatten, and Type Cast functions, or when writing or reading binary data to/from disk. Still waiting for the FlexTCPRead and Write that do not use strings as data input but directly the LabVIEW binary data types (and definitely a byte array instead of a string!! Same for VISA - the byte array, I mean; strings simply do not cover the meaning of what is transferred anymore in a world of Unicode and Klingon language support on every embedded OS!!! :D).

Edited by rolfk

1 hour ago, rolfk said:

That's because you don't write shared libraries! :D Only the flattened LabVIEW formats use Big Endian by default; absolutely anything else is native byte order.

And the only places where LabVIEW flattens data are in its own internal VI Server protocol, when using the Flatten, Unflatten, and Type Cast functions, or when writing or reading binary data to/from disk. Still waiting for the FlexTCPRead and Write that do not use strings as data input but directly the LabVIEW binary data types (and definitely a byte array instead of a string!! Same for VISA - the byte array, I mean; strings simply do not cover the meaning of what is transferred anymore in a world of Unicode and Klingon language support on every embedded OS!!! :D).

I don't? (write shared libraries). This issue was solved a long time ago, and there are several methods to determine and correct the endianness of the platform at run time to present a uniform endianness to the application. I have one in my standard utils library, so I don't see this problem - ever.

Don't get me started on VISA...lol. I'm still angling for VISA refnums to wire directly to event structures for event-driven comms :P Strings are variants to me, so I see no useful progress in making the TCP read strictly typed - it would mean I have another type straitjacket to get around, and you would find everyone moaning that they want a variant output anyway.

 

 

25 minutes ago, ShaunR said:

I don't? (write shared libraries). This issue was solved a long time ago, and there are several methods to determine and correct the endianness of the platform at run time to present a uniform endianness to the application. I have one in my standard utils library, so I don't see this problem - ever.

Don't get me started on VISA...lol. I'm still angling for VISA refnums to wire directly to event structures for event-driven comms :P Strings are variants to me, so I see no useful progress in making the TCP read strictly typed - it would mean I have another type straitjacket to get around, and you would find everyone moaning that they want a variant output anyway.

I didn't mean to use shared libraries for endianness issues; that would be madness. But data passed to a shared library is always in native format - anything else would be madness too.

Strings being used as a standard anytype datatype is sort of OK in a language that only uses (extended) ASCII characters for strings. Even in LabVIEW that is only sort of true if you use a Western language version. Asian versions use multibyte character encoding, where a byte is not equal to a character at all.

So I consider a byte stream a more appropriate data type for the network and VISA interfaces than a string. Of course, the damage has been done already and you can't take away the string variant now, at least not in current LabVIEW. Still, I think it would be more accurate to introduce byte-stream versions of those functions and drop them by default on the diagram, with an option to switch to the (borked) string version they have now.

I would expect a fundamentally new version of LabVIEW to switch to byte streams throughout for these interfaces. It's the right format since technically these interfaces work with bytes, not with strings.

Edited by rolfk


So this is how it can be done: convert to a fractional string representation, build a protocol which has only a trailer (an end-of-message marker), decode('ascii'), and convert to a useful representation with comma-separated values. Check the Python file.

python_clientASCII.py

TCPServer.PNG
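A sketch of that ASCII scheme on the client side (the "EOF" trailer string and the function name are illustrative, not the exact attached code):

```python
def parse_ascii_csv(raw: bytes, trailer: bytes = b"EOF") -> list:
    """Strip the end-of-message trailer, decode as ASCII, split on commas."""
    text = raw.split(trailer, 1)[0].decode("ascii")
    return [float(v) for v in text.split(",") if v.strip()]

print(parse_ascii_csv(b"0.000000,0.309017,0.587785EOF"))  # [0.0, 0.309017, 0.587785]
```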

1 hour ago, rolfk said:

I would expect a fundamentally new version of LabVIEW to switch to byte streams throughout for these interfaces. It's the right format since technically these interfaces work with bytes, not with strings.

Well. TCP Write can accept a U8 array, so it's a bit weird that the read cannot output to a U8 indicator; a simple adapt-to-type on the read primitive would make both of us happy bunnies.

56 minutes ago, mhy said:

So this is how it can be done: convert to a fractional string representation, build a protocol which has only a trailer (an end-of-message marker), decode('ascii'), and convert to a useful representation with comma-separated values. Check the Python file.

python_clientASCII.py

TCPServer.PNG

Ahem. "\r\n" (CRLF), rather than EOF (which means nothing as a string in LabVIEW), is the usual ASCII terminator; then you can "readlines" from the TCP stream if you want to go this route. You will find there is a CRLF mode on the LabVIEW TCP Read primitive for this too.

Watch out for the localised decimal point on your number-to-string primitive, which may surprise you on other people's machines (commas vs dots), and be aware that you have fixed the number of decimal places and width (8), so you have lost resolution. You will also find that it's not very useful as a generic approach, since binary data cannot be transmitted, so most people revert back to the length byte sooner or later.
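The CRLF route looks roughly like this on the Python side (a sketch; it assumes the LabVIEW server terminates each comma-separated message with \r\n):

```python
import socket

def read_csv_lines(sock: socket.socket):
    """Yield one list of floats per CRLF-terminated line on the socket."""
    f = sock.makefile("r", encoding="ascii", newline="\r\n")
    for line in f:
        yield [float(v) for v in line.strip().split(",")]
```

This pairs naturally with the CRLF mode of TCP Read mentioned above, since the terminator does the framing instead of a length header.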

Edited by ShaunR

22 minutes ago, ShaunR said:

Well. The TCP Write can accept a U8 array so it's a bit weird that the read cannot output to a U8 indicator so simple adapt-to-type on the read primitive would make both of us happy bunnies..

Wait, are you sure? I totally missed that!

Ok well:

2014: no byte stream on TCP Write

2015: no byte stream on TCP Write

2016: haven't installed that currently

17 minutes ago, rolfk said:

Wait are you sure? I totally missed that!

Ok well:

2014: no byte stream on TCP Write

2015: no byte stream on TCP Write

2016: haven't installed that currently

I'm using 2009 ;)

Untitled.png

I'll check the others later.

...a little later...

ok for 2014 too (it must be operator error on your machine :D )

 

Edited by ShaunR

9 minutes ago, ShaunR said:

I'm using 2009 ;)

Untitled.png

I'll check the others later.

 

Right, it only works for byte arrays, no other integer arrays. And they forgot UDP Write, which is actually at least as likely to be used with binary byte streams. Someone had a good idea but only executed it halfway (well, really a quarter of the way, if you think about the Read). Too bad that the FlexTCP functions they added under the hood around 7.x were never finished.

Edited by rolfk

1 hour ago, rolfk said:

Right, it only works for byte arrays, no other integer arrays. And they forgot UDP Write, which is actually at least as likely to be used with binary byte streams. Someone had a good idea but only executed it halfway (well, really a quarter of the way, if you think about the Read). Too bad that the FlexTCP functions they added under the hood around 7.x were never finished.

You can have your cake and eat it if you wrap them in a VI Macro ;)

Edited by ShaunR

11 minutes ago, ShaunR said:

You can have your cake and eat it if you wrap them in a VI Macro ;)

Or better yet, a Type Enabled Structure

