
Handling Endianness between LabVIEW and Windows (Intel Processor Architecture)



I am using UDP to transfer data from a Windows PC host to an NI myRIO. The data transfer itself works fine. The challenge I have is handling the endianness between the two. Is there a way to receive the data in LabVIEW in little endian, or better still, to translate it to big endian (which is what LabVIEW uses) before passing on my data?

Thank you!


The easiest way to do this is usually "Unflatten From String" with the endianness input wired as desired. In order for this to work it needs to be possible to unflatten your data directly into a LabVIEW data type (which could be a cluster).
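Unflatten From String is a graphical LabVIEW function, but the effect of its byte order input can be sketched in Python with the `struct` module (an illustration only, not LabVIEW code):

```python
import struct

# Four bytes that a little-endian sender (e.g. an Intel/Windows host)
# would produce for the 32-bit float 1.5.
data = struct.pack('<f', 1.5)

# Interpreting the same bytes with the matching byte order recovers the
# value; the wrong byte order produces a meaningless number.
as_little = struct.unpack('<f', data)[0]  # correct byte order: 1.5
as_big = struct.unpack('>f', data)[0]     # wrong byte order: garbage

print(as_little, as_big)
```

The endianness input on Unflatten From String plays the same role as the `<`/`>` prefix here: it tells LabVIEW which byte order the incoming string uses.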


In order for this to work it needs to be possible to unflatten your data directly into a LabVIEW data type (which could be a cluster).

 

Can you show an example of this?

 

I am receiving the data in LabVIEW, but it seems the Unflatten From String function is not deconstructing it properly.

 

  This is what I have:

[attached screenshot: block diagram showing UDP Read wired to Unflatten From String]


It would be more helpful to work with your real data. What is the format of the data you're receiving? If you're receiving the same text that your C# code from the other thread would otherwise be writing to the console, then your problem isn't endianness at all - it's that you need to parse the string into values. On the other hand, if you're sending binary data, then you should be able to unflatten it. The image you posted shows you're trying to receive an array of 16-bit values. Is that actually what you're sending? It might be helpful to capture some of the strings received by UDP Read in an indicator, stop the VI, change to a control, and set the current value as the default value. Then you can upload that VI, and I (and other forum users) can see the real data you receive.
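To make the text-versus-binary distinction concrete, here is a small Python illustration (standing in for the C# side; the value is arbitrary): the text form of a float and its binary form are different bytes entirely, and only the binary form can be unflattened.

```python
import struct

value = 1.8543

# Text form: what Console.WriteLine prints - here, 6 ASCII characters.
as_text = str(value).encode('ascii')

# Binary form: the 4 bytes BitConverter.GetBytes would produce
# on a little-endian machine.
as_binary = struct.pack('<f', value)

# Different lengths, different contents - parsing text and
# unflattening binary are two different jobs.
print(len(as_text), len(as_binary))
```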


Here's what I am sending from the C# end. It's binary.

this.facePoints3D = frame.Get3DShape();

// UDP Connection :: Talker ::
Boolean done = false;
Boolean exception_thrown = false;

Socket sending_socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
IPAddress send_to_address = IPAddress.Parse("172.22.11.2");
IPEndPoint sending_end_point = new IPEndPoint(send_to_address, 80);

while (!done)
{
    foreach (Vector3DF vector in facePoints3D)
    {
        // arrange
        float zvect = vector.Z;
        byte[] bytearray = BitConverter.GetBytes(zvect);

        Console.WriteLine("sending to address: {0} port: {1}",
            sending_end_point.Address,
            sending_end_point.Port);

        try
        {
            sending_socket.SendTo(bytearray, sending_end_point);
        }
        catch (Exception send_exception)
        {
            exception_thrown = true;
            Console.WriteLine(" Exception {0}", send_exception.Message);
        }

        if (exception_thrown == false)
        {
            Console.WriteLine("Message has been sent to the broadcast address");
        }
        else
        {
            exception_thrown = false;
            Console.WriteLine("The exception indicates the message was not sent.");
        }
    } // ends foreach statement
} // ends while(!done) statement

I am getting something of this sort from the UDP Read function in LabVIEW when I wired an indicator to it: ƒª’?

After unflattening, I am getting values (e.g. numbers like 141, 168, 200) that do not correspond to what I have on my console window (floats such as 1.8543, 1.115, etc.).

Let me know what you think.


I am getting something of this sort from the UDP Read function in LabVIEW when I wired an indicator to it: ƒª’?

After unflattening, I am getting values (e.g. numbers like 141, 168, 200) that do not correspond to what I have on my console window (floats such as 1.8543, 1.115, etc.).

Let me know what you think.

I think that if you're sending floats, then that's the data type you should be unflattening, too. Instead you're unflattening to 16-bit integers, which, unsurprisingly, explains why all the values you see are integers. Change the numeric representation to single-precision (I believe that corresponds to a float in C#) and see if it fixes the problem.

 

The strange strings you see when looking at the UDP data are to be expected as well - you're looking at binary data as ASCII. If you change the string representation to hex you'll see the hex representation of the binary data, which might be more useful. Either way it's the same bytes, just displayed differently.
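As an illustration of both points (Python standing in for the LabVIEW functions; the value is arbitrary): the same four little-endian bytes yield plausible-looking but meaningless integers when read as 16-bit values, and the original float when read as a single; the hex view just shows the raw bytes either way.

```python
import struct

# The bytes a little-endian sender would emit for the float 1.8543.
data = struct.pack('<f', 1.8543)

as_i16 = struct.unpack('<2h', data)    # two 16-bit ints: meaningless numbers
as_sgl = struct.unpack('<f', data)[0]  # one single: ~1.8543 again

print(data.hex())        # hex view of the raw bytes, like Hex Display
print(as_i16, as_sgl)
```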


The same type as the data you're sending. I would use an array of single-precision floats. Given that you send each value individually, you may get a bunch of single-element arrays (I don't know whether LabVIEW will combine multiple packets into a single read). Make sure the Unflatten input that specifies whether the string contains the array size is FALSE (the string does NOT include the array size). You might want to modify your C# code temporarily to send a known value, which will make it easier to debug. You might also want to restructure the C# code to send multiple values in a single UDP send, to cut down on the overhead relative to real data.
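In Python terms (a sketch, not the LabVIEW diagram), receiving an array of singles with no size header just means the element count is implied by the payload length:

```python
import struct

# A packet of three little-endian singles with no size prefix,
# as the C# sender would produce.
packet = struct.pack('<3f', 1.5, 2.5, 3.5)

# No header: infer the count from the length (4 bytes per single).
count = len(packet) // 4
values = struct.unpack('<%df' % count, packet)

print(values)  # (1.5, 2.5, 3.5)
```

Setting "data includes array size" to FALSE on Unflatten From String tells LabVIEW to do exactly this, instead of treating the first bytes as a length field.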


Yes, I agree with the above. You need to unflatten to singles.

 

I am getting something of this sort from the UDP Read function in LabVIEW when I wired an indicator to it: ƒª’?

 

If you're still having trouble, can you switch that indicator to show hex (right-click, select Hex Display) and paste a few lines of that here? Even better if we know what the correct values would be (for example, you could send an array of "1.5").


I would definitely combine the numbers into one packet, like this:

this.facePoints3D = frame.Get3DShape();

// UDP Connection :: Talker ::
Boolean done = false;

Socket sending_socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);

IPAddress send_to_address = IPAddress.Parse("172.22.11.2");
IPEndPoint sending_end_point = new IPEndPoint(send_to_address, 80);

while (!done)
{
    int index = 0;
    byte[] bytearray = new byte[facePoints3D.Length * 4];
    foreach (Vector3DF vector in facePoints3D)
    {
        Array.Copy(BitConverter.GetBytes(vector.Z), 0, bytearray, index, 4);
        index += 4;
    }
    
    try
    {
        sending_socket.SendTo(bytearray, sending_end_point);
        Console.WriteLine("Message has been sent");
    }
    catch (Exception send_exception)
    {
         Console.WriteLine("The exception indicates the message was not sent.");
    }

} //ends while(!done) statement

Then on the LabVIEW side, use an Unflatten From String with an array of single-precision floats as the data type input. And of course don't forget to set the "data includes array size" input to FALSE.
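The byte layout the loop above builds is simply all the Z values packed back to back; in Python terms (illustration only, with made-up stand-in values):

```python
import struct

z_values = [0.25, 1.5, -2.0]  # stand-ins for the vector.Z values

# Equivalent of the Array.Copy loop: one contiguous little-endian buffer.
packet = b''.join(struct.pack('<f', z) for z in z_values)

# Receiving side: unflatten with little-endian byte order and
# "data includes array size" = FALSE to recover the whole array.
received = struct.unpack('<%df' % len(z_values), packet)
print(received)  # (0.25, 1.5, -2.0)
```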

 

[attached image: example block diagram of Unflatten From String with an array-of-SGL type input]


Right-click the Numeric constant inside the array constant (which is wired to the Type input) and choose Representation -> SGL (Single-Precision). You could also wire a numeric constant without the array, since your code sends only one value at a time. When you typed "0.00f" it gave you a double-precision value, which requires 8 bytes. Since you're sending single-precision values (4 bytes) one at a time, there weren't enough bytes to convert to a double-precision floating point, so you got error 74.
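The size mismatch can be reproduced outside LabVIEW; in Python, trying to read a double from four bytes fails in the same way (an illustration, not the LabVIEW error itself):

```python
import struct

data = struct.pack('<f', 1.5)  # 4 bytes: one single-precision value

# A double needs 8 bytes, so unpacking it from 4 bytes fails -
# analogous to LabVIEW's error 74 (not enough data).
try:
    struct.unpack('<d', data)
    failed = False
except struct.error as e:
    failed = True
    print('not enough bytes:', e)

# With the matching type, the value comes back correctly.
value = struct.unpack('<f', data)[0]
```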


Right-click the Numeric constant inside the array constant (which is wired to the Type input) and choose Representation -> SGL (Single-Precision).

So I have the array constant, but in LabVIEW 2014 I can't find my way to the Representation menu or SGL.

You could also wire a numeric constant without the array, since your code sends only one value at a time.

Now, I tried this and I am getting floating-point numbers as my output. But I notice my data is of the order 1.06607E+9 when in actual fact it should be approximately 1.06607.

Am I missing something?


So I have the array constant, but in LabVIEW 2014 I can't find my way to the Representation menu or SGL.

Can't help you with this one. I don't think anything changed in LabVIEW 2014, but I'm still on an older version. Can you get to the representation for a numeric constant that isn't in an array? The process is identical for a numeric inside an array. What happens when you right-click the numeric inside the array? Do you get a shortcut menu?

 

 

Now, I tried this and I am getting floating-point numbers as my output. But I notice my data is of the order 1.06607E+9 when in actual fact it should be approximately 1.06607.

Am I missing something?

It would make it much easier to help if you save your data to a string control, as I explained in a previous post, and you post that VI. That would make it possible for us to see exactly what you're doing. Again, I'm still on LabVIEW 2012, so you'd need to save the VI for that version in order for me to look at it. The VI needs to contain a string control containing the actual data saved as the default value (this step is critical), wired to "unflatten from string" used exactly as you have it in your VI.
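One thing worth checking here (my own suggestion, not something confirmed in this thread): a value that is wrong by many orders of magnitude is a classic byte-order symptom. In Python terms:

```python
import struct

value = 1.06607
data = struct.pack('<f', value)  # little-endian bytes from the sender

# Reading the same bytes with the wrong byte order gives a number
# that is wildly off in magnitude - much like seeing 1.06607E+9
# instead of 1.06607.
swapped = struct.unpack('>f', data)[0]

print(value, swapped)
```

If the magnitudes look like this, double-check the byte order input on Unflatten From String.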

