
decimal number to 4 byte hex array



Hi all, I'm back with another conversion question!

I need to convert a decimal number into a 4-byte hex array so that it can be written to a file. I have managed to convert a number to hex (see below); however, it gives 5 bytes out and I have to have 4 bytes. Anyone got any suggestions as to how I do the 4-byte conversion?

Cheers

Al

[Attached image: post-1633-1111054580.jpg]


You should read LabVIEW App Note 154, LabVIEW Data Storage, if you haven't already done so.

LabVIEW floating-point values use the IEEE-754 standard for storage and manipulation. A LV double is stored in eight bytes; a single uses four bytes. You can directly convert singles and doubles by wiring them to the 'Type Cast' primitive, found in the Advanced->Data Manipulation palette. You'll get a four- or eight-character string as output. (LV strings are the preferred representation for arbitrary byte-stream data.)
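Since LabVIEW diagrams can't be shown inline here, the same flattening idea can be sketched in Python with the standard struct module (my analogy, not part of the original post; LabVIEW's Type Cast flattens big-endian, hence the '>' format prefix):

```python
import struct

value = 3.14159

# Flatten to raw bytes, big-endian, like wiring to the 'Type Cast' primitive
single_bytes = struct.pack('>f', value)  # single precision: exactly 4 bytes
double_bytes = struct.pack('>d', value)  # double precision: exactly 8 bytes

print(len(single_bytes))  # 4
print(len(double_bytes))  # 8
```

The key point is the same as in LabVIEW: the 4 or 8 bytes are the number's storage representation, not a textual rendering of its digits.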

I'm unclear on what you need as your ultimate output. If you want to get the single or double into a file in the smallest possible amount of storage, you're basically 'done' with the above steps - the string is 4 or 8 bytes long. Or, if you want to display or print a hex representation of this, you can do lots of things. You can wire the string to an indicator, and set the indicator to display in hex mode. You can convert the string to a U8 array ('String to Byte Array' primitive, String->StringArray/Path Conversion palette), then loop through the array and convert the values using the 'Format Into String' primitive, using a %x format - then you'll have a directly-human-readable output string (twice as long!).
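The byte-array-plus-%x route above can also be sketched in Python (again my analogy, not the original poster's diagram): each byte becomes two hex characters, so the readable string is twice as long as the raw data.

```python
import struct

# Raw 8-byte representation of a double, big-endian
double_bytes = struct.pack('>d', 3.14159)

# Equivalent of 'String to Byte Array': a list of U8 values
byte_array = list(double_bytes)

# Equivalent of looping with 'Format Into String' and a %x format:
# two hex characters per byte
hex_string = ''.join('%02x' % b for b in byte_array)

print(len(double_bytes))  # 8 raw bytes
print(len(hex_string))    # 16 characters - twice as long
```

This is the representation-vs-presentation split in miniature: double_bytes is the data, hex_string is merely one way of displaying it.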

Hope this helps. My experience is that many people get very confused by the differences in representation vs. presentation available in LabVIEW when they first encounter a task requiring bitz, bytez, hex, etc. This is not a fault of LabVIEW, which has a rich feature set in this regard. It is often a problem with imprecise communication (between humans) over what they mean.

By the way, I count myself among the people who've gotten confused over this. :rolleyes:

Hope this helps,

Dave


David,

Thanks for the reply. I have been struggling with the 'representation' vs 'presentation' issue, but I think I have figured it out now.

I have included a pic of the new VI that converts a double into an 8-element array of hex numbers, in case anyone else is having the same problem I was.

Al

[Attached image: post-1633-1111079284.jpg]
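For readers without LabVIEW, the VI described above (double in, 8-element byte array out) can be approximated in Python like this (a sketch using the struct module, not a transcription of the actual diagram):

```python
import struct

def double_to_byte_array(value):
    """Flatten a double into its 8 raw bytes, big-endian,
    as LabVIEW's 'Type Cast' plus 'String to Byte Array' would."""
    return list(struct.pack('>d', value))

# 1.0 is stored as 0x3FF0000000000000
print(double_to_byte_array(1.0))  # [63, 240, 0, 0, 0, 0, 0, 0]
```

For the original 4-byte question, the same sketch with '>f' (single precision) yields a 4-element array instead.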
