DBL to I16 bug



I'm running LabVIEW 8.01

LabVIEW 8.01 handles these two data type conversions differently.

When converting a Double (36611.0) to an I16, the result saturates at the I16 maximum: binary 0111111111111111 (32767).

When converting a U16 (36611) to an I16, the I16 keeps the same bit pattern as the U16: binary 1000111100000011 (-28925 when read as signed).

Download File:post-3342-1155160832.vi


It is strange indeed; the same thing occurs in LV 8.2 and 7.1. But the help states:

"This function rounds all floating-point numeric values to the nearest integer."

Still, it should not do a type cast of U16 to I16 but a conversion... seems like some work for NI.

Ton

PS there is no difference when connecting the U16 directly to the indicator


Actually, not necessarily. This behaviour also occurs in C, at least in the compilers I know of, and it has its uses when you read data from a stream in a certain format but later want to reinterpret some of that data. I know of a few cases where I have relied on this fact, and changing it now would certainly break lots of people's VIs.

Rolf Kalbermatter

Indeed, I've also relied on that many times. If the behaviour is to be changed (which would not be a bad thing in itself, IYAM), the automatic version-upgrade process should replace the U32 or I32 nodes with Typecast nodes, because from the Typecast function that behaviour is expected.

I find it strange that the typecast node sometimes modifies data. It should never do that.

Joris
