Kevin Boronka Posted August 9, 2006

I'm running LabVIEW 8.01. LabVIEW 8.01 handles data type conversions differently. When converting a Double (36611.0) to an I16, the value of the I16 is maxed out, binary 111111111111111. When converting a U16 (36611) to an I16, the I16 keeps the same bit pattern as the U16, binary 1000111100000011.

Download File: post-3342-1155160832.vi
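For reference, a minimal C sketch that reproduces the two results described above. This is not how LabVIEW implements its coercion; the clamp for the double case is written out by hand (a bare C cast would not saturate), and the unsigned-to-signed cast is implementation-defined before C23 but wraps modulo 2^16 on common compilers:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    double   d = 36611.0;
    uint16_t u = 36611;

    /* Double -> I16 as described above: the value saturates at the
       I16 maximum, 32767 (binary 0111111111111111). A plain C cast
       would not clamp, so the clamp is spelled out here. */
    int16_t from_double = (d > INT16_MAX) ? INT16_MAX
                        : (d < INT16_MIN) ? INT16_MIN
                        : (int16_t)d;

    /* U16 -> I16 as described above: the 16-bit pattern is kept, so
       36611 (binary 1000111100000011) reads back as -28925. */
    int16_t from_u16 = (int16_t)u;

    printf("Double -> I16: %d\n", from_double); /* 32767  */
    printf("U16    -> I16: %d\n", from_u16);    /* -28925 */
    return 0;
}
```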
Ton Plomp Posted August 10, 2006

It is strange indeed; in LV 8.2 and 7.1 the same occurs. But the help states: "This function rounds all floating-point numeric values to the nearest integer." Still, it should not do a typecast of the U16 to I16 but a conversion. Seems like some work for NI.

Ton

PS: there is no difference when connecting the U16 directly to the indicator.
Rolf Kalbermatter Posted January 17, 2007

Actually, not necessarily. This is behaviour that also occurs in C, at least with the compilers I know of, and it has its uses when you read in data from a stream in a certain format but later want to reinterpret some of the data. I know of a few cases where I have relied on this fact, and changing it now would certainly break lots of people's VIs.

Rolf Kalbermatter
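A small C illustration of the stream use case Rolf describes; the two-byte field and its layout are invented for the example:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical example: a word is read from a byte stream as an
   unsigned 16-bit value and later reinterpreted as a signed reading
   without touching the bits. */
int main(void)
{
    const uint8_t stream[] = { 0x8F, 0x03 };  /* big-endian 0x8F03 = 36611 */
    uint16_t raw = (uint16_t)((stream[0] << 8) | stream[1]);

    /* The unsigned-to-signed conversion keeps the bit pattern
       (implementation-defined before C23, modular wrap-around on
       the compilers I am aware of). */
    int16_t value = (int16_t)raw;

    printf("raw = %u, reinterpreted = %d\n", (unsigned)raw, value); /* 36611, -28925 */
    return 0;
}
```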
robijn Posted January 17, 2007

Indeed, I've also relied on that many times. If the behaviour is to be changed (which would not be a bad thing in itself, IYAM), the automatic version upgrade process should replace the U32 or I32 conversion nodes with Typecast nodes, because that behaviour is expected from the Typecast function. I find it strange that the Typecast node sometimes modifies data. It should never do that.

Joris
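To make the distinction Joris draws concrete, here is a rough C analogy (my reading of the two operations, not NI's definitions): a numeric conversion changes the bits to preserve the value as far as the target type allows, while a typecast keeps the bits untouched and merely reinterprets them:

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    uint16_t u = 61488;  /* 0xF030 */

    /* Conversion: change the representation to preserve the value,
       clamping to the I16 maximum when it does not fit. */
    int16_t converted = (u > INT16_MAX) ? INT16_MAX : (int16_t)u;

    /* Typecast: keep the 16-bit pattern and only read it as signed.
       memcpy makes the "bits untouched" intent explicit. */
    int16_t reinterpreted;
    memcpy(&reinterpreted, &u, sizeof reinterpreted);

    printf("converted = %d, typecast = %d\n", converted, reinterpreted);
    /* converted = 32767, typecast = -4048 */
    return 0;
}
```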