NaN to U32 behaviour changed



Hi All,

 

I am in the process of porting a LV8.5 application to LV2013. One thing I have noticed is that there is a difference in behaviour when converting a NaN (DBL) to U32 using the To Unsigned Long Integer primitive.

 

In LV8.5 NaN is converted to 0

In LV2013 NaN is converted to 4294967295 (the maximum a U32 can hold).

 

Does anybody know when this changed, and does this seem like a reasonable change? This could lead to some very strange bugs!

 

 

Link to comment

According to this post

 

http://forums.ni.com/t5/LabVIEW/nan-change/td-p/1826573

 

It happened some time in version 2010 or 2011 (since it worked one way in 2009 and another way in 2011).

 

Now the other question.  While I can understand why some would say a NaN should convert to one value and others would say it should convert to another, I agree that changing it between versions is asking for trouble.  I know a coworker of mine had a similar issue when he was doing limit checking.  He had code asking whether the measured value was less than X (or maybe greater than X), and I think NaN would always return true.  But once the conversion was understood (for that version of LabVIEW) there were no other issues.

Link to comment

It happened in LV10, and I would probably follow the excellent advice given in the linked post above and do the conversion explicitly yourself.

 

Changing undocumented and ill-defined behavior is uncool, but not unreasonable.  (By the time you compound the decisions not to document the original behavior, not to document the new behavior, and then not to document the change in behavior, it does seem especially lame.)

 

Relying on these undocumented behaviors is unreasonable.  Unless you document the undocumented behavior you were expecting, you are just as culpable as NI.  Hopefully it was easy to track down (but not so easy that no lesson was learned).  I loathe text comments in a graphical language, especially the ones that tell you what a chunk of code is doing.  I can read your code, but I cannot read your mind (and I often forget what I was thinking a few weeks/months/years ago).  If I am using an undocumented behavior or working around a known bug, I will add a comment, and when I come back to the code I pay attention, because I know I went out of my way to comment on something.

 

And for those keeping score, my C++ compiler gives a very useful warning message when I try a conversion like this.

Link to comment

Thanks guys.

 

The code in question was written by me some time ago (circa 2007).  I was creating a NaN by dividing by zero; I suppose at the time I did not even bother to consider the "correctness" of the conversion.  It worked, so I just accepted it and moved on to the next thing.

 

I find it really educational to look back at my old code and see how my style has changed: coercion dots all over the place, no defensive programming techniques, etc.

 

 

Link to comment

I heard recently that when you look at your code from 3 years ago that you should feel disgusted.  If you look at your code and think you did a pretty good job, then you aren't improving, or using the new techniques and features in newer versions.

 

Isn't it quite interesting how LabVIEW code looks dated?  I mean I can look at a block diagram and, based on coding styles, make a pretty good guess at what version it was developed in.  Oh, using white labels instead of transparent on block diagram controls?  Labels on the left or the top of controls?  Comments not part of the wires but intended to be?  Bookmarks?  Subdiagrams on structures?  Lots of polling controls?  Default subVI icons?

 

When I look at C++ code from 5 years ago it more or less looks the same.  But 5 year old LabVIEW code is very different.  Either we're changing too much, or they aren't changing enough.

Link to comment
I heard recently that when you look at your code from 3 years ago that you should feel disgusted.  If you look at your code and think you did a pretty good job, then you aren't improving, or using the new techniques and features in newer versions.

 

Isn't it quite interesting how LabVIEW code looks dated?  I mean I can look at a block diagram and, based on coding styles, make a pretty good guess at what version it was developed in.  Oh, using white labels instead of transparent on block diagram controls?  Labels on the left or the top of controls?  Comments not part of the wires but intended to be?  Bookmarks?  Subdiagrams on structures?  Lots of polling controls?  Default subVI icons?

 

When I look at C++ code from 5 years ago it more or less looks the same.  But 5 year old LabVIEW code is very different.  Either we're changing too much, or they aren't changing enough.

 

C(++) code does usually vary less over time but varies extremely between developers. Some prefer to make it look like an armadillo has been walking over the keyboard, while others will spend more time getting the brackets and spaces perfect than writing the actual code. :D

I personally tend to prefer the neatly formatted C code as it simply helps me understand the code more easily when looking at it a few weeks later.

 

LabVIEW code certainly tends to change its style over time, partly because new features simply make it much easier to write something, partly because new insight and experience make you write different code to safeguard against all kinds of regular programming errors that you have come across over time. But even here the variation between developers is usually a lot greater than between code I have written now and a few years ago.

 

However looking at code I wrote in LabVIEW 3.x certainly makes me wonder how I ever could have written it in such a way. :lol:

 

 I know a coworker of mine had a similar issue when he was doing limit checking.  He had code asking whether the measured value was less than X (or maybe greater than X), and I think NaN would always return true.  But once the conversion was understood (for that version of LabVIEW) there were no other issues.

 

I doubt that it was a recent change (i.e. LabVIEW 6 or 7 or later). According to IEEE, any comparison with NaN is always considered false; even (NaN == NaN) should give false. LabVIEW has tried to follow the IEEE standard since its early days, though I do remember that they had some issues in very early versions of LabVIEW, around LabVIEW 2.5/3.0.

 

Now it could be that they broke this in some LabVIEW version and fixed it in the next, and your colleague ran into that. But it seems unlikely that they had not implemented the correct behavior before, unless you are talking about very old LabVIEW versions.

Link to comment
I heard recently that when you look at your code from 3 years ago that you should feel disgusted.  If you look at your code and think you did a pretty good job, then you aren't improving, or using the new techniques and features in newer versions.

 

Isn't it quite interesting how LabVIEW code looks dated?  I mean I can look at a block diagram and, based on coding styles, make a pretty good guess at what version it was developed in.  Oh, using white labels instead of transparent on block diagram controls?  Labels on the left or the top of controls?  Comments not part of the wires but intended to be?  Bookmarks?  Subdiagrams on structures?  Lots of polling controls?  Default subVI icons?

 

When I look at C++ code from 5 years ago it more or less looks the same.  But 5 year old LabVIEW code is very different.  Either we're changing too much, or they aren't changing enough.

 

I look at my code a week later and feel disgusted :D Since I still use 2009 (by preference), my coding still looks the same as it always has. My VI icons are better now though. Does that count?  :)

Link to comment
I doubt that it was a recent change (i.e. LabVIEW 6 or 7 or later). According to IEEE, any comparison with NaN is always considered false; even (NaN == NaN) should give false. LabVIEW has tried to follow the IEEE standard since its early days, though I do remember that they had some issues in very early versions of LabVIEW, around LabVIEW 2.5/3.0.

I just tested in 2011 with many different comparisons, and you are right that it always returns false, as does In Range and Coerce.  My coworker must have been confused, but he claims it was in version 2010.

 

I did discover one other potential issue with this NaN and comparison business.  Let's say I take an average of some values and then want to command a device over serial with that average value.  I have a valid range, so I use In Range and Coerce to keep the value between 0 and 10, which in this hypothetical is my device's limit.  If my average returns NaN by dividing by 0, then In Range and Coerce returns NaN, not 0 or 10.  This isn't the functionality I expected, but it is what is documented in the help.

Link to comment

NaN returns false for all LV versions that I know of back to AT LEAST 6.1. I've got tools that rely on that fact.

 

If my average returns NaN by dividing by 0, then In Range and Coerce returns NaN, not 0 or 10.  This isn't the functionality I expected, but it is what is documented in the help.

I'm not sure why you didn't expect it... you should expect any computation involving NaN to result in NaN. You tried to coerce something that is not a number into a numerical range. The result is, therefore, not a number.



To give further details about what happened in 2010... this appears to be the result of third party changes, not something anyone within LabVIEW ever consciously decided to change. LV 2010 was the first version to use LLVM as our low-level optimizer. The conversion code from float to integer is generated by LLVM, and so in making the compiler change, we picked up LLVM's convention for handling NaN. Until I just now asked around, no one here knew about this change of behavior. The coercion code hadn't changed in forever, long before there was a nightly test suite, so there was never a test created to check that behavior during refactorings. It wasn't something we did and then decided not to document. It was something that never needed documentation since it had been that way presumably since LV 2.0 and I don't think anyone realized would be impacted by the LLVM change.

Link to comment
I'm not sure why you didn't expect it... you should expect any computation involving NaN should result in NaN. You tried to coerce something that is not a number into a numerical range. The result is, therefore, not a number.

I guess I expected it to coerce a number to be within the range I provided.  This is the first time I've found that providing an input to that function didn't force my value to be within the specified range.  I'm not saying it should coerce, and I agree that it behaves the way the standard says it should (and the documentation says as well).  I was just taken aback because I've never seen it behave that way, is all.

 

It's as if the One Button Dialog behaved differently whenever I made the message one specific constant.

Link to comment
The trick in this case is if you test NaN for "is it greater than Max?" it will say "False!" and if you ask "Is it less than Min?" it will say "False!" ... therefore the value is correctly in the range of 0 to 10. ;-)

But if I ask whether the value is less than Max, I also get false, meaning I am outside the valid range and should be coerced...

 

(you must have known I would come to this logic)

Link to comment
But if I ask whether the value is less than Max, I also get false, meaning I am outside the valid range and should be coerced...

 

(you must have known I would come to this logic)

 

But you need to do the tests Aristos Queue mentioned in order to coerce. If they did your test first, they would still need up to two more comparisons to find out whether to coerce to the upper bound or the lower one, making your test just degrade performance in the out-of-range case.

 

Also, an interesting challenge to think about: which limit would you have expected the NaN value to be coerced to? Upper or lower? Both are equally logical (or rather illogical) :P

Link to comment
This is why in 2014, NI will be shipping the new Bifurcated Timeline LabVIEW. For any dataflow with an ambiguous answer, we'll fork the entire universe and provide one answer to each quantum state vector. Even if you find yourself in the wrong timeline, you can take solace in the fact that in one of the various realities, LabVIEW did exactly what you expected it did and that other you is quite happy. Be happy for your self's good fortune! The feature is undocumented because documentation being provided is one of those quantum states that got forked during testing and this universe lost out. But it's in there, nonetheless. I hope you enjoy it!

 

At least in the Beta, adding superSecretQuantumVersion to the LabVIEW.ini file seems to magically make that documentation available in the help file though. ;)

Link to comment
But you need to do the tests Aristos Queue mentioned in order to coerce. If they did your test first, they would still need up to two more comparisons to find out whether to coerce to the upper bound or the lower one, making your test just degrade performance in the out-of-range case.

 

Also, an interesting challenge to think about: which limit would you have expected the NaN value to be coerced to? Upper or lower? Both are equally logical (or rather illogical) :P

I've already admitted defeat, and I hope I am clear that I never asked for it to be one way or another.  But because it is Monday, I made the three ways to do what we are talking about: the native way (the actual function), the right way (starting with "is the value less than Max?"), and the wrong way (starting with "is the value greater than Max?").  Both the right and wrong ways have the same amount of logic, two checks each; I don't see how doing it the wrong way adds extra checks.  After running the test on an array of 100,000,000 doubles I got 233 ms for native, 239 ms for right, and 238 ms for wrong.  I think we can call that a wash.

Coerce Tests.vi

Link to comment
I've already admitted defeat, and I hope I am clear that I never asked for it to be one way or another.  But because it is Monday, I made the three ways to do what we are talking about: the native way (the actual function), the right way (starting with "is the value less than Max?"), and the wrong way (starting with "is the value greater than Max?").  Both the right and wrong ways have the same amount of logic, two checks each; I don't see how doing it the wrong way adds extra checks.  After running the test on an array of 100,000,000 doubles I got 233 ms for native, 239 ms for right, and 238 ms for wrong.  I think we can call that a wash.

 

You are of course right about only needing two comparisons too. However, you switched the Native and AQ time controls.

 

And interestingly, the "real" AQ comparison is ALWAYS faster on my machine than the other two, with your comparison usually being slightly slower than the native one. However, the overall variation between runs is generally higher than the difference between the three methods within one run.

Link to comment
