Dirk J. Posted September 25, 2009

Can anyone confirm (and explain) this behaviour?

1) Place a numeric constant on an empty block diagram (it defaults to 0, I32).
2) Enter the fractional number 1.3. The representation changes to DBL.
3) Change the representation to SGL.

In my case, the constant changes to 1.299999952316. (I noticed that this doesn't happen for all entered values; 1.5 works fine.)

/dirk
Rolf Kalbermatter Posted September 25, 2009

This is a regular topic in any computer forum and the answer is very simple: computers are binary and discrete, while real numbers are continuous, with infinitely fine increments (maybe something Planck related). To store your number 1.3 exactly, the computer would need an infinite amount of memory, and that is still hard to get nowadays. So the floating point formats limit the storage for a number to a limited number of digits, and that is what you see. Single has about 7 digits of accuracy and double about 15 to 16.

Rolf Kalbermatter
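The same effect can be reproduced outside LabVIEW. Here is a minimal sketch in plain Python (not LabVIEW), assuming NumPy is available so that numpy.float32 can stand in for LabVIEW's SGL and Python's built-in float for DBL:

```python
import numpy as np

d = 1.3                    # double precision (DBL): nearest representable value to 1.3
s = np.float32(1.3)        # single precision (SGL): fewer bits, so a larger rounding error

print(f"{d:.17f}")         # 1.30000000000000004 -> error only shows beyond the 15th digit
print(f"{float(s):.17f}")  # 1.29999995231628418 -> error already shows at the 8th digit

# 1.5 is an exact sum of powers of two (1 + 1/2), so both formats store it exactly.
print(np.float32(1.5) == 1.5)   # True
```

Neither format holds 1.3 exactly; double just gets close enough that the error hides behind more digits than are usually displayed.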
Dirk J. Posted September 25, 2009

OK, I understand that it has to do with the number of digits, but this still puzzles me:

If I place a constant, change its representation to SGL and enter 1.3, the constant on the BD reads 1.3. If I then enter any other value, it simply displays what I entered (as I would expect).

If I place a constant, change its representation to DBL and enter 1.3, the constant reads 1.3. If I change the representation to SGL, it shows 1.29... If I enter any other value, it sometimes displays what I entered, sometimes not.

Yet both constants have SGL representation.

/d
Cat Posted September 25, 2009

Set your 1.3 constant to show 10 or more significant digits and it will display that it's actually equal to 1.29999995231... Because of what Rolf said, 1.5 can be represented exactly and 1.3 cannot.

Cat
Cat Posted September 25, 2009

I've been playing around with this some more (I'm too sleepy to do any real work).

If you drop an I32 constant on the BD and manually change it to a SGL, it defaults to 6 digits of precision (enough to make 1.299999... look like 1.3). If you drop an I32 constant on the BD and manually change it to a DBL, it sets it to 13 digits of precision. So when LV automatically converts the I32 constant to a DBL and then you convert it to a SGL, it must keep the 13 digits of precision, and your 1.3 now looks like 1.299999...

All of this is how it works on my system, anyway; YMMV. Okay, time to do something useful for the Fleet...

Cat
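The 6-versus-13-digit effect can also be mimicked outside LabVIEW. A small Python sketch (again assuming NumPy; the significant-digit format specifiers below play the role of LabVIEW's display precision setting) shows that only the number of displayed digits differs, not the stored value:

```python
import numpy as np

sgl = float(np.float32(1.3))   # the value actually stored in a SGL constant

print(f"{sgl:.6g}")    # 1.3            -> at a 6-digit display it still looks exact
print(f"{sgl:.13g}")   # 1.299999952316 -> a 13-digit display exposes the rounding error
```

Both lines print the very same stored value; the second one simply shows enough digits to reveal that it is not exactly 1.3.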
Dirk J. Posted September 25, 2009 (edited)

[Edit: just a couple of minutes too late...]

I understand that not all numbers can be represented exactly, but the key to my puzzle is the default display precision of SGL and DBL.

If you configure a constant as a SGL with value 1.3, it defaults to 6 digits of precision. You can change the representation of the SGL back and forth to DBL, but it maintains its 6 digits (always displaying 1.3).

If you configure the constant as a DBL with value 1.3, it defaults to 13 digits of precision (initially showing 1.3). Changing the representation to SGL maintains the 13 digits of display precision (not internal precision), thus showing 1.2999... I somehow expected it to change to 6 digits of display precision (which shows 1.3).

/d

Edited September 25, 2009 by Dirk J.
Rolf Kalbermatter Posted September 25, 2009

"I somehow expected it to change to 6 digits display precision."

Why should it do that? The decision about which display precision to use is made at the moment the constant changes from a non-float to a float type, but it is not changed when you switch between float representations. Should it be? That is, IMHO, highly debatable. I would not like the display precision to change when switching to a different float type after I have set it to a specific value. Supporting that would mean LabVIEW had to store an additional attribute alongside the display precision, indicating whether it was changed by the user or is still the automatic default. Possible to do, but most likely not worth the trouble, since this would not be a simple one-line change but would reach deep into LabVIEW's entire datatype handling.

Rolf Kalbermatter
Dirk J. Posted September 25, 2009

Rolf: fair enough. LV handles so many things for you...

I actually only noticed this when I changed a Complex Double (which wasn't supposed to be complex) to a Single in an example VI I wanted to send to the students in our group. My main concern was their potential surprise at seeing a number different from the one they entered. Anyhow, it all makes sense after all. Maybe I'm just not having a good day today.

/d