
constant value changes on changing representation



Can anyone confirm (and explain) this behaviour:

1) place a numeric constant on an empty block diagram (defaults to 0, I32)

2) enter the fractional number 1.3; the representation changes to DBL

3) change representation to SGL

In my case, the constant changes to 1.299999952316

(I noticed that this doesn't happen for all entered values.... 1.5 works fine)
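(For reference, the same rounding can be reproduced outside LabVIEW. The short Python check below is my addition, assuming the SGL constant holds a standard IEEE 754 single precision value.)

```python
import struct

def as_sgl(x):
    """Round-trip a value through 32-bit single precision (what a LabVIEW SGL stores)."""
    return struct.unpack('f', struct.pack('f', x))[0]

print(f"{as_sgl(1.3):.12f}")   # 1.299999952316 -> 1.3 has no exact binary representation
print(f"{as_sgl(1.5):.12f}")   # 1.500000000000 -> 1.5 is 1.1 in binary and is stored exactly
```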

/dirk


This is a regular topic in any computer forum and the answer is very simple:

Computers are binary and discrete. The real numbers a float stands for are not; between any two of them there are infinitely many more, with no smallest increment (maybe something Planck related :rolleyes: ).

To store your number 1.3 exactly, the computer would need an infinite number of bits, because 1.3 is a repeating fraction in binary, and infinite memory is still hard to get nowadays. :shifty:

So floating point formats limit each number to a fixed number of digits, and that rounding is what you see.

A single has about 7 significant decimal digits of precision, a double about 15 to 16.
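To make that visible, here is a small Python illustration (my addition, assuming LabVIEW SGL/DBL are IEEE 754 binary32/binary64) that prints the exact values actually stored in memory:

```python
import struct
from decimal import Decimal

dbl = 1.3                                            # IEEE 754 double (a LabVIEW DBL)
sgl = struct.unpack('f', struct.pack('f', 1.3))[0]   # IEEE 754 single (a LabVIEW SGL)

# Decimal() shows the exact binary value that was stored, without any display rounding:
print(Decimal(dbl))   # 1.3000000000000000444089209850062616169452667236328125
print(Decimal(sgl))   # 1.2999999523162841796875
```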

Rolf Kalbermatter


OK, I understand that it has to do with the number of digits.

But this still puzzles me:

If I place a constant, change its representation to SGL and enter 1.3, the constant on the BD reads 1.3.

If I then enter any other value, it simply displays what I entered (as I would expect).

If I place a constant, change its representation to DBL and enter 1.3, the constant reads 1.3.

If I change the representation to SGL, it shows 1.29.....

If I enter any other value, it sometimes displays what I entered, sometimes not.

Yet both constants have SGL representation.
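(A side note to back that up: outside LabVIEW you can check that both routes end in exactly the same 32-bit pattern, so only the number of displayed digits can differ. Python sketch, my addition:)

```python
import struct

# Typing 1.3 straight into a SGL, or typing 1.3 into a DBL and then coercing it to SGL,
# produces the identical 32-bit pattern:
direct  = struct.pack('f', 1.3)
via_dbl = struct.pack('f', struct.unpack('d', struct.pack('d', 1.3))[0])
print(direct == via_dbl, direct.hex())   # True 6666a63f (on a little-endian machine)
```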

/d


I've been playing around with this some more (I'm too sleepy to do any real work).

If you drop an I32 constant on the BD and manually change it to a SGL, it defaults to 6 digits of precision (enough to make 1.299999etc look like 1.3). If you drop an I32 constant on the BD and manually change it to a DBL, it sets it to 13 digits of precision. So when LV automatically converts the I32 constant to a DBL and then you convert it to a SGL, it must keep the 13 digits of precision, and your 1.3 now looks like 1.299999etc
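Cat's digit counts can be imitated with plain text formatting; in the Python sketch below (my addition, using %g-style significant digits as a rough stand-in for LabVIEW's "Digits of precision" setting) the same stored SGL value is shown at 6 and at 13 digits:

```python
import struct

sgl = struct.unpack('f', struct.pack('f', 1.3))[0]   # the value the SGL constant holds

print(f"{sgl:.6g}")    # '1.3'            -> 6 digits of precision hide the rounding error
print(f"{sgl:.13g}")   # '1.299999952316' -> 13 digits of precision expose it
```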

All of this is how it works on my system anyway; YMMV.

Okay, time to do something useful for the Fleet...

Cat


[Edit: just a couple of minutes too late....]

I understand that not all numbers can be represented exactly, but the key to my puzzle is in the default display precision of SGL and DBL.

If you configure a constant as a SGL with the value 1.3, it defaults to 6 digits of precision.

You can change the representation of that SGL back and forth to DBL, but it keeps its 6 digits (always displaying 1.3).

If you configure the constant as a DBL with the value 1.3, it defaults to 13 digits of precision (initially showing 1.3).

Changing the representation to SGL keeps the 13 digits of display precision (not internal precision), thus showing 1.2999...

I somehow expected it to change to 6 digits of display precision (which would show 1.3 again).
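(That also explains the asymmetry: the DBL's own rounding error is far smaller, so 13 digits still look clean for it. A quick Python check, my addition:)

```python
import struct

dbl = 1.3
sgl = struct.unpack('f', struct.pack('f', 1.3))[0]

print(f"{dbl:.13g}")   # '1.3'            -> the DBL error only shows up around the 17th digit
print(f"{sgl:.13g}")   # '1.299999952316' -> the SGL error shows up around the 8th digit
```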

/d



I somehow expected it to change to 6 digits of display precision (which would show 1.3 again).

Why should it do that? The decision about which display precision to use is made at the moment the constant changes from non-float to float; it is not revisited when you switch between float representations. Should it be? IMHO that's highly debatable.

I would not like the display precision to change when I switch to a different float type after I have deliberately set a specific precision. So LabVIEW would have to store an additional attribute alongside the display precision, indicating whether that precision was changed by the user or is still an automatic default. Possible to do, but most likely not worth the trouble, as this would not be a simple one-line change but would reach deep into LabVIEW's entire datatype handling.
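A toy model of the behaviour Rolf describes might look like this in Python (purely illustrative; this is my guess at the rule, not LabVIEW's actual code, and the default digit counts are the ones Cat observed):

```python
DEFAULT_DIGITS = {"SGL": 6, "DBL": 13}   # defaults Cat observed; assumed, not documented

def change_representation(constant, new_type):
    # Display precision is chosen only when a non-float becomes a float ...
    if not constant["is_float"] and new_type in DEFAULT_DIGITS:
        constant["digits"] = DEFAULT_DIGITS[new_type]
    # ... and left alone when switching between float types (SGL <-> DBL).
    constant["is_float"] = new_type in DEFAULT_DIGITS
    constant["type"] = new_type
    return constant

c = {"type": "I32", "is_float": False, "digits": None}
c = change_representation(c, "DBL")   # digits -> 13
c = change_representation(c, "SGL")   # digits stay 13, so 1.3 displays as 1.299999952316
print(c)                              # {'type': 'SGL', 'is_float': True, 'digits': 13}
```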

Rolf Kalbermatter


Rolf: fair enough. LV handles so many things for you..... :)

I actually only noticed this when I changed a Complex Double (which wasn't supposed to be complex) to a Single in an example VI which I wanted to send to the students in our group.

My main concern was that they might be surprised to see a number different from the one they entered.

Anyhow, it all makes sense after all.

Maybe I'm just not having a good day today.

/d

