
Fract/Exp String to Number precision woes


Recommended Posts

I've searched through the forums and LabVIEW's built-in help index, but I have failed to find the solution to what I think is a ridiculous problem. Here's the problem: I have a GPIB VI that controls an Agilent frequency counter. The VI simply initializes the counter and records the results computed by the counter. Due to speed concerns, this VI records the results in string format and does not parse them into numerical values. That's where the problem comes in, though the problem actually has nothing to do with this VI. The problem is with another VI I wrote to convert the files full of strings into files containing numerical representations of the data, so I can work on the data with LabVIEW and other software. However, when I use the Fract/Exp String to Number function, it converts a string such as "+5.748901924922E+002" into the number 574.89 (according to probes and my attempts to manipulate the data with other software such as IGOR). Where has all my precision gone? When I send the result to an indicator and set the indicator up with the appropriate precision, I can see all the digits just as they should be, but all my probes show only 2 digits of precision, and when I save the results to an SGL file I also get only 2 or 3 digits of precision. Is there any way to remedy this other than making my own version of the Fract/Exp String to Number function?

The version of LabVIEW I'm using is 6i and I'm running it on a Mac. Thank you in advance for any and all advice, suggestions, or helpful links.

EDIT: Well, I've found out that I actually do have more precision in my results than I originally thought; however, there is still a problem, which may actually be the expected result of computer arithmetic. The conversion function will convert a string such as "+5.748901924922E+002" into the number 574.8901977539. Is this just a result of the inherent lack of extreme precision in floating-point numbers represented by a computer? I know the amount of precision one can achieve is limited, but I didn't think this number would reach that limit, though I could very well be wrong. Once again, thanks in advance for any and all assistance.

Link to comment
EDIT: Well, I've found out that I actually do have more precision in my results than I originally thought; however, there is still a problem, which may actually be the expected result of computer arithmetic. The conversion function will convert a string such as "+5.748901924922E+002" into the number 574.8901977539. Is this just a result of the inherent lack of extreme precision in floating-point numbers represented by a computer? I know the amount of precision one can achieve is limited, but I didn't think this number would reach that limit, though I could very well be wrong. Once again, thanks in advance for any and all assistance.

I just tried to convert your number. When I display the converted value with an SGL indicator I get "574.899047851562499000", but when done with a DBL indicator I get "574.8990192492200320...". This means that SGL does not have enough bits to represent the value with this precision.

You should

- either change to DBL precision

- or consider whether you really need that many digits (usually 6 digits are enough; the exception is physics, where sometimes values can't be precise enough ;) ).
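(For anyone who wants to reproduce the conversion outside LabVIEW: a minimal Python/NumPy sketch of the same single-versus-double behaviour, shown purely as an illustration and obviously not the LabVIEW function itself.)

```python
import numpy as np

s = "+5.748901924922E+002"

as_dbl = np.float64(s)   # 64-bit double: ~15-16 significant digits
as_sgl = np.float32(s)   # 32-bit single: ~7 significant digits

print(f"{as_dbl:.13f}")  # ~574.8901924922000 (the string's value is preserved)
print(f"{as_sgl:.13f}")  # ~574.8901977539062 (nearest value a single can hold)
```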

Link to comment
However, when I use the Fract/Exp String to Number function, it converts a string such as "+5.748901924922E+002" into the number 574.89. Where has all my precision gone? When I send the result to an indicator and set the indicator up with the appropriate precision, I can see all the digits just as they should be, but all my probes show only 2 digits of precision, and when I save the results to an SGL file I also get only 2 or 3 digits of precision. Is there any way to remedy this other than making my own version of the Fract/Exp String to Number function?

Hi,

In LV 6, probes don't display more than two digits. If you would like to see more, you could create a custom control with engineering notation or more than 2 digits of precision.

If you then select a custom probe and choose the control you just created, you get more precision.

As far as saving to file is concerned: have you wired the precision input? Or accidentally wired a %.2f format string to Array To Spreadsheet String?

I never had problems when saving data to disk; the system only writes as many digits as you select...
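(A hypothetical Python version of the %.2f point, just to show how the format string alone can throw away digits; this is not the LabVIEW VI.)

```python
x = 574.8901924922

print("%.2f" % x)   # '574.89'             -> only two digits after the decimal are written
print("%.12e" % x)  # '5.748901924922e+02' -> the full value survives the trip to text
```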

(I've added a screenshot of what could be a solution...)

[Attached screenshot: post-2311-1151480239.jpg]

good luck,

TNT

Link to comment
- or consider whether you really need that many digits (usually 6 digits are enough; the exception is physics, where sometimes values can't be precise enough ;) ).

Sadly, I am a student working with my college's physics department, so I just may need that many digits. However, your advice seems promising and I greatly appreciate it. I think I use single-precision files because those are the kinds of files the other software I use can read... but I suppose I'm just going to have to figure out a way around that or give up on so many digits. Much thanks to all those who replied.

Well, as I suspected, didierj was absolutely correct. It is not the string-to-number conversion function that is causing the problem; it is the practice of saving data to SGL files. However, I do not see a readily accessible way of saving files with double precision. I'll be on the lookout and working on a method of saving double-precision files, but I thought I would go ahead and post to see if anyone already knows a convenient way of saving such files. As always, much thanks for all advice.

Link to comment
...but I thought I would go ahead and post to see if anyone already knows a convenient way of saving such files.

Hi Vyresince:

You might want to consider writing the data as comma-separated values in a printable ASCII file. This isn't quite as efficient in disk storage as storing the data directly as singles, or even as doubles, but it often saves so much confusion that it is worth it, especially since disk space is so cheap these days.

For huge quantities of data, or other real reasons forcing compact data storage, I've found that PKZip will compress a printable ASCII file down to about the size of the same data stored in packed format. (Sometimes PKZip even does a little better if there are a lot of repeated or similar values in the data.)

This assumes that your downstream analysis tools can accept printable ASCII. And, more importantly, that these tools can internally represent the data they receive in better than single precision; if not, it doesn't much matter how the data gets to them.
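(A rough NumPy sketch of the printable-ASCII approach; the file name and sample values are made up for illustration.)

```python
import numpy as np

data = np.array([574.8901924922, 574.8901930011, 574.8901919877])  # hypothetical readings

# %.15g keeps ~15 significant digits, enough for a double, in plain comma-separated ASCII
np.savetxt("counter_data.csv", data, fmt="%.15g", delimiter=",")

restored = np.loadtxt("counter_data.csv", delimiter=",")  # comes back as float64, nothing lost
```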

Another approach, if the downstream analysis tools can't work with the raw data in acceptable precision: perhaps the data consists of small variations from a relatively larger mean value? If so, you can subtract that mean value within LabVIEW, while the data is still double, then save the data as single. By getting rid of the mean, you'll effectively increase the accuracy of the data after conversion to SGL.
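(A minimal NumPy sketch of this mean-subtraction trick, with made-up values just to show the effect.)

```python
import numpy as np

data = np.array([574.8901924922, 574.8901930011, 574.8901919877])  # small variations on a large mean

naive_sgl = data.astype(np.float32)           # ~7 significant digits total, so the
                                              # tiny variations are mostly rounded away

mean = data.mean()                            # computed while the data is still double
resid_sgl = (data - mean).astype(np.float32)  # all ~7 digits now describe the variations

recovered = resid_sgl.astype(np.float64) + mean  # downstream: add the stored mean back
```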

(A final approach might be to get downstream analysis tools that work with doubles. It seems hard to believe that this wouldn't be a better approach, but perhaps you are stuck with legacy code specific to a certain field that's hard to replace. If not, there are plenty of really good packages out there that can work with double or even extended precision. Lucky for you, most of these packages, which can be quite costly for us civilians, are available pretty economically if you qualify for an academic license. My favorite is Igor from Wavemetrics, at one point in the dim past only available for Mac, and AFAIK still available for Mac.)

Best luck, hope these thoughts help, Louis

Link to comment
Hi Vyresince:

You might want to consider writing the data as comma-separated values in a printable ASCII file. This isn't quite as efficient in disk storage as storing the data directly as singles, or even as doubles, but it often saves so much confusion that it is worth it, especially since disk space is so cheap these days.

Well, most of my downstream analysis programs assume that the data is in a single-precision binary file and not ASCII, so I believe I would prefer to save the data in a double-precision binary file if that is possible.

Another approach, if the downstream analysis tools can't work with the raw data in acceptable precision: perhaps the data consists of small variations from a relatively larger mean value? If so, you can subtract that mean value within LabVIEW, while the data is still double, then save the data as single. By getting rid of the mean, you'll effectively increase the accuracy of the data after conversion to SGL.

Your intuition serves you well, because I am indeed looking at data that varies from a larger mean value. I've thought before of subtracting it out to increase precision as you have suggested, but much of the downstream analysis involves taking FFTs of the data and I believe subtracting the mean value out will cause problems with these FFT programs.

(A final approach might be to get downstream analysis tools that work with doubles. It seems hard to believe that this wouldn't be a better approach, but perhaps you are stuck with legacy code specific to a certain field that's hard to replace. If not, there are plenty of really good packages out there that can work with double or even extended precision. Lucky for you, most of these packages, which can be quite costly for us civilians, are available pretty economically if you qualify for an academic license. My favorite is Igor from Wavemetrics, at one point in the dim past only available for Mac, and AFAIK still available for Mac.)

Best luck, hope these thoughts help, Louis

I actually do use IGOR and it's quite useful. As you know, it can open my delimited text files the way they are without any conversion into binary files; however, my other software (mainly LabVIEW programs) works on the assumption that the incoming data is a binary file representing numeric data. What I really would like to know is whether it is possible to save data to a binary file with double precision instead of single precision. Can I just rewrite the LabVIEW provided single precision saving function to save double precision files? I've tried doing that the naive way by just changing the single precision arrays to double precision, but that resulted in garbage results. So if anyone has any suggestions on how to save data to a double precision binary file I would love to hear them.

Link to comment
...FFTs of the data and I believe subtracting the mean value out will cause problems with these FFT programs.

If the data varies about a constant mean value, that value will only be represented in the 0-Hz bin of the FFT. Subtracting this mean value prior to calculating the FFT should only affect a very small number of points in the FFT. Again, that's assuming that the mean value is constant across the data, and therefore looks like a DC bias.
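(A quick NumPy check of that claim, using synthetic data: a constant offset plus a small oscillation.)

```python
import numpy as np

mean = 574.89
n = 1024
data = mean + 1e-6 * np.sin(2 * np.pi * np.arange(n) / 64)  # DC bias + small tone

spec_raw = np.fft.rfft(data)
spec_detrended = np.fft.rfft(data - mean)

# Only bin 0 (the 0-Hz / DC bin) changes; every other bin is identical up to rounding
print(abs(spec_raw[0] - spec_detrended[0]))               # ~ mean * n, i.e. huge
print(np.max(np.abs(spec_raw[1:] - spec_detrended[1:])))  # ~ floating-point noise
```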

Link to comment
Can I just rewrite the LabVIEW provided single precision saving function to save double precision files? I've tried doing that the naive way by just changing the single precision arrays to double precision, but that resulted in garbage results. So if anyone has any suggestions on how to save data to a double precision binary file I would love to hear them.

Don't forget to rewrite the data read functions with double precision too, or, indeed, you will end up with garbage: the file read function does not care what datatype is in the file, YOU feed the datatype to the function.

So, if you write a double-precision array to a file, LV lays the elements out as 8 bytes each; if you then read single precision from this file, LV assumes the file contains 4-byte elements, resulting in a big mess. There is NO type check and/or conversion in the file read function.
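(In NumPy terms, the mismatch described above looks like this; a hypothetical sketch, not the LabVIEW file read/write VIs.)

```python
import numpy as np

data = np.array([574.8901924922, 1.0, 2.0], dtype=np.float64)
data.tofile("data.bin")  # raw bytes, 8 bytes per element, no header

ok = np.fromfile("data.bin", dtype=np.float64)       # reader matches the writer: correct values
garbage = np.fromfile("data.bin", dtype=np.float32)  # same bytes reinterpreted as 4-byte singles:
                                                     # twice as many elements, all nonsense
```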

Link to comment

Hello all. Sorry it has been so long since I've replied, but my boss has had me out at another location doing work without an internet connection. After that I was off for the 4th of July and have just got back. I did manage to rewrite the read and write VIs to work with doubles and it was a lot easier than I thought it would be. Much of this ease was due to all the advice I've gotten from your replies and I thank you all very much for your input. If I ever manage to scrounge up some money, I will definitely donate to these wonderful forums.

Link to comment
EDIT: Well, I've found out that I actually do have more precision in my results than I originally thought; however, there is still a problem, which may actually be the expected result of computer arithmetic. The conversion function will convert a string such as "+5.748901924922E+002" into the number 574.8901977539. Is this just a result of the inherent lack of extreme precision in floating-point numbers represented by a computer? I know the amount of precision one can achieve is limited, but I didn't think this number would reach that limit, though I could very well be wrong. Once again, thanks in advance for any and all assistance.

As a reference (it's actually mentioned somewhere in the online manuals too): a single-precision number can only be accurate to about 7 significant digits, while a double-precision number can be accurate to about 15 significant digits.
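(Those figures can be checked directly, e.g. with NumPy, shown here only as an illustration.)

```python
import numpy as np

print(np.finfo(np.float32).precision)  # 6  -> roughly 6-7 reliable decimal digits
print(np.finfo(np.float64).precision)  # 15 -> roughly 15-16 reliable decimal digits
```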

Rolf Kalbermatter

Link to comment
As a reference (it's actually mentioned somewhere in the online manuals too): a single-precision number can only be accurate to about 7 significant digits, while a double-precision number can be accurate to about 15 significant digits.

Rolf Kalbermatter

Wow, I actually just got online to refresh my memory on the number of significant digits that single and double can handle and thought I'd check these forums real quick. Thanks much for answering a question I hadn't even thought to ask yet! I was pretty sure that single could handle 6-7, but I couldn't remember if double was 12 or 15-16. Once again, thanks (to Rolf and everyone else who has helped).

Link to comment
