
Raw or calibrated for datalogging


Recommended Posts

I have just started with LabVIEW and have a data logging project using a cRIO-9012 and NI 9205 modules. The data logger is connected by a slow radio link, so data size must be kept to an absolute minimum. It also has to run unattended for three months in a harsh environment.

If I store the calibrated data, each measurement will require 4 bytes. If I store the raw data, each measurement will need only 2 bytes. However, the raw data will require separate storage of the calibration gain and offset. I assume that both of these drift with time and temperature, so how often should they be recorded for later correction?

I assume that the calibrated data will be continually corrected. Is it possible to convert the calibrated fixed-point data to binary with the original 16-bit resolution for data logging, but with the advantage that calibration has been accounted for?

Thanks for your attention.

Roger


QUOTE (Roger Munford @ Apr 22 2009, 10:18 PM)


Hmmm. Not sure what you're asking here.

First of all, a fixed-point number is an integer scaled by a factor (12.3 can be expressed as the integer 123 scaled by 1/10). If your raw data is only 2 bytes, then your calibration data need only be 2 bytes, provided that applying the calibration doesn't push the result past the 2-byte range (65,536 values). If applying the calibration exceeds this, you will need more room (one extra byte raises the ceiling to 16,777,216 values). However, you only need to store one calibration number per input channel, so if you have 8 analogue inputs you only need 16 bytes to store the cal data for all of them. Your results data will still be 2 or 3 bytes per measurement.
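Just to make that concrete, here is a minimal sketch of what "integer plus scale factor" means (Python rather than LabVIEW, purely to show the arithmetic; the ±10 V range, scale and counts are invented numbers):

# A fixed-point value is just an integer plus an agreed scale factor.
# Hypothetical example: a +/-10 V input digitised to 16 bits.
scale = 20.0 / 65536                # volts per count (~305 uV), assumed range
offset = -10.0                      # volts at count 0, assumed
raw_count = 12345                   # this is all you log per sample: 2 bytes
volts = raw_count * scale + offset  # reconstructed later, off-line

# Only scale and offset need storing, once per channel, not once per sample.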

This is where I'm confused.

You say that if you "store the calibrated data" you have 4 bytes per measurement, but the raw data is only 16 bits (2 bytes). So either your cal data is 4 bytes and you are doing 4-byte (32-bit) arithmetic, or your cal data is 2 bytes but you are doing 4-byte arithmetic anyway.

Alternatively, are you saving the scale and offset with each result? There is no need to. You can store one scale and one offset per channel (2 bytes each, so 32 bytes for an 8-channel analogue input) and your results stay at 2 bytes each (with the caveat outlined earlier).
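Something like this is what I mean; a rough Python sketch of the stream layout, with an invented channel count and readings (here the cal values are written as 4-byte singles for simplicity, a fixed one-off cost however you choose to encode them):

import struct

# Hypothetical calibration for an 8-channel module: one scale and one
# offset per channel, written once as a header.
scales  = [20.0 / 65536] * 8        # volts per count, per channel (assumed)
offsets = [-10.0] * 8               # volts, per channel (assumed)

with open("log.bin", "wb") as f:
    for s, o in zip(scales, offsets):
        f.write(struct.pack("<ff", s, o))   # header: 8 x 8 = 64 bytes, once
    for count in (12345, 40000, 512):       # made-up raw readings
        f.write(struct.pack("<H", count))   # data: 2 bytes per sample

The point is that the per-sample cost never goes above 2 bytes; the calibration header is a fixed overhead you pay once (or once per re-calibration), not per measurement.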

Another scenario is that you are converting to a single-precision float (4 bytes), which is a waste of space since your data only has 16 bits of resolution to begin with.


QUOTE (Roger Munford @ Apr 22 2009, 02:18 PM)

I assume that the calibrated data will be continually corrected. Is it possible to convert the calibrated fixed-point data to binary with the original 16-bit resolution for data logging, but with the advantage that calibration has been accounted for?

Roger, I seem to remember from when I was using SCXI modules that the calibration (gain and offset, usually measured across an internal precision resistor) was re-adjusted differently depending on the module, so you would have to check for your specific hardware. Call NI. I was measuring very low voltages from a current shunt on an isolated channel and needed the mV values to be as accurate as possible.

For the modules I was using, the gain and offset were corrected at the start of the DAQ task. If you kept the task running (for 3 months) they would NOT be corrected in between; if you restarted, they would be. However, with the older Traditional NI-DAQ driver (not DAQmx) there was a way to manually force a re-calibration when required, though this of course interrupts your data stream while it goes and does its thing.

Also, with newer modules, R Series (or is it S Series?), the calibration is no longer two-point but a series of points with a second-order curve fit, so it's best to check with NI on your specific hardware.
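In other words, instead of a single gain and offset the device characterises each channel with polynomial coefficients, roughly like this (Python just to illustrate the two models; all coefficient values are invented):

# Two-point calibration: a straight line defined by gain and offset.
def two_point(count, gain=20.0 / 65536, offset=-10.0):
    return gain * count + offset

# Multi-point calibration: a second-order fit through several reference
# points, e.g. value = c0 + c1*count + c2*count**2 (per-channel coefficients).
def second_order(count, c0=-10.0, c1=3.05e-4, c2=1.0e-12):
    return c0 + c1 * count + c2 * count ** 2

If your device uses the second form, you would need all the coefficients per channel (not just a gain and offset) to rebuild the calibrated values from raw counts later.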

Also, once you have the calibration parameters, logging in raw mode should let you duplicate the values the DAQ device would have given you in calibrated mode.
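For a simple linear (two-point) calibration that conversion, and its inverse, is just this (illustrative Python; the gain and offset numbers are assumptions):

gain   = 20.0 / 65536   # volts per count (assumed)
offset = -10.0          # volts at count 0 (assumed)

def raw_to_volts(count):
    # Reproduce the calibrated reading from the logged 16-bit count.
    return gain * count + offset

def volts_to_raw(volts):
    # Quantise an already-calibrated value back to a 16-bit count,
    # which is what Roger asked about for logging.
    return round((volts - offset) / gain)

assert volts_to_raw(raw_to_volts(12345)) == 12345

If the calibration is a second-order fit (as above) the inverse is not a simple divide, but the idea is the same: keep the coefficients with the data and you lose nothing by logging raw counts.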

Neville.

