
Improving File Saving Rate Using the NI PCI-5114 High-Speed Digitizer Card


SINTU


To acquire data from an ultrasonic transducer at a high sampling rate of 10 MHz, a PCI-5114 high-speed digitizer (with 8 MB of onboard memory) is being used with LabVIEW 2015.

We are using a pulser-receiver to send pulses to the ultrasonic transducer. The sync out of the pulser-receiver is connected to the trigger input of the PCI card, and accordingly the trigger source in the VI is set to EXT Trigger (PFA the datasheet of the pulser-receiver).

We need to acquire the ultrasonic signals (reference signal & reflected signal) with a record length of 1000 and save these signals to a text/LVM/TDMS file every 1 ms. As the time taken for the reflected ultrasonic pulse to return to the transducer is very critical, saving these signals for post-processing is our prime requirement.

 

The attached VI is developed using the Producer/Consumer architecture, where the producer loop acquires the data from the digitizer and the consumer loop saves the data at a rate of approx. 240 files/second (our requirement is to save at a rate of 1000 files/sec; 1 millisecond for saving one file).

The acquisition is working perfectly fine, but the saving rate does not meet our current requirement.

2. Ultrasonic Sensor Acquisition & Saving_PC Arch.vi

Please give your valuable suggestions on how to improve the code so that the file saving rate can be increased.

Will increasing the buffer memory or the onboard memory of the PCI card resolve the issue?

 

I tried TDMS file saving as well, but the time scale was not saved properly.

DAQ_TDMS.vi

 


So I see that you acquire 1000 samples (not files) and then log those 1000 samples in a separate loop.  I'd recommend waiting until more data is in the buffer before attempting to write.  Those write operations are probably what is slowing you down; if you can wait until there are 10,000 samples or more in the queue and then call the log function just twice (once for time, once for the data), you will probably be better off.  That being said, I don't think I see anything really wrong with your VI, except maybe that there is a race condition on stop that might not close the TDMS file reference properly.  Wire the TDMS reference through the error case.
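For illustration, here is roughly what that batching looks like in Python (LabVIEW is graphical, so this is only a dataflow sketch; the queue element layout and the write_channel stand-in for a single TDMS Write call are assumptions, not NI-SCOPE API):

import queue

FLUSH_THRESHOLD = 10000      # accumulate this much before touching the disk

def consumer_loop(q, write_channel):
    # Drain the producer queue, but only write once enough samples have
    # accumulated. write_channel stands in for one TDMS Write call.
    time_buf, data_buf = [], []
    while True:
        item = q.get()               # blocks; loop timing follows the producer
        if item is None:             # sentinel: producer stopped
            break
        t, samples = item            # (timestamps, 1000 voltage samples)
        time_buf.extend(t)
        data_buf.extend(samples)
        if len(data_buf) >= FLUSH_THRESHOLD:
            write_channel("Time", time_buf)    # first write call
            write_channel("Data", data_buf)    # second write call
            time_buf, data_buf = [], []
    if data_buf:                     # flush the remainder before closing
        write_channel("Time", time_buf)
        write_channel("Data", data_buf)

The point is simply that the number of write calls per second drops by a factor of ten while the data rate stays the same.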


Thanks a lot for your response. I will try as you suggested. The reason for writing 1000 samples every iteration was to identify the peak shift in the ultrasonic signal along with the time values. As the record length is 1000 and each file contains only 1000 data points, it was easier to identify the peak in time.

Can you please elaborate on how to call the log function twice? I didn't get a clear picture.

15 hours ago, hooovahh said:

call the log function just twice (once for time, once for the data)

I checked; there is no race condition occurring when I use Stop as a local variable. Still, I will take it into account and wire the TDMS reference through the error case.

I even tried saving all the data after the acquisition had stopped, but then too the time values were not correct. Please help me build the VI efficiently.

Thank You

On 7/20/2017 at 1:12 PM, SINTU said:

our requirement is to save at a rate of 1000 files/sec

Quite likely this is a bad requirement, and the combination of your OS/disks is not up to it, and won't be unless you make special provisions for it - like controlling the write cache and using fast disk systems.

The way to go imho is to stream all this data into big files, with a format that enables indexed access to a specific record. If your data is fixed-size, e.g. 1000 doubles + one double as a timestamp, even just dumping everything to a binary file and retrieving it by seek & read is easy (proviso: disk writes are way more efficient if unbuffered and done an integer number of sectors at a time). TDMS, etc., adds flexibility, but at some price (which you can probably afford to pay at only 80 MB/sec and a reasonably fast disk); text is the way to completely spoil speed and compactness with formatting and parsing, with the only advantage of human readability.
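A minimal sketch of that fixed-record idea in Python, assuming the layout described above (1000 float64 samples plus one float64 timestamp; the file name is made up):

import struct

RECORD_FMT = "<d1000d"                     # 1 timestamp + 1000 samples (float64)
RECORD_SIZE = struct.calcsize(RECORD_FMT)  # 8008 bytes per record

def append_record(f, timestamp, samples):
    # One fixed-size write per record; no formatting or parsing cost.
    f.write(struct.pack(RECORD_FMT, timestamp, *samples))

def read_record(f, index):
    # Indexed access: seek straight to the record, no scanning.
    f.seek(index * RECORD_SIZE)
    fields = struct.unpack(RECORD_FMT, f.read(RECORD_SIZE))
    return fields[0], fields[1:]           # timestamp, samples

with open("scope_records.bin", "wb") as f:
    append_record(f, 0.001, [0.0] * 1000)

with open("scope_records.bin", "rb") as f:
    t0, data = read_record(f, 0)

Note that 8008 bytes is not an integer number of 512-byte sectors; padding each record to 8192 bytes would satisfy the sector-alignment proviso above.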

You say timing is critical to your post-processing; but do you post-process your data by rereading it from the filesystem, and expect to do that with low latency? Do you need to post-process your data online in real time, or offline? And you do care about timestamping your data the moment it is transferred from the digitizer into memory (which already lags behind the actual acquisition, obviously), not at the moment of writing to disk, I hope?

9 hours ago, SINTU said:

Can you please elaborate on how to call the log function twice? I didn't get a clear picture.

I was just referring to the fact that the TDMS Write function should be called as seldom as possible.  So in your case you want to call it twice: once for the Time channel and once for the Data channel.  Even if you are enqueueing 1000 elements at a time and only logging every 10,000, you should still call the TDMS Write only twice for every 10,000 samples.

9 hours ago, SINTU said:

I checked; there is no race condition occurring when I use Stop as a local variable.

There is a race condition.  If your top loop stops, it will destroy the queue.  If the bottom loop is waiting at a dequeue when this happens, an error will be generated, the file close operation won't work properly (since the TDMS reference wasn't wired through), and then you won't display the data in the Viewer either.  Pretty minor, and probably won't be seen.
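The failure mode and the fix look something like this in Python terms (the timeout stands in for the dequeue erroring out when the producer releases the queue; log_file stands in for the TDMS reference):

import queue

def consumer(q, log_file):
    try:
        while True:
            block = q.get(timeout=1.0)   # errors out (queue.Empty) if the
                                         # producer has already torn down
            if block is None:
                break
            log_file.write(block)
    except queue.Empty:
        pass
    finally:
        log_file.close()                 # runs even on error: the analog of
                                         # wiring the TDMS reference through
                                         # the error case so Close always sees it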

2 hours ago, ensegre said:

Quite likely this is a bad requirement, and the combination of your OS/disks is not up to it, and won't be unless you make special provisions for it - like controlling the write cache and using fast disk systems.

I'm pretty sure OP meant 1000 samples/sec not files.  The posted VI only creates one file.

28 minutes ago, hooovahh said:

I'm pretty sure OP meant 1000 samples/sec not files.  The posted VI only creates one file.

His TDMS attempt does indeed write a single file, but it timestamps the data when it is dequeued by the consumer loop.

His first VI, however, uses Write Delimited Spreadsheet.vi within a while loop, with a delay of 1 ms (a misconception; timing is dictated by the elements in the queue anyway) and a new filename at each iteration.

On 7/21/2017 at 4:10 PM, ensegre said:

The way to go imho is to stream all this data into big files, with a format that enables indexed access to a specific record. If your data is fixed-size, e.g. 1000 doubles + one double as a timestamp, even just dumping everything to a binary file and retrieving it by seek & read is easy (proviso: disk writes are way more efficient if unbuffered and done an integer number of sectors at a time). TDMS, etc., adds flexibility, but at some price (which you can probably afford to pay at only 80 MB/sec and a reasonably fast disk); text is the way to completely spoil speed and compactness with formatting and parsing, with the only advantage of human readability.

You say timing is critical to your post-processing; but do you post-process your data by rereading it from the filesystem, and expect to do that with low latency? Do you need to post-process your data online in real time, or offline? And you do care about timestamping your data the moment it is transferred from the digitizer into memory (which already lags behind the actual acquisition, obviously), not at the moment of writing to disk, I hope?

-- I tried writing to a binary file also, but the values are not readable.

-- The post-processing has to be done offline; that is the reason for saving the data to file.

-- I care about the timestamps of the data at the moment it is acquired from the PCI card.

Initially I tried the simplest method of directly wiring the acquired data into a write VI (text, spreadsheet, TDMS, binary, etc.) in the same loop, but the whole iteration was slowed down. Only then did I implement the Producer/Consumer architecture.

As the record length is fixed at 1000, 1000 data points are acquired in each iteration of the producer loop. How should I queue each of those 1000-point blocks to build up a buffer of 10,000 points and then save?
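One way to picture it, continuing the earlier Python sketch (fetch_record stands in for one NI-SCOPE Fetch call and is an assumption; the timestamp is captured at acquisition, per the point above):

import queue, time

q = queue.Queue()

def producer_loop(fetch_record, running):
    # fetch_record returns one 1000-point record per call.
    while running():
        samples = fetch_record()
        # Timestamp taken at acquisition, not at disk-write time.
        q.put((time.time(), samples))
    q.put(None)      # sentinel so the consumer loop can exit cleanly

The consumer from the earlier sketch then simply dequeues ten of these (timestamp, samples) elements to build its 10,000-point buffer before making its two write calls; no change to the producer's record length is needed.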

On 7/21/2017 at 6:17 PM, hooovahh said:

I was just referring to the fact that the TDMS Write function should be called as seldom as possible.  So in your case you want to call it twice: once for the Time channel and once for the Data channel.  Even if you are enqueueing 1000 elements at a time and only logging every 10,000, you should still call the TDMS Write only twice for every 10,000 samples.

I'm pretty sure OP meant 1000 samples/sec not files.  The posted VI only creates one file.

I tried this with logging every 1000 elements, but the consumer loop slowed down and there was also some data loss in saving.

On 7/21/2017 at 6:42 PM, ensegre said:

His TDMS attempt does indeed write a single file, but it timestamps the data when it is dequeued by the consumer loop.

His first VI, however, uses Write Delimited Spreadsheet.vi within a while loop, with a delay of 1 ms (a misconception; timing is dictated by the elements in the queue anyway) and a new filename at each iteration.

Putting a 1 ms delay in the consumer loop was suggested by an NI support engineer. Before that I was using a timed while loop with a period of 1 ms.


Concerns:

1. The datatype I have selected in NI-SCOPE Fetch is the waveform data type (WDT), as the help mentions that the waveform data type will also include the timestamp values. Is there a better, more appropriate datatype to select for the acquisition to happen accurately? (See the sketch after this list for reconstructing per-sample times from a waveform's t0 and dt.)

2. I came across another VI in the NI-SCOPE palette, Fetch Measurement (poly), which obtains the waveform and returns a specified measurement, while NI-SCOPE Fetch returns scaled voltage data. Which should be implemented for better acquisition?

3. The basic requirement is to save each set of 1000 data points acquired from the NI-SCOPE Fetch VI along with the time at which the data was acquired. The most important parameter for our application is time. Hence, whether it is saved after the acquisition or in parallel with the acquisition, the data and time should be saved in the file format most suitable for post-processing.
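Regarding concern 1: the waveform data type carries a start time t0 and a sample interval dt rather than one timestamp per point, so the full time axis can be reconstructed during post-processing. A minimal sketch in Python, assuming the 10 MHz rate from the original post:

# Storing (t0, dt, Y) per 1000-point record is enough to recover
# every sample's time later.
SAMPLE_RATE = 10e6            # 10 MHz, from the original post
dt = 1.0 / SAMPLE_RATE        # 100 ns between samples

def sample_times(t0, n=1000):
    # Absolute time of each of the n samples in one record.
    return [t0 + i * dt for i in range(n)]

times = sample_times(t0=0.0)  # t0 would come from the fetched waveform
assert times[1] - times[0] == dt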

Please reply with your valuable suggestions or an example VI that can meet this requirement.

Thank you

