
Data Buffering in 8.2


EJW


I am converting an old LabWindows 3 program to LabVIEW 8.2 (don't ask). I have basically started from scratch, as there are some new things our test lab would like the program to do.

The one I am somewhat stuck on, although I have thought of a couple of methods, is buffering data.

We read 16 AI channels at a rate of 1 kHz. Based on the current cycle the program is in, it writes a single (averaged) data point from each channel to a measurement file.

What I have been tasked to do: when the test fails (this is destructive testing of a part, typically after 200 hours or more), write the last 5 minutes of full-rate data (1 kHz, essentially 300K data points per channel) to a separate file so that they can see exactly how the part failed.

Any ideas on how to store 16 channels' worth of data for that many data points over that time frame?

My original idea was to use shift registers, initialize each one with 300K elements, and then use the Rotate 1D Array and Replace Array Subset functions. Maybe this is the best way, maybe not; I could sure use some input!
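Roughly, in Python-style pseudocode (the real code would be a LabVIEW diagram; the buffer size and names here are just illustrative):

```python
# Rough sketch of the rotate-and-replace idea (illustrative only).
BUFFER_SIZE = 300_000             # 5 minutes at 1 kHz, per channel

buffer = [0.0] * BUFFER_SIZE      # the initialized shift register

def add_block(buffer, new_samples):
    n = len(new_samples)
    if n == 0:                         # nothing read this iteration
        return buffer
    buffer = buffer[n:] + buffer[:n]   # Rotate 1D Array
    buffer[-n:] = new_samples          # Replace Array Subset at the tail
    return buffer
```

One thing to note about this sketch: the rotate step copies the whole 300K-element array on every iteration.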

Thanks!


QUOTE(EJW @ Feb 19 2007, 07:49 PM)

What I have been tasked to do: when the test fails (this is destructive testing of a part, typically after 200 hours or more), write the last 5 minutes of full-rate data (1 kHz, essentially 300K data points per channel) to a separate file so that they can see exactly how the part failed.

I think I would stream all the readings into a file, maybe one file per hour, and keep deleting the oldest file so you always have at least the last two present on disk. After the test ends, process the last file(s), extracting the last 5 minutes of data.
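A sketch of that rotation scheme, assuming some read_block routine that returns raw bytes from the DAQ driver (the names and the one-hour window are illustrative):

```python
# Sketch of hourly file rotation: keep only the newest couple of files.
import os
import time

def stream_with_rotation(read_block, directory, keep=2):
    old_files = []
    while True:
        path = os.path.join(directory, "raw_%d.bin" % int(time.time()))
        with open(path, "wb") as f:
            hour_start = time.time()
            while time.time() - hour_start < 3600:   # one file per hour
                f.write(read_block())                # raw block from the DAQ
        old_files.append(path)
        while len(old_files) > keep:                 # delete the oldest file
            os.remove(old_files.pop(0))
```

The end-of-test step (pulling the last 5 minutes out of the newest file or two) is left out of the sketch, as is the loop's stop condition.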

-Mikkel :)


QUOTE(EJW @ Feb 19 2007, 12:49 PM)

My original idea was to use shift registers, initialize each one with 300K elements, and then use the Rotate 1D Array and Replace Array Subset functions. Maybe this is the best way, maybe not; I could sure use some input!

I initially thought you could use a queue, but I forgot that fixed-size queues aren't lossy. You could use a queue, preallocate it, flush it, then start putting elements into it. You'd have to check if the queue was full, and if so dequeue an element before putting the next one in.
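The pattern, sketched with a Python deque standing in for a LabVIEW queue (the size is illustrative):

```python
# Sketch of the lossy-queue pattern: check if full, dequeue, then enqueue.
from collections import deque

MAX_ELEMENTS = 300_000            # 5 minutes at 1 kHz

def lossy_enqueue(q, element):
    if len(q) >= MAX_ELEMENTS:    # queue full: throw away the oldest element
        q.popleft()
    q.append(element)             # then put the new one in

q = deque()                       # the preallocated, flushed queue
```

(A deque created with maxlen does the discarding automatically; the explicit check mirrors the check-then-dequeue sequence described above.)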

I wouldn't use the Rotate 1D Array function since it reallocates the array. You could use an array in a shift register and an index to the next data point. Basically, create an actual circular-buffer data structure.
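That replace-in-place alternative, in the same sketch style (again illustrative, not LabVIEW code):

```python
# Sketch of the array-plus-index alternative: nothing is rotated; each new
# sample just replaces one element in place (size is illustrative).
BUFFER_SIZE = 300_000

buffer = [0.0] * BUFFER_SIZE      # the array carried in the shift register

def store(buffer, index, sample):
    buffer[index] = sample                 # Replace Array Subset
    return (index + 1) % len(buffer)       # index to the next data point
```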


QUOTE(ned @ Feb 19 2007, 05:35 PM)

Here's an implementation of this that I wrote, extracted from a larger VI. It stores all channels of data as a one-dimensional array. When your trigger condition occurs, you'll need to do a rotate array (or some other trick) to get the segment of data you want, and then reshape array to get it back into two dimensions. I don't think this makes unnecessary memory allocations, and I'd welcome comments and suggestions.

While this VI looks like it would work, I don't understand why you're converting to a 1-D array. If it were me, I'd store all my internal data in a large 2-D array (channels x samples). Then, when you grab the data from the AI Read, do a Replace Array Subset with the 2-D data, putting it in the correct places. (And this way you get rid of the extra buffer allocation for 'Reshape 1D Array'.)
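The same sketch style with a 2-D (channels x samples) buffer; each new scan replaces one column in place (channel count and depth are illustrative):

```python
# Sketch of the 2-D layout: one row per channel, new scans written in place.
N_CHANNELS = 16
DEPTH = 300_000                   # 5 minutes at 1 kHz

buf = [[0.0] * DEPTH for _ in range(N_CHANNELS)]

def store_scan(buf, col, scan):   # scan holds one new reading per channel
    for ch, value in enumerate(scan):
        buf[ch][col] = value      # Replace Array Subset on row ch
    return (col + 1) % DEPTH      # next column, wrapping at the end
```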

However, keep in mind that this will be a lot of data in memory. 300K points (5 minutes at 1 kHz) for 16 AI channels in DBL precision = 38.4 MB of data. If anything happens to the computer while acquiring the data, the entire set is lost. You may want to consider offloading to disk once in a while and maintaining a circular buffer on disk.


QUOTE(ned @ Feb 19 2007, 05:35 PM)

Here's an implementation of this that I wrote, extracted from a larger VI. It stores all channels of data as a one-dimensional array. ...

The VI looks like it handles one channel OK, but what about all 16 channels? (Unless I am not following the VI correctly.) I would think this would be kind of messy even if most of that code were in a For Loop to process each channel. The time it takes to run the loop 16 times might make an impact on performance.

This is where I was stuck: 1 or 2 channels isn't so bad, but 16 is a lot of data to manipulate and store. I'm not so worried about the computer crashing before the test ends and losing the buffer, as long as it doesn't crash in the last 5 minutes before failure!


QUOTE(EJW @ Feb 20 2007, 09:25 AM)

The VI looks like it handles one channel OK, but what about all 16 channels?

This handles multiple channels just fine. The reason for converting to a 1-D array is that "Split Array" only works on 1-D arrays, and I use that to split the incoming data when it "wraps around" the end of the buffer. Here's one way you could use this: let your end-of-test condition stop the while loop. Outside the while loop (after the test completes), take the value from the shift register (the entire buffer). Rotate the array based on the index value in the other shift register so that the beginning of your 5-minute period is at the beginning of the array. Reshape the array back to 16 channels and you have your 5 minutes of data.

EDIT: Here's the algorithm:

1. The DAQmx task is set up for continuous acquisition, so DAQmx Read returns an unknown number of samples.
2. Transpose the array of samples, then reshape it to a 1-D array.
3. Subtract the current location in the buffer (the integer shift register) from the total buffer size, and split the array of new samples at that point.
4. Normally the first array contains all the data and the second array is empty; but if the current location is near the end of the buffer, we need to wrap around and write the remaining data to the beginning of the buffer.
5. The new buffer location is the previous location plus the number of new samples, unless we wrote data to the beginning of the buffer, in which case it is the number of samples written there.

Try this with a 2-D array and you'll find it's harder to split the buffer in the right place.
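Steps 3-5, sketched in Python-style pseudocode (buffer length and names are illustrative; 'samples' stands for the transposed, reshaped 1-D block from step 2):

```python
# Sketch of the split-at-the-wrap-point write described above.
TOTAL = 16 * 300_000              # all 16 channels interleaved in one 1-D array

buffer = [0.0] * TOTAL

def write_block(buffer, location, samples):
    room = len(buffer) - location                 # space left before the end
    head, tail = samples[:room], samples[room:]   # Split 1D Array at that point
    buffer[location:location + len(head)] = head
    if tail:                                      # wrapped: write the remainder
        buffer[:len(tail)] = tail                 # at the start of the buffer
        return len(tail)                          # new buffer location
    return location + len(head)                   # no wrap: just advance
```

After the test stops, buffer[location:] + buffer[:location] puts the oldest sample first, ready to reshape back into 16 channels.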


QUOTE(ned @ Feb 20 2007, 09:48 AM)

EDIT: Here's the algorithm: The DAQmx task is set up for continuous acquisition, so DAQmx Read returns an unknown number of samples. ...

I see where you're going with this. I'll have to give it a try in a mini VI to see what happens, especially when I dump the data to a file.

I've just begun the conversion of the old LabWindows program, and this segment will be a ways down the road, but I knew I'd better look into it early so I get the code added in the right place at the right time.

Thanks for all your help.

I'll stay open to other ideas if anyone has them; in the meantime I have ned's and mine to work through. Thanks, everyone.


There's a tip I read about on NI's forums that sounded like a simple quick-and-dirty solution, but I haven't benchmarked it on large datasets, so I can't really vouch for it.

Anyhow, here goes. The 'Waveform Chart' UI indicator has lossy circular buffering built in. The tip is to hide the control so the CPU never has to draw the data, just buffer it. The "History[]" property can be used to read the data back out. I really don't know how well it would work for a dataset your size.

-Kevin P.


QUOTE(Kevin P @ Feb 21 2007, 09:21 AM)

The 'Waveform Chart' UI indicator has lossy circular buffering built in. The tip is to hide the control so the CPU never has to draw the data, just buffer it.

Quick question: I have seen it mentioned several times, but what is "lossy" circular buffering as opposed to "not lossy"?


Typically, when doing streaming to disk, you want to append the data to a file as you collect it inside the loop, not after the collection. This means you don't have to build up a large array. That's my opinion. Write everything to disk all the time, and after the test is over, extract the last 5 minutes you want.

Also, why are you assuming 16 nodes of data collection? Normally you can sweep all 16 channels with one DAQ node. Have you considered TDMS files? They are optimized for high-speed disk streaming (NI claims). Anyway, just my 2 cents' worth. In the end you're the one who has to get it working...
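The append-as-you-go loop, sketched with a plain binary file standing in for TDMS (read_scan and test_running are made-up stand-ins for the DAQ read and the loop condition):

```python
# Sketch of appending to a file inside the acquisition loop.
import struct

def acquire(read_scan, test_running, path):
    with open(path, "ab") as f:          # append as the data arrives
        while test_running():
            scan = read_scan()           # one new reading per channel
            f.write(struct.pack("%dd" % len(scan), *scan))
```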


QUOTE(Michael_Aivaliotis @ Feb 22 2007, 11:32 AM)

Write everything to disk all the time, and after the test is over, extract the last 5 minutes you want. ... Have you considered TDMS files? They are optimized for high-speed disk streaming (NI claims).

He's talking about collecting 16 channels of data at 1 kHz for 200 hours. By my quick calculation, that would be an 87 GB data file. I'm sure that's not impossible, but it probably opens up entirely new avenues of challenges. Does TDMS perform any compression on the data it streams to disk?
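For what it's worth, the arithmetic behind that figure: 16 channels x 1,000 samples/s x 8 bytes (DBL) = 128 kB/s, and 200 hours is 720,000 s, so about 92 GB of raw data (roughly 86 GiB), in the same ballpark as the 87 GB estimate.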


QUOTE(eaolson @ Feb 22 2007, 07:47 PM)

By my quick calculation, that would be an 87 GB data file.

As described in post #2, you could make several data files and delete the older files when they become obsolete.

This would solve the 87 GB problem...

-Mikkel :)


QUOTE(eaolson @ Feb 22 2007, 01:47 PM)

By my quick calculation, that would be an 87 GB data file.

This is true. The typical test is 200-250 hours long (a really big file if streamed!). I really only need the last 5 minutes of data at the full 1 kHz rate. I could also probably rotate my data through a 2-D array with just one shift register, which is what it may come down to. I am just trying to get a feel for what may work best. I really don't want to stress the hard drive with all that streaming.

QUOTE(eaolson @ Feb 22 2007, 10:47 AM)

By my quick calculation, that would be an 87 GB data file.

That's not exactly what I meant. The OP does not specify what the sampling rate is for the 200 hours, by the way. In any case, you would continue to save data at the slow rate to file A, but you would also have a file B that always contains the last 5 minutes of data at the high-speed rate, basically creating a buffer on disk. Appending to a file is faster than you might think (OS disk caching is pretty smart and writes in chunks anyway). Also, there is the chance that the application will hang or even crash when you try to save a large file of data from memory at the end. You might even run out of physical memory, and the OS will resort to swapping, which has its own slowdown problems.
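A sketch of file B as a fixed-size ring on disk (the record layout and sizes are illustrative):

```python
# Sketch of an on-disk circular buffer: the write position wraps within a
# fixed-size file instead of growing it.
import struct

SCAN = struct.Struct("16d")       # one scan: 16 DBL values, 128 bytes
SLOTS = 300_000                   # 5 minutes at 1 kHz

def write_scan(f, slot, scan):
    f.seek((slot % SLOTS) * SCAN.size)   # wrap within the fixed-size file
    f.write(SCAN.pack(*scan))
    return slot + 1
```

The file would be pre-sized to SLOTS records and opened once in read/write binary mode; after the test, the record at slot % SLOTS is the oldest, so reading from there around to the write position yields the 5 minutes in chronological order.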