
Buffer maximum advised size - memory issue


Recommended Posts

Hi, since this is my first message, please excuse me if it's not in the right place. I really tried to find an existing topic but couldn't find one.

This is my question:

My application buffers data (it's not a circular buffer) in a 2D array of DBL values. The array lives in a global variable so that other VIs (graphs...) can access it to display some of the buffered channels. My buffer rate is quite slow (1 record every 10 seconds), but I need to be able to display data over at least 24 hours, more if possible. Since the graphs themselves presumably also use memory, I need to define the maximum size of my array (that is, the maximum record time) as well as the history length of my graphs.

This application runs beside another one, written by a colleague, that handles the hardware acquisition and updates a shared memory. So I can't use any kind of hardware buffer.

Could you advise me about:

- the maximum size in megabytes my array can take

- the maximum number of elements an array can hold in LabVIEW (if there is a maximum)

Thanks a lot for your help


QUOTE (Manudelavega @ Jan 21 2009, 06:44 AM)

Could you advise me about:

- the maximum size in megabytes my array can take

- the maximum number of elements an array can hold in LabVIEW (if there is a maximum)

I would say that the maximum size of an array can be pretty large; it's basically limited by RAM.

The maximum size of an array is (to be confirmed) U32 resolution (2^32) for each dimension. So with a 2D array you could have 2^64 elements; that's about 18 exaelements (exa = 10^18)... way too large for any RAM (so far ;) ), even with the smallest datatype you can find (1-bit boolean).

If you want to keep a huge amount of data available, I would advise you to save it all to a single file and have your different graph UIs extract the portion you need to display and manage the "zoom" factor. What I mean by that is that when you display a full day of measurements, you surely won't need to put every data point on the graph; you could keep one point out of every five, or any other ratio you prefer... this will reduce your memory usage dramatically.
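In textual form, the idea looks roughly like this (a Python sketch, since LabVIEW is graphical; the function name and the ratio are just illustrative):

CODE

# Minimal sketch: reduce a long record to roughly the number of
# points a graph can usefully show, by keeping every n-th sample.
def decimate_for_display(samples, max_points):
    """Keep roughly max_points samples by taking every n-th one."""
    stride = max(1, len(samples) // max_points)
    return samples[::stride]

day_of_data = list(range(8640))         # 24 h at one record per 10 s
print(len(decimate_for_display(day_of_data, 500)))  # ~500 points, not 8640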


Hi Manudelavega:

Welcome to LAVA.

Your array size is pretty small: 8640 samples/channel for the 24-hour period. Even with a large number of channels you shouldn't have any problems in that regard; you could fit the whole array in memory, if there were no other problems with the approach. (If you use an array, be sure to initialize it to the size you need when you begin: since you know the final size in advance, preallocating is the most efficient approach -- see other threads on memory allocation.)
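In textual form, preallocate-then-replace looks something like this (a Python sketch standing in for LabVIEW's Initialize Array plus Replace Array Subset; the channel count is a made-up example):

CODE

import math

NUM_CHANNELS = 100                      # hypothetical channel count
RECORDS_24H = 24 * 3600 // 10           # 8640 records at one per 10 s

# Allocate the full-size buffer once, filled with NaN so untouched
# slots are obvious (NaN points simply don't plot on a graph).
buffer = [[math.nan] * RECORDS_24H for _ in range(NUM_CHANNELS)]

def store_record(record_index, values):
    """Overwrite one column: a simultaneous sample on every channel."""
    for ch, v in enumerate(values):
        buffer[ch][record_index] = v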

Trusting the computer to stay running for 24 hours is another issue. For that reason, you should probably be writing your data to a file, so you don't end up losing a day's work. The probability of having Windows crash within a day is less than it used to be, but still unacceptable if the cost of a day's experiment is at all high.

If the reading and graphing routine can afford the slight delay (it probably can), the easiest approach would be to write the file and let the other guy's display programs read the same file. If not, keep the array in memory, but also back it up to a file often.

Best, Louis


Thank you for your quick answers!

Actually, my job is only the display side. The real data are continuously stored in a file by the acquisition software, so if Windows XP crashes, the data are not lost and a report can still be generated. I'm not too concerned about that.

What I need to know is the reasonable size I can give to my buffer. Let's make the calculation for 48 hours:

- A rate of one record every 10 s means 6 records per minute, 360 per hour, and 8640 per day. So for 48 hours I need a DBL array of 100 rows and 17280 columns. From what you've told me, that's far from the limit.

- 100 x 17280 = 1,728,000 values of 8 bytes each, i.e. about 13 MB for 48 hours. I guess that's also far from the limit.

Now the graphs: I have 10 graphs with 10 channels maximum each. They require the same memory as the array: another 13 MB for 48 hours.

So in total my display system requires about 26 MB of RAM. Do you reckon that's still reasonable? We have to take into account that it will run simultaneously with the acquisition system, which is a bit heavy.
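As a quick sanity check of that arithmetic (expressed in Python for convenience):

CODE

channels = 100
records_48h = 2 * 24 * 3600 // 10       # 17280 records in 48 hours
bytes_per_dbl = 8                       # a LabVIEW DBL is 8 bytes
array_mb = channels * records_48h * bytes_per_dbl / 1e6
print(array_mb)                         # ~13.8 MB for the array; roughly
                                        # doubled once the graphs hold a copy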

So far I'm not doing any initialization: I start with an empty array and use the "Insert Into Array" function. Should I rather initialize a 100 x 17280 array and use the "Replace" function? In that case, what should the software do after 48 hours of running? (I don't think that will really happen, but we never know.)

Thanks again!

Manudelavega


I'd definitely initialize it from the start. If you contemplate running for more than 48 hours, you could implement a "Rotate 1D Array" solution when you reach your maximum array size. For a 2D array, do it for each row (channel) and use "Replace Array Subset" as you mentioned. It might not be the fastest way to go, but with a 0.1 Hz acquisition your concern is rightly memory management, not execution speed.
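For one channel, the rotate-when-full idea looks roughly like this in textual form (a Python sketch, not LabVIEW code; the function name is made up):

CODE

def append_with_rotate(channel, value, max_len):
    """Append until full; then shift everything left and overwrite."""
    if len(channel) < max_len:
        channel.append(value)
    else:
        channel[:-1] = channel[1:]      # rotate out the oldest sample
        channel[-1] = value             # replace the freed last slot
    return channel

row = []
for t in range(20):
    append_with_rotate(row, float(t), max_len=8)
print(row)                              # [12.0 ... 19.0], newest last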


QUOTE (Manudelavega @ Jan 21 2009, 10:41 AM)

So far I'm not doing any initialization: I start with an empty array and use the "Insert Into Array" function. Should I rather initialize a 100 x 17280 array and use the "Replace" function? In that case, what should the software do after 48 hours of running? (I don't think that will really happen, but we never know.)

Definitely initialize the array and use "Replace" -- much more efficient, especially for larger arrays, since growing an array with "Insert Into Array" forces repeated reallocations. (Many threads discuss this on both LAVA and the NI site.)

What to do when the array fills, if it ever does? Sort of up to you... Several approaches:

1) Blank out the whole array, start again from scratch.

2) Start at the beginning, overwrite the first sample, but leave the old data up for display, then overwrite the second sample, etc.

3) Like (2), but blank out the next few samples ahead of the one you overwrite. That way there is a scrolling blank spot between the newest sample (at the right end of the left part of the trace) and the oldest sample (at the left end of the right part of the trace).

4) Like (2), but rotate the data before you display it, so the oldest sample shows up at the left edge of the graph and the newest at the right; doing this means the display will start to scroll once it fills up. A bit more work than some of the other choices.

By the way, initializing the array and blanking out samples are an excellent place to use NaN -- Not a Number. Just type NaN into an array of data and play with it to see how it behaves -- in display and in calculations -- if you are not familiar with it.
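As a rough textual sketch of approaches (2) and (3) above, with NaN used for the blank gap (Python standing in for the LabVIEW diagram; the gap width is arbitrary):

CODE

import math

SIZE, GAP = 8640, 10                    # history length, blank-gap width
trace = [math.nan] * SIZE               # NaN-initialized: nothing plots yet
write_pos = 0

def add_sample(value):
    """Overwrite the oldest slot and blank a small gap ahead of it."""
    global write_pos
    trace[write_pos] = value
    for k in range(1, GAP + 1):         # the scrolling blank spot
        trace[(write_pos + k) % SIZE] = math.nan
    write_pos = (write_pos + 1) % SIZE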

Best, Louis


If you are working with the entire array of data, have you considered using queues? I have done some analysis of the efficiency of array manipulation for a large circular buffer and found that queues are more efficient. In addition, in LabVIEW 8.6 you can configure them to automatically drop the oldest data when you set a maximum size.
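For comparison, Python's collections.deque gives the same drop-the-oldest behaviour when you set maxlen, which is a convenient way to see the semantics (a sketch; the size is taken from the earlier posts):

CODE

from collections import deque

history = deque(maxlen=17280)           # 48 h at one record per 10 s

for t in range(20000):                  # push more records than fit
    history.append(t)                   # oldest entries drop off the front

print(len(history), history[0])         # -> 17280 2720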


Thanks to all of you. Since execution time is not really an issue and since I don't have much time, I don't think I'm going to change my code. But I'll definitely use all your advice and remarks on my next project, which is also likely to use LabVIEW.

Bye all


One thing you may want to look out for is that arrays need to live in a contiguous memory block -- that is, Windows needs not only enough spare memory in total, but it must all be in one place. This caught me out in the past: I needed a large 3D array, and although the machine I was working on had 4 GB of RAM, the largest array I could allocate in LabVIEW was about 300 MB. I know these numbers are much higher than the ones you mentioned, but it's still something to watch out for (especially on a system with a smaller amount of RAM and/or lots of programs open).

Shaun


QUOTE (Manudelavega @ Jan 21 2009, 06:44 AM)

... This array is in a global variable so that other VIs (graphs...) can access it ...

A LV Global cannot be accessed "in place", so reading the data through a global forces a full data copy at each of its instances in your program.

A better approach is to use queues or an Action Engine (also known as a Functional Global).
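Roughly speaking, an Action Engine wraps the data in a single gatekeeper so callers never grab the whole array. A loose Python analogue (a sketch only -- not how LabVIEW implements it, and the class name is made up):

CODE

class BufferEngine:
    """One owner for the buffer; callers request actions on it."""

    def __init__(self, channels, length):
        self._data = [[0.0] * length for _ in range(channels)]

    def write(self, channel, index, value):       # "write" action
        self._data[channel][index] = value

    def read_slice(self, channel, start, stop):   # "read" action
        return self._data[channel][start:stop]    # copies only a slice

engine = BufferEngine(channels=100, length=17280)
engine.write(0, 0, 3.14)
print(engine.read_slice(0, 0, 5))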

Ben


QUOTE (normandinf @ Jan 21 2009, 02:46 PM)

[...] and have your different graph UIs extract the portion you need to display and manage the "zoom" factor. What I mean by that is that when you display a full day of measurements, you surely won't need to put every data point on the graph; you could keep one point out of every five, or any other ratio you prefer... this will reduce your memory usage dramatically.

I wrote some code to do that in a project a while ago. Basically, it uses the graph reference to get the graph width in pixels and then decimates the data (not exactly decimating, but taking the min/max/median of each chunk of data ;) ) to avoid dumping a 10^9-point plot into a 200-pixel-wide graph.
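The gist of that reduction, sketched in Python (the pixel width is passed in by hand here; in LabVIEW it would come from the graph reference):

CODE

def minmax_reduce(samples, pixel_width):
    """Return ~2 points per pixel column: the extremes of each chunk."""
    chunk = max(1, len(samples) // pixel_width)
    reduced = []
    for i in range(0, len(samples), chunk):
        block = samples[i:i + chunk]
        reduced.extend((min(block), max(block)))  # spikes survive
    return reduced

print(len(minmax_reduce(list(range(8640)), 200)))  # ~400 points, not 8640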

I'm planning to clean it up and submit it to LAVAcr, but don't hold your breath... there is too much snow in the Alps for me to spend my weekends on that for a while :P

