
Queue Memory Management



I need some help with queue memory management. In my current project, it seems as though if a queue gets backed up, it grabs memory (obviously) but then after the dequeue process catches up, the memory is not released.

I have an application that uses several queues to pass data around. One of these queues receives processed data, and in another loop the data is dequeued and saved to disk. I am using a 1TB disk that is almost full and quite fragmented (purposefully). The closer I get to the disk being full, the slower the writes to disk happen. My write data queue gets backed up and starts gobbling up memory. Each element of the queue is over 1.5MB, so it doesn't take much of a backup to use up a lot of memory.

All of this is understandable. What I don't get is that if the disk manages to catch up and the queue empties, all that memory that was grabbed to store the queue (I assume) is still taken. This becomes a real problem if more than a few of these slow disk cycles occur and it eats up all the memory in my machine.

I spent a couple hours on the web researching, and while this issue has been raised before, I couldn't find an answer. I tried searching here on queue implementation, and got over 3000 hits. Many of those turned out to be unrelated posts from Aristos Queue...

Any thoughts on this, or pointers to where to look for pertinent info?

Cat


Once a queue grabs space, it does not deallocate it. That is intended, designed behavior.

Hey! A queue-related post!

So, I'm assuming: 1) just flushing the queue isn't going to deallocate the memory, and 2) destroying the queue will deallocate the memory.

Are both of these assumptions correct?

So, I'm assuming: 1) just flushing the queue isn't going to deallocate the memory, and 2) destroying the queue will deallocate the memory.

Are both of these assumptions correct?

Both assumptions are correct, but there is something you can do with respect to assumption #1 depending upon your particular queue.

Suppose you have a queue of arrays of 32-bit integers. The queue is filled by 20 arrays each of 100 elements. If you flush the queue, the queue will simply note that its current buffer index is zero and the length is zero, but no elements will actually be deallocated. No help there. But if you flush the queue and then enqueue 20 empty arrays, the bigger arrays in the queue buffer will be deallocated, giving you roughly 8,000 bytes of memory back (20 arrays × 100 elements × 4 bytes), but the queue buffer itself, 20 × (size of a pointer, 4 bytes on a 32-bit machine), won't be deallocated unless the queue is destroyed.
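Since a LabVIEW diagram can't be pasted as text here, a rough Python analogy of that behavior might look like the sketch below. The QueueModel class and its fields are invented purely for illustration; this is not how the queue primitives are actually implemented internally.

class QueueModel:
    """Toy model: the queue owns a fixed 'pointer buffer' of slots."""
    def __init__(self, max_size):
        self.buffer = [None] * max_size   # one slot per element, kept for
        self.count = 0                    # the lifetime of the queue

    def enqueue(self, element):
        # Overwriting a slot drops the reference to whatever was there,
        # so that old element's memory can be reclaimed.
        self.buffer[self.count] = element
        self.count += 1

    def flush(self):
        # Only the bookkeeping is reset; the slots still reference the
        # old elements, so their memory is NOT released.
        self.count = 0

# Fill with 20 arrays of 100 32-bit integers (about 400 bytes each).
q = QueueModel(20)
for _ in range(20):
    q.enqueue([0] * 100)

q.flush()                # indices reset, big arrays still held by the slots
for _ in range(20):
    q.enqueue([])        # empty arrays displace the big ones, freeing roughly
                         # 8,000 bytes; the 20-slot buffer itself stays
                         # allocated until the queue is destroyed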


A few ideas:

A) If you have many chunks of data, why don't you store them in a repository (a loop holding an array of clusters of the data chunks) and just pass the index of the data chunk through the queue? That way the queue won't grow as fast, and the repository can release the memory once the data is written to disk. (A rough sketch of this pattern appears after idea C below.)

B) Perhaps it helps to cluster the data (e.g. a cluster containing a 2D array of DBL instead of a bare 2D array of DBL) before writing it to the queue. I could imagine that the cluster is stored in the queue as a pointer, so the memory for the data could be released while only the space for the pointers remains in the queue; the pointers would use far less space.

C) Buy another Hard disk ;-)
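
Here is a minimal Python sketch of the repository idea from A), assuming a producer that acquires data chunks and a consumer that writes them to disk. In LabVIEW the repository would typically be a functional global or similar; the names and structure here are invented for illustration.

import queue

repository = {}              # index -> data chunk, held outside the queue
next_index = 0
work_queue = queue.Queue()   # carries only small integer indices

def store_and_enqueue(chunk):
    """Producer side: park the chunk in the repository, send only its index."""
    global next_index
    idx = next_index
    next_index += 1
    repository[idx] = chunk
    work_queue.put(idx)

def write_worker(outfile):
    """Consumer side: look up the chunk, write it, then release it."""
    while True:
        idx = work_queue.get()
        if idx is None:                        # sentinel value shuts the loop down
            break
        outfile.write(repository.pop(idx))     # pop() drops the reference,
                                               # releasing the chunk's memory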


Suppose you have a queue of arrays of 32-bit integers. The queue is filled by 20 arrays each of 100 elements. If you flush the queue, the queue will simply note that its current buffer index is zero and the length is zero, but no elements will actually be deallocated. No help there. But if you flush the queue and then enqueue 20 empty arrays, the bigger arrays in the queue buffer will be deallocated, giving you roughly 8,000 bytes of memory back (20 arrays × 100 elements × 4 bytes), but the queue buffer itself, 20 × (size of a pointer, 4 bytes on a 32-bit machine), won't be deallocated unless the queue is destroyed.

Having a little trouble following this. Let's see if I have it right.

A queue is an array of memory pointers. When you enqueue, memory is grabbed for the data, and it's grabbed for the pointer. As more elements are put into the queue, memory allocations have to occur for both the pointer array and for the data.

When a flush occurs, LabVIEW runs through the pointer array, grabs the data that each element is pointing to, and packages everything up and puts it out on a wire. It then sets a write pointer and read pointer (or something similar) back to zero, but doesn't actually do anything with the memory.

Here's where I get confused:

The next time data is enqueued, there will be no memory allocations necessary for either the pointers or the data, so long as you don't enqueue more data than was there before the flush?

Now, with what you described, enqueueing the empty data array deallocates the memory space associated with the data. First, how is that different from subsequent enqueues with different data length? Is there actually memory management going on whenever an enqueued element is of different length than the element that was previously at that queue location?

Second, wouldn't your recommendation to Cat defeat the purpose of the flush not automatically deallocating?

A while ago, Ben made a suggestion of preallocating queues by enqueueing a bunch of data, then flushing it. I'm beginning to get the impression that unless each queue element is the same size every time, this only preallocates the pointer array, not the actual data space. Is that true?


Gary,

Good questions.

I have one to follow it up as well that I'm curious about.

I've always expected Single Element Queues to be the most memory-efficient and fastest way of getting information from point A to point B by reference.

And that if you modify that element by DeQ, modify, EnQ, you will be doing an in-place memory operation. This thread makes me think otherwise, unless the compiler is as smart as we hope.

AQ?


I've always expected Single Element Queues to be the most memory-efficient and fastest way of getting information from point A to point B by reference.

And that if you modify that element by DeQ, modify, EnQ, you will be doing an in-place memory operation. This thread makes me think otherwise, unless the compiler is as smart as we hope.

I'm going to guess that the answer is yes, provided that the element is the same size as it was before. Just a guess, though.


I'm a bit busy putting together NI Week presentations about OO features at the moment, but if I get time next week, I'll try to write up a big summary document of queues and notifiers, pulling together all the various parts I've posted over the years.

Thank you. I (and I'm sure many others) would appreciate that.

If you're taking requests, I would love to learn more about queue operation/performance/optimization when passing relatively large chunks of data. My queue elements are smaller than the 1.5MB that Cat mentioned, but I feel like my application is somewhat similar to hers. Much of what I've read previously on LAVA seemed to concentrate on queues for messaging rather than data passing.

Thanks,

Gary


[...]

Second, wouldn't your recommendation to Cat defeat the purpose of the flush not automatically deallocating?

That's what I'm hoping.

I've coded up a VI that:

1) reads the number of elements remaining in the queue

2) flushes the queue

3) creates an empty array, size of remaining elements

4) enqueues array

5) flushes the queue

I added step 5 in order to get those empty elements out of the queue, since I'm going to need the queue again on the next data run. I'll only call this at the end of a data run, since I don't want empty data getting into the recorded data stream (yes, I could check for it, but I'm attempting to disturb the "real" code as little as possible). One of the issues with this design is that it's very possible that the number of elements in the queue when I read it is not the maximum number that was ever enqueued. I would need to carry along the maximum value to implement this right.
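
Written out as a rough Python-flavoured sketch (purely illustrative; the queue object here is assumed to expose count/flush/enqueue like the toy model earlier in the thread, and in LabVIEW these steps map to Get Queue Status, Flush Queue, and Enqueue Element):

def trim_queue(q):
    """Flush the queue and displace its large elements with empty ones."""
    n = q.count          # 1) number of elements currently in the queue
    q.flush()            # 2) flush: indices reset, element memory still held
    for _ in range(n):   # 3)+4) enqueue n empty elements so the large
        q.enqueue([])    #        ones become unreferenced and can be freed
    q.flush()            # 5) flush again so no empty data is left in the
                         #    queue for the next data run

As noted above, n here is only the queue depth at the moment of the trim, not the maximum depth the queue ever reached.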

BUT, my real problem is being created by my disk not being able to keep up with my data rate. I am saving data in both its raw form (data and lots of status info about the data packet) and in partially processed form, in order to decrease the post-processing time. This amounts to streaming to disk at somewhere in the neighborhood of 56MB/s. My disk can only keep up with that until it's about half full. My fallback position is to just save the raw data and spend more time post-processing after the fact. That cuts my write rate almost in half.

I wrote a little VI to verify to myself that LV does indeed reuse the memory allocated for the queue. As I said in my original post, this doesn't seem to be happening in my project code. The queue backs up, lots of memory is allocated, the queue empties, no memory is deallocated, the queue starts to back up again and more memory is immediately allocated. Since this whole process also involves writing to disk, I'm wondering at this point if it doesn't have something to do with buffering before the write to disk. But that question would probably be for another post...

Thanks for all the input, everyone, and AQ, I'm looking forward to whatever summary info you can give us on queues and notifiers.

Cat


This amounts to streaming to disk at somewhere in the neighborhood of 56MB/s. My disk can only keep up with that until it's about half full.

I haven't done any exhaustive testing (like filling up the disk), but we've gotten good write speeds using a 2-disk SATA RAID. Is something like that an option?

Gary


I haven't done any exhaustive testing (like filling up the disk), but we've gotten good write speeds using a 2-disk SATA RAID. Is something like that an option?

My platform is a laptop with one eSATA port -- so only 1 drive. For rev2 I might be able to look at getting an ExpressCard SATA card with a couple ports and trying a RAID setup. Or even just 2 disks -- 1 to archive the raw data, and the other for the partially processed data. For the moment, I'm probably going to save the raw data to the eSATA drive, off-load it during down time to a USB drive, and do all of the post-processing there.

Thanks for your input.


[Attached image: test block diagram (post-7603-124754561949_thumb.png)]

I just ran some quick testing using this block diagram and monitoring the memory allocation using Process Explorer. I discovered that during the second loop when empty arrays are queued, LV won't start deallocating memory until the second iteration, after which one element's worth of memory is deallocated with each iteration. It looks like LV keeps a buffer around in case you queue up another element. When the second loop is finished there is still one element worth of memory (30 MB) allocated for the queue.

[Attached image: post-7603-12475456368_thumb.png]

That's good news for SEQs -- it looks like they are in-place operations. I wonder if this strategy was adopted as a way to minimize the number of memory (de)allocations or if it was designed to make SEQ operations more efficient.

Incidentally, if you need every last ounce of memory and want to get rid of that large buffer, write one extra empty element to the queue. Putting 8 empty arrays on the queue instead of 7 gave me this:

[Attached image: post-7603-124754564715_thumb.png]


My platform is a laptop with one eSATA port -- so only 1 drive. For rev2 I might be able to look at getting an ExpressCard SATA card with a couple ports and trying a RAID setup.

There are a bunch of eSATA external RAIDS listed on Newegg. I don't have any personal experience with them, but it looks like some use a single eSATA port. Based on the Newegg reviews, it also looks like you get what you pay for.

  • 2 weeks later...

Just another thought, because I am working on an application where the data chunks are about the same size as yours: I am acquiring 600 images of SGL data type. I write each image to a temporary directory immediately after acquisition, each image in a separate file that is closed right away. I do it this way because it reduces the risk of losing all the images if the application gets stuck somewhere. After the acquisition process finishes, the separate files are combined into one file.

If you are low on memory compared to the amount of data, you should avoid holding all of the data in memory, because Windows will punish you with intensive use of the swap file, dropping throughput almost to zero.
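
A minimal Python sketch of that per-file approach (acquire_image is a hypothetical stand-in for the acquisition step, and the file names and layout are invented for illustration):

import os
import shutil
import tempfile

def acquire_to_temp_files(acquire_image, n_images):
    """Write each acquired chunk to its own file as soon as it arrives,
    so at most one chunk is held in memory and a crash loses little."""
    tmp_dir = tempfile.mkdtemp(prefix="acq_")
    paths = []
    for i in range(n_images):
        img = acquire_image()                    # assumed to return bytes of one image
        path = os.path.join(tmp_dir, f"img_{i:04d}.bin")
        with open(path, "wb") as f:              # opened and closed per image
            f.write(img)
        paths.append(path)
    return paths

def combine(paths, final_path):
    """After acquisition, stitch the temp files into one final file."""
    with open(final_path, "wb") as out:
        for p in paths:
            with open(p, "rb") as f:
                shutil.copyfileobj(f, out)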

Regards

  • 2 weeks later...

BUT, my real problem is being created by my disk not being able to keep up with my data rate. I am saving data in both its raw form (data and lots of status info about the data packet) and in partially processed form, in order to decrease the post-processing time. This amounts to streaming to disk at somewhere in the neighborhood of 56MB/s. My disk can only keep up with that until it's about half full. My fallback position is to just save the raw data and spend more time post-processing after the fact. That cuts my write rate almost in half.

Sounds like a good excuse to get a SSD. :yes:


Wouldn't that be nice!

Maybe as soon as they are available in 1 TB versions. For under $150.

I'm not holding my breath. :)

250 GB SSD for about $1000 is about 2 man days ($ for $). Ya just have to convince the powers that be that it will take you more than 8 days to find a solution and code around the drive limitation (and also throw in that even then it may not work..... risk :P), and highlight that it will cut your delivery timescale by 2 weeks. Be creative.


250 GB SSD for about $1000 is about 2 man days ($ for $). Ya just have to convince the powers that be that it will take you more than 8 days to find a solution and code around the drive limitation (and also throw in that even then it may not work..... risk :P), and highlight that it will cut your delivery timescale by 2 weeks. Be creative.

Interesting idea! However, I'd have to get 4 SSDs to replace my 1TB drive. And I'm delivering the system with 2 drives...

I may go ahead and buy an SSD just to see if they're as fast as they say they are. My *real* application (the Big Project) would need to replace 70 TB worth of drives, about 6 times a year. I don't think that's happening any time in the near future! I'd have a hard time justifying that to my mother when she asks me if I've been spending her tax dollars wisely. :)


Interesting idea! However, I'd have to get 4 SSDs to replace my 1TB drive. And I'm delivering the system with 2 drives...

I may go ahead and buy an SSD just to see if they're as fast as they say they are. My *real* application (the Big Project) would need to replace 70 TB worth of drives, about 6 times a year. I don't think that's happening any time in the near future! I'd have a hard time justifying that to my mother when she asks me if I've been spending her tax dollars wisely. :)

Your big project is for your mother? :blink:

I'm working on justifying something like this :)


Your big project is for your mother? :blink:

I work for the American taxpayer. She just happens to be the one who cares enough to ask. :)

I'm working on justifying something like this

Wow! That could even handle my aggregate data rate of 300+ MB/s. Wow again!

I think that we will at some point in the future go to SSD, if only because of durability. Zeros and ones tend to fall off of regular drives if you drop them, and losing data from a several-hundred-thousand-dollar test is a Bad Thing. SSDs, at least the ones currently out on the market, can handle that sort of thing. I think I'll need to get a trampoline for the office though, so I can thoroughly test them.

