
Queue Memory Management


Recommended Posts

  • 2 weeks later...

So I'm guessing using queues for applications that run 24 hrs a day is probably a bad idea.... I've written programs for controlling processes that would buffer up a bunch of data in a queue, analyze it, and adjust control parameters online, then flush the queue. The whole time I'd assumed this memory was freed up, but instead I created a huge memory leak. Awesome :thumbup1: . Definitely going to go about it some other way next time....


> So I'm guessing using queues for applications that run 24 hrs a day is probably a bad idea.... I've written programs for controlling processes that would buffer up a bunch of data in a queue, analyze it, and adjust control parameters online, then flush the queue. The whole time I'd assumed this memory was freed up, but instead I created a huge memory leak. Awesome :thumbup1: . Definitely going to go about it some other way next time....

Have you ever actually had a problem with your queues in the past? I think you've misunderstood something here. There is no reason not to use queues in applications that run continuously. You're not creating a memory leak since LabVIEW hasn't lost track of that memory; you've just "reserved" that memory for the next time that data needs to be put into the queue. Unless you actually need that memory back immediately for some other purpose, there's no problem.
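The same pattern exists outside LabVIEW; a C++ std::vector, for instance, keeps its capacity after clear(). A minimal sketch of "memory held in reserve, not leaked" (sizes illustrative):

```cpp
#include <cstdio>
#include <vector>

int main() {
    // One large "acquisition" fills the buffer and forces an allocation.
    std::vector<double> buffer(1000000, 0.0);
    std::printf("capacity before clear: %zu\n", buffer.capacity());

    // "Flush" the buffer: the size drops to zero...
    buffer.clear();
    // ...but the capacity (the reserved storage) is unchanged, ready
    // for the next fill. Nothing is lost track of, so nothing leaks.
    std::printf("capacity after clear:  %zu\n", buffer.capacity());
    return 0;
}
```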


> Have you ever actually had a problem with your queues in the past? I think you've misunderstood something here. There is no reason not to use queues in applications that run continuously. You're not creating a memory leak since LabVIEW hasn't lost track of that memory; you've just "reserved" that memory for the next time that data needs to be put into the queue. Unless you actually need that memory back immediately for some other purpose, there's no problem.

Yes, you're right, memory leak isn't the right term for it. If I had had problems, I wouldn't have known it. But sometimes it has to acquire a lot of data, sometimes less. So the amount of memory that's been allocated is governed by the largest data set acquired. Not really a big deal, just doesn't sit well with me.


> So I'm guessing using queues for applications that run 24 hrs a day is probably a bad idea.... I've written programs for controlling processes that would buffer up a bunch of data in a queue, analyze it, and adjust control parameters online, then flush the queue. The whole time I'd assumed this memory was freed up, but instead I created a huge memory leak. Awesome :thumbup1: . Definitely going to go about it some other way next time....

I've used queues in similar ways in applications that ran all day and had no trouble with them. I did suffer from a few memory leaks early on in the application development, but they had nothing to do with the queues.

> Yes, you're right, memory leak isn't the right term for it. If I had had problems, I wouldn't have known it. But sometimes it has to acquire a lot of data, sometimes less. So the amount of memory that's been allocated is governed by the largest data set acquired. Not really a big deal, just doesn't sit well with me.
It's a question of reallocation on every call vs. leaving the allocation in place to maximize performance. If the data can occasionally ramp up to "large number X", and you have enough RAM, then it's a good idea to just leave a large block always allocated for that large data event, no matter how rare it is. The larger the desired block, the longer it takes to allocate -- memory may have to be compacted to get a block big enough.

Also, keep in mind that we're talking about the space for the top-level data, not the data itself. So if you enqueue three 1-megabyte arrays and then flush the queue, the queue is just keeping allocated the 12 bytes needed to store the array handles, not the arrays themselves.
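A hypothetical C++ sketch of that top-level layout (not LabVIEW's actual implementation): the queue's own storage is just a small array of handles, and a flush releases the payloads while the handle slots stay allocated.

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

// Hypothetical handle queue: the queue owns only an array of
// pointers; the element payloads live elsewhere.
struct HandleQueue {
    std::vector<double*> handles;

    void enqueue(std::size_t count) {
        handles.push_back(static_cast<double*>(
            std::malloc(count * sizeof(double))));
    }

    void flush() {
        for (double* h : handles)
            std::free(h);   // the megabyte payloads are released
        handles.clear();    // the pointer slots keep their capacity:
                            // 3 handles = 12 bytes on a 32-bit build
    }
};

int main() {
    HandleQueue q;
    for (int i = 0; i < 3; ++i)
        q.enqueue(131072);  // three ~1 MB arrays of doubles
    q.flush();              // payloads freed; handle array retained
    return 0;
}
```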

Wow! That could even handle my aggregate data rate of 300+ MB/s. Wow again!

I think that we will at some point in the future go to SSDs, if only because of durability. Zeros and ones tend to fall off of regular drives if you drop them, and losing data from a several-hundred-thousand-dollar test is a Bad Thing. SSDs, at least the ones currently out on the market, can handle that sort of thing. I think I'll need to get a trampoline for the office though, so I can thoroughly test them.

Off topic, but Tom's Hardware recently crushed the performance of the video you saw: 3.4 GB/s.

Sorta on topic: be very careful about which brand and model of SSD you buy. There are a ton of useless drives that shouldn't even be sold. The good ones, like Intel, Samsung, and OCZ, offer insane performance compared to the world of spinning disks. I have an OCZ Vertex 250 GB drive. With a 512 KB block size (reasonable for your data acquisition I/O), the drive averaged 145 MB/s reads and 185 MB/s writes. It could keep that up all day since the block size you are writing is greater than or equal to the internal flash block size.
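If you want to sanity-check a drive at that block size, a simple sequential-write loop gives a rough number. A C++ sketch (assumptions: the file name and total size are illustrative, and OS write caching will flatter the result unless the total is large or the cache is bypassed):

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t block  = 512 * 1024;   // 512 KB per write
    const std::size_t blocks = 2048;         // 1 GB total
    std::vector<char> buf(block, 0x5A);      // dummy payload

    std::FILE* f = std::fopen("bench.dat", "wb");
    if (!f) return 1;

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < blocks; ++i)
        std::fwrite(buf.data(), 1, block, f);
    std::fclose(f);  // include the final flush in the timing
    double secs = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - t0).count();

    std::printf("%.1f MB/s\n",
                (block * blocks) / (1024.0 * 1024.0) / secs);
    return 0;
}
```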

  • 1 month later...

I just noticed that in LV8.6, a flush of a single-element queue seems to be considerably faster than a preview. Is this expected?

Thanks,

Gary

This might have to do with data copies. Depending on what you do with the previewed queue element, and the internal queue implementation, the preview may require creating a copy of the element so that the original element can remain in the queue. When you flush the queue no copy is necessary since that element is no longer in the queue.


> This might have to do with data copies. Depending on what you do with the previewed queue element, and the internal queue implementation, the preview may require creating a copy of the element so that the original element can remain in the queue. When you flush the queue no copy is necessary since that element is no longer in the queue.

As I understand it, a preview element always makes a copy of the element data, but a properly used dequeue-enqueue implementation can be made inline.

/J

> As I understand it, a preview element always makes a copy of the element data, but a properly used dequeue-enqueue implementation can be made inline.
Bingo.

Think about it --- if you do "preview", you have to have a copy on your local wire and a copy left behind in the queue. Why? Because as soon as your VI continues past the Preview node, some other thread somewhere could Dequeue the data in the queue. If you haven't made your own copy, you're now sharing data between VIs... and much badness ensues (one stomps on the other, one disposes the data while the other is still using it... ugh).

For the record, Notifiers always do the equivalent of a preview because there may be any number of Wait nodes listening for the same notification.
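A C++ sketch of the distinction (hypothetical, not LabVIEW's internals; empty-queue handling omitted): a mutex-guarded queue must return a copy from preview, because another thread could dequeue the original the moment the lock is released, whereas dequeue can move the element out with no copy.

```cpp
#include <deque>
#include <mutex>
#include <utility>

template <typename T>
class SafeQueue {
    std::deque<T> q;
    std::mutex m;

public:
    void enqueue(T v) {
        std::lock_guard<std::mutex> lock(m);
        q.push_back(std::move(v));
    }

    // Preview must hand back a COPY: the original stays in the queue,
    // and another thread may dequeue it as soon as we unlock.
    T preview() {
        std::lock_guard<std::mutex> lock(m);
        return q.front();           // copy constructed here
    }

    // Dequeue owns the element after removal, so it can be moved out
    // with no copy at all.
    T dequeue() {
        std::lock_guard<std::mutex> lock(m);
        T v = std::move(q.front());
        q.pop_front();
        return v;
    }
};
```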


> Bingo.
>
> Think about it --- if you do "preview", you have to have a copy on your local wire and a copy left behind in the queue. Why? Because as soon as your VI continues past the Preview node, some other thread somewhere could Dequeue the data in the queue. If you haven't made your own copy, you're now sharing data between VIs... and much badness ensues (one stomps on the other, one disposes the data while the other is still using it... ugh).
>
> For the record, Notifiers always do the equivalent of a preview because there may be any number of Wait nodes listening for the same notification.

I was surprised at how different the runtimes were, given that the queue contained a cluster of 4 scalars. I didn't think that copy would be a very big deal. I guess all things are relative when you're talking about 1e-4 vs. 1e-5 ms.


Everything counts in large amounts...

Ben puts his "Large Disk Specialist" hat on.

Fragmented drives slow down disk transfers because the read operations involve physical movement of the heads or waiting for the sector we want to pass under the R/W head. When a disk is not fragmented and is empty, data can be read or written as fast as the disk spins. This is common knowledge, but I repeat it to set up the next ideas.

When we write a file to disk we not only have to wait for the positioner to move and the disk to rotate, but we also have to allocate the space and update directory information so the OS can find the file again later. For large files that keep growing, the OS has to stop and maintain the file system while we are growing the file. This is very similar to the hits we take when building an array, except the disk is orders of magnitude slower than RAM.

Fast Disk Writing Trick!

Whenever we can make an estimate of how large a file could be, we can "pre-write" a file that is twice as big as what we expect. The "pre-writing" moves all of the disk-structure overhead into a setup step and keeps these housekeeping tasks from slowing down the writes. After the data is collected, copy it to the final path and truncate the file to the actual amount of data.
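A sketch of the trick using POSIX calls (assumptions: a POSIX system, and truncating in place rather than copying to a final path, for brevity; on Windows the equivalents are SetFilePointerEx/SetEndOfFile). posix_fallocate actually reserves the blocks up front, which is the point of the setup step:

```cpp
#include <fcntl.h>
#include <unistd.h>

int main() {
    const off_t expected = 512L * 1024 * 1024;   // expected data volume
    int fd = open("acq.dat", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) return 1;

    // Setup step: reserve twice the expected size so block allocation
    // and directory housekeeping happen before acquisition starts.
    posix_fallocate(fd, 0, 2 * expected);

    // ... acquisition loop: write(fd, buf, n) appends sequentially ...

    const off_t actual = expected;   // bytes actually written (placeholder)
    ftruncate(fd, actual);           // trim the file to the real length
    close(fd);
    return 0;
}
```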

Ben

  • 2 months later...

> I'm a bit busy putting together NI Week presentations about OO features at the moment, but if I get time next week, I'll try to write up a big summary document of queues and notifiers, pulling together all the various parts I've posted over the years.

Wanted to bump this up, in case AQ has time in his life again.

  • 1 month later...

> This is another bump from a guy who's currently trying to push an algorithm as fast as it will go, looking for memory savings where possible. Did you ever finish the run-through on queues, AQ?

No, and as much as I'd like to, I really don't see it happening. Writing this requires a large block of unbroken time, and it keeps getting trumped by other things.

