Gary Rubin Posted August 13, 2009
We once had a data recorder fall out of a rack because the slides weren't installed right. After that, I felt like I could tout that the system had been drop-tested from a height of 3 feet.
MarkCG Posted August 24, 2009
So I'm guessing using queues for applications that run 24 hours a day is probably a bad idea... I've written programs for controlling processes that would buffer up a bunch of data in a queue, analyze it, adjust control parameters on-line, and then flush the queue. The whole time I'd assumed this memory was freed up, but instead I created a huge memory leak. Awesome. Definitely going to go about it some other way next time...
ned Posted August 24, 2009
Quoting MarkCG: "So I'm guessing using queues for applications that run 24 hours a day is probably a bad idea..."
Have you ever actually had a problem with your queues in the past? I think you've misunderstood something here. There is no reason not to use queues in applications that run continuously. You're not creating a memory leak, since LabVIEW hasn't lost track of that memory; you've just "reserved" that memory for the next time data needs to be put into the queue. Unless you actually need that memory back immediately for some other purpose, there's no problem.
MarkCG Posted August 24, 2009
Quoting ned: "You're not creating a memory leak since LabVIEW hasn't lost track of that memory..."
Yes, you're right, memory leak isn't the right term for it. If I had had problems, I wouldn't have known it. But sometimes it has to acquire a lot of data, sometimes less, so the amount of memory that's been allocated is governed by the largest data set acquired. Not really a big deal, it just doesn't sit well with me.
PaulG. Posted August 24, 2009
Quoting MarkCG: "So I'm guessing using queues for applications that run 24 hours a day is probably a bad idea..."
I've used queues in similar ways in applications that ran all day and had no trouble with them. I did suffer from a few memory leaks early on in the application development, but they had nothing to do with the queues.
Aristos Queue Posted August 24, 2009
Quoting MarkCG: "So the amount of memory that's been allocated is governed by the largest data set acquired..."
It's a question of reallocation on every call vs. leaving the allocation in place to maximize performance. If the data can occasionally ramp up to "large number X", and you have enough RAM, then it's a good idea to just leave a large block always allocated for that large data event, no matter how rare it is. The larger the desired block, the longer it takes to allocate -- memory may have to be compacted to get a block big enough. Also, keep in mind that we're talking about the space for the top-level data, not the data itself. So if you enqueue three 1-megabyte arrays and then flush the queue, the queue is just keeping allocated the 12 bytes needed to store the array handles, not the arrays themselves.
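AQ's distinction between the queue's top-level slot storage and the payloads themselves can be sketched in Python. This is only a loose analogy (LabVIEW's queue internals are not exposed, and `ReusingQueue` is a hypothetical class invented here for illustration): after a flush, the large payloads become reclaimable, while the small container that held references to them stays allocated for reuse.

```python
import collections

class ReusingQueue:
    """Toy queue that, loosely like LabVIEW's, keeps its top-level slot
    storage alive after a flush while the large payloads are released."""

    def __init__(self):
        self._slots = collections.deque()

    def enqueue(self, item):
        self._slots.append(item)

    def flush(self):
        items = list(self._slots)
        # Dropping the references lets the big payloads be reclaimed;
        # the deque object itself (the "handle storage") lives on and
        # will be reused by the next enqueue, with no fresh allocation
        # of the container required.
        self._slots.clear()
        return items

big = [bytearray(1_000_000) for _ in range(3)]   # three ~1 MB payloads
q = ReusingQueue()
for b in big:
    q.enqueue(b)
drained = q.flush()
assert len(drained) == 3 and len(q._slots) == 0
```

The point of the sketch is MarkCG's observation in reverse: what the queue retains between bursts is the cheap container, not the expensive data.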
Matthew Zaleski Posted August 28, 2009
Quoting an earlier post: "Wow! That could even handle my aggregate data rate of +300MB/s. Wow again! I think that we will at some point in the future go to SSD if only because of durability. Zeros and ones tend to fall off of regular drives if you drop them, and losing data from a several hundred thousand dollar test is a Bad Thing. SSDs, at least the ones currently out on the market, can handle that sort of thing. I think I'll need to get a trampoline for the office though, so I can thoroughly test them."
Off topic, but Tom's Hardware recently crushed the performance of the video you saw: 3.4 GB/s. Sort of on topic: be very careful about the brand and model of SSD you buy. There are a ton of useless drives that shouldn't even be sold. The good ones, like Intel, Samsung, and OCZ, offer insane performance compared to the world of spinning disks. I have an OCZ Vertex 250 GB drive. With a 512 KB block size (reasonable for your data acquisition I/O), the drive averaged 145 MB/sec reads and 185 MB/sec writes. It could keep that up all day, since the block size you are writing is >= the internal flash block size.
Gary Rubin Posted October 1, 2009
I just noticed that in LV8.6, a flush of a single-element queue seems to be considerably faster than a preview. Is this expected? Thanks, Gary
ned Posted October 1, 2009
Quoting Gary Rubin: "a flush of a single-element queue seems to be considerably faster than a preview..."
This might have to do with data copies. Depending on what you do with the previewed queue element, and on the internal queue implementation, the preview may require creating a copy of the element so that the original can remain in the queue. When you flush the queue, no copy is necessary, since that element is no longer in the queue.
Mellroth Posted October 1, 2009
Quoting ned: "the preview may require creating a copy of the element..."
As I understand it, a preview element always makes a copy of the element data, but a properly used dequeue-enqueue implementation can be made inline. /J
Aristos Queue Posted October 1, 2009
Quoting Mellroth: "a preview element always makes a copy of the element data..."
Bingo. Think about it -- if you do "preview", you have to have a copy on your local wire and a copy left behind in the queue. Why? Because as soon as your VI continues past the Preview node, some other thread somewhere could dequeue the data in the queue. If you haven't made your own copy, you're now sharing data between VIs, and much badness ensues (one stomps on the other, one disposes of the data while the other is still using it... ugh). For the record, Notifiers always do the equivalent of a preview, because there may be any number of Wait nodes listening for the same notification.
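The preview-copies-but-dequeue-doesn't rule can be illustrated with a minimal sketch in Python (not LabVIEW's actual implementation; `PeekableQueue` is a hypothetical class made up for this example). Preview must deep-copy the element so a concurrent dequeue can't yank shared data out from under the caller; dequeue transfers ownership, so no copy is needed.

```python
import copy

class PeekableQueue:
    def __init__(self):
        self._items = []

    def enqueue(self, item):
        self._items.append(item)

    def preview(self):
        # A copy is required: another consumer could dequeue the original
        # the instant we return, so the caller must not share the live
        # element still sitting in the queue.
        return copy.deepcopy(self._items[0])

    def dequeue(self):
        # No copy needed: the element leaves the queue entirely, so
        # ownership transfers to the caller.
        return self._items.pop(0)

q = PeekableQueue()
q.enqueue({"volts": [1.0, 2.0]})
peeked = q.preview()
peeked["volts"].append(99.0)                 # mutating the previewed copy...
assert q.preview()["volts"] == [1.0, 2.0]    # ...leaves the queued element intact
taken = q.dequeue()
assert taken["volts"] == [1.0, 2.0]
```

That `deepcopy` is the extra cost Gary measured: flush/dequeue hands back the element itself, while preview pays for a duplicate every time.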
Gary Rubin Posted October 1, 2009
Quoting Aristos Queue: "if you do 'preview', you have to have a copy on your local wire and a copy left behind in the queue..."
I was surprised at how different the runtimes were, given that the queue contained a cluster of 4 scalars. I didn't think that copy would be a very big deal. I guess all things are relative when you're talking about 1e-4 vs. 1e-5 ms.
Aristos Queue Posted October 2, 2009
Quoting Gary Rubin: "I guess all things are relative when you're talking about 1e-4 vs. 1e-5 ms."
Everything counts in large amounts...
Grampa_of_Oliva_n_Eden Posted October 2, 2009
Ben puts his "Large Disk Specialist" hat on. Fragmented drives slow down disk transfers because the read operations involve physical movement of the heads, or waiting for the sector we want to pass under the R/W head. When a disk is not fragmented and is empty, data can be written/read as fast as the disk spins. This is common knowledge, but I repeat it for the next ideas. When we write a file to disk, we not only have to wait for the positioner to move and the disk to rotate; we also have to allocate the space and update directory information so the OS can find the file again later. For large files that are growing, the OS has to stop and maintain the file system while we are growing the file. This is very similar to the hits we take when building an array, except the disk is orders of magnitude slower than RAM. Fast disk writing trick: whenever we can make an estimate of how large a file could be, we can "pre-write" a file that is twice as big as what we expect. The pre-writing moves all of the disk-structure overhead to a set-up step and keeps that housekeeping from slowing down the writes. After the data is collected, copy it to the final path and truncate the file to the actual amount of data. Ben
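Ben's pre-writing trick can be sketched in Python (the helper names `prewrite`/`finalize` are made up for this example). Actually writing zeros, rather than just seeking past the end, matters: a bare `truncate` on a fresh file typically creates a sparse file on many filesystems and defers the allocation cost Ben is trying to pay up front.

```python
import os
import tempfile

def prewrite(path, reserve_bytes, chunk=1 << 20):
    """Write zeros up front so allocation and directory housekeeping
    happen during setup, not during time-critical acquisition writes."""
    zeros = b"\x00" * chunk
    with open(path, "wb") as f:
        written = 0
        while written < reserve_bytes:
            n = min(chunk, reserve_bytes - written)
            f.write(zeros[:n])
            written += n

def finalize(path, actual_bytes):
    """After acquisition, trim the file to the data actually written."""
    with open(path, "r+b") as f:
        f.truncate(actual_bytes)

tmp = os.path.join(tempfile.mkdtemp(), "acq.bin")
prewrite(tmp, 2 * 1024 * 1024)               # reserve 2 MB, twice the expected size
assert os.path.getsize(tmp) == 2 * 1024 * 1024
with open(tmp, "r+b") as f:
    f.write(b"\x01" * (1024 * 1024))         # "acquire" 1 MB of real data
finalize(tmp, 1024 * 1024)                   # keep only what was acquired
assert os.path.getsize(tmp) == 1024 * 1024
```

Per Ben's advice, the reservation is twice the expected size; the final truncate discards the unused tail.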
Gary Rubin Posted December 18, 2009
Quoting Aristos Queue: "I'm a bit busy putting together NI Week presentations about OO features at the moment, but if I get time next week, I'll try to write up a big summary document of queues and notifiers, pulling together all the various parts I've posted over the years."
Wanted to bump this up, in case AQ has time in his life again.
AlexA Posted February 10, 2010
This is another bump from a guy who's currently trying to push an algorithm as fast as it will go, looking for memory savings where possible. Did you ever finish the run-through on queues, AQ?
Aristos Queue Posted February 11, 2010
Quoting AlexA: "Did you ever finish the run-through on queues, AQ?"
No, and as much as I'd like to, I really don't see it happening. Writing this requires a large block of unbroken time, and it keeps getting trumped by other things.