Götz Becker Posted April 1, 2008
Hi, I am running small test apps to check whether design ideas for a bigger project run without memory leaks. My current app sends waveforms of varying length through a queue to a save routine, which writes them to a TDMS file. I understand that the way I do this could lead to memory fragmentation. The RT System Manager showed this: [screenshots: after startup, after 6 days running, after 11 days running]. The main question for me is: when will the runtime free some memory again? Does it only kick in when available memory falls below a threshold? That would require a different test application that produces cyclic memory shortages or so. Greetings from a cloudy Munich
Aristos Queue Posted April 1, 2008
LabVIEW doesn't have a garbage collection system -- there isn't a "memory manager" that is in charge of periodically deallocating data. (I put "memory manager" in quotes because there is something we call the memory manager, but it doesn't do the job you're describing.) Let's take a VERY simple case: suppose you have a subVI that takes an array and an int32 N as input. The subVI's job is to concatenate N items onto the end of the array. When that subVI loads into memory, its arrays are all size zero, so they take little memory. Now call the subVI, passing in an array of size 5 and 10 for N. The front panel control will allocate space to hold a copy of the 5-element array for its display. The indicator will allocate space to display a 15-element array. Various terminals on the block diagram may allocate buffer copies of the array (use the Show Buffer Allocations tool to see these allocations). So now your VI uses more data than it did before. The subVI will not release that data when it finishes running. Those terminals stay "inflated". If you call the subVI again with a size-5 array, those allocations will be reused for copies of the new array. If you call with a smaller array, LV will release the memory that it doesn't need. If you call with a larger array, LV will allocate more. If you're running a test VI over and over again with the same inputs, you should see the data size remain constant after the first execution of the VIs, because after that point all the data space that the VI needs is fully allocated. If you're seeing growth in the amount of memory (which you are), it is because you're processing ever-larger arrays or because you're reserving system resources and never releasing them. The common example of this is opening a queue or notifier reference and never closing it. Every Obtain Queue call allocates 4 bytes for a new refnum, and those 4 bytes are only returned when the reference gets released. LV will release the reference for you when the VI goes idle, but if you're calling in a loop, you should be releasing the resources manually or they'll just build up. Another common leak is allocating reentrant clones using Open VI Reference that you never close. You can force subVIs to deallocate as soon as they finish running, so they're back in the pristine "just loaded into memory" state, by dropping the Request Deallocation primitive onto the diagram. Doing so can reduce the amount of memory that LV uses at any one time, but it generally results in really bad performance characteristics.
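A loose illustration of the refnum-leak pattern described above, sketched in plain Python rather than LabVIEW (obtain_queue/release_queue are hypothetical stand-ins for the Obtain Queue and Release Queue primitives, not a real API):

_queues = {}       # queue name -> shared element storage
_refnums = {}      # refnum -> queue name; one entry per Obtain that was never released
_next_refnum = 0

def obtain_queue(name):
    """Every call hands out a brand-new refnum, even for an already-existing queue."""
    global _next_refnum
    _queues.setdefault(name, [])
    _next_refnum += 1
    _refnums[_next_refnum] = name
    return _next_refnum

def release_queue(refnum):
    """Frees the refnum; the queue itself goes away once no refnums point at it."""
    name = _refnums.pop(refnum)
    if name not in _refnums.values():
        _queues.pop(name, None)

# The leaky pattern: obtaining inside a loop and never releasing.
for _ in range(100_000):
    r = obtain_queue("waveforms")
    # ... enqueue/dequeue through r ...
    # release_queue(r)   # without this, _refnums grows by one entry per iteration

Each leaked entry is tiny (the 4 bytes mentioned above), but in a loop that never releases, the table only ever grows until the VI goes idle.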
Val Brown Posted April 1, 2008
QUOTE (Aristos Queue @ Mar 31 2008, 11:11 AM): ...dropping the Request Deallocation primitive onto the diagram <...> can reduce the amount of memory that LV uses at any one time, but that generally results in really bad performance characteristics.
Can you say a little bit more about this: i.e., in what way will using Request Deallocation degrade performance? If there are any good KBs or other resources on this, could you post some here?
Aristos Queue Posted April 1, 2008
QUOTE (Val Brown @ Mar 31 2008, 01:45 PM): Can you say a little bit more about this: i.e., in what way will using Request Deallocation degrade performance? If there are any good KBs or other resources on this, could you post some here?
Basically, the subVI will have to reallocate all the stuff it just deallocated every time it executes. Very, very time consuming. The only time Request Deallocation is advantageous is when you have a subVI that passes a very large array through it and you don't expect to call that subVI again for a very long time (we're talking arrays on the order of a million elements and delays between calls of at least a few full seconds). In those cases, there can be some advantage to going ahead and deallocating the subVI after every call.
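A rough timing sketch of that trade-off, in plain Python rather than LabVIEW: one function reuses a preallocated million-element buffer between calls, the other throws it away and reallocates on every call, which is roughly what Request Deallocation forces on a subVI. The function names, sizes, and call counts are made up for illustration.

import time

N = 1_000_000      # "arrays on the order of a million elements"
CALLS = 50

def process_reuse(buf, data):
    # Buffer stays allocated between calls, like a subVI's "inflated" terminals.
    buf[:len(data)] = data
    return buf[0]

def process_realloc(data):
    # Fresh allocation on every call, like a subVI that deallocates after each run.
    buf = [0.0] * N
    buf[:len(data)] = data
    return buf[0]

data = [1.0] * N
buf = [0.0] * N

t0 = time.perf_counter()
for _ in range(CALLS):
    process_reuse(buf, data)
t1 = time.perf_counter()
for _ in range(CALLS):
    process_realloc(data)
t2 = time.perf_counter()

print(f"reuse buffer between calls: {t1 - t0:.3f} s")
print(f"reallocate on every call:   {t2 - t1:.3f} s")

The extra time in the second loop is pure allocation and initialization work that the first loop only pays once.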
jzoller Posted April 1, 2008
Aristos, can you spare any comments on how LV deals with memory fragmentation (http://en.wikipedia.org/wiki/Memory_fragmentation)? Joe Z. (Edit: specifically, is external fragmentation an issue... you covered internal above)
Aristos Queue Posted April 1, 2008
QUOTE (jzoller @ Mar 31 2008, 02:43 PM): From The Manual (http://zone.ni.com/reference/en-XX/help/371361D-01/lvconcepts/vi_memory_usage/), about halfway down.
Yep. That would be the actual memory manager that I referenced in my original post.
Götz Becker (Author) Posted April 2, 2008
Hi, and thanks for your replies. I still don't know why my memory consumption grows. The code I used is in the attachment: Download File: post-1037-1207038045.zip. The queue with the waveforms is limited to 500 elements, and the producer VI, which writes random data into the queue, has a maximum waveform length of 5000 (DBL values). So the queue size in memory should max out at about 20 MB. Perhaps someone has an idea where all my memory is used. Edit: Sorry, I didn't make it clear which is the main VI (Q_Get_Write_Copy.vi).
Rolf Kalbermatter Posted April 2, 2008
QUOTE (Götz Becker @ Apr 1 2008, 04:26 AM): I still don't know why my memory consumption grows. The code I used is in the attachment (http://lavag.org/old_files/post-1037-1207038045.zip)...
That doesn't load as a project. And just looking at the subVIs themselves won't show any leaks for sure. Rolf Kalbermatter
LAVA 1.0 Content Posted April 2, 2008
Just adding my 2 cents. The link to VI Memory Usage in LabVIEW 8.5 (see previous post) says:
QUOTE: Conditional Indicators and Data Buffers. The way you build a block diagram can prevent LabVIEW from reusing data buffers. Using a conditional indicator in a subVI prevents LabVIEW from optimizing data buffer usage. A conditional indicator is an indicator inside a Case structure or For Loop. Placing an indicator in a conditionally executed code path will break the flow of data through the system, and LabVIEW will not reuse the data buffer from the input, but force a data copy into the indicator instead. When you place indicators outside of Case structures and For Loops, LabVIEW directly modifies the data inside the loop or structure and passes the data to the indicator instead of creating a data copy. You can create constants for alternate cases instead of placing indicators inside the Case structure.
I saw that your methods all have control terminals inside of the error case structure. This might not change a lot, but if I understand the quote above correctly, it can only be better to have the control terminals outside of the case structure. See also this thread on the NI forum: http://forums.ni.com/ni/board/message?boar...=191622#M191622 Hope this can help
Götz Becker (Author) Posted April 2, 2008
QUOTE (TiT @ Apr 1 2008, 12:34 PM): I saw that your methods all have control terminals inside of the error case structure. This might not change a lot, but if I understand the quote above correctly, it can only be better to have the control terminals outside of the case structure. Hope this can help
Hi, thanks for the link. I didn't think about the controls inside the case structures. Usually I only look for nested indicators and the dataflow of "passed-through" data like references. I'll try the hint and hope for the best.
Mellroth Posted April 2, 2008
QUOTE (Götz Becker @ Apr 1 2008, 09:26 AM): I still don't know why my memory consumption grows...
Hi, I think one of the reasons your memory grows is that you start filling the queue with elements of length 1, 2, 3, 4, etc., so each queue element only allocates that amount of memory. At some point a queue element that was once initialized with a size of 1 will get a larger buffer written to it, and will then keep this new, larger buffer in memory. This will continue for all your queue elements. In the end, all your queue elements will have an allocated size of 4999*8 bytes. This could then explain an increase in memory of about 4999*8*500 ≈ 20 MB. To test this, add some code right after the InitQueue primitive that adds 4999 DBL values to all buffer elements and then removes all elements (thus pre-initializing the amount of memory the queue will use). /J
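A small sketch of that pre-fill idea, using Python's queue module as a loose stand-in for the LabVIEW queue. The analogy is imperfect (a Python Queue doesn't keep per-slot buffers alive the way the LabVIEW queue does), but it shows the intent of the pre-fill step and the worst-case arithmetic; the constants come from the test app described above.

from queue import Queue

MAX_ELEMENTS = 500     # queue bound in the test app
MAX_WFM_LEN = 4999     # worst-case waveform length in DBL samples

q = Queue(maxsize=MAX_ELEMENTS)

# Fill every slot once with a full-size dummy waveform...
for _ in range(MAX_ELEMENTS):
    q.put([0.0] * MAX_WFM_LEN)

# ...then drain it again before the real producer starts.
while not q.empty():
    q.get()

# Worst-case payload the queue can ever hold, matching the ~20 MB estimate:
print(MAX_ELEMENTS * MAX_WFM_LEN * 8 / 1e6, "MB")   # 19.996 MB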
Götz Becker (Author) Posted April 3, 2008
QUOTE (JFM @ Apr 1 2008, 03:36 PM): To test this, add some code right after the InitQueue primitive that adds 4999 DBL values to all buffer elements and then removes all elements (thus pre-initializing the amount of memory the queue will use).
Hi, good point! I added a pre-allocation step and restarted my test (and hope for the best). [screenshot: http://lavag.org/old_files/monthly_04_2008/post-1037-1207125388.png] Greetings, Götz
BrokenArrow Posted April 4, 2008
My 2c on RT memory issues...
- Try disconnecting the error inputs on queues and notifiers. This is especially true if you believe the error cluster may be carrying a warning.
- Try placing queues or notifiers outside of case structures.
- Off the subject a bit, but don't use string shared variables in RT version 8.5.
I have seen three separate RT memory leaks fixed, one by each bullet. Cheers!
Götz Becker (Author) Posted May 6, 2008
Hi again, after the switch to 8.5.1 together with a new PXI controller the memory consumption has remained stable. The test application has now been running for about a week and makes me feel happy again :thumbup: [screenshots: after start / today]
crelf Posted May 6, 2008
QUOTE (Götz Becker @ May 5 2008, 04:13 AM): ...after the switch to 8.5.1 together with a new PXI controller the memory consumption has remained stable. The test application has now been running for about a week and makes me feel happy again
Those plots look much better :thumbup:
BrokenArrow Posted May 6, 2008
QUOTE (Götz Becker @ May 5 2008, 04:13 AM): Hi again, after the switch to 8.5.1 together with a new PXI controller the memory consumption has remained stable...
That's great news. NI said the next release (after 8.5) would fix some of the leaks. I'm glad to see that upgrading may be worth the trouble. Thanks for checking back in.
Götz Becker (Author) Posted May 6, 2008
QUOTE (BrokenArrow @ May 5 2008, 04:16 PM): ...Thanks for checking back in.
You are welcome... since you all helped me with that... and I'll probably have more questions in the future.
MikaelH Posted May 6, 2008
I have had to use the Deallocation VI in one of my current test systems. But I also needed to increase the memory available to LabVIEW to 3 GB. It's a simple switch in the "c:\boot.ini" file:

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(2)\WINDOWS

[operating systems]
multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Microsoft Windows XP Professional" /noexecute=optin /fastdetect /3GB

You can read more about this in the section "Enhancing Virtual Memory Usage" in the LabVIEW Help, under "LabVIEW 8.5 Features and Changes". Cheers, Mikael