
When does the MemoryManager release memory?


Recommended Posts

Hi,

I am running small test apps to check whether design ideas for a bigger project run without memory leaks.

My current app sends waveforms of varying length through queues to a save routine, which writes them into a TDMS file.

I understand that the way I do this could lead to memory fragmentation. The RT System Manager showed this:

post-1037-1206978355.png?width=400

After startup

post-1037-1206978369.png?width=400

After 6 days running

post-1037-1206978378.png?width=400

After 11 days running

The main question for me is: when will the runtime free some memory again?

Does it only kick in when available memory drops below a threshold? That would require a different test application that produces cyclic memory shortages.

Greetings from a cloudy Munich


LabVIEW doesn't have a garbage collection system -- there isn't a "memory manager" that is in charge of periodically deallocating data. (I put "memory manager" in quotes because there is something that we call the memory manager, but it doesn't do the job you're describing.)

Let's take a VERY simple case: Suppose you have a subVI that takes an array and an int32 N as inputs. The subVI's job is to concatenate N items onto the end of the array.

When that subVI loads into memory, its arrays are all size zero, so they take little memory. Now call the subVI, passing in an array of size 5 and 10 for N. The front panel control will allocate space to hold a copy of the 5-element array for its display. The indicator will allocate space to display a 15-element array. Various terminals on the block diagram may allocate buffer copies of the array (use the Show Buffer Allocations tool to see these allocations). So now your VI uses more memory than it did before.

The subVI will not release that data when it finishes running. Those terminals stay "inflated". If you call the subVI again with a size-5 array, those allocations will be reused for copies of the new array. If you call with a smaller array, LV will release the memory it doesn't need. If you call with a larger array, LV will allocate more.
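To make that concrete, here is a minimal sketch in Python of the allocation pattern described above. This is an analogy only, since LabVIEW diagrams can't be shown inline; the Terminal class and the byte counts are invented for illustration:

    # Rough analogy, NOT LabVIEW code: a terminal's buffer persists between
    # calls, resized only when the incoming data has a different size.
    class Terminal:
        def __init__(self):
            self.allocated = 0  # bytes currently held by this terminal

        def accept(self, n_elements, bytes_per_element=8):
            # LV reuses the buffer if the size matches, shrinks it if the
            # new data is smaller, and grows it if the data is larger.
            self.allocated = n_elements * bytes_per_element
            return self.allocated

    t = Terminal()
    t.accept(15)   # first call: allocates 120 bytes for 15 DBLs
    # ...subVI finishes; the 120 bytes stay allocated ("inflated")...
    t.accept(15)   # same size: the allocation is simply reused
    t.accept(5)    # smaller: LV releases the memory it doesn't need
    t.accept(100)  # larger: LV allocates more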

If you're running a test VI over and over again with the same inputs, you should see the data size remain constant after the first execution of the VIs, because after that point all the data space the VI needs is fully allocated. If you're seeing memory usage grow (which you are), it is because you're processing ever larger arrays or because you're reserving system resources and never releasing them. The common example of this is opening a queue or notifier reference and never closing it. Every Obtain Queue call allocates 4 bytes for a new refnum, and those 4 bytes are only returned when the reference gets released. LV will release the reference for you when the VI goes idle, but if you're obtaining references in a loop, you should be releasing the resources manually or they'll just build up. Another common leak is allocating reentrant clones using Open VI Reference that you never close.
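The refnum leak pattern is easy to mimic in text. A minimal Python sketch, where obtain_queue and release_queue are made-up stand-ins for the Obtain Queue and Release Queue primitives:

    # Every Obtain allocates a new refnum entry; only Release returns it.
    refnum_table = {}
    next_ref = 0

    def obtain_queue(name):
        global next_ref
        next_ref += 1
        refnum_table[next_ref] = name  # a few bytes held per unreleased refnum
        return next_ref

    def release_queue(ref):
        del refnum_table[ref]          # Release returns the entry

    # Leaky: a fresh refnum on every iteration, never released.
    for _ in range(1000):
        ref = obtain_queue("data_q")
    print(len(refnum_table))           # 1000 entries still held

    # Fixed: release inside the loop (or obtain once, outside the loop).
    for _ in range(1000):
        ref = obtain_queue("data_q")
        release_queue(ref)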

You can force subVIs to deallocate as soon as they finish running, returning them to the pristine "just loaded into memory" state, by dropping the Request Deallocation primitive onto the diagram. Doing so can reduce the amount of memory that LV uses at any one time, but it generally results in really bad performance characteristics.


QUOTE (Aristos Queue @ Mar 31 2008, 11:11 AM)

...dropping the Request Deallocation primitive onto the diagram <...> can reduce the amount of memory that LV uses at any one time, but that generally results in really bad performance characteristics.

Can you say a little more about this: i.e., in what way will using Request Deallocation degrade performance? If there are any good KBs or other resources on this, could you post some here?


QUOTE (Val Brown @ Mar 31 2008, 01:45 PM)

Can you say a little more about this: i.e., in what way will using Request Deallocation degrade performance? If there are any good KBs or other resources on this, could you post some here?

Basically, the subVI will have to reallocate, every time it executes, all the stuff it just deallocated. Very, very time consuming. The only time Request Deallocation is advantageous is when a subVI has a very large array pass through it and you don't expect to call that subVI again for a very long time (we're talking arrays on the order of a million elements and delays between calls of at least a few full seconds). In those cases, there can be some advantage to deallocating the subVI after every call.
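The cost shows up even in a crude Python analogy: reallocating a large buffer on every call versus keeping it allocated between calls. The one-million-element figure below just mirrors the order of magnitude mentioned above:

    import timeit

    N = 1_000_000  # elements, per the order of magnitude above

    def dealloc_each_call():
        buf = [0.0] * N       # fresh allocation every call ("deallocate after use")
        buf[0] = 1.0

    _persistent = [0.0] * N   # allocated once, stays "inflated" between calls

    def reuse_buffer():
        _persistent[0] = 1.0  # buffer already allocated, just reused

    print("realloc every call:", timeit.timeit(dealloc_each_call, number=100))
    print("reuse buffer:      ", timeit.timeit(reuse_buffer, number=100))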


Hi and thanks for your replies,

I still don't know why my memory consumption grows.

The code used is in the attachment. Download File: post-1037-1207038045.zip

The queue with the waveforms is limited to 500 elements, and the producer VI, which writes random data into the queue, has a max waveform length of 5000 DBL values. So the queue's size in memory should max out at about 20 MB (500 elements × 5000 samples × 8 bytes per DBL = 20,000,000 bytes).

Perhaps someone has an idea where all my memory is used.

Edit:

Sorry, I didn't make clear which is the main VI (Q_Get_Write_Copy.vi).

post-1037-1207056596.png?width=400


QUOTE (Götz Becker @ Apr 1 2008, 04:26 AM)

Hi and thanks for your replies,

I still don't know why my memory consumption grows.

The code used is in the attachment. Download File: http://lavag.org/old_files/post-1037-1207038045.zip

The queue with the waveforms is limited to 500 elements, and the producer VI, which writes random data into the queue, has a max waveform length of 5000 DBL values. So the queue's size in memory should max out at about 20 MB (500 elements × 5000 samples × 8 bytes per DBL = 20,000,000 bytes).

Perhaps someone has an idea where all my memory is used.

That doesn't load as a project. And just looking at the subVIs themselves won't show any leaks for sure.

Rolf Kalbermatter


Just adding my 2 cents.

The link to VI Memory Usage in LabVIEW 8.5 (see previous post) says:

QUOTE

Conditional Indicators and Data Buffers

The way you build a block diagram can prevent LabVIEW from reusing data buffers. Using a conditional indicator in a subVI prevents LabVIEW from optimizing data buffer usage. A conditional indicator is an indicator inside a Case structure or a For Loop. Placing an indicator in a conditionally executed code path breaks the flow of data through the system, and LabVIEW will not reuse the data buffer from the input but will force a data copy into the indicator instead. When you place indicators outside of Case structures and For Loops, LabVIEW directly modifies the data inside the loop or structure and passes the data to the indicator instead of creating a data copy. You can create constants for alternate cases instead of placing indicators inside the Case structure.

I saw that your methods all have control terminals inside the error case structure. This might not change a lot, but if I understand the quote above correctly, it can only be better to have the control terminals outside of the case structure.

See also this thread on the NI forums: http://forums.ni.com/ni/board/message?boar...=191622#M191622

Hope this can help


QUOTE (TiT @ Apr 1 2008, 12:34 PM)

I saw that your methods all have control terminals inside the error case structure. This might not change a lot, but if I understand the quote above correctly, it can only be better to have the control terminals outside of the case structure.

Hope this can help

Hi,

thanks for the link. I didn't think about the controls inside the case structures. Usually I only look for nested indicators and the dataflow of "passed-through" data like references.

I'll try the hint and hope for the best :rolleyes:


QUOTE (Götz Becker @ Apr 1 2008, 09:26 AM)

I still don't know why my memory consumption grows...

Hi,

I think one of the reasons your memory grows is that you start filling the queue with elements of length 1, 2, 3, 4, etc., so each queue element only allocates that amount of memory.

At some point a queue element that was once initialized with a size of 1 will get a larger buffer written to it, and will then keep this new, larger buffer in memory. This will continue for all your queue elements.

In the end, all your queue elements will have an allocated size of 4999 × 8 bytes.

This could then explain an increase in memory of about 4999 × 8 × 500 bytes ≈ 20 MB.

To test this, add some code right after the InitQueue primitive that adds 4999 DBL values to all buffer elements and then removes all elements (thus pre-initializing the amount of memory the queue will use).
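In text form, the suggested pre-initialization looks roughly like this. It's a Python analogy of the steps only; LabVIEW's queue keeps per-slot buffers, which a Python deque does not, so this sketches the procedure rather than the allocator:

    from collections import deque

    SLOTS = 500      # queue bound from the post above
    MAX_LEN = 4999   # max waveform length in DBLs

    q = deque(maxlen=SLOTS)

    # Right after creating the queue: enqueue max-size elements...
    for _ in range(SLOTS):
        q.append([0.0] * MAX_LEN)

    # ...then remove them all, leaving every slot pre-grown to its
    # maximum size before the producer starts writing real waveforms.
    q.clear()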

/J


My 2c on RT memory issues...

  • Try disconnecting the error inputs on queues and notifiers. This especially applies if you believe the error cluster may be carrying a warning.
  • Try placing queues or notifiers outside of case structures.
  • Off the subject a bit, but don't use string shared variables in RT version 8.5

I have seen three separate RT memory leaks, each fixed by one of these bullets.

cheers!

1 month later...

QUOTE (Götz Becker @ May 5 2008, 04:13 AM)

...after the switch to 8.5.1, together with a new PXI controller, the memory consumption remained stable. The test application has now run for about a week, which makes me feel happy again

Those plots look much better :thumbup:


QUOTE (Götz Becker @ May 5 2008, 04:13 AM)

Hi again :),

after the switch to 8.5.1, together with a new PXI controller, the memory consumption remained stable...

That's great news. :yes: NI said the next release (after 8.5) would fix some of the leaks. I'm glad to see that upgrading may be worth the trouble. Thanks for checking back in.


I have had to use the "Deallocation VI" in one of my current test systems.

post-941-1210024602.png?width=400

But I also needed to increase the memory available to LabVIEW to 3 GB.

It's a simple switch in the "c:\boot.ini" file.

    [boot loader]
    timeout=30
    default=multi(0)disk(0)rdisk(0)partition(2)\WINDOWS

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Microsoft Windows XP Professional" /noexecute=optin /fastdetect /3GB

(The /3GB switch at the end of the last line is the addition.)

You can read more about this in the section "Enhancing Virtual Memory Usage" in the LabVIEW help Contents: "LabVIEW 8.5 Features and Changes"

Cheers,

Mikael

