
Insane memory jump


Recommended Posts

I have had memory leaks that I investigated and fixed in the past, but never one of this kind! Usually the symptom is a slow but constant increase in memory, on the order of 1MB every few minutes, which adds up to 100MB in a few hours and 1GB after a few days. When I look at the memory usage in a graph, I see a constant gentle slope and I know I have a leak. Usually it's a while loop which keeps opening a reference and never releases it.
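In text form, the usual culprit looks something like this (a hypothetical C sketch, not my actual code):

    #include <stdio.h>

    /* Classic slow leak: the loop opens a new reference every
       iteration and never releases it, so memory creeps up by a
       small, constant amount per pass. */
    int main(void)
    {
        for (;;) {
            FILE *f = fopen("log.txt", "r");  /* new handle each pass */
            if (f == NULL)
                break;
            /* ... use f ... */
            /* missing fclose(f); -- the leak */
        }
        return 0;
    }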

But this time I had 5 hours of no leak at all, where my application stayed nicely at 356MB, and suddenly it jumped to 1.5GB within a 2s interval! Then it increased by 69MB/s for about 20s before stabilizing at 3GB. After that it regularly dropped back to 2GB for 1s and then returned to 3GB for 30s, 2GB for 1s, 3GB for 30s, and so on...

How is that possible? Can a piece of code really manage to allocate 69MB/s? Any comments welcome!

 

[Attachment: Capture.PNG — memory usage graph]


If you work with images, or call something inside a DLL, nothing would be too insane. But I guess you already did your homework in trying to track that down. What looks strange is the saturation at 3 GB and then the sudden drops and recoveries. Makes me suspect a problematic corner case of LV's garbage collector...

I don't know if it helps, but your post reminded me of this old discussion. There I hijacked the thread to complain about what definitely turned out to be a bug in the LV webserver, which appeared in one LV version and was silently covered up a couple of versions afterwards. That thread goes on a bit in the tone of "trimming has nothing to do with a bug", "yes there is a bug", but it is essentially about a call to the Windows API to trim the process working set, which might be of some use for your testing.
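If memory serves, the call in question is the Win32 SetProcessWorkingSetSize; a minimal sketch of the trimming call (error handling omitted):

    #include <windows.h>

    /* Passing (SIZE_T)-1 for both sizes asks Windows to remove as
       many pages as possible from this process's working set. The
       number reported by Task Manager drops; pages are simply
       faulted back in on demand. */
    void TrimWorkingSet(void)
    {
        SetProcessWorkingSetSize(GetCurrentProcess(),
                                 (SIZE_T)-1, (SIZE_T)-1);
    }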

Edited by ensegre
3 hours ago, ensegre said:

Makes me suspect a problematic corner case of LV's garbage collector

LabVIEW doesn't have a garbage collector.

 

I've seen these sorts of behaviours with race conditions when creating and freeing resources.
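Something like this contrived C sketch (hypothetical, assuming Windows threads) shows the shape of it:

    #include <windows.h>
    #include <stdlib.h>

    /* Two threads share one resource slot with no synchronization.
       If Creator overwrites the slot before Destroyer has freed the
       old block, that block is leaked -- and memory can then jump
       suddenly instead of creeping. */
    static void *g_slot = NULL;

    static DWORD WINAPI Creator(LPVOID arg)
    {
        (void)arg;
        for (;;)
            g_slot = malloc(1024 * 1024);  /* may clobber an unfreed block */
        return 0;
    }

    static DWORD WINAPI Destroyer(LPVOID arg)
    {
        (void)arg;
        for (;;) {
            void *p = g_slot;              /* may race with Creator */
            g_slot = NULL;
            free(p);
        }
        return 0;
    }

    int main(void)
    {
        CreateThread(NULL, 0, Creator, NULL, 0, NULL);
        CreateThread(NULL, 0, Destroyer, NULL, 0, NULL);
        Sleep(10000);                      /* watch the memory graph */
        return 0;
    }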

Edited by ShaunR
On 5/5/2017 at 9:55 PM, ShaunR said:

LabVIEW doesn't have a garbage collector.

That is not entirely true; it depends on how strict your definition of a garbage collector is. You are correct that LabVIEW allocates and deallocates memory blocks explicitly, rather than depending on a garbage collector to periodically scan all memory objects and determine what can be deallocated.

However, LabVIEW does do some sort of memory retention on the diagram, where blocks are not automatically deallocated when they go out of scope, because they can then simply be reused on the next iteration of a loop or for the next run of the VI.
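In C terms the effect is roughly like this (my own analogy, not LabVIEW's actual implementation):

    #include <stdlib.h>

    /* A buffer that logically "goes out of scope" at the end of each
       call, but is kept around and reused on the next call instead of
       being freed -- much like a diagram buffer between VI runs.
       Error handling omitted. */
    double *get_work_buffer(size_t n)
    {
        static double *buf = NULL;
        static size_t cap = 0;
        if (n > cap) {
            buf = realloc(buf, n * sizeof *buf);
            cap = n;
        }
        return buf;   /* never freed; reused by the next call */
    }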

And there is also some low-level memory management where LabVIEW usually doesn't return memory to the system heap when it is released inside LabVIEW, but instead holds onto it for future memory requests. However, this part has been changed several times in the history of LabVIEW. Early versions had a very elaborate memory manager scheme built in, at some point even using a third-party memory manager called Great Circle, both to improve on the rather simplistic memory management scheme of Windows 3.1 (and Mac OS Classic) and to allow much more fine-grained debugging options for memory usage.
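The retention part itself is a simple idea; a toy C sketch of the principle (nothing like LabVIEW's actual code):

    #include <stdlib.h>

    #define BLOCK_SIZE 4096

    /* Released blocks go onto a free list for reuse instead of back
       to the system heap, so the process's memory footprint stays
       flat while the program churns through allocations. */
    typedef struct Block { struct Block *next; } Block;
    static Block *free_list = NULL;

    void *pool_alloc(void)
    {
        if (free_list) {               /* reuse a retained block */
            Block *b = free_list;
            free_list = b->next;
            return b;
        }
        return malloc(BLOCK_SIZE);     /* grow only when pool is empty */
    }

    void pool_free(void *p)
    {
        Block *b = p;                  /* retain instead of free() */
        b->next = free_list;
        free_list = b;
    }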

More recent versions of LabVIEW have shed much of these layers and rely far more on the memory management capabilities of the underlying host platform. For good reason! Creating a good, performant and, most importantly, flawless memory manager is an art in itself.

Edited by rolfk
28 minutes ago, rolfk said:

That is not entirely true; it depends on how strict your definition of a garbage collector is. [...]

AQ gets quite irate when people talk about LabVIEW's "garbage collector". I will defer to his expertise and definition ;)

Just to get in before someone pipes up about THAT function... "Request Deallocation" is not a garbage collector in any sense.

Edited by ShaunR