
Memory Fragmentation



I have a very large application that is experiencing a memory fragmentation problem. The root cause of the problem seems to be the fact that I am building and modifying an array of complex clusters. This data structure is used to mirror the data displayed by the tree control. I have done this to avoid writing to the tree control until the user chooses to display the VI that contains the tree. The idea was to encapsulate all GUI writes into a single VI and to avoid them altogether if the data is never requested to be displayed. But the result is a large and ever growing data structure that slowly fragments the memory of the PC until the app slows to a standstill. In my app I create N instances of this data structure to track all the DUTs being tested and then I run it for 24 hours. So, by the end of the test period, 100s of MB have been consumed and every little memory allocation takes forever.

After digging into the code for causes and solutions, I find that there are some functions where memory buffers are being created that I don't understand and can't seem to eliminate. These allocations seem to be making copies of the entire data structure and some of its largest components, causing ever-increasing memory allocations. I was hoping someone could point out the obvious mistake and offer a solution.

Also, I was wondering if the new InPlace structure in LV8.5 could solve this problem. How complex can the code within the InPlace structure be and does ALL of it have to be in place operations?

Attached is the code and an RTF document with some screen shots of the code showing the offending allocations.

Thanks for any ideas you can offer,

-John

Link to comment

Without having tested it exactly, I would say this is definitely a textbook case for the In-Place memory structure.

The reason the copies are made is that you have effectively branched your cluster.

There is still a copy of it in the main/owning cluster plus the one that you have pulled out: 2 separate instances of the data, 2 wires, 2 copies in memory.

Link to comment

QUOTE(jlokanis @ Nov 12 2007, 04:31 PM)

Also, I was wondering if the new InPlace structure in LV8.5 could solve this problem. How complex can the code within the InPlace structure be and does ALL of it have to be in place operations?

Attached is the code and an RTF document with some screen shots of the code showing the offending allocations.

Thanks for any ideas you can offer,

-John

The new Inplace element structure would indeed help avoid extra memory allocations, but you can also get about 90% of the improvements you'd see here without it, simply by keeping an important thing in mind regarding LabVIEW's inplaceness algorithm, which I'll explain.

LabVIEW is really good at operating on data inplace when you unbundle something, then bundle it back in. However, LabVIEW can't be certain when it's safe to operate inplace if you either unbundle or bundle conditionally. That's exactly what you're doing, and it's causing LabVIEW to back up the entire data structure for every operation, which becomes exceedingly costly. To avoid this, try to always unbundle and bundle regardless of the situation. In other words don't place one of these nodes in a Case Structure when you don't have the partner node in that same Case Structure.

The better solution in your case would be to always unbundle the data, but then to decide whether you bundle back in the same untouched data or the modified data. LabVIEW's a lot better at processing that.
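To make that wiring pattern concrete, here is a rough Python analogy of the two shapes described above (Python stands in for the block diagram, and the function and field names are invented). The first version reads and writes the field only inside one branch, which is the shape that forces the defensive copy; the second always reads and always writes, and only the value being written back is chosen conditionally.

# Conditional read-modify-write: analogous to placing the Unbundle and Bundle
# nodes inside a single case of a Case Structure.
def update_conditional(record, modify):
    if modify:
        status = record["status"]            # "unbundle" only in this branch
        record["status"] = status + " *"     # "bundle" only in this branch
    return record

# Unconditional unbundle/bundle: both happen on every call, and only the value
# wired back is chosen, analogous to keeping both nodes outside the Case
# Structure and deciding inside which data to bundle back.
def update_always(record, modify):
    status = record["status"]                          # always unbundle
    new_status = status + " *" if modify else status   # modified or untouched data
    record["status"] = new_status                      # always bundle back
    return record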

It's true that the inplace element structure would save you an extra allocation when you index and then replace array subset. Here LabVIEW always makes a temporary copy. However, I'm guessing this is not the cause of the big problem you are seeing. I'm attaching a modified version of your VI below. Note that I really didn't spend much time analyzing this except to make sure the buffer allocation dots went away for the main Tree Data structure. You should definitely double-check everything to make sure it functions the same as it did.

Link to comment

QUOTE(ragglefrock @ Nov 12 2007, 10:42 PM)

The new Inplace element structure would indeed help avoid extra memory allocations, but you can also get about 90% of the improvements you'd see here without it, simply by keeping an important thing in mind regarding LabVIEW's inplaceness algorithm, which I'll explain.

LabVIEW is really good at operating on data inplace when you unbundle something, then bundle it back in. However, LabVIEW can't be certain when it's safe to operate inplace if you either unbundle or bundle conditionally. That's exactly what you're doing, and it's causing LabVIEW to back up the entire data structure for every operation, which becomes exceedingly costly. To avoid this, try to always unbundle and bundle regardless of the situation. In other words don't place one of these nodes in a Case Structure when you don't have the partner node in that same Case Structure.

The better solution in your case would be to always unbundle the data, but then to decide whether you bundle back in the same untouched data or the modified data. LabVIEW's a lot better at processing that.

It's true that the inplace element structure would save you an extra allocation when you index and then replace array subset. Here LabVIEW always makes a temporary copy. However, I'm guessing this is not the cause of the big problem you are seeing. I'm attaching a modified version of your VI below. Note that I really didn't spend much time analyzing this except to make sure the buffer allocation dots went away for the main Tree Data structure. You should definitely double-check everything to make sure it functions the same as it did.

Thanks for the reply and the VI. Unfortunately, I am still stuck on 8.20 at this time. Any chance you can post an 8.20 version of the VI or a screenshot of the important changes to the BD?

I will try to edit the VI based on your comments above in the meantime.

Thanks again,

-John

Link to comment

Something just smacked me right in the face w/ regards to inplaceness and memory structures and variant attributes.

I'm assuming that if you try to store data in the Variant DB that the compiler can never figure out if you are doing in-place operations/modifications of an already existing attribute.

Does anyone have feedback on that?

The reason being that I have created some modifications to the TreeAPI that allow you to embed data with a specific tag and have it travel around with it. The problem at the moment is that I store the data in a Variant DB.

I'll post the code to the CR discussion board soon.

Link to comment

QUOTE(Norm Kirchner @ Nov 13 2007, 11:45 AM)

Something just smacked me right in the face w/ regards to inplaceness and memory structures and variant attributes.

I'm assuming that if you try to store data in the Variant DB that the compiler can never figure out if you are doing in-place operations/modifications of an already existing attribute.

Does anyone have feedback on that?

The reason being that I have created some modifications to the TreeAPI that allow you to embed data with a specific tag and have it travel around with it. The problem at the moment is that I store the data in a Variant DB.

I'll post the code to the CR discussion board soon.

I am wondering the same thing. How can I do an 'InPlace' edit to my structure if that edit is to replace a string with a longer one? Or, add an element to a sub array of a cluster? This seems like it would force a memory allocation. My problem is finding a way to represent the data a Tree control can contain but outside the Tree Control itself. This data is quite complex when you get into multiple child rows and cell BG colors. I used to just write the data to the tree 'on the fly' but when you have 80 instances of the VI with the Tree all running in parallel and you don't need to display any of them unless a user chooses to, it seemed like a better idea to store the 'source' data in an array of clusters and then only write to the tree when it is displayed.

Anyone have a better idea? I am about to abandon the tree control altogether...

Link to comment

I have always advocated not using the display of the tree to store the data, but rather storing the data in a separate source and merely using the tree as a tool to display that data when needed, which goes in line w/ what you are trying to do.

W/ the implementation that I'm working with, though, you would still use the tree to "assist" in storing the data, but only really format the tree when needed.

Maybe this code snippet will help show what I mean when I say "assist".

I would still recommend using the tree to help maintain the relationships, because doing this in the external structure may prove dicey (hence the reason I went this route).

[Attached screenshot: post-208-1194985856.jpg]

Link to comment

QUOTE(jlokanis @ Nov 13 2007, 02:11 PM)

I am wondering the same thing. How can I do an 'InPlace' edit to my structure if that edit is to replace a string with a longer one? Or, add an element to a sub array of a cluster? This seems like it would force a memory allocation. My problem is finding a way to represent the data a Tree control can contain but outside the Tree Control itself. This data is quite complex when you get into multiple child rows and cell BG colors. I used to just write the data to the tree 'on the fly' but when you have 80 instances of the VI with the Tree all running in parallel and you don't need to display any of them unless a user chooses to, it seemed like a better idea to store the 'source' data in an array of clusters and then only write to the tree when it is displayed.

Anyone have a better idea? I am about to abandon the tree control altogether...

Well, if you need to add data, you need to add data. That memory has to come from somewhere. There is a distinction, however, between resizing an existing buffer and allocating a completely new one. You can't allocate a new buffer with the inplace element structure, but you can resize that buffer. LabVIEW's in charge of resizing buffers for you so you don't have to think about it. I'm pretty sure (although this is hearsay) that what it does is resize the buffer to be bigger than you requested so that there's room to grow. Then when you grow beyond those bounds, it resizes again. That way it avoids constant allocation.

But if you don't trust that or want to customize it, you could implement the same functionality. Just initialize a very large array of elements and have a field in your cluster that denotes where you are in that array and its size. Then use Replace Array Subset instead of Build Array until you run out of room. Then use Reshape Array to resize the array to be twice as big, for instance. That's a sustainable model that requires relatively little memory allocation and should run smoothly.
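For what it's worth, here is a minimal sketch of that pre-allocate-and-double pattern, written in Python purely as an analogy (in LabVIEW this would be Initialize Array, Replace Array Subset, and an occasional Reshape Array; the class and field names here are invented):

class GrowableBuffer:
    def __init__(self, capacity=1024):
        self.data = [None] * capacity   # one big allocation up front (Initialize Array)
        self.count = 0                  # "where you are" field carried alongside the array

    def append(self, element):
        if self.count == len(self.data):
            # out of room: double the allocation (Reshape Array to twice the size)
            self.data.extend([None] * len(self.data))
        self.data[self.count] = element  # Replace Array Subset: no per-append reallocation
        self.count += 1

    def contents(self):
        return self.data[:self.count]    # only the filled portion is meaningful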

Regarding the Variant Database Norm's speaking of: Yes, I believe LabVIEW will be forced to make copies of items when you check them out. An overkill method might be to store many single-element queue refs in the Variant Database instead of the data directly. Then you're copying a queue refnum (32 bits) instead of the whole thing. That's kind of a pain, I admit.
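As a rough illustration of that indirection (again a Python analogy with invented names): the lookup table holds only small single-element queue references, and the payload itself is checked out, modified, and put back, so reading the table never copies the large data.

import queue

registry = {}  # tag -> single-element queue holding the payload

def store(tag, payload):
    q = queue.Queue(maxsize=1)
    q.put(payload)
    registry[tag] = q              # only the small queue reference lives in the table

def modify(tag, update_fn):
    q = registry[tag]
    payload = q.get()              # check the data out (the queue is now empty)
    q.put(update_fn(payload))      # put the modified data back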

Link to comment

QUOTE(Norm Kirchner @ Nov 13 2007, 12:31 PM)

I see you are using Variant Attributes. While this is a natural data structure for representing a tree of data, I have found that accessing/adding/updating attribute elements is a very slow process in LV. I suspect that this is due to the memory allocation issue again. This would be fine for small datasets, but in my case the tree is displaying a 24-hour looping test with over 100 elements per loop and 100s of iterations in 24 hours, so the dataset gets huge fast. Couple that with the multiple instances of this structure (50-80 on average) and my poor 8-core 4GB PC comes to a grinding halt!

I am thinking I may need to start over with this whole process.

Link to comment

QUOTE(jlokanis @ Nov 13 2007, 03:36 PM)

I see you are using Variant Attributes. While this is a natural data structure for representing a tree of data, I have found that accessing/adding/updating attribute elements is a very slow process in LV. I suspect that this is due to the memory allocation issue again. This would be fine for small datasets, but in my case the tree is displaying a 24-hour looping test with over 100 elements per loop and 100s of iterations in 24 hours, so the dataset gets huge fast. Couple that with the multiple instances of this structure (50-80 on average) and my poor 8-core 4GB PC comes to a grinding halt!

I am thinking I may need to start over with this whole process.

I saved the VI I posted in 8.2, so you should be able to look at it. This doesn't require the inplace structure and should by itself give you a big memory usage improvement.

If you do rethink this, then I would suggest finding a way to write the important data to file, and then be able to read the file and build a tree-friendly data structure from that. Your current architecture requires you to read and then update things, which would be difficult with File IO, so it'd only work if you find a way to simply write data to file without worrying about what's already been written.

For instance, if you have a property called Symbol for a tree that you want to update occasionally, then you should write the tree tag and the symbol to the file as a pair. Then if you need to update it later, you don't change what you wrote to the file the first time, but you just append a new symbol and tree tag pair to the file. When you get around to displaying the tree, you read all the messages from the file and only take the most recent messages for each tree tag. In other words, you'd read a message to update the tree symbol twice, and the second message would overwrite the first.

This would help you completely avoid memory allocation issues. The downside is that you have to spend a little extra time catching up when the user's ready to view the tree.
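A minimal sketch of that append-only, last-write-wins idea, in Python with a made-up file layout and names (one tab-separated tag/value pair per line):

def append_update(path, tree_tag, symbol):
    # never rewrite earlier entries; just append the newest value for this tag
    with open(path, "a") as f:
        f.write(f"{tree_tag}\t{symbol}\n")

def latest_values(path):
    latest = {}
    with open(path) as f:
        for line in f:
            tag, value = line.rstrip("\n").split("\t", 1)
            latest[tag] = value    # later messages overwrite earlier ones for the same tag
    return latest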

Link to comment

QUOTE(Anish Prabu @ Nov 17 2007, 08:05 AM)

I have modified your code a little bit to avoid the unnecessary memory allocations. See the code for details. The array that is unbundled is passed out through a tunnel and edited (Replace Array Element), so LabVIEW duplicates the memory for the array.

The modified code makes use of the same memory of the complex data structure.

- Anish Prabu T.

CLA

Thanks! This looks like the fewest buffer allocations possible.

For those of you with experience with the map class, will that really improve the memory usage of this complex array? I thought I remembered reading somewhere that LV stores arrays as linked lists already, so using the map class to construct a linked list would be functionally equivalent.

Link to comment

  • 4 weeks later...

Just an update:

It was not memory fragmentation after all, but rather a resource leak caused by this little bugger:

[Attached screenshot: post-2411-1197933305.jpg]

Always always ALWAYS close the reference that this thing returns! :headbang:

So, in the end, as usual the problem was my own creation... :oops:

Time to go drink many :beer: :beer: :beer: ....

Link to comment

QUOTE(jlokanis @ Dec 17 2007, 03:17 PM)

Just an update:

It was not memory fragmentation after all, but rather a resource leak caused by this little bugger:

[Attached screenshot: post-2411-1197933305.jpg]

Always always ALWAYS close the reference that this thing returns! :headbang:

So, in the end, as usual the problem was my own creation... :oops:

Time to go drink many :beer: :beer: :beer: ....

Thanks for posting the cause; you've saved me from a big headache, since I made the same mistake.

Matt W

Link to comment

QUOTE(jlokanis @ Dec 18 2007, 12:17 AM)

[Attached screenshot: post-2411-1197933305.jpg]

Always always ALWAYS close the reference that this thing returns! :headbang:

Yes, absolutely!

But if we go this far, we might as well go so far as to say:

Always close any references you open!

This should save us from a huge number of memory leaks. :thumbup:

(I know, there are some cases in which you don't actually need to close a certain reference, but it won't do any harm to take this into your own hands and be sure it's really closed.)

Link to comment

QUOTE(jlokanis @ Nov 13 2007, 09:11 PM)

Anyone have a better idea? I am about to abandon the tree control altogether...

Hi,

You could also, instead of creating a copy of your original data to compare with, create a copy of the tree data to compare with. Most of the tree's data can be represented by a 2D array of strings and a 1D array of indentation values.
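For example (a made-up sketch in Python-style literals), the visible part of the tree could be held as a pair of parallel arrays like this:

# rows of cell text (one inner list per tree row, one string per column)
rows = [
    ["DUT 1",  "Running", ""],
    ["Loop 1", "Pass",    "12 s"],
    ["Loop 2", "Fail",    "14 s"],
    ["DUT 2",  "Idle",    ""],
]
# indentation level per row: 0 = top-level item, 1 = child of the item above
indent = [0, 1, 1, 0]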

Joris

Link to comment
