
Variant memory buffers



Hi,

I'm wondering about the mysteries of variant memory buffer allocations, as they seem to differ from what I would expect. Since LabVIEW doesn't have a memory manager, buffers contain the data last written to them, and the next data written to a buffer replaces whatever it contained before. This is true for normal data types but doesn't seem to be true for variants.

Take a look at the attached code. An array is created at each iteration of the loop, every 5 seconds, alternating between 10M elements on one iteration and zero elements on the next. When I do not convert the array to a variant, LabVIEW behaves as I would expect: the old buffer gets overwritten by the new buffer every iteration. This can be verified from the Windows Task Manager. When the VI is stopped, LabVIEW doesn't do any cleanup; the buffer allocations remain as they were when the VI was running.

When I add the conversion to a variant, LabVIEW seems to behave differently. When the big array is converted to a variant, LabVIEW reserves memory. However, this memory never gets released, even when I replace the array with an empty one. Only when I call the Request Deallocation node does LabVIEW seem to free the memory for the variant.
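To make the observation concrete, here is a rough Python model of what the variant appears to be doing. This is purely an illustrative sketch of the behaviour I'm seeing, not LabVIEW's actual implementation; the class and field names are my own invention:

```python
class Variant:
    """Models a variant whose internal buffer capacity never shrinks."""

    def __init__(self):
        self._buffer = bytearray()   # retained backing store
        self._length = 0             # logical size of the current data

    def write(self, data: bytes):
        if len(data) > len(self._buffer):
            self._buffer = bytearray(data)    # grow: reallocate the buffer
        else:
            self._buffer[:len(data)] = data   # shrink: reuse, keep capacity
        self._length = len(data)

    @property
    def capacity(self) -> int:
        return len(self._buffer)


v = Variant()
v.write(b"x" * 10_000_000)   # big array: capacity grows to 10M
v.write(b"")                 # empty array: capacity stays at 10M
print(v.capacity)            # 10000000
```

In this model, writing the empty array reduces only the logical length; the 10M-byte allocation is retained until something explicitly frees it, which is what Task Manager shows.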

Is this a bug or a feature?

[Attachment: post-4014-1212063720.png]


QUOTE (Tomi Maila @ May 29 2008, 07:27 AM)


First, saying that LabVIEW does not have a memory manager is a bit of a stretch. It's not a garbage-collecting memory manager like Java's, and it consequently requires the application to be careful about memory allocation/deallocation to avoid memory leaks during operation, but there is nevertheless a layer between LabVIEW and the CRT memory allocation routines that I would consider a sort of memory manager. It used to be a lot smarter in the old days, with the help of a memory allocator called Great Circle, to compensate for the inadequacies of what Windows 3.1 could provide for memory management.

The behaviour you see is quite likely a feature. I come to this conclusion for two reasons. First, it is similar to how LabVIEW uses memory for data buffers when calling subVIs: that memory is also recycled and often not really deallocated. Second, the fact that Request Deallocation cleans it up would definitely speak against a leak. Leaks are memory whose reference the application has lost for some reason, and that seems not to be the case here.

The variant most likely keeps the array handle and, on negative resizing, simply adjusts the dimSize without invoking the memory manager layer to resize that handle. An interesting test would be to see what happens if the small variant does not contain 0 elements but a few instead. I could imagine that on an incoming 0-size (or maybe very small) array the existing internal buffer is reused (with the incoming data copied into the internal buffer for small sizes), but on larger arrays the incoming handle is used instead and the internal handle gets really deallocated.
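The two-branch policy I'm speculating about could be sketched like this in Python. The threshold value and all names are my assumptions for illustration, not LabVIEW internals:

```python
SMALL_THRESHOLD = 4  # assumed cutoff, purely illustrative


class VariantHandle:
    """Models a variant that reuses its handle for small incoming arrays
    but takes ownership of a fresh buffer for large ones."""

    def __init__(self):
        self.buffer = bytearray()  # retained internal handle
        self.dim_size = 0          # logical element count (dimSize)

    def assign(self, incoming: bytes):
        if len(incoming) <= SMALL_THRESHOLD and len(incoming) <= len(self.buffer):
            # Small input: copy into the existing handle, adjust dimSize only.
            self.buffer[:len(incoming)] = incoming
        else:
            # Large input: replace the handle; the old allocation is freed.
            self.buffer = bytearray(incoming)
        self.dim_size = len(incoming)


v = VariantHandle()
v.assign(b"x" * 100)              # large: handle replaced, capacity 100
v.assign(b"ab")                   # small: handle reused, capacity stays 100
print(len(v.buffer), v.dim_size)  # 100 2
```

Under this hypothetical policy a few-element array would behave like the empty one (buffer reused, capacity retained), which is exactly what the proposed test would distinguish.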

Rolf Kalbermatter


QUOTE (rolfk @ Jun 3 2008, 09:09 AM)

An interesting test would be to see what happens if the small variant does not contain 0 elements but a few instead. I could imagine that on an incoming 0-size (or maybe very small) array the existing internal buffer is reused (with the incoming data copied into the internal buffer for small sizes), but on larger arrays the incoming handle is used instead and the internal handle gets really deallocated.

Good idea. I ran the test: both arrays stay in memory, the large and the small. So it seems using variants can lead to cumulative memory consumption issues.


QUOTE (Tomi Maila @ Jun 3 2008, 03:04 AM)

Good idea. I ran the test: both arrays stay in memory, the large and the small. So it seems using variants can lead to cumulative memory consumption issues.
Not really. I mean, there's no way to just continually get more and more allocation of memory on successive calls to the VI. There are two buffers. The first one gets large, gets swapped into the second. Then the VI runs again. The first one gets large and swaps with the second. When something small comes down the wire, it reduces the first and swaps with the second. Now something small comes down the wire again. Now both buffers are back to small.
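That swapping scheme can be modeled in a few lines of Python. This is just my reading of the explanation above; the buffer names are assumptions:

```python
def call_vi(buffers, incoming):
    """One call of the VI: incoming data lands in the terminal buffer,
    which is then swapped with the internal buffer, so at most two
    allocations ever exist at once."""
    buffers[0] = incoming                            # terminal takes the new data
    buffers[0], buffers[1] = buffers[1], buffers[0]  # swap with internal buffer
    return buffers


bufs = [b"", b""]                  # [terminal, internal]
call_vi(bufs, b"x" * 1_000_000)    # big data: one buffer becomes large
call_vi(bufs, b"x" * 1_000_000)    # big again: both buffers large
call_vi(bufs, b"")                 # small data: one buffer shrinks
call_vi(bufs, b"")                 # small again: both buffers back to small
print([len(b) for b in bufs])      # [0, 0]
```

The worst case is two large buffers alive at once; memory never accumulates beyond that, which matches the claim that successive calls don't keep allocating more.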

If you're in a case where this is really a concern -- i.e., when a large piece of data comes through the system only very rarely AND the subVI doesn't get called again very often -- the Request Deallocation prim will deflate all your terminals.


This swapping method actually sounds pretty nice, as no new memory buffers are needed -- at least as long as the buffers are not too big and we are not too close to the LabVIEW memory limit (actually my team is very close to the physical memory limits all the time).

Is there any way to access the content of the swapped memory buffer, i.e. the buffer that used to be the output of "To Variant" but after execution becomes the input of "To Variant"?

Hmm... I decided to make a test of using to variant in a loop (see below). It ended up crashing LabVIEW, which was something I was actually expecting...

[Attachment: post-4014-1212560732.png]

QUOTE (Tomi Maila @ Jun 4 2008, 01:26 AM)
Hmm... I decided to make a test of using to variant in a loop (see below). It ended up crashing LabVIEW, which was something I was actually expecting...
Yeah, that's a primitive that really ought to have error in and error out terminals. It doesn't handle infinite memory allocation (which is what you've done here). Probably worth filing a CAR. Do you want to do that or should I?