
Memory management



Do any of you have experience with the Request Deallocation function? I'm wondering what proper use is.

One of my beta testers for an application I'm writing occasionally reports out of memory errors. When the application instance reports this, it has usually climbed up to about 800-900 MB of memory.

I guess I have a few questions with regard to LabVIEW and memory, as I've never created any LabVIEW applications that manage large data spaces. Is there a hard limit on what LabVIEW 32-bit is capable of? Am I looking at a 3 GB or so ceiling? I'd never expect to have more than maybe 500 MB of data resident in memory at one time for extreme cases, but the application will be managing gigabytes' worth of data, which is cached to disk and can be called up on demand for display purposes.

I suspect that what I'm really running into is running out of contiguous memory for allocations, as there is ample memory available on the system. Does requesting deallocation help keep the memory space unfragmented, or is that a function of the OS that LabVIEW essentially has no control over?

-m

Do any of you have experience with the Request Deallocation function? I'm wondering what proper use is.

As I understand it...

It can be used for those dynamically loaded, number-crunching, memory-pig plug-ins. When you load one of those monsters it can allocate a lot of memory, and as long as it is resident, the memory stays allocated. By using "Request Deallocation" at the tail end of the plug-in's run, it can try to give back what was allocated before it ran.

There were other uses in the old days, but gradually they have been downgraded (it doesn't work as well as it used to).

I think there was also a "LazyDeallocation" switch, but I don't remember if that was something added or taken away.

Ben


Do any of you have experience with the Request Deallocation function? I'm wondering what proper use is.

I shall be interested in how this gets answered. I've put it in code where I knew there was a large amount of memory usage, but I've not seen performance improvements in how I've used them. A contractor we had in put them in every VI he created, but that doesn't seem right either.

Is there a hard limit on what LabVIEW 32-bit is capable of? Am I looking at a 3 GB or so ceiling?

I've had Windows XP terminate a program (which happens when it uses >= 2 GB) before LabVIEW 8.0 even reported it was unable to allocate more memory.

Tim

I shall be interested in how this gets answered. I've put it in code where I knew there was a large amount of memory usage, but I've not seen performance improvements in how I've used them. A contractor we had in put them in every VI he created, but that doesn't seem right either.

I've had Windows XP terminate a program (which happens when it uses >= 2 GB) before LabVIEW 8.0 even reported it was unable to allocate more memory.

I believe in old versions, the deallocate would try to deallocate right away. In recent versions, the VI hierarchy has to go idle. That is why I mentioned the dynamic loading case, since those VIs go idle when they finish.

I have busted through the 2 GB limit.

There is a switch for XP (the /3GB flag in boot.ini) to make the OS aware of the extra memory.

That message is saying there was not a large enough block of contiguous memory when requested. I got around that by only using a bunch of small buffers. One possible construct would be to use an array of queue refs with only a single element in each queue. As long as the OS can find a slot big enough for each queue element, it should work.
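To make that construct concrete in a text language, here is a minimal Python sketch of the same chunking idea. The chunk size, helper names, and the 8-bytes-per-element assumption are illustrative only, not anything from the original diagram:

```python
import struct

CHUNK = 1_000_000  # elements per buffer; small enough that the OS can always find room

def alloc_chunked(total_elements):
    """Allocate storage as many small independent buffers instead of one big one.

    Mirrors the array-of-single-element-queues construct: each chunk is a
    separate allocation, so the OS never has to find one huge contiguous
    block for the whole data set.
    """
    chunks = []
    remaining = total_elements
    while remaining > 0:
        n = min(CHUNK, remaining)
        chunks.append(bytearray(8 * n))  # 8 bytes per element, e.g. one double
        remaining -= n
    return chunks

def read_element(chunks, i):
    """Index into the chunked storage as if it were one flat array of doubles."""
    block, offset = divmod(i, CHUNK)
    return struct.unpack_from("<d", chunks[block], 8 * offset)[0]
```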

Ben


Do any of you have experience with the Request Deallocation function?

Lots of experience. For some reason I keep trying to use it, even though it rarely does anything. As Ben says, the whole calling chain has to go away before it works.

Is there a hard limit on what LabVIEW 32-bit is capable of? Am I looking at a 3 GB or so ceiling?

You're using LV 2010, right? It should already be Large Address Aware, but if you're using 8.6, take a look here.

I suspect that what I'm really running into is running out of contiguous memory for allocations, as there is ample memory available on the system.

Yes, contiguous memory is the problem with large data sets, particularly when you're trying to plot them.

Before I pull in a large amount of data, I read the available system memory, divide it in half, and then decimate my data to fit in that space. Yes, it's a WAG that even that will be enough memory (not to mention really frustrating to have lots of memory out there and not be able to use it), but it's been working pretty well so far.
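For what it's worth, that budget step is simple enough to show in text form. A Python sketch of the same WAG, assuming psutil is available as the way to query free memory (the halving and the ceiling division are the only real logic):

```python
import psutil  # assumed here as the way to query available system memory

def decimation_stride(n_samples, bytes_per_sample):
    """Pick a keep-every-Nth stride so the data fits in half the available RAM."""
    budget = psutil.virtual_memory().available // 2
    full_size = n_samples * bytes_per_sample
    if full_size <= budget:
        return 1  # everything fits, no decimation needed
    return -(-full_size // budget)  # ceiling division: smallest stride under budget
```

With a stride of k you would then keep samples 0, k, 2k, ... before handing the data to the graph.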

Before I pull in a large amount of data, I read the available system memory, divide it in half, and then decimate my data to fit in that space.

Sounds like the old DOS days, when we had to unload the first half of the program before we could run the second.

Ben


Good feedback, thanks folks.

Yes, 2010 SP1.

I think the take-home for me will be to examine my hierarchy and see which parts of it can get unloaded after they're done (and whether that even makes a difference). There is a definite one-time analysis phase which is quite dynamic from a memory standpoint. After that, processing overhead is minimal and the application is visualization of mostly static data. We'll see where this lands...


Yes, it's a WAG ...

Okay, there's one I'm not familiar with. What's WAG?

the whole calling chain has to go away

Just to make sure I understand this ('cause I've *cough* occasionally been known to be wrong)... If I have TopLevel.vi which calls SubA.vi which calls SubB.vi, and I put a request deallocation in SubB.vi, then TopLevel.vi has to go idle before the deallocation occurs?

Tim


Okay, there's one I'm not familiar with. What's WAG?

Just to make sure I understand this ('cause I've *cough* occasionally been known to be wrong)... If I have TopLevel.vi which calls SubA.vi which calls SubB.vi, and I put a request deallocation in SubB.vi, then TopLevel.vi has to go idle before the deallocation occurs?


Wild Arse Guess

Yes, TopLevel.vi has to go idle.

If B was not part of A's hierarchy and was instead loaded using Open VI Reference and then run using the Run VI invoke node, then when it completes, the deallocate would have an effect.

Ben


I shall be interested in how this gets answered. I've put it in code where I knew there was a large amount of memory usage, but I've not seen performance improvements in how I've used them. A contractor we had in put them in every VI he created, but that doesn't seem right either.

It depends on what you mean by performance. For me performance is mostly about speed, and Deallocate Memory has, if anything, only a negative effect on that. In most situations it does nothing nowadays. In earlier LabVIEW versions it was supposed to do some memory compacting, but that mostly resulted in bad slowdowns and helped little in squeezing more memory out of a machine. I believe Ben's statement that nowadays it will only affect claimed data chunks from VIs that have gone idle is correct.


Hi mje,

LabVIEW is large address aware, but that is only a benefit on a 64-bit OS. On a 32-bit OS LabVIEW is limited to 2 GB, which can be increased to 3 GB with the /3GB flag in the OS's boot.ini file. Large address aware means that if you run 32-bit LabVIEW on a 64-bit OS, it can access 4 GB.

A way that I have tried to control allocations on large data sets in the past is using data value references in LabVIEW. I was building large 2D maps for robotics, but without knowing what size I needed (and also not always wanting square arrays), I built a 2D array of references to 2D arrays. This meant that each element lookup went first to a reference and then to an address within that sub-array, but it allowed me to allocate parts of the array as and when I needed them, and so maintain a bit more control over my memory. I posted the idea at the time at http://decibel.ni.com/content/people/JamesMcN/blog/2010/02/23/using-referencing-to-make-weird-shaped-grids which talks about it in a bit more detail.
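For readers who want the shape of that in a text language, a minimal Python sketch of the reference-grid idea (the block size, class, and names are mine; a dict of NumPy arrays stands in for the 2D array of data value references). Each sub-array is only allocated the first time its block is touched:

```python
import numpy as np

BLOCK = 256  # side length of each sub-array; tune to your map

class SparseGrid:
    """2D map stored as a grid of independently allocated blocks."""

    def __init__(self):
        self.blocks = {}  # (block_row, block_col) -> 2D sub-array "reference"

    def _block(self, r, c):
        key = (r // BLOCK, c // BLOCK)
        if key not in self.blocks:
            # allocate this region of the map only on first access
            self.blocks[key] = np.zeros((BLOCK, BLOCK), dtype=np.float64)
        return self.blocks[key]

    def set(self, r, c, value):
        self._block(r, c)[r % BLOCK, c % BLOCK] = value

    def get(self, r, c):
        return self._block(r, c)[r % BLOCK, c % BLOCK]
```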

Cheers,

Mac
