Error 2: Memory is full - but it isn't



Hi,

 

I've recently run into problems with an application that creates a lot of objects. Halfway through, it errors out:

Error 2 occurred at New Data Value Reference in some.vi
Error: 2 (LabVIEW:  Memory is full.
=========================
NI-488:  No Listeners on the GPIB.)

This happens both in the dev env and in an executable. I know it's not actually running out of memory because the PC has several GB free when it happens and the application is using less than 2 GB and running on 64-bit. It's also not talking GPIB.

 

The application goes through a number of measurement files, analyses (and closes) them, and creates up to 20 result objects per file (plus another 20 for the base class). Those are all by-reference objects (GOOP4, DVR). The reason I'm storing my results in by-reference objects is that I have to remember them all in a dynamically growing array and I'd much rather use U32 as a type for that array than a 1kB cluster with the actual results.
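To illustrate the idea outside of LabVIEW, here is a rough Python sketch of the layout (my own analogy, not actual GOOP4 code; all names are invented): each "object" is just a small handle into a registry that owns the full result record, so the growing array stores handle-sized entries instead of ~1kB clusters.

_registry = {}   # owns the full result records (the ~1kB clusters)
_next_ref = 0

def new_data_value_reference(value):
    # Hand out a small integer handle; the big record stays in the registry.
    global _next_ref
    _next_ref += 1
    _registry[_next_ref] = value
    return _next_ref

results = []                                  # dynamically growing array of handles
for file_index in range(35800):
    record = {"file": file_index, "values": [0.0] * 128}  # stand-in for the cluster
    results.append(new_data_value_reference(record))      # appends a few bytes, not 1kB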

 

The point where it falls over is fairly reproducible, after having opened around 35800 files. The number of objects created at this point is around 1 million. The first thing I did to debug was to open a new VI and create 5 million of those objects in a loop - of course that worked. DETT didn't help much either; I got over 200 MB of log files just from logging User Events in the two create methods.

 

Now I'm a little bit stuck and out of ideas. The error occurring in "New Data Value Reference" eliminates all the usual traps related to allocating array memory in contiguous blocks that could trigger an out-of-memory error... :frusty:

Unfortunately, I can't easily share the original code that generates the error.

 

Any suggestions?


Hi,

 

I know it's not actually running out of memory because the PC has several GB free when it happens and the application is using less than 2 GB and running on 64-bit.

 

I presume you meant that you are running a 64-bit OS. However, note that a 32-bit application running in 64-bit Windows is still limited to 2 GB RAM by default: http://stackoverflow.com/questions/639540/how-much-memory-can-a-32-bit-process-access-on-a-64-bit-operating-system

 

 

It's also not talking GPIB.

 

This is a historical relic of LabVIEW. The same error code was used to represent two different errors in different modules (Code 2 could mean LabVIEW-out-of-memory, OR GPIB-has-no-listeners). Unfortunately, that means LabVIEW now can't differentiate between the two errors. So, both error messages are displayed: http://forums.ni.com/t5/LabVIEW-Idea-Exchange/More-sensible-error-messages/idi-p/2244422

Edited by JKSH

I presume you meant that you are running a 64-bit OS. However, note that a 32-bit application running in 64-bit Windows is still limited to 2 GB RAM by default: http://stackoverflow.com/questions/639540/how-much-memory-can-a-32-bit-process-access-on-a-64-bit-operating-system

 

I meant that I'm using the 64-bit version of LabVIEW to run and build the application, on 64-bit Windows.

The 2 GB memory usage was just coincidental; it could just as well have been 4 GB.

Edited by ThomasGutzler

I haven't seen this error, but I have seen "Not enough memory to complete this operation". That error happened on a corrupted VI. I kept deleting things until the only thing open was this single VI with a blank block diagram, but I still got the error. After playing around with it enough, the error went away and I was able to compile. Copying the contents of the VI to a new one fixed it as well. Not sure if these are related.


Now I'm a little bit stuck and out of ideas. The error occurring in "New Data Value Reference" eliminates all the usual traps related to allocating array memory in contiguous blocks that could trigger an out-of-memory error... :frusty:

I'm not sure of that one. You stated that you add items to a dynamically growing array. Now in your RAM you need a contiguous chunk of memory that is large enough to hold the entire array. Even if you have 16 GB of RAM in your computer, if the largest free chunk is 500 MB, then that is the maximum amount of memory your array can allocate. Otherwise you get an "out of memory" exception. Also be aware that with dynamically allocated arrays, if the chunk is too small to hold the next element, the entire array will be moved in memory to a location that is large enough to hold it. This temporarily doubles the amount of required memory, as a copy is made in the process.

Could you initialize your array with a decent number of elements and try again?
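As a text-based illustration of the effect (a quick Python sketch, not LabVIEW; absolute timings will vary by machine): growing element by element forces periodic reallocate-and-copy cycles, while preallocating reserves the whole block once.

import time

N = 1_000_000

t0 = time.perf_counter()
grown = []
for i in range(N):
    grown.append(i)        # backing buffer is reallocated and copied as it fills
t_grow = time.perf_counter() - t0

t0 = time.perf_counter()
prealloc = [0] * N         # one allocation up front
for i in range(N):
    prealloc[i] = i        # fill in place, no further reallocation
t_fill = time.perf_counter() - t0

print(f"append-grow: {t_grow:.3f} s, preallocate-and-fill: {t_fill:.3f} s")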

 

I presume you meant that you are running a 64-bit OS. However, note that a 32-bit application running in 64-bit Windows is still limited to 2 GB RAM by default

That's not entirely true. LabVIEW 32-bit running on a 64-bit OS can allocate up to 4 GB of memory, as stated in this document: http://digital.ni.com/public.nsf/allkb/AC9AD7E5FD3769C086256B41007685FA

 

On a 64-bit Windows operating system, LabVIEW 32-bit can access up to 4 GB of virtual memory without modification.
Edited by LogMAN

 

LabVIEW 32-bit running on a 64-bit OS can allocate up to 4 GB of memory, as stated in this document: http://digital.ni.com/public.nsf/allkb/AC9AD7E5FD3769C086256B41007685FA

On a 64-bit Windows operating system, LabVIEW 32-bit can access up to 4 GB of virtual memory without modification.

 

Ah, I didn't realize that LabVIEW had LAA support. Thanks for sharing!


You stated that you add items to a dynamically growing array. Now in your RAM you need a contiguous chunk of memory that is large enough to hold the entire array. [...] Also be aware that with dynamically allocated arrays, if the chunk is too small to hold the next element, the entire array will be moved in memory to a location that is large enough to hold it. This temporarily doubles the amount of required memory, as a copy is made in the process.

 

I'm aware of that, and that is why I chose to create an array of references to DVRs containing clusters instead of an array of clusters. In my specific case, the out-of-memory error pops up when the array reaches a size of about 1 million. I find it hard to believe that there isn't a single contiguous block of 4 MB available - every time I run the program, even on different PCs. (We know that Windows suffers from fragmentation issues, but it can't be *that* bad :))

 

Also, if I can trust the error message, the source of the error is inside "New Data Value Reference". That is not where the array is being grown.

The cluster I'm feeding into "New Data Value Reference" has a size of 76 bytes when it's empty. What could possibly cause that to fail?

 

Edit:

I caught my software when the error occurred in the development environment and paused execution.

Then I opened a new VI with the following code:

[Image: test VI wiring a constant into New Data Value Reference]

 

It produced this output whether I wired the I8 constant or the cluster containing 10x I64 constants:

[Image: resulting Error 2 output]

 

To me that means it's not an obvious memory issue but some sort of DVR-related weirdness that only NI engineers with the highest security clearance can understand... or is it?

Edited by ThomasGutzler

This little snippet will create 1,048,576 (2^20) DVRs and then error out at "New Data Value Reference".

[Image: snippet creating DVRs in a loop until the error occurs]

 

I know I said above that I've successfully created 5 million objects in a loop. I can't reproduce that.

What I can reproduce is an out-of-memory error after creating 1,048,576 GOOP4 objects.

 

So, what's the ini key to increase the number of DVRs you can create in parallel? :)

Edited by ThomasGutzler

You have over a million open DVRs at the same time? Let me guess... 1,048,576 open refnums? Isn't that a rather suspicious number... a power of 2. Hmm...

 

This simple VI will fail after exactly 1,048,576 iterations:

[Image: simple VI allocating DVRs in a loop]

 

 

 

LabVIEW uses a rather clever algorithm to allocate DVRs (or any refnum kind) in such a way as to EFFICIENTLY support this trick: you can allocate a DVR, release it, and then allocate a new one, and you won't get the same refnum. That means your other threads that might still be holding onto the old refnum will get errors about the refnum having been disposed instead of messing with the new refnum.

 

I haven't dug into all the details of the algorithm, but the result is that you can only have 1,048,576 refnums of the same type open at any given time. After that, we can't allocate more until you free some -- you'll still get a unique refnum when you do this. 
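Purely to give a feel for how such a scheme can work, here's a tiny Python sketch (my own guess at the general idea, definitely NOT the actual LabVIEW implementation; the bit packing and names are invented): a fixed table of 2^20 slots plus a per-slot generation counter means a released refnum is never re-issued, and a stale handle is detected instead of aliasing a new one.

MAX_SLOTS = 1 << 20          # 1,048,576 concurrent refnums of one type

class RefnumTable:
    def __init__(self):
        self.free = list(range(MAX_SLOTS))   # slot indices still available
        self.generation = [0] * MAX_SLOTS    # bumped on every release

    def allocate(self):
        if not self.free:
            raise MemoryError("no free slots")    # the wall you hit at 2^20
        slot = self.free.pop()
        return (self.generation[slot] << 20) | slot   # generation + slot index

    def release(self, refnum):
        slot = refnum & (MAX_SLOTS - 1)
        if (refnum >> 20) != self.generation[slot]:
            raise ValueError("refnum already disposed")   # stale handle caught
        self.generation[slot] += 1    # invalidates all outstanding copies
        self.free.append(slot)

table = RefnumTable()
r1 = table.allocate()
table.release(r1)
r2 = table.allocate()     # same slot, new generation...
assert r1 != r2           # ...so you never see the same refnum twice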

 

So, that's the way it works. Congratulations... you're the first user in 15 years I've ever heard complain about this --- I hadn't had to dig into this before now. 

 

 

So... 

 

I've recently run into problems with an application that creates a lot of objects.

 

That's "by-reference objects." Can I ask why you have all your objects as refnums? What application do you have that requires that sort of architecture? 

 

Let me clarify my question...

 

 I'd much rather use U32 as a type for that array than a 1kB cluster with the actual results.

 

If your array is an array of objects, then each element of the top-level array will be pointer sized... on a 64-bit system, that's a U64. 

Edited by Aristos Queue

So now we know that reference-based GOOP4/G# (the DVR type) only supports 2^20 objects.

So if you need a reference-based architecture, GOOP3 would work, since it's a functional global at the bottom.

It is much, much slower, but it should work.

 

If only there were an ini-file key that you could change ;-)


LabVIEW [...] can only have 1,048,576 refnums of the same type open at any given time. After that, we can't allocate more until you free some. 

 

So, that's the way it works. Congratulations... you're the first user in 15 years I've ever heard complain about this --- I hadn't had to dig into this before now.

 

You're welcome :)

I'm sure it was just a matter of time anyway.

 

So... 

 

I've recently run into problems with an application that creates a lot of objects.

 

That's "by-reference objects." Can I ask why you have all your objects as refnums? What application do you have that requires that sort of architecture? 

 

I characterise performance of optical devices. To do that I can specify a number of test parameters. Each parameter requires a certain set of measurements. Some measurements are shared between parameters. Some test parameters require data from multiple measurements to calculate their result.

 

Results are calculated in a 2-step process. First, all measurements are analysed and the intermediate results are stored in the memory of the test parameter - one object per result. This is where I'm building my array of references to objects. Measurements can be linked to multiple test parameters. If a measurement is linked to two test parameters, it gets analysed twice in slightly different ways (that's two objects from one measurement). The final result of a parameter is calculated by finding the worst case of all intermediate results stored in its memory.

 

This many-to-many relationship between measurement data and test results makes it very difficult to split the pool of measurement data in smaller parts without breaking any of the relationships with the test parameters.

 

Time is critical because it's a production environment and it is most efficient to test all parameters in a single run rather than splitting them up into multiple runs. For the same reason (time) I decided to keep all intermediate results in memory rather than loading measurements from file and analysing them as I go along. Being unaware of the 1M reference limit, I couldn't see a problem with this design.

 

 I'd much rather use U32 as a type for that array than a 1kB cluster with the actual results.

 

If your array is an array of objects, then each element of the top-level array will be pointer sized... on a 64-bit system, that's a U64. 

 

My array is an array of GOOP4 objects - not pointers. A GOOP4 object creates a DVR of its attributes and typecasts it to a U32, which is stored in a cluster. That's 4 bytes.


 

This many-to-many relationship between measurement data and test results makes it very difficult to split the pool of measurement data in smaller parts without breaking any of the relationships with the test parameters.

 

 

Perfect use-case (on the surface) for a relational database.


Absolutely, so many times I thought how much easier this would all be if only I had a database...

However, it would slow things down a little bit because I'd have to host it on the network. I can see a case study coming :)

Naah. You only want to replace your home-grown, memory/handle-hungry, local object database. We all know of a self-contained, blisteringly fast, serverless RDBMS that would fit the bill nicely :)  I don't understand folks who don't use it, even if it's just for error logging.


However, it would slow things down a little bit because I'd have to host it on the network. I can see a case study coming :)

Why couldn't you install the database on the same machine? Local comms should be fast, and depending on how complex your math is, you might be able to make the DB take care of it for you in the form of a view or function.

 

My array is an array of GOOP4 objects - not pointers. A GOOP4 object creates a DVR of its attributes and typecasts it to a U32, which is stored in a cluster. That's 4 bytes.

Unless I'm mistaken, he was pointing out that the in-memory representation of the data is in the form of pointers (http://www.ni.com/white-paper/3574/en/#toc2, section "What is the in-memory layout..." or http://zone.ni.com/reference/en-XX/help/371361H-01/lvconcepts/how_labview_stores_data_in_memory/). So you have an array of handles, not an array of the actual data. If you replace an array element, you're probably just swapping two pointers, for example.
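You can see the same effect in any language with reference semantics. A quick Python sketch (just an illustration of "array of handles", not LabVIEW internals):

import sys

small = [i for i in range(1000)]                       # 1000 small elements
big = [{"data": [0.0] * 1000} for i in range(1000)]    # 1000 records of ~8 kB each

# The lists themselves are roughly the same size: each slot holds one
# pointer-sized reference, no matter how big the referenced element is.
print(sys.getsizeof(small), sys.getsizeof(big))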


Why couldn't you install the database on the same machine?

 

I might want to run the analysis on "any" computer without having to trouble the user with installing a DB server.

 

Unless I'm mistaken, he was pointing out that the representation of data in memory is in the form of pointers (http://www.ni.com/white-paper/3574/en/#toc2, section "What is the in-memory layout..." or http://zone.ni.com/reference/en-XX/help/371361H-01/lvconcepts/how_labview_stores_data_in_memory/). So you have an array of handles, not an array of the actual data. If you replace an array element you're probably just swapping two pointers, as an example.

 

Of course, this makes sense. I can just convert my GOOP4 class into a native by-value class and put those directly in my array; the resulting array, if it had 10 class objects in it, would require a block of memory 80 bytes long (on 64-bit).

 

And I checked: I can have more than 2^20 by-value objects :)

Hooray!


I might want to run the analysis on "any" computer without having to trouble the user with installing a DB server.

 

An SQLite database consists of a single standalone file, which your application can place anywhere it wants. No need to install a server. (That's why it's called "Lite")

 

There's even the option of an in-memory SQLite database (which exists in RAM only, not on disk -- the user will never know that SQLite was involved)
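For example, in Python's built-in sqlite3 binding (just to show the idea; the table and column names here are made up, and the ":memory:" filename is core SQLite, so any binding supports it):

import sqlite3

conn = sqlite3.connect(":memory:")    # RAM-only database; nothing touches disk
conn.execute("CREATE TABLE results (parameter_id INTEGER, value REAL)")
conn.executemany("INSERT INTO results VALUES (?, ?)",
                 [(7, 0.42), (7, 0.57), (9, 0.13)])

# Worst case per test parameter, computed by the database itself:
for row in conn.execute(
        "SELECT parameter_id, MAX(value) FROM results GROUP BY parameter_id"):
    print(row)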

