
Working with arrays of potentially thousands of objects



We're working on parsing sets of data. As part of this process, I go from a TDMS file on-disk to an array of objects. Each object contains data from a single sub-test and provides access to several methods that help in analyzing and displaying the results.
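For illustration only, here is a minimal Python sketch of that structure (the real implementation is LabVIEW classes): each TDMS group becomes one object holding its channel data plus a couple of analysis helpers. The npTDMS library and the SubTest class name are my own choices, not part of the original project.

```python
# Python sketch only: the real implementation is LabVIEW classes.
# "SubTest" is a hypothetical name; npTDMS is a third-party TDMS reader.
from nptdms import TdmsFile


class SubTest:
    """Data from one sub-test plus a couple of analysis helpers."""

    def __init__(self, name, channels):
        self.name = name
        self.channels = channels  # dict: channel name -> numpy array

    def peak(self, channel_name):
        return self.channels[channel_name].max()


def load_sub_tests(path):
    """Read a TDMS file and return one SubTest object per group."""
    tdms_file = TdmsFile.read(path)
    return [
        SubTest(group.name, {ch.name: ch[:] for ch in group.channels()})
        for group in tdms_file.groups()
    ]
```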

 

The size of this array could potentially be large, with perhaps a few thousand elements.

 

We're concerned that this may cause memory management problems in LabVIEW. We'll be running tests to try it out, but we're curious if anyone else has done something similar, and what barriers or issues they may have run into.

 

We're considering alternatives, such as performing read/writes directly from the TDMS file at all times. But regardless, we may have to plot large sets of data, which would require the data to be in memory anyway.

 

Does LabVIEW have any problems automatically deallocating memory for large arrays, especially large arrays of objects? Some of my colleagues are concerned about this.

 

My instinct is that it's no different from any other array. I think that once a class is loaded into memory, all the metadata about the class is locked in and can't be removed (the information about the class's private data and methods). But I don't think that caveat applies to specific instances of the class, which can be created or destroyed like instances of any other datatype (I think).

 

I'll be looking through the Managing Large Data Sets in LabVIEW white paper. Any other references would be appreciated.

 

Thanks.


We have conquered this using DVRs, but unfortunately I didn't work on any of the projects that have done this, so I don't have a lot of familiarity with it. I'll see if I can get a chance to look at the work that was done and provide some feedback. As for the plots, just make sure you decimate the data.
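To make the decimation suggestion concrete, here is a minimal sketch (in Python/NumPy rather than LabVIEW) of min/max decimation, which reduces a long trace to roughly the number of points a plot can usefully show while keeping short spikes visible:

```python
import numpy as np


def minmax_decimate(y, max_points=2000):
    """Reduce a long 1-D trace to ~max_points while preserving peaks.

    Each output bin keeps the min and max of the samples it covers,
    so short spikes survive the decimation.
    """
    y = np.asarray(y)
    if len(y) <= max_points:
        return y
    n_bins = max_points // 2
    usable = (len(y) // n_bins) * n_bins   # drop the ragged tail
    bins = y[:usable].reshape(n_bins, -1)
    out = np.empty(n_bins * 2)
    out[0::2] = bins.min(axis=1)
    out[1::2] = bins.max(axis=1)
    return out


# e.g. plot(minmax_decimate(channel_data)) instead of plot(channel_data)
```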


I once asked a similar question on the other side, with an excellent response from AQ: http://forums.ni.com/t5/LabVIEW/LVOOP-Objects-sizes/m-p/2134884. It's been almost two years (whoa! really!? two years!? O_o) since then, so some kind of update might be necessary (as AQ stated there: "The compiler and its optimization systems were massively different in 2009, slightly different in 2010, somewhat different in 2011, and should be vastly different in 2013").

 

As for my experience, also from the application I was talking about then: I have never reached the point where I had to optimize specifically because LVOOP was wasting too much memory. A DVR in the class private data is the way to go if you want to constrain data allocations, but this is not OOP-specific; you'd do the same if you were using simple clusters.
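For anyone not familiar with DVRs (Data Value References): the idea is that the class private data holds a reference to one shared buffer rather than the buffer itself, so copying the object only duplicates a small handle. A rough Python analogue of that pattern (the class names here are mine, not an NI API) might look like this:

```python
import copy

import numpy as np


class ChannelRef:
    """Stands in for a DVR: a small handle pointing at one shared buffer."""

    def __init__(self, data):
        self._data = data  # the single shared array

    def read(self):
        return self._data


class SubTestResult:
    """The class 'private data' stores only the reference, not the big array."""

    def __init__(self, name, data):
        self.name = name
        self.data_ref = ChannelRef(np.asarray(data))


original = SubTestResult("sub-test 7", np.random.rand(1_000_000))
clone = copy.copy(original)                 # shallow copy: only the handle is duplicated
assert clone.data_ref is original.data_ref  # both objects share one buffer
```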


I expect arrays of objects have a level of indirection. By this I mean I don't think contiguous address space is required to store each element, because the type (and size) of each element can't be pre-determined and can change. You would require contiguous space for the array of pointers/handles/references (or however LabVIEW handles its indirection), but that should be trivial.

 

So if you have enough memory to store all of those objects by themselves at the same time, I wouldn't expect storing them in an array to be any different. Also note that I said "expect": I don't know for certain that this is how LabVIEW operates, but I can't see it being done any other way with dynamic types.


 

I expect arrays of objects have a level of indirection.... I don't know for certain if this is how LabVIEW operates...

 

It is. Each object is basically a pointer, either to the class default data or to the data of the specific instance (it's actually more than one pointer, but that's not really relevant), so the contiguous space required for the array should be N*pointer size. This information is documented in the LVOOP white paper.
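You can see the same indirection idea in any reference-based language. In Python, for example, a list stores one reference per element no matter how large each object's payload is; this is only an analogy to what the LVOOP white paper describes, not a claim about LabVIEW internals:

```python
import sys


class Measurement:
    def __init__(self):
        # A sizeable per-object payload (this list container alone is ~80 kB).
        self.payload = [0.0] * 10_000


objects = [Measurement() for _ in range(1000)]

# The list of objects holds only 1000 references, so it stays small
# even though the objects it points at are much larger in total.
print(sys.getsizeof(objects))             # a few kB: ~8 bytes per reference
print(sys.getsizeof(objects[0].payload))  # ~80 kB for one object's payload alone
```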

