LabVIEW is quite flexible when we want to change array sizes at runtime, and of course many of us have run into out-of-memory errors because we assume it takes care of everything. I’ve been reading more into how arrays in LabVIEW work, and from my understanding, I don’t think arrays are really contiguous!
Let’s take an array of strings. An array of strings is stored as an array of pointers. Each element can live in any part of memory (provided the element itself occupies one contiguous block) and doesn’t need to be contiguous with the rest of the array. If I change the size of an element, only that element is reallocated, not the whole array. In other words, the array along with its elements is non-contiguous. Essentially, if I have a 3-element array of strings in a 32-bit environment, I need 4*3 + overhead bytes to store the pointer array, plus the size of element 1 (+ overhead), the size of element 2 (+ overhead), and the size of element 3 (+ overhead) as minimum free bytes. This means that in memory I need to find 4 contiguous locations, each at least as large as the component placed there. If I change the size of element 2, only element 2 is reassigned to another memory location (if it can’t grow in place); nothing changes for elements 1 and 3.
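To make that concrete, here’s a minimal C sketch of the idea. (This is an analogy, not LabVIEW’s actual internals: LabVIEW uses handles, i.e. relocatable pointers to length-prefixed blocks, rather than bare C pointers, and the sizes here are made up. Error checks on the small allocations are omitted for brevity.)

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* The "array" itself is just 3 pointers: 4*3 = 12 bytes on
       32-bit (24 bytes on 64-bit), plus allocator overhead. */
    char *arr[3];
    arr[0] = malloc(6);  strcpy(arr[0], "short");
    arr[1] = malloc(7);  strcpy(arr[1], "medium");
    arr[2] = malloc(5);  strcpy(arr[2], "long");

    /* Grow element 2: only that one block moves (and only if it
       can't grow in place). arr[0] and arr[2] are untouched, and
       the 3-pointer table itself never has to move. */
    char *grown = realloc(arr[1], 1001);
    if (grown != NULL) {
        arr[1] = grown;             /* may be a brand-new address */
        memset(arr[1], 'x', 1000);
        arr[1][1000] = '\0';
    }

    printf("element 2 now at %p, length %zu\n",
           (void *)arr[1], strlen(arr[1]));

    for (int i = 0; i < 3; i++)
        free(arr[i]);
    return 0;
}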
So basically, if I have an array of strings that is 1 GB in total size, I don’t need 1 GB of contiguous memory to allocate it. The largest contiguous block I need is the larger of the pointer array itself and the biggest single element. This doesn’t apply to fixed-element-size arrays like a two-dimensional array of doubles, which needs to be contiguous.
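Contrast that with the 2D case. Again just a C sketch, under the assumption that a fixed-size numeric array is one flat allocation (the dimensions here are arbitrary):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* A 2D array of doubles is one flat block: element (row, col)
       sits at offset row * cols + col. Allocating it means finding
       a single contiguous region big enough for the whole thing. */
    size_t rows = 10000, cols = 10000;
    double *matrix = malloc(rows * cols * sizeof *matrix); /* ~763 MB */
    if (matrix == NULL) {
        /* With a fragmented address space this fails even if total
           free memory far exceeds the request. */
        fprintf(stderr, "could not find a contiguous block\n");
        return 1;
    }
    matrix[5 * cols + 7] = 3.14;    /* element at row 5, col 7 */
    free(matrix);
    return 0;
}
```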
Please read the second paragraph of this link very carefully - http://zone.ni.com/r...flattened_data/
It also makes for some good general reading.
I’m not arguing the merits of choosing the best way to store arrays (e.g., putting a 1D array of doubles inside a cluster and building an array of those clusters so we don’t require one large contiguous block, versus simply creating a 2D array of doubles). What I’m trying to figure out is: is there a flaw in this reasoning?
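For reference, here’s roughly what that cluster-of-1D-arrays alternative looks like in C terms. This is a hypothetical sketch of the memory-layout idea only; LabVIEW’s actual cluster layout will differ:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Rough analogue of "array of clusters, each holding a 1D array":
       the outer table is small, and each row is its own allocation,
       so no single contiguous region must hold all the data. */
    size_t rows = 10000, cols = 10000;
    double **matrix = malloc(rows * sizeof *matrix); /* ~78 KB table */
    if (matrix == NULL) return 1;

    for (size_t r = 0; r < rows; r++) {
        matrix[r] = malloc(cols * sizeof **matrix); /* ~78 KB per row */
        if (matrix[r] == NULL) {
            while (r > 0) free(matrix[--r]);
            free(matrix);
            return 1;
        }
    }
    matrix[5][7] = 3.14;            /* same total data, smaller blocks */

    for (size_t r = 0; r < rows; r++)
        free(matrix[r]);
    free(matrix);
    return 0;
}
```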