I had a memory leak that I had to plug and just want to share and discuss the results:
Background
Nothing new, but I have a Buffer Interface Class that is composed of an array of LabVIEW Objects. It does all the required transactions for a buffer on the array.
I inherit from this Parent and create wrappers for specific datatypes to provide a much nicer API for my application code.
After discovering the memory leak for a specific datatype (1D DBL + timestamp), I was able to squash it.
I have a 100-point array of DBLs plus a timestamp (6528 bits per element) in a 5000-element buffer = (6528 × 5000) / (8 × 10^6) = 4.08 MB buffer.
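For anyone checking the arithmetic, here's a quick back-of-the-envelope in Python (the 6528 bits assumes 100 × 64-bit DBLs plus one 128-bit LabVIEW timestamp per element):

```python
# Per-element size: 100 DBLs (64 bits each) + 1 LabVIEW timestamp (128 bits)
bits_per_element = 100 * 64 + 128            # = 6528 bits
buffer_elements = 5000

buffer_mb = (bits_per_element * buffer_elements) / (8 * 10**6)
print(buffer_mb)                             # 4.08 MB
```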
What really surprised me was the ferocity of the leak.
Check out the stats!
I initially used the Build Array primitive, thinking that an allocation has to occur here anyway. As you can see from the Test VI on the left, this means I am using Build Array in a loop, which is LabVIEW 101 of what not to do.
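LabVIEW diagrams don't paste into a post, but as a rough textual analogy (mine, in Python/NumPy, not the actual test VI), Build Array in a loop behaves like growing an array by concatenation, which reallocates and copies everything already in the buffer on every single insert:

```python
import numpy as np

buf = np.empty((0, 100))                 # start with an empty buffer

for _ in range(5000):
    sample = np.random.rand(100)
    # Rough analogue of Build Array in a loop: each iteration allocates
    # a brand-new array and copies every previous element into it, so
    # the cost and memory churn grow with the buffer size (O(n^2) total).
    buf = np.concatenate([buf, sample[None, :]])
```

That constant reallocate-and-copy churn would be consistent with the big allocation swings below.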
That reasoning is fine, but what it resulted in was a 250+ MB footprint with 50+ MB swings in memory allocation!
It was running so slowly that after 3000+ inserts I gave up waiting.
Switching to the Insert Into Array primitive, the allocation was only 12 MB, which makes sense: the buffer, the front panel of the test VI, and most likely a subVI interface. A gradual allocation occurs as the buffer fills up, because the Buffer Interface returns a subset of the full buffer. Once the buffer is full, though, the full buffer is returned and no further allocations occur (yay!).
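For comparison, here is a minimal sketch of that pre-allocate-once pattern in Python/NumPy (the class and method names are mine, not the actual Buffer Interface API): storage is allocated a single time, inserts replace elements in place, and reads return only the valid subset until the buffer wraps:

```python
import numpy as np

class CircularBuffer:
    """Pre-allocates its storage once; inserts never grow the array."""

    def __init__(self, capacity, element_size):
        self._data = np.zeros((capacity, element_size))  # one-time allocation
        self._next = 0     # next slot to overwrite
        self._count = 0    # number of valid elements so far

    def insert(self, sample):
        # In-place replace: no new allocation happens here.
        self._data[self._next] = sample
        self._next = (self._next + 1) % len(self._data)
        self._count = min(self._count + 1, len(self._data))

    def read(self):
        # Until the buffer is full, return only the valid subset;
        # once it wraps, the whole (fixed-size) buffer comes back
        # and the memory footprint stops growing.
        return self._data[:self._count]

buf = CircularBuffer(capacity=5000, element_size=100)
buf.insert(np.random.rand(100))
print(buf.read().shape)    # (1, 100) until the buffer fills up
```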
I am really surprised at the difference between the two primitives. I thought that with Build Array I would at least get some of the memory back; instead it went crazy.
But it sure was fun testing it all.