Axelwlt

TDMS Read to DVR - Memory Usage


Hi

I am reading big TDMS files (~1 GB) and I am trying to minimize memory usage.

I read a signal once and I want to keep it for later usage, so I tried to write it to a DVR in the hope that I have only one instance of the data in memory.

[attached image]

However, Task Manager tells me that it uses twice as much memory as it should.

I think the problem is that the array is not deallocated although I don't need it after writing it to the DVR.

Is there a solution to make sure I don't have copies of data?

In theory I think you can use the 'Swap Values' primitive between the DVR read and DVR write to swap the specific handle over into the DVR, but that may be wrong.

TDMS also has an e-DVR option: https://zone.ni.com/reference/en-XX/help/371361R-01/glang/tdms_advanced_functions_dataref/

That should have almost no memory overhead.


Swap Values does not seem to get rid of the problem; the second data instance is still in memory. (When I call the VI several times in a row, the number of copies grows more slowly with the swap, but that's still not good enough.)

If I use the Request Deallocation VI, that seems to get rid of the copies when the VI finishes, but I don't know if that's good design. And the second copy is still in memory while the VI runs, which I would prefer to avoid.

I already looked at the TDMS external DVR option, but I don't know if/how I can use it. I don't seem to be allowed to wire a normal DVR to the VI.


A) LabVIEW will hold onto memory once allocated until the system says it is running low or until it has a large enough block to give back to the OS. Task Manager tells you nothing about how much memory LabVIEW is actually using, only how much it has reserved and available to use.

B) Every wire can maintain an allocation of the data if the compiler determines that keeping the buffers inflated gives the best performance. You can force a subVI to release those allocations at the end of every execution by calling the Request Deallocation primitive, but I REALLY DO NOT recommend this. You'll kill performance and possibly not save anything, as people usually discover their app immediately re-requests the freed memory. Unless you are getting out-of-memory errors or are a real pro at dataflow memory optimization, I recommend that you let LabVIEW do its job and manage memory. Beating LabVIEW and OS memory management generally requires bursty data flow with infrequent but predictable data spikes, where you can teach LabVIEW something about the spikes.


