Leaderboard

Popular Content

Showing content with the highest reputation on 06/23/2014 in all areas

  1. I was not aware of this function either (still using 2009 whenever I can). How big are your images?

This is how I would approach it. It is the way I have always done high-speed acquisition, and I have never found a better way even with all the newfangled stuff. The hardware gets faster, but the software gets slower.

Once you have grabbed the data, immediately delete the DVR. The output of the Delete DVR primitive will give you the data, and the sub-process will be able to go on and acquire the next without waiting. The data from the Delete DVR you copy/wire into a Global Variable (ooooh, shock horror), which is your application buffer that your file and UI processes can just read when they need to. This is the old-fashioned "Global Variable Data Pool" and is the most efficient method (in LabVIEW) of sharing data between multiple processes, and it is perfectly safe from race conditions AS LONG AS THERE IS ONLY ONE WRITER. You may need a small message (Acquired - I would suggest the error cluster as the contents) just to tell anyone that wants to know that new data has arrived (mainly for your file process; your UI can just poll the global every N ms). A rough text-language sketch of this layout is shown below the post.

The point here is that you only have one, deterministic, data copy that affects the acquisition (time to use those Preferred Execution Systems ;) ) and you have THE most efficient method of sharing the data (bar none). But - and this is a BIG but - your TDMS writing has to be faster than your acquisition, otherwise you will lose frames in the file. You will never run out of memory or get performance degradation because of buffers filling up, though, and you can mitigate data loss a bit by again buffering the data in a queue (on the TDMS write side, not the acquisition) if you know the consumer will eventually catch up or you want to save bigger chunks than are being acquired. However, if the real issue is that your producer is faster than your consumer, that is always a losing hand, and if it's a choice between memory meltdown or losing frames, the latter wins every time unless you are prepared to throw hardware at it...

I've used the above technique to stream data with TDMS at over 400MB/sec on a PXI rack without losses (I didn't get to use the latest PXI chassis at the time, which could theoretically do more than 700MB/sec). The main software bottleneck was event message flooding (next was memory throughput, but you have no control over that), and the only way you can mitigate it is by increasing the amount you acquire in one go (reducing the message rate), which looks much, much easier with this function.
    1 point
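Below is a minimal sketch of the single-writer "Global Variable Data Pool" layout described in the post, translated into Python since LabVIEW is graphical. Everything here is a hypothetical stand-in rather than a real NI or TDMS API: acquire_frame() represents the grab + Delete DVR step, tdms_write() represents the TDMS Write, and the DataPool class plays the role of the global variable. A small queue carries the "Acquired" message to the file loop, and the UI would simply poll the pool every N ms.

```python
import threading
import queue
import time

import numpy as np


def acquire_frame(i):
    """Stand-in for the grab + Delete DVR step: returns the acquired data."""
    return np.full(1024, i, dtype=np.uint16)


def tdms_write(frame):
    """Stand-in for a TDMS Write; here it just simulates some disk time."""
    time.sleep(0.001)


class DataPool:
    """Application buffer: exactly ONE writer (the acquisition loop) updates it;
    readers (file writer, UI) only ever copy the latest frame out."""

    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None
        self._sequence = 0  # bumps on every write so readers can spot new data

    def write(self, frame):
        with self._lock:
            self._frame = frame
            self._sequence += 1

    def read(self):
        with self._lock:
            return self._sequence, self._frame


def acquisition_loop(pool, acquired, n_frames):
    """Producer: grab, copy into the pool (the one deterministic copy), notify, move on."""
    for i in range(n_frames):
        frame = acquire_frame(i)
        pool.write(frame)
        acquired.put("Acquired")  # the small message for anyone who cares


def file_loop(pool, acquired):
    """Consumer: wake on the message, pull the latest frame, stream it to disk.
    If this loop falls behind the producer, sequence gaps show up as lost frames."""
    last_seq = 0
    while True:
        msg = acquired.get()
        if msg is None:  # shutdown sentinel
            return
        seq, frame = pool.read()
        if seq != last_seq:
            tdms_write(frame)
            last_seq = seq
        # A UI would not use the message at all; it would just call pool.read()
        # from its own loop every N ms.


if __name__ == "__main__":
    pool = DataPool()
    acquired = queue.Queue()
    writer = threading.Thread(target=file_loop, args=(pool, acquired))
    writer.start()
    acquisition_loop(pool, acquired, n_frames=100)
    acquired.put(None)  # tell the file loop to stop
    writer.join()
```

In LabVIEW the safety of the global comes purely from having a single writer; the lock here exists only because Python offers no equivalent structural guarantee. The sequence check in the file loop is where lost frames would show up if the TDMS side cannot keep up, which is exactly the trade-off the post describes.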