How to implement triple buffering in Machine Vision and Imaging
Posted October 17, 2014

"So if you put a large array in a queue, and dequeue it in another loop, no data copy is ever made?"

Yes and no. I've had to characterize this recently with a cluster of various arrays, and using the RT trace toolkit (on a 9068 running LVRT 2013) I found that:

- If you pass a dataset into a queue, the original (upstream) producer of that dataset must generate a new buffer, because its current buffer is handed off to the queue.
- If that upstream producer is an empty queue, an empty buffer must be allocated. For some reason I don't understand, this shows up as about five "wait on memory" flags in the trace.
- If that upstream producer is a full queue, no new buffer is allocated.
- If a buffer pulled out of any of the queues is fed to another API, like IMAQ, you lose that buffer and have to allocate a new one, unless the API can avoid stomping on the data.

tl;dr: in normal operation a queue won't copy the full dataset until you pass that dataset to someone else. Dequeuing an empty queue causes an allocation, which hurts determinism and is why we have RT FIFOs. If you can keep other APIs from stomping on the data, you can circulate a fixed set of buffers through a loop of queues (A->B->C->A...) without any allocations, as the sketch below illustrates.

Obviously all of the above is for my use case only; if performance is critical to your application, you should always use a trace tool to confirm that your code behaves as expected.
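
Since LabVIEW is graphical, here's a rough C++ analogy of that buffer-recycling idea, not anything from the LabVIEW or IMAQ APIs. A fixed pool of three buffers circulates between two queues (a "free" ring and a "filled" ring), so in steady state only pointers move and no image data is copied or allocated. All the names here (Buffer, BufferQueue, free_q, filled_q) are illustrative, and the frame size is made up.

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using Buffer = std::vector<uint8_t>;

// A minimal blocking queue of buffer pointers. Passing a pointer hands the
// buffer off without copying its contents, mirroring how a LabVIEW queue
// hands off the underlying data buffer instead of duplicating it.
class BufferQueue {
public:
    void enqueue(Buffer* b) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(b); }
        cv_.notify_one();
    }
    Buffer* dequeue() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Buffer* b = q_.front();
        q_.pop();
        return b;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Buffer*> q_;
};

int main() {
    constexpr size_t kImageBytes = 1920 * 1080;  // illustrative frame size
    Buffer pool[3] = { Buffer(kImageBytes), Buffer(kImageBytes), Buffer(kImageBytes) };

    BufferQueue free_q;    // ring A: empty buffers waiting for the producer
    BufferQueue filled_q;  // ring B: filled buffers waiting for the consumer

    // Pre-allocate: every allocation happens up front, none in steady state.
    for (Buffer& b : pool) free_q.enqueue(&b);

    std::thread producer([&] {
        for (int frame = 0; frame < 100; ++frame) {
            Buffer* b = free_q.dequeue();           // reuse a recycled buffer
            (*b)[0] = static_cast<uint8_t>(frame);  // "acquire" into it in place
            filled_q.enqueue(b);                    // hand off; producer no longer owns it
        }
    });

    std::thread consumer([&] {
        for (int frame = 0; frame < 100; ++frame) {
            Buffer* b = filled_q.dequeue();
            volatile uint8_t v = (*b)[0]; (void)v;  // "process" without resizing/stomping
            free_q.enqueue(b);                      // recycle back to the producer
        }
    });

    producer.join();
    consumer.join();
}
```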
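One design note on the sketch: dequeue() simply blocks on an empty queue rather than allocating, which is the preallocate-everything-up-front behavior you want on RT. As I read the trace results above, the allocation appears when a dataset's upstream source is an empty queue and a fresh buffer has to be materialized; sizing the pool so the free ring never runs dry is what avoids that, and a preallocated, fixed-size buffer arrangement like this is essentially what an RT FIFO gives you natively.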