The test machine is a 2.79 GHz Pentium 4 running Windows XP with 1 GB of RAM and a PCI-7831R card installed. I'm running LabVIEW 8.6.1.
I'm trying to stream data from the FPGA to the PC over a DMA FIFO. I've picked 250 kHz as my benchmark rate because it is well above what the actual system will need. The FPGA side writes to the FIFO at a fixed 4 usec interval. The PC side uses two loops: one reads a latched timeout flag in the FPGA, and the other reads out the FIFO every 20 msec.
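Since the attached code is a LabVIEW VI, here is a rough text-language sketch of the host-side structure for anyone who'd rather not open it; all names are placeholders, and the Python queue just stands in for the DMA FIFO read:

    import threading
    import time
    from queue import Empty, Queue

    fifo = Queue()                        # stand-in for the DMA FIFO read
    timeout_latched = threading.Event()   # stand-in for the latched FPGA flag
    stop = threading.Event()

    def poll_timeout_flag():
        # Loop 1: watch the latched timeout indicator from the FPGA.
        while not stop.is_set():
            if timeout_latched.is_set():
                print("FPGA FIFO write timed out -> data lost")
            time.sleep(0.020)

    def drain_fifo():
        # Loop 2: read out the FIFO every 20 msec. At 250 kHz this
        # must remove ~5,000 elements per pass just to keep up.
        while not stop.is_set():
            batch = []
            try:
                while True:
                    batch.append(fifo.get_nowait())
            except Empty:
                pass
            # ... process batch ...
            time.sleep(0.020)

    if __name__ == "__main__":
        threading.Thread(target=poll_timeout_flag, daemon=True).start()
        threading.Thread(target=drain_fifo, daemon=True).start()
        time.sleep(0.1)   # let the loops spin briefly
        stop.set()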
This works fine most of the time, but doing something as simple as minimizing the Windows Task Manager window breaks it: CPU usage doesn't blip (it stays at ~25%), yet the number of elements remaining in the FIFO jumps to 30,000+ and I start losing data points as the FPGA-side FIFO write times out.
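For scale, here is the arithmetic behind those numbers (just the figures from above, with no assumptions about the driver's internal buffering):

    sample_rate = 250_000     # Hz, i.e. one FIFO write every 4 usec
    host_period = 0.020       # s, PC-side read loop interval

    # Elements the host loop must drain per iteration to keep up:
    print(sample_rate * host_period)   # 5000.0

    # A backlog of 30,000+ elements means the host reader effectively
    # stalled for at least:
    print(30_000 / sample_rate)        # 0.12 s, i.e. ~120 ms (six missed passes)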
I've attached my test code.
I appreciate any light people can shed on this.