
M Series & DAQmx issues...


JackHamilton


:D I leaped with both feet into recoding a *working* application to DAQmx and an M Series card.

The old code used a 6023E and Traditional DAQ to perform re-triggered continuous acquisition. The REASON for the change was that with the E Series and Traditional DAQ, CPU consumption was 100%.

We purchased a 6022 M Series card and recoded the retriggered acquisition with DAQmx. Preliminary proof-of-capability code showed the M Series card's performance to be exceptional: CPU usage for a comparable task was about 40%.
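For anyone who hasn't wired one of these up yet, here is roughly what a retriggered acquisition looks like in text form. This is a sketch using the nidaqmx Python API rather than the actual LabVIEW DAQmx VIs, and one common way to express retriggering in DAQmx (a finite, retriggerable task read in a loop) rather than our exact configuration; the device name "Dev1", the PFI0 trigger line, the rate, and the record length are placeholders, not our real settings:

# Minimal sketch of a retriggered acquisition -- nidaqmx Python API,
# not the original LabVIEW code. Device, trigger line, and rates are assumed.
import nidaqmx
from nidaqmx.constants import AcquisitionType, Edge

SAMPLES_PER_TRIGGER = 1000

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    # Finite acquisition: one record of SAMPLES_PER_TRIGGER samples per trigger.
    task.timing.cfg_samp_clk_timing(
        rate=100000.0,
        sample_mode=AcquisitionType.FINITE,
        samps_per_chan=SAMPLES_PER_TRIGGER,
    )
    # Digital-edge start trigger that the board re-arms after every record.
    task.triggers.start_trigger.cfg_dig_edge_start_trig(
        "/Dev1/PFI0", trigger_edge=Edge.RISING
    )
    task.triggers.start_trigger.retriggerable = True

    task.start()
    while True:
        # Blocks until the record for the next trigger has been acquired.
        record = task.read(
            number_of_samples_per_channel=SAMPLES_PER_TRIGGER, timeout=10.0
        )
        # ...hand the record off to the rest of the application...

The re-arming happens in hardware, which is where the big CPU savings over the Traditional DAQ approach comes from.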

When we started coding up ways to configure the DAQmx task and propagate the data out of the DAQmx loop to other modules via Queues or Notifiers, numerous odd behaviors occurred. Because our application 'images' the acquired data, we noticed that we were quite consistently, but erratically, missing data. After nearly a week of looking at the DAQ triggering (thinking the acquisition was missing the trigger), eliminating that as the cause made us look elsewhere.

On a wild shot, I added some debug code to the queue messaging that sends the data out of the DAQmx timed loop to the receiving module. We discovered that the Queue was simply missing data! Even though probes showed the data faithfully entering the "Enqueue Message" function, the data simply did not show up on the other side. The number of elements pending to be received was always much smaller than the total allowed queue size.

Further, during debugging I hastily dropped an intensity graph into the receiving loop to display the contents of the accumulated data buffer, which is a 2D array. The intensity graph's performance is very slow, but the queue buffering should have decoupled that problem up to the capacity of the buffer, giving us about 1 minute of buffer time. What happened instead was that the DAQmx timed loop slowed its acquisition to the rate at which the intensity graph updated!

Trying to understand this, I dropped Case structures into the receiving code to disable writing to the intensity graph and/or add a millisecond delay timer to 'emulate' a long delay in the receiving loop. With the graph update removed, and even with huge delays added, the code worked as expected: the DAQmx loop ran at its own rate and the queue buffer stacked up. BUT when the intensity graph update was added back, the SENDING DAQmx loop's execution time slowed!
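To make clear what I expected, here is the queue behavior sketched as a plain Python producer/consumer rather than LabVIEW (the queue depth, block sizes, and delays below are invented purely for illustration): a bounded queue should let the sender run at its own rate and simply pile up a backlog while the slow receiver drains it, throttling the sender only once the queue is actually full.

# Plain-Python sketch of the decoupling a bounded queue is supposed to give;
# nothing here is the real application, the numbers are made up.
import queue
import threading
import time

BUFFER_DEPTH = 600                       # e.g. ~1 minute of blocks at 10 blocks/s
data_q = queue.Queue(maxsize=BUFFER_DEPTH)

def producer():
    # Stands in for the DAQmx acquisition loop.
    for i in range(100):
        block = [i] * 1000               # pretend this is one DAQ read
        data_q.put(block)                # blocks only if the queue is FULL
        time.sleep(0.1)                  # acquisition period

def consumer():
    # Stands in for the receiving loop with the slow intensity graph.
    while True:
        block = data_q.get()
        time.sleep(0.5)                  # 'slow graph update'
        data_q.task_done()

threading.Thread(target=consumer, daemon=True).start()
producer()
print("backlog left in queue:", data_q.qsize())   # backlog grows; producer never slows

That is exactly what the LabVIEW queue did as long as the intensity graph was out of the picture; with the graph in the receiving loop, the sending Timed Loop slowed down instead.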

NI tech support is still struggling to believe my claims. In the interim, my solution was to replace the Timed Loop with a simple While Loop for the DAQmx task. This seemed to reduce the problem to occurring about 1% of the time.
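If it helps anyone reproduce the workaround, the idea in text form (again a nidaqmx Python sketch with an assumed device and rate, not the actual LabVIEW diagram) is just a free-running loop in which the blocking DAQmx Read paces the iterations, so no Timed Loop is needed at all:

# Continuous acquisition paced by the blocking Read call itself.
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(
        rate=100000.0, sample_mode=AcquisitionType.CONTINUOUS
    )
    task.start()
    while True:
        # Read waits until 10000 samples are available, so the loop runs at
        # the hardware rate (10 iterations/s here) with no software timer.
        block = task.read(number_of_samples_per_channel=10000)
        # ...enqueue the block for the receiving module...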

:unsure: The strangeness does not end there. As we have about 20 existing systems in the field running the 6023E card, we decided to try the new DAQmx code on the E Series board. In my code I only have to select 'Dev2' from the device control and rerun the VI.

The 6023E card's performance, based on CPU usage, was much better than the M Series'?! It was about 11% compared to 40% for the M.

That said, I'm not lambasting NI. DAQmx is new and the M Series cards are new; I don't expect perfection right out of the chute. My post is simply a warning to others, as I ate about 2 months of consulting time because of these bizarre problems with functions in LabVIEW that I have come to trust, i.e. Queues and Notifiers.

I am quite impressed with the DAQmx performance. I do wish there were a 'real' example of the Timed Loop; I had to grope for a day or two to link a Timed Loop to an actual DAQ card timing event. The shipping examples don't use hardware at all.

Regards

Jack Hamilton
