Data Acquisition



I'm trying to catch/count a random 90 usec pulse. I'll admit I'm just using brute force: I'm acquiring Finite Samples, 100,000 samples per channel (just one channel) at a rate of 200,000 S/s on a PXIe-6341. Once I'm confident this test works, it will run for a year.

Injecting my own 90 usec pulse, I catch it 100 times out of 100 attempts. That part I like...

I define Acquisition Time as the time it takes to acquire my huge acquisition, and Cycle Time as the time it takes to queue my acquisition and circle back for the next one. Other things are happening on this computer, but this VI only performs this acquisition.

Acquisition Time is 0.502 seconds and Cycle Time is 0.0000207 seconds, so if my pulse is 90 usec and my cycle time is about 20 usec, I should always catch the pulse. I think. Now for the part that concerns me...

I've run this test for days, tracking the Max Cycle Time and Max Acquisition Time: every time a value is larger than the previous largest, I add it to an array (the array has only 28 elements over a several-day period). My Max Acquisition Time is 0.5139955 s; however, my Max Cycle Time is 0.0002743 s, considerably larger than my 90 usec pulse.
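A quick sanity check on that arithmetic, as a minimal Python sketch (the measured values are the numbers quoted above): a pulse can only be missed entirely if the dead time between two acquisitions is wider than the pulse itself.

```python
# Can a pulse fall entirely inside the dead time between two
# finite acquisitions? Only if the gap is wider than the pulse.
def pulse_can_be_missed(pulse_width_s: float, gap_s: float) -> bool:
    return gap_s > pulse_width_s

PULSE = 90e-6           # 90 usec pulse
TYPICAL_GAP = 20.7e-6   # typical measured Cycle Time
WORST_GAP = 274.3e-6    # Max Cycle Time observed over several days

print(pulse_can_be_missed(PULSE, TYPICAL_GAP))  # False: normally safe
print(pulse_can_be_missed(PULSE, WORST_GAP))    # True: worst case can miss one
```

So the typical 20 usec gap is harmless, but the occasional 274 usec gap is wide enough to swallow a 90 usec pulse whole, which is exactly the concern.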

I've moved almost everything out of the acquisition loop; I just don't know what to try next. Is there a way to prioritize this? Any thoughts?

Thanks in advance...

Bob


That means you are not reading from the DAQ quickly enough. Set the DAQ to continuous acquisition and read a constant number of samples from the buffer (say, 1/10th of your sampling rate) on every iteration of your loop; this will give you a nice, stable loop period. Of course, this assumes your loop is not doing other work that takes up 100 ms worth of time.
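A minimal sketch of this continuous-read pattern using the nidaqmx Python API as a stand-in for the LabVIEW DAQmx VIs (the channel name is hypothetical, and the hardware-only import is deferred so the sizing constants can be checked without a device):

```python
RATE = 200_000          # samples per second
CHUNK = RATE // 10      # 1/10th of the rate -> one read per ~100 ms

def acquire_continuously(num_reads, process):
    import nidaqmx                      # deferred: hardware-only dependency
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0")  # hypothetical channel
        task.timing.cfg_samp_clk_timing(
            rate=RATE,
            sample_mode=AcquisitionType.CONTINUOUS,
            samps_per_chan=10 * CHUNK,  # buffer-size hint, well above one read
        )
        for _ in range(num_reads):
            # Blocks until CHUNK samples are ready -> stable ~100 ms loop period
            data = task.read(number_of_samples_per_channel=CHUNK)
            process(data)               # e.g. scan for the 90 usec pulse
```

Because the read blocks until exactly CHUNK samples exist, the hardware clock paces the loop and no samples are lost between iterations.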

Edited by Neil Pate

I've returned to Continuous Samples: I set my rate to 100,000 and Samples per Channel to 1000, and on my DAQmx Read.vi I've hard-wired Number of Samples Per Channel = 10,000 (1/10th of my rate). When I inject my own 90 usec pulse, I catch every pulse.

But I'm worried: the only difference from my old setup, which could run for months before getting a buffer overflow error, is the hard-wired Number of Samples Per Channel = 10,000.

In the past I used the DAQmx Read Property Node to check Available Samples Per Channel and would read that value instead of the now hard-wired value. 


Does my current setup sound appropriate to catch the random 90 usec pulses?

Rate = 100000

Samples per Channel = 1000

Hard-wired into the DAQmx Read.vi Number of Samples Per Channel = 10000 (1/10th of my rate)

Thanks,


Bob

On 12/8/2022 at 11:37 PM, rharmon@sandia.gov said:


While your previous approach might pose problems depending on what you intend to do with the data, since the number of read samples can vary widely, your current approach honestly sounds like a corner case. What do you mean by Samples per Channel being 1000? Is that at the Create Task? That value is the hint to DAQmx for how much buffer to allocate, and it should actually be higher than the number of samples you want to read per iteration. My experience is that one read per 10 ms is not safe under Windows, but one read per 100 to 200 ms can sustain operation for a long time if you make the internal buffer big enough (I usually make it at least 5 times the intended number of samples read per interval).
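That sizing rule can be sketched as a couple of hypothetical helpers (these are not an official DAQmx formula, just the rule of thumb above expressed as arithmetic):

```python
def read_size(rate_hz: int, period_s: float) -> int:
    """Samples consumed per read at the chosen loop period."""
    return int(rate_hz * period_s)

def buffer_size(rate_hz: int, period_s: float, safety: int = 5) -> int:
    """Buffer-allocation hint: keep the buffer at least `safety` reads deep."""
    return safety * read_size(rate_hz, period_s)

# The continuous setup in question: 100 kS/s, one read per 100 ms
print(read_size(100_000, 0.1))     # 10000 samples per read
print(buffer_size(100_000, 0.1))   # 50000 -- far above the 1000 given at Create Task
```

In other words, the 1000 wired into the task configuration is an order of magnitude smaller than a single 10,000-sample read, which is the corner case being flagged.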

Edited by Rolf Kalbermatter

To make things really efficient, echoing what was said earlier:

  1. Read about 100 ms of data (1/10th of the sample rate), but set the number of samples to the nearest even multiple of the disk sector size. Typically the sector size is 512 B, so in that case set the number of samples to 10,240.
  2. Set the buffer size to an even multiple of (1) that holds at least 4 to 8 s worth of data.
  3. Use the built-in logging function in DAQmx to save the data; don't roll your own with a queue as in your example.
  4. Get rid of the unbounded array in your example; it keeps growing.

I have not taken data for years non-stop, but using (1) and (2) above I have acquired continuously for 2 weeks, 8 channels at 2 MS/s each, using a laptop and a USB-6366.
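The sizing in steps 1 and 2 above can be sketched as a small helper, and step 3 as a call to the built-in TDMS logging in the nidaqmx Python API (the 2-byte sample width, the round-up direction, and the file name are assumptions; the hardware-only import is deferred):

```python
def aligned_read_size(rate_hz: int, fraction: int = 10,
                      sector_bytes: int = 512, bytes_per_sample: int = 2) -> int:
    """Round ~1/10th s of samples up to a whole number of disk sectors."""
    target = rate_hz // fraction                      # ~100 ms of data
    samples_per_sector = sector_bytes // bytes_per_sample
    sectors = -(-target // samples_per_sector)        # ceiling division
    return sectors * samples_per_sector

READ = aligned_read_size(100_000)    # 10240 samples, as in step 1
BUFFER = 40 * READ                   # even multiple of READ, ~4 s of data (step 2)

def enable_tdms_logging(task, path="pulses.tdms"):
    # DAQmx built-in TDMS logging (step 3); deferred, hardware-only import.
    from nidaqmx.constants import LoggingMode, LoggingOperation
    task.in_stream.configure_logging(
        path,
        LoggingMode.LOG_AND_READ,
        operation=LoggingOperation.CREATE_OR_REPLACE,
    )
```

With LOG_AND_READ, DAQmx streams the buffer to the TDMS file itself while still returning the samples to the loop for pulse detection, so no hand-rolled queue is needed.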

