
DAQ - How fast is fast?


EJW


For those of you who have been doing this for a long time...

What is considered "High Speed data acquistion." or system intensive calculations, etc. where everyone is always seeming

to optimize things as best they can in order for their program to function efficiently/quickly.

I guess I am looking for a frame of reference for when people talk about collection large amounts of data

at a high speed, etc. and processing it.

For example: my current project acquires data on 16 RSE lines at 1000 S/s per channel using a 6221-M. Is this fast / a lot of data, or negligible?

I then calculate things like load, vibration, temperature, and speed (from a digital input). If those values exceed my limits, I shut down the machine. I also adjust the loads and motor speed with my analog and digital outputs.

The only other things going on are a data file, to which I record one data point (the average of the 1000 samples) for each of the 16 inputs every 5 seconds, and a circular disk buffer holding 300,000 data points for each line.
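A rough back-of-the-envelope check on those numbers (a sketch only; the 8-byte sample size is my assumption, not something stated above):

# Data rates for 16 channels at 1 kS/s each, assuming 8-byte doubles (assumption).
channels = 16
rate_per_channel = 1_000          # S/s
bytes_per_sample = 8              # 64-bit double (assumed)

raw_rate = channels * rate_per_channel * bytes_per_sample
print(f"Raw acquisition rate: {raw_rate / 1e3:.0f} kB/s")        # 128 kB/s

# One averaged point per channel every 5 s -> logged data rate
logged_rate = channels * bytes_per_sample / 5
print(f"Logged data rate: {logged_rate:.1f} B/s")                # 25.6 B/s

# 300,000 points per channel in the circular buffer at 1 kS/s
history_seconds = 300_000 / rate_per_channel
print(f"Buffer depth: {history_seconds:.0f} s (~{history_seconds / 60:.0f} min) per channel")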

Link to comment

That's a difficult question; I can only guess. IMHO there are three categories:

1. Too slow, e.g. you get aliasing effects (see the sketch below).
2. Fast enough: the acquisition rate is sufficient to sample the signal faithfully.
3. Too fast: you get more data than your system can handle :)

I'd consider 1 kHz standard speed and 100 kHz high speed.
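A quick way to see category 1 in action (my own illustration with a made-up 60 Hz signal, not from the original post): sampling a 60 Hz sine at 50 S/s makes it show up at a false 10 Hz, while 1 kS/s captures it correctly.

import numpy as np

# A 60 Hz sine sampled at 50 S/s (below the 120 S/s Nyquist requirement)
# aliases to |60 - 50| = 10 Hz; the same signal at 1 kS/s is fine.
f_signal = 60.0                      # Hz (made-up example signal)
for fs in (50.0, 1000.0):            # too slow vs. fast enough
    t = np.arange(0, 1.0, 1.0 / fs)
    x = np.sin(2 * np.pi * f_signal * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    peak = freqs[np.argmax(spectrum)]
    print(f"fs = {fs:6.0f} S/s -> apparent frequency {peak:.1f} Hz")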

Link to comment

QUOTE(EJW @ May 2 2007, 12:03 PM)

What is considered "High Speed data acquistion." or system intensive calculations, etc. where everyone is always seeming

to optimize things as best they can in order for their program to function efficiently/quickly.

I guess I am looking for a frame of reference for when people talk about collection large amounts of data

at a high speed, etc. and processing it.

It seems that you are asking about more than just Nyquist theory and your data requirements. If so, then what counts as *fast* and/or *intensive* depends on what you are doing with the data: are you just monitoring a stream for certain conditions, streaming to disk, massaging the data, displaying it, taking many channels in parallel, etc.? (Which begs the question: how many is 'many'?)

And as you mentioned, "high-speed data acquisition" is relative to the hardware's capability; better hardware keeps pushing "high speed" higher and higher.

So, ignoring data requirements, the question is relative to the software task and the hardware capabilities.

Link to comment

A 6221 at 16 kS/s is quite easy. To prevent aliasing we use low-pass filters from Dewetron or Bedo as signal conditioning; works like a charm (especially the Dewetron DAQ-PV modules). We are currently moving to TDMS storage, which allows you to read and write at the same time!

Ton
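Ton's filters are analog hardware ahead of the digitizer; the software-side equivalent, oversampling and then low-pass filtering before decimating, looks roughly like the sketch below (my own illustration with scipy, not Ton's Dewetron/Bedo setup; all values are made up):

import numpy as np
from scipy import signal

# Oversample at 10 kS/s, then low-pass and decimate to an effective 1 kS/s.
fs_in, fs_out = 10_000, 1_000
t = np.arange(0, 1.0, 1.0 / fs_in)
x = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 3_000 * t)  # wanted + out-of-band

# 4th-order Butterworth low-pass at 80% of the output Nyquist frequency
b, a = signal.butter(4, 0.8 * (fs_out / 2), fs=fs_in, btype="low")
filtered = signal.filtfilt(b, a, x)
decimated = filtered[:: fs_in // fs_out]          # keep every 10th sample
print(decimated.shape)                            # (1000,)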

Link to comment

I was just considering whether it would be possible to learn LabVIEW FPGA by trying to code a DVB (Digital Video Broadcasting, i.e. digital TV) receiver. The problem is that the DVB signal stream may be up to 60 Mbps, so I guess it's out of the speed range of NI RIO hardware. I'd call this high-speed digital DAQ ;)

Tomi

Link to comment

I spent a lot of quality time with an ultrasonic C-scan application that acquired single-channel data at 500 MS/s, and the hardware was capable of up to 2 GS/s. Although we were only taking 20 µs of data at a 500 Hz repetition rate, I thought that was fast. The color-coded time-of-flight images created from this data looked very impressive. :)
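For scale, here is the arithmetic implied by those numbers (my own calculation based on the figures quoted above): 20 µs bursts at 500 MS/s, repeated 500 times a second, give a much tamer sustained average rate than the headline burst rate.

# Burst vs. sustained rate for the C-scan example above (my arithmetic).
burst_rate = 500e6          # S/s during each acquisition window
window = 20e-6              # s per window
rep_rate = 500              # windows per second

samples_per_window = burst_rate * window          # 10,000 samples per shot
sustained = samples_per_window * rep_rate         # 5,000,000 S/s on average
print(f"{samples_per_window:.0f} samples per shot, "
      f"{sustained / 1e6:.1f} MS/s sustained")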

Link to comment

EJW,

As others have said, it is all relative: relative to your signal frequencies, your hardware capabilities, and your CPU speed (as well as other factors). Just talking about data acquisition would be a difficult enough conversation, but including computation, analysis, and display really makes it a large can of worms. Nonetheless, I'll throw out a couple of benchmarks for reference.

One project I did uses a 4-channel 6115 sampling all four channels at 5 MS/s. It streams 20 MS/s (40 MB/s) into memory, scans the data for events, pulls out valid events, saves them to disk, and does a minimal amount of display. If the frequency of "events" is relatively low, this system can keep up with all of that and not miss any data. If the event rate goes up, the duty cycle goes down and it misses data. This is on a fairly recent PXI controller using the PCI bus for data transfer. In this case, 5 MS/s is pretty fast data acquisition, since at high event rates both the CPU and the disk can be maxed out.
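(A quick cross-check of those figures; the 2-byte sample size below is inferred from 20 MS/s mapping to 40 MB/s, i.e. 12-bit data stored as 16-bit integers, rather than stated in the post:)

# Cross-check of the first benchmark's numbers (my arithmetic).
channels = 4
rate = 5_000_000            # S/s per channel
bytes_per_sample = 2        # 16-bit samples (inferred, not stated)

aggregate = channels * rate
throughput = aggregate * bytes_per_sample
print(f"{aggregate / 1e6:.0f} MS/s aggregate, {throughput / 1e6:.0f} MB/s to memory")
# -> 20 MS/s aggregate, 40 MB/s to memory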

On another system, we have 4 channels of data acquired at 500 MS/s. It has similarly random events, but typically at a lower rate. This runs on a much older computer (a 500 MHz Pentium III). Although the raw acquisition rate is 100 times faster, the CPU is usually not taxed very heavily. (Under high event rates this system can max out the hard disk and begin loading the CPU, but those conditions are fairly rare.)

If someone asked me whether 5 MS/s and 500 MS/s were "high-speed data acquisition," I would almost always say yes to the second, but the first would require more context.

Regards,

Dave T

Link to comment

QUOTE(dthomson @ May 3 2007, 12:38 PM)

If someone asked me whether 5 MS/s and 500 MS/s were "high-speed data acquisition," I would almost always say yes to the second, but the first would require more context.

Regards,

Dave T

Now see, I would call both of those high speed, as they are far greater than the 1 kHz rates I am using.

Most of the lab computers are new Dell Celeron systems, which I am hoping will keep up with everything.

I am constantly updating the screen indicators with averages of the 1000 samples I am taking (only vibration has a graph).

Aside from streaming the data to disk (the AIs dump into a functional global, and a parallel while loop reads from the global and streams the data out), not much else is going on. The test has various load sequences that it cycles through every 30 seconds or a minute, where it changes AO voltages, etc., but it just doesn't seem like enough to load the system too heavily.

At the moment the program, while not finished, is running smoothly in the test phase at less than 10% processor usage.

ALL UI is handled prior to the start of the test, with the exception of changing to a different tab to see different data output.
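The functional-global-plus-parallel-loop arrangement described in this post is essentially a producer/consumer design. A minimal text-language sketch of that pattern (illustration only, not EJW's actual LabVIEW code; the block sizes and timing are placeholders) might look like this:

import queue
import threading
import time

data_q = queue.Queue()          # stands in for the functional global / buffer

def acquisition_loop():
    """Producer: pretend to read 1000 samples x 16 channels once a second."""
    for _ in range(5):                              # a few iterations for the demo
        block = [[0.0] * 1000 for _ in range(16)]   # placeholder for the DAQ read
        data_q.put(block)
        time.sleep(1.0)
    data_q.put(None)                                # shutdown sentinel

def logging_loop():
    """Consumer: average each channel's block and 'stream' it to disk."""
    while True:
        block = data_q.get()
        if block is None:
            break
        averages = [sum(ch) / len(ch) for ch in block]
        # in the real program this row would be appended to the data file
        print(averages[:2], "...")

producer = threading.Thread(target=acquisition_loop)
consumer = threading.Thread(target=logging_loop)
producer.start(); consumer.start()
producer.join(); consumer.join()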

Link to comment
