
Fastest way to read-calc-write data with cDAQ

Solved by hooovahh


Let's pretend I wanted to simulate a solar panel using a cDAQ, a programmable power supply and a light sensor. I'd have to measure the voltage from the light sensor and the PSU's current, use those values to find the respective operating point on my light/voltage/current curve and update the PSU's settings. Let's also say I wanted to do this for multiple systems in parallel, all connected to one analog input and one analog output module in one cDAQ.
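For the "find the operating point" step, a minimal sketch of the lookup could be a piecewise-linear interpolation of a stored I-V curve (Python stand-in for the LabVIEW code; the curve data and function names here are hypothetical, just to show the shape of the calculation):

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation of y at x; xs must be ascending."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Hypothetical I-V curve for one irradiance level: voltage (V) -> current (A)
V_PTS = [0.0, 10.0, 15.0, 18.0, 20.0]
I_PTS = [3.0,  2.9,  2.5,  1.2,  0.0]

def operating_point(measured_voltage):
    """Return the panel current the PSU should be set to at this voltage."""
    return interp(measured_voltage, V_PTS, I_PTS)
```

In the real application there would be one such curve per irradiance level, with a second interpolation over the light-sensor reading.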

What would be the best way to achieve the quickest response time? Simply reading and writing single samples seems to be pretty slow (though I can encapsulate everything neatly this way).

Is there a better way?


If Windows is too slow for you, then you need something that can be more deterministic.  Before that, I'd recommend just looking at getting better performance from your code by looking for ways to optimize it, because the type of control you are talking about seems like it can be somewhat slow.  If that doesn't work, you need to respond quicker, and you want to stay with LabVIEW, then your options are some kind of real-time OS or FPGA.  The cheapest method isn't cheap; maybe you can get hold of a myRIO, which is still on the order of $1000.


On top of that, bench-top power supplies aren't made to change settings quickly, so even if you were able to sample an AI and output an AO at 1kHz, I doubt many supplies can respond that quickly.  In the past I've used power amplifiers for this type of work, and they are made to change output very quickly: give it 0-10V and it outputs 0-100V.  Of course this equipment is generally very expensive.


I know you want some more control over this simulator, but would discrete components work?  I mean, if you used an op-amp and some basic circuitry, could you have the voltage that varies from the solar panel directly drive the input to the PSU?


And also, this sounds like such a simple system that maybe a small embedded micro could work.  Program an Arduino-like board that samples an AI and outputs an AO.  The Teensy has a low-cost version that is only $12, has AIO, and can be programmed with the Arduino IDE using all of its libraries.


Without knowing your system details, I'd just recommend using normal Windows code.  It's simple and easy, and could probably just use some optimizing.


Looks like I should have been clearer with my post, sorry.

In the past I've worked with LV on Windows, RT desktop PCs, myRIOs and different cRIOs. I've chosen Windows for this application because it should be fast enough and because of the development overhead needed for FPGA development and so on. Also, my application is more complex than the solar panels, but those illustrate my central problem (read - calc - write; no chance of writing or reading multiple data points instead of single DBLs, because only the next value is known).


Getting more performance out of my code is my goal now; again, I'm sorry I didn't make that clear in the first post. I know the PSUs aren't really quick, that's okay. The data sheet states rise and fall times for the outputs in the range of 2ms to 50ms (depending on the load). I'd be perfectly happy if the code reading the voltages and feeding the PSUs were as fast as that.


Microcontrollers certainly would work, but again, that would take too much time now (and the simulation is only the first step in the project; later it's the 'real deal', once I know I won't break something).


So, back to topic:

I'll try to build a minimal VI that shows the problem tomorrow to post here; then we can be more specific. For now I'd like to know what the general approach would be. Would the code be significantly faster if I read all channels in parallel (with one DAQmx Read), calculated everything and wrote the values with one DAQmx Write, instead of single DBL reads and writes encapsulated in multiple VIs (overhead)? Could the USB connection (cable quality, length, USB hubs) have a big influence? Would an Ethernet chassis be faster? Is there a way to speed up reads and writes in the cDAQ?

  • Solution

Hm... well, I think a continuous read might be the fastest.  Set it up to perform something like N channels, N samples, reading continuously at a fast rate, then perform a read, taking in all samples ready to be read.  You will likely get lots of samples because of your fast rate, but who cares; just grab the newest for each channel, do calculations, and write a single point, or maybe have the write task already open, set to a continuous write with regeneration off.


The point I'm making is: I suspect that if you have your read task set to a continuous read and just read however many samples are in the buffer, this might be faster than reading a single point on each channel one at a time, because the task is already running and all that needs to happen is the data transfer.
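As a sketch of that read side in the nidaqmx Python API (a stand-in for the equivalent DAQmx VIs; the channel names are hypothetical, and the hardware-facing part is wrapped in a function since it needs real hardware and the driver installed):

```python
def newest_per_channel(data):
    """From a [n_channels][n_samples] read, keep the most recent sample per channel."""
    return [ch[-1] for ch in data]

def run_read_loop():
    # Requires NI-DAQmx and real hardware; shown for structure only.
    import nidaqmx
    from nidaqmx.constants import AcquisitionType, READ_ALL_AVAILABLE
    with nidaqmx.Task() as ai:
        ai.ai_channels.add_ai_voltage_chan("cDAQ1Mod2/ai0:15")  # hypothetical names
        ai.timing.cfg_samp_clk_timing(1000.0, sample_mode=AcquisitionType.CONTINUOUS)
        ai.start()
        while True:
            # Grab whatever has accumulated since the last read; the task keeps
            # running, so the per-iteration cost is only the data transfer.
            data = ai.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)
            latest = newest_per_channel(data)
            # ... calculate new setpoints from `latest` and write them ...
```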


Maybe the write can work the same way, but it might be more complicated.  If you already have the task running, performing a continuous write might be faster than multiple single-point writes, but regeneration needs to be off.  Otherwise your output will repeat: continuous writes are usually used to do something like continually output a sine wave, where you write once and the buffer loops back around, but in this case you wouldn't want that.  Sorry, I don't have any hardware to test on.


I presume you are using custom scales so you don't have to calculate inside your code?

A quick win over the classic read-then-write for DAQ is to pipeline. If that isn't good enough, you may need hardware-timed, clocked and routed signals, and that will be dependent on the hardware you are using.
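The pipelining idea can be sketched with a producer/consumer pair (Python stand-in for parallel LabVIEW loops joined by a queue; all names here are illustrative): while the consumer is still calculating and writing iteration k, the producer is already acquiring iteration k+1.

```python
import queue
import threading

def pipeline(read_fn, calc_fn, write_fn, n_iterations):
    """Overlap acquisition with calculation/output via a bounded queue."""
    q = queue.Queue(maxsize=1)

    def producer():
        # Acquisition loop: runs ahead of the calc/write stage by one sample.
        for _ in range(n_iterations):
            q.put(read_fn())
        q.put(None)  # sentinel: no more data

    t = threading.Thread(target=producer)
    t.start()
    results = []
    while True:
        sample = q.get()
        if sample is None:
            break
        results.append(write_fn(calc_fn(sample)))
    t.join()
    return results
```

Whether this actually helps depends on how much of the loop time is spent waiting on the DAQ hardware versus calculating; if the read dominates and the calc is trivial, the overlap buys little, which may be why pipelining the demo VI showed no gain.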


I attached a minimal VI showing what I'm doing now and what I'd like to have faster. HW is a cDAQ-9172 with an NI 9206 and an NI 9264 for IO. The PC it's running on is a normal laptop, a ThinkPad x230i. Like I said, I'd be happy if the VI reached 50ms loop timing.

@hooovahh: I'll try continuous modes.

@ShaunR: Yes, in the real application I'm using custom scales. Pipelining the short demo VI made it even a little slower.


Edit: I checked which task needs the most time; reading took 70ms. Setting the read task to continuous chopped those 70ms off of the loop time. I'm reaching 25ms loop timing for a single channel, or 50ms for 16-channel r/w - THANK YOU!



