volatile

Fastest way to read-calc-write data with cDAQ


Let's pretend I wanted to simulate a solar panel using a cDAQ, a programmable power supply and a light sensor. I'd have to measure the voltage from the light sensor and the PSU's current, use those values to find the respective operating point on my light/voltage/current curve and update the PSU's settings. Let's also say I wanted to do this for multiple systems in parallel, all connected to one analog input and one analog output module in one cDAQ.

What would be the best way to achieve the quickest response time? Simply reading and writing single samples seems to be pretty slow (though I can encapsulate everything neatly this way).

Is there a better way?


If Windows is too slow for you, then you need something that can be more deterministic.  Before that, I'd recommend looking for ways to optimize your code to get better performance out of it, because the type of control you are talking about seems like it can be somewhat slow.  If that doesn't work and you need to respond quicker, and want to stay with LabVIEW, then your options are some kind of real-time OS or FPGA.  The cheapest method isn't cheap: maybe you can get hold of a myRIO, which is still on the order of $1000.

 

On top of that, benchtop power supplies aren't made to change settings quickly, so even if you were able to sample an AI and output an AO at 1 kHz, I doubt many supplies can respond that quickly.  In the past I've used power amplifiers for this type of work, and they are made to change output very quickly: give it a 0-10 signal and it outputs 0-100.  Of course this equipment is generally very expensive.

 

I know you want some more control over this simulator, but would discrete components work?  I mean, if you used an op-amp and some basic circuitry, could you have the varying voltage from the solar panel directly drive the input to the PSU?

 

Also, this sounds like such a simple system that maybe a small embedded micro could work.  Program an Arduino-like board that samples an AI and outputs an AO.  The Teensy has a low-cost version that is only $12, has analog I/O, and can be programmed with the Arduino IDE using all of its libraries.

 

Without knowing your system details, I'd just recommend using normal Windows code.  It's simple and easy, and could probably just use some optimizing.


Looks like I should have been clearer with my post, sorry.

In the past I've worked with LV on Windows, RT desktop PCs, myRIOs and different cRIOs. I've chosen Windows for this application because it should be fast enough and because of the development overhead needed for FPGA development and so on. Also, my application is more complex than the solar panels, but those explain my central problem (read - calc - write, with no chance of reading or writing multiple data points instead of single DBLs, because only the next value is known).

 

Getting more performance out of my code is my goal now; again, I'm sorry I didn't make that clear in the first post. I know the PSUs aren't really quick, that's okay. The data sheet states rise and fall times for the outputs in the range of 2 ms to 50 ms (depending on the load). I'd be perfectly happy if the code reading the voltages and feeding the PSUs were as fast as that.

 

Microcontrollers would certainly work, but again that would take too much time now (and the simulation is only the first step in the project; later it's the 'real deal', once I know I won't break something).

 

So, back to topic:

I'll try to build a minimal VI tomorrow that shows the problem and post it here; then we can be more specific. For now I'd like to know what the general approach would be. Would the code be significantly faster if I read all channels in parallel (with one DAQmx Read), calculated everything and wrote the values with one write, instead of single DBL reads and writes encapsulated in multiple VIs (overhead)? Could the USB connection (cable quality, length, USB hubs) have a big influence? Would an Ethernet chassis be faster? Is there a way to speed up reads and writes in the cDAQ?


Hm... well, I think a continuous read might be the fastest.  Set it up to perform something like N channels, N samples, reading continuously at a fast rate, then perform a read that takes in all samples ready to be read.  You will likely get lots of samples because of your fast rate, but who cares: just grab the newest for each channel, do the calculations, and write a single point.  Or maybe have the write task already open, set to continuous write with regeneration disabled.

 

The point I'm making is: I suspect that if you have your read task set to continuous and just read however many samples are in the buffer, this might be faster than reading a single point on each channel one at a time, because the task is already running and all that needs to happen is the data transfer.

 

Maybe the write can work the same way, but it might be more complicated.  If you already have the task running, performing a continuous write might be faster than multiple single-point writes, but regeneration needs to be off.  Otherwise your output will repeat: continuous writes are usually used to do something like continually output a sine wave, where you write once and it loops back around, which you wouldn't want in this case.  Sorry, I don't have any hardware to test on.
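The grab-the-newest-sample idea can be sketched outside LabVIEW too. Here is a minimal plain-Python stand-in (the buffer contents are made up, and the continuous-task setup and the actual DAQmx calls are omitted):

```python
def newest_per_channel(buffer):
    # buffer is what a continuous-mode read returns when you ask for
    # all available samples: one list of accumulated samples per channel.
    # The control loop only cares about the most recent value of each channel.
    return [samples[-1] for samples in buffer]

# Hypothetical backlog after one read: 3 channels, 4 samples each,
# piled up in the task buffer since the previous read.
buffer = [
    [1.01, 1.02, 1.03, 1.04],  # AI0
    [2.01, 2.02, 2.03, 2.04],  # AI1
    [3.01, 3.02, 3.03, 3.04],  # AI2
]
latest = newest_per_channel(buffer)  # -> [1.04, 2.04, 3.04]
```

Discarding the backlog is fine here because the loop only needs the current operating point, not a gap-free waveform.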



I presume you are using custom scales so you don't have to calculate inside your code?

A quick win over the classic read-then-write for DAQ is to pipeline. If that isn't good enough, you may need hardware-timed, clocked and routed signals, and that will be dependent on the hardware you are using.

Edited by ShaunR


I attached a minimal VI showing what I'm doing now and what I'd like to have run faster. The hardware is a cDAQ-9172 with an NI 9206 and a 9264 for I/O. The PC it's running on is a normal laptop, a ThinkPad x230i. Like I said, I'd be happy if the VI reached 50 ms loop timing.

@hooovahh: I'll try continuous modes.

@ShaunR: Yes, in the real application I'm using custom scales. Pipelining the short demo VI actually made it a little slower.

 

Edit: I checked which task needs the most time; reading took 70 ms. Setting the read task to continuous chopped those 70 ms off the loop time. I'm reaching 25 ms loop timing for a single channel, or 50 ms for 16-channel r/w - THANK YOU!

2slow.vi

Edited by volatile

