
Choosing an RT Solution: cRIO or PXI



Hi,

 

My company has been developing and maintaining its own SCADA software in LabVIEW for a few years. It is fairly comprehensive: datalogging, graphing, alarm monitoring, automation, loops for equations and PIDs, and so on. It is a PC-based solution and communicates with many different kinds of hardware through the COM (RS-232), USB, and Ethernet ports of the PC. This solution works well and keeps costs low for most of our customers. Most of the loops run at around 10 Hz (100 ms).

 

However, more and more we are running into customer specifications that require high control rates (a few milliseconds), high determinism (to the millisecond), and high reliability. Not surprisingly, the PC solution becomes unacceptable. We feel it is time to look into a real-time, embedded solution for those customers. That's why I'm currently investigating the different NI embedded RT solutions (namely PXI and cRIO).

 

I can find plenty of resources on each of them, but close to nothing when it comes to comparing the two solutions and choosing which one to go with. Would you mind giving me some guidance? I guess you'll need more information, which I'll be happy to provide.

 

A few elements already:

- There is no requirement for MHz loops, so I believe the FPGA side of the cRIO is not required.

- Our application contains many VIs that are both the engine and the HMI, so there will be some decoupling effort if we need to split it into an HMI application on the PC and an RT application on the cRIO. Would a PXI solution avoid this issue by plugging a monitor directly into the display port of the controller? But then, if I have all the code in the PXI controller, am I likely to lose my control rate and determinism?

 

Thanks!

Emmanuel

 

 


PXI RT doesn't support full displays; in fact, at this exact moment you're better off with the new Atom-based cRIOs for that purpose. You can plug in a DisplayPort cable and most UI elements will render correctly (please, please check them first). The big example of something that doesn't is subpanels (last I heard). This does reduce determinism (the GPU fires off interrupts), but not as much as you might think. There are probably specs out there, but I don't think the added jitter goes above maybe 50 or 100 µs.

 

All the RTOSes use essentially a combination of preemptive scheduling (time critical, then scan engine, then timed loops, then normal loops) and round-robin scheduling (between threads at the same priority). I'd recommend reading chapter 1, page 24 (PDF page 12) of this guide: ni.com/compactriodevguide
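As a rough analogy only (this is plain Python against the POSIX scheduler on a Linux target, not anything LabVIEW exposes, and the priority numbers are made up), the same preemptive-plus-round-robin combination can be sketched like this:

```python
import os

# Rough analogy only: LabVIEW RT does not expose this API, but NI Linux
# Real-Time is a Linux kernel, so the same two ideas exist at the OS level.
# Priority numbers below are arbitrary examples, not NI's values.

TIME_CRITICAL = 80   # highest: preempts everything below it
TIMED_LOOP = 50      # middle tier
NORMAL = 10          # lowest real-time tier

def run_at_priority(rt_priority):
    """Give the current process a real-time priority.

    SCHED_RR is preemptive between different priority levels and
    round-robin (time-sliced) between peers at the same level, which is
    the combination described above. Requires root on a Linux target.
    """
    os.sched_setscheduler(0, os.SCHED_RR, os.sched_param(rt_priority))

if __name__ == "__main__":
    run_at_priority(NORMAL)
    # ...do the non-critical work here; a separate process set to
    # TIME_CRITICAL would preempt this one whenever it becomes runnable.
```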

 

Don't take this as gospel, but I'd approximate the differences as:

- Pick cRIO if you need the ruggedness features (shock, vibration, temperature) and you want a large number of distributed systems spread across a wider area. In exchange, you will lose some I/O count and variety versus PXI, and you will lose most of your synchronization options versus PXI. Both of these limitations are slowly being chipped away at: in the last few years we've gotten GPS, signal-based DSA sync, XNET CAN (versus manually making your own frames on the FPGA), and so on. cRIO is even closing the CPU power gap. You can achieve deterministic I/O through the Scan Engine or FPGA programming. If you have DLLs, VxWorks is really hard to compile for and Linux is really easy, but in both cases you'd need the source to cross-compile. Linux-based cRIOs can support Security-Enhanced Linux (SELinux), making them the more secure software option.

- PXI is significantly more powerful than even the newest Atom-based cRIOs and generally has better I/O counts and variety, if only because the power requirements are not as strict as they are on cRIO. If NI sells an I/O type, we probably have it for PXI and may have it for cRIO. You can achieve deterministic I/O through hardware-timed single-point mode, available with DAQmx for certain (probably most, by now) cards. Since you can get the monster chassis, you can sometimes simplify your programming by having a ton of I/O in one spot, but then you may exchange that programming complexity for wiring complexity. If you have DLLs, Phar Lap may just run them out of the box, depending on how many Microsoft APIs they use, and which specific ones (Phar Lap is vaguely a derivative of Windows, but very vaguely).

 

All that having been said, it's more shades of gray at this point. For example, if you need distributed I/O, you can use an Ethernet (non-deterministic) or EtherCAT (deterministic) expansion chassis from any RT controller with two Ethernet ports (i.e., most PXI controllers and half the cRIOs out there). You can do FPGA on both, and you could even use USB R Series to get deterministic I/O as an add-on to your Windows system.


Thanks a lot smithd.

 

My application is fairly complex, and there are probably around 30 subpanels if I count subpanels of VIs inserted inside subpanels of other VIs and so on, so stripping out the subpanels is just not an option. From your answer, it seems I'll need to split my application in two: an HMI application for the PC and an RT application for the cRIO or PXI.

 

For the cRIO, I understand that the HMI and the RT can communicate through shared variables or network streams. But what about the PXI? Is it common practice to have an RT application in the PXI controller and an HMI on a PC? And how would those two applications communicate?

 

Cheers


For network comms I'd just read this:

http://www.ni.com/pdf/products/us/criodevguidesec2.pdf

 

Communication options are pretty numerous. There are stream-based mechanisms (TCP), message mechanisms (network streams, STM, web services), and tag-based mechanisms (shared variables, OPC UA, Modbus). The mechanisms are the same for cRIO and for PXI, and those are all examples, not the complete set. For a SCADA system, OPC UA is probably a great fit, but you won't get waveforms, for example. Many systems use multiple mechanisms.
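To make the message vs. tag distinction concrete, here is a minimal sketch in plain Python over raw TCP (not any NI API; the 4-byte length prefix, JSON payload, host name, and port are my own assumptions) of a command message and a last-value tag table:

```python
import json
import socket
import struct

def send_message(sock, payload):
    """Message-style: each request/reply is a discrete, length-prefixed packet
    (the same idea network streams or STM implement for you)."""
    data = json.dumps(payload).encode()
    sock.sendall(struct.pack(">I", len(data)) + data)

def recv_exact(sock, n):
    """Read exactly n bytes or raise if the peer disconnects."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_message(sock):
    """Read exactly one length-prefixed JSON message back."""
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return json.loads(recv_exact(sock, length))

# Tag-style: the publisher just keeps the latest value per name; readers only
# ever see the most recent sample (the shared-variable / OPC UA / Modbus model).
tags = {"loop_rate_hz": 100.0, "valve_open": False}

if __name__ == "__main__":
    # Hypothetical RT target address and port, for illustration only.
    with socket.create_connection(("rt-target.local", 6341)) as sock:
        send_message(sock, {"cmd": "start_pid", "setpoint": 42.0})
        print(recv_message(sock))
```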


Is it common practice to have an RT application in the PXI controller and an HMI on a PC? And how would those two applications communicate?

This is really the preferred method for a PXI running RT.

 

One other thing I'd mention about cRIO vs. PXI: with a PXI you essentially need two programs written, one for the RT target and one for the Windows PC handling the UI and communication. With a cRIO you may need three applications: Windows PC, RT, and FPGA. This may mean more work if you can't fit within the Scan Engine profile on the FPGA, but it also means flexibility if you want to do other custom things.


For network comms I'd just read this:

http://www.ni.com/pdf/products/us/criodevguidesec2.pdf

 

Communication options are pretty numerous. There are stream-based mechanisms (TCP), message mechanisms (network streams, STM, web services), and tag-based mechanisms (shared variables, OPC UA, Modbus). The mechanisms are the same for cRIO and for PXI, and those are all examples, not the complete set. For a SCADA system, OPC UA is probably a great fit, but you won't get waveforms, for example. Many systems use multiple mechanisms.

I agree with smithd in that we always use more than one communication method. We often have multiple network streams and shared variables at the very least. Beyond that, we sometimes implement other forms of messaging, but unless your system is slow enough to rely only on shared variables, I would expect you'll need at least two methods and multiple "instances" of each.

 

This is really the preferred method for a PXI running RT.

 

One other thing I'd mention about cRIO vs. PXI: with a PXI you essentially need two programs written, one for the RT target and one for the Windows PC handling the UI and communication. With a cRIO you may need three applications: Windows PC, RT, and FPGA. This may mean more work if you can't fit within the Scan Engine profile on the FPGA, but it also means flexibility if you want to do other custom things.

Hoovah is correct about the need for a third application, but unlike the PC and RT applications, the FPGA program is usually fairly simple unless you need to add custom logic: basic I/O access and DMA FIFO access. You still have to write it, but it is mostly copy-and-paste from existing examples with little more required. Do not assume the FPGA requires a large effort if you will only stream AI/AO with the RT portion.


Do not assume the FPGA requires a large effort if you will only stream AI/AO with the RT portion.

 

Thanks, everybody. Well, in our case we'll also need Ethernet, CAN, and RS-232 communication in order to send commands to the different hardware devices in a fast and deterministic manner. I don't know yet whether the Scan Engine supports those or whether we'll have to write FPGA code for them.

 

Now that I think about it, the Ethernet and RS-232 ports might be available on the RT controller and be accessed directly through LabVIEW RT? And only the CAN might be a module in the chassis and therefore require the FPGA layer?


Now that I think about it, the Ethernet and RS-232 ports might be available on the RT controller and be accessed directly through LabVIEW RT? And only the CAN might be a module in the chassis and therefore require the FPGA layer?

Yup, someone can correct me if I'm wrong, but I believe Ethernet and RS-232 (VISA) are RT-only resources and the normal VISA and TCP palettes use them; no FPGA needed.

 

Doing CAN on the FPGA is something I have done before, but just like AI/AO streaming, there is an example that shows how to send all the CAN data to a FIFO, along with RT VIs that convert it all to frames; then XNET or other custom code can convert it to whatever you want. If you wanted to write something custom, like setting an output when a specific frame is seen, that could live on the FPGA if you weren't okay with the response time of going through RT.
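For illustration only, a sketch in plain Python of the RT-side idea of rebuilding frames from a flat DMA FIFO of 32-bit words; the four-words-per-frame packing here is purely my assumption, not the layout used by any NI example:

```python
from dataclasses import dataclass

@dataclass
class CanFrame:
    arbitration_id: int
    data: bytes  # up to 8 payload bytes

def words_to_frames(words):
    """Rebuild CAN frames from a flat stream of u32 words read out of a FIFO.

    Hypothetical packing: word 0 = arbitration ID, word 1 = DLC,
    words 2-3 = payload packed little-endian, four words per frame.
    """
    frames = []
    for i in range(0, len(words) - 3, 4):
        arb_id, dlc, lo, hi = words[i:i + 4]
        payload = (lo.to_bytes(4, "little") + hi.to_bytes(4, "little"))[:dlc]
        frames.append(CanFrame(arb_id, payload))
    return frames

# Example: one frame, ID 0x123, with three data bytes 0x01 0x02 0x03.
print(words_to_frames([0x123, 3, 0x030201, 0]))
```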


XNET is supported with the newer modules; the only CAN module that requires FPGA coding is the really old one. PXI only has XNET, so you're safe there. As mentioned, serial can be done through RT/VISA only; you just need the right driver.

 

Also, as mentioned, the general practice for RT is to have a separate controller and HMI. This improves determinism but is also kind of freeing: it's a ton easier to split the work among multiple developers as long as you all know what communication mechanism you're using to interact. And of course you can use whatever mechanism you want. I've frequently seen the host side of the SCADA system written entirely in .NET or Java while the RT portion is LabVIEW, and that works nicely.


You have received some very good input. I do recommend designing the RT portion of the application to be as stand-alone as possible. Losing network communication with the HMI will happen at some point, and the RT application should be able to operate safely without any user interaction. It is important that your data communication method between the RT and the HMI allow for reconnection without having to stop either application.
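As a minimal sketch of that idea (plain Python, nothing NI-specific; the host name, port, and timing values are made up), the HMI side can retry the connection forever without restarting, while the RT side keeps running regardless of whether a client is attached:

```python
import socket
import time

RT_HOST, RT_PORT = "rt-target.local", 6341  # hypothetical address

def hmi_link():
    """HMI-side connection loop: reconnect forever without restarting the app."""
    while True:
        try:
            with socket.create_connection((RT_HOST, RT_PORT), timeout=5) as sock:
                serve_connection(sock)  # exchange commands/status until it drops
        except OSError:
            pass                        # refused, timed out, or dropped mid-stream
        time.sleep(2)                   # back off, then try again

def serve_connection(sock):
    """Placeholder for the real command/status exchange."""
    while True:
        sock.sendall(b"ping\n")
        if not sock.recv(64):
            return                      # RT side went away; fall back to the retry loop
        time.sleep(1)

if __name__ == "__main__":
    hmi_link()
```

The RT application would run its control loop unconditionally and treat a missing client as a normal state, not an error.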


I agree with smithd in that we always use more than one communication method. We often have multiple network streams and shared variables at the very least. Beyond that, we sometimes implement other forms of messaging, but unless your system is slow enough to rely only on shared variables, I would expect you'll need at least two methods and multiple "instances" of each.

I haven’t had an RT project in several years, but if I had a new one I would probably stick to only a single (message-based) method of communication between RT and HMI, and possibly only one instance of that. What do other people who do a lot of RT work do?


I haven’t had an RT project in several years, but if I had a new one I would probably stick to only a single (message-based) method of communication between RT and HMI, and possibly only one instance of that. What do other people who do a lot of RT work do?

Network streams, or something simpler like the STM library, are well suited to commands from the HMI to the RT target, or to status updates and events from the target to the HMI.

For tag data, where you only care about the last value, unbuffered network shared variables are good.

Doing it all as one pipe could be done, sure; it's all TCP/IP under the hood anyway.
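If you did collapse everything onto one pipe, a sketch of the idea (again plain Python, with a made-up "kind" field rather than any NI protocol) is to tag each packet so commands and last-value tag updates share a single TCP connection:

```python
import json
import struct

def pack(kind, body):
    """Wrap any payload with a 'kind' tag and a length prefix so commands,
    replies, and tag updates can all travel over one TCP connection."""
    data = json.dumps({"kind": kind, "body": body}).encode()
    return struct.pack(">I", len(data)) + data

# A command from the HMI...
wire = pack("command", {"name": "set_setpoint", "value": 42.0})
# ...and a periodic tag update from the RT target, on the same socket.
wire += pack("tags", {"loop_rate_hz": 100.0, "valve_open": False})

def unpack_all(buffer):
    """Split a received byte stream back into individual packets."""
    packets, offset = [], 0
    while offset + 4 <= len(buffer):
        (length,) = struct.unpack_from(">I", buffer, offset)
        if offset + 4 + length > len(buffer):
            break  # partial packet; wait for more bytes
        packets.append(json.loads(buffer[offset + 4:offset + 4 + length]))
        offset += 4 + length
    return packets

print(unpack_all(wire))
```

The trade-off is that one slow consumer now sits in the path of everything, which is one reason people often keep a separate channel for high-rate streaming data.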


You've probably already decided which system to go with, but I would recommend the cRIO over the PXI any day of the week.

 

The cRIO is rugged, very cheap (compared to PXI), more modern, has no irritating fan, and is far more flexible thanks to its built-in FPGA capabilities. The PXI has some modules with mechanical relays (the cRIO only has one) and a lot of modules for radio applications (but I find them somewhat archaic, and with the current progress in SDR and USRP we'll hopefully see some sort of useful radio module soon).

 

So unless you have a specific requirement that only the PXI can fulfil, I honestly can't see any advantages to it.
