
Optimizing instrumentation VI with multiple timed loops



Our CAN bus driver contains 3 timed loops:

- #1 prepares and sends outgoing messages

- #2 receives and parses incoming messages 

- #3 handles events and checks the communication status

 

Our driver's performance is not satisfactory (we want it to run at a 10 ms rate). It works OK most of the time (11 ms, 9 ms, 12 ms, 10 ms, 11 ms, and so on), but the gap between loop iterations sometimes jumps to 200 ms before going back to an acceptable value. Quick tests showed that this is related to a user request to change which HMI to display in a completely unrelated area of our application.

 

This logically led me to one obsession: making sure our driver never relies on the UI thread. To achieve that, I set the preferred execution system to "other 1" and the priority to "time-critical". I know setting the execution system to "other 1" is not a full guarantee, so I also tried to get rid of everything that would force the VI into the UI thread: I removed all the indicators and all the property and invoke nodes.

 

Performance has already improved a lot, and HMI activity in other parts of our application no longer seems to influence the timed loop iterations.

 

However, I'd like to see what else I could optimize:

 

- I still have a "stop" Boolean control. When it is set to True from another "manager" VI, the driver has to stop. All 3 timed loops poll this control to know whether to exit (one directly through the terminal, the other 2 through local variables). I know polling is not great, but my main worry is: because this is a control, does the VI still go to the UI thread whenever it is polled, or only when its value changes? I should also mention that I never open the front panel of this VI (see the sketch after the next point).

 

- Loops #1 and #2 need to be able to notify #3 that a read or write operation failed, so that #3 can take care of resetting the device and restarting the communication. Right now, #1 and #2 simply write the error to a dedicated error cluster (so I actually still have 2 indicators left, I guess) and #3 polls those errors through local variables. Does that mean the VI goes to the UI thread on every iteration, since I am writing to those error clusters each time the loops run?
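Since LabVIEW diagrams don't paste into a post, here is a rough C analogy of the two patterns above. It is purely illustrative: all the names are made up, and the atomic variables are just standing in for the stop control and the error cluster locals.

/* Rough C analogy of the current design (the real code is LabVIEW;
   every name below is invented for the sketch). */
#include <stdatomic.h>
#include <stdbool.h>

int  do_can_io(void);               /* placeholder: send or receive        */
void wait_until_next_period(void);  /* placeholder: 10 ms timed-loop wait  */
void reset_device_and_restart(void);

static atomic_bool stop_requested;  /* the "stop" control                  */
static atomic_int  last_error;      /* the error cluster + local variable  */

void io_loop(void)                  /* loops #1 and #2 */
{
    while (!atomic_load(&stop_requested)) {   /* poll "stop" */
        int err = do_can_io();
        if (err != 0)
            atomic_store(&last_error, err);   /* publish the error */
        wait_until_next_period();
    }
}

void supervisor_loop(void)          /* loop #3 */
{
    while (!atomic_load(&stop_requested)) {
        int err = atomic_exchange(&last_error, 0);  /* poll and clear */
        if (err != 0)
            reset_device_and_restart();
        wait_until_next_period();
    }
}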

 

Any comment on optimization in general is more than welcome!

 

Emmanuel


It really sounds like you want something embedded, and it really sounds like you don't have something embedded. Is there a reason for this choice? Work all you want at setting loop priorities and avoiding the UI, but as soon as Windows feels like defragging, indexing search, updating anti-virus, or timing out talking to a driver that doesn't exist, you will be at the mercy of Windows as far as loop rates are concerned. An embedded cRIO, Real-Time, FPGA, or even a micro with CAN is going to give you better performance.

 

What driver are you using for CAN? XNET comes with a driver-level wait for messages, so you don't need to poll the hardware for new messages. It works almost like a trigger on a message, with an optional timeout. This might help performance as well.
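In the C flavor of the API the pattern looks roughly like this. Take it as a sketch only: I am simplifying the session setup, and ":memory:" and "CAN1" are just example arguments (the LabVIEW XNET Read VI exposes the same timeout behavior).

/* Sketch of XNET's driver-level wait (simplified; see the NI-XNET manual
   for real session setup). nxReadFrame blocks until frames arrive or the
   timeout expires, so there is no need to poll for new messages. */
#include <nixnet.h>
#include <stdio.h>

int main(void)
{
    nxSessionRef_t session;
    u8  buffer[sizeof(nxFrameCAN_t) * 10];  /* room for a burst of frames */
    u32 bytesReturned;

    /* Frame input stream session; ":memory:" and "CAN1" are examples. */
    nxStatus_t status = nxCreateSession(":memory:", "", "", "CAN1",
                                        nxMode_FrameInStream, &session);
    if (status != 0)
        return 1;

    /* Wait up to 100 ms; returns as soon as a frame shows up. */
    status = nxReadFrame(session, buffer, sizeof(buffer),
                         0.1 /* timeout in seconds */, &bytesReturned);
    if (status == 0)
        printf("got %u bytes of frame data\n", (unsigned)bytesReturned);

    nxClear(session);
    return 0;
}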


I have a question kind of related to this.

 

I have a couple of processes (subVIs), each consisting of a message handling loop and a task loop in parallel. The task loop sometimes requires parameter updates, which so far have been passed using local variables (there are no time-critical actions associated with these parameters). These subVIs never have their front panels opened.

 

What I wanted to know is: do local variables still go to the UI thread even if the front panel isn't open? Secondary to that, how do 0-timeout queues compare to local variables for passing small amounts of data (say, a cluster of 3-4 parameters)?
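To be concrete, by a 0-timeout queue I mean the equivalent of this C analogy of Dequeue Element with the timeout wired to 0 (names invented; the real code is LabVIEW):

/* C analogy of a 0-timeout dequeue: a single-slot mailbox where the task
   loop grabs the latest parameters if any are pending and otherwise
   continues immediately. Illustrative only. */
#include <pthread.h>
#include <stdbool.h>

typedef struct { double gain, offset, limit; } Params;  /* made-up cluster */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static Params pending;
static bool   fresh = false;

void publish_params(const Params *p)   /* message-handling loop */
{
    pthread_mutex_lock(&lock);
    pending = *p;
    fresh   = true;
    pthread_mutex_unlock(&lock);
}

bool try_get_params(Params *out)       /* task loop, "timeout = 0" */
{
    bool got = false;
    if (pthread_mutex_trylock(&lock) == 0) {  /* never blocks */
        if (fresh) { *out = pending; fresh = false; got = true; }
        pthread_mutex_unlock(&lock);
    }
    return got;  /* false = "timed out": keep using the old parameters */
}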

It really sounds like you want something embedded, and it really sounds like you don't have something embedded.  Is there a reason for this choice?

 

I guess the answer is cost. Not just the hardware itself: we are also very tight when it comes to development time, as we are a small team with a lot on our plate.

 

What driver are you using for CAN? XNET comes with a driver-level wait for messages, so you don't need to poll the hardware for new messages.

 

We are using the standard NI-CAN Frame API.

 

Our problem is not so much on the incoming side. The biggest issue is that if our application doesn't send a specific message at least every 50 ms or so, the customer's hardware enters a fault-protection mode... And the data in this message keeps changing, so we can't just set up a periodic message and let the low-level CAN driver handle it.
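In Frame API terms our transmit loop boils down to something like this (a C sketch; "CAN0", the arbitration ID, and the helper functions are placeholders, and the ncConfig/start-up steps are omitted):

/* Sketch of the keep-alive transmit loop (C flavor of the Frame API).
   ncConfig and interface start-up are omitted; IDs and pacing are
   placeholders. */
#include <nican.h>

void get_latest_payload(NCTYPE_UINT8 data[8]);  /* placeholder: fresh data  */
void wait_ms(int ms);                           /* placeholder: loop pacing */

void transmit_loop(volatile int *stop)
{
    NCTYPE_OBJH      handle;
    NCTYPE_CAN_FRAME frame;

    if (ncOpenObject("CAN0", &handle) < 0)
        return;

    frame.ArbitrationId = 0x123;      /* placeholder ID */
    frame.IsRemote      = NC_FALSE;
    frame.DataLength    = 8;

    while (!*stop) {
        get_latest_payload(frame.Data);          /* data changes every cycle */
        ncWrite(handle, sizeof(frame), &frame);  /* must land within 50 ms   */
        wait_ms(10);
    }
    ncCloseObject(handle);
}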

 

Terminals and local variables don’t use the UI thread, and are asynchronous to the Front Panel controls they represent (unless you set the “Synchronous” option).  

 

That really helps, thanks!

Hooovahh already mentioned the XNET CAN API, and I think you would see a big performance increase by using XNET-supported hardware instead of legacy CAN.

 

We were using CAN on a Pharlap system and needed to reduce jitter to be able to run simulations at 1 kHz, acting as up to 10 CAN bus masters.

It took a lot of tweaking to get CAN running the way we wanted. We used the Frame API and a frame-to-channel converter of our own (which supported multi-frame messages for J1939), and changed some things in the NI-CAN driver, e.g.:

* changed the reentrancy setting for some VIs (e.g. ncReadNet)

* moved indicators out of case structures

* pre-allocated buffers for ncReadMult (a sketch of this follows below)
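To illustrate that last point: the buffer handed to ncReadMult is sized once, outside the loop, and reused every iteration, rather than being allocated per read. A C sketch of the idea (the 50-frame capacity is arbitrary and the setup is omitted):

/* Pre-allocated read buffer for ncReadMult (C flavor of the Frame API;
   sketch only -- setup omitted, capacity chosen arbitrarily). */
#include <nican.h>

#define MAX_FRAMES 50

void receive_loop(NCTYPE_OBJH handle, volatile int *stop)
{
    /* Allocated once, before the loop; in LabVIEW terms, initialize the
       array outside the loop and reuse it on every iteration. */
    NCTYPE_CAN_STRUCT frames[MAX_FRAMES];
    NCTYPE_UINT32     actualBytes;

    while (!*stop) {
        if (ncReadMult(handle, sizeof(frames), frames, &actualBytes) >= 0) {
            NCTYPE_UINT32 n = actualBytes / sizeof(NCTYPE_CAN_STRUCT);
            (void)n;  /* ... parse the n frames here ... */
        }
    }
}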

 

We also participated actively in the beta testing for XNET 1.0, and once we got the XNET hardware our CAN performance issues were gone.

 

/J

