
fundamental question about order of execution


Jordan Kuehn

Recommended Posts

Yeah, I thought I had completely understood order of execution years ago. It's not that hard, right? If something is wired into something else it goes first, and things that are unwired happen first too. Well, you know what I mean.

 

Here's the situation I'm working with. There's an existing cRIO-based application that needed a quick fix. In order to do so and report back to the Windows host, a Network Shared Variable (NSV) was added. This variable is updated during initialization, but is polled during an idle state in the RT code's main process. The NSV is not wired with error wires (currently) in this idle state. No buffering or RT FIFO is enabled. I have an intermittent problem where the NSV in that idle case claims that a part of the initialization failed even though it hasn't (its sole purpose is to monitor this).

Here's the question: is it possible that the NSV in the downstream loop and case structure is being read before the initialization completes? It is my understanding that a structure acts similarly to a subVI in that *nothing* inside it will execute until all the inputs to the structure are available. However, this is the only sane explanation I can come up with for this behavior.

Link to comment

I used Shared Variables in a project about seven years ago to communicate between a Windows touch panel and an RT controller. I thought I was having a similar issue until I realized that it took as long as 0.03 seconds for an SV to update. Are you relying on a write in the initialization being read in the downstream loop? It could be that your code gets there before the SV actually updates.

 

Maybe not.

Link to comment

This is certainly possible, especially if the default value is the value that indicates the stage was not initialized correctly. I will investigate further. The steps that follow that write in the initialization could certainly take right around the time you are suggesting, which would also explain the intermittent nature of it.

 

Sorry for my sort of convoluted original question.  I'll look into this possibility and report back, with screenshots if it is some other problem.

Link to comment
  • 1 month later...

It looks like this was the issue: the network variable was only intermittently getting populated in time. Race condition fixed. I cannot recall if the original problem code had the NSV error wires connected or not, but they are now, and a small delay has been added afterward. No issues since. Thanks for the suggestions.

Link to comment

I think you just circumvented the race condition but didn't really solve it. Suppose your network has a hiccup and the update gets delayed for longer than your small delay!

 

A possible solution would be to keep a local copy of the shared variable that is the real value, and whenever you update it, you also update the shared variable. Of course this will only work if you can limit writing of that SV to one location.
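In rough terms, the pattern being suggested is something like this (a Python-style sketch standing in for what would really be a LabVIEW block diagram; write_nsv is just a placeholder for the actual Shared Variable write node):

    # Conceptual sketch only -- the real implementation is LabVIEW G.
    # local_status is the authoritative value; the NSV is just a published mirror.

    local_status = False                # the "real" local copy (shift register / FGV storage)

    def write_nsv(value):
        """Placeholder for the Network Shared Variable write node."""
        pass

    def update_status(new_value):
        global local_status
        local_status = new_value        # update the real value first
        write_nsv(new_value)            # then mirror it out to the host over the network

    def read_status():
        # RT-side readers only ever touch the local copy, so a slow NSV update can't race them
        return local_status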

Link to comment

The network variables are hosted on the cRIO where the code runs. Unfortunately the SV needs to be written in at least two locations: immediately after initialization, and then continuously as it monitors the status afterwards. It's checking the Drive Status of a 9501. We've had problems with it failing under heavy vibration, and this flag is an early indicator. NI doesn't have a fix for us, so we need to at least know when it is dead.

Link to comment

Well, but the 9501 is in the cRIO too! So do you mean that the SV is written once in the RT application during initialization and once in the FPGA code, or something like that?

Because if both writes are done in the RT code, I still think you have basically only one source of the data for this and can encapsulate it in a non-reentrant buffer VI that makes sure to synchronize access to the local value and the SV.

Link to comment

Once during the initialization and then continuously updated in the loop that communicates with the FPGA. All in the RT code. I will think about this suggestion. Basically you are suggesting making an FGV for local access and using the NSV to keep the communication back to the host (the only reason it is an NSV). This is a well-established project and modifying the current communication process is a much bigger change than using an NSV to monitor this flag.

Link to comment

Yes that is what I was thinking. On "read" just read the local FGV shift register and on "write" update both the NSV as well as the shift register. As long as you can make sure that the write always happens through this FGV on the RT system and anyone else only reads the NSV this should be perfectly race free.

 

Most likely you can even perform an optimization in the FGV and only write to the NSV when the new value is different from the previous one.
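In text form, the action engine being described works roughly like this (a hedged Python sketch; in the real FGV the stored value would live in an uninitialized shift register, the VI's non-reentrancy provides the locking, and nsv_write stands in for the Shared Variable node):

    # Sketch of the non-reentrant "action engine" pattern under discussion.
    import threading

    _lock = threading.Lock()            # mimics the serialization a non-reentrant VI gives you
    _stored = None                      # the uninitialized-shift-register equivalent

    def nsv_write(value):
        """Placeholder for writing the Network Shared Variable."""
        pass

    def drive_status_fgv(action, value=None):
        """action is 'read' or 'write'; returns the current locally stored value."""
        global _stored
        with _lock:
            if action == "write":
                if value != _stored:    # optional optimization: only publish changes
                    nsv_write(value)
                _stored = value
            return _stored              # 'read' never touches the NSV at all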

Link to comment

Thanks for the suggestions everyone. The action engine seems to be the more robust approach with regard to guaranteeing the completion of the initialization in the RT code, though the other approaches that monitor the variable are also good. Using the action engine takes the NSV out of any functional code and merely publishes the data to the host, which I like. I may wind up implementing a mix of approaches here; we'll see.

 

Thanks again.

Link to comment
A more robust solution, shown in some of the courses, is to add a loop after initialisation that reads the values back until they match the initial value, which avoids the dependence on picking a delay value.
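Roughly, that read-back loop would look something like this (a hedged Python sketch of the idea; nsv_read, the expected value and the timeout are placeholders, and the real thing would of course be a LabVIEW while loop):

    import time

    def nsv_read():
        """Placeholder for the Shared Variable read node."""
        return None

    def wait_for_nsv(expected, timeout_s=5.0, poll_s=0.05):
        """Poll the NSV until it reports the expected value, or give up after timeout_s."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if nsv_read() == expected:      # value has propagated through the variable engine
                return True
            time.sleep(poll_s)
        return False                        # report failure instead of guessing a fixed delay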

 

I would dispute the "more" in more robust with respect to an FGV/action engine. It's possibly equally robust, at the cost of querying an NSV repeatedly, which is certainly a more resource-intensive operation than querying an FGV with a shift register, even if the NSV is deployed and hosted on the cRIO.

It would be unavoidable if someone else on the network could also write to the NSV, but in the case where it is clearly published by the cRIO only, there is no advantage at all in using an NSV alone other than not having to write a small VI, and that is a one-time cost.

Link to comment
Yes, I appeared to be in a time warp when I wrote that and was referring to it being more robust than adding a delay. Personally, in this case I don't see the case for increasing complexity with the FGV. Yes, polling the NSV is less efficient, but this only occurs at start-up before the application runs. With an FGV you now have to keep the two in synchronisation, and there is a possibility that your FGV can hold a different value from the NSV, which would be a nightmare to track down. I'm not saying it will with the current implementation described, but it can if not used in the correct manner. And I don't see a great benefit other than a slight performance improvement (which will only really show if you are polling it fast).
Link to comment

I think the argument that one has an advantage over the other in terms of the current situation is valid for both cases :-).

Future modifications to the application could render the decision to go for one or the other invalid in both cases: the NSV-only case if that variable is suddenly also polled repeatedly throughout the application, rather than only at initialization; the FGV case if someone modifies the application without understanding FGVs and in the process botches its functionality.

 

For me the choice is clear, as I use FGVs all the time, understand them quite well and can dream up an FGV much quicker than I can get an overview of an architecture where global variables are sprinkled throughout the code. And an NSV is very much a global variable, just with a potentially rather resource-hungry network access engine chained to its hands and legs.

Link to comment

In this case the FGV is only a global variable as well though, so there is no advantage there.

 

I wanted to check my figures on some of this, so I did some benchmarking. The other element I failed to mention before is that enabling the RT FIFO will sort out a lot of performance issues. The results I saw (on my laptop):

 

(times in ms)

Variable         Access Time (1)    Access Time (4 in parallel)
FGV              0.46               6.26
NSV              8.34               16.9
NSV + RT FIFO    0.078              0.315

 

So, FGV vs NSV: the FGV is much faster (as expected). What always worries me with FGVs is that I have seen them kill code performance when used heavily in parallel, hence the comparison there. In this case though they are still better than the NSV.


The RT FIFO, though, makes the implementation deterministic and much higher performance. It isn't going to reduce the latency of the updates to and from the SVE, but for raw access time it is much faster.

 

I am not suggesting that you should drop all FGVs for NSV + RT FIFO. The added overhead of running the variable engine comes into play, as do developer preferences, but for this use case there is no need to add extra code for performance improvements.

 

I have attached the code. Thrown together a bit quickly, as I should be doing busy work now, but I think the results stand.

FGV vs NSV.zip

Link to comment

That I haven't dug into figures on. The overhead of the SVE will increase, as it has to timestamp each read/write etc., and it could also depend on the number of clients connected. As I say, in this case I'm assuming this is happening anyway, as you are using it for network communications.

 

For the individual read/writes, I expect this will have some impact on those without RT FIFOs; with the RT FIFO your code is isolated from the SVE load, which is what makes it deterministic. I haven't seen the implementation, but I guess it is similar to using a single-element queue. I think the read will have no impact on the engine, as we will only be touching the RT FIFO. The write will, but it will be handled asynchronously by the engine.

Link to comment
Quote: "In this case the FGV is only a global variable as well though, so there is no advantage there."

 

I'm aware that it is. However, in my experience they very quickly evolve because of additional requirements as the project grows. And I prefer to have the related code centralized in the FGV rather than sprinkled around several subVIs throughout the project or, as often happens when quickly adding a new feature, attached directly to the global variable itself in the various GUI VIs.

 

Now if I could add some logic into the NSV itself and maintain it with it, then who knows :-).

 

As it stands now, the even cleaner approach would be to write a LabVIEW library or LVOOP class that manages all aspects of such "global variable" logic and use that instead. But that is quite a bit more initial effort than creating an FGV, and I also like the fact that I can easily do a "Find All Instances" and quickly visit all the places where my FGV is used when reviewing modifications to its internal logic.

 

I will have to check out the performance test VIs you posted. The parallel access numbers look very much like you somehow forcefully sequentialized access to those VIs in order to create out-of-sequence access collisions. Otherwise I can't see why accessing the FGV in 4 places should suddenly take about 15 times as long.

Quote: "... with RT FIFO your code is isolated from the SVE load, that is what makes it deterministic. I haven't seen the implementation but I guess it will be similar to using a single element queue."

So basically the NSV + RT FIFO is more or less doing what the FGV solution would do: maintaining a local copy that gets written to the network when it changes, while normally only polling the internal copy?

Link to comment

That's it, same solution but less code you have to write.

 

 

Quote: "The parallel access numbers you posted look very much like you somehow forcefully sequentialized access to those VIs in order to create out of sequence access collisions. Otherwise I can't see why accessing the FGV in 4 places should suddenly take about 15 times as long."

 

At the minute the code is just 4 parallel for loops accessing the FGV as fast as possible. What this means compared to a real application is that there is no downtime at all, so you probably would not see it as being this significant without a lot of usage. But this is always a concern with any non-reentrant code. I was called in on a system once that talked fine to 2 cRIOs but had double-digit-second latency when trying to talk to 14. It turned out to be a non-reentrant VI at the center of the comms protocol that all of them were fighting over. For this reason I always avoid polling FGVs in high-performance loops now.
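For what it's worth, the effect is easy to reproduce outside LabVIEW as well; here is a rough, hedged Python illustration in which a lock plays the role of the non-reentrant VI's arbitration and four threads play the role of the parallel loops (numbers are illustrative only):

    import threading, time

    lock = threading.Lock()

    def access_shared(work_s=0.0005):
        with lock:                      # only one caller at a time, like a non-reentrant VI
            time.sleep(work_s)          # stand-in for the actual read/write work

    def hammer(n=200):
        for _ in range(n):
            access_shared()

    t0 = time.perf_counter()
    threads = [threading.Thread(target=hammer) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # With four contending callers the average time per call grows well beyond the
    # single-caller case, because most of the time is spent waiting for the lock.
    print("avg per call, 4 contenders:", (time.perf_counter() - t0) / (4 * 200))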

Link to comment


Thanks, that makes sense! And I'm probably mostly safe from that issue because I tend to make my FGVs quite intelligent, so that they are not really polled in high-performance loops but rather manage them instead. :D

 

It does show a potential problem in the arbitration of VI access, though, if that arbitration eats up that many resources.

Link to comment
  • 1 year later...

Yeah, it's an old thread, but having just stumbled across it, it's new to me. And the VI access arbitration issue is one I ran into but found a workaround for.

I solved a similar latency timing issue related to FGV access arbitration on an RT system by making the FGV reentrant. Well, not *only* by making the FGV reentrant, then it wouldn't be an FGV any more.

 

Internally, I changed the data storage mechanism to be a single-element queue with a hardcoded name, so every reentrant instance would get a unique reference to the single shared queue. The queue refnum was stored directly in the FGV on first access. All requests to write would dequeue first, thus blocking any parallel attempts to access the queue by other "FGV" instances. The reason this was a win is that the access arbitration mechanism for queues is very low latency, unlike the (apparent) arbitration mechanism for subVI access.
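For anyone trying to picture the mechanism, here is a rough Python analogue of the idea (hedged: the original is LabVIEW G, the names below are made up, and a dict plus a lock stands in for LabVIEW's obtain-queue-by-name lookup):

    import queue
    import threading

    _named_queues = {}                      # stands in for LabVIEW's named-queue registry
    _registry_lock = threading.Lock()

    def _obtain(name="drive_status_seq", default=None):
        """Obtain (and on first access, prime) the single shared one-element queue."""
        with _registry_lock:
            if name not in _named_queues:
                q = queue.Queue(maxsize=1)
                q.put(default)              # prime the single element
                _named_queues[name] = q
            return _named_queues[name]

    class StatusFGV:
        """Each 'reentrant instance' caches its own reference to the same shared queue."""
        def __init__(self):
            self._q = _obtain()             # refnum equivalent, stored on first access

        def read(self):
            value = self._q.get()           # dequeue briefly blocks the other instances
            self._q.put(value)              # put it straight back: non-destructive read
            return value

        def write(self, value):
            self._q.get()                   # dequeue first, blocking parallel writers
            self._q.put(value)              # store the new value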

 

Oh wait, one other detail. As I recall, it wasn't standard subVI access arbitration that was the problem, it was stuff related to arbitrating a priority inversion when a time-critical loop wanted access to the FGV at an instant when a lower-priority process was executing it. That particular mechanism would add a distinct spike to our main loop execution time during the occasional cycles where the collision occurred, the spike being several times larger than our nominal execution time.

 

After making the "FGV" reentrant but with all instances accessing the same queue, voila! No more timing spikes! The other nice thing about this particular workaround was that none of the source code for the dozens of modules that accessed the FGV had to be modified.

 

Our platform was a standard RT-capable PC rather than cRIO, and this was under LV 2010. Not sure the workaround applies to cRIO, but I wanted to share an approach that might be worth a try for anyone else who stumbles onto the thread in the future.

 

-Kevin P

Link to comment
