
Type-def'd shared variables: good or bad idea for performance?



Hi all,

 

I am working on a typical HMI + cRIO-9074 control system, porting over / lightly refactoring some code that was written to run on a PC. There are going to be quite a few variables to keep track of, but many of them are logically related, and internally I would keep them in type-def'd clusters.

Instead of creating individual shared variables for every single boolean, floating-point number, and string, I am thinking that grouping the variables into type-def'd shared variables will be good for managing the information, and will also improve performance (reduce CPU load on the cRIO) by reducing the total number of shared variables.

 

Is my hunch about performance correct, or will this be immaterial?
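
To make the grouping idea concrete, here is a loose analogy in Python (dataclasses standing in for type-def'd clusters; the names are invented for illustration, not taken from my actual code):

from dataclasses import dataclass

# Loose analogy only: one published value per logical group of fields,
# instead of one shared variable per scalar.
@dataclass
class PumpStatus:
    # These three fields would otherwise be three separate shared variables:
    # pump_running, pump_flow_lpm, pump_fault_message.
    running: bool
    flow_lpm: float
    fault_message: str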


Hi,

Your hunch is potentially correct. There is a certain amount of overhead associated with each shared variable (time stamp, FIFO if selected), so grouping elements can help reduce the processor load associated with them.

Don't forget, though, that this means larger memory usage everywhere you access it (since you read the whole cluster), increased network usage (all the data is sent on each update), and it may force some race-condition-prone read-modify-write actions if you have to update a single element, so it isn't without cost! The clusters shouldn't be too large.
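
To illustrate that read-modify-write hazard, here is a minimal sketch in plain Python rather than LabVIEW (the variable names are hypothetical):

import threading

# "status" stands in for a clustered shared variable.
status = {"pump_on": False, "setpoint": 0.0}
status_lock = threading.Lock()

def lossy_update_setpoint(value: float) -> None:
    # Read the whole cluster, modify one field, write the whole cluster back.
    # If another writer runs between the read and the write-back, its change
    # is silently lost.
    snapshot = dict(status)
    snapshot["setpoint"] = value
    status.update(snapshot)

def safe_update_setpoint(value: float) -> None:
    # Serialising the writers (or allowing only one writer per cluster)
    # removes the race.
    with status_lock:
        snapshot = dict(status)
        snapshot["setpoint"] = value
        status.update(snapshot)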


For us they occupy the middle ground: we have intermittent software failures due to NSVs occasionally not being reachable. In those cases restarting the component (but not the SV Engine) brings us back to life. Another work package reported incidents where the SV Engine lost all its values and needed to be restarted, but I didn't see that personally.

 

To be fair, we are using the SVE at a system size for which it clearly wasn't designed.

Hi,

Your hunch is potentially correct. There is a certain amount of overhead associated with each shared variable (time stamp, FIFO if selected), so grouping elements can help reduce the processor load associated with them.

Don't forget, though, that this means larger memory usage everywhere you access it (since you read the whole cluster), increased network usage (all the data is sent on each update), and it may force some race-condition-prone read-modify-write actions if you have to update a single element, so it isn't without cost! The clusters shouldn't be too large.

 

James, I am glad someone else shares my hunch on performance. I have given a lot of thought to race conditions. I really dislike "signaling globals," but have done the best I can to mitigate the risk. All NSVs are in either a "Control" library or an "Indicator" library. NSVs in the Indicator library are only written to in one place in the target/cRIO code, and all elements at once.

 
Control NSVs will be written in only one place in the HMI/host code. "Signaling" NSVs that need to set a setpoint consist of a cluster with a "set?" boolean and the numeric quantity that is the setpoint, to avoid a race condition between the "set?" flag and the setpoint value.
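
In textual form the pattern looks roughly like this (Python rather than LabVIEW, and the names are invented for illustration):

from dataclasses import dataclass

# The "set?" flag and the setpoint travel together as one unit, so a reader
# can never pair a fresh flag with a stale setpoint (or vice versa).
@dataclass(frozen=True)
class SetpointCommand:
    do_set: bool   # the "set?" flag: True when the setpoint should be applied
    value: float   # the setpoint itself

# HMI/host side: publish flag and value in a single write.
command = SetpointCommand(do_set=True, value=72.5)

# cRIO side: both fields came from the same write, so they are consistent.
if command.do_set:
    new_setpoint = command.value
    command = SetpointCommand(do_set=False, value=new_setpoint)  # acknowledge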

 

 

Be a little cautious when choosing NSVs over some other comms technology.

 

There is no other tool in LabVIEW that will make you go grey quicker than shared variables that are not working. My experience (and that of other developers I have spoken with) has been that either they work perfectly or they are a complete dog; there is no middle ground.

 

Personally I will never touch them again. Network Streams are my current "protocol" of choice for communicating over IP.

 

 

 

Neil, I also share your distrust of NSVs, based on what I have heard. In the past I have used Network Streams and the Current Value Table and was pretty happy with the performance.

 

I should probably give a little background on my choice of NSVs in the first place. The reason I am using them is that I have inherited one of the usual monsters (one giant block diagram, inter-loop communication via local variables), and my task is to port it to cRIO.

 
The client does not want to rewrite it from scratch yet, for a number of reasons (no spec exists; it was grown "organically," let us say). Fortunately they are very aware that it's a mess, so I decided that at least modularizing it, undoing some of the local-variable mess, and using shared variables to get the data out is the best option.

 

 

For us they occupy the middle ground: we have intermittent software failures due to NSVs occasionally not being reachable. In those cases restarting the component (but not the SV Engine) brings us back to life. Another work package reported incidents where the SV Engine lost all its values and needed to be restarted, but I didn't see that personally.

 

To be fair, we are using the SVE at a system size for which it clearly wasn't designed.

 

 

 

Flintstone, this does scare me. I have worked with NSVs running on PXI chassis; there were about 20 different machines, each with a library of, say, 50-60 NSVs. They seemed pretty easy to connect to via the Distributed System Manager. But then again, that was PXI and not cRIO. I know NSVs had a whole lot of problems in the early days. It usually takes a few years to work out the bugs in any NI technology, as far as I have noticed. Hopefully they are OK now...


Our general consensus on NSVs: critical stuff - don't use them. Non-critical stuff - sure, use them (but you already have other comms set up for the critical stuff, so why not just use that? :P).

 

I don't know the exact number, but we have noticed NSV issues once you get large numbers of them on a system.

 

I have recently written an object-based TCP client/server, so I just use that for everything now. Once it's written, you don't have to spend time writing TCP communication again. IMO your time is better spent doing something like this to avoid potential headaches down the road, and scrapping NSVs altogether.
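
The core of such a wrapper is small. Here is a minimal sketch with plain Python sockets (the class and method names are my own invention, not the actual code): the point is that message framing lives in exactly one place and gets reused by every client and server.

import socket
import struct

class MessageConnection:
    """Wraps a TCP socket with simple length-prefixed message framing."""

    def __init__(self, sock: socket.socket) -> None:
        self.sock = sock

    def send(self, payload: bytes) -> None:
        # 4-byte big-endian length header, then the payload.
        self.sock.sendall(struct.pack(">I", len(payload)) + payload)

    def recv(self) -> bytes:
        (length,) = struct.unpack(">I", self._recv_exactly(4))
        return self._recv_exactly(length)

    def _recv_exactly(self, n: int) -> bytes:
        # TCP is a byte stream; loop until exactly n bytes have arrived.
        data = b""
        while len(data) < n:
            chunk = self.sock.recv(n - len(data))
            if not chunk:
                raise ConnectionError("peer closed the connection")
            data += chunk
        return data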

Our general consensus on NSVs: critical stuff - don't use them. Non-critical stuff - sure, use them (but you already have other comms set up for the critical stuff, so why not just use that? :P).

 

I don't know the exact number, but we have noticed NSV issues once you get large numbers of them on a system.

 

I have recently written an object-based TCP client/server, so I just use that for everything now. Once it's written, you don't have to spend time writing TCP communication again. IMO your time is better spent doing something like this to avoid potential headaches down the road, and scrapping NSVs altogether.

+1

That's why I wrote transport.lvlib and dispatcher (which uses the former). I've never liked NSVs. Fine in theory (event driven), but too bloated and difficult to manage/maintain, even when they did work properly all the time.


I know NSVs had a whole lot of problems in the early days. It usually takes a few years to work out the bugs in any NI technology, as far as I have noticed. Hopefully they are OK now...

There is certainly some truth to this; many of the new guys in support haven't heard of "tagsrv.exe has stopped responding"!

