dannyt Posted September 21, 2010

Hi All,

Sorry if this is a silly question, but I was reading another thread here about shared variables and I started thinking, not for the first time: are they really any different from, or any better than, Global Variables in principle? I understand that unlike Global Variables they have a lot more functionality, but don't they share the same problems of overuse and broken dataflow, especially when used as Single-Process Shared Variables?

I would be interested in people's comments.

cheers
Dannyt
jgcode Posted September 21, 2010

I like using Globals as constants/WNRM or WORMs (posted here by Darren), and have been doing so more and more since that article, especially for tools in LabVIEW. I don't see the point of writing an FGV and the code that sits on top of it when I can use a global, with less work, if the use case suits it. I also like AQ's ideas on scoping a Global.

As for Single Process SVs (SPSVs) vs Globals:

SPSVs have error in/out to control dataflow
SPSVs have an optional timestamp if you need to check the last write time
SPSVs can be easily upgraded to a Network Published SV (NPSV)

I use SVs more in RT apps etc...

A few other things: you need to initialise an SV before you use it. You can also buffer these variables in different ways. And the configuration is project based. I am not sure where SPSVs store their data now; if I remember correctly, in 8.5 a VI was created for each variable (that was annoying). For globals the data is stored in the FP of a VI and you can have multiple objects in that VI, so you can view data easily - e.g. for tools, you can open this VI and as a developer you have essentially created a configuration dialog for your tool.

Summary: I think it really depends on the use case and user preference!
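Since LabVIEW is graphical, the plain-global vs FGV distinction mentioned above is hard to show verbatim in text. The following Python snippet is only a rough, hypothetical analogue of the two ideas (the names and structure are illustrative, not a LabVIEW API): a bare module-level global that anything can write from anywhere, versus an FGV-style accessor whose state sits behind a single serialized entry point.

```python
import threading

# Plain global: any caller can read or write it from anywhere, with no
# single point of control (the usual dataflow/race-condition complaint).
config_value = 0.0

# FGV-style accessor: state lives inside one closure and every access goes
# through it, so serialization, an "init" action, or logging can all live
# in a single place.
def make_fgv(initial=0.0):
    state = {"value": initial}
    lock = threading.Lock()

    def fgv(action, value=None):
        with lock:
            if action == "set":
                state["value"] = value
            elif action == "init":
                state["value"] = initial
            return state["value"]

    return fgv

temperature_fgv = make_fgv(25.0)
temperature_fgv("set", 37.2)
print(temperature_fgv("get"))   # prints 37.2
```

The point of the FGV style is that every read and write funnels through one place, which is exactly where the extra code jgcode refers to ("the code that sits on top of it") ends up living.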
ShaunR Posted September 21, 2010

All You Need To Know

The underlying implementation of the single-process shared variable is similar to that of the LabVIEW global variable. The main advantage of single-process shared variables over traditional global variables is the ability to convert a single-process shared variable into a network-published shared variable that any node on a network can access.
PaulG. Posted September 21, 2010

A few other things: you need to initialise an SV before you use it.

The first time I used SVs, that fact kicked my butt in a major way. For this reason alone I think SVs are poorly implemented in LV and could use a "service pack".
jgcode Posted September 21, 2010

The first time I used SVs, that fact kicked my butt in a major way. For this reason alone I think SVs are poorly implemented in LV and could use a "service pack".

I don't think they are poorly implemented just because you have to initialize them. I don't want to be reading uninitialized data, for one (there are warnings for this on the error wire, though), and (I am not 100% sure, but) there may be resource optimization reasons for this too (for NPSVs).

The release of the SV API in 2009 was really cool, as it seems to be a bit more polished than the Datasocket API (in terms of error handling etc.). One thing that drove me crazy was that the SVE was not linked to the TypeDef (engine?). However, in 2010 this has been added/fixed, which is a great step forward. Now if they started to include some of the DSC functionality in LabVIEW Pro...
PaulG. Posted September 21, 2010

I don't think they are poorly implemented just because you have to initialize them. I don't want to be reading uninitialized data, for one (there are warnings for this on the error wire, though), and (I am not 100% sure, but) there may be resource optimization reasons for this too (for NPSVs). The release of the SV API in 2009 was really cool, as it seems to be a bit more polished than the Datasocket API (in terms of error handling etc.). One thing that drove me crazy was that the SVE was not linked to the TypeDef (engine?). However, in 2010 this has been added/fixed, which is a great step forward. Now if they started to include some of the DSC functionality in LabVIEW Pro...

SVs are great to work with once they finally get working; it just seems to involve too many steps to get them going. I can go through all the trouble of mass-creating them in Excel, but then they have to be deployed, and then they have to be initialized. Right now I have over 2K SVs and it takes a while to deploy them all. Issues like this concern me because a customer or the client I'm working for will be sitting right next to me as we walk through and tweak the code, and he asks me: "Why do you need to read an SV before it will work?" or "Why do you need to deploy every time you change an SV?" I'm OK with saying "I don't know", and it's not important enough to the customer for me to track down an answer. But the way SVs work is still a little clunky to me.
ShaunR Posted September 21, 2010

I think you are all missing the point. The OP was questioning the difference between global variables and shared variables in a single-process system. In fact, the argument against global variables is exactly the same for SVs (SVs are "super" global variables). SVs have a network feature, and that is the only reason people (should?) use them (but they have limitations that make them unusable in some applications). They were designed for real-time targets but moved over to mainstream LabVIEW as an "easy" network comms option.
dannyt Posted September 21, 2010 (Author)

I think you are all missing the point. The OP was questioning the difference between global variables and shared variables in a single-process system. In fact, the argument against global variables is exactly the same for SVs (SVs are "super" global variables). SVs have a network feature, and that is the only reason people (should?) use them (but they have limitations that make them unusable in some applications). They were designed for real-time targets but moved over to mainstream LabVIEW as an "easy" network comms option.

Thanks for all the replies, as always. Shaun, cheers - that was the way I was leaning on this; I just wanted to see if I was missing something, as I have seen more general chatter about them of late. I have not had to do any network LabVIEW-to-LabVIEW programming yet, and if I do I suspect I may take a much more serious look at them.

cheers
jgcode Posted September 21, 2010

I think you are all missing the point... ...SVs have a network feature and that is the only reason people (should?) use them

That is not true: due to the error wiring to sequence dataflow, they can be a much better choice. Also, you are never going to use an NPSV in a time-critical loop (or any serious loop); you are going to use an SPSV.
ShaunR Posted September 21, 2010

due to the error wiring to sequence dataflow, they can be a much better choice.

That's a bit like saying you shouldn't use GetTickCount because it does not have error terminals. The two main arguments against global variables are that they make debugging difficult and cause race conditions across VI boundaries as well as within a VI. The use of an error cluster or not is irrelevant (I think). If you mean a choice between a global variable and shared variables, then in line with the anti-globalisation (lol) posse, neither should be used, since they are both global variables and this is a sin.

Also, you are never going to use an NPSV in a time-critical loop (or any serious loop); you are going to use an SPSV.

I agree. Ooops. No I don't. Or maybe I do. I agree I am never going to use an NPSV in a time-critical loop (and by time critical I mean on a real-time NI system). And I agree that (on a real-time NI system) I am "probably" going to use the SPSV. But I am not going to use either in normal LV unless I want easy network comms (well, not even then...).

Edited September 21, 2010 by ShaunR
PaulL Posted September 21, 2010

Also, you are never going to use an NPSV in a time-critical loop (or any serious loop); you are going to use an SPSV.

So one would think, but actually one can use a network-published shared variable even in a time-critical loop on RT--as long as one configures it to be RT-FIFO-enabled. Then LabVIEW is smart enough to defer the networking part of the operation until outside the time-critical loop. (Of course, there will be jitter in when the remote recipient gets the data because of the networking, but if this is OK for your application--as it is for ours--then this works fine. One of the folks at NI recommended I do it this way a couple of years back, and it works.)
jgcode Posted September 21, 2010

Then in line with the anti-globalisation (lol) posse, neither should be used, since they are both global variables and this is a sin.

I used to be part of that posse, but it was due to narrow-mindedness. I now think they have a valid use case, as per my posts above and the link to D's article.

And I agree that (on a real-time NI system) I am "probably" going to use the SPSV.

I use SVs more in RT apps etc...

Yes, as I mentioned above, I was referring to RT. The OP wasn't focussed on a particular target.

But I am not going to use either in normal LV unless I want easy network comms (well, not even then...).

Well, that's OK, you're allowed. NI's PSP protocol seems pretty nice, and it sure is a fast way to share data/messages between a Target and a Host application. I haven't tried the Network Streaming ("this is the way SVs were meant to be used") feature of LV2010, but that looks very cool. Also, tying into a database or alarms is pretty straightforward, so I have found you could save a lot of work using them.

<edit>
So one would think, but actually one can use a network-published shared variable even in a time-critical loop on RT--as long as one configures it to be RT-FIFO-enabled. Then LabVIEW is smart enough to defer the networking part of the operation until outside the time-critical loop. (Of course, there will be jitter in when the remote recipient gets the data because of the networking, but if this is OK for your application--as it is for ours--then this works fine. One of the folks at NI recommended I do it this way a couple of years back, and it works.)

I did not know this! Cool tip - I will try it out. How much jitter are you talking? And what is acceptable for your TCL? (What is it doing?)
</edit>
ShaunR Posted September 21, 2010

NI's PSP protocol seems pretty nice, and it sure is a fast way to share data/messages between a Target and a Host application. I haven't tried the Network Streaming ("this is the way SVs were meant to be used") feature of LV2010, but that looks very cool. Also, tying into a database or alarms is pretty straightforward, so I have found you could save a lot of work using them.

Give it a bash. I think you'll like it (drop it below 10 ms or try about 10 MB of data and see what happens). Then benchmark it against the Dispatcher.

Edited September 21, 2010 by ShaunR
PaulL Posted September 22, 2010

I did not know this! Cool tip - I will try it out. How much jitter are you talking? And what is acceptable for your TCL? (What is it doing?)

The normal Windows jitter--10 ms or so. Note that we don't run our TCL all that fast--only 62.5 Hz--but we do a lot in it (probably too much). Windows handles the shared variable logging, and other clients need some of the published information, but not in a critical time period.

Now if they started to include some of the DSC functionality in LabVIEW Pro...

I couldn't agree more! OK, I've said as much before. The bottom line is that I think some sort of implementation of the Observer pattern / publish-subscribe protocol is essential for creating any meaningfully scalable component-based system. I think that for LabVIEW to be competitive in that market (or really any application market, since separating pieces of an application into threads is important in just about any application), LabVIEW needs this. Java has Swing, among other things. LabVIEW now has it, but unfortunately not all the essentials are in the core product, so they don't see as much use as they should. In particular, shared variable events and control binding are essential shared variable features. Why would anyone use shared variables without events?

I think the logging can be an extra feature for the DSC Module (and I think it's well worth it), but LabVIEW developers need to think of shared variables (or the equivalent) when they are designing a system or really any application, and I suggest they will if these features are available. I think NI would then sell more LabVIEW licenses in more places, and more DSC licenses. (It also means NI would devote more energy to improving the shared variable implementation, since more people would be providing feedback. Everybody wins!)
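PaulL's point about shared variable events is essentially the Observer pattern: writers publish a value and interested parties are notified instead of polling. As a rough illustration only (Python here purely for readability; the class and names are hypothetical and not a LabVIEW or DSC API), the pattern looks like this:

```python
from typing import Any, Callable

class ObservedValue:
    """Toy publish-subscribe value: readers register callbacks and are
    notified on every write, loosely analogous to a shared variable
    value-change event. Illustrative only, not a LabVIEW API."""

    def __init__(self, name: str, initial: Any = None):
        self.name = name
        self._value = initial
        self._subscribers: list[Callable[[str, Any], None]] = []

    def subscribe(self, callback: Callable[[str, Any], None]) -> None:
        # Register a callback to run whenever the value changes.
        self._subscribers.append(callback)

    def write(self, value: Any) -> None:
        # Publish: store the value, then notify every subscriber.
        self._value = value
        for callback in self._subscribers:
            callback(self.name, value)

    def read(self) -> Any:
        return self._value

# A UI or logging component subscribes once and never has to poll.
pressure = ObservedValue("pressure", 0.0)
pressure.subscribe(lambda name, v: print(f"{name} changed to {v}"))
pressure.write(101.3)   # prints: pressure changed to 101.3
```

Without the event/notification half, consumers are left polling the value in a loop, which is the complaint being made about shipping shared variables without events in the core product.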
viSci Posted September 22, 2010

FYI - here is a question to Jason Reding at NI, and his answer, concerning the use of RT-FIFO-enabled NSVs:

6. Are there any caveats in creating programmatic access or alias binding to cRIO-hosted NSVs with RT-FIFO enabled?

The automatic deployment of RT FIFOs to decouple your read/write access from the jitter incurred when accessing the network stack is only a feature of the static variable node. When using the programmatic API, you will be communicating with the variable in the same manner as if the variable had been deployed without the RT-FIFO option enabled. If this is functionality you need, we recommend you use the programmatic RT FIFO API directly to decouple your deterministic loop from your network communication loop. This loop would look something like the following: [the diagram from the original reply is not reproduced here]

In terms of binding, my understanding is that the binding just serves as another level of indirection to the original point deployed in the SVE. In essence, the SVE of the bound point becomes a client of the SVE of the original point. However, the configurations applied to the deployed points in the two SVEs are still unique. In other words, just because the original point was configured with the RT-FIFO option enabled doesn't mean clients accessing the bound point will access it through an RT FIFO; for this to occur, the configuration for the bound point would also need to enable the RT-FIFO option. The same goes for the network buffer configuration options. Accessing bound points using the programmatic interface has no additional caveats over accessing a non-bound point through the programmatic API.
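Since the diagram attached to NI's reply is not available here, the following is a rough sketch of the decoupling pattern it describes, written in Python purely for illustration with hypothetical names: on an actual LabVIEW RT target the FIFO would be an RT FIFO and the publish step a write to the network-published shared variable.

```python
import queue
import threading
import time

# The deterministic loop never touches the network: it drops samples into a
# bounded FIFO, and a separate, lower-priority loop drains the FIFO and
# publishes the values. All functions below are stand-ins.

fifo = queue.Queue(maxsize=100)     # bounded, like an RT FIFO

def acquire_sample(i):
    # Stand-in for hardware acquisition inside the timed loop.
    return i * 0.5

def publish_to_network(sample):
    # Stand-in for the network-published shared variable write.
    print("published", sample)

def deterministic_loop(iterations=50):
    for i in range(iterations):
        sample = acquire_sample(i)
        try:
            fifo.put_nowait(sample)     # never block the time-critical loop
        except queue.Full:
            pass                        # drop/overwrite policy is a design choice
        time.sleep(0.001)               # stand-in for the loop timer

def network_loop():
    while True:
        sample = fifo.get()             # blocking is fine at this priority
        publish_to_network(sample)

threading.Thread(target=network_loop, daemon=True).start()
deterministic_loop()
```

The key design point is that any jitter from the network stack lands entirely in the lower-priority loop; the timed loop only ever performs a non-blocking FIFO write.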