Mechatroner Posted December 25, 2011

From my personal experience and online research, the Functional Global (FG) pattern is an effective and convenient way to manage and process data in a LabVIEW application. I'm wondering, though, whether the FG pattern is really as robust as it appears, especially for large-scale applications. Are there any known issues with the FG pattern (e.g. memory leaks, lost data, crashes) when it is used with large amounts of data stored in the USRs or operated for long periods of time?

My concern about the robustness of FGs is based on my impression that, although it works well, the pattern looks like an unintended use of a while or for loop (i.e. running the loop once just to read the current value of previously set USRs).

Thanks in advance for your thoughts and comments.

John Bergmans
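[Editor's note: for readers coming from text languages, here is a rough Python analogue of the pattern being asked about. It is only a sketch with made-up names; in LabVIEW the mechanism is a non-reentrant VI whose loop runs once per call, with data persisting in an uninitialized shift register. The lock stands in for the serialization a non-reentrant VI gives you for free.]

```python
import threading

def make_functional_global(initial=0):
    """Rough analogue of a LabVIEW functional global: persistent state hidden
    inside a single callable, with calls serialized like a non-reentrant VI."""
    state = {"value": initial}   # plays the role of the uninitialized shift register
    lock = threading.Lock()      # non-reentrant VI => one caller at a time

    def engine(action, value=None):
        with lock:               # the whole call is the protected section
            if action == "set":
                state["value"] = value
            elif action == "get":
                pass             # fall through and return the current value
            else:
                raise ValueError(f"unknown action: {action}")
            return state["value"]

    return engine

# usage
counter = make_functional_global()
counter("set", 42)
print(counter("get"))   # -> 42
```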
Norm Kirchner Posted December 26, 2011

John, I'm glad that you are questioning this methodology. First, I would strongly recommend that you review this well-written Field Architect blog post about this very topic. There are many, many skilled LV developers who have either based their entire architecture on this tactic or used it as a staple of their programs. These are actively running programs, and I would say there is no doubt as to the validity of this approach.

However! There are many drawbacks and caveats, and I truly believe that this practice of coding represents a style which in the future we'll look back on the way we look back at finger paintings from grade school (cute, simplistic and far from a desirable design).

Personally speaking, I was bitten VERY recently when another developer used LV2-style globals and never took into consideration the idea of needing two of the same thing. What ended up happening when another of the same thing was needed was that a developer just copied 'all VIs' and appended a '2' to the end of the file names so that he could have two of the same thing. If chills didn't just run down your spine... you may want to think about that implication in a very large application.

I'll avoid putting all my comments from the blog into this reply, but long story short, there are emerging ideas/techniques/frameworks that will help you accomplish your large application with a much more Scalable, Modular, Reusable, Extensible and Simple (SMoRES) design. If you have the chance to start a design from scratch and don't need to inherit someone else's choices, I would highly recommend taking this chance to implement an LV2-global-free design and move to LV 2011 instead of '2'.
ShaunR Posted December 27, 2011

There are several main advantages of FGs over the built-in LabVIEW global, none of which are particularly awe-inspiring enough to call it a pattern IMHO.

1. They solve the read/write race condition problem.
2. They can have error terminals for sequencing.
3. They can be self-initialising.
4. They are singletons (unless you set the VI to clone).

A couple of comments about the blog mentioned. I think the author missed the point about race conditions and FGs, or at least didn't explain why they aren't a magic bullet for variable races. He states "The 'read, increment, write', then, is a critical section". And in the original example using global variables it is intended to be. But the feature that protects the variable when using an FG is that it is placed inside a sub-VI, not that it is an FG. The exact same protection could be obtained by selecting the globals and the increment, then selecting "Create SubVI" and putting that in the loops.

However, in all the examples it is impossible to determine consistently what the output of Running Total 1 or 2 would be, since that depends on the order in which LabVIEW executes the loops, which may change with different compiles or even different execution speeds of the loops. So in reality, by using an FG we no longer have a race between the read and write operations, but we still cannot predict (reliably) the indicator values (although we can say that they will always increase). We now have a race condition between the loops themselves.

The thing to bear in mind about FGs is that they are a programming solution to the global variable read/write race condition only, and therefore an improvement over the built-in globals. Many, however, would argue that global variables are evil whether they are FGs or not. But they are simple to understand, easy to debug, and only marginally more complex than the built-in global. You can also put error terminals on them for sequencing rather than cluttering up your diagram with frames.

That said, it is quite rare to find an FG used as an exact replacement for a normal global. People tend to add more functionality (read, write, initialise, add, remove etc.), effectively making them "Action Engines".
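[Editor's note: a small Python sketch of the critical-section point above (not LabVIEW semantics, just the same idea): the unprotected read-increment-write can lose updates under concurrency, while moving the whole sequence behind one lock, the textual equivalent of putting it inside a single non-reentrant sub-VI, protects the value itself, even though the ordering between the two "loops" is still a race.]

```python
import sys
import threading

sys.setswitchinterval(1e-6)   # encourage thread switching so the race shows up

unprotected = 0
protected = 0
lock = threading.Lock()

def racy(n):
    global unprotected
    for _ in range(n):
        tmp = unprotected      # read
        tmp += 1               # increment
        unprotected = tmp      # write -- another thread may have written in between

def safe(n):
    global protected
    for _ in range(n):
        with lock:             # read-increment-write is now one critical section
            protected += 1

threads = [threading.Thread(target=racy, args=(100_000,)) for _ in range(2)]
threads += [threading.Thread(target=safe, args=(100_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()

print(unprotected)  # often less than 200000: updates can be lost
print(protected)    # always 200000: the value is protected, though which loop
                    # "wins" at any moment is still unpredictable
```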
Jon Kokott Posted December 27, 2011

I'd prefer that functional globals (actually any USR) died a slow death. It is probably because I see people abuse them so easily (access scope, callers are more difficult to track than with other by-ref mechanisms). As far as AEs go, if I have to look at another octo-deca-pus I might just lose it.
ShaunR Posted December 27, 2011

I'd prefer that functional globals (actually any USR) died a slow death. It is probably because I see people abuse them so easily (access scope, callers are more difficult to track than with other by-ref mechanisms). As far as AEs go, if I have to look at another octo-deca-pus I might just lose it.

It's not only Action Engines that suffer from God syndrome or, indeed, scope. It's like the argument between private and protected.
Jon Kokott Posted December 27, 2011

It's not that AEs are the only ones that suffer from god syndrome, it's that they always end up that way. (It's a global singleton, for god's sake!)
Val Brown Posted December 27, 2011

I'm wondering if you can say at least a little bit more about this comment and the general idea that FGVs "...always end up..." as god instantiations, and I gather you mean some form of pantheism or polytheism. FWIW it seems to me that it's not FGVs that are being talked about in that light; rather, it's the misuse of them based on misunderstanding how, when and why to consider using them.

I agree with that comment because it does seem like overkill and over-reliance on text-based descriptions rather than making use of the easy-to-read, multi-language accessibility of the native LV symbols. I mean, what is one supposed to do: put simultaneous translations into Spanish, French, German, Hebrew, Russian, Chinese (old style and new style)... along with the presumed English-language standard? Isn't LV intended to be international? Aren't the primitives much more language-independent? It's not a burden to learn to navigate easily through non-text but clear diagrams. That's one of the many reasons that we use graphic representations of output data instead of just long text-based descriptions of those datasets.
Jon Kokott Posted December 27, 2011

I'm wondering if you can say at least a little bit more about this comment and the general idea that FGVs "...always end up..." as god instantiations, and I gather you mean some form of pantheism or polytheism.

This is specific to AEs: the "typical" usage of an action engine is to use it as a SINGLETON, and the AE itself IS the API used (have fun scoping that without another abstraction layer). Using it as a CBR node call destroys any ease of maintainability associated with AEs (typedefing the connector pane, reassigning, etc.); the only alternative at this point is to make the input/output a variant to get around the octopus connector that results, performing to/from-variant conversions. I think if you are good enough to do that, you might as well just step up to classes and type-protect your inputs/outputs, and reap all the run-time performance benefit of dispatching instead of case-to-variant steps.

I think singletons are inherently not extensible and lead to "god functions." Using a by-ref pattern with SEQs or DVRs is a better way to go. If you really want to go completely nuts with software engineering and maintainability, use the Actor Framework or some other messaging system.

FGs: I don't even consider them useful. Once again, they are a singleton; the only time I'd be happy with them is as a top-level application object for immutable objects (config data?). Maybe this is because I'm always making something work with one unit, then down the road I end up testing 6 at a time, and all the FGs are completely useless.
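[Editor's note: a loose Python sketch of the distinction Jon is drawing; all names are illustrative. A singleton-style module has exactly one hidden state, so testing six units at once means cloning code, whereas a by-reference design hands each caller its own handle, roughly what a DVR or single-element queue reference gives you in LabVIEW.]

```python
import threading

# --- singleton style: one hidden state for the whole application ---
_state = {"last_reading": None}
_lock = threading.Lock()

def instrument_read(value):
    with _lock:
        _state["last_reading"] = value   # every caller shares this one record
        return _state["last_reading"]

# --- by-reference style: each unit gets its own handle ---
class Instrument:
    def __init__(self, name):
        self._lock = threading.Lock()
        self._name = name
        self._last_reading = None

    def read(self, value):
        with self._lock:
            self._last_reading = value
            return self._last_reading

units = [Instrument(f"unit{i}") for i in range(6)]   # six at a time is now trivial
print(units[3].read(1.23))
```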
ShaunR Posted December 27, 2011 (edited)

This is specific to AEs: the "typical" usage of an action engine is to use it as a SINGLETON, and the AE itself IS the API used <snip> ...then down the road I end up testing 6 at a time, and all the FGs are completely useless.

Try writing a logging function without a singleton.

I will pull you up on a couple of points, if I may.

1. API? Possibly. That's the developer's decision, not the fault of Action Engines.

2. Typical usage as a singleton. (I'm going to throw a controversy into the mix here.) They are not typically used as the equivalent (in OOP terms) of a Singleton, it's just that by default they are one. They are typically used as the equivalent of an object (with minor limitations). The enumerations are the "Methods" and the storage is the equivalent of the Private Data Cluster. The terminals are the properties, and if you want it not to be a singleton, then just select "Clone" from the properties (although I've found limited use for this). If a designer stuffs too many "Methods" into an action engine, that is a partitioning issue, not a problem with Action Engines per se. It is the same as adding all properties and methods to one God Object. Of course all the "Object Happy" peeps will cry about inheritance and run-time polymorphism. But like I said: minor limitations (as far as LabVIEW is concerned, due to the constraints).

3. Variants as inputs. I hate variants. There. I've said it. The "Variant To Data" primitive is the function that never was. Variants were the one feature that could have given native LabVIEW run-time polymorphism. Instead it's no different from using Flatten To String, since you still need to know the data type and coerce it to the right type when recovering the data. The elegant approach instead of using variants (IMHO) is to wrap the AE in a polymorphic VI to simplify the connector pane for the end user and enable it to "Adapt To Type". Then the user doesn't need to know the data type and cast. It also exposes the "Methods" as selectable drop-downs, so for purists there is no leakage of the typedef.

In summary... I don't think any of the points you make are attributable to the Action Engine philosophy, more to your experience with the designers that use it.
All the arguments can equally be applied to by-val objects, and from the statement "Using a by-ref pattern" I'm thinking (rightly or wrongly) that the issue is with the dataflow paradigm rather than with AEs.

I'm going to change my signature back to the old one... lol

Edited December 27, 2011 by ShaunR
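[Editor's note: a rough Python rendering of the "enum as methods, shift register as private data" view ShaunR describes above; names and actions are made up. One entry point, an action enum selecting the "case frame", and state that outlives each call.]

```python
import threading
from enum import Enum, auto

class Action(Enum):
    INITIALISE = auto()
    ADD = auto()
    GET_TOTAL = auto()

_total = 0                   # the "uninitialized shift register" / private data
_lock = threading.Lock()     # stands in for non-reentrancy

def action_engine(action, value=0):
    """One 'VI', many 'methods': the enum picks the case frame."""
    global _total
    with _lock:
        if action is Action.INITIALISE:
            _total = 0
        elif action is Action.ADD:
            _total += value
        elif action is Action.GET_TOTAL:
            pass
        return _total

# usage
action_engine(Action.INITIALISE)
action_engine(Action.ADD, 5)
print(action_engine(Action.GET_TOTAL))  # -> 5
```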
Val Brown Posted December 27, 2011

Yes, exactly the points I was going to raise as well, thanks ShaunR. I would also add that essentially what you're saying is: by-ref is inherently "better" than by-val, and this is one situation that you have put forward to illustrate that point. I disagree with the priority given by many to by-ref, especially in the LV environment, and that for a number of reasons. The most important is that by-val is the native "core" of LVOOP, and the reasons for that are best explained, if I remember correctly, in a series of notes by AQ and perhaps the white paper as well -- for those who are interested in pursuing that.

Having used FGVs for almost 12 years now, I find them a very robust way to implement, as ShaunR indicates, objects and not so much singletons, although that has become the default case for many. For years that was just about the only way to do that, or anything like it, in LV. And yes, "god functions" are a problem, uh, perhaps I should say, however they are instantiated. Being a class doesn't guarantee lack of god elevation. If on the other hand you want to "cast" FGVs as CBR substitutes then, yes, you'll create some interesting entanglements. But doesn't that get back to the issue (at least potentially) of prioritization of by-ref constructs?

So, as is the case with many LV constructs, if you don't like them, then don't use them. Just bear in mind that it isn't accurate to say that they are inherently dangerous or especially prone to idol worship...
jgcode Posted December 28, 2011

I'd prefer that functional globals (actually any USR) died a slow death.

Try writing a logging function without a singleton.

I have built applications in the past made up entirely of AEs - which I am sure a lot of people did too before they had any OOP options available in LabVIEW - and I am sure they got them to work. IMHO being able to use the latest techniques is an assumption - are you coding LabVIEW RT on a brick running <=8.6? - I guarantee I will be using an architecture made up of AEs if you want some form of encapsulation. Nowadays I personally prefer LVOOP-based implementations on a module-to-module basis; however, I think everything has its place and it's up to the developer to decide what is appropriate for which use case. So, as the OP is about robustness, I just wanted to post different ways of achieving the implementation of (what I consider) a robust AE.

the "typical" usage of an action engine is to use it as a SINGLETON and that the AE itself IS the API used (have fun scoping that without another abstraction layer.)... ...the only alternative at this point is to make the input/output a variant to get around the octopus connector that results, performing to/from variant.

Personally, I have never liked the use of variants as the inputs and outputs to an AE, as it means the developer has to know what data to supply to which method without any edit-time checks. Additionally, I do not like using the AE as the API, as it means coupling to the enum on the BD of the calling VIs - I like using wrappers around the methods (each of which is a call to the AE with an enum constant), and then, like ShaunR, a polyVI for the API. Therefore, in order to add robustness to the AE I found myself creating a lot of wrapper code. This was worth it IMHO as the code was less coupled, more readable, easier to make changes to, etc., but the extra work was a PITA and I didn't have any other options at the time.

AE/FGV/MFVI/VIG - or whatever you call it (this is how I used to do it): This example is just a template so there is not much meat to it. I use the term "private" below but I never enforced it literally, e.g. with a LabVIEW Project Library - it was just a coding convention. Also, an end user knows exactly what inputs to use for the method they are calling, and inputs can be marked required, recommended, or optional as needed.

I believe the next solution is more elegant - it does require LV2009 as it uses DVRs, but there is a lot less code.

LV2009 Singleton (this is how I saw AQ do it): This uses a class in this example - but it doesn't have to be (I just prefer it).
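[Editor's note: for readers without LabVIEW 2009 in front of them, a loose Python rendering of the second approach; the names and fields are hypothetical. A single shared state reference, with each "method" a small standalone function that locks the reference, updates the state in place, and unlocks, which is roughly the role of the DVR plus the In Place Element structure in jgcode's example.]

```python
import threading
from dataclasses import dataclass, field

@dataclass
class ModuleState:               # the data held behind the "DVR"
    count: int = 0
    log: list = field(default_factory=list)

_ref = ModuleState()             # created once: the singleton state
_ref_lock = threading.Lock()     # stands in for the DVR's serialized in-place access

def increment(by: int) -> int:
    """One method = one small VI; typed inputs, no variant casting."""
    with _ref_lock:              # analogue of the In Place Element structure
        _ref.count += by
        return _ref.count

def log_message(msg: str) -> None:
    with _ref_lock:
        _ref.log.append(msg)

# usage
log_message("started")
print(increment(3))   # -> 3
```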
ShaunR Posted December 28, 2011 (edited)

but there is a lot less code.

I don't think you are comparing like-with-like there. The "lot less code" piccy would equate to a single frame in the AE (or, as I usually do, a case statement before the main frames) and would look identical to the class version, except it would have the cluster instead of the class blob (without the for loop, of course). I also suspect that your first piccy is equivalent to a shed-load of VIs.

The only difference (in reality) between a class and an AE is that an AE uses case frames to select the operation and a class uses voodoo VIs. An AE will be 1 VI with a load of frames and a typedef to select, whereas a class will be a load of VIs with more VIs to read/write the info, and selection will be which object you put on the wire. (Just because you have a wizard to generate them doesn't mean less code.) In this respect, by wrapping the AE in a poly you are merely replicating the accessor behaviour (figuratively speaking - in reality you are filtering) of a class, and (should) incur the same amount of code writing as writing class accessors. But you will end up with 1 VI with n accessors (for an AE) rather than m VIs with n accessors (for a class). Of course, you don't HAVE to have the accessors for an AE; it's just icing on the cake.

Everything else that is different is just anal computer science.

Edited December 28, 2011 by ShaunR
jgcode Posted December 28, 2011

I don't think you are comparing like-with-like there.

I believe I am, because I am comparing robustness, which means creating the AE with wrapper VIs for the Methods.

The "lot less code" piccy would equate to a single frame in the AE (or as I usually do, a case statement before the main frames) and would look identical to the class version except it would have the cluster instead of the class blob (without the for loop of course).

Yes and no - that image is equivalent to both an AE frame and its Method wrapper - as it has the same level of robustness (IMHO), but contains a lot less boilerplate code.

The only difference (in reality) between a class and an AE is that an AE uses case frames to select the operation and a class uses voodoo VIs. An AE will be 1 VI with a load of frames and a typedef to select, whereas a class will be a load of VIs with more VIs to read/write the info and selection will be what object you put on the wire.

I think you are getting hung up on the fact it has a class in it? Like I said, this could be e.g. a cluster; the benefits and how the framework works are based solely on the DVR and the IPE (as opposed to a class).

FWIW here are my stats comparing the two (either one could have a polyVI, so that is ignored). In this example I have the exact same API, where each has the exact same number of Methods (n).

AE/MFVI with n Methods:
- 1 x Main VI
- n x Method VIs
- 1 x Enum TypeDef
- 2n x Method Cluster TypeDefs
- 3 x (Input, Output, Local/State) Cluster TypeDefs

LV2009 Singleton with n Methods:
- 1 x Main VI (FGV)
- n x Method VIs
- 1 x State Class/Cluster TypeDef

The LV2009 code is definitely lightweight compared to the AE, not only in the stats but in how it is coded. Of course, all this is based on my definition of AE robustness - like I said in the previous post, you could use an AE with the enum exposed and have either:

- a variant interface,
- a cluster interface - the CP never changes but you have to do the bundling on the caller's BD, or
- no standard interface - where you could run out of CP inputs/outputs.

But all of the above mean the user does not know what the inputs/outputs of that method are, so they need to have intimate knowledge of the module's design (plus coupling is higher).

Of course, you don't HAVE to have the accessors for an AE, it's just icing on the cake

And yes, if you coded the module using one of these three approaches then the stats would be different. But IMHO I don't think any of those approaches is that robust - so to me it was never icing on the cake, it was how I implemented a module (AE).
ShaunR Posted December 28, 2011 (edited)

<snip>

Nope. Still not with you here...

An action engine "by default" is a singleton; however, you are using a class cluster, which is not. So if your action engine is identical to your 2009 version then the cluster at the heart of it is the same... but you don't have to use a DVR to make an AE a singleton, because it already is one. Now, to make it usable for other developers you still need to expose the different parts of the internal clustersaurus (which is what all the typedefs are presumably for in the AE, and what the poly is for, so that you don't have one huge in/out cluster), but in the second example you also have to de-reference the DVR too.

So are you saying that in the 2009 version you de-reference and expose one huge cluster to the external software (1 x State Class/Cluster TypeDef), or lots of VIs to de-reference and output the cluster parts (class accessors)?

What I'm not getting is that you want to break the singleton feature of an action engine (by using a class cluster?) then re-code it back again (by using a DVR), and somehow that means fewer typedefs for identically functioning code. What am I missing here?

Edited December 28, 2011 by ShaunR
jgcode Posted December 28, 2011

An action engine "by default" is a singleton, however, you are using a class cluster which is not.

The class is the state data of the module (in this example!). This state data is stored in the shift register (in both the AE and the LV2009 Singleton). But as I mentioned before, it does not have to be a class - it could be a cluster. No different from the AE, whose state data could be a class or a cluster. But it has nothing to do with how the DVR-IPE works - so it can be ignored.

....but you don't have to use a DVR to make an AE a singleton, because it already is.

Yes, an AE is a singleton. But no, I prefer using the DVR because it is less work.

...you still need to expose the different parts of the internal clustersaurus (which is what all the typedefs are presumably for in the AE and what the poly is for so that you don't have one huge in/out cluster)

No, the clusters are all private data of the module - no clusters are exposed to the end user. The reason for having an (input/output) cluster per method (where needed) is so that each method's data is separate from the other methods'. I.e. you cannot accidentally bundle the wrong data at edit time into another method. Then in the AE (main VI) you can unbundle that method's data easily. The input to the AE (main VI) is a super-cluster of the input clusters (and the output of the AE is a super-cluster of the method output clusters) - so the CP of the AE does not ever need to change - its interface is maintained. The polyVI is just so you can drop one VI (the polyVI) and select which method you want. I like to do that when creating APIs. Both the AE and the LV2009 Singleton could do this (that is why I left it out of the stats).

So are you saying that in the 2009 version you de-reference and expose one huge cluster to the external software (1 x State Class/Cluster TypeDef),

No, that would violate encapsulation.

or lots of VIs to de-reference and output the cluster parts (class accessors)?

No, they are not class accessors - it is DVR-based. But yes, each method is a VI. Accessing elements from the state data is the same as for the AE (you use unbundle). If the state data is too complex to manage then it should be broken down, e.g. into classes or clusters or whatever - but this doesn't differ between the two implementations.

What I'm not getting is that you want to break the singleton feature of an action engine (by using a class cluster?) then re-code it back again (by using a DVR)

No, I am saying that if I want to create a robust AE I prefer to use the LV2009 Singleton method (or whatever you want to call it). To an end user, both will have the exact same API, but for the developer there is less code. Using the DVR with the IPE means that the singleton feature is not broken and then rebuilt - it's simply implemented a different way from the get-go.
ShaunR Posted December 28, 2011 (edited)

<snip>

Hmmm. It seems you have picked a rather "special" action engine to demonstrate. I'd even go so far as to say it's not one at all. Perhaps if you could put it in context with something simple (I like simple) that I'm more familiar with (e.g. a list) I might be able to see the benefit.

A list will have things like Add, Remove, Insert, Get Value etc. At its heart will be an array of something. It will basically wrap the array functions so that you have the operations exposed from a single VI. There are two inputs (a value to do the operation on and an index, if required for the operation) and one output. With this AE, how is the DVR method more robust, simpler, less coding et al.?

Here she is.

Edited December 28, 2011 by ShaunR
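[Editor's note: the attached VI is not reproduced here; the following is a rough Python transliteration of the list action engine ShaunR describes: one callable, an action selector, a value and an optional index, and the array living in persistent state.]

```python
import threading

_items = []                       # the array in the shift register
_lock = threading.Lock()

def list_engine(action, value=None, index=None):
    """Single entry point wrapping the array operations: Add, Remove, Insert, Get Value."""
    with _lock:
        if action == "add":
            _items.append(value)
        elif action == "insert":
            _items.insert(index, value)
        elif action == "remove":
            return _items.pop(index)
        elif action == "get value":
            return _items[index]
        return None

# usage
list_engine("add", 10)
list_engine("insert", 20, 0)
print(list_engine("get value", index=1))   # -> 10
```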
jgcode Posted December 28, 2011

Hmmm. It seems you have picked a rather "special" action engine to demonstrate. I'd even go so far as to say it's not one at all.

Why? In terms of the module (as a whole), its core is an AE, with those AE methods wrapped. So to me it's an AE/FGV/MFVI/VIG etc...

Perhaps if you could put it in context with something simple (I like simple) that I'm more familiar with (e.g. a list) I might be able to see the benefit. A list will have things like Add, Remove, Insert, Get Value etc. At its heart will be an array of something. It will basically wrap the array functions so that you have the operations exposed from a single VI. There are two inputs (a value to do the operation on and an index if required for the operation) and one output. With this AE, how is the DVR method more robust, simpler, less coding et al.?

As mentioned above, I do not think this implementation is robust (I am assuming the enum is not a typedef for ease of posting). The first thing that comes to mind from looking at it is that when I go to select a method I do not know which data I should set without opening the block diagram and looking at the code. This is a simple example, but what happens when the code is more complex? What if 2-3 inputs were needed for each method and there were 5 methods? What if some inputs are required and some are optional - how do you specify that? It's going to get harder for the end user to figure out what is going on. Once you run out of room for inputs/outputs then you will need to use clusters - exposing this data (clusters, enum) leads to higher coupling.

Therefore, by wrapping the AE I can provide a more robust API for the end user. And if I were to go this route I would prefer to implement it with the DVR-IPE, as it's less coding for me. Don't get me wrong, writing an AE as you have is valid, lightweight and works - I just think it can be more robust.

Here is an example - unlike yours, it's just some BS functions, but it demonstrates the framework. Each module has the exact same API - and that API is robust IMHO. The DVR-IPE is more lightweight in terms of number of files and coding. I am not saying this is the only way to make an AE more robust, and I always like seeing different implementations. If you check out LabVIEW For Everyone, 3rd Ed., pp. 910-912, they show a similar implementation, and the main points about encapsulation are the same:

- each method bundles/unbundles its own input and output cluster respectively, and
- the enum command is wrapped by the method.

However, they pass each cluster as a connection on the CP, where I prefer to use a super-cluster and one in and one out connection, and I like to use some real estate to pass the input into the state data - but this is all programming preference/style and has nothing to do with the point I am trying to make about robustness.

Robust AE.zip (code is in LabVIEW 2009)
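[Editor's note: a sketch of the wrapping idea jgcode describes, in Python with made-up method names; concurrency protection is omitted for brevity. The raw engine and its command enum stay private to the module, and each public wrapper exposes only the typed inputs that method actually needs, so the caller never has to know the internal command or cluster layout.]

```python
from enum import Enum, auto

class _Cmd(Enum):                 # private: callers never see the command enum
    SET_LIMIT = auto()
    READ_STATUS = auto()

_state = {"limit": 0.0, "status": "idle"}

def _engine(cmd, **inputs):       # private core: the one place that touches the state
    if cmd is _Cmd.SET_LIMIT:
        _state["limit"] = inputs["limit"]
        return None
    if cmd is _Cmd.READ_STATUS:
        return _state["status"], _state["limit"]

# public wrappers: typed, self-documenting, checked at the call site
def set_limit(limit: float) -> None:
    _engine(_Cmd.SET_LIMIT, limit=limit)

def read_status() -> tuple:
    return _engine(_Cmd.READ_STATUS)

# usage
set_limit(3.5)
print(read_status())              # -> ('idle', 3.5)
```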
jgcode Posted December 29, 2011

I am with Shaun here. What magic ingredient are we missing that allows the DVR method to not have to have individual wrapper VIs for all "actions"? Edit: should have refreshed my browser, looking at your new example now

Jon, I still don't think your examples are the same. Surely for consistency your Singleton methods should have typedef inputs as well? If you do this you get a very similar number of files for both architectures.

No, the LV2009 Singleton methods do not need typedef inputs. In the AE each method should access its own data - it unbundles its own inputs and bundles up its own outputs. The cluster helps enforce this, which leads to more robust code. Additionally, it standardises the API (i.e. the CP) to the AE main VI. In the LV2009 Singleton example I don't need to worry about any of that, as each method is a VI so it only uses those inputs/outputs. That is why I consider the examples the same.
drjdpowell Posted December 29, 2011

I'm wondering though if the FG pattern is indeed as robust as it appears, especially for large-scale applications. Are there any known issues with the FG pattern (e.g. memory leaks, lost data, crashes, etc.) when used with large amounts of data stored in the USRs or operated for long periods of time? My concern about the robustness of FGs is based on my impression that, although it works well, the pattern seems like an unintended use of a while or for loop (i.e. running the loop once just to read the current value of previously set USRs).

Regarding the initial post: John, you don't have to worry about the robustness of using an uninitialized shift register. Even if this use of a USR was not originally foreseen, it has been a common method of LabVIEW programming for many years, as are other design patterns using shift registers. However, you should carefully consider what Norm said about the possibility of eventually needing more than one copy of the thing you program as a functional global.

-- James
ShaunR Posted December 29, 2011

No, the LV2009 Singleton methods do not need typedef inputs. In the AE each method should access its own data - it unbundles its own inputs and bundles up its own outputs. The cluster helps enforce this, which leads to more robust code. Additionally, it standardises the API (i.e. the CP) to the AE main VI. In the LV2009 Singleton example I don't need to worry about any of that, as each method is a VI so it only uses those inputs/outputs. That is why I consider the examples the same.

OK. I see what you are getting at here (great documentation post, want to write my websocket help files?). The thing is, though, they are not a fair comparison, and this is why...

In the second example a DVR is used purely because it is the only way for you to create a singleton (maybe I'm still hung up on classes, but you wouldn't be able to unbundle so easily without it). Secondly (and more importantly) it allows you to un-type the inputs and outputs to one generic type. In your first example, you don't "un-type" the inputs and outputs, preferring instead to provide all permutations and combinations of the types for export. This has nothing to do with singletons, just the strict typing of LabVIEW.

I've attached an "equivalent" classic AE of your 2009 API based on a method I've used in the past (my apologies to Jon, I think I now understand what he was getting at with variants - without using a poly wrapper, that is). There is very little difference apart from the features that I have outlined previously. Arguably potato, potAto as to variants vs DVRs. But the (major) effect is to push the typing down into the AE, thereby making the accessors simpler than the equivalent DVR methods (and if those god-damned variants didn't need to be cast, you wouldn't need the conversion back at all!).

So back to the case in point. I think that the example I have provided is a fairer comparison between the super-simple 2009 API and a classic AE. Which is more robust? I don't think there is a difference, personally. Which is less coding? Again, I don't think there is much in it, except to point out that changes are concentrated into the 1 VI (AE) in the classic method. You could argue that to extend the classic AE you have to add a case and an accessor rather than just an accessor, but you don't actually need accessors in the AE (and they are trivial anyway, since they are there just to revert to type).
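[Editor's note: an attempt to render the variant-based variation in Python, loosely, since Python is dynamically typed and "variant" here is just an untyped argument; the commands and fields are illustrative. The engine keeps one fixed, generic interface, and thin typed accessors do the casting at the boundary, which is what keeps the engine's connector pane unchanged.]

```python
from enum import Enum, auto
from typing import Any

class _Cmd(Enum):
    WRITE_PATH = auto()
    WRITE_COUNT = auto()
    READ = auto()

_state = {"path": "", "count": 0}

def _engine(cmd: _Cmd, payload: Any = None) -> Any:
    """Fixed interface: one command, one generic input, one generic output."""
    if cmd is _Cmd.WRITE_PATH:
        _state["path"] = str(payload)     # 'cast' back from the variant inside the engine
    elif cmd is _Cmd.WRITE_COUNT:
        _state["count"] = int(payload)
    elif cmd is _Cmd.READ:
        return dict(_state)

# typed accessors hide the casting from the caller
def write_path(path: str) -> None:
    _engine(_Cmd.WRITE_PATH, path)

def write_count(count: int) -> None:
    _engine(_Cmd.WRITE_COUNT, count)

def read() -> dict:
    return _engine(_Cmd.READ)

# usage
write_path("/tmp/data.log")
write_count(7)
print(read())
```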
jgcode Posted December 29, 2011

(great documentation post, want to write my websocket help files?)

Thanks! And no.

The thing is, though, they are not a fair comparison, and this is why... In the second example a DVR is used purely because it is the only way for you to create a singleton (maybe I'm still hung up on classes, but you wouldn't be able to unbundle so easily without it). Secondly (and more importantly) it allows you to un-type the inputs and outputs to one generic type.

It does not have to be a class; here I changed it to a cluster and updated the DVR refnum, and the rest of the code stays the same (in that example).

I've attached an "equivalent" classic AE of your 2009 API based on a method I've used in the past (my apologies to Jon, I think I now understand what he was getting at with variants - without using a poly wrapper, that is). There is very little difference apart from the features that I have outlined previously. Arguably potato, potAto as to variants vs DVRs. But the (major) effect is to push the typing down into the AE, thereby making the accessors simpler than the equivalent DVR methods (and if those god-damned variants didn't need to be cast, you wouldn't need the conversion back at all!)

Ok, so now you have wrapper methods and you have created a robust API IMHO - I like this API and think it is robust like the DVR and the AE I posted, e.g. you could change the implementation of the underlying code (from DVR/Variant/AE) and it would not affect the API or end user. However, I would disagree that it is less work than the DVR module I posted.

So the example I posted (and you modified) is quite simple. How are you going to handle multiple inputs for a method, e.g. each method has 2 or more inputs? For your implementation (variant) I see two options (there may be others?):

1. More variant inputs on the CP of the AE, or
2. The interface to the AE stays the same and you create a typedef cluster of the inputs for that method and convert them back on the other side.

In (1), more variant inputs could get messy fast and be hard to manage in the AE. In (2), creating a cluster means that you are going to have the exact same issues I have highlighted in terms of boilerplate code.

So the typing issue has to do with the inputs/outputs to the AE, not the state (persistent) data of either module. The DVR is the state (albeit a reference to the state - accessed safely using the IPE). The DVR method inputs/outputs do not need to be isolated/grouped/protected etc., as there is only a single VI that will use them. In order to handle multiple inputs I don't have to do anything special, and this makes the DVR less coding.

So back to the case in point. I think that the example I have provided is a fairer comparison between the super-simple 2009 API and a classic AE. Which is more robust? I don't think there is a difference, personally. Which is less coding? Again, I don't think there is much in it, except to point out that changes are concentrated into the 1 VI (AE) in the classic method. You could argue that to extend the classic AE you have to add a case and an accessor rather than just an accessor, but you don't actually need accessors in the AE (and they are trivial anyway, since they are there just to revert to type).
IMHO the classic AE is not as robust. I have already addressed the following as to why I think it is not, and why it should be wrapped to provide a more robust API to the end user:

The first thing that comes to mind from looking at it is that when I go to select a method I do not know which data I should set without opening the block diagram and looking at the code. This is a simple example, but what happens when the code is more complex? What if 2-3 inputs were needed for each method and there were 5 methods? What if some inputs are required and some are optional - how do you specify that? It's going to get harder for the end user to figure out what is going on. Once you run out of room for inputs/outputs then you will need to use clusters - exposing this data (clusters, enum) leads to higher coupling.

Additionally, the Command Enum should be private/hidden, as e.g. this will not allow the user to run private methods.

<edit> For discussion, here are some images of the variant implementations when I had to increase the number of inputs to a method:

1. More variant CP inputs:
2. Switch over to a cluster:
ShaunR Posted December 29, 2011 (edited)

It does not have to be a class; here I changed it to a cluster and updated the DVR refnum, and the rest of the code stays the same (in that example).

Ok, so I'm clear on that now.

Ok, so now you have wrapper methods and you have created a robust API IMHO - I like this API and think it is robust like the DVR and the AE I posted, e.g. you could change the implementation of the underlying code (from DVR/Variant/AE) and it would not affect the API or end user. However, I would disagree that it is less work than the DVR module I posted.

That's not what I said. I said it wasn't a fair comparison (your original AE and the super-slim one) and that there is little difference in effort for the more equivalent one I supplied.

So the example I posted (and you modified) is quite simple.

It's different? Wasn't intentional. I did Save As (copy) a few times so that I didn't have to re-invent the wheel. Maybe something got messed up when it recompiled.

How are you going to handle multiple inputs for a method, e.g. each method has 2 or more inputs? For your implementation (variant) I see two options (there may be others?): 1. More variant inputs on the CP of the AE, or 2. The interface to the AE stays the same and you create a typedef cluster of the inputs for that method and convert them back on the other side. In (1), more variant inputs could get messy fast and be hard to manage in the AE. <snip> In (2), creating a cluster means that you are going to have the exact same issues I have highlighted in terms of boilerplate code.

No. 2, with a slight variation (I know you will pick me up on moving the typedef cluster outside the AE, but in the long run it's worth the "potential" pitfall. If we are supplying accessors, then it's only a pitfall for us, not the user). So I am deliberately sacrificing a small bit of robustness for a large gain in flexibility.

Accessor AE

I don't think it's any different from the boilerplate code that you have to use with a DVR. But there is a big motivation for doing this, as I hope you will see a bit further down.

So the typing issue has to do with the inputs/outputs to the AE, not the state (persistent) data of either module. The DVR is the state (albeit a reference to the state - accessed safely using the IPE). The DVR method inputs/outputs do not need to be isolated/grouped/protected etc., as there is only a single VI that will use them. In order to handle multiple inputs I don't have to do anything special, and this makes the DVR less coding.

Not strictly true. You still have to create the bundles and un-bundles in the accessors (and the extra controls/indicators etc.) the same as I do in the above images (if changing a current implementation). If you are adding new "Methods" then yes, it only affects the new VI, whereas I (may) have to create the new VI and add the cases to the AE. This is the point I was making about selection via frames or via VIs. This, however, is both a strength and a weakness of the DVR method.

IMHO the classic AE is not as robust, I have already addressed the following as to why I think it is not and why it should be wrapped to provide a more robust API to the end user: ... Additionally the Command Enum should be private/hidden as e.g. this will not allow the user to run private methods.

Point of interest/view etc.: I don't consider AE = API. An API might be composed of many AEs which are self-contained sub-components.
(Maybe that's just in my world, though.)

Considering what I just said, the Cmd enum is not private in an AE, it is public. Why should it be private? (What was I saying earlier about anal computer science?) We want the user to be able to choose the methods, and there is nothing in the AE that he shouldn't be accessing (unless you've not partitioned correctly and have loads of methods that are internal only - a state machine is an abuse of an AE!). If it's not on the enum, then he can't access it, so why go making it harder for him to do so? You wouldn't go making a drop-down on a front panel private, would you?

I like the DVR method, now I know more about it, and will certainly be looking at some of my current implementations to see if this type would be better. But it has one weakness which (as I stated earlier) is also its strength. So here's the kicker: it has one accessor (VI) for each and every method! We've covered the ground with different inputs and (I think) there is little in it (but the DVR is definitely in the lead at this point). What if we have multiple methods but the same input type? Let's say we have in our AE example the boolean input, but we can do AND, OR, XOR, NAND, NOR, NXOR etc. That's 6 more accessors (VIs) for the DVR, all looking remarkably similar except for the boolean operation. That's not promoting code re-use and it inflates the code-base.

This is the (single) accessor for the AE with the 6 extra operations (1 extra typedef). I have to modify the case structure to include the extra operations, but I only have 1 "VI" and 1 typedef ("Boolean Method") to maintain, regardless of the number of boolean operations. The codebase also doesn't change, i.e. there is zero increase in the number of VIs for increasing boolean operations.

This is why partitioning is so important. If you can partition your engines so that they are grouped "by function" then maintenance is easier and code re-use is increased. The DVR code-base seems to increase linearly with the methods, and there also seems to be zero opportunity for re-use of the accessors (not mentioning the compile time here).

Edited December 29, 2011 by ShaunR
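[Editor's note: ShaunR's boolean example, sketched in Python; the six operation names come from his post, everything else is illustrative. One accessor with an operation selector serves all the operations, instead of one wrapper per operation.]

```python
from enum import Enum, auto

class BoolOp(Enum):
    AND = auto()
    OR = auto()
    XOR = auto()
    NAND = auto()
    NOR = auto()
    NXOR = auto()

_result = False          # persistent state, as in the AE example

def boolean_method(op: BoolOp, a: bool, b: bool) -> bool:
    """One accessor covers all six operations; adding a seventh adds a case, not a VI."""
    global _result
    if op in (BoolOp.AND, BoolOp.NAND):
        _result = a and b
    elif op in (BoolOp.OR, BoolOp.NOR):
        _result = a or b
    else:                                  # XOR / NXOR
        _result = a != b
    if op in (BoolOp.NAND, BoolOp.NOR, BoolOp.NXOR):
        _result = not _result
    return _result

# usage
print(boolean_method(BoolOp.NXOR, True, True))   # -> True
```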
jgcode Posted December 29, 2011

I don't think it's any different from the boilerplate code that you have to use with a DVR... ...You still have to create the bundles and un-bundles in the accessors (and the extra controls/indicators etc.) the same as I do

There is no boilerplate code with the DVR (that is why it is less coding). Sure, data is bundled/unbundled, but this is the state data, i.e. the data that is persistent for that module - same as in an AE.

This is the (single) accessor for the AE with the 6 extra operations (1 extra typedef).

I don't agree with sharing inputs for methods. Yes, it may appear advantageous to share them initially - especially if a module starts off small. But it violates encapsulation (and aside from that I find it confusing). What if we have to change the inputs for Method 1 in the future - how do we know that it won't affect any other methods? We don't. If each method has its own input/output cluster then we can confidently make changes to that method. We do not need to worry about this with the DVR-IPE implementation.

In the example you are referring to, this is your method's interface:

In order to reuse your states you have created an input Enum that is a subset of your module's Command Enum - now they are coupled to each other. A change will mean you will need to make a change in two places.

Now this method interface can still be replicated using a DVR-IPE - and I think it's cleaner/more robust (just throw in the paths):
jgcode Posted December 29, 2011

Methods/accessors/wrapper VIs should be simple (do one thing, do it properly), so for most of the time there will not be the need for multiple variant inputs to the main VI.

Saying that a method only has one argument most of the time is a pretty big assumption! It is going to be totally dependent on the module.
ShaunR Posted December 29, 2011

Methods/accessors/wrapper VIs should be simple (do one thing, do it properly), so for most of the time there will not be the need for multiple variant inputs to the main VI.

I tend to make a distinction here. An accessor (for me) will be of the ilk "do one thing, do it properly" (get name, get value, set name, set value etc.). But a wrapper would be a simplifier of a more complex function, or would "wrap" several "Methods" to yield a new function.

There is no boilerplate code with the DVR (that is why it is less coding). Sure data is bundled/unbundled but this is the state data i.e. the data that is persistent for that module - same as in an AE:

Of course there is. It's the IPE, the (un)bundles, the case structure and the "get ref". That's why all the accessors look identical at first glance and why you can use a wizard to create them (or a template VI from the palette, or your Save As (copy) gets a thorough workout).

I don't agree with sharing inputs for methods.

That strikes me as a bit odd for you to say, since overrides (and overloading) are the epitome of input sharing.

Yes, it may appear advantageous to share them initially - especially if a module starts off small. But it violates encapsulation (and aside from that I find it confusing).

I disagree. It has nothing to do with encapsulation.

What if we have to change the inputs for Method 1 in the future - how do we know that it won't affect any other methods? We don't. If each method has its own input/output cluster then we can confidently make changes to that method. We do not need to worry about this with the DVR-IPE implementation.

"What-ifs" don't feature much in my designs nowadays. If there is an immediate "genericism" then I will most likely code it that way. Otherwise it will be to spec and no more. I exclusively use an iterative (agile) development cycle, so changes will be factored in on the next cycle and costed accordingly. If you don't need to worry about the impact of changes on other functions, then regression testing is a thing of the past, right? The fact is, most changes affect most things to a greater or lesser extent. With linear coding (which is what this is), you've just got more to check.

In the example you are referring to, this is your method's interface: In order to reuse your states you have created an input Enum that is a subset of your module's Command Enum - now they are coupled to each other. A change will mean you will need to make a change in two places.

Yup. Coupling is a bad thing. Oh, hang on. I have to get info from here... to here. How do I do that if I uncouple them? Coupling, as has been discussed here before, is a balancing act, in the same way as efficiency vs function.

Now this method interface can still be replicated using a DVR-IPE - and I think it's cleaner/more robust (just throw in the paths):

Yup. I like that. I'm still warming... not red hot yet though. Now re-use it for integer types.

Linear programming is fine, but tends to lead to bloat, very little re-use and, if used ad nauseam, inflexible code. If you want software to create software, then it's great because you crank the handle and you get one for everything, all the same, with a slight variation. But the cost is long-term maintainability, increased code-base, compile times and re-use. This is the reason I like polymorphic VIs, but I think very carefully before using them. They, in themselves, are re-usable and give the ability to adapt to type, making it easier for users.
But they are code replicators. Copy-and-paste managers. Hierarchy flatteners. And that doesn't sit well with me.

Back to topic.