Everything posted by Aristos Queue
-
Alright, I'll concede that with your phrasing I am abusing the term "reuse". Let me rephrase: designing for reuse is often in conflict with designing for inheritance-for-original-use. The parent class is not being incorporated into other applications or systems. The parent class is being "reused" in the sense that its code is part of all the children, all of which are participating in the same original system.

Speaking to Daklu's argument that these are restrictions better placed by the caller: the children do not necessarily know the caller. Oh, they may know "I am used by that EXE over there", but they do not know how that EXE works, or what all the intricacies of that environment are. The parent is the one part of the system that they know. The parent knows the rules it itself had to follow to be a part of the system. It needs to communicate those rules to its children and -- where possible -- help its children by making those rules more than just documented suggestions: compiler-checked enforcement instead. When I use the term "reuse", I'm speaking of the fact that the parent is reused by each child because the child does not have to duplicate all the code of the parent within itself -- one of the first motivators of inheritance in CS.

And, as for the "private" argument: the other reason for having private methods, like private data, is that they are the parts of the parent class that the parent may reimplement freely without breaking children in future revisions. They are often precisely the parts that you do not want children using as cut points, because then you cannot change those function signatures or delete those methods entirely in future releases without breaking the child classes. A trivial example (see the sketch below): I have a private piece of data, which, you admit, is useful to keep private. I may implement private data accessors for that piece of data because it aids the development and maintenance of the class itself to be able to breakpoint and range check in those accessors. But if I make the accessors protected or public, I have substantially limited my ability to change the private data itself. There are lots of other examples, IMHO, but that seems to me to be an easy one.
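A hedged illustration of that last point, in C++ since block diagrams don't paste well into a forum post (the class and all names are hypothetical):

```cpp
#include <cassert>

class Widget {
private:
    int count_ = 0;            // private data; its representation may change someday

    // Private accessor: one place to breakpoint and range check every write.
    void setCount(int value) {
        assert(value >= 0);    // range check during development
        count_ = value;
    }

public:
    void reset()    { setCount(0); }
    void add(int n) { setCount(count_ + n); }
};
```

If setCount were protected or public, children and callers could bind to it, and replacing count_ with a different representation would break them. Kept private, its signature can change or disappear in any future release.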
-
There are two different types of reuse, and two different "ease of use" requirements, and they oppose each other. So the answer is that you put as many restrictions on the class as make sense for the intended usage.

If I am creating a class that fulfills a niche in a product that others are going to plug into, that parent class is going to be shaped very closely to match that niche, and is going to have as many rules in place as I can have to make sure that any child classes also fit within that niche. In that case, I am *not* designing for this parent class to be pulled out of its context, and so I am *not* making it easy for the person who has a use case I've never heard of. Instead, I'm trying to make it easy for the person who *wants* to fit within that niche because they're trying to implement some alternative within an existing system.

If I am developing more of a "top-level library class" meant to have wide utility, then I will create more "cut points" within the class, i.e., dynamic dispatch VIs where people can replace the parent class' functionality with their own. But suppose I'm looking at a class that is a top-level library class, and that class has a method A that implements the general algorithm, and A calls B, which implements a detail of the algorithm. I might make both of these methods dynamic dispatch, so that new users can override either the general algorithm or can continue to reuse that general algorithm and only override B to replace a specific detail. But that has a curious effect -- anyone who overrides A never even calls B [in most cases] because B is specific to the one implementation of A and the override of A does something entirely different. That's a clue that perhaps you really want something more like *two* separate classes with a strategy pattern between them (see the sketch below). There are lots of variations.

The point is that the parent has to decide which use case it is going to serve, and if it is serving "I'm helping children to fit this niche", then it benefits from more ability to specify exactly the dimensions of that niche and from throwing every restriction in the book on its API. And the whole continuum of use cases exists, such that the language itself benefits from a class *being able* to specify these restrictions, whether or not every class actually uses them.

And, yes, sometimes you have a parent class that you wish you could reuse except for some small restriction... I ran into that recently with the Actor Framework. I actually have a use case for an actor that does *not* call the parent implementation of Actor Core.vi. But removing the Must Call Parent restriction from Actor Core.vi does a disservice to the 90% use case of people trying to correctly implement actors. And, indeed, my use case *is not really an actor*. So that was a big clue to me that inheritance might not be the best way to handle this, and I looked for other solutions.
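Here's a hedged sketch of that "two classes with a strategy pattern" shape, in C++ for compactness (all names hypothetical):

```cpp
#include <memory>

// The "method B" role: the replaceable detail.
class DetailStrategy {
public:
    virtual ~DetailStrategy() = default;
    virtual int detail(int x) { return x + 1; }
};

// The "method A" role: the general algorithm.
class Algorithm {
public:
    explicit Algorithm(std::unique_ptr<DetailStrategy> d) : detail_(std::move(d)) {}
    int run(int x) { return detail_->detail(x) * 2; }  // always goes through the strategy
private:
    std::unique_ptr<DetailStrategy> detail_;
};
```

To change only the detail, substitute a different DetailStrategy; to change the whole algorithm, write a different Algorithm. Nobody ends up inheriting an override of B that their replacement for A never calls.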
-
And I'm trying to answer that question by saying, wholeheartedly, without reservation, hesitation or exception, that yes, the parent must, should, needs and wants to declare the restrictions. I'm saying that the parent has to declare what it itself is to be used for. If that design is "I am a free-floating class meant to be used in any number of applications as one of my child classes", then it will have few restrictions. If the parent class is designed to be used as a framework, then it will have many restrictions. But it is the parent class that decides its own usage. And that has ramifications for the children, because the children *are* instances of the parent. And as instances of the parent, they want, need, must and should adhere to the rules of the parent.

To go further, the reason the parent needs to specify its own usage pattern is that a class designed to be used as part of a framework is designed completely differently from a class meant to be just a reusable base class for other components. A parent class has to make some assumptions in its own internal code about what is going on around it, and those assumptions are very different for a framework class than for a free-floating class. And so, yes, a parent needs to be able to lay down restrictions, and a child class must follow them. And this has NOTHING to do with NI or LabVIEW. This is fundamentals of programming correct code in *any* language.
-
> My persistence was because I was trying to understand why you continued to appear to claim it is universally better for the parent class to impose restrictions on child classes when (imo) that clearly isn't the case.

My persistence is because it is universally true. Whatever requirements the parent class has, it should be able to impose them through code. It is only the emphasis that shifts, not the nature of the restrictions. As I said, there are still internal restrictions of the class that may need to be met for the class itself to function (the primary purpose of Must Call Parent). It was only "Must Override" and the frequency of "Must Call Parent" being particularly useful that I was downgrading.
-
The parent defines a method A that says, "I return values within this range." The framework is written to call method A. At run time, a child the framework has never seen goes down the wire. It is this child's override of method A that gets invoked. That override VI needs to stay within the range promised by the parent. You -- the author of the parent class and the framework -- know what values that method needs to return. You have to document it so that anyone overriding your parent class FOR USE WITHIN THE FRAMEWORK knows what values to return.

That bold-italics-screaming part is the assumption underlying everything I and flintstone have been saying: that the primary purpose of inheritance is to reuse a class within the same framework as the parent, not for freeform use of the child class. Classes that are designed for freeform use (i.e. the Actor Framework, various Vector/Hashtable/etc classes, that sort of thing) don't make those sorts of external usage promises, but they may still put requirements on child classes to fulfill internal requirements.
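For the curious, here is roughly what that promise looks like when a language lets you put it in code. This is a hedged C++ sketch using the non-virtual interface idiom, not anything LabVIEW does today:

```cpp
#include <cassert>

class Parent {
public:
    // The framework calls A(); it never invokes the override directly.
    int A() {
        int result = doA();  // dynamic dispatch to whatever child came down the wire
        assert(result >= 0 && result <= 10 && "override broke the parent's range promise");
        return result;
    }
protected:
    virtual int doA() { return 5; }  // parent's own implementation honors the promise
};
```

Every child's override of doA() gets checked against the parent's documented range on every call, so a child that returns 11 fails loudly at the boundary instead of quietly corrupting the framework.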
-
shoneil: In this particular case, I'm talking about how many pipeline segments the child class uses in its override VI. If the parent specifies 2 feedforward nodes, the caller will be expecting that it takes two iterations to move data through the pipeline. If the child takes three iterations, it will be out of sync with the rest of the program. In a straight linear program, that doesn't matter, but if you have two parallel pipes whose results get merged at the end, you need both pipes to be the same length (a toy sketch follows this post). flintstone's comments about RT determinism also apply; I just wasn't addressing those directly, but I have talked about that as a similar problem elsewhere.

@Daklu: I was working on a class today and realized that you are likely primarily concerned about "code reuse" in the sense of inheriting from a parent class and then *calling the child directly*, where the caller does not care about most promises the parent made. I am primarily concerned about "code reuse" in the sense of *calling the parent directly*, which may dispatch to the child, where the caller cares exclusively about the promises that the parent made. If you're not taking advantage of inheritance for actual dispatching but just for inheriting functionality, then you have less need for Must Override and Must Call Parent. Not zero need, but substantially less (you only care about promises made to fulfill the parent's internal API requirements, like Must Override on protected scope VIs, not its external API requirements, like Must Override on public scope VIs). And I can imagine that a class designed to be called through the parent could indeed cause frustration if you then tried to write an app where you call it directly through the child -- possibly enough frustration that I would suggest you investigate containing the class instead of inheriting from it. I think this is at the heart of the difference in viewpoint between your position and mine.
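To make the sync problem concrete, here is a toy C++ model of two parallel pipelines of different depths (hypothetical code, standing in for feedforward nodes on a diagram):

```cpp
#include <array>
#include <cstdio>

template <int Stages>
struct Pipeline {
    std::array<int, Stages> regs{};  // feedforward registers, initially zero
    int push(int x) {                // emits the value fed in Stages iterations ago
        int out = regs[Stages - 1];
        for (int i = Stages - 1; i > 0; --i) regs[i] = regs[i - 1];
        regs[0] = x;
        return out;
    }
};

int main() {
    Pipeline<2> a;  // matches the parent's promise of two stages
    Pipeline<3> b;  // a child that quietly added a third stage
    for (int i = 1; i <= 6; ++i)
        std::printf("iter %d: a=%d b=%d\n", i, a.push(i), b.push(i));
    // Once full, a emits iteration i-2 while b emits i-3: merging the two
    // outputs pairs values that came from different input iterations.
}
```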
-
You rather severely missed the point on the Pi constant vs computed Pi functionality. Sure, a function that can compute pi or c is usable in more places than a function that just computes pi. But a function that sometimes computes pi and sometimes computes c, where the caller never knows which one he is going to get, is substantially less useful. Worse still is a function that is documented to only produce pi but generates any number of random values when it is executed.
-
Not in LabVIEW nor most other languages TODAY. There is strong interest in integrating the ACL2 proof engine into various compilers, as it can provide such guarantees.

This is true of most of LSP. It can be proven to be impossible to machine check or to assert. That doesn't mean it isn't a requirement. If you have a problem with this, take it up with that Grand Omniscient Device before Whom we are all but mere peripherals.

By that argument, all outputs should be variants and the caller should validate that the data types are types it can use. It's exactly the same situation. A subVI needs to define what it does, and a caller depends upon that thing being done. And if part of that contract is a range limitation, then the caller doesn't have to do that check -- just like it doesn't have to do the data type check today.

On this point, you are wrong. The more restrictions a piece of code has, the *more* places it can be used. A function that returns the constant value "Pi" can be used everywhere. A function that computes Pi on the fly cannot be used in some time critical systems. If a function has a design contract that says "this is my performance bound", then it can be used in more places. This is not an example I just pulled out of the sky... the interface for the C++ STL actually defines big-O performance bounds for any library implementing that interface, for exactly this reason. Impossible to compiler check, but critical to the STL being usable by most systems. Functions that guarantee range, or that guarantee not to throw exceptions, or any of 10000 other guarantees go a long, long way toward increasing the amount that code can be reused (see the C++ sketch below).

These two sentences are synonyms for any application of moderate scale. Run time error detection for things that can be caught by the compiler assumes a test bed that I know never actually gets built by the vast majority of the world's programmers.

We're not talking about restrictions on what the caller can do with the code; we're talking about restrictions on what the called code can do to its caller (i.e., how can a child class screw a parent class framework). The flags guarantee that child class developers implement some portion of the class correctly. Can the child class author still screw things up? Yes. That's the whole point of everything else discussed above -- there are an endless number of potential requirements and they cannot all be compiler checked. But some -- like these -- can be.

As for not using them, I don't know what to say to you. The flags are the backbone and key to most every library I've ever released, critical to anyone ever making a large system workable. If these flags have ever gotten in your way, then either the author of the ancestor class didn't know what he or she was doing (i.e. putting Must Call Parent on every method without thinking) or you were trying to hack the system and these rightfully stopped you.
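A concrete, machine-checked example of a restriction widening reuse, from C++ (this is standard library behavior, not LabVIEW):

```cpp
#include <utility>
#include <vector>

struct Sample {
    std::vector<double> data;
    Sample() = default;
    Sample(const Sample&) = default;
    // The noexcept is a declared, compiler-visible "I will not throw" promise.
    Sample(Sample&& other) noexcept : data(std::move(other.data)) {}
};
// std::vector<Sample> will move elements during reallocation only because
// the move constructor promises not to throw; without that restriction it
// must fall back to copying to preserve its own exception guarantees.
```

The type that promises *less* freedom for itself becomes usable in *more* contexts.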
-
> Is what you're getting at more along the lines of the Liskov Substitution Principle

Yes.

> or Design by Contract?

No. LSP isn't just a good idea. It's a requirement for any system that you want to be well defined.

Daklu, you bring up the testing case. The "behavior" in this case is that the child does whatever its parent does, but in its own way. So a child class for testing may do various asserts, logs, and may fake results. But you don't design it to have stateful side effects on further downstream functions; otherwise you end up with an invalid test. I wish I had an easy example to give you at this point, but I don't have one at my fingertips this morning.

The parent makes promises to its callers that "all values of me will behave within parameters X, Y and Z." All child classes are instances of the parent class (that's why they can travel on a parent class wire and be the value of a parent class control). They have to maintain any promises that the parent class made, so they have to adhere to X, Y and Z. Failure to do so -- even in a testing situation -- breaks the calling program. In release code, this results in undefined behavior that can send the calling program off the rails. In a testing situation, this results in both false positives and false negatives. The more that X, Y and Z can be defined in code, the easier time the child class author has, and the more likely it is that the system actually works.

Anyone who saw my presentation at NIWeek 2012 will recall my discussion about "a child class' override VI should only return error codes that the parent said could be returned." So even though the parent VI may not do anything on its diagram (and so never ever returns an error), it should still define the set of errors (in its documentation) that may be returned by any override VIs. This is a very hard promise to adhere to for various reasons, but if I am going to write a framework that calls the parent VI, I may need to test for specific error codes coming out of that function. If any child class that comes along could return an error code that I'm not prepared for, that's a problem.

Something similar crops up with parents that document "the output will always be a number in the range 0 to 10". This is only covered in documentation. A child could return 11, but it will break callers that are expecting 0 through 10. If we could encode in the interface of the parent VI "this output must be 0 through 10", we could break, in the compiler, a child VI that tried to return 11. Or -- because figuring out that a given math expression won't ever return a number outside a range is often impossible to prove -- we could at the very least require that the output pass through an In Range & Coerce function before returning. Range is a promise that the parent VI can make to callers that is currently impossible to specify in the code.

These are the kinds of conditions that I am talking about. It is why "Must Call Parent" is such a critical part of the LabVIEW architecture -- it fills a massive hole that exists in other programming languages for a parent being able to make certain promises to its callers. I cannot count how many times I have wished for something as simple as Must Call Parent in C++, C# and JAVA. The reason it doesn't exist there is -- so I have been told by people who work on those languages -- that in non-dataflow languages, it is hard for a compiler to prove whether the requirement is met or not. Personally, I think they're just not trying hard enough, because it seems obvious to me how to implement it, but I don't work on those languages and I've never actually tried to make a compiler check one of them.

No. Behavior is defined by what comes *out* of a method and the impact that running one method has on running later methods. The parent does not make any promises about *how* the job gets done, but it does make promises about the data that comes out of the function and the impact on later function calls.

Does all this make sense? Do you see how it relates back to the earlier comments about a parent defining the use cases for children? I clearly haven't done enough to convey the critical nature of all of this to the LabVIEW community. These things are really hard to do in practice, and you can find plenty of examples where they weren't followed, but when you find one of those examples, you can almost always find a bug in the code related to that example. Breaking LSP means your code is broken. Unfortunately, LSP is something that can only be determined by human analysis, not by compiler analysis, though we continually try to upgrade the amount that can be algorithmically proved.

And for the non-OO programmers: LSP applies to you too, but it is even harder to analyze because you don't have the formal structures to talk about. It affects you any time you have a plug-in system, or refactor a module called by other modules, or use any sort of variant data or flattened-string interpretation.
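As an aside on the compiler-proof question: there is at least one C++ idiom that approximates Must Call Parent at compile time. A hedged sketch (all names hypothetical):

```cpp
class Base {
protected:
    // A token only Base can mint: the sole way for an override to obtain
    // one is to call Base::shutdownImpl().
    class CalledParent {
        friend class Base;
        CalledParent() = default;
    };
    virtual CalledParent shutdownImpl() {
        // ... parent's mandatory cleanup ...
        return CalledParent{};
    }
public:
    void shutdown() { shutdownImpl(); }
};

class Child : public Base {
protected:
    CalledParent shutdownImpl() override {
        // ... child's cleanup ...
        return Base::shutdownImpl();  // omitting this call fails to compile
    }
};
```

It proves the parent was called somewhere, though not that it was called at a sensible point, which is part of why a real language feature is still worth wishing for.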
-
I do pull out the OOD for everything. I do NOT pull out the OOP. That's the huge mistake that JAVA makes. When planning out a program, being able to say what object each piece of data is associated with gives you an organizational power that I haven't found anywhere else. And once you're done with the planning, you look at the plan and say, "It would be ridiculous to build an entire class for this concept, so I'm not going to do it; I'll just write a function for that thing."

Having said that, it does many programmers good to spend some time operating in a world where "you will only have functions that are members of classes." That whole "everything has a place" aspect of JAVA is actually a really valuable perspective, and in my experience, code written in any language by programmers who have spent time in JAVA is cleaner than code written in any language by programmers who have only used free spirit languages like C++ or LabVIEW. The other language that provides a needed discipline is LISP. I'd fully support any CS program that said "all freshmen will write in JAVA and LISP, alternating each week, and only then do we put the less-regimented languages in your hands." Unfortunately, most schools only do the JAVA part. And they never get around to handing the students LabVIEW. *sigh*

No, 2012 is not a cut point, just the latest version at the time. We aim to maintain it as long as is practical, and we have at this point maintained backward load support longer than at any other time in LV's history, so far as I can tell. I suspect the current load support will go for quite some time because there's not really a problematic older feature that all of us in R&D want to stop supporting.
-
For the "too long; didn't read" crowd, just read the four boldface sentences. The rest is explanation. :-) @drjdpowell and @flintstone: A child class that does something the parent never intended is a bad child class in almost all cases. The phrase "And you as the parent class designer do not know now what your class might be used for in e.g. three years from now" is false. The parent class designer establishes, in stone as it were, exactly the uses of a child class. Why? Because code will be written that uses the parent class and only the parent class, and that code knows nothing about the children that will eventually flow through it. All children are expected to match those expectations or they are going to have runtime problems. The more you can convert those runtime problems into compile time problems by letting the parent declare "these are the requirements", the more successful the authors of the child classes will be. This is true whether you are one developer working on an app by yourself or whether you are writing a framework for third party developers to plug into. A child class needs to match it's parents for Identity, State and Behavior or else a child cannot be effectively used in any framework written for the parent. The parent defines the invariants that all child classes will obey -- that's what allows frameworks to operate. The more that a language allows a parent to say "these are the exact requirements needed to be a well defined version of myself", the more power the language has to build frameworks that are guaranteed to work out of the box. The parent designs for "the children will have free reign to do whatever they want here" and "the children will do exactly this and nothing else here". I'll give you an example that we were discussing yesterday: dynamic dispatch on FPGA. At the moment, the parent implementation of a dynamic dispatch method just defines the connector pane of the method. It does not define the cycle time of the method. In order to write an FPGA framework where any child class can be plugged in and the framework works, there are cases where you need to be able to guarantee that the child override will execute in the same number of clock cycles as the parent implementation defines. Essentially, the parent implementation needs a way to say "I have three Feed Forward nodes on my diagram in series between this input and this output. I require all overrides to maintain the same amount of pipelining... their diagrams must have exactly three Feed Forward nodes in series." We were discussing ways to add that restriction declaration to LabVIEW and whether the compiler could really check it. I have plenty of other examples, from many languages, of parent classes that want to define particular limitations for children. LabVIEW has the scope restrictions, the Must Override restrictions, the DVR restrictions [particularly the one that says this one ancestor is the only one that can create DVRs for all the descendants]. When we someday have template classes, we'll have the ability for the parent class to set type limits, just like every other language that has templates. If you are defining a class that is a lot like a parent but violates Identity, State or Behavior, do not use inheritance; use containment instead. Delegate to a contained instance of the parent class when (if) appropriate. 
Or define a new grandparent class for the parent and move the common functionality up to the grandparent such that the invariants of the parent class are unchanged and you can now inherit off of the grandparent to get ONLY the functionality that your new piece requires. > (if you still can open it in your then current version of LV ) We just last year walked a LV 4.0 VI all the way to LV 2012. It opened and ran just fine. You have to open it in LV 6.0, then LV 8.0 then LV 2012, as those are the defined load points, but the mutation paths have been fully maintained.
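Returning to the containment advice above, a hedged C++ sketch of what "delegate to a contained instance" looks like (all names hypothetical):

```cpp
class Parent {
public:
    void configure() { /* establish the invariants */ }
    int  process(int x) { return x * 2; }
};

// Not a Parent -- it would violate Identity, State, or Behavior -- so it
// contains one instead of inheriting.
class AlmostAParent {
public:
    int process(int x) { return inner_.process(x); }  // reuse where it fits
    void doMyOwnThing() { /* diverge freely; no parent invariant applies */ }
private:
    Parent inner_;  // contained, not inherited
};
```

No framework written against Parent will ever receive an AlmostAParent, so its divergent behavior can't break anyone's promises.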
-
Nope. The IPE is you telling LV what you are doing, not necessarily how to do it. Yes, it was named for asserting inplaceness, but that really only applies to the "Element In"/"Element Out" pair, and we never really came up with a better name for the structure. It simply provides a better way of saying "I am doing this complex operation of taking a value out, modifying it, and putting it back... LV, you figure out how to lay out the memory more effectively now that you have more information."

I am referring to the overuse of the DVRs just to avoid copies, where you're taking information away from LabVIEW so you can control the copies entirely. In some cases, yes, that's what you have to do, but with every rev of the compiler, those situations become rarer. Doing it knee-jerk just because you have a large data structure isn't good practice, IMHO.

Exactly! Like what I wrote in the blog post about the Swap Values primitive. Yes, it can reduce data copies because LV knows that swapping is your intent, but it also makes it much clearer what you're doing with all those crossing wires, and that has benefit all its own.
-
Right. 5 copies of your class. Easily leading -- as I saw happen in the late 1990s with early naive implementations of C++ templates -- to multiplying the size of your entire program by 5 or 6. It took quite a few years to get it right. We can learn a lot from their trailblazing, but it still isn't dead obvious.

Sure. And we might be able to use it for that purpose. But there's no control for a Void type. And there's no template instantiation in the project tree or in the type propagation (because, ideally, types would be instantiated from the diagram the same way new queue types are created from wiring). And keep in mind we're not talking "save as" here... we're talking a chained template that has to be kept up to date with the original and only instantiates the type at *runtime*, not as actual source files. There are roughly a bijillion issues with template classes. I'd rank it as easier to do than interfaces, but it still isn't a one-release feature. -- Stephen
-
I'm still stuck back on the original problem, so bear with me. You want to enforce "must call parent", but you can't say what connector pane the parent function has? Huh? You want the parent to say "I have a function and it must be called at some point by my children, but I can't say what function they need to call it from and I can't say anything about their setup". At that point, why are these children at all? Why aren't you using a delegate? Which brings me directly to Shaun's points...

And that's almost exactly what I've said from the beginning. Object-oriented programming adds encapsulation and inheritance. It brings the next level of organization to your code (you know how to organize a block diagram; this is how to organize the VIs in a module). And if you can't define the relationship, then it is just a regular subVI call, with no special Call Parent Node behavior definable. I'm really missing what relationship you think these two classes have and why you think there's any sort of child relationship involved.
-
If this is true, then I definitely do not understand what you're asking for. How can a parent class have any need to call a function when it doesn't even know the parameter count? What possible need could a parent have to define *anything* for the child? Try again, from the top, using small words and pictures, please.
-
Jack: After working through your thread, I think the answer you are looking for is what would be called template classes in C++ and generic classes in C#. There you would define the ancestor class in terms of type T -- not any specific type, but an unnamed type T. Think of the Queue primitives. You have many different types of queues: queues of strings, queues of integers, etc. All of them need a "copy data into the queue" operation. Obviously that cannot be defined by the parent "queue" class. And it cannot be done by dynamic dispatch because, as you point out in your examples, every child has a different interface for this operation. Templating/generics takes care of that: an entirely new class is instantiated by the compiler by supplying a concrete type to fill in for type T (see the sketch below).

R&D prototyped but never released generic VIs (I loathe the name because the terminology is way too overloaded, but I'll use it for now). We need a way for you to put a placeholder in the private data control and then specify in your project "I am using a new class which is the same as my generic class but with that placeholder filled in with <your specific type here>".

Templates/generics have proved quite powerful in various languages, but they generally come with the need for an ultrasmart linker that can, at load time, create only one copy in memory of each specific concrete class, even when multiple modules join together, each of which may have instantiated the same concrete class. They also want to duplicate only the specific methods that use type T's internals, and not duplicate any methods that refer to type T in a way where all the generated assembly code is identical (i.e., T just passes through them but is not itself modified). That addresses ShaunR's memory concerns. Without such smart linkers, templates will bloat your code very, very quickly. I assume that if we ever get this feature in LV, we will have learned from the other languages that have it and build the linker capacities in from the outset.
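A minimal C++ rendering of the idea, for anyone who hasn't met templates before (this Queue is a toy, not the real STL container):

```cpp
#include <deque>

template <typename T>        // T is the unnamed placeholder type
class Queue {
public:
    void enqueue(const T& item) { data_.push_back(item); }  // "copy data into the queue"
    T dequeue() {
        T front = data_.front();
        data_.pop_front();
        return front;
    }
    bool empty() const { return data_.empty(); }
private:
    std::deque<T> data_;
};

// The compiler instantiates a distinct concrete class per supplied type:
// Queue<std::string> stringQueue;
// Queue<int>         intQueue;
```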
-
Citing the NI VIs as a template to follow is silly. All the modules like this date from a time when by-value objects did not exist. To achieve encapsulation of data, references were the only option. I am not saying they wouldn't be references if they were designed from scratch today... they might be. But the engineering evaluation was never made, so citing them as your reason for using references reads too much into their design. And they all have established patterns of usage that make converting to a by-value style today impossible: there's really no way to do such a conversion and maintain backward compatibility.

Very large waveforms, arrays, strings, and clusters have been used quite successfully by many people for many years in LabVIEW. Becoming a class does not change the calculus of what should be by value and what should be by reference.

If you are branching the wire, LabVIEW *may* make copies of the data. The majority of the time -- and the LV compiler keeps getting smarter about this with every release -- when it makes a copy, it is because a copy is needed, and it would be needed even if you were working with references; only in the reference case, you'd be dropping an explicit "do a deep copy of this reference" node. In my book, leaning on the LV compiler to give you better data copy control is a best practice. Trying to take control of that for yourself just burns up lots of processor cycles uselessly on memory constructs that add expensive thread synchronization when none is needed for the data accesses, and it cuts you off from all the compiler optimizations that come with dataflow.
-
Organizing Actors for a hardware interface
Aristos Queue replied to Mike Le's topic in Object-Oriented Programming
Sometimes I would suggest such a proxy, but there are some pieces of HW that behave more like streams of status updates: a sensor that continually sends back data, or a robotic arm that you give a final X, Y, Z coordinate to reach and it streams back the ongoing "here's where I am now" info, rather than you polling the HW continuously for "where are you now? how 'bout now?" If I had to build a proxy between the UI and the actual HW to make that happen, that would be my first choice of architecture, all other things being equal and no other information about the system. Essentially, I prefer to get to "push" as quickly as possible, whatever software layers are required to achieve that. I find that it gives a more responsive system overall. And, for the record, this isn't just Actor Framework. This is any sort of "here's the UI process and over here is the HW process". So "actor" as a model of the system, not as a particular instance of Actor.lvclass.
-
As long as you have a reason based on trying alternatives, not based on fear or a knee-jerk "cannot be done otherwise" response, you won't hear objections from me*. :-)

* unless your logic strikes me as wildly off base, but that's not applicable here.

You do what a SQL database does and double-key it. The first key maps to a second key. The second key maps to the data cluster. You can have as many different types of first keys as you want that all map to the same second key. Include some bit of ref counting in the second key to know how many first keys are pointing at it, so that you know when to throw the data cluster away.
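A hedged sketch of that double-keyed arrangement in C++ (all names hypothetical; the string payload is a stand-in for your data cluster):

```cpp
#include <string>
#include <unordered_map>

struct Entry {
    std::string payload;  // the data cluster
    int refCount = 0;     // how many first keys point at this entry
};

class DoubleKeyedStore {
public:
    void link(const std::string& firstKey, int secondKey) {
        firstToSecond_[firstKey] = secondKey;
        ++entries_[secondKey].refCount;
    }
    void unlink(const std::string& firstKey) {
        auto it = firstToSecond_.find(firstKey);
        if (it == firstToSecond_.end()) return;
        if (--entries_[it->second].refCount == 0)
            entries_.erase(it->second);  // last first key gone: discard the data
        firstToSecond_.erase(it);
    }
    std::string* lookup(const std::string& firstKey) {
        auto it = firstToSecond_.find(firstKey);
        if (it == firstToSecond_.end()) return nullptr;
        auto e = entries_.find(it->second);
        return e == entries_.end() ? nullptr : &e->second.payload;
    }
private:
    std::unordered_map<std::string, int> firstToSecond_;  // first key -> second key
    std::unordered_map<int, Entry>       entries_;        // second key -> data + refcount
};
```

Different kinds of first keys would each get their own first map, all funneling into the same second map.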
-
Organizing Actors for a hardware interface
Aristos Queue replied to Mike Le's topic in Object-Oriented Programming
Agree with Intaris, though I prefer push over pull: I don't encourage interrogating the HW. Instead, let it push changes up to the UI when they happen. The UI just announces "I am connecting" or "I am disconnecting". After that, it is the HW's job to tell the UI about state changes. The benefit here is that you, the designer, do not ever get in the mindset of your UI holding and waiting for information from the HW, which leads to long pauses in your UI responses.
-
With the caveat that I consider it a bad practice to use references unless you are backed into a corner and have no other options, then, yes, this is a good practice. Just remember that if *any* data member of your class is by reference then things work much MUCH better if *all* data members of your class are by reference. Trying to mix by value members with by reference members is legal but results in situations that many people screw up easily (i.e. the wire forks and now half of their class is copied and half of their class is the shared reference, leading to confusion about what exactly this object represents).
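To see why the mixing goes wrong, a small C++ analogy (shared_ptr standing in for a LabVIEW reference; names hypothetical):

```cpp
#include <memory>
#include <vector>

struct Mixed {
    std::vector<double> samples;       // by value: duplicated when the object is copied
    std::shared_ptr<int> sharedCount;  // by reference: the copy aliases the original
};

// Mixed b = a;   // the "fork": b.samples is now independent of a.samples,
//                // but b.sharedCount still points at a's counter.
//                // Half of the object copied, half shared -- what does b represent?
```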
-
If you are anywhere close to Austin, TX, there is a UT professor that offers a 1-day-per-week-for-three-weeks course that is spectacular. We sent the entire LV team through when we did the cross-over from C to C++ about a decade ago, and we still send new hires through from time to time if they come from a predominantly Java or C# background. Dr. Glenn Downing.