Daklu Posted February 15, 2013

I have to challenge you on these points:

> 1. A child class that does something the parent never intended is a bad child class in almost all cases.
> 2. The more you can convert those runtime problems into compile-time problems by letting the parent declare "these are the requirements", the more successful the authors of the child classes will be.
> 3. A child class needs to match its parent for Identity, State, and Behavior, or else the child cannot be effectively used in any framework written for the parent.
> 4. If you are defining a class that is a lot like a parent but violates Identity, State, or Behavior, do not use inheritance; use containment instead.

1. This is probably true in situations where the child class is written with the intent of replacing the parent class in a fully functional system. I frequently create subclasses as unit test doubles, and these almost always do something I didn't expect when writing the parent class.

2. Also probably true, as long as the child is intended to replace the parent in a fully functional application. In my experience these restrictions get in the way of things I try to do in the future, like automated testing. In fact, unit testing is one of the reasons I rarely use any of the restrictions currently available to us.

3 & 4. Possibly true, depending on how you define Identity, State, and Behavior. But as it is I'm not quite following. If Identity is the property that makes an object an individual, why should the child class's identity match the parent class's identity? (Data) state is defined by all the internal data fields... those same private fields that LabVIEW doesn't allow anyone else to know about. Why does it matter if the child's state matches the parent's state, or for that matter, why does the internal state matter at all? Behavior is defined by what the object does when a given method is called, and is controlled by the object's state and the method's input parameters. If the child's behavior matches the parent's behavior, there's little reason to create the child in the first place. One of the reasons to create a child class is to change the behavior.

Is what you're getting at more along the lines of the Liskov Substitution Principle or Design by Contract?
Aristos Queue Posted February 15, 2013

> Is what you're getting at more along the lines of the Liskov Substitution Principle

Yes.

> or Design by Contract?

No.

LSP isn't just a good idea. It's a requirement for any system that you want to be well defined.

Daklu, you bring up the testing case. The "behavior" in this case is that the child does whatever its parent does, but does it in its own way. So a child class for testing may do various asserts, logs, and may fake results. But you don't design it to have stateful side effects on further downstream functions; otherwise you end up with an invalid test. I wish I had an easy example to give you at this point, but I don't have one at my fingertips this morning.

The parent makes promises to its callers that "all values of me will behave within parameters X, Y and Z." All child classes are instances of the parent class (that's why they can travel on a parent class wire and be the value of a parent class control). They have to maintain any promises that the parent class made, so they have to adhere to X, Y and Z. Failure to do so -- even in a testing situation -- breaks the calling program. In release code, this results in undefined behavior that can send the calling program off the rails. In a testing situation, it results in both false positives and false negatives. The more that X, Y and Z can be defined in code, the easier time the child class author has and the more likely it is that the system actually works.

Anyone who saw my presentation at NIWeek 2012 will recall my discussion about "a child class's override VI should only return error codes that the parent said could be returned." So even though the parent VI may not do anything on its diagram (and so never ever returns an error), it should still define the set of errors (in its documentation) that may be returned by any override VIs. This is a very hard promise to adhere to for various reasons, but if I am going to write a framework that calls the parent VI, I may need to test for specific error codes coming out of that function. If any child class that comes along could return an error code that I'm not prepared for, that's a problem.

Something similar crops up with parents that document "the output will always be a number in the range 0 to 10". This is only covered in documentation. A child could return 11, but that will break callers expecting 0 through 10. If we could encode in the interface of the parent VI "this output must be 0 through 10", we could break a child VI that tried to return 11 in the compiler. Or -- because figuring out that a given math expression won't ever return a number outside a range is often impossible to prove -- we could at the very least require that the output pass through an In Range & Coerce function before returning. Range is a promise that the parent VI can make to callers that is currently impossible to specify in the code.

These are the kinds of conditions I am talking about. It is why "Must Call Parent" is such a critical part of the LabVIEW architecture -- it solves a massive hole that exists in other programming languages for a parent being able to make certain promises to its callers. I cannot count how many times I have wished for something as simple as Must Call Parent in C++, C#, and Java. The reason it doesn't exist there is -- so I have been told by people who work on those languages -- that in non-dataflow languages, it is hard for a compiler to prove whether the requirement is met or not. Personally, I think they're just not trying hard enough, because it seems obvious to me how to implement it, but I don't work on those languages and I've never actually tried to make a compiler check one of them.

> Behavior is defined by what the object does when a given method is called, and is controlled by the object's state and the method's input parameters.

No. Behavior is defined by what comes *out* of a method and the impact that running one method has on running later methods.

Does all this make sense? Do you see how it relates back to the earlier comments about a parent defining the use cases for children? The parent does not make any promises about *how* the job gets done, but it does make promises about the data that comes out of the function and the impact on later function calls.

I clearly haven't done enough to convey the critical nature of all of this to the LabVIEW community. These things are really hard to do in practice, and you can find plenty of examples where they weren't followed, but when you find one of those examples, you can almost always find a bug in the code related to it. Breaking LSP means your code is broken. Unfortunately, LSP is something that can only be verified by human analysis, not by compiler analysis, though we continually try to upgrade the amount that can be algorithmically proved.

And for the non-OO programmers: LSP applies to you too, but it is even harder to analyze because you don't have the formal structures to talk about. It affects you any time you have a plug-in system, or refactor a module called by other modules, or use any sort of variant data or flattened-string interpretation.
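For those who haven't fought this in a text language: the closest workaround is the template method pattern, where the parent never exposes the overridable method at all. A rough Java sketch of the idiom (all class and method names invented for illustration):

```java
// Sketch: approximating LabVIEW's "Must Call Parent" in Java via the
// template method pattern. Names are invented for illustration -- this is
// a common idiom, not any particular library's API.
public abstract class Step {
    // Callers use only this method. The parent's bookkeeping is guaranteed
    // to run because the child never gets the chance to skip it.
    public final void execute() {
        logStart();      // the "parent" work that must always happen
        doExecute();     // the part the child customizes
        logFinish();
    }

    private void logStart()  { System.out.println("step starting"); }
    private void logFinish() { System.out.println("step finished"); }

    // Children override this hook instead of execute() itself.
    protected abstract void doExecute();
}
```

It inverts the requirement -- the child cannot forget to call the parent because it never calls it -- but it only works when the parent's code strictly brackets the child's, which is exactly why a real Must Call Parent check is still worth wishing for.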
Daklu Posted February 17, 2013

> > Is what you're getting at more along the lines of the Liskov Substitution Principle
> Yes.
> > or Design by Contract?
> No.

Hmm... I don't understand how LSP can apply but DbC does not. Everything I've read indicates LSP and DbC emphasize the same thing--establishing preconditions and postconditions for each method. I've seen it phrased a few different ways, but the general idea is that an overriding method should require no more and provide no less than the parent method. Or to put it another way, the child's preconditions must be no stronger and its postconditions no weaker than those of the parent method. Some authors appear to consider LSP and DbC equivalent.

> The Open-Closed principle is at the heart of many of the claims made for OOD. It is when this principle is in effect that applications are more maintainable, reusable and robust. The Liskov Substitution Principle (A.K.A. Design by Contract) is an important feature of all programs that conform to the Open-Closed principle. It is only when derived types are completely substitutable for their base types that functions which use those base types can be reused with impunity, and the derived types can be changed with impunity.

(Emphasis added.) "The Liskov Substitution Principle", p. 11, Martin

------------------

> LSP isn't just a good idea. It's a requirement for any system that you want to be well defined... <snip>

I agree with everything you said. I didn't understand your reference to "Identity, State, and Behavior", so I turned to Google to see if I could find a context for it. I think I understand what you meant by "behavior" now, and it appears to align with my interpretation, but I'm still not following the reference to "identity" or "state".

------------------

> Something similar crops up with parents that document "the output will always be a number in the range 0 to 10". This is only covered in documentation. A child could return 11, but that will break callers expecting 0 through 10. If we could encode in the interface of the parent VI "this output must be 0 through 10"... These are the kinds of conditions I am talking about. It is why "Must Call Parent" is such a critical part of the LabVIEW architecture -- it solves a massive hole that exists in other programming languages for a parent being able to make certain promises to its callers.

I understand where you're coming from. I've occasionally implemented informal contracts by documenting pre- and postconditions, and slightly more formal contracts by using assertions. Your post appears to be advocating additional language constructs to enforce expectations. I have a couple of concerns about this:

1. In the example you gave, the parent method has no way of ensuring a child method outputs values within the expected range. It can't control what shows up on the child method's output terminals, and I'm not sure how one would go about declaring that requirement. You can't do it in code. The only thing I can think of is some sort of property dialog box, like setting up the indicator with the same kind of range property a numeric control has. I'm not fond of requirements that aren't explicitly spelled out in code--I think it significantly hinders readability--but I don't see any way to implement this kind of requirement in code.

2. Even if the parent method can impose that requirement, my personal opinion is that's not the best place for it. If the caller is going to invoke unknown code (from an unknown child class method) and it has restrictions on what it can accept from the subVI, it should be validating the values returned from the method. Yes, there are distinct advantages to having the compiler find bugs, and that ability is lost when the postconditions are verified by the calling code. But the overall goal isn't (I hope) to push as much error detection into the compiler as possible. The goal is to give us a tool we can use to quickly create what we need to create. The more a language forces potential errors into the compiler, the less flexible code written in that language becomes. Every restriction put on a piece of code reduces the possible places where that code can be used. It seems very analogous to the static typing vs. dynamic typing debate.

Even the Must Override and Must Call Parent flags are nothing more than reminders for child class developers. I can't think of a single case where either of those flags guarantees the child class will be implemented correctly, or where using them is appropriate for all potential subclass situations. Maybe there are some; I just can't think of any. But I've run into lots of situations where they got in the way of what I was trying to do.
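To illustrate what I mean by the calling code doing the validation, a rough Java sketch (Counter, count(), and the range rule are all invented for illustration):

```java
// Minimal sketch of caller-side validation: the calling code owns the
// range restriction instead of the parent class. Names invented.
interface Counter {
    int count();
}

class Caller {
    private final int maxAllowed;

    Caller(int maxAllowed) { this.maxAllowed = maxAllowed; }

    int countValidated(Counter c) {
        int raw = c.count();  // dynamic dispatch; may run child code we've never seen
        if (raw < 0 || raw > maxAllowed) {
            // The caller decides what "out of range" means here -- error out,
            // coerce, log -- rather than the parent forbidding it outright.
            throw new IllegalStateException("count() returned " + raw
                    + ", outside 0.." + maxAllowed);
        }
        return raw;
    }
}
```

When the acceptable range changes, only the caller's configuration changes; no child class is suddenly invalidated by a type it can no longer satisfy.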
Aristos Queue Posted February 17, 2013

> Hmm... I don't understand how LSP can apply but DbC does not.

DbC (as I've heard it discussed) gets into planning with users and feature scheduling; pulls in way more topics than I'm speaking to. Not that it doesn't have something to do with this, but it's overly broad.

https://en.wikipedia.org/wiki/Object_%28computer_science%29

Three properties characterize objects:

- Identity: the property of an object that distinguishes it from other objects
- State: describes the data stored in the object
- Behavior: describes the methods in the object's interface by which the object can be used
Aristos Queue Posted February 17, 2013

> You can't do it in code.

Not in LabVIEW nor most other languages today. There is strong interest in integrating the ACL2 proof engine into various compilers, as it can provide such guarantees.

> I'm not fond of requirements that aren't explicitly spelled out in code--I think it significantly hinders readability--but I don't see any way to implement this kind of requirement in code.

This is true of most of LSP. It can be proven to be impossible to machine check or to assert. That doesn't mean it isn't a requirement. If you have a problem with this, take it up with that Grand Omniscient Device before Whom we are all but mere peripherals.

> it should be validating the values returned from the method.

By that argument, all outputs should be variants and the caller should validate that the data types are types it can use. It's exactly the same situation. A subVI needs to define what it does, and a caller depends upon that thing being done. And if part of that contract is a range limitation, then the caller doesn't have to do that check -- just like it doesn't have to do the data type check today.

> Every restriction put on a piece of code reduces the possible places where that code can be used.

On this point, you are wrong. The more restrictions a piece of code has, the *more* places it can be used. A function that returns the constant value "Pi" can be used everywhere. A function that computes Pi on the fly cannot be used in some time-critical systems. If a function has a design contract that says "this is my performance bound", then it can be used in more places. This is not an example I just pulled out of the sky... the interface for the C++ STL actually defines big-O performance bounds for any library implementing that interface for exactly this reason. Impossible to compiler-check, but critical to the STL being usable by most systems. Functions that guarantee range, or guarantee not to throw exceptions, or make any of 10,000 other guarantees go a long, long way toward increasing the amount that code can be reused.

> But the overall goal isn't (I hope) to push as much error detection into the compiler as possible. The goal is to give us a tool we can use to quickly create what we need to create.

These two sentences are synonyms for any application of moderate scale. Run-time error detection for things that can be caught by the compiler assumes a test bed that I know never actually gets built by the vast majority of the world's programmers. And we're not talking about restrictions on what the caller can do with the code; we're talking about restrictions on what the called code can do to its caller (i.e., how a child class can screw up a parent-class framework).

> Even the Must Override and Must Call Parent flags are nothing more than reminders for child class developers. I can't think of a single case where either of those flags guarantees the child class will be implemented correctly, or where using them is appropriate for all potential subclass situations. Maybe there are some; I just can't think of any. But I've run into lots of situations where they got in the way of what I was trying to do.

The flags guarantee that child class developers implement some portion of the class correctly. Can the child class author still screw things up? Yes. That's the whole point of everything discussed above -- there is an endless number of potential requirements and they cannot all be compiler-checked. But some -- like these -- can be.

As for not using them, I don't know what to say to you. The flags are the backbone and key to most every library I've ever released, critical to anyone ever making a large system workable. If these flags have ever gotten in your way, then either the author of the ancestor class didn't know what he or she was doing (i.e., putting Must Call Parent on every method without thinking) or you were trying to hack the system and they rightfully stopped you.
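To make the range example concrete in a text language, a rough sketch of what encoding the promise in the type could look like -- the type itself plays the role of In Range & Coerce. All names invented for illustration:

```java
// Sketch: pushing a documented range promise into the type system. A child
// override cannot return an out-of-range value, because the only way to
// make a Range0To10 is through a constructor that checks (or coerces).
public final class Range0To10 {
    private final int value;

    private Range0To10(int value) { this.value = value; }

    // Fail-fast construction: reject anything outside the promise.
    public static Range0To10 of(int raw) {
        if (raw < 0 || raw > 10) {
            throw new IllegalArgumentException("out of range: " + raw);
        }
        return new Range0To10(raw);
    }

    // Coercing construction: the textual analog of In Range & Coerce.
    public static Range0To10 coerce(int raw) {
        return new Range0To10(Math.max(0, Math.min(10, raw)));
    }

    public int get() { return value; }
}
```

A parent method declared to return Range0To10 relieves every caller of the check, for the same reason callers don't re-check data types today.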
Daklu Posted February 17, 2013

[Preface for readers: As always, my goal in these discussions is to increase my own knowledge and understanding. I have found the best way to do that is to explain my beliefs and the reasons for those beliefs, and hope those with conflicting beliefs do the same. If any of my comments (on any thread, here or on NI's forums) come across as overly aggressive or combative, I apologize. It is not my intent to beat my beliefs into others.]

> DbC (as I've heard it discussed) gets into planning with users and feature scheduling; pulls in way more topics than I'm speaking to. Not that it doesn't have something to do with this, but it's overly broad.

Okay, I can accept that.

> 1. Identity: the property of an object that distinguishes it from other objects
> 2. State: describes the data stored in the object
> 3. Behavior: describes the methods in the object's interface by which the object can be used

Based on some of the information I read earlier, I interpret Identity as a property (usually not explicitly declared in LV) that is unique to each object. Two objects of the same class have different identities. This seems to agree with that interpretation. In light of that, I don't see how a child class can ever hope to "match its parent's Identity."

For State I refer back to my earlier question. Yes, every object has an internal data State. Why does the child's State need to match the parent's State for it to be "effectively used in a framework"? It seems like an obvious violation of encapsulation.

I think of Behavior as a more abstract concept than that definition appears to advocate. To my way of thinking, a class's behavior is the simplest model that explains to users how the class works well enough for them to understand how to use it. Methods are part of that model, but they don't describe the entirety of a class's behavior. For example, a file I/O class may present itself to users as having two behavioral states: file open and file closed. The class may be written so that internally it has only one behavioral state, or it could be written so it has ten. It's irrelevant. What's important is that the class presents itself publicly as having two behavioral states. It's this public behavior that the child classes should match.*

[*Assuming the goal is to replace the parent object with the child object at runtime. If that is not the intent, there's no reason to mimic the parent's behavior.]

> This is true of most of LSP. It can be proven to be impossible to machine check or to assert. That doesn't mean it isn't a requirement.

Oh, I fully agree they are still requirements; I'm not disputing that. Every time I use a subVI or primitive, I'm using it with the assumption that it meets the set of precondition and postcondition requirements I've imagined for it. Some of those, like the data types it accepts and returns, are explicit. Others, like the execution time, are implicit. Implicit requirements are generally understood but not strictly enforced by the language or compiler. I'm disputing the apparent suggestion that requirements should be moved from the implicit realm to the explicit realm. Does it sometimes make sense to be able to strictly define some set of arbitrary requirements? Probably. I can see how it would be useful in your FPGA example. On the other hand, as flintstone pointed out, strictly defining the number of clock cycles a method is allowed to consume eliminates the possibility of creating a child class and using it in a system with different requirements.

> By that argument, all outputs should be variants and the caller should validate that the data types are types it can use.

Taken to the other extreme, every output should be a unique type that strictly defines all values that subVI will return. Every string constant, every integer field with unique range restrictions, etc., should be a unique type so you can guarantee a runtime error won't occur. I think we can agree both extremes lead us away from a productive and usable language.

If we use LV's existing type system as a starting point, adding the ability for a parent method to define the allowable range of an integer output terminal is roughly analogous to creating a new data subtype. Sure, it might look like an Int32, but I can't connect any Int32 to it. I have to coerce my Int32 into an Int32Range0-10 subtype before I can pass it out. As a thought exercise, imagine replacing the Int32Range0-10 output terminal with an enum typedef that simply lists the numbers 0-10. Functionally it's nearly the same thing. What are the long-term consequences of doing that?

Let's pretend an egg packaging factory has a vision system in place for counting the number of eggs in a box just prior to final packaging, to ensure all egg cartons are full. For the sake of the example we'll assume all their cartons hold 10 eggs. The developer recognizes that the only valid values returned from the Vision.EggCount method are the integers 0-10, so he defines an Int32Range0-10 output terminal and is confident his system will work correctly. Lo and behold, marketing discovers many customers prefer purchasing eggs by the dozen, so the factory sets up a new packaging line for 12-egg cartons. The vision system is identical in every way, except on the new line Vision.EggCount can validly return the integers 0-12. What choices does the developer who used an Int32Range0-10 have?

1. Fork the original code and maintain a separate code base for each packaging line. Generally considered a poor solution.
2. Edit the parent method and switch it to an Int32Range0-12. Then edit the calling code to add bounds checking and a configurable upper limit.

Conversely, the developer who implemented Vision.EggCount using a simple Int32 (or U8, or whatever) has already implemented bounds checking in code. If the bounds checking is in the calling code, he'll have to add a configurable upper limit, but his system is already designed to deal with the possibility of an error, so he has far less work to do. If the bounds checking is in the Vision.EggCount method, he can implement a new limit by subclassing (possibly accompanied by some simple refactoring).

> On this point, you are wrong. The more restrictions a piece of code has, the *more* places it can be used.

I respectfully disagree. A function that returns the constant Pi can only be used in those places where Pi is needed. A function that returns the constants Pi, e, g, and c can be used in more places than either the constant function Pi or the computed function Pi. Now, I'm not saying I think all constants should be rolled into one function. It's just an illustration of how loosening restrictions makes it easier to use a piece of code in more situations.

Your comment about how the constant function Pi can be used in more places than the computed function Pi raises some interesting questions. Is that universally and necessarily true? I don't think so. It's only true in those situations where reduced computation time to get Pi is desirable. Granted, that is usually the case, so I don't dispute this specific example, but I question the ability to draw a general conclusion from it. I've mostly been thinking about functional requirements, not performance requirements. To be honest, I'm not sure how to merge them. Maybe in trying to simplify things we're both wrong. After all, I can just as easily put a minimum execution time requirement on the Pi function as a maximum execution time requirement. If I define the minimum execution time as 16 seconds, is that function more usable than a Pi function with no explicit performance requirement that is shown to execute in 10 ms? It seems to me that in order to know whether a given restriction increases or decreases a function's application space, one must know the size of the application spaces being gained and lost as a result of the restriction. As those application spaces include future code not yet written, any computation is necessarily subjective, and the decision is constrained by the author's imagination.

> the interface for the C++ STL actually defines big-O performance bounds for any library implementing that interface for exactly this reason. Impossible to compiler-check, but critical to the STL being usable by most systems.

This illustrates (albeit imperfectly) what I was trying to communicate earlier. Imposing this performance requirement assumes anyone implementing this interface will be using it in a system where those performance requirements are necessary. What if I want to use the STL in a system that doesn't have those performance requirements? If the performance requirement is strictly defined and inviolate, I no longer have the ability to create an STL implementation that meets my needs. On the other hand, if they are simply documented requirements, then I am free to violate them at my own risk. Does making the STL performance requirement documented rather than enforced make the STL less usable across systems? I don't think so, though it may be slightly less "usable" from the developer's point of view, since they must understand the performance characteristics of any implementation they want to use instead of being able to haphazardly plug in a random implementation.

> Run-time error detection for things that can be caught by the compiler assumes a test bed that I know never actually gets built by the vast majority of the world's programmers.

I agree many programmers do not adequately test their code, and I agree compiler errors are easier to find and fix than runtime errors. I disagree that NI should try to create (or move towards) a language which maximizes compiler errors to compensate for inadequate testing by developers.

> The flags are the backbone and key to most every library I've ever released, critical to anyone ever making a large system workable.

I'll speculate that the reason they are the key to most every library you've released is because one of your primary concerns is to prevent people from shooting themselves in the foot. I understand why that is, and I'm not disputing that it is a valid goal to work towards. But it's not the only valid goal and it's not everybody's goal. In particular, it's not my goal. My goal is to release libraries that allow users to extend them as needed to fit their specific workflow and requirements. Is there more opportunity to blow off a couple of toes? Yep. But they can also point that gun at the raccoons rummaging through the trashcan and the cougar stalking the dog. They're not limited to plinking at the rusted truck body on blocks in the backyard.

How did using those flags make the difference between a workable system and a non-workable system? What situation did they prevent that could not have been detected equally well by decent error handling and testing? (I can't help but relate our goals to political ideologies, but I'll spare the thread that particular rabbit hole.)

> If these flags have ever gotten in your way, then either the author of the ancestor class didn't know what he or she was doing...

That's certainly possible, seeing as how the author was me. All I can say is that every single time I can think of right now where I've used either of those flags, I have encountered or been able to think of situations where imposing that requirement on a child class didn't make sense.
Daklu Posted February 17, 2013

Another thought that occurred to me towards the end of the lengthy editing session, but for which I didn't want to go back and rewrite the entire post: somehow I think the idea of using a parent class as an interface fits into this discussion.

Is allowing an interface to declare performance requirements the best solution? Instinctively I don't think it is. Suppose I publish an interface ("FastMath") that guarantees, via compiler checking, that every method will execute in less than 100 ms. Lots of child classes are built using different implementation strategies and with different tradeoffs, but they all meet the <100 ms execution time requirement. A developer realizes he could use FastMath, but he requires 50 ms execution time; 100 ms is too long. It turns out that several of the newer FastMath child classes do in fact meet his timing requirements. Unfortunately, he cannot use the FastMath interface because it doesn't exclude the slower implementations. He *can* create a new ReallyFastMath interface class, create a new subclass for each of the FastMath subclasses that meet his performance requirements, and use delegation. That is how we would do it using existing LV technology.

What if we had a solution that allows the interface user (calling code) to declare the performance requirement and the interface implementations (child classes) to publish their performance parameters? The compiler can compare the requirement against the parameters and throw an error if they don't match. It doesn't change all errors into compiler errors--dynamically loaded classes would generate an error until loaded. But it does get closer to the goal of avoiding errors without sacrificing flexibility.
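A rough sketch of what that negotiation could look like in a text language -- each implementation publishes its bound, the caller states its requirement, and a check runs when the class is loaded. Everything here (names, the millisecond figures) is invented for illustration:

```java
// Sketch of requirement/parameter negotiation: implementations publish a
// worst-case bound; the caller checks it against its own requirement at
// load time rather than discovering a mismatch mid-run.
interface FastMath {
    double evaluate(double x);
    long worstCaseMillis();  // published performance parameter
}

class TableLookupMath implements FastMath {
    public double evaluate(double x) { return Math.sin(x); /* placeholder */ }
    public long worstCaseMillis()    { return 40; }  // this impl's promise
}

class Loader {
    // The caller declares its requirement; mismatches fail immediately.
    static FastMath require(FastMath impl, long requiredMillis) {
        if (impl.worstCaseMillis() > requiredMillis) {
            throw new IllegalArgumentException(impl.getClass().getName()
                    + " promises " + impl.worstCaseMillis()
                    + " ms but caller requires " + requiredMillis + " ms");
        }
        return impl;
    }
}
```

So `Loader.require(new TableLookupMath(), 50)` succeeds, while a 120 ms implementation would be rejected the moment it was loaded -- the caller gets its 50 ms requirement without a ReallyFastMath interface ever existing.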
Aristos Queue Posted February 18, 2013

You rather severely missed the point on the Pi constant vs. computed Pi functionality. Sure, a function that can compute pi or c is usable in more places than a function that just computes pi. But a function that sometimes computes pi and sometimes computes c, where the caller never knows which one he is going to get, is substantially less useful. Worse still is a function that is documented to only produce pi but generates any number of random values when executed.
shoneill Posted February 18, 2013

I am having trouble following the discussion. Since when does the execution time of a program belong to its compile-time definition? Yeah, for FPGA I can grok that, but this can be done by folding the possible DD calls into a case structure with each "DD" VI present. If timing is maintained, good; if not, boohoo, compile error. I just don't get WHY this would be a job of the compiler outside of FPGA. I'm not trolling; I just have the feeling I'm missing something important here.

Shane.
flintstone Posted February 18, 2013

@shoneill: I guess it really depends on the task you are working on. A hard real-time system is defined as a system where correctness depends not only on the correct values being produced but also on keeping the timing bounds. At university I did a course on worst-case execution time analysis of programs and, one step further, programs that execute in constant time. For these systems you give up optimal best-case performance in favour of guaranteed upper bounds on execution time. The people there work on annotation systems for code to be able to communicate these requirements to the compiler directly in the code. So there are people who consider timing performance something they want to use a compiler for. As I am a big friend of compile-time checks, I definitely like this idea.

I also like strong static typing a lot. It may mean more work in the beginning, but for large systems it definitely pays off in the end, when there are a lot of types around that sound similar and thus tend to get mixed up (I've seen this happen in more or less every weakly-typed system as soon as the complexity is above a certain point). So for any kind of restriction on return values I would definitely go with strong static typing, especially when the framework is to be used by other programmers. If I document "Do not return something outside the range m-n", I have to rely on the user to read, understand, and follow this guideline. If I provide a type which is restricted to this range, I don't have to hope I have a good guy there who reads the documentation... he/she will need to anyway, as soon as the stuff won't compile no matter how hard they try.

Cheers, flintstone
Aristos Queue Posted February 18, 2013

shoneill: In this particular case, I'm talking about how many pipeline segments the child class uses in its override VI. If the parent specifies two feedforward nodes, the caller will be expecting that it takes two iterations to move data through the pipeline. If the child takes three iterations, it will be out of sync with the rest of the program. In a straight linear program that doesn't matter, but if you have two parallel pipes whose results get merged at the end, you need both pipes to be the same length. flintstone's comments about RT determinism also apply; I just wasn't addressing those directly, though I have talked about that as a similar problem elsewhere.

@Daklu: I was working on a class today and realized that you are likely primarily concerned about "code reuse" in the sense of inheriting from a parent class and then *calling the child directly*, where the caller does not care about most promises the parent made. I am primarily concerned about "code reuse" in the sense of *calling the parent directly*, which may dispatch to the child, where the caller cares exclusively about the promises that the parent made. If you're not taking advantage of inheritance for actual dispatching but just for inheriting functionality, then you have less need for Must Override and Must Call Parent. Not zero need, but substantially less (you only care about promises made to fulfill the parent's internal API requirements, like Must Override on protected-scope VIs, not its external API requirements, like Must Override on public-scope VIs). And I can imagine that a class designed to be called through the parent could indeed cause frustration if you then tried to write an app where you call it directly through the child -- possibly enough frustration that I would suggest you investigate containing the class instead of inheriting from it. I think this is at the heart of the difference in viewpoint between your position and mine.
Daklu Posted February 18, 2013

> You rather severely missed the point on the Pi constant vs. computed Pi functionality.

[Edit - cross-posted with AQ] Perhaps. If so, I apologize. I assumed a computed Pi function's execution time would be normally distributed and somewhat predictable, but longer than the constant Pi function's execution time. Are you assuming the execution time is unbounded and/or entirely unpredictable?

> Sure, a function that can compute pi or c is usable in more places than a function that just computes pi. But a function that sometimes computes pi and sometimes computes c, where the caller never knows which one he is going to get, is substantially less useful.

Agreed, but I'm not following how this relates back to the Pi function's execution time.

> Worse still is a function that is documented to only produce pi but generates any number of random values when executed.

Agreed. This function is clearly violating the contract of returning Pi, and it is going to cause problems for users. If there were a compiler-enforceable contract to "only return the value of Pi", that potential error would be quickly found. However, if I'm writing a Pi function for others to use, is there significant value in the ability to specify an enforceable contract that states, "this function will execute in < n ms and only returns the value 3.14159..."? **shrug** Obviously I'm skeptical.

We're talking about code and its contract as if they are separate things. They're not. If I'm building a library and want to make certain guarantees to the calling code, how do I decide what guarantees I should publish? Say I publish an "execution time < n ms" guarantee. Okay, that's helpful for those who need an execution time guarantee, but it doesn't help those who need a memory use guarantee. So I publish that guarantee too. Where does it stop? Eventually I publish so many guarantees I've effectively defined the implementation. Ultimately the implemented code *is* the contract; it's just a question of which parts of the contract I'm going to publish to the caller. For the library to supply the caller with meaningful contractual guarantees beyond the connector pane, I have to know details about what guarantees the calling code is interested in. I can't predict that--it's going to be different for every user and every app. Every guarantee I publish that the calling code author doesn't need imposes unnecessary restrictions on that person's ability to create subclasses for their specific situation.

In the abstract I can see how allowing a parent class to impose arbitrary contractual requirements on itself and its subclasses might be necessary in one specific situation--when an app calls unknown and possibly hostile subVIs, i.e., apps that allow third-party plugins and define a parent class as the interface for the plugins. The app author wants to make sure the plugins do not break the app. Is that the best way to achieve the goal? I dunno... I'm still skeptical. Off the top of my head, I think it would be better to structure the app in a way that sufficiently segregates the plugins (i.e., launch a killable actor to host each plugin and validate the plugin's return values if there's reason for concern).

> So for any kind of restriction on return values I would definitely go with strong static typing...

I generally prefer static typing as well, but as a general statement this is (imo) taking it too far. Most VIs I write have some limited set of output values that is strictly smaller than the set of all possible values of the native datatype. In other words, if there's a string indicator on an output terminal, my VIs don't usually have the ability to return every possible combination of string characters; they are only able to return some subset of them. There is clearly a restriction on the return value--should it be a unique data type? If every unique subset of the native data types is a new data type in itself, you'll either end up buried in type conversion code or you'll rarely be able to reuse code. More types does not necessarily equate to a better language.

> So for any kind of restriction on return values I would definitely go with strong static typing, especially when the framework is to be used by other programmers. If I document "Do not return something outside the range m-n", I have to rely on the user to read, understand, and follow this guideline. If I provide a type which is restricted to this range, I don't have to hope I have a good guy there who reads the documentation... he/she will need to anyway, as soon as the stuff won't compile no matter how hard they try.

Can you explain this a little more? If you are creating a framework for other developers to use, why do you need to specify "Do not return something outside the range m-n"? You implemented the framework code; surely you know what values your code is able to return?
Aristos Queue Posted February 18, 2013

> Can you explain this a little more? If you are creating a framework for other developers to use, why do you need to specify "Do not return something outside the range m-n"? You implemented the framework code; surely you know what values your code is able to return?

The parent defines a method A that says, "I return values within this range." The framework is written to call method A. At run time, a child the framework has never seen goes down the wire. It is this child's override of method A that gets invoked. That override VI needs to stay within the range promised by the parent.

You -- the author of the parent class and the framework -- know what values that method needs to return. You have to document it so that anyone overriding your parent class FOR USE WITHIN THE FRAMEWORK knows what values to return. That bold-italics-screaming part is the assumption underlying everything I and flintstone have been saying: that the primary purpose of inheritance is to reuse a class within the same framework as the parent, not for freeform use of the child class. Classes that are designed for freeform use (i.e., the Actor Framework, various Vector/Hashtable/etc. classes, that sort of thing) don't make those sorts of external usage promises, but they may still put requirements on child classes to fulfill internal requirements.
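A minimal sketch of that failure mode in a text language (all names invented; note the parent's range promise lives only in a comment, which is exactly the problem):

```java
// Sketch of a documented range promise that nothing enforces. The framework
// trusts the parent's documentation; a child the framework has never seen
// breaks it at run time.
class Grader {
    /** Promise (documentation only!): returns a score in the range 0..10. */
    int score() { return 7; }
}

class BuggyGrader extends Grader {
    @Override
    int score() { return 11; }  // compiles fine; the promise is invisible here
}

class Framework {
    private static final String[] LABELS =
            {"0","1","2","3","4","5","6","7","8","9","10"};  // sized for 0..10

    // Written against the parent's promise; dispatches to whatever child
    // arrives at run time.
    static String describe(Grader g) {
        return LABELS[g.score()];  // BuggyGrader -> ArrayIndexOutOfBoundsException
    }
}
```

`describe(new BuggyGrader())` throws at run time, in the framework's code, far from the class that actually broke the promise -- which is why the wish is to move the check to where the child is compiled.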
Daklu Posted February 18, 2013

[Grrr... -1 for web-based editors.]

> ...FOR USE WITHIN THE FRAMEWORK...

Give a guy a chance to respond to a cross-post, wouldja? Besides, we want you at the summit, not in the hospital recovering from a stroke.

> @Daklu: I was working on a class today and realized that you are likely primarily concerned about "code reuse" in the sense of inheriting from a parent class and then *calling the child directly*, where the caller does not care about most promises the parent made... I think this is at the heart of the difference in viewpoint between your position and mine.

Yes, I agree that is the difference in viewpoint. I thought I made myself clear in my post originally challenging you,

> 1. This is probably true in situations where the child class is written with the intent of replacing the parent class in a fully functional system. 2. Also probably true, as long as the child is intended to replace the parent in a fully functional application.

as well as here,

> I can't think of a single case where [using] either of those flags... is appropriate for all potential subclass situations.

and here.

> [*Assuming the goal is to replace the parent object with the child object at runtime. If that is not the intent, there's no reason to mimic the parent's behavior.] ... Does it sometimes make sense to be able to strictly define some set of arbitrary requirements? Probably... On the other hand, as flintstone pointed out, strictly defining the number of clock cycles it is allowed to consume eliminates the possibility of creating a child class and using it in a system with different requirements. ... What if I want to use the STL in a system that doesn't have those performance requirements?

My persistence was because I was trying to understand why you continued to appear to claim it is universally better for the parent class to impose restrictions on child classes, when (imo) that clearly isn't the case.

> possibly enough frustration that I would suggest you investigate containing the class instead of inheriting from it.

Composition works sometimes, but not always. Suppose your application supports plugins that add two integers and truncate the sum to 0-10, so you define that requirement as part of the parent class. Users can create child classes implementing the functionality in any way they like, as long as they adhere to that requirement. Now I'm building an application and I really like that feature, but my app needs the sum truncated to 0-20. Creating a new class composed of one of the child classes doesn't help me much. If the inputs are 7 and 2, I can delegate to the child class and all is good. If the inputs are 7 and 6, I cannot. I have to write code to detect whether or not I can use the child class on the inputs, AND I have to duplicate much of the code in the child class to handle situations where the sum equals 11-20. (A sketch of this dead end follows at the end of this post.)

That's what I meant when I said more restrictions equals fewer places the code can be used. An interface that declares "I will return the value of Pi in < 100 ms" will always be useful in fewer places than an interface that declares "I will return the value of Pi" without giving a timing specification, because the developer using the latter interface can create an implementation that meets their specific timing requirements if one isn't readily available.

Of course there's always a tradeoff between flexibility and specificity. I could create a "Function" interface with two variant inputs and a single variant output, and then create subclasses for every operation from Add to Concatenate String. Clearly that's moving too far in that direction. From my perspective, the next big productivity jumps are going to come when NI implements things like Interfaces/Traits and Generics/Templates. Implementing the ability for a parent class to impose an arbitrary requirement on a child class strikes me as an interesting idea with somewhat limited real-world benefit.
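Here is the composition dead end from above as a rough Java sketch (names invented; assume Clamp10Adder faithfully implements the parent's truncate-to-0-10 requirement):

```java
// Sketch of why delegation can't widen a parent-imposed restriction.
class Clamp10Adder {
    int add(int a, int b) {
        return Math.max(0, Math.min(10, a + b));  // parent's rule, baked in
    }
}

class Clamp20Adder {
    private final Clamp10Adder inner = new Clamp10Adder();

    int add(int a, int b) {
        int sum = a + b;
        if (sum <= 10) {
            return inner.add(a, b);  // delegation works: 7 + 2 -> 9
        }
        // 7 + 6: inner would return 10, not 13, and the true sum cannot be
        // recovered from the clamped result -- so the wrapper must
        // reimplement the very logic it hoped to reuse.
        return Math.max(0, Math.min(20, sum));
    }
}
```

The wrapper ends up owning a second copy of the arithmetic, and every future fix to the plugin's logic has to be made twice.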
Aristos Queue Posted February 18, 2013

> My persistence was because I was trying to understand why you continued to appear to claim it is universally better for the parent class to impose restrictions on child classes, when (imo) that clearly isn't the case.

My persistence is because it is universally true. Whatever requirements the parent class has, it should be able to impose through code. It is only the emphasis that shifts, not the nature of the restrictions. As I said, there are still internal restrictions of the class that may need to be met for the class itself to function (the primary purpose of Must Call Parent). It was only "Must Override", and the frequency with which "Must Call Parent" is particularly useful, that I was downgrading.
shoneill Posted February 19, 2013

> shoneill: In this particular case, I'm talking about how many pipeline segments the child class uses in its override VI. If the parent specifies two feedforward nodes, the caller will be expecting that it takes two iterations to move data through the pipeline. If the child takes three iterations, it will be out of sync with the rest of the program. In a straight linear program that doesn't matter, but if you have two parallel pipes whose results get merged at the end, you need both pipes to be the same length.

@AQ Thanks. The pipelining example (although it WAS mentioned previously) helps clear up at least one concrete case for this that I can fully appreciate. I think I get the general gist of it now. I've had my own problems with pipelining FPGA code, so I can appreciate how this would be a great benefit. Thanks.
Daklu Posted February 20, 2013

> My persistence is because it is universally true.

I wasn't clear enough. If a guarantee is explicitly declared in the parent class, then yes, child classes should have to adhere to it. I'm questioning the (perceived) assumption that the guarantee belongs in the parent class in the first place.

You've been phrasing it in terms of "the parent class providing guarantees to the caller." I agree there are certain times that ability may be useful--i.e., plug-in systems. More generally, you're talking about "the callee providing guarantees to the caller." Guarantees are a first step towards contracts. But whereas guarantees give callers a choice to "take it or leave it," a contract system where the calling code defines its requirements and the parent class (and subclasses) tells the calling code whether or not it meets those requirements is more of a "let's negotiate and see if we can work it out" arrangement.

Historically NI has been pretty good at putting out features that work well for 80% of developers. Unfortunately, often the feature is useless for the remaining 20% of us because there's no way to tailor it to meet our specific needs. I'm concerned your emphasis on guarantees will lead to another 80/20 feature whose flexibility is permanently limited by backwards compatibility concerns. I know what I am suggesting is much larger in scope than what you are talking about, and LabVIEW may never support contract-based programming. I'm just saying that if you're going to go from A to B, make sure the path continues on to C. We users are a fickle bunch. When a new feature comes out that we like, we tend to want to use it in ways NI didn't expect.
Aristos Queue Posted February 21, 2013

> I'm questioning the (perceived) assumption that the guarantee belongs in the parent class in the first place.

And I'm trying to answer that question by saying, wholeheartedly, without reservation, hesitation, or exception, that yes, the parent must, should, needs, and wants to declare the restrictions. I'm saying that the parent has to declare what it itself is to be used for. If that design is "I am a free-floating class meant to be used in any number of applications as one of my child classes", then it will have few restrictions. If the parent class is designed to be used as part of a framework, then it will have many restrictions. But it is the parent class that decides its own usage. And that has ramifications for the children, because the children *are* instances of the parent. And as instances of the parent, they want, need, must, and should adhere to the rules of the parent.

To go further, the reason the parent needs to specify its own usage pattern is that a class designed to be used as part of a framework is designed completely differently from a class meant to be just a reusable base class for other components. A parent class has to make some assumptions in its own internal code about what is going on around it, and those assumptions are very different for a framework class than for a free-floating class. And so, yes, a parent needs to be able to lay down restrictions and a child class must follow them. And this has NOTHING to do with NI or LabVIEW. This is fundamental to programming correct code in *any* language.
drjdpowell Posted February 21, 2013

I do have an example of a parent-class restriction that I wish I could make. I have an abstract “address” object that defines a method for sending a message. The framework that uses this parent assumes that “Send.vi” is non-blocking (and reasonably fast). But there is nothing stopping a child class being implemented with a blocking enqueue on a size-limited queue, other than documentation.
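That situation translates almost directly into a text-language sketch (names invented; the non-blocking promise again lives only in documentation):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the example above: the parent documents that send() must not
// block, but nothing stops a child from blocking anyway.
abstract class Address {
    /** Contract (documentation only): must be non-blocking and fast. */
    abstract void send(String message);
}

class WellBehavedAddress extends Address {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);

    @Override
    void send(String message) {
        // offer() never blocks; on a full queue it returns false immediately.
        if (!queue.offer(message)) {
            // drop, count, or report -- but return promptly either way
        }
    }
}

class BlockingAddress extends Address {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);

    @Override
    void send(String message) {
        try {
            queue.put(message);  // blocks when full -- silently breaks the framework's assumption
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Both children compile and dispatch identically; only the second silently stalls the framework when the queue fills.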
JackDunaway (Author) Posted February 22, 2013

> And I'm trying to answer that question by saying, wholeheartedly, without reservation, hesitation, or exception, that yes, the parent must, should, needs, and wants to declare the restrictions. ... And as instances of the parent, they want, need, must, and should adhere to the rules of the parent.

Completely agreed; this is precisely the sentiment and context in which 'Must Implement' was proposed. A good article on this topic is "Make Interfaces Easy to Use Correctly and Hard to Use Incorrectly". Any of the arguments above about contracts making a library harder to use, or promoting hacks that simply fulfill a contract in order to subclass, just seem... misguided. The whole point of 'Must Implement' or 'Must Override' (or any strategic contract, such as a timing requirement) is to make the library easy to use correctly and hard to use incorrectly.

Consider, for the Actor Framework, if Message.lvclass were to define 'Must Implement' on Send.vi (as it already specifies 'Must Override' on Do.vi). Do you agree this is a good use of the contract to make subclass creation more robust and even simpler? Does this example better explain my sentiment for wanting 'Must Implement'?
drjdpowell Posted February 22, 2013

> Consider, for the Actor Framework, if Message.lvclass were to define 'Must Implement' on Send.vi (as it already specifies 'Must Override' on Do.vi). Do you agree this is a good use of the contract to make subclass creation more robust and even simpler? Does this example better explain my sentiment for wanting 'Must Implement'?

A good example, because “Send” being a method of Message has always looked wrong to me. Messages are written; sending is an action of a communication channel. The act of sending should be independent of the message type. I don’t want to implement Send.vi; I want to implement Write.vi. How will “Must Implement Send.vi” feel about that?

Also, what about messages that have no data and thus don’t need a creation method of any kind? They don’t need to implement Send or Write.
JackDunaway (Author) Posted February 22, 2013

> A good example, because “Send” being a method of Message has always looked wrong to me. Messages are written; sending is an action of a communication channel.

I tend to agree; Send.vi is an invocation of the message transport mechanism -- Messenger.lvclass -- and Construct.vi (I prefer this terminology to Write.vi) is a member of a concrete instance of Message.lvclass -- something I realized a while back after naïvely convolving the message with the messenger. This aside, I still want to impose 'Must Implement' on Construct.vi for Message.lvclass, yet it clearly cannot be 'Must Override', because message construction has a unique ConPane for each concrete message type.

> Also, what about messages that have no data and thus don’t need a creation method of any kind? They don’t need to implement Send or Write.

An object always carries 'data' -- even if it's payload-free, the type carries information about the message. There are still benefits to requiring Construct.vi for these payload-less message objects -- even though that method effectively takes no inputs to construct.
Daklu Posted February 23, 2013

> And I'm trying to answer that question by saying, wholeheartedly, without reservation, hesitation, or exception, that yes, the parent must, should, needs, and wants to declare the restrictions.

I'm not sure you're understanding what I'm trying to say, so let me try to explain it another way...

I think you're saying that the ability for the parent class to dictate arbitrary requirements to the child classes is necessary, and that this ability is intended for situations where the parent class acts purely as an interface for child class implementations and the context in which the parent class is used is known -- i.e., I'm writing the parent class *and* the only code that will ever call the parent or child classes.

Now, suppose those arbitrary requirements were declared in the calling code instead of the parent class. What functionality has been lost? Since the calling code defines the restrictions for all methods that are executed, and none of the child methods will be executed outside of the calling code, any violations in child method implementations will be caught. I expect that often the calling code and child methods will be in memory at the same time while editing, so it's conceivable that users would not even lose much in the way of compile-time checking. If calling-code declarations provide the same protection capabilities as parent-class declarations, but also provide more flexibility, why is that not a preferred solution?

Earlier you mentioned your goal is related to Liskov but not to DbC. Theoretically I can see there may be value in providing a way to enforce LSP. However, the functionality you're proposing is an incomplete and limited form of DbC. If you're going to start down the path of DbC, why not design it in a way that makes sense from a DbC perspective?

(In some ways this feels like the ability to inline subVIs. Currently the only option for inlining rests with the author of the subVI. That implementation feels backwards to me. I'd much rather the calling code be able to dictate which subVIs it wants to inline. I'm often frustrated during LapDog development because I'm being forced to make design decisions that are better left to my users.)

> I'm saying that the parent has to declare what it itself is to be used for... But it is the parent class that decides its own usage.

See, intuitively that doesn't make sense to me. It's certainly not the way I think about things. No class (or subVI) ever declares what it is to be used for. It only declares what it does, and it does that in code. What it is used for, or how it is used, is entirely up to the person writing the calling code, not the person designing the class. There's a correlation between the two, but I don't think they are identical.

> Any of the arguments above about contracts making a library harder to use, or promoting hacks that simply fulfill a contract in order to subclass, just seem... misguided. The whole point of 'Must Implement' or 'Must Override' (or any strategic contract, such as a timing requirement) is to make the library easy to use correctly and hard to use incorrectly.

LOL. If you want to say I'm wrong, that's okay. I've been wrong in the past and I'll be wrong in the future. Consensus appears to be that I'm wrong about this. I may very well be wrong, but I haven't seen anybody address my point. I don't think I claimed contracts made the library "harder" to use. I did say it's less "usable," but I meant usable in terms of flexibility and the places it can be used successfully, not usable in terms of how easy the API is to use.

> A good article on this topic is Make Interfaces Easy to Use Correctly and Hard to Use Incorrectly.

That is an interesting take on it. Unsurprisingly, I don't fully agree with it. IMO the best interfaces are those that are easy to use and easy to extend to work in scenarios the developer did not originally anticipate. The second-to-last paragraph contains an extremely important idea:

> The best way to prevent incorrect use is to make such use impossible. If users keep wanting to undo an irrevocable action, try to make the action revocable. If they keep passing the wrong value to an API, do your best to modify the API to take the values that users want to pass.

They state you should make incorrect use "impossible," but notice how they make it impossible. It's not by imposing restrictions on the interface user and forcing them to conform to your way of thinking; it's by changing your interface to allow users to use it in the way they want to.
Daklu Posted February 23, 2013

> This aside, I still want to impose 'Must Implement' on Construct.vi for Message.lvclass, yet it clearly cannot be 'Must Override', because message construction has a unique ConPane for each concrete message type.

I still don't understand why you want to impose a Must Implement requirement on Construct. Why does your parent care whether a child object was created in a Construct VI or with an object cube and a bunch of setters? Either way is valid. Using a Construct VI is entirely a stylistic decision.
ShaunR Posted February 23, 2013

> No class (or subVI) ever declares what it is to be used for. It only declares what it does, and it does that in code. What it is used for, or how it is used, is entirely up to the person writing the calling code, not the person designing the class.

This is also the crux of the Private vs. Protected debate. Which is it better to do? Put on so many restrictions that they have to edit your code for their use case (and you will get all the flak for their crap code), or make it easy to override/inherit so they can add their own crap code without touching your "tested to oblivion" spaghetti -- regardless of what you think they should or shouldn't do?