[Ask LAVA] Must Override exists; could Must Implement?


Recommended Posts

I still don't understand why you want to impose a must implement requirement on Construct.  Why does your parent care if a child object was created in a Construct vi or with an object cube and a bunch of setters?  Either way is valid.  Using a Construct vi is entirely a stylistic decision.

 

Since subclasses inherit state from the parent, it could be desirable to ensure the parent object is constructed properly by imposing 'Must Call Parent' in addition to 'Must Implement'. (Any parent enforcing 'Must Implement' without specific functional requirements such as this is probably better designed without the contract, allowing the subclass designer the freedom to construct the object with a constant and setters.) And 'Must Call Parent' can also ensure atomicity of construction when it's important to fully construct the object before invoking any methods on it.
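
(For readers more comfortable with text languages: a rough Java sketch of that construction rule, with invented class names -- the child cannot add its own state until the parent's state has been built.)

```java
// Hypothetical sketch: parent state must be initialized before the child's.
class Instrument {
    private final String resourceName;        // parent-owned state

    Instrument(String resourceName) {
        // The parent "constructor" fully initializes parent state.
        this.resourceName = resourceName;
    }

    protected String getResourceName() {
        return resourceName;
    }
}

class Multimeter extends Instrument {
    private final double rangeVolts;

    Multimeter(String resourceName, double rangeVolts) {
        super(resourceName);           // the 'Must Call Parent' analogue: the child cannot
                                       // skip parent construction
        this.rangeVolts = rangeVolts;  // child state is added only after parent state exists
    }
}
```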

Which is better? Putting so many restrictions on it that they have to edit your code for their use case (and you will get all the flak for their crap code), or making it easy to override/inherit so they can add their own crap code without touching your "tested to oblivion" spaghetti -- regardless of what you think they should or shouldn't do.

 

It sounds like the overzealous parent class designer you describe is taking contracts to an extreme, and has crossed the line of "strategically-placed contracts to make the class easy to use correctly and hard to use incorrectly".

 

Just for clarity: do you suggest 'Private' scope should not exist, or should developers just consider 'Protected' way more often?

Link to comment
This is also the crux of the Private Vs Protected debate. Which is better? Putting so many restrictions on it that they have to edit your code for their use case (and you will get all the flak for their crap code), or making it easy to override/inherit so they can add their own crap code without touching your "tested to oblivion" spaghetti -- regardless of what you think they should or shouldn't do.

 

There are two different types of reuse, and two different "ease of use" requirements, and they oppose each other. So the answer is that you put as many restrictions on the class as makes sense for the intended usage.

 

If I am creating a class that fulfills a niche in a product that others are going to plug into, that parent class is going to be shaped very closely to match that niche, and is going to have as many rules in place as I can to make sure that any child classes also fit within that niche... in that case, I am *not* designing for this parent class to be pulled out of its context and so I am *not* making it easy for the person who has a use case I've never heard of. Instead, I'm trying to make it easy for the person who *wants* to fit within that niche because they're trying to implement some alternative within an existing system.

 

If I am developing more of a "top-level library class" meant to have wide utility, then I will create more "cut points" within the class, i.e., dynamic dispatch VIs where people can replace the parent class' functionality with their own.

 

 

But suppose I'm looking at a class that is a top-level library class, and that class has a method A that implements the general algorithm, and A calls B, which implements a detail of the algorithm. I might make both of these methods dynamic dispatch, so that new users can override either the general algorithm or can continue to reuse that general algorithm and only override B to replace a specific detail. But that has a curious effect -- anyone who overrides A never even calls B [in most cases] because B is specific to the one implementation of A and the override of A does something entirely different. That's a clue that perhaps you really want something more like *two* separate classes with a strategy pattern between them.
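
(Roughly, in Java terms with invented names, those are the two shapes being contrasted: a dynamic-dispatch A that calls a dynamic-dispatch B, versus splitting the detail out into a separate strategy class.)

```java
// Shape 1: both A and B are overridable; overriding A usually orphans B.
abstract class Algorithm {
    public void runA() {             // general algorithm (method "A")
        // ... setup ...
        stepB();                     // detail hook (method "B")
        // ... teardown ...
    }
    protected void stepB() {         // children may override just the detail
        // default detail
    }
}

// Shape 2: the detail becomes a separate class -- a strategy the algorithm delegates to.
interface StepStrategy {
    void stepB();
}

class AlgorithmWithStrategy {
    private final StepStrategy step;
    AlgorithmWithStrategy(StepStrategy step) { this.step = step; }

    public void runA() {
        // ... setup ...
        step.stepB();                // detail supplied by composition, not inheritance
        // ... teardown ...
    }
}
```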

 

There are lots of variations. The point is that the parent has to decide which use case it is going to serve, and if it is serving the "I'm helping children to fit this niche" use case, then it benefits from the ability to specify exactly the dimensions of that niche and throw every restriction in the book on its API. And the whole continuum of use cases exists, such that the language itself benefits from a class *being able* to specify these restrictions, whether or not every class actually uses them.

 

And, yes, sometimes you have a parent class that you wish you could reuse except for some small restriction... I ran into that recently with the Actor Framework... I actually have a use case for an actor that does *not* call the parent implementation of Actor Core.vi. But removing the Must Call Parent restriction from Actor Core.vi does a disservice to the 90% use case of people trying to correctly implement actors. And, indeed, my use case *is not really an actor*. So that was a big clue to me that the inheritance might not be the best way to handle this, and I looked for other solutions.

  • Like 1
Link to comment
Since subclasses inherit state from the parent, it could be desirable to ensure the parent object is constructed properly by imposing 'Must Call Parent' in addition to 'Must Implement'. (Any parent enforcing 'Must Implement' without specific functional requirements such as this is probably better designed without the contract, allowing the subclass designer the freedom to construct the object with a constant and setters.) And 'Must Call Parent' can also ensure atomicity of construction when it's important to fully construct the object before invoking any methods on it.

 

It sounds like the overzealous parent class designer you describe is taking contracts to an extreme, and has crossed the line of "strategically-placed contracts to make the class easy to use correctly and hard to use incorrectly".

 

Just for clarity: do you suggest 'Private' scope should not exist, or should developers just consider 'Protected' way more often?

 

Personally? More the latter (but I have heard reasonable arguments for the former). For example, in languages where you declare the scope of variables, it's imperative to define variables that maintain state as private (this restricts creating debugging classes). Methods, on the other hand, should generally be protected so that you don't restrict the ability to affect behaviour, and I have never seen (nor can I think of) any reason why any should be private. Even those that the developer sees as private "may" be of use to a downstream developer.
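
(As a rough text-language sketch of that policy -- Java, with made-up names -- state stays private while behaviour stays protected, so a downstream developer can still hook it.)

```java
class Logger {
    private int linesWritten;                  // state kept private, per the rule above

    // Behaviour kept protected: a child can wrap or replace it without
    // touching this tested implementation.
    protected void writeLine(String line) {
        // ... write 'line' to disk ...
        linesWritten++;
    }
}

class TimestampedLogger extends Logger {
    @Override
    protected void writeLine(String line) {
        super.writeLine(java.time.Instant.now() + " " + line);
    }
}
```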

There are two different types of reuse, and two different "ease of use" requirements, and they oppose each other. So the answer is that you put as many restrictions on the class as makes sense for the intended usage.

I think that here we fundamentally disagree. There is only "re-use"; one "instance", if you like: can it be re-used without modification? Re-purposing without modification goes a long way towards that, and the more restrictions, the less it can be re-purposed. One is aimed at the user, the other at downstream developers, but they are not in opposition (we are not looking at Public Vs Private). When re-purposed, you (as the designer) have no idea of the use-case regardless of what you "intended". Suffice to say a developer has seen a use case where your class "sort of" does what he needs, but not quite. Placing lots of restrictions just forces downstream developers to make copies with slight modifications, and that is anathema to re-use.

As for "ease of use". Well. That is subjective. What is easy for you may not be easy for me especially if it is a use-case that was conceived when your crystal ball was at the cleaners :D

Edited by ShaunR
Link to comment
Personally? More the latter (but I have heard reasonable arguments for the former). For example, in languages where you declare the scope of variables, it's imperative to define variables that maintain state as private (this restricts creating debugging classes). Methods, on the other hand, should generally be protected so that you don't restrict the ability to affect behaviour, and I have never seen (nor can I think of) any reason why any should be private. Even those that the developer sees as private "may" be of use to a downstream developer.

I think that here we fundamentally disagree. There is only "re-use"; one "instance", if you like: can it be re-used without modification? Re-purposing without modification goes a long way towards that, and the more restrictions, the less it can be re-purposed. One is aimed at the user, the other at downstream developers, but they are not in opposition (we are not looking at Public Vs Private). When re-purposed, you (as the designer) have no idea of the use-case regardless of what you "intended". Suffice to say a developer has seen a use case where your class "sort of" does what he needs, but not quite. Placing lots of restrictions just forces downstream developers to make copies with slight modifications, and that is anathema to re-use.

Alright, I'll concede that with your phrasing I am abusing the term "reuse". Let me rephrase -- designing for reuse is often in conflict with designing for inheritance-for-original-use. The parent class is not being incorporated into other applications or systems. The parent class is being "reused" in the sense that its code is part of all the children, all of which are participating in the same original system. Speaking to Daklu's argument that these are restrictions that are better placed by the caller... the children do not know the caller necessarily. Oh, they may know "I am used by that EXE over there", but they do not know how that EXE works, or what all the intricacies of that environment are. The parent is the one part of the system that they know. The parent knows the rules it itself had to follow to be a part of the system. It needs to communicate those rules to its children and -- where possible -- help its children by making those rules more than just documented suggestions and instead compiler checked enforcement.

 

When I used the term "reuse", I'm speaking of the fact that the parent is reused by each child because the child does not have to duplicate all the code of the parent within itself, one of the first motivators of inheritance in CS.

 

And, as for the "private" argument -- the other reason for having private methods, like private data, is because they are the parts of the parent class that the parent may reimplement freely without breaking children in future revisions. They are often precisely the parts that you do not want children using as cut points because then you cannot change those function signatures or delete those methods entirely in future releases without breaking the child classes.

 

I have never seen (nor can I think of) any reason why any should be private.
A trivial one... I have a private piece of data, which, you admit, is useful to keep as private. I may implement private data accessors for that piece of data because it can aid the development and maintenance of the class itself to be able to breakpoint and range check in those accessors. But if I make the accessors protected or public, I have substantially limited my ability to change the private data itself.

 

There are lots of others, IMHO, but that seems to me to be an easy one.
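
(A minimal Java sketch of that trivial case, with invented names: the private accessor exists only so the class author can centralize range checks and breakpoints, and keeping it private preserves the freedom to rework or delete it, or the field, later.)

```java
class Oven {
    private double setpointC;                  // private data

    // Private accessor: one place to range-check and set breakpoints while
    // developing the class itself; being private, it (and the field) can be
    // reworked or deleted later without breaking any child class.
    private void setSetpointC(double value) {
        assert value >= 0.0 && value <= 300.0 : "setpoint out of range";
        this.setpointC = value;
    }

    public void preheat() {
        setSetpointC(180.0);                   // internal use only
    }
}
```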

Link to comment
And, as for the "private" argument -- the other reason for having private methods, like private data, is because they are the parts of the parent class that the parent may reimplement freely without breaking children in future revisions. They are often precisely the parts that you do not want children using as cut points because then you cannot change those function signatures or delete those methods entirely in future releases without breaking the child classes.

 

Of course you can change or delete them. You just need to "deprecate" them first (which to me you should always do anyway).
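
(In text languages that heads-up is usually a deprecation marker left in place for a release or two before deletion; a Java sketch with a hypothetical method.)

```java
import java.nio.file.Path;
import java.nio.file.Paths;

class ReportWriter {
    /** @deprecated use {@link #writeReport(Path)} instead; will be removed in a later release. */
    @Deprecated
    public void writeReport(String path) {
        writeReport(Paths.get(path));          // old entry point forwards to the new one
    }

    public void writeReport(Path path) {
        // ... actual implementation ...
    }
}
```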

 

If I have defined them as protected it's not my problem if their code breaks child classes. They have made a conscious decision to override my bullet proof one for whatever reason sounds sane in their mind, so they should be aware of the consequences.

 

 

A trivial one... I have a private piece of data, which, you admit, is useful to keep as private. I may implement private data accessors for that piece of data because it can aid the development and maintenance of the class itself to be able to breakpoint and range check in those accessors. But if I make the accessors protected or public, I have substantially limited my ability to change the private data itself.

 

There are lots of others, IMHO, but that seems to me to be an easy one.

You haven't changed anything (and don't try to bring public in as an equivalent - it's not). Similarly to my previous paragraph, they should understand what the consequences are since they understand why they are doing it. By making it private you are denying them the opportunity to add, in your example, logging to that accessor. So. What will they do? Hack your code! When it finally all falls to pieces three weeks later after they have forgotten about the hack and they have put in a bug report for your class (which you won't be able to replicate) you will eventually find that out if/when they send the code. 

 

You don't stop them from doing anything by making it private. What you do is force them to modify your code to make it fit their use case. Bear in mind also that it is only on rare occasions that this is required, but the argument is that if they wish to do so, however unsavory it may be, then they should be able to without modifying the original, tested code. Then it's their problem, not yours.

Link to comment
Of course you can change or delete them. You just need to  "deprecate" them first (which to me you should always do anyway).

Deprecation? As a common solution? Do you work in an environment where revisions of classes take two years between iterations and where you can support all the old functionality during that time? I definitely do not. Backside functionality of a component is revised on a monthly basis. I mean, sure, deprecation *sometimes* works for widely distributed libraries of independent content, but that is a non-starter for most component development within an app.

 

As for them making changes to your own code, that's one of the strong arguments for distributing as binaries, not as source code. Myself, I prefer the "distribute as source and let them fork the code base if they want to make changes but know that they are now responsible for maintaining that fork for future updates." But I understand the "binaries only" argument. It solves problems like this one.

Link to comment
Deprecation? As a common solution? Do you work in an environment where revisions of classes take two years between iterations and where you can support all the old functionality during that time? I definitely do not. Backside functionality of a component is revised on a monthly basis. I mean, sure, deprecation *sometimes* works for widely distributed libraries of independent content, but that is a non-starter for most component development within an app.

 

As for them making changes to your own code, that's one of the strong arguments for distributing as binaries, not as source code. Myself, I prefer the "distribute as source and let them fork the code base if they want to make changes but know that they are now responsible for maintaining that fork for future updates." But I understand the "binaries only" argument. It solves problems like this one.

Deprecation as opposed to deletion. If you just delete it you will break any existing code anyway. It's nice to give developers a heads up before just crashing their software ;)

 

What have binaries  got to do with anything? That's just saying use it or use something else.

Edited by ShaunR
Link to comment
Deprecation as opposed to deletion. If you just delete it you will break any existing code anyway. It's nice to give developers a heads up before just crashing their software ;)

And that's why you make something private: so it can be deleted without breaking any existing code.

What have binaries  got to do with anything? That's just saying use it or use something else.

The binaries ensure that when you ship something as private, someone doesn't make it public and then blame you when you delete it.

Link to comment
[The parent] needs to communicate those rules to its children and -- where possible -- help its children by making those rules more than just documented suggestions and instead compiler checked enforcement.

 

The only advantage to having the restrictions in the parent instead of the calling code is that it guarantees a compile error instead of a runtime error.  In every other way it makes more sense to put the restrictions in the code that actually requires the restrictions--the calling code.  Is gaining compile time checking worth unnaturally twisting around the requirements so they flow up the dependency tree instead of down?  Personally I don't think so.  It comes across as a shortsighted solution to an immediate problem rather than a well-planned new feature.

Link to comment
It sounds like the overzealous parent class designer you describe is taking contracts to an extreme, and has crossed the line of "strategically-placed contracts to make the class easy to use correctly and hard to use incorrectly".

 

 

There is no clear line separating "too much contract" from "not enough contract."  Any contract preventing a class user from doing what they want to do is too much, and from that user's perspective the parent class designer was overzealous.

Link to comment
I tend to agree; Send.vi is an invocation of the message transport mechanism -- Messenger.lvclass -- and Construct.vi (I prefer this terminology to Write.vi) is a member of a concrete instance of Message.lvclass -- something I realized a while back after naïvely convolving the message with the messenger.

 

This aside, I still want to impose 'Must Implement' on Construct.vi for Message.lvclass, yet it clearly cannot be 'Must Override' because message construction has a unique ConPane for each concrete message type.

 

 

It sounds like the overzealous parent class designer you describe is taking contracts to an extreme, and has crossed the line of "strategically-placed contracts to make the class easy to use correctly and hard to use incorrectly".

 

But aren't you in danger of being the overzealous designer, Jack? :)  You want to impose "Must Implement" of a "Construct.vi" on "Message", a class that I don't believe even has a constructor at the moment.  And at least initially, you imagined this required constructor to be "Send".  What requirements could you have made at that point that were not, in hindsight, either blindingly obvious ("we need to construct our objects"), or overzealous ("must be called Send or Constructor", "must have an Enqueuer input")?  You can't enforce "must actually use this function", so any error in requirements will just lead to unused "husk" methods made only to satisfy the requirements.

 

— James

 

BTW: There is an example of this very thing in the Alpha/Beta Task example in 2012, where “Alpha Task Message” has two independent constructors: a “Send Alpha Task”, following the standard pattern (not enforced, of course), and then a “Write Data” constructor written when it became necessary to write a message without sending it.

Link to comment
Is gaining compile time checking worth unnaturally twisting around the requirements so they flow up the dependency tree instead of down? 

IMHO, the answer is "yes" in the initial writing the code case and "hell yes" in the event that you're releasing version 2.0 and the requirements have changed. Every error caught by the compiler is worth months of runtime errors and errors that potentially are not found until end users are seeing them.

Link to comment

Every error caught by the compiler is worth months of runtime errors and errors that potentially are not found until end users are seeing them.

 

The implication of this statement worries me.  To reiterate some of the things I said in posts 78 and 81, asserting that it's always better for an error to be found at compile time instead of runtime ignores the cost (in terms of language flexibility, ease of use, etc.) of actually making the error detectable by the compiler.  As a thought experiment, imagine a language in which every error is a compiler error.  Is it safe?  Does it help prevent users from making mistakes?  Yes on both counts.  Is it a "better" language?  That's a subjective evaluation, but I think most people would get frustrated with it very quickly and switch to a language with more flexibility.

 

(For the sake of moving the discussion forward, I'll assume you agree at least sometimes it is preferable to defer error checking to runtime, even if it is possible to change the language to eliminate that runtime error.  I trust you'll let me know if you disagree.)

 

The question is one of cost vs benefit.  How much value are we giving up to get compile time checking and how much value are we gaining?  Based on statement above, it appears you are assigning a very high value to the benefit of making this a compiler checked feature.  I think you are over-valuing it.  Let me explain...

 

First, I don't think categorizing a contract violation as a runtime error or compile time error is completely accurate.  It's really a load time error.  In principle, with caller declared contracts the error can be detected as soon as the code declaring the contract (the caller) and the code under contract (the sub vi or child classes) are loaded into memory at the same time.  There's no need to write extensive test code executing all the code paths.  Depending on the structure of the application, the error could be discovered during editing, building, or execution.  In most real world use cases the error will be discovered at edit time because most users have their entire code base already loaded into memory.

 

Second, the use cases where the error isn't discovered until execution are advanced architectures--modular applications that dynamically link to dlls or vis, like this:

 

[attached image]

 

Advanced architectures are built by advanced developers.  These are not the people you need to worry about shooting themselves in the foot.  That's not to say we don't do it occasionally, but we're typically looking for more flexibility, not more protection.  Seems to me advanced developers are the ones most likely to get frustrated by those restrictions.

 

So what benefit do parent declared contracts bring to the table that caller declared contracts don't offer?  Guaranteed edit time contractual errors aren't the ideal NI should be shooting for when the additional use cases captured by that guarantee (compared to the non-guaranteed version) are few in number, used by advanced developers, and may not be wanted in the first place.

 

Do guaranteed edit time contractual errors save months of runtime errors?  Only in the rare case where a developer creates and releases a plugin without even loading it into the application it's designed for.  That's not a bug, that's negligence.  If NI is trying to protect that developer from himself I'll really get worried about LabVIEW's future direction.

 

 

...and "hell yes" in the event that you're releasing version 2.0 and the requirements have changed.

 

(I assume in v2.0 the requirements have tightened since all the child classes would still work fine if the requirements were loosened.)

 

In this scenario you're still not gaining much (if anything) by having parent declared contracts.  Come to think of it, parent declared contracts don't prevent runtime contract violations from occurring.  The violation can't be detected until each child class is loaded, and in the case of dynamically called child classes it's possible the child class was never loaded into the dev environment after editing the parent class' contract.  Oops, runtime error.  You can't turn that into a guaranteed edit time error unless you're willing to make parent classes automatically load all children.

 

If you're really hell-bent on throwing edit time errors for the plugin developers in the pattern above, I could get behind an implementation that uses a separate xml or text file to publish the contractual terms the parent class offers to its caller.  Call it something like PlugIn.lvcontract, put it in the same directory as the class, and load it when the class is loaded.  Maybe each child class has their own .lvcontract file and they are checked against the parent's .lvcontract file to make sure the child offers terms that are compatible.

 

Advanced users get the flexibility of choosing whether or not they want to enforce the same requirements the original developer thought were necessary, and casual users get the reassurance of edit time errors.  Win win.

Link to comment
Not necessarily. MJE just posted a classic example of exactly what I am trying to get across.

 

I posted a reply to MJE, but one section of my post there is relevant here:

 

> When I add a flexibility point, I prefer to do it in response to some user need, not just on the off chance that

> someone might need it because every one of those points of flexibility for an end user becomes a point of

> inflexibility for the API developer, and, ultimately, that limits the ability of the API to flex to meet use cases.

 

That pretty much sums up what I've learned over my years of programming for component development.

Advanced users get the flexibility of choosing whether or not they want to enforce the same requirements the original developer thought were necessary, and casual users get the reassurance of edit time errors.  Win win.

 

I had actually been putting together notes -- based on this conversation -- for a concept I was calling "a class' optional straightjacket". Where it falls short -- at the moment -- is module interoperability. Suppose we say that "all actors will stop instantly all activity upon receiving an Emergency Stop message". And we made that an optional straightjacket for actor classes. An actor that chose not to live by the straightjacket isn't just a loose actor, it is actually not an actor... its ability to be reused in other systems is actually decreased. It is, in a sense, unreliable.

 

That led me to consider "a class that chooses not to wear the straightjacket *is not a value of the parent class*". Bear with me here, because these notes are a work in progress, but you brought it up. If a parent class has a straightjacket to let it be used in an application, and the child class chooses not to wear the straightjacket, then it can still inherit all the parent class functionality, but it cannot be used in a framework that expects classes that use the straightjacket. This makes it very different from an Interface or a Trait because the parent class *does* wear the straightjacket, but inherited children do not necessarily do so.

 

Thoughts?

Link to comment
I posted a reply to MJE, but one section of my post there is relevant here:

 

> When I add a flexibility point, I prefer to do it in response to some user need, not just on the off chance that

> someone might need it because every one of those points of flexibility for an end user becomes a point of

> inflexibility for the API developer, and, ultimately, that limits the ability of the API to flex to meet use cases.

 

That pretty much sums up what I've learned over my years of programming for component development.

 

I had actually been putting together notes -- based on this conversation -- for a concept I was calling "a class' optional straightjacket". Where it falls short -- at the moment -- is module interoperability. Suppose we say that "all actors will stop instantly all activity upon receiving an Emergency Stop message". And we made that an optional straightjacket for actor classes. An actor that chose not to live by the straightjacket isn't just a loose actor, it is actually not an actor... its ability to be reused in other systems is actually decreased. It is, in a sense, unreliable.

 

That led me to consider "a class that chooses not to wear the straightjacket *is not a value of the parent class*". Bear with me here, because these notes are a work in progress, but you brought it up. If a parent class has a straightjacket to let it be used in an application, and the child class chooses not to wear the straightjacket, then it can still inherit all the parent class functionality, but it cannot be used in a framework that expects classes that use the straightjacket. This makes it very different from an Interface or a Trait because the parent class *does* wear the straightjacket, but inherited children do not necessarily do so.

 

Thoughts?

 

FWIW, I agree with all of what you're saying here.  The idea of allowing for "advanced users to be flexible" is fine, but then, those same advanced users can be as flexible as they like -- with code THEY develop.

Link to comment
I had actually been putting together notes -- based on this conversation

 

Really?  And these notes consist of more than, "I wish Daklu would shut up?"  I'm shocked.  :)

 

 

for a concept I was calling "a class' optional straightjacket"

 

It sounds like your straitjacket is similar to what I've had in mind; a set of restrictions that can be applied and enforced when that level of protection is wanted, but can be removed by the class user for special situations.

 

 

That lead me to consider "a class that chooses not to wear the straightjacket *is not a value of the parent class*"...  If a parent class has a straightjacket to let it be used in an application, and the child class chooses not to wear the straightjacket, then it can still inherit all the parent class functionality, but it cannot be used in a framework that expects classes that use the straightjacket.

 

I'm getting a mixed message here and I'm not quite sure what you're trying to say, so I'll break it down a bit.

 

a class that chooses not to wear the straightjacket *is not a value of the parent class*

 

I disagree with this part.  A child class that chooses not to wear the parent's straitjacket is still a value (or type) of the parent class.  I view straitjackets (or restrictions, guarantees, contracts, requirements, etc.) as a different language construct than classes, Interfaces, or Traits.  I think it would be a mistake to build requirements into any of those features.

 

If a parent class has a straightjacket to let it be used in an application, and the child class chooses not to wear the straightjacket, then it can still inherit all the parent class functionality, but it cannot be used in a framework that expects classes that use the straightjacket.

 

Mmm, this is moving in the right direction but I need clarification.  If the parent class is straitjacketed and a child class is not, does that prevent a child object from travelling on a parent wire altogether?  Does it effectively remove the ability to dynamically dispatch to that child class?  Can the child's straitjacket be changed, donned, or removed without editing the child's .lvclass file or any methods?

 

 

Thoughts?

 

In order to get the security you want and the flexibility I want, I think there have to be two parts to establishing a contract.  The calling code or parent class publishes requirements, and the called code or child class publishes the promises it is willing to make to callers.  The intersection of the requirements and promises is the contract.  If the contract fulfills all the requirements published by the calling code, everyone is happy and there's no error.

 

A very rudimentary form of this could be implemented now using manually edited .lvcontract files.  All you'd need is an engine to walk the dependency tree, load the .lvcontract files, and compare the caller's requirement list against the callee's promises.
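
(A toy sketch of such an engine in Java -- the one-requirement-per-line .lvcontract format here is purely an assumption for illustration: read the caller's requirements and the callee's promises, and flag anything required but not promised.)

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

class ContractChecker {
    // Assumed format: one requirement/promise per line, e.g. "MustImplement:Construct".
    static Set<String> load(Path lvcontract) throws IOException {
        return new HashSet<>(Files.readAllLines(lvcontract));
    }

    public static void main(String[] args) throws IOException {
        Set<String> required = load(Paths.get("PlugIn.lvcontract"));    // caller's requirements
        Set<String> promised = load(Paths.get("MyChild.lvcontract"));   // callee's promises

        required.removeAll(promised);              // the contract is the intersection;
        if (!required.isEmpty()) {                 // anything left over is unmet
            System.out.println("Unmet requirements: " + required);
        }
    }
}
```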

 

Something else occurred to me while typing this up.  You've been advocating making requirements part of the class.  I've been advocating making requirements part of the calling code.  Is there a valid use case for needing different sets of requirements on different objects of a single class?  Neither of our positions permits that.  If it is a useful addition, how is it accomplished?

Link to comment

With regard to the straightjacket -- this finally triggered a synapse none of us have keyed onto yet: unit testing.

 

It's a manner in which the parent class can supply a bread crumb trail as to what is expected of a child class.

 

Yet the child class can choose to fail or ignore these tests.

 

Can we think of ways to incorporate unit tests as more of a first class feature of class definitions? Perhaps even making ad hoc execution of the tests as simple as right-clicking a ProjectItem that's part of the class?

  • Like 1
Link to comment
Can we think of ways to incorporate unit tests as more of a first class feature of class definitions?

 

I don't think this is a good idea.  The class definition (which I interpret to mean the .lvclass file) is supposed to define the class--data and methods to operate on the data.  Performance guarantees to calling code, restrictions placed on child classes, and unit tests for the class certainly contain information related to the class, but they are not part of what a class is.  It would be nice to be able to right-click on a class and have an option to run all unit tests associated with that class, and that does require a way to link the class to the unit test.  But please don't embed that information in the .lvclass file.

 

Previously I mentioned .lvcontract files to keep track of requirements and guarantees. I'll just generalize the idea and change the name from MyClass.lvcontract to MyClass.lvclass.manifest.  The purpose of the manifest is to store any important information about the class (or vi) that isn't defined by the class.  Theoretically someone could write a project provider plugin that reads the manifest file and invokes the unit tests it specifies.

 

I think LV is going to have to support something along the lines of manifests eventually anyway in order to address the issues people are running into.  Last year I was raising the alarm about the risk of running into unsolvable conflicts with the many .vip packages available.  Manifests could solve that problem.

 

--------

 

More to your question, black box testing cannot provide the same level of guarantees as inspection.  If my Add function is required to return a value from 0-10, how many tests do I need to conduct to guarantee the function is working correctly?  Assuming I'm adding two 32-bit numbers, about 18,447,000,000,000,000,000.  That's a lot of goofing off waiting for the test to finish.  Conversely, if I can inspect the block diagram and I see the output is coerced just prior to being passed out, I've verified correctness in less than 10 seconds.
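
(The same example in code form -- Java, with a made-up function: the coercion on the way out is the single line that inspection verifies at a glance, where exhaustive black-box testing of two 32-bit inputs would need roughly 1.8e19 cases.)

```java
class ClampedMath {
    /** Adds two values and guarantees the result is within [0, 10]. */
    static int add(int a, int b) {
        long sum = (long) a + b;                      // widen first to avoid int overflow
        return (int) Math.max(0, Math.min(10, sum));  // the coercion inspection verifies at a glance
    }
}
```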

 

It is possible to use unit testing to verify many aspects of an interface's implementation.  In my experience doing that usually leads to a lot of test code churn that isn't adding significant value to the project.  YMMV.

Link to comment
I don't think this is a good idea....unit tests for the class certainly contain information related to the class, but they are not part of what a class is.

 

Would you suggest for this reason removing Probes and Menu Palettes from the LVClass definition? These are precedents for conveniences/facilities to use the class effectively, yet do not define the operation of the class. Unit Test usage in this context could be considered an extension to documentation. (And: I'm not necessarily seriously supporting making Unit Tests a first class feature of the class definition; I was more just throwing it out there to consider and iterate on.)

More to your question, black box testing cannot provide the same level of guarantees as inspection.  If my Add function is required to return a value from 0-10, how many tests do I need to conduct to guarantee the function is working correctly?  Assuming I'm adding two 32-bit numbers, about 18,447,000,000,000,000,000.  That's a lot of goofing off waiting for the test to finish.  Conversely, if I can inspect the block diagram and I see the output is coerced just prior to being passed out, I've verified correctness in less than 10 seconds.

 

If the Unit Test is a member of the class, what's to prevent white box testing with scripting? That's a form of inspection; sometimes better than the human eye (e.g., for catching an error wire that runs under a method rather than connected to its I/O). Unit test here could be used interchangeably with VI Analyzer test.

 

 

It is possible to use unit testing to verify many aspects of an interface's implementation.  In my experience doing that usually leads to a lot of test code churn that isn't adding significant value to the project.  YMMV.

 

I won't disagree. The value of unit test coverage roughly seems linked to economy of scale with larger projects/teams -- single devs on small projects likely don't offer their customers value with 100% coverage in lieu of more features; on the other hand, massive OSS projects with neutral Unit Test and Style arbiters allow hundreds of devs to contribute. Many LabVIEW projects tend toward the smaller end of the spectrum. I've yet to work on a project, no matter how small, that wouldn't have benefited from at least a couple strategically-placed tests. YMMV.

Previously I mentioned .lvcontract files to keep track of requirements and guarantees. I'll just generalize the idea and change the name from MyClass.lvcontract to MyClass.lvclass.manifest.  The purpose of the manifest is to store any important information about the class (or vi) that isn't defined by the class.

 

What are reasons to store the Manifest as a separate file from the LVClass?

Link to comment
Would you suggest for this reason removing Probes and Menu Palettes from the LVClass definition? These are precedents for conveniences/facilities to use the class effectively, yet do not define the operation of the class.

 

I don't know. I haven't thought about that at all. Off the top of my head and putting no more than 2 minutes thought into it...

 

-If the probe requires access to private data, then I suppose there's justification for making it part of the class definition, the same as any method that accesses private data. If the probe can provide the information to the user using the class' public methods, then it doesn't need to be part of the class definition.

-Which menu palettes are part of the class definition? I end up creating my own .mnu files or use the utility built into VIPM.

-Precedence, imo, isn't sufficient reason to continue moving along an inferior path.

 

 

If the Unit Test is a member of the class, what's to prevent white box testing with scripting? That's a form of inspection; sometimes better than the human eye (e.g., for catching an error wire that runs under a method rather than connected to its I/O). Unit test here could be used interchangeably with VI Analyzer test.

 

Nothing is preventing that kind of white box testing as far as I know.  (Though I use VI Tester for unit testing and don't have the option to make test cases members of the class under test.)  There are a couple concerns I have:

 

1. You are proposing using unit tests to verify child classes written by other developers meet some set of functional requirements.  White box testing relies on some knowledge of the implementation.  You, as the parent class author, don't know how the child class author will implement the functional requirements, so writing white box test cases isn't possible.  They must be black box.

 

2. Is it possible to implement a script smart enough to inspect an arbitrary block diagram and know if a functional requirement has been met? (Hint: No, except perhaps in trivial cases.) Automated inspection isn't capable of allowing all the implementations that meet an arbitrary functional requirement and rejecting all the implementations that do not.

What are reasons to store the Manifest as a separate file from the LVClass?

 

Version control simplicity for one.  Who wants to check out the .lvclass file just to write another unit test for the class?

 

More importantly to me, it allows end users more flexibility in customizing certain parameters without changing the source code.  If you want to help your plugin authors write plugins that work well in your application, you distribute the manifest with the plugin interface class.  Users that want to use it can.  Users that have other use cases can change the manifest or otherwise ignore it.

Link to comment
  • 1 month later...

I know this went a bit off track, and admittedly I haven't read (see: understood) all discussions. But, I can see a benefit of must implement in a current app I am writing. I want all child classes of my parent "output" class to implement a "write data" method so they fit nicely into my framework. It makes sense, any output should have a method to write to that output. The problem is the connector panes need to be different. Think about a digital write vs. an analog write. I want these to be enforced in all my "output" child classes, but I can't because one needs an array of booleans, the other an array of doubles. 

 

Go ahead, throw the tomatoes. 

Link to comment
I know this went a bit off track, and admittedly I haven't read (see: understood) all discussions. But, I can see a benefit of must implement in a current app I am writing. I want all child classes of my parent "output" class to implement a "write data" method so they fit nicely into my framework. It makes sense, any output should have a method to write to that output. The problem is the connector panes need to be different. Think about a digital write vs. an analog write. I want these to be enforced in all my "output" child classes, but I can't because one needs an array of booleans, the other an array of doubles. 

 

Go ahead, throw the tomatoes. 

Variant tomatoes? The problem isn't so much with different data-types; it is more the number of terminals where you run into trouble.
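
(Outside LabVIEW, the "variant tomatoes" idea looks roughly like this Java sketch with invented names: the parent enforces one write method taking an any-type payload, and each child casts to what it expects -- trading compile-time type checking for a single enforceable signature.)

```java
abstract class Output {
    // One enforceable signature; the payload is variant-like (Object).
    public abstract void writeData(Object data);
}

class DigitalOutput extends Output {
    @Override
    public void writeData(Object data) {
        boolean[] lines = (boolean[]) data;   // each child interprets the payload its own way;
        // ... write the digital lines ...    // a wrong type only surfaces at run time
    }
}

class AnalogOutput extends Output {
    @Override
    public void writeData(Object data) {
        double[] samples = (double[]) data;
        // ... write the analog samples ...
    }
}
```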

Link to comment
