
Destination: Task



This thread branched from the latter part of this discussion. Visit that thread for the background. I'll start off by responding to Shaun's post here. (Although this started as a discussion between Shaun and myself, others are encouraged to join in.)

Of course, most traditional LV programmers create highly coupled applications that offer few opportunities for reuse and become increasingly rigid and fragile over the life of the application.

I think you've been fed the hype intravenously. That is certainly the OOP mantra.

The OOP mantra is "traditional LV programmers create highly coupled applications..."? Huh... I didn't realize LV had become so visible within the OO movement. ;)

I think you're extrapolating some meaning in my comments that I didn't intend. You said most LV users would use a typedef and be done with it, implying, as I read it, that it should be a good enough solution for the original poster. My comment above is a reflection on the primary goal of "most" LV programmers, not an assertion that one programming paradigm is universally better than another.

"Most" LV programmers are concerned with solving the problem right in front of them as quickly as possible. Making long term investments by taking the time to plan and build generalized reuse code libraries isn't a priority. Additionally, the pressures of business usually dictates quick fix hacks rather than properly incorporating new features/bug fixes into the design. Finally, "most" LV programmers don't have a software design background, but are engineers or scientists who learned LV through experimentation.

In essence, I'm refuting the implication that since "most" LV programmers would use a typedef it is therefore a proper solution to his problem. He is considering issues and may have reuse goals "most" LV programmers don't think about. With uncommon requirements, the common solution is not necessarily a solution that works.

But no evidence has ever been proffered to support this kind of statement (you have a link?).

My statement can be shown to be true via deductive reasoning:

Premise 1: *Any* software application, regardless of the programmer's skill level or programming paradigm used, will become increasingly co-dependent and brittle over time when the quick fix is chosen over the "correct" fix.

Premise 2: "Most" traditional LV programmers, due to business pressure or lack of design knowledge, implement the quick fix over the "correct" fix most of the time.

Therefore, most traditional LV programmers create applications that limit reusability and become harder to maintain over time.

:shifty:

However, I suspect you're asking about scientific studies that show OOP is superior to structured programming. I read a few that support the claim and a few that refute the claim. Personally I think it's an area of research that doesn't lend itself to controlled scientific study. Software design is a highly complex process with far too many variables to conduct a reliable scientific study. As much as critical thinkers eschew personal experience as valid scientific evidence, it's the best we have to go on right now when addressing this question.

Just to be clear, I don't think an OO approach is always better than a structured approach. If you need a highly optimized process, the additional abstraction layers of an OO approach can cause too much overhead. If you have a very simple process or are prototyping something, the extra time to implement objects may not be worth it. When it comes to reusability and flexibility, I have found the OO approach to be clearly superior to the structured approach.

One final comment on this... my goal in using objects isn't to create an object oriented application. My goal is to create reusable components that can be used to quickly develop new applications, while preserving the ability to extend the component as new requirements arise without breaking prior code. I'm not so much an OOP advocate as a component-based development advocate. It's just that I find OOP better than structured programming for meeting my goals.

I would start off with explaining why typedefs (in LabVIEW) provide single point maintenance

So does a class. Classes also provide many other advantages over typedefs that have been pointed out in other threads. No time to dig them up now, but I will later if you need me to.

and expand into why OOP breaks in-built LabVIEW features

I'll bite. Explain.

and why many of the OOP solutions are to problems of its own making

I'm curious what you have to say about this, but no programming paradigm is suitable for all situations. Hammering on OOP because it's not perfect doesn't seem like a very productive discussion. :D


Quick response... I'll try to fill in more later.

I disagree vehemently with what you are saying here (maybe because you've switched the subject to "Most LV users/programmers"?). I'm sure it is not your intention (it's certainly out of character), but it comes across as saying most LV programmers, and LabVIEW programmers ALONE, are somehow inferior and lack the ability to design, plan and execute programming tasks... It comes across that you view LabVIEW as an environment that "real" programmers wouldn't use. It's verging on "elitist".

Hmm... how to explain this without coming across as a pompous ass?

I'm not sure why you disagree. Most LV users are not professional developers who get paid to write LV apps. (NI has said as much--that's why they make decisions that focus on ease of use and approachability.) They are engineers, scientists, or technicians who use LV to accomplish a task that's related to what they *are* paid to do. The software is a side effect of their work, not the goal of their work.

Contrast that with users of other major languages, such as C/C++, Java, or C#. Producing software is typically what they are hired to do. They are much more likely to be working in a dedicated software development environment and can reap the benefits of that. My statement reflects that, on average, people who use LV in their work environment have less formal training and software development experience than, say, people who use C++ in their work environment.

"Elitism" carries the connotation of unjustified authority and the arrogance of personal superiority over those who are not elite. In that sense I hope I do not come across as elitist. Am I elitist for recognizing that certain people have learned a set of skills and have the experience that makes them better suited for specific tasks? Personally I don't think so, but others might. (Though I wonder how they would justify hiring a contractor for their home remodel as opposed to, say, the paperboy.)

Traditional LV programmers would, for example, use an "action engine" (will he bite? I know it's one of your favourites :D) in preference to a class to achieve similar perceived behaviour.

Again, not trying to be a pompous ass...

If one perceives classes and action engines to be equivalent, it's pretty apparent to me they don't understand OOP. (By logical symmetry it could be that I don't understand AEs... I'll leave that debate for another time.) The differences between them have significant long term implications.

I really don't have an issue when an informed business decision is made to use an AE. Sometimes, based on programmer skill, existing code base, and project requirements, an AE is the correct business decision. As a software design construct it is clearly inferior to classes. What I frequently see is AEs implemented without considering alternatives or even understanding the tradeoffs between the various options.
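To make the difference concrete in text form (Python standing in for G here, since I can't paste a diagram into a post, and all the names are invented): an AE is roughly one entry point with shared internal state and an action enum, while a class gives each action its own method and each instance its own independent state.

```python
from enum import Enum, auto

class Action(Enum):
    INIT = auto()
    ADD = auto()
    READ = auto()

def action_engine(action, value=0, _state={"total": 0}):
    # Rough AE analogue: one entry point, one shared piece of state
    # (the mutable default argument), behaviour selected by an enum.
    if action is Action.INIT:
        _state["total"] = 0
    elif action is Action.ADD:
        _state["total"] += value
    return _state["total"]          # READ just returns the total

class Accumulator:
    # Class analogue: each action is its own method, and every
    # instance carries its own state, so you can have many of them.
    def __init__(self):
        self._total = 0
    def add(self, value):
        self._total += value
    def read(self):
        return self._total
```

The AE gives you exactly one instance, whether you want one or not; with the class, the number of instances and who can see them become design decisions.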

Out of time. The rest will have to wait until later...


Since I was in on the original thread, I thought I'd weigh in here as well. First, let me say I'm following this thread because I know from all of their contributions to LAVA both ShaunR and Daklu will have something intelligent and interesting to say. Second, I feel like I'm positioned somewhere between you two on the LVOOP question.

I (and my team, since I make them) do all of our development in native LV OOP. I seldom use any class as by-ref since it does break the dataflow paradigm, although as we all know there are times when breaking dataflow is necessary or at least desirable. But I may not be an OOP purist. I use typedefs - I even find occasion to use them in private class data.

My typical use is something like this - create a typedef and add it to the class as a public control - place the typedef in the private class data cluster - create a get/set for the typedef in the class. This is typical of a class that I may write to enable a specific DAQmx functionality. The user may need to select a DAQ channel, sample rate, and assign a name but nothing else. So I create a typedef cluster that just exposes this. Now, the developer can drop the public typedef on the UI, wire the typedef to the set method (or an init method if you really want to minimize the number of VIs), and have a newly defined instance on the wire. Then wire that VI to a method that either does the acquisition or launches the acquisition in a new thread.

What I like is that the instance is completely defined when I start the acquisition - I know this because I use dataflow and not by-ref objects - and I know exactly which instance I'm using (the one on the wire). So this leverages data encapsulation and dataflow, both of which make my code more robust, and it only adds one or two VIs (the get/set and maybe the init) to the mix. So I don't think by-val LVOOP compromises dataflow, and (to me at least) it doesn't add excessive overhead.
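In rough text form (Python standing in for G, with made-up names - the real thing is of course wires and a class cube), the pattern looks something like this:

```python
from dataclasses import dataclass

@dataclass
class DaqSettings:
    # Analogue of the public typedef: only the fields the user may set.
    channel: str = "Dev1/ai0"
    sample_rate_hz: float = 1000.0
    name: str = "unnamed"

class Acquisition:
    # Analogue of the class: the settings live in private data and are
    # reachable only through the get/set.
    def __init__(self, settings: DaqSettings):   # the "init/set"
        self._settings = settings
    def get_settings(self) -> DaqSettings:       # the "get"
        return self._settings
    def acquire(self):
        # By the time this runs, the instance is completely defined.
        print(f"acquiring '{self._settings.name}' on {self._settings.channel} "
              f"at {self._settings.sample_rate_hz} Hz")
```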

But, I clearly have not designed the above class as a reuse library since my get/set and init depend on a typedef. If I try to override any of these methods in a child, I'll find it difficult since the typedef can't be replaced so I'm stuck with whatever the parent uses. But that's OK - not everything can (or should) be designed for general reuse. At some point, one has to specialize the code to solve the problem at hand. A completely general re-use library is called a programming language.

But there are real candidates for general classes that should support inheritance and LVOOP gives us the ability to leverage that tool when needed. A recent example was a specialized signal generator class (decaying sines, triangles, etc). Even I could see that if I built a parent signal generator class and specialized through inheritance this would be a good design (even if it took more time to code initially). And it proved to be a good decision as soon as my customer came back and said "I forgot that I need to manipulate a square wave as well" - boom - new SquareWave class in just a few minutes that integrated seamlessly into my app.
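Sketched in Python (again just an analogy for the G - the class names are from my project, the code itself is invented), the design was essentially:

```python
import math

class SignalGenerator:
    # Parent: owns the common parameters and the sampling loop.
    def __init__(self, amplitude=1.0, frequency_hz=1.0):
        self.amplitude = amplitude
        self.frequency_hz = frequency_hz
    def sample(self, t):
        raise NotImplementedError   # each child specializes this
    def generate(self, n, dt):
        return [self.sample(i * dt) for i in range(n)]

class DecayingSine(SignalGenerator):
    def __init__(self, tau=1.0, **kwargs):
        super().__init__(**kwargs)
        self.tau = tau
    def sample(self, t):
        return (self.amplitude * math.exp(-t / self.tau)
                * math.sin(2 * math.pi * self.frequency_hz * t))

class SquareWave(SignalGenerator):
    # The "few minutes" class: only sample() needed writing.
    def sample(self, t):
        phase = math.sin(2 * math.pi * self.frequency_hz * t)
        return self.amplitude if phase >= 0 else -self.amplitude
```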

I guess my point is that dataflow OOP is a whole new animal (and a powerful one) and one should try to figure out the best ways to use it. I don't claim to know what all of those are yet, but I am finding ways to make my code more robust (not necessarily more efficient, but I seldom find that the most important factor in what I do) and easier to maintain and modify. I do feel that just trying to shoehorn by-val OOP into design patterns created for by-ref languages isn't productive. It reminds me of the LV code I get from C programmers where the diagram is a stacked sequence with all of the controls and indicators in the first frame and then twenty frames where they access them using locals. They've used a dataflow language as an imperative language - not a good use of dataflow!

Mark


Quick response... I'll try to fill in more later.

...Contrast that with users of other major languages, such as C/C++, Java, or C#. Producing software is typically what they are hired to do. They are much more likely to be working in a dedicated software development environment and can reap the benefits of that. My statement reflects that, on average, people who use LV in their work environment have less formal training and software development experience than, say, people who use c++ in their work environment...

OK, you clearly don't work where I work :) We've got no end of people around here that use all of those languages as well as LabVIEW and consider themselves programmers - this is especially true of the researchers (PhDs in many scientific disciplines). But many have no idea how to architect code (notice I avoid saying most, since I can't provide hard data :) ) and no matter the language, they write spaghetti code. And there are more than a few around here whose job is to architect and develop code in LabVIEW - and we're trained in many disciplines but we all have comp sci education as well. But in the end, what matters to our customers is "do our test and measurement systems work" and that's why we have to recruit people for our team with varied backgrounds (heck, my undergrad was in ME) because it's not enough to understand code development, you have to understand the problem you're trying to solve.

Mark


Like most LabVIEWers, I started out in the world using Traditional LabVIEW techniques and design patterns, e.g. as taught in NI courses etc... Of course, I implemented these rather poorly, and had a limited understanding at the time (hey - I was learning after all!). After a while I discovered LVOOP, and above all, encapsulation saved my apps (I cannot overstate this enough). I then threw myself into the challenge of using LVOOP exclusively, without fail, on every project - for every implementation. This was great in terms of a short learning curve, but what I discovered was that I was creating very complex interactions for every program.

(Whilst I quickly admit I am not full bottle on OOP design patterns) I found these implementations were very time consuming. I also saw colleagues put together projects much faster than I could using Traditional techniques, and they were achieving similar results (although IMHO with LVOOP it is much easier to make simple changes and test), but I wanted to weigh up the time involved and answer the question... could I do it better?

Pre-8.2 (aside from RT, where we could only start using classes in 2009), people (some very smart ones at that - who have been around for ages in the LabVIEW community) were solving problems without LVOOP, successfully. This led me to recently undergo a reassessment of my approach. My aim was to look at the Traditional techniques, now having a better understanding of them (and LabVIEW in general), and reintegrate them with what I was doing in LVOOP etc... - and I am having success (and more importantly fun!).

Damn, I have even started to find I like globals :P.

Anyways, at the end of the day I find using what works, and not trying to make something fit, is the best and most flexible approach. With the aim of becoming a better programmer, I hope I continue this iterative approach to my learning (and of course this means I want to keep learning about LVOOP and OOP as part of it too).

JG says enjoy the best of both worlds!


I think the discussion has been interesting. My opinion is that encapsulation is what matters most in increasing maintainability and reuse and reducing bugs. Keeping as many routines private as you can, and minimizing the interface (the number and complexity of the public routines) is the goal.
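In text terms (a Python sketch, since I can't paste a diagram - the class is invented), the goal looks something like this: two public routines, everything else private and free to change:

```python
class Logger:
    # Public surface: log() and flush(). The underscore-prefixed
    # members are private by Python convention, so callers can't
    # grow dependencies on the internals.
    def __init__(self, path):
        self._path = path
        self._buffer = []

    def log(self, message):
        self._buffer.append(self._format(message))
        if len(self._buffer) >= 10:
            self.flush()

    def flush(self):
        with open(self._path, "a") as f:
            f.writelines(self._buffer)
        self._buffer.clear()

    def _format(self, message):
        # Private helper: rework it any time without breaking callers.
        return f"{message}\n"
```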

The LV library (lvlib), and its cousin the lvclass, have been a big help to the language in this regard, despite other annoyances. I think Shaun has some valid criticisms about the need to maintain more state when using an lvclass, so I only use them when I need dynamic dispatching. Another problem with classes (objects) is the difficulty of adding a new dynamic dispatch method to a bunch of existing classes.

I find I get more work done if I can rapidly prototype and iterate to find a good design, but lvclasses encourage more upfront planning and careful architecture because reworking is pretty painful. This need to plan ahead encourages the waterfall development model, which everyone loves to hate.

Jason


I'm not motivated enough to carefully read the whole discussion, so maybe this point has already been made.

What I like the most in LVOOP is custom colors for wires. The effect is worth the effort for implementing all these access methods...

If you're being serious this is funny. Paragraphs of LVOOP dialog and all you have to say is: "I like the pretty colors". :lol:

And if you're being funny, well, this is funny. :lol:


OK, you clearly don't work where I work :) ...it's not enough to understand code development, you have to understand the problem you're trying to solve.

Mark

This is what I consider "most" LabVIEW programmers (regardless of traditional or not) to be, and the analogy I've used before is between "pure" mathematicians and "applied" mathematicians. Pure mathematicians are more interested in the elegance of the maths and its conceptual aspects, as opposed to "applied" mathematicians, who are more interested in how it relates to real-world application. Is one a better mathematician than the other? I think not. It's purely an emphasis. Both need to have an intrinsic understanding of the maths. I think most LabVIEW programmers, by the very nature of the programs they write and the suitability of the language to those programs, are "applied" programmers, but that doesn't mean they don't have an intrinsic understanding of programming or indeed how to architect it.

Like most LabVIEWers, I started out in the world using Traditional LabVIEW techniques and design patterns... JG says enjoy the best of both worlds!

Nice, pragmatic and modest post.

I think many people are coming to this sort of conclusion in the wake of the original hype. As indeed happened to OOP in C++ more than 10 years ago. It's actually very rare to see a pure OOP application in any language. Most people (from my experience) go for encapsulation then use structured techniques.

I think the discussion has been interesting... I think Shaun has some valid criticisms about the need to maintain more state when using an lvclass, so I only use them when I need dynamic dispatching... This need to plan ahead encourages the waterfall development model, which everyone loves to hate.

Jason

You've actually hit on the one thing "traditional" LabVIEW cannot do (or simulate): run-time polymorphism (enabled by dynamic dispatch). However, there are very few cases where it is required (or desirable) unless, of course, you are using LVOOP. Then it's a must-have to circumvent LabVIEW's requirement for design-time strict typing (another example of breaking LabVIEW's in-built features to enable LVOOP). Well, that's how it seems to me at least. There may be some other reason, but in other languages you don't have "dynamic dispatch" type function arguments.
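For clarity, this is the sort of thing I mean by run-time polymorphism (a Python sketch, since I can't draw a diagram here - the instrument classes are made up): one call site, and which method actually runs is decided by the object's real type at run time rather than by the wire's design-time type.

```python
class Instrument:
    def measure(self):
        raise NotImplementedError   # the "dynamic dispatch" method

class Voltmeter(Instrument):
    def measure(self):
        return "12.0 V"

class Thermometer(Instrument):
    def measure(self):
        return "21.5 C"

def read_all(instruments):
    # One call site. Which measure() runs is chosen per object at
    # run time - something a strictly typed wire can't do on its own.
    return [inst.measure() for inst in instruments]

print(read_all([Voltmeter(), Thermometer()]))   # ['12.0 V', '21.5 C']
```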

But aside from that, I never use waterfall (for software at least). I find an iterative approach (or "agile" as it is called nowadays) much more useful and manageable. Sure, the whole project (including hardware, mechanics, etc) will be waterfall (it's easier for management to see progress and you have to wait for materials), but within that, at the macro level, the software will be iterative with release milestones in the waterfall. As a result, the software is much quicker to react to changes than the other disciplines, which means that the software is never a critical path until the end-of-project test phase (systems testing - you can't test against all the hardware until they've actually built it). At that point, however, the iterative cycle just gets faster with smaller changes, since by that time you are (should be ;)) reacting to purely hardware integration problems, so it's fairly straightforward to predict.


If you're being serious this is funny. Paragraphs of LVOOP dialog and all you have to say is: "I like the pretty colors". :lol:

And if you're being funny, well, this is funny. :lol:

I'm absolutely serious. Colored wires (not overused of course, not too fancy and well suited to icon colors) help me keep the block diagram clean and understandable. I seldom use LVOOP for whole application architecture, because 90% of my programming is prototyping solutions and I agree with what Jason said: changing an architecture based on LVOOP is a pain. I use classes mainly for well defined data types, often strongly related to real-world objects, not expected to change too much and not too dependent on other data types. So I mainly take advantage of encapsulation, and wire colors play a main role here - I have up to several classes in the whole code and I just remember those colors. If I sometimes don't remember, I know that they are related to the headers of the class method icons. When I have a slightly more complex inheritance hierarchy, I vary the colors of the children's wires a little bit.

I find that, in general, aesthetics is one of the most important factors that make programming in LabVIEW efficient and just a pleasure (at least for me).


Wow... lots of comments and limited time. (It's my wife's bday today; can't ignore her and surf Lava too much.)

It's not a "proper" solution? How so? How does the mere fact of using a "type-def preclude it being one or indeed re-usable?.

It doesn't make it not reusable. Rather, it limits your ability to reuse it. A good reusable component doesn't just wrap up a bit of functionality for developers. It gives them extension points to add their own customizations without requiring them to edit the component's source code. When a component exposes a typedef as part of its public API it closes off a potentially beneficial extension point.
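Here's roughly what I mean by an extension point, sketched in Python since I can't post a diagram (the Message/Sender names are invented): the component is written once against an abstract parent, and users plug in types it has never heard of. A typedef'd cluster in the conpane would fix the data's shape instead.

```python
class Message:
    # The extension point: users subclass this rather than editing
    # the component's source code.
    def payload(self):
        return {}

class TextMessage(Message):
    def __init__(self, text):
        self._text = text
    def payload(self):
        return {"text": self._text}

class Sender:
    # The component: written once, never edited, yet it works with
    # any Message subclass a user invents later.
    def send(self, message: Message):
        print("sending", message.payload())

Sender().send(TextMessage("hello"))   # a user's type flows straight through
```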

My bigger issue with typedefs is that they make it harder to refactor code during development, which I do a lot. I know typedefs are the golden boy of traditional LabVIEW programmers, due (I think) to their ability to propagate changes through the project. Here's the rub... changes propagate only if the vis that use the typedef are loaded into memory. How do you know if everything that depends on that typedef is loaded in memory? Unless you are restricting where the typedef can be used (by making it a private member of a library, for instance), you don't.

"But," you say, "the next time a vi that depends on the typedef is loaded it will link to the typedef and all will be well." Maybe it will, maybe it won't. Have you added new data to the typedef? Renamed or reordered elements to improve clarity? Sometimes your edits will cause the bundle/unbundle nodes to access the wrong element type, which results in a broken wire. So far so good. However, sometimes the bundle/unbundle node will access the wrong element of the same type as the original, in which case there's nothing to indicate to you, the developer, that this has happened. (Yes, this does happen even with bundle/unbundle by name.) You have to verify that it didn't happen by testing or by inspection.

Classes are superior to typedefs for this reason alone. If I rename an accessor method (the equivalent to renaming a typedef element) I don't have to worry that somewhere in my code LabVIEW might substitute a different accessor method with the same type output. If it can't find exactly what it's looking for I get a missing VI error. It might take me 2 minutes to create a class that essentially wraps a typedeffed cluster, but I save loads of time not having to verify all the bundle/unbundle nodes are still correct.
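The two failure modes look something like this in text form (Python standing in for G; the names are invented). Rename an accessor and every caller fails loudly and immediately; reorder a positionally accessed record and nothing complains, the code just quietly reads the wrong field:

```python
class Person:
    def __init__(self, first, last):
        self._first, self._last = first, last
    def first_name(self):       # suppose this later gets renamed
        return self._first

p = Person("Ada", "Lovelace")
try:
    p.given_name()              # a caller still using the old name
except AttributeError as e:
    print("loud, immediate failure:", e)

# Contrast: positional access into a record whose fields were swapped
# during an edit. Nothing breaks - it just returns the wrong value.
record = ("Lovelace", "Ada")    # was ("Ada", "Lovelace") before the edit
first = record[0]               # runs fine, silently wrong
```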

Another thing I've been doing lately is using classes to create immutable objects. I give the class a Create method with input terminals for all the data and appropriate Get accessors. There are no Set accessors. Once the object is created I can pass it around freely without worrying that some other process might change a value. This saves me time because I never even have to wonder what happens to the data, much less trace through the code to see who does what to it.
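Sketched in Python (an analogy only - the Measurement class is invented), an immutable object is simply all-inputs-at-create with getters only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Measurement:
    # Everything is supplied at creation; there are no setters.
    # frozen=True makes any later assignment raise an error.
    channel: str
    value: float

m = Measurement("ai0", 3.14)    # values fixed at run time, at creation
print(m.channel, m.value)       # read freely, from anywhere
# m.value = 2.71                # would raise FrozenInstanceError
```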

In short, using classes instead of typedefs gives me, as a developer, far, far more confidence that changes I'm making aren't having negative effects elsewhere in my code. That translates directly into less time analyzing code before making a change and testing code after the change.

Using a typedef with queues is a well established technique.

Doesn't mean there aren't better alternatives. Sending messages by telegraph was a well-established technique for many years. Can I expect your response to come via Western Union? ;)

For deductive reasoning...

That was an off the cuff comment intended mostly as a joke. However,

-My premise 2 can be proven to be true, though I have not done so.

-There's no reason deductive reasoning doesn't work for generalizations as long as the conclusion doesn't claim certainty.

-"Most OOP programs are more complex..." Software complexity is a completely subjective term. How do you measure the relative complexity of different programming paradigms? Lines of code written? Number of execution paths? Time it takes to understand enough code to make a necessary change?

That's a no then :D

The fact is there is no evidence that code re-use is any more facilitated by OOP than any other programming.

Yep, it's a no. You're asking for scientifically verifiable and repeatable evidence for a process that defies scientific analysis. Doesn't mean it's a false claim.

My experience is that re-usable code is only obtainable in meaningful measures in a single project or variants on that project. I have a very powerful "toolkit" which transcends projects and for each company I obtain a high level of re-use due to similarity between products. But from company to company or client to client there is little that can be salvaged without serious modifications (apart from the toolkit).

If I'm understanding this chain of reasoning... You've found limited success reusing code across projects using traditional LV techniques, therefore, the benefit of improved reuse using OOP techniques isn't "real" benefit. That seems like flawed logic.

You mean not everything "is a" or "has a"?

Nope. As a matter of fact my personal experience is that using that as a way to decompose a problem into a class implementation doesn't work very well. It creates hierarchies that tend to be inflexible, which defeats the purpose of using OOP in the first place.

Pure OOP is very hard (I think) and gets quite incestuous, requiring detailed knowledge of complex interactions across the entire application.

Odd, because I view this as a major weakness of traditional LV techniques. With traditional programs I often need to trace code down into the lowest level to understand what is happening. Even if the lower level stuff is of the cut-and-paste or boilerplate reuse variety (often favored by traditional LV programmers) it could and probably does contain custom modifications, so I have to dig into it every time I encounter it in a different project, and if I am working on multiple projects it is easy to get the differences mixed up.

I believe overly complex interactions are a result of inadequate componentization as opposed to the decision to use an OOP or structured programming approach. Classes make it easier to create decoupled components, which in turn makes it easier to create an application from components compared to structured programming. (I'm not even talking about reuse components here, just the major functional components of the application being developed.)

The huge numbers of VIs. The extra tools required to realise it... The bugs in the OOP core... The requirement for initialisation before using it... all big advantages...

-OOP does use a lot more vis than traditional techniques. That used to bother me, but I view it as a benefit now. That's one of the things that keeps them flexible. Building too much functionality into a single vi increases the chance you'll have to edit it. Every time you edit a vi you run the risk of adding new bugs. Furthermore, on multi-developer projects only one person can edit a vi at any one time without having to merge the changes. (Which LV doesn't support very well.)

-What extra tools are required to realize OOP? I have Jon's class theme changer, but it's certainly not a requirement.

-Bugs in OOP core... I do get frustrated with the occasional crash. Some of them are directly related to using classes. I don't think all of them are though and the crashes I can positively attribute to using classes amounts to less than half of all crashes I have.

-Whether a class requires initialization depends entirely on what you put in the class. There's nothing inherent to classes that makes initialization a requirement.

One of those is that, because it is a data-flow paradigm, state information is implicit. <snip>

I'll have to think about this for a bit. Off the top of my head I'm not sure there's a real difference. At a fundamental level a class is just a cluster that uses accessor methods instead of bundle/unbundle. I don't see how that makes a difference here.

In this respect an object instance is synonymous to a dynamically launched VI where an "instance" of the vi is launched.

I don't think this analogy holds. Classes only need to be "launched" if they contain run-time references. A closer approximation of an object instance is dropping a typedef constant on the bd. If the typedef has a queue constant in it then you have to worry about the same things.

In a nut-shell, I'm suggesting that in LVOOP, the implicit data-flow nature has been broken (it had to be to implement it) and requires re-implementation by the programmer.

See, I don't follow this argument. There's nothing inherent about classes that violates data flow. Some do, some don't. It's all about what the class designer intends the class to do.

The reason I use classes instead of AEs if I need a singleton has more to do with not wanting to put too much functionality in a single place. I do try to avoid using singletons though, simply because they make it harder to control the sequencing.
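For reference, the singleton idea in text form (a minimal Python sketch, names invented - not necessarily how I'd structure it in G): one shared instance, but with the actions still split into ordinary methods rather than piled into a single VI:

```python
class Config:
    # Minimal singleton: instance() always hands back the same object,
    # so every caller shares one state - much like an AE's shift register.
    _instance = None

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self.settings = {}

a = Config.instance()
b = Config.instance()
assert a is b                   # one shared instance
a.settings["rate"] = 1000
print(b.settings)               # {'rate': 1000} - same state everywhere
```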

(heck, my undergrad was in ME)

Me too! Brother! :D

My opinion is that encapsulation is what matters most in increasing maintainability and reuse and reducing bugs. Keeping as many routines private as you can, and minimizing the interface (the number and complexity of the public routines) is the goal.

QFT. :star:

I'm pretty sure you can get about the same level of encapsulation using libraries and traditional techniques. Using classes instead of typedefs and naked vis for the component API makes the component much, much more flexible.

I find I get more work done if I can rapidly prototype and iterate to find a good design, but lvclasses encourage more upfront planning and careful architecture because reworking is pretty painful. This need to plan ahead encourages the waterfall development model, which everyone loves to hate.

Not necessarily. Like Shaun, I also use an agile approach. My dev work is broken down into 2 week sprints, with the goal of delivering a functional and complete component at the end of the sprint. The components are designed without a UI, so I'll create a very simple UI that wraps the component and exposes all of its functionality. I build a small executable and give that to the customer so they can determine if that part of the app does what they need it to do.

There's no big design up front, I'm just focusing on how to get that component working correctly. Because the rest of the app hasn't been designed, I don't necessarily know what the data sent into the component is going to look like. The flexibility provided by classes is a big help here. In fact, Friday afternoon I finished a PacketFilter component for extracting only the data I'm interested in from a fairly complex communication protocol from one of our devices. I should do a write up of how a component evolves over time.

the analogy I've used before is between "pure" mathematicians and "applied" mathematicians.

The charge of being too interested in the academic aspects of OOP at the expense of practical functionality has been levelled at me before, but I don't think this analogy applies to me. My decision to use OOP derives entirely from practical considerations. I can deliver a more robust application and reduce the time I spend on that app over its lifetime. Admittedly, I am interested in what the "academically optimal" design looks like. Understanding that helps me weigh the tradeoffs of the many implementation shortcuts and know when it's okay to use them.


Lots of excellent points here. I'll break them up into different posts since it's getting rather tedious reading my OWN posts in 1 go biggrin.gif

Wow... lots of comments and limited time. (It's my wife's bday today; can't ignore her and surf Lava too much.)

21 again? :D Happy b'day Mrs Daklu :beer:

It doesn't make it not reusable. Rather, it limits your ability to reuse it. A good reusable component doesn't just wrap up a bit of functionality for developers. It gives them extension points to add their own customizations without requiring them to edit the component's source code. When a component exposes a typedef as part of its public API it closes off a potentially beneficial extension point.

I don't think this is so. To extend a component that uses a typedef it is just a matter of selecting "create sub-vi" and then "create constant" or "create control/indicator". Then the new vi inherits ;) all the original component's functionality and you are free to add more if you desire (or hide it).

My bigger issue with typedefs is that they make it harder to refactor code during development, which I do a lot. I know typedefs are the golden boy of traditional LabVIEW programmers, due (I think) to their ability to propagate changes through the project. Here's the rub... changes propagate only if the vis that use the typedef are loaded into memory. How do you know if everything that depends on that typedef is loaded in memory? Unless you are restricting where the typedef can be used (by making it a private member of a library, for instance), you don't.

Well. You do. In classical LabVIEW, loading the top level application loads ALL vis into memory. (Unless it is dynamically loaded).

"But," you say, "the next time a vi that depends on the typedef is loaded it will link to the typedef and all will be well." Maybe it will, maybe it won't. Have you added new data to the typedef? Renamed or reordered elements to improve clarity? Sometimes your edits will cause the bundle/unbundle nodes to access the wrong element type, which results in a broken wire. So far so good. However, sometimes the bundle/unbundle node will access the wrong element of the same type as the original, in which case there's nothing to indicate to you, the developer, that this has happened. (Yes, this does happen even with bundle/unbundle by name.) You have to verify that it didn't happen by testing or by inspection.

A number of points here:

1. Adding data (a control?) to a typedef cluster won't break anything (only clusters use bundle and unbundle). All previous functionality is preserved, but the new data will not be used until you write some code to do it. The proviso here (as you say) is to use "bundle/unbundle by name" (see point #3) and not straight bundling or array-to-cluster functions (which have a fixed number of outputs). The classic use, however, is a typedef'd enumerated control, which can be used by various case structures to switch operations and is impervious to re-ordering or renaming of the enum contents.

2. Renaming may or may not break (as you state). If it's a renamed enumeration, string, boolean, etc. (or base type as I call them), then nothing changes. If it's an element in a cluster, then it will.

3. I've never seen a case (nor can I see how) where an "unbundle/bundle by name" has ever chosen the wrong element in a typedef'd cluster, or indeed a normal cluster (I presume you are talking about clusters, because any control can be typedef'd). A straight unbundle/bundle I can understand (they are index based), but that's nothing to do with typedefs (I never use them, a) because of this and b) because by-name improves readability). An example perhaps?

Classes are superior to typedefs for this reason alone. If I rename an accessor method (the equivalent to renaming a typedef element) I don't have to worry that somewhere in my code LabVIEW might substitute a different accessor method with the same type output. If it can't find exactly what it's looking for I get a missing VI error. It might take me 2 minutes to create a class that essentially wraps a typedeffed cluster, but I save loads of time not having to verify all the bundle/unbundle nodes are still correct.

I think a class is a bit more than just a "super" typedef. In fact, I don't see them as the same at all. A typedef is just a control that has this special ability to propagate its changes application wide. A class is a "template" (it doesn't exist until it is instantiated) for a module (a nugget of code if you like). If you do see classes and typedefs as synonymous, then that's actually a lot of work for very little gain. Each new addition to a cluster (class data member?) would require 2 new VIs (methods). 10 elements, 20 VIs :blink: Contrast this with adding a new element to a type-def'd cluster. No new VIs. 10 elements, 1 control (remember my single-point maintenance comment?)
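In text form, the bookkeeping I'm objecting to looks like this (a Python sketch, names invented - and remember, in G each of those accessors is a whole VI with its own icon, conpane and file):

```python
class Settings:
    # Two accessors ("VIs") per private element...
    def __init__(self):
        self._sample_rate = 1000.0
        self._channel = "ai0"

    def get_sample_rate(self):
        return self._sample_rate

    def set_sample_rate(self, value):
        self._sample_rate = value

    def get_channel(self):
        return self._channel

    def set_channel(self, value):
        self._channel = value
    # ...so ten elements means twenty accessors, against zero new
    # VIs for adding one more control to a typedef'd cluster.
```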


Another thing I've been doing lately is using classes to create immutable objects. I give the class a Create method with input terminals for all the data and appropriate Get accessors. There are no Set accessors. Once the object is created I can pass it around freely without worrying that some other process might change a value. This saves me time because I never even have to wonder what happens to the data, much less trace through the code to see who does what to it.

In short, using classes instead of typedefs gives me, as a developer, far, far more confidence that changes I'm making aren't having negative effects elsewhere in my code. That translates directly into less time analyzing code before making a change and testing code after the change.

Immutable objects? You mean a "constant" right? ;)

Doesn't mean there aren't better alternatives. Sending messages by telegraph was a well-established technique for many years. Can I expect your response to come via Western Union? ;)

It'd probably be better than my internet connection recently. :D The other night I had 20 disconnects :blink:

More to follow....


21 again?

You suck up. :D

To extend a component that uses a typedef it is just a matter of selecting "create sub-vi" and then "create constant" or "Create control/indicator".

You're limited to extending what operations are done on the data. Using classes I can extend what operations are done on the data AND extend it to different kinds of data.

Well. You do. In classical LabVIEW, loading the top level application loads ALL vis into memory. (Unless it is dynamically loaded).

Yep... dynamically loaded vis present a problem. You also have a problem if you have a sub vi that isn't part of the dependency chain--perhaps you temporarily removed it or are creating separate conceptual top-level code that uses the same low level functions. There are lots of ways to shoot yourself in the foot when your workflow depends on vis being loaded into memory. (I have two feet full of holes to prove it.)

Actually your comment reveals a fundamental difference between our approaches. "Loading the top level application" implies a top-down approach. After all, you can't load the top level vi during development if you don't have one. I tried that for a while but, for many reasons, abandoned it. I have had much more success building functional modules with fairly generic interfaces from the bottom up and assembling the top level application from the components.

The classic use, however, is a typedef'd enumerated control

My comments are directed at typedeffed clusters. I'm still on the fence with typedeffed enums in a public interface. I can see holes where changes might cause things to break, but I haven't explored them enough yet.

3. I've never seen a case (nor can I see how) where an "unbundle/bundle by name" has ever chosen the wrong element in a typedef'd cluster... An example perhaps?

Sure thing. Grab the attached project (LV2010) and follow the steps in Instructions.txt. The example should only take a couple minutes to work through. Once you do that come back and continue reading. :D

...

There are a couple interesting things to note about this example.

1) After editing the typedef, there were no compiler errors to indicate to the developer that something went wrong, yet three of the six paths returned the wrong data and five of the six paths now have incorrect source code, even though none of the sub vis were edited. I don't know about you, but I'm not too keen on code that automatically changes when I'm not looking.

2) Although Test.vi illustrates the outcome of all six code paths (Case 1 through Case 6) simultaneously, in a real app you usually won't have that luxury. You'll be looking at one specific code path in isolation because that's where the bug popped up. After following the steps outlined you can't make a change to a sub vi that fixes one code path without creating an error in another code path. (Ignoring the two paths using the constant, which are kind of degenerate cases I didn't know about until today.) For example, let's assume you made the edit described in Instructions.txt and during testing happen to encounter outputs from the code paths in order, 1 through 6. However, there could be some hours of dev work between one path and the next.

Probing the outputs from the different code paths could easily lead to the following:

Case 1: Output is correct. Nothing to investigate.

Case 2: That's weird. I wonder how that constant got mixed up? No worries. I'll just swap the data and the output will be correct.

Case 3: Output is correct. Nothing to investigate.

Case 4: Output is correct. Nothing to investigate.

Case 5: Hmm... output data is wrong. Oh look, the wrong element is unbundled. I can't believe I made that mistake. I'll just fix and... sure enough, the output is correct.

Let me stop here and point out something. Fixing code path 5 actually breaks code paths 3 and 4, which were already checked and verified to return the correct result. Surely there's no reason to test them again, right? Eventually, if you keep iterating through the list of code paths and testing them you'll end up with correct code. (I'm not sure that's mathematically provable, but I think it is true.) In this example there are only 6 bundle or unbundle nodes in a total of four vis, and it's pretty easy to see what the sub vis do with the data. How many bundle/unbundle nodes will there be, and how many sub vis will have bundle/unbundle nodes, in a medium sized application? Are you going to go check them every time you edit the typedef?

Furthermore, how do you *know* a typedef edit hasn't accidentally changed your code somewhere else?

Opt 1 - Make sure all dependent vis are loaded into memory before editing the cluster.

Ans 1 - This is impractical for the reasons I explained above... you can't *guarantee* you have all the dependent vis loaded unless you have known, closed set of vis that can access the typedef.

Opt 2 - Limit the kinds of edits allowed. It's okay to add new elements but don't ever reorder them. Renaming is probably okay, as long as you never use a name that has been used previously in that cluster.

Ans 2 - Personally I use naming and grouping as a way to help communicate my intentions to other developers and my future self. Removing the ability to do that sacrifices code clarity for me, so I reject that option just based on that alone. However, there are other considerations that also make this option undesirable. How easy is it to recover when somebody mistakenly violates the rule? If discovered right away it's not too bad. You can just revert to a previous check in and call it good. If it's been a couple weeks you're in a whole lot of hurt. There's no way to know exactly which vis were affected by the change and which were not. (Remember, testing doesn't necessarily uncover the problem.) You either revert two weeks of work or manually inspect every bundle/unbundle node for correctness. Neither option is appealing when schedules are tight.

Opt 3 - Use classes as protected typedeffed clusters.

Ans 3 - A few extra minutes up front pays off in spades down the road.

Opt ? - Any other ideas I've missed?

Typedeffed clusters are safe when used under very specific circumstances. When used outside of those circumstances there are risks that changes will break existing code. In those situations my confidence that I'm making the "correct" change goes way down, and the time required to make the change goes way up. Using a class in place of a typedeffed cluster gives me a much larger set of circumstances in which I can make changes, and a larger set of changes I can make and still be 100% confident no existing code, loaded into memory or not, that depends on it will break.

"Single point maintenance" of typedeffed clusters is an illusion. :P

I think a class is a bit more than just a "super" typedef.

Absolutely it is. My point was that even if a class was nothing more than a protected typedef, there are enough advantages just in that aspect of it to ditch typedeffed clusters and use classes instead. Don't underestimate the value of *knowing* a change won't negatively impact other areas of code. Some may consider adequate testing the proper way to deal with the bugs my example illustrates. I prefer to design my code so the bugs don't get into the code in the first place. (I call it 'debugging by design,' or alternatively, 'prebugging.')

A class is a "template" (it doesn't exist until it is instantiated).

The object is instantiated with default values as soon as you drop the class cube on the block diagram, just like a cluster. What do you do if you want multiple instances of a cluster? Drop another one. What do you do if you want multiple instances of a class? Drop another one. What does a class have inside its private ctl? A cluster. How do you access private data in a class method? Using the bundle/unbundle prims. At its core a class is a cluster with some additional protection (restricted access to data) and features (dynamic dispatching) added to it.

Immutable objects? You mean a "constant" right? ;)

Nope, I mean immutable objects. Constant values are defined at edit-time. The values of immutable objects are defined at run-time. I might have many instances of the same class, each with different values, each of which, once instantiated by the RTE is forever after immutable and cannot be changed.

More to follow....

I can't wait... ;)

Typedef Hell.zip


You suck up. :D

Hell hath no fury like a woman scorned (or ignored ;))

You're limited to extending what operations are done on the data. Using classes I can extend what operations are done on the data AND extend it to different kinds of data.

Yes and no :rolleyes:. It depends on what exactly you are talking about. Extending a class by adding methods/properties? Or inheriting from that class and overriding/modifying existing properties and methods? My proposition is that my method is identical to the latter. I assumed you meant the latter, since you were speaking in the context of a closed component that would be extended by the user. In that scenario the user only has the inheritance option (there are other differences, but nothing significant I don't think). Or maybe I just missed what you are trying to say. But it's interesting you say that I can extend the operations. I would have argued (was expecting) the opposite, since in theory properties are synonymous to "controls" while operations seem fixed. But I'll leave that one for you to think about how I might respond, since I appear to be basically arguing against myself :D

Yep... dynamically loaded vis present a problem. You also have a problem if you have a sub vi that isn't part of the dependency chain--perhaps you temporarily removed it or are creating separate conceptual top-level code that uses the same low level functions. There are lots of ways to shoot yourself in the foot when your workflow depends on vis being loaded into memory. (I have two feet full of holes to prove it.)

Problem? No. It's much more of a problem building an executable with them. There are many more ways to shoot yourself in the foot with OOP. But for the case when you don't have a top level VI there are a couple of "tricks" from the old boys......

Many moons ago there used to be a "VI Tree.vi". You will see it with many drivers and I think it is still part of the requirement for a LV driver (although I haven't checked recently). Is it to show everyone what the VI hierarchy is? Well. Yes. But that's not all. It's also the "replacement" application to ensure all VIs are loaded into memory. However, with the advent of "required" and "optional" terminals, its effectiveness was somewhat diminished, since you can no longer detect broken VIs without wiring everything up.

The other method (which I employ) is to create test harnesses for grouped modules (system tests). You will see many of my profferings on Lava come with quite a few examples. This is because they are a subset of my test harnesses, so they are no extra effort and help people understand how to use them. Every new module gets added to the test harnesses, and the test harnesses get added to a "run test" vi. That is run after every few changes (take a look at the examples in the SQLite API). It's not a full factorial test harness (that's done later), but it ensures that all the VIs are loaded in memory and is a quick way to detect major bugs introduced as you go along. Very often they end up being a sub-system in the actual application.

Actually your comment reveals a fundamental difference between our approaches. "Loading the top level application" implies a top-down approach. After all, you can't load the top level vi during development if you don't have one. I tried that for a while but, for many reasons, abandoned it. I have had much more success building functional modules with fairly generic interfaces from the bottom up and assembling the top level application from the components.

LabVIEW is well suited to top-down design due to its hierarchical nature. Additionally, top-down design is well suited to design by specification decomposition. Drivers, on the other hand, lend themselves to bottom-up. However, for top-down, I find that as you get further down the tree, you get more and more functional replication where many functions are similar (but not quite identical), and that is not conducive to re-use and modularisation (within the project). I use a "diamond" approach (probably not an official one, but it describes the resulting architecture) which combines top-down AND bottom-up, and which (I find) exposes "nodes" (the tips of the diamonds) that are ripe for modularisation and provide segmentation (vertically and horizontally) for inter-process comms.

My comments are directed at typedeffed clusters. I'm still on the fence with typedeffed enums in a public interface. I can see holes where changes might cause things to break, but I haven't explored them enough yet.

Is this because you have only drawn a relationship between typedef'd clusters and a class's data control? What about an enumeration as a method?

Sure thing. Grab the attached project (LV2010) and follow the steps in Instructions.txt. The example should only take a couple minutes to work through. Once you do that come back and continue reading. :D

Ahhhh. IC. Here is your example "fixed"

Everything is behaving as it should. But you are assuming that the data you are supplying is linked to the container. It isn't, therefore you are (in fact) supplying it with the wrong data rather than the bundle/unbundle selecting the wrong cluster data. It's no wonder I've never seen it. It's a "type definition", not a "data definition".

"Single point maintenance" of typedeffed clusters is an illusion. :P

That's why I use the phrase "automagically" biggrin.gif

Absolutely it is. My point was that even if a class was nothing more than a protected typedef, there are enough advantages just in that aspect of it to ditch typedeffed clusters and use classes instead. Don't underestimate the value of *knowing* a change won't negatively impact other areas of code. Some may consider adequate testing the proper way to deal with the bugs my example illustrates. I prefer to design my code so the bugs don't get into the code in the first place. (I call it 'debugging by design,' or alternatively, 'prebugging.')

Disagree. :lol: There has to be much, much more to justify the complexity switch and the ditching of data-flow. I do know when a change will impact other areas, because my designs are generally modularised and therefore contained within a specific segment, rather than passed (transparently) through umpteen objects that may or may not exist at any one point in time.

The object is instantiated with default values as soon as you drop the class cube on the block diagram, just like a cluster. What do you do if you want multiple instances of a cluster? Drop another one. What do you do if you want multiple instances of a class? Drop another one. What does a class have inside its private ctl? A cluster. How do you access private data in a class method? Using the bundle/unbundle prims. At its core, a class is a cluster with some additional protection (restricted access to data) and features (dynamic dispatching) added to it.
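If it helps, here's that claim sketched in C++ (purely illustrative -- hypothetical names, and a struct/class is only an analogy for a cluster/LV class):

    #include <string>
    #include <utility>

    // A typedeffed cluster is like a plain struct: any code that can
    // see it can bundle/unbundle (read/write) its fields directly.
    struct PersonCluster {
        std::string firstName;
        std::string lastName;
    };

    // A LVOOP class holds the same data, but only its own methods
    // (the accessors) may touch the private cluster.
    class Person {
    public:
        Person(std::string first, std::string last)
            : firstName_(std::move(first)), lastName_(std::move(last)) {}
        const std::string& firstName() const { return firstName_; }
        void setFirstName(std::string f) { firstName_ = std::move(f); }
    private:
        std::string firstName_;  // the "private ctl" data
        std::string lastName_;
    };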

Ermm. Nope. You don't have multiple instances of a cluster. We are in the "data-driven" world here. A cluster is just a way of viewing or segmenting the data. It's the data that's important, not the container or the access method. Yes, a class has a cluster as the data member, but that's more to do with realising OOP in Labview than anything else. If anything, the similarity is between a data member and a local variable that is protected by accessors.

Nope, I mean immutable objects. Constant values are defined at edit-time; the values of immutable objects are defined at run-time. I might have many instances of the same class, each with different values, each of which, once instantiated by the RTE, is forever after immutable and cannot be changed.
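A loose text-language sketch of what I mean (a hypothetical SerialNumber class, C++ standing in for G):

    #include <string>
    #include <utility>

    class SerialNumber {
    public:
        // The value arrives at run-time, when the object is instantiated...
        explicit SerialNumber(std::string value) : value_(std::move(value)) {}
        const std::string& value() const { return value_; }
        // ...but there are no setters: each instance is immutable.
    private:
        const std::string value_;
    };

    // Many instances of the same class, each fixed at creation:
    //   SerialNumber a("SN-001");
    //   SerialNumber b("SN-002");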

Ahh. I'm with you now. Sounds complicated ;) I prefer files :D

That reminds me of the "Programmer's Quick Guide to the Languages" entry for C++ (I've posted it on here before):

YOUR PROGRAMMING TASK: To shoot yourself in the foot.

C++: You accidentally create a dozen instances of yourself and shoot them all in the foot. Providing emergency medical assistance is impossible since you can't tell which are bitwise copies and which are just pointing at others and saying, "That's me, over there."

I can't wait... ;)

I'm sure ;)

Link to comment

Responding out of order, starting with (what I think is) the most interesting part because it's late and I won't get to address everything...

Ahhhh. IC. Here is your example "fixed"

Uhh... disconnecting the constants from the typedef doesn't fix the problem. The only output that changed is code path 2, which now outputs the correct value instead of an incorrect value at the cost of code clarity. I can easily imagine future programmers thinking, "why is this unbundling 's1' instead of using the typedef and unbundling 'FirstName?'" And doesn't disconnecting the constant from the typedef defeat the purpose of typedeffing clusters in the first place? You're going to go manually update each disconnected constant when you change the typedef? What happened to single point maintenance?

Regardless, the constants were something of a sideshow anyway... like I said, I just discovered it today. The main point is what happens to the bundle/unbundle nodes wired to and from the conpane controls. (Paths 1, 3, 5, and 6.) Your fix didn't change those at all.

Results from Typedef Heaven:

Path 1 is still correct.

Path 3 still returns correct data but has incorrect code.

Path 5 still returns incorrect data and has incorrect code.

Path 6 still returns incorrect data and has incorrect code.

And, fixing path 5 still alters path 3 so what used to return correct data now returns incorrect data. All I can say is, if that's your version of heaven, I don't want to see your hell. :lol:

The errors these different code paths demonstrate cannot be solved with typedefs. You can put processes in place (insist on keeping Tree.vi updated) to try to mitigate the risk, but the process only works as long as people follow it. You can implement testing to try to catch those errors that have crept into your code, but no testing is going to catch all bugs, and anyone who relies on testing to ensure quality is playing a losing game.

At the same time, these very same errors cannot exist with classes. The whole reason these bugs pop up is that vi's that bundle/unbundle the typedef aren't loaded into memory when the edits are made. With classes, loading any method in the class automatically loads them all. In other words, you can't edit the cluster without also loading all the vi's that bundle or unbundle the cluster into memory.** The way LVOOP classes are designed doesn't allow these bugs into code in the first place.

(**Strictly speaking this is not 100% true. However, breaking a class in this way takes almost a concentrated effort to do so and classes are far more robust to these kinds of changes than typedefs.)

But you are assuming that the data you are supplying is linked to the container. It isn't; therefore you are (in fact) supplying it with the wrong data, rather than the bundle/unbundle selecting the wrong cluster data

I see your point with respect to the cluster constants, though as I mentioned above I'm not convinced disconnecting the constant from the typedef is a good general solution to that problem.

Additionally, top-down design is well suited to design by specification decomposition.

Specification? You get a software spec? And here I thought a "spec document" was some queer form of modern mythology. (I'm only half joking. We've tried using spec documents. They're outdated before the printer is done warming up.)

Daklu, on 12 December 2010 - 02:27 PM, said:

My comments are directed at typedeffed clusters. I'm still on the fence with typedeffed enums in a public interface. I can see holes where changes might cause things to break, but I haven't explored them enough yet.

Is this because you have only made a relationship between typedefs as clusters being synonymous with a class's data control only? What about an enumeration as a method?

It's past the stupid hour in my timezone... I don't understand what you're asking.

My concern with typedeffed enums is the same concern I have with typedeffed clusters. What happens to a preset enum constant or control on an unloaded block diagram when I make different kinds of changes to the typedef itself? (More precisely, what happens to the enum when I reload the vi after making the edits?)

Disagree. :lol:

I'm shocked. ;)

There has to be much, much more to justify the complexity switch and the ditching of data-flow.

Using a class as a protected cluster neither adds complexity nor disposes of data flow. There are OO design patterns that are fairly complex, but complexity is not an inherent requirement of OOP.

I do know when a change will impact other areas, because my designs are generally modularised and therefore contained within a specific segment...

So your modules either do not expose typedefs as part of their public interface, or you reuse them in other projects via copy and paste (and end up with many copies of nearly identical source code), right?

Ermm. Nope. You don't have multiple instances of a cluster.

My fault for not being clear. I meant multiple instances of a typedeffed cluster. I was freely (and confusingly) using the terms interchangeably. Dropping two instances of the same class cube on the block diagram is essentially equivalent to dropping two instances of a typedeffed cluster on the block diagram. Each of the four instances on the block diagram has its own data space that can be changed independently of the other three.

If anything, the similarity is between a data member and a local variable that is protected by accessors.

No. Based on this and a couple other comments you've made, it appears you have a fundamental misunderstanding of LVOOP. Labview classes are not inherently by-ref. You can create by-ref or singleton classes using LVOOP, but data does not automatically become by-ref just because you've put it in a class. Most of the classes I create are, in fact, by-val and follow all the typical rules of traditional sequential dataflow. By-ref and singleton functionality are added bonuses available for when they are needed to meet the project's requirements.
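A quick hedged sketch of the distinction in C++ terms (an analogy only -- LabVIEW's compiler decides when copies actually happen):

    #include <memory>

    struct Motor { double speed = 0.0; };

    int main() {
        // By-value (the LVOOP default): a branch is a copy, exactly
        // like forking a wire carrying a cluster.
        Motor a;
        Motor b = a;        // independent copy
        b.speed = 50.0;     // a.speed is still 0.0

        // By-reference: state is shared only when the programmer
        // deliberately wraps the object in a reference.
        auto p = std::make_shared<Motor>();
        auto q = p;         // p and q are the same instance
        q->speed = 50.0;    // visible through p as well
        return 0;
    }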

Link to comment

Hi,

I'm a theoretical OOPer. My problem is that I'm stuck in 7.1 and really like UML... :D

So here are my theoretical 'insights'.

There is a line of development in other languages as well, from the cluster (record, struct) to the class. In this development of programming languages, different features were added, merged, or dismissed as evolution played out. Using a type def, you already introduce the class(=*.ctl)/object(=wire) abstraction. With the Action Engine, we got encapsulation (all data is private) and methods (including accessors). With LVOOP we have inheritance. Still, LVOOP doesn't support things that other languages have had for ages (interfaces, abstract classes and methods). But on the other hand it allows for by-val implementation (objects that don't have an identity) as well as by-ref.

I seriously consider LVOOP unfinished, because it doesn't allow you to draw code the same way as you do non-LVOOP code, with wires and nodes. It's mainly some trees and config windows.

But I also don't think the evolution of other OOP languages is finished yet. See UML, where you only partially describe the system graphically, which means you can never create compilable code (partially undefined behaviour). Also, UML still has a lot of text statements (operations and properties are pure BNF text statements).

So the merging towards a graphical OOP code is still work in progress.

Let's get practical. In my private project I have to deal with OOP designs (uml/xmi and LV-GObjects). One issue that isn't possible to handle with a type def/AE is representing inheritance. Let's say I want to deal with control (parent), numeric control (child) and string control (child), and have some methods to serialize them to disk.

For a generic approach I started using variants. The classID (string) is changed to a variant. All properties I read from the Property nodes are placed as variant attributes. This can even be nested, e.g. for dealing with labels (they get serialized as decoration.text, as an object of their own, and then set as an attribute). Wow, I have compositions! :yes:
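In text-language terms my approach looks roughly like this (a C++17 sketch with made-up property names; the map-of-any stands in for a variant with attributes):

    #include <any>
    #include <map>
    #include <string>

    // A "variant with attributes": a bag of named properties, where
    // a property can itself hold another whole object (composition).
    using Object = std::map<std::string, std::any>;

    int main() {
        Object label;
        label["classID"] = std::string("Decoration.Text");
        label["Text"]    = std::string("Speed");

        Object numeric;
        numeric["classID"] = std::string("NumericControl");
        numeric["Value"]   = 42.0;
        numeric["Label"]   = label;   // nested object as an attribute

        // Every read is a run-time cast that can fail.
        double v = std::any_cast<double>(numeric["Value"]);
        return static_cast<int>(v) == 42 ? 0 : 1;
    }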

Well, I lose all compile-time safety. But I wonder what I'd get by combining it with AEs and some 'plugin' way to get dynamic dispatch.

Ahh, wasn't C++ written with C?

Felix

Link to comment

Uhh... disconnecting the constants from the typedef doesn't fix the problem. The only output that changed is code path 2, which now outputs the correct value instead of an incorrect value at the cost of code clarity. I can easily imagine future programmers thinking, "why is this unbundling 's1' instead of using the typedef and unbundling 'FirstName?'" And doesn't disconnecting the constant from the typedef defeat the purpose of typedeffing clusters in the first place? You're going to go manually update each disconnected constant when you change the typedef? What happened to single point maintenance?

No, it doesn't defeat the object of typedefs.

Typedef'd clusters (since you are so hung-up on just clusters :rolleyes:) are typically used to bundle and unbundle compound/complex controls/indicators so we can have nice neat wires to and from VIs :D. Additionally, they provide clarity and an easy method to select individual components of the compound control. ;) The benefit, as opposed to normal clusters, is that a change propagates through the entire application, so there is no need to go to every VI and modify a control/indicator just because you change the cluster. I (personally) have never used typedef'd constants (or ever seen them used the way you are trying to use them) except as a datatype for bundle-by-name. As I said previously, it is a TypeDef, not a DataDef.

Regardless, the constants were something of a sideshow anyway... like I said, I just discovered it today. The main point is what happens to the bundle/unbundle nodes wired to and from the conpane controls. (Paths 1, 3, 5, and 6.) Your fix didn't change those at all.

Results from Typedef Heaven:

<snip>

Well. I'm not sure what you are seeing. Here is a vid of what happens when I do the same.

http://www.screencast.com/users/Imp0st3r/folders/Jing/media/6d552790-5293-4b47-85bc-2fcb1402b085

All the names are John (which I think was the point). Sure, the bundles change, so now the 0th container is labeled "LastName". But it's just a label for the container (it could have been z5ww2qa). But because you are imposing ordered meaning on the data you are supplying, I think you are expecting it to read your intentions and choose an appropriate label to match your artificially imposed meaningful data. You will have noticed that when you change the cluster order (again, something I don't think most people do - but valid), the order within the cluster changed too (LastName is now at the top). So what you have done is change into which container the values are stored. They are both still stored, and they will both be taken out of the container that you stored them in. Only you are now storing the first name (data definition) in the last name (container).

If you are thinking this will not happen with your class... then how about this?

http://www.screencast.com/users/Imp0st3r/folders/Jing/media/672c5406-a56d-4c7a-a177-ab31a3c0cd15

I see your point with respect to the cluster constants, though as I mentioned above I'm not convinced disconnecting the constant from the typedef is a good general solution to that problem.

What problem? :D I think you are seeing a typedef as more than it really is, and you have probably found an edge case which seems to be an issue for your usage/expectation. It is just a control. It even has a control extension. It's no more an equivalent to a class than it is to a VI. The fact that you are using a bundle/unbundle is because you are using a compound control (cluster), and it has little to do with typedefs. Making such a control into a typedef just means we don't have to go to every VI front panel and modify it manually when we change the cluster.

Specification? You get a software spec? And here I thought a "spec document" was some queer form of modern mythology. (I'm only half joking. We've tried using spec documents. They're outdated before the printer is done warming up.)

Yup. And if one doesn't exist, I write one (or at least title a document "Design Specification" :D) by interrogating the customer. But mainly our projects are entire systems, and you need one to prove that the customer's requirements have been met by the design. Seat-of-yer-pants programming only works with a lot of experience and a small amount of code.

It's past the stupid hour in my timezone... I don't understand what you're asking.

My concern with typedeffed enums is the same concern I have with typedeffed clusters. What happens to a preset enum constant or control on an unloaded block diagram when I make different kinds of changes to the typedef itself? (More precisely, what happens to the enum when I reload the vi after making the edits?)

It's nothing to do with being in memory or not (I don't think). What you are seeing is the result of changing the order of the components within the cluster. An enum isn't a compound component, so there is no order associated.

Using a class as a protected cluster neither adds complexity nor disposes of data flow. There are OO design patterns that are fairly complex, but complexity is not an inherent requirement of OOP.

So your modules either do not expose typedefs as part of their public interface, or you reuse them in other projects via copy and paste (and end up with many copies of nearly identical source code), right?

Nope. The source is in SVN. OK, you have to have a copy of the VIs on the machine you are working on, in the same way that you have to have the class VIs present to be able to use them. So I'm not really sure what you are getting at here.

A module that might expose a typedef would be an action engine. I have a rather old drive controller, for example, that has an enumerated typedef with Move In, Move Out, Stop, Pause, Home. If I were to revisit it then I would probably go for a polymorphic VI instead, purely because it would only expose the controls for that particular function (you don't need a distance parm for Home or Stop, for example) rather than just ignoring certain inputs. But it's been fine for 3 years, and if it "ain't broke, don't fix it" :P

My fault for not being clear. I meant multiple instances of a typedeffed cluster. I was freely (and confusingly) using the terms interchangeably. Dropping two instances of the same class cube on the block diagram is essentially equivalent to dropping two instances of a typedeffed cluster on the block diagram. Each of the four instances on the block diagram has its own data space that can be changed independently of the other three.

I suppose. But it's not used like that, and I cannot think of a situation where you would want to (what would be the benefit?). It's used either as a control, or as a "Type Definition" for a bundle-by-name. It's a bit like laying down a queue reference constant. Sure you can. But why would you? Unless of course you want to impose "Type" or cast it.

No. Based on this and a couple other comments you've made, it appears you have a fundamental misunderstanding of LVOOP. Labview classes are not inherently by-ref. You can create by-ref or singleton classes using LVOOP, but data does not automatically become by-ref just because you've put it in a class. Most of the classes I create are, in fact, by-val and follow all the typical rules of traditional sequential dataflow. By-ref and singleton functionality are added bonuses available for when they are needed to meet the project's requirements.

Maybe I don't. :worshippy: But I do know "by-val" doesn't mean it's "data-flow" any more than using a "class" means "object oriented". Like you said, it's up to the programmer. It's just that the defaults are different. In classic Labview, the default is implicit state with single instances. In LVOOP it's multiple instances with managed state. Either can be made to do the other; it's just a question of the amount of work to turn one into the other. Well, that's how it seems to a heathen like me ;)
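To put that in heathen text-speak (a loose sketch, nothing more -- the static local is my stand-in for an AE's uninitialised shift register):

    // Classic Labview default: one implicit instance, state hidden
    // inside the engine (like an action engine's shift register).
    double drive_move(double delta) {
        static double position = 0.0;  // the single, implicit instance
        position += delta;
        return position;
    }

    // LVOOP default: state lives in the object, so every class cube
    // dropped on the diagram is an independent, managed instance.
    class Drive {
    public:
        double move(double delta) { position_ += delta; return position_; }
    private:
        double position_ = 0.0;
    };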

Link to comment

But I also don't think the evolution of other OOP languages is finished yet.

<snip>

Well, I lose all compile-time safety. But I wonder what I'd get by combining it with AEs and some 'plugin' way to get dynamic dispatch.

I'm not sure I would agree that OOP is still evolving (some say it's a mature methodology), but I would agree LVOOP is probably unfinished. The question is, as we are already 10 years behind the others, will it be finished before the next fad? ;) Since I think we are due for another radical change in program design (akin to text vs graphical, or structured vs OOP), it seems unlikely.

As for a plug-in way of invoking AEs: just dynamically load them. If you make the call something like "Move.Drive Controller" or "Drive Controller.Move" (depending on how you like it), strip the "Move", use it for the action, and load your "Drive Controller.vi". But for me, compile-time safety is a huge plus for using Labview.
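In text form the idea is roughly this (a C++ sketch; the map stands in for dynamically loading the AE VI by name, and the action names come from the drive-controller example above):

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        // Registry standing in for the dynamically loaded AEs.
        std::map<std::string, std::function<void()>> actions = {
            {"Move", [] { std::cout << "Drive Controller: Move\n"; }},
            {"Stop", [] { std::cout << "Drive Controller: Stop\n"; }},
        };

        // Split "Drive Controller.Move" into module and action.
        std::string call = "Drive Controller.Move";
        auto dot = call.find('.');
        std::string module = call.substr(0, dot);   // which VI to load
        std::string action = call.substr(dot + 1);  // which case to run
        (void)module;  // (would select which AE to load dynamically)

        actions.at(action)();  // resolved by name at run time
        return 0;
    }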

Link to comment

Quick response on your videos. Everything else will have to wait.

Well. I'm not sure what you are seeing. Here is a vid of what happens when I do the same.

I'm seeing you not following the instructions. :P

Step 2: Close the project. Open Control 1.ctl, LoadedBundle.vi, and LoadedUnbundle.vi. (You only opened the ctl.)

Step 3: On the typedef, rename s1 to FirstName and s2 to LastName. Apply changes. (You didn't apply the changes before reordering the controls, which isn't surprising since the option isn't available if there are no vis in memory to apply the changes to.)

Your examples showed a single edit. My instructions detail a sequence of two distinct edits that can be separated by an arbitrary amount of time. There is a difference.

The instructions may appear to be an extreme corner case, but they are designed to simulate what can happen over a period of time as the project evolves, not give a realistic sequence of actions one is likely to sit down and do all at once. Most developers know (through painful experience) not to make lots of changes to typedeffed clusters without forcing a recompile by saving or applying the changes during intermediate steps. What the example shows is how the bundle/unbundle nodes react differently to the typedef edits depending on whether or not the vi containing that bundle/unbundle node is loaded in memory at the time of the edit. (At least it does if you follow the instructions. ;) )

But because you are imposing ordered meaning on the data you are supplying, I think you are expecting it to read your intentions...

It's not an expectation I created. It's an expectation set because bundle/unbundle by name prims usually do successfully interpret what is supposed to happen. When you follow the instructions the bundle/unbundle nodes in LoadedBundle and LoadedUnbundle are still wired correctly. It's the vis that weren't loaded during the edits that end up wrong.

Editing the typedef can easily cause inconsistent results that depend entirely on when the dependent vis were loaded into memory relative to a sequence of typedef edits. A series of small changes that under many conditions cause no problems do, in fact, cause unexpected problems in situations that are not well understood by the Labview community as a whole. That this happens is crystal clear and, as near as I can tell anyway, indisputable. (I think you'll agree once you follow the instructions correctly. :lol: )

The million dollar question is, how much do you want the quality of your software to depend on the state of your development environment while editing, and what are you going to do to mitigate the risks? (That's a general "you," not you--Shaun--specifically.)

If you are thinking this will not happen with your class... then how about this?

You did see where I said "it takes almost a concentrated effort" to break the class, yes? (And yes, I do consider your example a bit of an effort, because users are not likely to spend hours, days, or weeks making that single edit.) If you follow the scenario I gave, the class doesn't break. I will also point out that if it were discovered sometime in the future that someone did break the class this way, it would be far easier to fix the damage than it would be with a typedef cluster. Why? Because the set of vis that have the ability to bundle/unbundle that data is known. Using a typedef cluster to pass data between modules means you have a lot more work to do.

The fact that you are using a bundle/unbundle is because you are using a compound control (cluster), and it has little to do with typedefs.

Unless we parted ways somewhere along the line, the context of this discussion is how best to pass data around your application: clusters or classes. When I say "typedef" I have been referring specifically to typedeffed clusters, because they are commonly used as Labview's equivalent to a struct, an easy way to pass data around the app, and they have the (illusory :lol: ) benefit of single point maintenance. I thought I mentioned it earlier, but maybe not. I apologize for not being more clear.

(Da-yem... 9:15 pm. Mrs. Dak is going to kill me...)

(BTW, did I mention you didn't follow the instructions correctly? :lol: )

Link to comment

<snip> :lol:

Ahhhhh. I see what you are getting at now. :lightbulb: The light has flickered on (I have a neon one :D)

I must admit, I did the usual: identify the problem and find a quicker way to replicate it (I thought you were banging on about an old "feature" and I knew how to replicate it :oops:). That's why I didn't follow your procedure exactly for the vid (I did the first few times to see what the effect was and thought "Ahhh, that old chestnut"). But having done so, I would actually say the class was easier to break, since I didn't even have to have any VIs open :lol: So it really is a corner of a corner case. Have you raised the CAR yet? :D

But it does demonstrate (as you rightly say) a little-understood effect. I've been skirting around it for so long I'd forgotten it. I didn't understand why it did it (never bothered, like with so many nuances in LV); I only knew it could happen and modified my workflow so it didn't happen to me :)

But in terms of effort defending against it... Well. How often does it happen? I said before, I've never seen it (untrue of course, given what I've said above), so is it something to get our knickers in a twist about? A purist would say yes. An accountant would say "how much does it cost?" :D Is awareness enough (in the same way I'm fidgety about Windows indexing and always turn it off)? What is the trade-off between detecting that bug and writing lots of defensive code or test cases that may be prone to bugs themselves? I think it's the developer's call. If it affects both LVOOP and traditional Labview then it should be considered a bug, or at the very least have a big red banner in the help :yes:

Still going to use typedefs though :D

Edited by ShaunR
Link to comment

A couple of last comments and then I'll get out of the way. First, LabVIEW native OOP is absolutely dataflow (and not just by-val). No VI runs without all data present at its inputs. There are no "variables, pointers, etc." - all data is "on the wire". Copies of data are made at wire branches when required. Or, in OOP jargon: no METHOD runs without all data present at its inputs. There are no "variables, pointers, etc." - the INSTANCE is "on the wire". Copies of the INSTANCE are made at wire branches when required. As always, one can break dataflow when necessary.

Second, a typedef is just a logical container for datatypes that is a convenience for the programmer. It should represent a group of data that belongs together - like maybe the IP address and port for a TCP/IP server connection. The programmer typically needs both in the same place at the same time. The fact that LabVIEW auto-propagates changes seems to me to be desirable, because LV doesn't have separate link/compile operations. If you changed a typedef in C, the changes would propagate at the next compile and link - LabVIEW needs to detect the change and then force a recompile everywhere because of the JIT compiler.
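In C++ terms, that kind of typedef is nothing more than this (illustrative field names and values only):

    #include <cstdint>
    #include <string>

    // Data that belongs together travels together.
    struct TcpEndpoint {
        std::string address;   // e.g. "192.168.0.10"
        std::uint16_t port;    // e.g. 6341
    };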

Mark

Link to comment

Been busy on the road, can't spend much time reading... forgive the interruption.

1) Comparing some of the bugs in type-def'd clusters to non-buggy LVOOP just isn't fair. In some versions of LV the type def worked great (LV 7.1 maybe).

2) AEs that use type defs to encapsulate data and LVOOP are not mutually exclusive and live quite well on the same diagram.

3) I still use a Tree.vi (Greg McKaskle's suggestions became my mandates) and they take care of the issues with VIs knowing about changes.

4) I use a bottom-up/top-down approach to first ID the risk, then turn my attention to the top-down.

5) Type defs work just fine if you know the spec ahead of time and changes are additions only. Renaming and removing is hazardous, mostly because of the bugs in type defs.

6) Viewed from the VI Hierarchy, the stuff at the bottom is where I find most of the potential for re-use, so it gets first consideration for implementation in LVOOP. The closer I get to the top, the more specific the code is to the app and therefore the less re-use potential, so those get the "quick and dirty" AE typedef treatment.

7) I still find LVOOP to be more work up front.

8) I am repeatedly surprised by how much I can change quickly if it was first implemented in LVOOP.

9) I have avoided type-def'd enums on LVOOP classes to avoid issues with upgrades. I use rings, and new functionality only gets added at higher ring values. I may choose to convert the rings to enums inside the class itself, but I have not made up my mind on that point yet.

10) LVOOP suffers big time from its vocabulary and its insistence on keeping everything soooo arbitrary that developers can't find a toe-hold when starting the trek up the learning curve.

11) My re-use library has grown quite a bit since switching to LVOOP. I was having a hell of a time doing that before LVOOP.

Done interrupting,

Ben

Link to comment

Still going to use typedefs though :D

I didn't expect to change your mind. That's okay... the first step to finding a cure is admitting there's a problem. :lol:

Have you raised the CAR yet?

Nope.

First, it's not a bug. To clarify: while it is undesirable behavior from the developer's point of view, it's not a "bug" in the sense that the cluster and bundle/unbundle source code is faulty. Rather, it is an inherent design limitation due to clusters not maintaining a history of their changes. The issues arise because people use typedeffed clusters in ways that (probably) were not originally intended--namely, as a robust mechanism to pass data between independent components.

Second, NI has already implemented the solution: classes. There may be some bugs in Labview's OOP code, but they will be fixed over time. It's unlikely clusters will ever be "fixed."

So it really is a corner of a corner case

For some, yes. Personally I was vaguely aware something was odd but, like you, chalked it up to Labview's quirkiness. It wasn't until I understood exactly what was happening and why it was happening that I began to recognize how frequently this issue was costing me extra time. As I said earlier, changing my design approach to use classes (or native data types) instead of typedeffed clusters to pass data between components has been liberating and saved me tons of time.

Who won't run into this issue?

-Developers who create highly coupled applications, where loading one vi essentially forces Labview to load them all.

-Developers who adhere to strict process rules, such as maintaining Tree.vi and not making certain changes to clusters.

-Developers who use the copy and paste method of code reuse, establishing unique instances of their "reuse" library for each project.

When does this start becoming an issue that needs to be addressed?

-In multi-developer environments, where process standards are hard to enforce and verifying they have been followed correctly is difficult and time consuming.

-In decoupled applications, when dev work occurs at the component level instead of the application level.

-When dev teams begin using single point deployed reusable components (i.e. installed to user.lib or vi.lib) instead of copy and paste reuse.

I think it's the developer's call.

I absolutely agree. However, the developer has to understand the risks and costs associated with the choices before an informed decision can be made. The risks of using clusters are not well understood, hence (in my experience) the cost is almost universally underestimated. OOP is new to the Labview community, so the difficulty in using classes in place of clusters is typically overestimated and the expected cost is set too high. With a skewed evaluation of the risks and costs it's not surprising there's so much resistance to classes.

Now that we've put that issue to bed, I have to figure out what you're trying to say about classes violating data flow, because it makes no sense to me. (I guess the stupid hour hit early today in the Pacific timezone.) "Dataflow" means different things in different contexts. Can you explain what it means to you?

(Note: I'm not ignoring Ben and Mark, but no time to respond right now.)

Link to comment
