


Recommended Posts

First, it's not a bug. To clarify, while it is undesirable behavior from the developer's point of view, it's not a "bug" in that the cluster and bundle/unbundle source code is faulty. Rather, it is an inherent design limitation due to clusters not maintaining a history of their changes. The issues arise because people use typedeffed clusters in ways that (probably) were not originally intended--namely, as a robust mechanism to pass data between independent components.

Second, NI has already implemented the solution. Classes. There may be some bugs in Labview's OOP code, but they will be fixed over time. It's unlikely clusters will ever be "fixed."

I agree that it may not technically be a bug if there's some spec somewhere at NI that says this behavior is expected, but it's still a horrible design defect in the LabVIEW development environment. There's no reason Bundle by Name and Unbundle by Name couldn't keep track of the names and break the VI if there were any change in the names, sort of like relinking subvis. The amount of work to fix hundreds of broken subvis pales in comparison to the havoc wreaked by having your tested production code silently change its functionality.

We've learned to be more vigilant about cluster name and order changes, and have added more unit testing, but there's really no excuse for NI permitting this. Re the current discussion, I don't believe the existence of this horrible design bug ought to inform the choice of whether to use OOP or not, though given that NI seems reluctant to fix it, maybe it does factor in.

Jason

Link to comment



Slightly off-topic...

If you really have to delete an element from a type-def'd cluster, I suggest you first change it to a data type that is incompatible with the original version, e.g. change a numeric to a string with the Tree.vi open. This will break all of the bundle/unbundles and let you find and remove references to that value.

THEN you can delete the value knowing that you have not missed any of them.

The VI Analyzer used to report an error about bundling the same value twice (often the case if you delete a field and LV wires to the next one of the same type).

As far as I know, the bugs that are reported are being fixed.

Ben

Link to comment

I didn't expect to change your mind. That's okay... the first step to finding a cure is admitting there's a problem. :lol:

Who won't run into this issue?

-Developers who create highly coupled applications, where loading one vi essentially forces Labview to load them all.

-Developers who adhere to strict process rules, such as maintaining Tree.vi and not making certain changes to clusters.

-Developers who use the copy and paste method of code reuse, establishing unique instances of their "reuse" library for each project.

(Note: I'm not ignoring Ben and Mark, but no time to respond right now.)

OK - so I lied - I'm back for more :)

I think this confuses highly coupled with statically loaded. I don't write code I consider highly coupled but I seldom if ever run into this kind of issue because I don't use much code deployed as dynamic libraries. I do have a bunch of classes and OO frameworks that I use and re-use but I use them by creating a unique project file for each deployed app and then adding those components that I need. So, I have a class library that is immutable (within the context of that project) that I drag into the project explorer - this is not a copy of the code, just a "link" to where the class is defined. Now, if I use any of that class in any capacity in that project, the class gets loaded into memory (and if I'm not using it, it shouldn't be there). But, the only "coupling" between the classes I use is that they are all called at some point by something in my application-specific project. My classes often include public typedefs for creating blocks of data that benefit from logical organization. But these typedefs get updated across all callers because of the specific project (not a VI tree, in this case). I realize the project doesn't force a load into memory, but once again, using the class does and that's the only reason they're in the project.

I'm still forced to deal with other users of the classes that might not be loaded, but that's what an interface spec is for - any changes to a public API shouldn't be taken lightly. The big difference is that all my code is typically statically linked so everything the project needs is there at compile and build time. But this does NOT mean it's highly coupled as each class has a clear interface, accessors, protected and private methods, and so on.

Just to help derail this thread, I'll state that I'm not a big fan of using the plug-in library architecture just because you can. Sometimes it's really helpful, but if an application can be delivered as a single executable (and that includes with 8.6 style support folders) then I find it much easier to maintain since I don't get LabVIEW's version of DLL hell. I don't care if my installer is 160 Mb or the installed app is 60 Mb. The performance under LabVIEW's memory manager is more than adequate.

Mark

Link to comment

There's no reason bundle by Name and Unbundle by Name couldn't keep track of the names and break the VI if there were any change in the names, sort of like relinking subvis.

[Edit - After typing all this up, I realized you're asking for a broken vi, not automatic mutation. While that has some similar problems of its own, it's not the question I was addressing with my response. Sorry about that. I'll leave the text just in case somebody comes along wondering about automatic mutation.]

:oops:

Technically you may be correct, though it still wouldn't give you the user experience it appears you think it would. If you can tolerate my lengthy explanation I think you'll see why.

First, the reason classes behave correctly in the example I posted is because all the vis that bundle/unbundle the data are loaded into memory during the edits. NI has taken a lot of flak from users (me included) for the load-part, load-all functionality built into classes, but it was the correct decision. So the first question is, do you want Labview to automatically load all the vis that bundle/unbundle the typedeffed cluster when you open the ctl? I'll go out on a limb and guess your answer is "no." (Besides, implementing that would be a problem. There really isn't a good way for the typedef to know which vis bundle/unbundle the data.)

So for this behavior to exist, the ctl needs to maintain a version number and mutation history of all the edits that have been made to it. That (theoretically) would allow the vi that bundles/unbundles the cluster, the next time it loads, to compare its cluster version against the current cluster version and step through the updates one at a time until all the changes have been applied.

As a matter of fact, NI has already implemented this exact scheme in classes. Not to make sure the bundle/unbundle nodes are updated correctly (that's already taken care of by the auto-loading behavior), but for saving and loading objects to disk. Consider the following scenario:

1. Your application creates an object containing some data, flattens it, and saves it to disk.

2. Somebody edits the class cluster, perhaps renaming or reordering a few elements.

3. Your updated application attempts to load the object from disk and finds the object's data cluster on disk no longer matches the class' cluster definition.

This is where the class' mutation history kicks in. The class version number is stored on disk with the object data, so when the data is reloaded LV can step through and apply the updates one at a time, until the loaded object version matches the current class version. Sounds perfect, yes?
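(Since LabVIEW diagrams can't be pasted inline, here is a rough sketch in Python of what such a version-stepped scheme looks like conceptually. The field names and migration steps are invented for illustration; this is the general pattern, not LVOOP's actual implementation.)

    # Sketch only: saved data carries the version it was written with, and a
    # chain of per-version steps upgrades it on load.
    CURRENT_VERSION = 3

    def v1_to_v2(d):
        d = dict(d)
        d["timeout_ms"] = 1000            # a field added in v2 gets a default value
        return d

    def v2_to_v3(d):
        d = dict(d)
        d["MyString"] = d.pop("String")   # LV has to *guess* this edit was a rename
        return d

    MUTATIONS = {1: v1_to_v2, 2: v2_to_v3}

    def load(flattened):
        """Apply the updates one version at a time until the data matches the current class."""
        version, data = flattened["version"], flattened["data"]
        while version < CURRENT_VERSION:
            data = MUTATIONS[version](data)
            version += 1
        return data

    old = {"version": 1, "data": {"String": "hello", "count": 7}}
    print(load(old))   # -> {'count': 7, 'timeout_ms': 1000, 'MyString': 'hello'}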

As it turns out, automatic class mutation is very error prone and subject to fairly esoteric rules. It is risky enough that most developers write their own serialization methods to manually flatten objects for saving to disk rather than letting LV automatically flatten them. This is not a failure on NI's part. It is simply because there is no way for LV to definitively discern the programmer's intent based on a series of edits.

Suppose I do the following edits to the class cluster:

- Remove string control "String"

- Add string control "String"

- Rename "String" to "MyString"

Was my intent for the data that used to be stored in "String" to now be stored in "MyString?" Possibly. Or was my intent to discard the saved data that used to be stored in "String" and create an entirely new field named "MyString?" That's possible too. Both scenarios are plausible. There's simply no way LV can automatically figure out what you want to happen, so it makes reasonable educated guesses. Unfortunately, it guesses wrong sometimes, and when that happens functionality breaks.

Giving clusters a mutation history isn't a real solution to this problem. It will just open another can of worms that has even more people screaming at NI to fix the "bug." The solution is for us, as developers, to recognize when our programming techniques, technologies, and patterns have reached their limitations. If we have requirements that push those things beyond their limits, the onus is on us to apply techniques, technologies, and patterns that are better suited to achieving the requirements.

[/soapbox]

I think this confuses highly coupled with statically loaded.

I'm not sure why you think that. Most of the code I write is statically linked, yet I consider it reasonably well decoupled.

Just to help derail this thread, I'll state that I'm not a big fan of using the plug-in library architecture just because you can.

In general, nobody should use a particular technique "just because they can." If you're going to spend time implementing it you ought to have a reason for doing so. But I'm not advocating a plug in architecture so I'm not sure who you're protesting against...?

Link to comment

Technically you may be correct, though it still wouldn't give you the user experience it appears you think it would. If you can tolerate my lengthy explanation I think you'll see why.

First, the reason classes behave correctly in the example I posted is because all the vis that bundle/unbundle the data are loaded into memory during the edits.

Indeed, they "behave" correctly. As indeed my procedure yielded, for the reasons I argued previously about containers. But they aren't immune as I think you are suggesting here (remember class hell?).

NI has taken a lot of flak from users (me included) for the load-part, load-all functionality built into classes, but it was the correct decision. So the first question is, do you want Labview to automatically load all the vis that bundle/unbundle the typedeffed cluster when you open the ctl?

I'll go out on a limb and guess your answer is "no." (Besides, implementing that would be a problem. There really isn't a good way for the typedef to know which vis bundle/unbundle the data.)

Guess again ;)

Yes. That is effectively what you are doing when you use a Tree.vi. In fact, I would prefer that all VIs (and dependents) included in a project are loaded when the project is loaded (I don't really see the difference between the "class" editor and the "project" editor, and the class editor loads everything I think...maybe wrong). Of course this would be a lot less painful for many if you could "nest" projects.


So for this behavior to exist, the ctl needs to maintain a version number and mutation history of all the edits that have been made to it. That (theoretically) would allow the vi that bundles/unbundles the cluster, the next time it loads, to compare its cluster version against the current cluster version and step through the updates one at a time until all the changes have been applied.

As a matter of fact, NI has already implemented this exact scheme in classes. Not to make sure the bundle/unbundle nodes are updated correctly (that's already taken care of by the auto-loading behavior), but for saving and loading objects to disk. Consider the following scenario:

1. Your application creates an object containing some data, flattens it, and saves it to disk.

2. Somebody edits the class cluster, perhaps renaming or reordering a few elements.

3. Your updated application attempts to load the object from disk and finds the object's data cluster on disk no longer matches the class' cluster definition.

This is where the class' mutation history kicks in. The class version number is stored on disk with the object data, so when the data is reloaded LV can step through and apply the updates one at a time, until the loaded object version matches the current class version. Sounds perfect, yes?

As it turns out, automatic class mutation is very error prone and subject to fairly esoteric rules. It is risky enough that most developers write their own serialization methods to manually flatten objects for saving to disk rather than letting LV automatically flatten them. This is not a failure on NI's part. It is simply because there is no way for LV to definitively discern the programmer's intent based on a series of edits.

Suppose I do the following edits to the class cluster:

- Remove string control "String"

- Add string control "String"

- Rename "String" to "MyString"

Was my intent for the data that used to be stored in "String" to now be stored in "MyString?" Possibly. Or was my intent to discard the saved data that used to be stored in "String" and create an entirely new field named "MyString?" That's possible too. Both scenarios are plausible. There's simply no way LV can automatically figure out what you want to happen, so it makes reasonable educated guesses. Unfortunately, it guesses wrong sometimes, and when that happens functionality breaks.

Intent is irrelevant if the behaviour is consistent (as I was saying before about containers). Although I hadn't spotted the particular scenario in the example, treating a typedef'd cluster as just a container will yield the correct behaviour (note I'm saying behaviour here since both classes and typedef'd clusters can yield incorrect diagrams) as long as either

1. ALL vis are in memory.

OR

2. ALL vis are not in memory.

It's only that in your procedure some are and some aren't that you get a mismatch.


There really isn't a good way for the typedef to know which vis bundle/unbundle the data.)

Giving clusters a mutation history isn't a real solution to this problem. It will just open another can of worms that has even more people screaming at NI to fix the "bug." The solution is for us, as developers, to recognize when our programming techniques, technologies, and patterns have reached their limitations. If we have requirements that push those things beyond their limits, the onus is on us to apply techniques, technologies, and patterns that are better suited to achieving the requirements.

[/soapbox]

Well, there is already a suggestion on the NI Black hole site. To drop the simplicity of typedefs for a different paradigm I think is a bit severe, and in these sorts of issues I like to take the stance of my customers (it's an issue....fix it :D ). But even that suggestion isn't bullet-proof. What happens if you rename a class's VI? ;)


I'm not sure why you think that. Most of the code I write is statically linked, yet I consider it reasonably well decoupled.

I think it is probably due to statements where you appear to assume that classic LabVIEW is highly coupled just because it's not OOP (I too was going to make a comment about this, but got bogged down in the typedef details :P ).


In general, nobody should use a particular technique "just because they can." If you're going to spend time implementing it you ought to have a reason for doing so. But I'm not advocating a plug in architecture so I'm not sure who you're protesting against...?

I don't think he's against anyone. Just picking up on the classic LabVIEW = highly coupled comments.

One thing I've noticed with comments from other people (I'm impressed at their stamina :D ) is that most aren't writing OOP applications. I've already commented on encapsulation several times, and this seems to be its main use. If that is all it's used for, then it's a bit of a waste (they could have upgraded the event structure instead :D ). I wonder if we could do a poll?


I agree that it may not technically be a bug if there's some spec somewhere at NI that says this behavior is expected, but it's still a horrible design defect in the LabVIEW development environment. There's no reason Bundle by Name and Unbundle by Name couldn't keep track of the names and break the VI if there were any change in the names, sort of like relinking subvis. The amount of work to fix hundreds of broken subvis pales in comparison to the havoc wreaked by having your tested production code silently change its functionality.

We've learned to be more vigilant about cluster name and order changes, and have added more unit testing, but there's really no excuse for NI permitting this. Re the current discussion, I don't believe the existence of this horrible design bug ought to inform the choice of whether to use OOP or not, though given that NI seems reluctant to fix it, maybe it does factor in.

Jason

I'm right behind you on this one. One thing about software is that pretty much anything is possible given enough time and resources. But to give NI their due, perhaps the "old timers" (like me :cool: ) just haven't been as vocal as the OOP community. Couple that with (I believe) some NI internal heavyweights bludgeoning OOP forward, and I think a few people are feeling a little bit "left out". Maybe it's time to allocate a bit more resource back into core LabVIEW features that everyone uses.

Link to comment

OK - so I lied - I'm back for more :)

And very welcome you are :thumbsup:

I think this confuses highly coupled with statically loaded. I don't write code I consider highly coupled but I seldom if ever run into this kind of issue because I don't use much code deployed as dynamic libraries. I do have a bunch of classes and OO frameworks that I use and re-use but I use them by creating a unique project file for each deployed app and then adding those components that I need. So, I have a class library that is immutable (within the context of that project) that I drag into the project explorer - this is not a copy of the code, just a "link" to where the class is defined. Now, if I use any of that class in any capacity in that project, the class gets loaded into memory (and if I'm not using it, it shouldn't be there). But, the only "coupling" between the classes I use is that they are all called at some point by something in my application-specific project. My classes often include public typedefs for creating blocks of data that benefit from logical organization. But these typedefs get updated across all callers because of the specific project (not a VI tree, in this case). I realize the project doesn't force a load into memory, but once again, using the class does and that's the only reason they're in the project.

I'm still forced to deal with other users of the classes that might not be loaded, but that's what an interface spec is for - any changes to a public API shouldn't be taken lightly. The big difference is that all my code is typically statically linked so everything the project needs is there at compile and build time. But this does NOT mean it's highly coupled as each class has a clear interface, accessors, protected and private methods, and so on.

Just to help derail this thread, I'll state that I'm not a big fan of using the plug-in library architecture just because you can. Sometimes it's really helpful, but if an application can be delivered as a single executable (and that includes with 8.6 style support folders) then I find it much easier to maintain since I don't get LabVIEW's version of DLL hell. I don't care if my installer is 160 Mb or the installed app is 60 Mb. The performance under LabVIEW's memory manager is more than adequate.

Mark

I think the main difference between myself and Daklu is that I write entire systems whereas Daklu is focused on toolchains. As such our goals are considerably different. Re-use, for example, isn't the be-all and end-all and is only a small consideration for my projects in the scheme of things. However in Daklu's case, it saves him an enormous amount of time and effort. A good example of this is that I spend very little time automating programming processes because I'm building bespoke systems, so each is a one-off and takes (typically) 9 months to design and build. Contrast that with Daklu's 2-week window and it becomes obvious where priorities lie. That's not to say that re-use is never a consideration, it's just that the focus is different. My re-use tends to be at a higher level (around control, data logging and comms). You might consider that I would be a "customer" of Daklu. (Hope I've got Daklu's job spec right :lol: )

As such, I'm in a similar position to you in that the output tends to be monolithic. It cannot run on other machines with different hardware, so I don't need "plug-and-pray" features.

Edited by ShaunR
Link to comment

Indeed, they "behave" correctly. As indeed my procedure yielded, for the reasons I argued previously about containers. But they aren't immune as I think you are suggesting here (remember class hell?).

No, they aren't immune to mistakes, and I didn't mean to imply they were. But they are more robust to common mistakes than typedeffed clusters. They correctly handle a larger set of editing scenarios than clusters.

Guess again ;)

Yes. That is effectively what you are doing when you use a Tree.vi.

No it isn't. I believe you're focusing on how it would fit into your specific workflow, not how it would work in the general case.

Using Tree.vi loads only those vis you want loaded. Usually it's all the vis in the project. What happens if vis not in the project also depend on the cluster? What happens if people aren't even using projects? Should those non-project vis be loaded too? Fixing this issue using an auto-loading system requires those vis be loaded.

What if you've deployed reusable code to user.lib and want to update a typedeffed cluster? You'll have to dig out *every* project that has used that code and put it on your machine before you can update the typedef. No thanks. How should LV react if it can't find some of the dependent vis? Disallow editing? Disconnect the typedef? Two-way dependencies between the typedef and the vi simplify a certain subset of actions, but they create far more problems than they solve.

In fact, I would prefer that all VIs (and dependents) included in a project are loaded when the project is loaded.

Not me. That's death for large projects. 6-10 minute load times are no fun.

(I don't really see the difference between the "class" editor and the "project" editor, and the class editor loads everything I think...maybe wrong).

By "class editor" are you referring to the window that looks like a project window that pops up when you 'right click --> Open' a class from your project? There isn't much difference between them because it *is* a project window. Labview opens a new project context and loads the class into it for you. The reason the "class editor" loads everything is because loading a class (or any class members) automatically loads everything in the class. It's a feature of the class, not the "class editor."

[Edit - I was wrong. It doesn't open the class in a new project context. Given that it doesn't, I confess I don't see an advantage of opening the class in a new window?]

Intent is irrelevant if the behaviour is consistent (as I was saying before about containers). Although I hadn't spotted the particular scenario in the example, treating a typedef'd cluster as just a container will yield the correct behaviour as long as either

1. ALL vis are in memory.

OR

2. ALL vis are not in memory.

Okay, so since there's no practical way to ensure ALL dependent vis are in memory, the only way to guarantee consistent editing behavior is to not propagate typedef changes out to the vis at all. I'm sure the LabVIEW community would support that idea. :lol:

To drop the simplicity of typedefs for a different paradigm I think is a bit severe, and in these sorts of issues I like to take the stance of my customers (it's an issue....fix it :D ).

They did fix it. They gave us classes. Any solution to this problem ultimately requires putting more protection around bundling/unbundling data types. That's exactly what classes do. Using classes strictly as a typed data container instead of typedeffed clusters is *not* a completely different paradigm. (There are other things classes can do, but you certainly don't have to use them.) You don't have to embrace OOP to use classes as your data types.
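(To make the "class as a plain data container" idea concrete, here is a loose Python analogy; the names are invented and the syntax obviously isn't LabVIEW. The point is only that callers reach the data through accessors instead of bundling/unbundling it directly.)

    # Sketch only: a class used purely as a typed data container.
    class MotorSettings:
        """Plays the role of a typedef'd cluster, but with private data."""

        def __init__(self, speed_rpm=0, torque_nm=0.0):
            self._speed_rpm = speed_rpm      # like the class's private control cluster
            self._torque_nm = torque_nm

        # Accessors take the place of bundle/unbundle-by-name and live inside the class.
        def get_speed(self):
            return self._speed_rpm

        def set_speed(self, value):
            self._speed_rpm = value

    s = MotorSettings()
    s.set_speed(1200)
    print(s.get_speed())   # 1200

    # Renaming the private field touches only the two accessors above. Removing or
    # renaming get_speed() makes every caller fail with an explicit error instead of
    # silently rewiring to a different field.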

What happens if you rename a class's VI? ;)

You get load errors, which is vastly more desirable than behind the scenes code changes to the cluster's bundle/unbundle nodes. Besides, the risk of renaming a vi is *far* better understood by LV users than the risk of editing a typedeffed cluster.

I think it is probably due to statements where you appear to assume that classic LabVIEW is highly coupled just because it's not OOP

I didn't mean to give that impression. At the beginning of the thread I mentioned I don't pursue OOP for its own sake. My goal is component-based development, and OOP makes it easier for me to achieve that goal. You can have reasonably well decoupled applications using the structured approach. It is harder to achieve the same level of decoupling with typedefs than with classes, but that doesn't mean structured apps are tightly coupled.

If [typedef encapsulation] is all it's used for, then it's a bit of a waste.

Disagree. (Surprised? :lol: )

Regardless, that isn't all it is used for; it's just one way classes can be used.

Maybe it's time to allocate a bit more resource back into core LabVIEW features that everyone uses.

Or maybe it's time for some "old timers" to discard their prejudices and see how new technologies can help them. ;):lol:

I think the main difference between myself and Daklu is that I write entire systems whereas Daklu is focused on toolchains. As such our goals are considerably different. Re-use, for example, isn't the be-all and end-all and is only a small consideration for my projects in the scheme of things.

You're not far off. Most of our projects are fairly small, maybe 1 to 2 months for a single dev. We do usually have 1 or 2 large projects (6-24 months) in progress at any one time. Since I build tools for a product development group, we don't have the advantage of well-defined requirements. They change constantly as the target product evolves. At the same time, we can't reset a tool project schedule just because the requirements changed. Product development schedules are based around Christmas releases. Needless to say, missing that date has severe consequences.

During our rush (usually Sep-Mar) we have to build lots of tools very quickly that are functionally correct yet are flexible enough to easily incorporate future change requests. (The changes can be anything from updating an existing tool to support a new product, to using different hardware in the test system, to creating a new test using features from several existing tests, to creating an entirely new test system.) Reusable component libraries give me the functional pieces to assemble the app. Classes give me the flexibility to adapt to changing needs.

It cannot run on other machines with different hardware

It could if you built in a hardware abstraction layer. :P

Link to comment

1) By repeatedly dismissing the use of a Tree.VI (which was the recommended standard prior to the introduction of the project in LV 8) you are ignoring the obvious solution to typedef updates. See this thread on the Dark Side.

http://forums.ni.com.../m-p/4328#M3291

(Note that message number. Current threads are numbered 388242. That was message number 3291 on the dark side.)

2) Re-use code using clusters and LVOOP are different animals, with LVOOP being more powerful. In my shop "re-use" code has a requirement that "it can be used AS-IS with no modifications". So changing typedefs to use code again is technically not re-use but recycling, since it requires a set of testing to ensure it works after the change. So to my ear the idea of how to update a cluster in re-use code is an oxymoron. This is exactly why LVOOP has made a difference in my development. I can re-use parent classes without modifying them so they meet the requirement of "re-use with changes".

Stepping back and looking at the bigger picture...

I used to look for collections of code at the bottom of my VI hierarchies for re-use candidates.

Now I look at the TOP of my class hierarchies for re-use.

Ben

Edited by neBulus
Link to comment

No, they aren't immune to mistakes, and I didn't mean to imply they were. But they are more robust to common mistakes than typedeffed clusters. They correctly handle a larger set of editing scenarios than clusters.

Marginally :P And on a very particular edge case issue that no-one else seems particularly bothered by :P

No it isn't. I believe you're focusing on how it would fit into your specific workflow, not how it would work in the general case.

Indeed. And I could probably level the same argument at you, since I do not consider my work flow atypical.

Using Tree.vi loads only those vis you want loaded. Usually it's all the vis in the project. What happens if vis not in the project also depend on the cluster? What happens if people aren't even using projects? Should those non-project vis be loaded too? Fixing this issue using an auto-loading system requires those vis be loaded.

What if you've deployed reusable code to user.lib and want to update a typedeffed cluster? You'll have to dig out *every* project that has used that code and put it on your machine before you can update the typedef. No thanks. How should LV react if it can't find some of the dependent vis? Disallow editing? Disconnect the typedef? Two-way dependencies between the typedef and the vi simplify a certain subset of actions, but they create far more problems than they solve.

Lots of what-ifs in there :lol:. Projects haven't always existed and (quite often) I do a lot of editing without loading one. But that's just an old habit because projects haven't always been around and I'm just as comfortable with or without. Perhaps that's the reason I don't see many of the issues that others see, since I'm less reliant on config dialogues, wizards and all the bells and whistles (sure I use them, but they're not necessary).

User lib? Don't use it; I'm not a tool-writer. I don't have any problems re-using my re-usable stuff, never have. To me it's a bit of a storm in a tea-cup :P

Not me. That's death for large projects. 6-10 minute load times are no fun.

That's quite funny. The last project I delivered was about 2000 VIs (excluding LV shipped). Only took about 1 minute to load and run in the dev environment (including the splash screen :P). And that could run a whole machine.

By "class editor" are you referring to the window that looks like a project window that pops up when you 'right click --> Open' a class from your project? There isn't much difference between them because it *is* a project window. Labview opens a new project context and loads the class into it for you. The reason the "class editor" loads everything is because loading a class (or any class members) automatically loads everything in the class. It's a feature of the class, not the "class editor."

Well, that (I would say) is a feature of LabVIEW. If the project also did it, then I'd be a lot happier.

[Edit - I was wrong. It doesn't open the class in a new project context. Given that it doesn't, I confess I don't see an advantage of opening the class in a new window?]

Okay, so since there's no practical way to ensure ALL dependent vis are in memory, the only way to guarantee consistent editing behavior is to not propagate typedef changes out to the vis at all. I'm sure the LabVIEW community would support that idea. :lol:

Sure there is a practical way; load everything in the project.

They did fix it. They gave us classes. Any solution to this problem ultimately requires putting more protection around bundling/unbundling data types. That's exactly what classes do. Using classes strictly as a typed data container instead of typedeffed clusters is *not* a completely different paradigm. (There are other things classes can do, but you certainly don't have to use them.) You don't have to embrace OOP to use classes as your data types.

Requiring a programmer to write extra code to mitigate a behaviour is not fixing anything. Suggesting that classes (OOP?) is a valid method to do so is like me saying that I've fixed it by using C++ instead.

You get load errors, which is vastly more desirable than behind the scenes code changes to the cluster's bundle/unbundle nodes. Besides, the risk of renaming a vi is *far* better understood by LV users than the risk of editing a typedeffed cluster.

I was specifically thinking about the fact that it deletes the mutation history, so being reliant on it is not fool-proof.

Disagree. (Surprised? :lol: )

Never :D But it's a bit cheeky re-writing my comment ;). I was not referring to typedefs at all. I was referring to LVOOP in its entirety. From the other posters' comments it just seems that the main usage that it's being put to is functional encapsulation. Of course it's not a "significant sample". Just surprising.

Or maybe it's time for some "old timers" to discard their prejudices and see how new technologies can help them. ;):lol:

I'm not prejudiced. I hate everybody :D

I have seen how it can help me. Like I said before: lists and collections. ;) I've tried hard to see other benefits, but outside encapsulation I haven't found many that I can't realise much more quickly and easily in Delphi or C++ ;)

You're not far off. Most of our projects are fairly small, maybe 1 to 2 months for a single dev. We do usually have 1 or 2 large projects (6-24 months) in progress at any one time. Since I build tools for a product development group, we don't have the advantage of well-defined requirements. They change constantly as the target product evolves. At the same time, we can't reset a tool project schedule just because the requirements changed. Product development schedules are based around Christmas releases. Needless to say, missing that date has severe consequences.

During our rush (usually Sep-Mar) we have to build lots of tools very quickly that are functionally correct yet are flexible enough to easily incorporate future change requests. (The changes can be anything from updating an existing tool to support a new product, to using different hardware in the test system, to creating a new test using features from several existing tests, to creating an entirely new test system.) Reusable component libraries give me the functional pieces to assemble the app. Classes give me the flexibility to adapt to changing needs.

If it works for you, that's fine. It sounds like a variation on a theme (additions to existing...modification, etc.). That fits with what I was saying before about only really getting re-use within or on variants of a project.

It could if you built in a hardware abstraction layer. :P

No it couldn't. One machine might have cameras, one might have a paint head, another might have Marposs probes whilst the other has Renishaw (you could argue that those can be abstracted, but you still have to write them in the first place). The only real common denominator is NI stuff. And in terms of hardware, we've moved away from them. That's not to say there is no abstraction (check out the "Transport" library in the CR). It's just we generally abstract further up (remember the diamonds?)

Link to comment

That is effectively what you are doing when you use a Tree.vi. In fact, I would prefer that all VIs (and dependents) included in a project are loaded when the project is loaded (I don't really see the difference between the "class" editor and the "project" editor, and the class editor loads everything I think...maybe wrong). Of course this would be a lot less painful for many if you could "nest" projects.

For the record, you can achieve the intended effect using project libraries (.lvlib files).

Link to comment
[...] At the same time, we can't reset a tool project schedule just because the requirements changed. Product development schedules are based around Christmas releases. Needless to say, missing that date has severe consequences. [...]

You get a lump of coal in your stocking?

<EDIT> SNR going down... </EDIT>

Edited by jcarmody
Link to comment

So changing typedefs to use code again is technically not re-use but recycling, since it requires a set of testing to ensure it works after the change.

Just curious Ben, didn't you ever release new versions of your reuse libraries and what did you do when a cluster would benefit from an update?

For the record, you can achieve the intended effect [loading all vis] using project libraries (.lvlib files).

I used to think that too until AQ and someone else (Adam Kemp maybe) straightened me out. Project libraries will check to make sure all the necessary files can be found, but they don't actually load them. (Though they do load sub-libraries.) I have no idea if vis are "loaded enough" for cluster edits to propagate through the library. The details of when something is actually loaded seem rather obscure... I saw a post once from a blue referring to something along the lines of, "dependent vis 4 or more levels down aren't loaded..." (I'm sure I've completely misrepresented what the post really said. I didn't pay much attention at the time and haven't run across it again.)

Now, Shaun could create a single class and put all his project vis in there, but he might well have an aneurysm if he tried that. ;)

You get a lump of coal in your stocking?

Yeah, with a pink slip wrapped around it. :blink:

And I could probably level the same argument at you...

You probably could... of course, I'm not wishing NI to give me another solution to a problem they've already solved. :P

Lots of what-ifs in there :lol:.

Yep, there are. And they all need to have good answers because they are all valid ways in which people use LabVIEW. The question was meant to illustrate the kinds of scenarios people will encounter, not presume that you in particular will do these things. Loading all the vis in the project is a solution to cluster propagation if and only if,

1) Every vi that depends on the cluster is part of the project, and

2) One always opens the project before editing the code.

If those requirements fit into your workflow, as it appears they do, great! But people do work outside of those limitations. I'm not trying to say your dev habits are wrong; I'm simply pointing out that it isn't a very good general-purpose solution to the problem of cluster propagation.

I don't have any problems re-using my re-usable stuff, never have.

If copy and paste reuse works for you, it's certainly an easier way to go about it. The tradeoffs aren't workable for everybody.

Requiring a programmer to write extra code to mitigate a behaviour is not fixing anything.

That's right! Sorry, I forgot. We're Labview programmers... we expect NI to write our code for us. :P

Suggesting that classes (OOP?) is a valid method to do so is like me saying that I've fixed it by using C++ instead.

Really? You're equating the simple act of replacing a typedeffed cluster with a class to that of writing C++ code? :blink:

"Fixing" the cluster to prevent the propogation issue is a lot like fixing up the family minivan to compete on the F1 circuit. You might be able to get close to the desired performance, but it's going to take a lot of hacking, duct tape, and it ain't gonna be pretty.

I was specifically thinking about the fact that it deletes the mutation history, so being reliant on it is not fool-proof.

Renaming a class vi has no impact on mutation history.

Link to comment

Renaming a class vi has no impact on mutation history.

Well, I think from that little lot that it's pretty obvious that we've reached an impasse on that topic and perhaps it's time to expand the scope of the thread so at least it has some technical content again. But I will finish off by drawing your attention to the LV help, because it was obviously something you were not aware of.

If you rename a class, LabVIEW considers it a new class, deletes the mutation history of the class, and resets the version number to 1.0.0.0.

So. OOP is great. It fixes all the ills in the world. It increases re-use so you only need 1 program to do anything anyone will ever want. So might I make a practical suggestion?

There is a very simple library in the Code Repository that would be very easy to convert to OOP and actually (I think) is ideally suited to it (we don't want anything too hard, eh? :P ). Why not re-write it OOP stylee and we can use it as a basis for comparison between OOP and "Traditional" LabVIEW? Then we can plug it into the other things in the CR that also use it and see what (if any) issues we come across in integration.

Does that sound like something you can commit to?

Link to comment

Just curious Ben, didn't you ever release new versions of your reuse libraries and what did you do when a cluster would benefit from an update?

...

To the best of my knowledge, the only cluster exposed in my re-use is an error cluster. If NI updated it, our testers SHOULD catch discrepancies. All of the rest use native LV data types. Enums, on the other hand, are a different animal. Jim Kring taught me about using wrappers around my action engines (thank you again Jim!) and that suggestion has helped reduce the coupling between re-use code and the code that uses it.
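(For anyone who hasn't seen the wrapper idea, here is a loose analogy in Python with made-up names; a real action engine is of course a VI with a case structure and an uninitialized shift register, not a text-language module. The point is just that callers see small named wrappers and never touch the command enum.)

    from enum import Enum, auto

    class _Cmd(Enum):            # the command enum stays private to the engine
        INIT = auto()
        ADD = auto()
        READ = auto()

    _total = 0                   # the engine's internal state

    def _engine(cmd, value=None):
        global _total
        if cmd is _Cmd.INIT:
            _total = 0
        elif cmd is _Cmd.ADD:
            _total += value
        elif cmd is _Cmd.READ:
            return _total

    # Public wrappers: the only API callers ever touch.
    def reset():
        _engine(_Cmd.INIT)

    def add(value):
        _engine(_Cmd.ADD, value)

    def read_total():
        return _engine(_Cmd.READ)

    reset(); add(3); add(4)
    print(read_total())          # 7; commands can be added or renamed without touching callers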

Aside... to all contributors on this thread...

Please accept my thanks for the interesting discussion. It is a public service that I appreciate greatly.

Thank you!

Ben

Link to comment

But I will finish off by drawing your attention to the LV help, because it was obviously something you were not aware of.

I am very much aware of that. It also occurs if the namespace changes, such as by changing the library nesting hierarchy if you're into that sort of thing. That's exactly why most people don't depend on automatic serialization. (And mutation history doesn't have any impact whatsoever on the bundle/unbundle nodes contained within class methods.) However, you said, "what happens if you rename a class's VI," not "what happens if you rename a class." Renaming a VI does not affect mutation history.

<snarkiness>

In the future I'll do my best to respond to what you think you wrote instead of what you actually did write.

</snarkiness>

So. OOP is great. It fixes all the ills in the world.

You are misunderstanding my intentions. I'm not trying to convince the Labview world to start developing OOP applications. Designing good OOP apps is actually quite hard when first starting out. Designing good single-point reuse OOP code is even harder. This discussion has focused on the contrast between using clusters and using classes to pass data between different parts of an application. Using a class as nothing more than a data container to replace a cluster is dirt simple.

My point in the original post and in this discussion is this:

1. There is an issue with typedef cluster propagation that many people are not aware of.

2. Here are situations where one might unwittingly encounter this issue. (Listed earlier.)

3. If these situations are possible in one's workflow, using a class instead of a cluster provides protection that prevents this problem from occurring.

Your solution to the issue was to set up your workflow so all dependent vis are always loaded into memory whenever you edit the source code. That's fine. It works for you. But there are other equally valid workflows where that isn't an option. For those people, using a class is the only way to ensure they will not encounter that issue.

Does that sound like something you can commit to?

I can't commit to anything right now. It's the busy season at work, Christmas is upon us, my wife is recovering from major knee surgery, LapDog is behind schedule, I have presentations to prepare for the local users group, etc. Besides, I honestly do not see the point. Classes clearly provide me with a lot of added value while you get little benefit from them. What would be the goal of the exercise?

Additionally, comparing procedural code to OO code using a pre-defined, fixed specification will always make the procedural code look better. The abstraction of classes is a needless complication. The benefit of good OO code is in its ability to respond to changing requirements with less effort.

Link to comment

I can't commit to anything right now. It's the busy season at work, Christmas is upon us, my wife is recovering from major knee surgery, LapDog is behind schedule, I have presentations to prepare for the local users group, etc. Besides, I honestly do not see the point. Classes clearly provide me with a lot of added value while you get little benefit from them. What would be the goal of the exercise?

Granted. It is a difficult time of year. The new year would be fine when things are less hectic.

The goal? To highlight the advantages and disadvantages of one paradigm over the other with a real-world practical example instead of esoteric rhetoric. Your Father may be bigger than my Father. Let's get the tape measure out.

Additionally, comparing procedural code to OO code using a pre-defined, fixed specification will always make the procedural code look better. The abstraction of classes is a needless complication. The benefit of good OO code is in its ability to respond to changing requirements with less effort.

This I don't understand. You should always have a spec (otherwise how do you know what to create?). It's not fixed (what about adding serial?), only the interface is fixed (which you should understand since you are creating re-use libraries).

In fact I chose it because it is particularly suitable for classes and IS a re-use component. It is very simple, well defined, obviously possible (since it already exists) and if it takes you more than a day, I'd be very surprised. You talked previously about HW abstraction. Well, here it is. You talk about re-use; it's used in Dispatcher and OPP Push File. It ticks all the boxes that you say OOP is good at, so I think it would be a good candidate for comparison.

At the end, your LapDog thingy should work over TCP/IP, UDP, IR and Bluetooth as well. Wouldn't that be nice?

If you think OOP is just for changing requirements, then you have clearly not understood it. ;)

Link to comment

Great stuff. :thumbsup: Let's see if Daklu is prepared to expand a little on it now you've shown the way ;)

D'ya know what? I agree. Of course, some people seemed to think that's not necessarily a good thing. ;)

I know :D You never know. Maybe I'll get my arse kicked and version 2 will be a class with you named as a major contributor :lol:

So, here you go. I did some one-handed bed coding and here's a basic mod of the transport library into LVOOP (2009).

Some relevant comments:

  1. The transport example shows off the OOP advantage mainly through the inheritance property of OOP. This does not seem to be what your discussion was about, although the example does also reflect on your discussion.

The discussion was originally going to be much broader. But we got bogged down on a specific facet.

  2. I didn't follow your API exactly. For example, your API leaked the TCP read mode enum out to the outer layer, where it's irrelevant. I didn't create a relevant equivalent in the API, since you didn't use it, but an accessor could be created for it and called before calling the Read VI.

The mode is relevant since the CRLF and buffered are very useful and vastly affect the behaviour. But for the purpose of this discussion it's irrelevant..

  3. I only implemented TCP and UDP.

OK. They are 2 of the most popular.

  4. I recreated the server and client examples (they appear under the classes in the project).

Yup. They work fine.

  5. You'll note that the inputs are now safer and easier to use (e.g. local port in the Open VI is now a U16 and does not mention BT).

Yup.

  6. I changed the access scope on your VIs so I could use them without having to make a copy.

Naturally.

  7. The VIs themselves are also simpler (See the UDP Write VI, for instance. In your VI, it takes wires from all kinds of places. In my VI, it's cleaner and clearly labeled).

And some are just wrappers around my VIs ;) I, of course, have to make a decision. Do I put certain "messy" ones in a sub-vi just to make the diagram cleaner or not (maybe I should have). You don't have that decision, since you have to create a new VI anyway.

  8. Whenever you make a change (e.g. add a protocol), you have to recompile the code which is called by your various programs (assuming you reuse it) and you have no way of guaranteeing that you didn't break anything. With the classes version, you don't have to touch the existing classes. Code which is not recompiled does not need to be verified again.

Good point.

  9. I didn't like all the choices you made (such as using a string to represent either a port or a service), but I kept some of them because I was not planning on doing a whole refactoring.

What would you have done instead?

  10. Also, you should note that my implementation is far from complete. Ideally, each class would also have more private data (such as which ports were used) and accessors and do things like input validation and some error handling, but I only created the most basic structure.

Of course. I'm not expecting a complete re-factor. In fact, it's probably to our advantage that there is only a partial re-factor, since it mimics a seat-of-yer-pants project. That way, as the new design evolves we will be able to see what issues come up, what decisions we make to overcome them and, indeed, what sacrifices we make (you have already made one ;)).

To expand a bit on points 8 and 9 - You mentioned adding serial. That's a good example. What would happen if you now need to add serial, USB and custom-DLL-X to your protocols? You would have to touch all the existing VIs. You will be asked to recompile every single caller of the library (although with 2010 this is finally less of an issue). You would need to overload your string inputs to include even more meanings than they already do, etc. Contrast that with creating another child class and creating overrides for the relevant VIs - it becomes simpler. Also, with classes you can guarantee that no one will change the existing functionality by locking it with a password.

Serial was mentioned for a very good reason. Can you think of why it isn't included? After all, it covers most other bases.

For point 9, I would have preferred a poly VI - one for service name and one for port. Internally, they can call a single utility VI to minimize code duplication, but for the API, you should have it as easy as possible. Currently, you place the burden of formatting the data correctly on the user of the library instead of constraining them where applicable. One example where your code will simply fail is if there's a service name which begins with a number. I have no idea if that's even possible, but if it is, your code will assume it's a port number and fail. This isn't actually an advantage of LVOOP (you could have created a poly VI using your library as well), but it would be easier in LVOOP.

Yup. I was a bit lazy on that. I could have checked a bit harder. We'll call that a bug :lol:

As you said, you could probably now compare the two by setting a series of tasks, such as adding more protocols or adding features, such as adding logging to every write and read.

Actually, logging is a pretty good example. For you, adding logging is fairly simple - you add a path input to R/W VIs or you add it into the cluster. In the LVOOP code, adding an input to the R/W VIs would require touching all the classes, so I would probably do this by creating an accessor in the Transport class and simply calling it before the R/W VIs. This is one specific example where making a change to a class can be a bit more cumbersome, but it's probably worth it to avoid the other headaches.
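(Roughly, in Python form with invented names rather than the actual library, the accessor idea looks like the sketch below; the log path lives in the parent's private data, so no child's Read/Write connector pane changes.)

    # Sketch only: a cross-cutting option added as a parent-class accessor.
    class Transport:
        def __init__(self):
            self._log_path = None

        def set_log_path(self, path):        # the new accessor on the parent class
            self._log_path = path

        def _log(self, direction, data):     # protected helper available to all children
            if self._log_path:
                with open(self._log_path, "a") as f:
                    f.write(f"{direction}: {data!r}\n")

    class TCPTransport(Transport):
        def write(self, data):
            self._log("TX", data)            # one call added inside the method body
            # ...existing TCP write code continues unchanged...

    t = TCPTransport()
    t.set_log_path("io.log")                 # called once, before any reads or writes
    t.write(b"hello")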

You actually have some advantage in this challenge in that you know and use the code whereas I simply did some quick cowboy coding to adapt it, as opposed to planning the API beforehand. I also didn't do some needed stuff (see point 10). That said, the OOP version should still probably hold up, although I wouldn't unleash it as-is. Also, I doubt I will have more time to do actual coding, but I think this base is enough for some thought experiments.

I should also point out that I'm more in your camp regarding the kinds of projects I work on, but that I generally agree more with Daklu in this thread.

OK. I won't comment on your examples just now. Let's wait and see if Daklu is prepared to put some effort in.

Indeed. I probably do know a little bit more since I've been through the design from start to finish. However, it was only 2 days from cigarette packet to release candidate, so probably not much more. The only difficulty was UDP; everything else is pretty much a wrapper around in-built LV functions.

Link to comment
what sacrifices we make (you have already made one ;)).

So first, just to clarify, since it's also relevant for some of the other points (such as changing the scope on your VIs or only creating two classes) - the main design parameter for me on this one was speed - I wanted to put in as little time as possible on the refactoring, just to get the point across. It's really not the proper way to design any kind of API.

The mode is relevant since the CRLF and buffered are very useful and vastly affect the behaviour. But for the purpose of this discussion it's irrelevant..

Yes, but since it's only relevant for some of the transports, it's a leak. It should not have been part of the read VI which is shared by all transports unless the vast majority of them support it.

What would you have done instead?

...

Serial was mentioned for a very good reason. Can you think of why it isn't included? After all, it covers most other bases.

...

Yup. I was a bit lazy on that. I could have checked a bit harder. We'll call that a bug :lol:

...

The only difficulty was UDP; everything else is pretty much a wrapper around in-built LV functions.

To wrap all of these together, I think they are all part of the problem. Ideally, this utility should have a well defined API (where port number for TCP is a U16, for instance, so you don't have that bug) which would also require not just wrapping the primitives (which I assume is why serial wasn't implemented), but coming up with a specific API which would be relevant.
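(A rough Python sketch of that shape, with invented names rather than the CR library's real API: the TCP port is a proper integer, serial keeps its extra parameters in its own constructor, and a new protocol is a new child class rather than an edit to the existing ones.)

    from abc import ABC, abstractmethod

    class Transport(ABC):
        @abstractmethod
        def open(self): ...
        @abstractmethod
        def write(self, data): ...
        @abstractmethod
        def read(self, n): ...

    class TCPTransport(Transport):
        def __init__(self, host, port: int):    # typed port, so "80a" can't sneak in
            self.host, self.port = host, port
        def open(self): ...
        def write(self, data): ...
        def read(self, n): ...

    class SerialTransport(Transport):            # added later; nothing above is edited
        def __init__(self, device, baud=9600, parity="N"):
            self.device, self.baud, self.parity = device, baud, parity
        def open(self): ...
        def write(self, data): ...
        def read(self, n): ...

    def query(t, msg):
        """Caller code works for any Transport child, including ones added afterwards."""
        t.open()
        t.write(msg)
        return t.read(1024)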

If I understand him correctly, one of Daklu's points, based on his experience, is that because you don't know what's going to happen, this API should be as simple as possible and he actually doesn't particularly like extension through inheritance, because as you add features (such as the read mode on TCP), it becomes more complicated, but I'll let him expand on that.

Link to comment

So first, just to clarify, since it's also relevant for some of the other points (such as changing the scope on your VIs or only creating two classes) - the main design parameter for me on this one was speed - I wanted to put in as little time as possible on the refactoring, just to get the point across. It's really not the proper way to design any kind of API.

Of course

Yes, but since it's only relevant for some of the transports, it's a leak. It should not have been part of the read VI, which is shared by all transports, unless the vast majority of them support it.

It's only UDP that doesn't, so it's 75%, which I would count as the vast majority. I could have left it out completely and probably 1 in 50 programmers wouldn't have noticed.

A leak means something different to me. I don't consider a no-op a leak.

To wrap all of these together, I think they are all part of the problem. Ideally, this utility should have a well-defined API (where the port number for TCP is a U16, for instance, so you don't have that bug)

Detecting whether there are any a-z chars will fix that, and it will work as intended. It's a trivial point.

The port number change is the sacrifice I was talking about. You have made a sacrifice: the user now has to write more code to deal with it (as you have done in the example), in exchange for your preference for strict typing and partitioning. That's a design decision. But they didn't create the "Variant" type for nothing, and I use strings like variants.

which would also require not just wrapping the primitives (which I assume is why serial wasn't implemented), but coming up with a specific API which would be relevant.

That's not the reason. But good effort. The main reason is that a serial interface cannot be configured just by a single parameter as all the others can (a port or a service name). It takes many more parameters to make anything useful. Therefore it would have considerably complicated the user's interface to the API, in that he would no longer supply just a single string but also things like baud rate, term char, parity, etc., just for that one interface. It is much more appropriate to add that to a layer up, which is outside the scope of the current API implementation.
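Purely as an illustration of what "a layer up" could look like (hypothetical names, not part of the actual library): the serial-only parameters get bundled in a separate configuration step, leaving the core open/read/write surface with its single simple input.

    def configure_serial(resource: str, baud: int = 9600, parity: str = "N",
                         data_bits: int = 8, stop_bits: int = 1):
        # Bundle the serial-only settings once, at the higher layer.
        return {"resource": resource, "baud": baud, "parity": parity,
                "data_bits": data_bits, "stop_bits": stop_bits}

    # The core API still sees one simple argument; only this layer knows about baud rates.
    session = configure_serial("COM1", baud=115200)
    # conn = transport_open(session)      # hypothetical generic open at the lower layer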

If I understand him correctly, one of Daklu's points, based on his experience, is that because you don't know what's going to happen, this API should be as simple as possible. He also doesn't particularly like extension through inheritance, because as you add features (such as the read mode on TCP) it becomes more complicated, but I'll let him expand on that.

Well. No-one has a crystal ball. You code to the requirements and try to mitigate where you can (usually based on experience). Anything else is just blowing in the wind.

Link to comment

It's only UDP that doesn't, so it's 75%, which I would count as the vast majority. I could have left it out completely and probably 1 in 50 programmers wouldn't have noticed.

A leak means something different to me. I don't consider a no-op a leak.

It's only UDP at the moment. What happens if you want to add more protocols? What if those protocols require additional parameters (as you pointed out for serial)? That was my point about leaking the implementation of specific protocols to the outer layer of the API. There's nothing wrong with it functionally, but it's a problem with the design of the API.

Detecting whether there are any a-z chars will fix that, and it will work as intended. It's a trivial point.

The port number change is the sacrifice I was talking about. You have made a sacrifice: the user now has to write more code to deal with it (as you have done in the example), in exchange for your preference for strict typing and partitioning. That's a design decision. But they didn't create the "Variant" type for nothing, and I use strings like variants.

I don't see it as trivial, at least not in the context of strict vs. weak typing. LV is basically a strictly typed language, for good reason, and while you can choose to use variants or strings, it means you're placing some of the burden on the user of your API and you're risking running into bugs and run-time errors. A valid decision, but which I would generally prefer to avoid. For example, imagine what would happen if every time you wanted to use the DAQmx Write VI, you would need to pass along a DBL formatted as a string and if you wanted to use the 1D version, you would need to pass along a DBL array formatted as a pipe-delimited string. It's not very convenient, is it?
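Spelling the analogy out as a text-language sketch (the function names here are made up): the strictly typed call can't be wired up wrongly, while the string-based one pushes formatting onto every caller and parsing onto every implementation, and mistakes only show up at run time.

    def write_analog(samples: list[float]):
        """Strictly typed: the wrong kind of data simply can't be passed in."""
        ...                                               # hand the samples to the driver

    def write_analog_str(samples: str):
        """'Stringly' typed: every caller formats, every callee parses."""
        values = [float(s) for s in samples.split("|")]   # a typo only fails at run time
        write_analog(values)

    write_analog([1.25, 2.5, 3.75])
    write_analog_str("1.25|2.5|3.75")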

And although this entire point has nothing to do with LVOOP, really, I should also point out that LVOOP was also designed to replace some of the cases where you use variants for run-time typing.

That's not the reason. But good effort. The main reason is that a serial interface cannot be configured just by a single parameter as all the others can (a port or a service name). It takes many more parameters to make anything useful. Therefore it would have considerably complicated the user's interface to the API, in that he would no longer supply just a single string but also things like baud rate, term char, parity, etc., just for that one interface. It is much more appropriate to add that to a layer up, which is outside the scope of the current API implementation.

There was nothing forcing you to use a single VI to open the connection. You could have created a TCP Open, Serial Open, etc. Of course, with LVOOP this is somewhat easier, as the components are already broken off into classes. And just to demonstrate it, here's a simplified Serial VISA class which took less than 10 minutes to create - just drop it in the classes folder and it's ready for use. Of course, you won't be able to use it in your client/server program, as that doesn't have the configuration options required for serial, but that's a problem with the program, not the API, although that, of course, depends on the definition of the API. If one of the design decisions is that users can configure it with a single, simple string, then this class breaks that rule. I would think that breaking the rule would be worth it on that one point if it means that your users only need to use a single read VI and a single write VI in the rest of the code, but I'm not an actual user, so I can't say that with any certainty.
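For readers without the attachment, the per-protocol open idea looks roughly like this in text form (a sketch only; the real class is LabVIEW code and the names here are hypothetical): each protocol gets its own open with its own inputs, and all of them hand back the same base type with the shared read/write.

    class Transport:
        def write(self, data: bytes): ...
        def read(self, count: int) -> bytes: ...

    class TCPTransport(Transport): ...
    class SerialTransport(Transport): ...

    def tcp_open(address: str, port: int) -> Transport:
        return TCPTransport()                     # TCP Open Connection would go here

    def serial_open(resource: str, baud: int, parity: str) -> Transport:
        return SerialTransport()                  # VISA Configure Serial Port would go here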

Serial.zip

Link to comment

It's only UDP at the moment. What happens if you want to add more protocols? What if those protocols require additional parameters (as you pointed out for serial)? That was my point about leaking the implementation of specific protocols to the outer layer of the API. There's nothing wrong with it functionally, but it's a problem with the design of the API.

Well, that's the crux of the difference in our argument positions, isn't it? ;) I never try to make all-singing, all-dancing APIs that will be all things to all men with a view to abstracting all current and future interfaces. I don't want to add more protocols, nor will I ever at this level. I just want to make it easier to use those ones. I've looked at how I, and others, use them and simplified it to a couple of parameters, unified the inputs, and implemented a client/server relationship for those that don't have it (UDP). If other interfaces are required, then they will build on this API at the next higher level (similar to what you have done with the UDP). TCP, UDP, Bluetooth and IR have been modularised and simplified.

I don't see it as trivial, at least not in the context of strict vs. weak typing. LV is basically a strictly typed language, for good reason, and while you can choose to use variants or strings, it means you're placing some of the burden on the user of your API and you're risking running into bugs and run-time errors. A valid decision, but which I would generally prefer to avoid. For example, imagine what would happen if every time you wanted to use the DAQmx Write VI, you would need to pass along a DBL formatted as a string and if you wanted to use the 1D version, you would need to pass along a DBL array formatted as a pipe-delimited string. It's not very convenient, is it?

The only burden I'm placing on the user is to use a properly formatted string (which doesn't have a huge number of restrictions). This is no different from VISA, for example. I would in fact say that the class implementation burdens the user more, since now he/she has to cope with multiple VIs depending on which interfaces he wants to use and has to supply different parameters depending on the use. This will get worse if you do require the API to support more interfaces that take different parameters, and although it's easy for you to keep adding classes, the user has more and more interfaces to contend with. I have attempted to simplify, and the classes are putting the complexity back in again.
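To show the trade-off from the user's side, here are the call sites only (every name is hypothetical): with the string-configured style there is one open regardless of interface, whereas the class-per-interface style asks the user to pick a different constructor, each with its own parameter list.

    # String-configured style: one entry point, whatever the interface.
    def transport_open(interface: str, address: str):
        return (interface, address)           # stand-in for the real connection info

    conn_a = transport_open("TCP", "localhost:5000")
    conn_b = transport_open("UDP", "localhost:5001")

    # Class-per-interface style: a different constructor, with different inputs, per interface.
    class TCPTransport:
        def __init__(self, host: str, port: int): ...

    class SerialTransport:
        def __init__(self, resource: str, baud: int, parity: str): ...

    conn_c = TCPTransport("localhost", 5000)
    conn_d = SerialTransport("COM1", 115200, "N")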

And although this entire point has nothing to do with LVOOP, really, I should also point out that LVOOP was also designed to replace some of the cases where you use variants for run-time typing.

If you mean replacing a variant primitive with a class acting like a variant, then possibly. I don't know what the intention was, but if it involves me writing more code, then I'm not a fan.

There was nothing forcing you to use a single VI to open the connection. You could have created a TCP Open, Serial Open, etc. Of course, with LVOOP this is somewhat easier, as the components are already broken off into classes. And just to demonstrate it, here's a simplified Serial VISA class which took less than 10 minutes to create - just drop it in the classes folder and it's ready for use. Of course, you won't be able to use it in your client/server program, as that doesn't have the configuration options required for serial, but that's a problem with the program, not the API, although that, of course, depends on the definition of the API. If one of the design decisions is that users can configure it with a single, simple string, then this class breaks that rule. I would think that breaking the rule would be worth it on that one point if it means that your users only need to use a single read VI and a single write VI in the rest of the code, but I'm not an actual user, so I can't say that with any certainty.

It's not quite ready because it needs a client/server emulation, the same as the UDP. But that's not the point. I could also add serial (in 10 mins), but it would basically be the same as yours (without classes), requiring the user to wire up additional controls. The only difference would be that I'd be adding cases instead of VIs.

But I disagree that it's a problem with the program. The program is able to write and read data, which is its only purpose. If the API can get the required configuration options from a single input, then the program will work. I experimented with mine and I could do things like make the string a comma-delimited config or pass it a file name to load the settings, but that doesn't translate well to the other interfaces, is very unintuitive and, well, just plain crap. Anything else needs more controls and requires the user to treat serial differently, so it doesn't fit with the remit.

But user/component boundaries aside.

The really interesting thing for me is this: if (hypothetically, because it won't happen) I were to add more interfaces that did fit with my remit for single-parameter config, I would only be editing 4 VIs. If there were 100 interfaces, I would still only need to take care of 4 VIs. My maintenance base doesn't change. However, for classes each new interface requires the addition of 4-5 VIs no matter how similar the interface is. So in this hypothetical case of 100 interfaces, I only have to make 400 changes as opposed to, oooh... thousands for the addition of your logging example. Classes in LV are replication, which isn't conducive to maintenance.

Edited by ShaunR
Link to comment

There's no disagreement that adding more features and protocols would complicate matters. I don't use the transport library at all, so I definitely don't plan on adding any more protocols to it. Let's just say that the expansion point is apparently not too relevant here and that there's not much point in discussing it.

If you mean replacing a variant primitive with a class acting like a variant, then possibly. I don't know what the intention was, but if it involves me writing more code, then I'm not a fan.

The transport library is actually one example where this is applicable - I replaced your variant+case structure with classes. The classes don't actually have more code, but they do have some overhead (more VIs, more documentation, etc.).
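Roughly what that swap looks like in text form (a sketch, not the actual code): the variant plus case structure becomes a small class hierarchy, and the case structure disappears because the type of the object does the choosing.

    # Before: one write VI, one case per interface, data carried generically.
    def write_case(conn_type: str, conn_data, payload: bytes):
        if conn_type == "TCP":
            pass                            # TCP Write primitive would go here
        elif conn_type == "UDP":
            pass                            # UDP Write primitive
        else:
            raise ValueError(conn_type)

    # After: the same choice made by dynamic dispatch instead of a case structure.
    class Transport:
        def write(self, payload: bytes):
            raise NotImplementedError

    class TCPTransport(Transport):
        def write(self, payload: bytes):
            pass                            # TCP Write primitive

    class UDPTransport(Transport):
        def write(self, payload: bytes):
            pass                            # UDP Write primitive

Same amount of logic either way; it is just spread over more files/VIs, which is where the overhead comes from.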

The really interesting thing for me is this: if (hypothetically, because it won't happen) I were to add more interfaces that did fit with my remit for single-parameter config, I would only be editing 4 VIs. If there were 100 interfaces, I would still only need to take care of 4 VIs. My maintenance base doesn't change. However, for classes each new interface requires the addition of 4-5 VIs no matter how similar the interface is. So in this hypothetical case of 100 interfaces, I only have to make 400 changes as opposed to, oooh... thousands for the addition of your logging example. Classes in LV are replication, which isn't conducive to maintenance.

Actually, neither of us would need to make hundreds of changes. You would just add the logging after the case structure (assuming you still have a single write VI) and I would set the logging info IN THE TRANSPORT CLASS before calling the write VI. Inside the write VI (assuming I did the job properly), each VI also calls the parent implementation, so all that would be needed would be to add the logging code in one place - Transport.lvclass:Write.vi. Of course, as I mentioned originally, LVOOP makes this more cumbersome than your method, but then again, your VI with the hundred cases isn't the most ideal either.
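In text form, the "one place" looks something like this (a hypothetical sketch): each child write does its protocol-specific send and also calls the parent implementation, so the logging change only ever touches the base write.

    class Transport:
        def __init__(self):
            self.log_path = None            # set via an accessor, as discussed earlier

        def write(self, payload: bytes):
            # The single edit point, the analogue of Transport.lvclass:Write.vi.
            if self.log_path:
                with open(self.log_path, "a") as f:
                    f.write("TX %d bytes\n" % len(payload))

    class TCPTransport(Transport):
        def write(self, payload: bytes):
            # ... protocol-specific send would go here ...
            super().write(payload)          # "call the parent implementation"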

Link to comment

There's no disagreement that adding more features and protocols would complicate matters. I don't use the transport library at all, so I definitely don't plan on adding any more protocols to it. Let's just say that the expansion point is apparently not too relevant here and that there's not much point in discussing it.

That's an interesting comment, since it is usually proposed as one of the greatest advantages of using classes. In this case, however, I agree. There is not much point, because both implementations are complicated to the same degree. The goal of this exercise is to look at functionally identical implementations and discuss the differences. You have gone a long way towards that so that we can (and it is appreciated).

The transport library is actually one example where this is applicable - I replaced your variant+case structure with classes. The classes don't actually have more code, but they do have some overhead (more VIs, more documentation, etc.).

Indeed. The difference is adding or modifying cases or VIs. The latter, I would wager, is the reason for Daklu's excessive load times. But you also left out that new VIs also require new test harnesses (or modification of an existing one, depending on how pedantic you are about "white-box" testing) to exercise all the inputs and outputs, and that can be quite an additional overhead. The cases only require additions to the input parameters of the same harness, so they can be scripted (not LV scripted, good ol' fashioned text scripted). The only real difference is how we realise the tasks. You switch by overriding and invoking a VI; I switch by cases.
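A sketch of the "same harness, just more parameters" idea (all names hypothetical): because the case-based API is driven by data, a new interface is one more row in the test table rather than a new per-class test VI.

    TEST_CASES = [
        ("TCP", "localhost:5000", b"hello"),
        ("UDP", "localhost:5001", b"hello"),
        # ("Bluetooth", "...", b"hello"),   # a new interface is just another row
    ]

    def run_harness(open_fn, write_fn, read_fn):
        # The harness itself never changes; only the table of cases grows.
        for interface, address, payload in TEST_CASES:
            conn = open_fn(interface, address)
            write_fn(conn, payload)
            assert read_fn(conn, len(payload)) == payload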

Actually, neither of us would need to make hundreds of changes. You would just add the logging after the case structure (assuming you still have a single write VI) and I would set the logging info IN THE TRANSPORT CLASS before calling the write VI. Inside the write VI (assuming I did the job properly), each VI also calls the parent implementation, so all that would be needed would be to add the logging code in one place - Transport.lvclass:Write.vi. Of course, as I mentioned originally, LVOOP makes this more cumbersome than your method, but then again, your VI with the hundred cases isn't the most ideal either.

You're right. I'll have to lay off the mulled wine ;) In this case it is the same. It won't always be, but we have agreed that (for this) it is irrelevant.

If we expand further by saying we also want to log opening/closing and listener info, I think you would probably decide to add the definitions to the virtual class for "Open" and "Listener" and would make the implementations completely synonymous (unless of course you want to modify all the open and listener VIs for every class). The difficulty here, though, is that you (or rather the user) still needs to tell it which one to invoke by laying down a class constant. I did something similar in the beginning using refnums (i.e. using type to choose the interface), but rejected it in favour of a ring control as I perceive it to be easier for the user.

Link to comment

If we expand further by saying we also want to log opening/closing and listener info, I think you would probably decide to add the definitions to the virtual class for "Open" and "Listener" and would make the implementations completely synonymous (unless of course you want to modify all the open and listener VIs for every class). The difficulty here, though, is that you (or rather the user) still needs to tell it which one to invoke by laying down a class constant. I did something similar in the beginning using refnums (i.e. using type to choose the interface), but rejected it in favour of a ring control as I perceive it to be easier for the user.

Actually, there I have a bigger problem. I don't require the user to drop a constant because the "constructors" don't have a class input. Instead, the user needs to select a specific VI. This has advantages in that each VI can have different inputs, but it does cause problems in the logging challenge in that there is no VI in the base class. That would mean that we would need to create a logging VI for the base class and then have every constructor call that VI.

But again, this is not because of the LVOOP thing, but rather because I use separate VIs (which I would anyway, since I want different inputs on each of those VIs).
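A sketch of that constructor arrangement (names hypothetical): each transport gets its own create function with its own inputs, there is no class constant to drop, and a shared base-class logging VI is something every one of those create functions would have to remember to call.

    class Transport:
        def enable_logging(self, path: str):
            self.log_path = path            # the base-class "logging VI"

    class TCPTransport(Transport): ...
    class SerialTransport(Transport): ...

    def create_tcp(address: str, port: int, log_path: str = "") -> Transport:
        t = TCPTransport()
        if log_path:
            t.enable_logging(log_path)      # every constructor has to remember this call
        return t

    def create_serial(resource: str, baud: int = 9600, log_path: str = "") -> Transport:
        t = SerialTransport()
        if log_path:
            t.enable_logging(log_path)
        return t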

Link to comment
