OO question about composition



Hi All,

 

I have a set of classes representing an instrument driver which allows for different firmware versions. The instrument can operate in certain modes, and depending on the mode it periodically returns a different number of characters. What I would have done up till now is have a "mode" enum in the parent and a single Read method; inside that, a simple case structure reads a different number of bytes depending on the mode and then parses the string accordingly. No problem here, very simple to implement.
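Since LabVIEW is graphical, here is a rough text-language sketch (in Python) of the enum-plus-case-structure approach described above. The mode names, byte counts, and the FakePort stand-in for the serial port are all invented for illustration:

```python
from enum import Enum

class Mode(Enum):
    NORMAL = 0   # hypothetical mode names
    FAST = 1

class FakePort:
    """Stand-in for a serial port: read(n) returns the next n bytes."""
    def __init__(self, data):
        self.data = data
    def read(self, n):
        chunk, self.data = self.data[:n], self.data[n:]
        return chunk

class Driver:
    def __init__(self, port, mode):
        self.port = port
        self.mode = mode

    def read(self):
        # The "case structure": branch on the mode enum to decide
        # how many bytes to read and how to parse them.
        if self.mode is Mode.NORMAL:
            raw = self.port.read(8)
            return raw.decode().strip()
        elif self.mode is Mode.FAST:
            raw = self.port.read(4)
            return raw.decode()
```

Simple, but every new mode means editing this one case structure, which is the trade-off the replies below discuss.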

 

What I want to do now is remove the enum and make it a class. (It is my understanding that having type-defined controls inside a class can lead to some weirdness.)

 

So I figure I create a Mode class (and child classes corresponding to the different modes my instrument can be in), and then change this object at run time. Each of these mode child classes would implement a Read function and would know exactly how many bytes to read for its specific mode. This seems a bit weird, as I would be implementing the Read function in the Mode class, which does not feel like the right place for it. Alternatively, I could implement a BytesToRead function in each of the Mode classes, and then also a Parse method.

 

Does this sound sensible?

 

Is this going to be complicated by the fact that my actual class holding the mode object is an abstract class?  


Sounds like the Strategy Pattern.

 

If you have much shared code between the different modes, make an abstract base Mode with the Read method in there (to be overridden by child classes) and inherit from it for the different modes. Then use that class in your device class, which may itself have a completely different inheritance tree.
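A hedged sketch of that suggestion, again in Python since LabVIEW is graphical: an abstract base mode holds the shared read logic, children override only what differs, and the device class composes the mode object. Names, byte counts, and the FakePort helper are all made up:

```python
from abc import ABC, abstractmethod

class FakePort:
    """Stand-in for a serial port: read(n) returns the next n bytes."""
    def __init__(self, data):
        self.data = data
    def read(self, n):
        chunk, self.data = self.data[:n], self.data[n:]
        return chunk

class ModeBase(ABC):
    """Abstract base mode: the shared Read logic lives here."""
    @abstractmethod
    def frame_size(self):
        """Each concrete mode knows its own frame length."""

    def read(self, port):
        # Shared code: read one frame, then let the mode parse it.
        return self.parse(port.read(self.frame_size()))

    def parse(self, raw):
        return raw.decode().strip()   # default; children may override

class NormalMode(ModeBase):
    def frame_size(self):
        return 8

class FastMode(ModeBase):
    def frame_size(self):
        return 4

class Driver:
    """Device class composes a mode object; swap it at run time."""
    def __init__(self, port, mode):
        self.port = port
        self.mode = mode

    def set_mode(self, mode):
        self.mode = mode

    def read(self):
        return self.mode.read(self.port)
```

Adding a new mode now means adding a subclass rather than editing a case structure, which is the essence of the Strategy Pattern.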


 

 

What I want to do now is remove the enum and make it a class. (It is my understanding that having type-defined controls inside a class can lead to some weirdness.)

 

I saw that in the thread about decoupling messages when using Actors. What I'm wondering is: does making it a class improve things at all? It seems like you'd have the same dependency problems, plus potential class library corruption problems.

 

I use OO for all my big projects, but more and more, I find myself battling the IDE and mysterious class corruption issues.

 

I'm not sure if switching from a typedef to composition improves things. I do it all the time, but I also use typedefs in the class private data, and I'm not convinced that one is worse than the other. (Though I'm open to being persuaded otherwise.)


I saw that in the thread about decoupling messages when using Actors. What I'm wondering is: does making it a class improve things at all? It seems like you'd have the same dependency problems, plus potential class library corruption problems.

 

I use OO for all my big projects, but more and more, I find myself battling the IDE and mysterious class corruption issues.

 

I'm not sure if switching from a typedef to composition improves things. I do it all the time, but I also use typedefs in the class private data, and I'm not convinced that one is worse than the other. (Though I'm open to being persuaded otherwise.)

 

 

I too am finding this the further I get into LVOOP. It seems that when deciding to use classes for a portion of a project, you really have to consider the other factors you mention more so than with traditional LabVIEW, to the point that if you don't make the right decision at the beginning, you can really hurt yourself way down the line with IDE and other issues.

 

Lately I've limited myself to using classes mainly as fancy clusters because of these issues. Maybe it's a lack of understanding on my part, but it seems to be more hassle than it's worth in a lot of cases.


Lately I've limited myself to using classes mainly as fancy clusters because of these issues. Maybe it's a lack of understanding on my part, but it seems to be more hassle than it's worth in a lot of cases.

 

I still use OO for its design features. If you're not going to use inheritance, dynamic dispatch, etc., then what are you using the classes for?

 

If you want to switch off classes because of the corruption/IDE problems, I'm wholly sympathetic to that. But are you getting enough added value from simply using classes "as fancy clusters" to justify the added corruption/IDE issues?

 

That sounds like the worst of both worlds; opening yourself up to hard-to-track problems with very little of the benefit.


I still use OO for its design features. If you're not going to use inheritance, dynamic dispatch, etc., then what are you using the classes for?

 

If you want to switch off classes because of the corruption/IDE problems, I'm wholly sympathetic to that. But are you getting enough added value from simply using classes "as fancy clusters" to justify the added corruption/IDE issues?

 

That sounds like the worst of both worlds; opening yourself up to hard-to-track problems with very little of the benefit.

 

 

I should clarify that... I still use it as a HAL layer, as I usually deal with hardware, but I have moved away from full-scale frameworks such as the Actor Framework or class-based state machines because of some of these issues. I have moved away from class-based messages as well and gone back to strings. I'm not trying to completely bash LVOOP; it's very powerful, but as soon as you hit one of these IDE issues, the troubleshooting time becomes insane and makes you regret your decision to go that route. I won't even get into how heavy LabVIEW classes feel.

 

Right now I have a major project where one branch of the repository is totally corrupted with a strange object error that refuses to load the entire project. These are the types of things I don't have time to track down because of IDE issues.


If you're not going to use inheritance, dynamic dispatch, etc., then what are you using the classes for?

 

Encapsulation of state, I expect. That is about the only reason I will consider LV classes: stateful drivers/APIs like WebSockets or HTTP. It's a little bit cleaner than LV2 globals and doesn't multiply like a tribble.

 

I have moved away from class based messages as well and gone back to strings.

:thumbup1:

People also forget that hardware abstraction was solved many years ago by firmware engineers. It's called SCPI and, what d'ya know? It's strings :yes:
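For the curious, a toy illustration of why SCPI string messaging stays simple: commands and replies are plain text. The `*IDN?` identification query is real SCPI; the instrument below and its reply values are entirely faked for the sketch:

```python
class FakeScpiInstrument:
    """Toy instrument that answers a couple of SCPI queries.

    Real instruments speak the same plain-string protocol over
    VISA/serial/TCP; only the transport differs."""
    def query(self, cmd):
        cmd = cmd.strip().upper()
        if cmd == "*IDN?":            # standard identification query
            return "ACME,Widget2000,SN1234,1.0"
        if cmd == "MEAS:VOLT:DC?":    # hypothetical measurement query
            return "1.2345"
        return ""

def identify(inst):
    # *IDN? replies are comma-separated: maker, model, serial, firmware
    maker, model, serial, fw = inst.query("*IDN?").split(",")
    return {"maker": maker, "model": model,
            "serial": serial, "firmware": fw}
```

No classes required on the wire: the whole "protocol" is string formatting and string parsing.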

 

I was going to write a presentation called "String Theory - The fundamental building blocks of programming"  and demonstrate that you can create complex and scalable systems that transcend networks, using a service oriented, string messaging design and bugger all code. No-one seems particularly interested unless it has classes in it though :D

 

Meanwhile..........back on topic :P


If it ain't broke, don't fix it. (Not Kool-Aid, more like resisting the tribbles. ;) )

 

I like to try and extend my understanding of things wherever possible, so I can make informed decisions later. This often means trying out features or design techniques I have not used in the past to see if there are better ways of accomplishing things. This is how I have evolved my style over the years. Sometimes the experiment works, sometimes it does not, but I always get to keep some knowledge from the experience.

 

One thing I am trying to get my head around is proper OO design (forget LabVIEW for now). This is something I have some understanding of, but could certainly do with more practice; hence the original question.

 

I agree now with Shane that this looks a lot like the Strategy Pattern.


There are people who may claim that agreeing with me on a LVOOP architecture issue may be a clear indication that you have not fully understood the problem at hand.  :lol:

 

You have a Starcraft II avatar, already that gets you points in my book :-)

 

I have actually approached the problem slightly differently, as I did not like the Strategy object needing to do the VISA read, and due to the asynchronous way my device sends data (it periodically sends data all on its own). I have implemented the received data (and the parsing thereof) as a type of Strategy pattern, but the actual reading of the characters on the serial port is done somewhere else.
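That split might look roughly like the following sketch (Python, with invented names): a session object owns the port and does every read itself, while interchangeable parser objects only interpret the bytes they are handed:

```python
class FakePort:
    """Stand-in for a serial port: read(n) returns the next n bytes."""
    def __init__(self, data):
        self.data = data
    def read(self, n):
        chunk, self.data = self.data[:n], self.data[n:]
        return chunk

class ParserStrategy:
    """Strategy interface: interpret already-received bytes."""
    def parse(self, data):
        raise NotImplementedError

class StatusParser(ParserStrategy):
    def parse(self, data):
        return {"status": data.decode().strip()}

class MeasurementParser(ParserStrategy):
    def parse(self, data):
        return {"value": float(data.decode())}

class SerialSession:
    """Owns the port and performs every read itself; the strategy
    object never touches the transport, it only interprets bytes."""
    def __init__(self, port, parser):
        self.port = port
        self.parser = parser

    def poll(self, nbytes):
        raw = self.port.read(nbytes)   # reading happens here ...
        return self.parser.parse(raw)  # ... parsing is delegated
```

Keeping the VISA read out of the strategy object means the parsers stay trivially testable with canned byte strings.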


I like to try and extend my understanding of things wherever possible, so I can make informed decisions later. This often means trying out features or design techniques I have not used in the past to see if there are better ways of accomplishing things. This is how I have evolved my style over the years. Sometimes the experiment works, sometimes it does not, but I always get to keep some knowledge from the experience.

 

One thing I am trying to get my head around is proper OO design (forget LabVIEW for now). This is something I have some understanding of, but could certainly do with more practice; hence the original question.

 

I agree now with Shane that this looks a lot like the Strategy Pattern.

 

Fair comment.

You asked how to replace an enum. What you got, though, was a suggested architecture framework that will shape your whole application.

OOD !== Classes.


Using the Strategy pattern for a specific set of functions doesn't have to extend into the entire application. It can itself be encapsulated in its own subsystem. The essence of the Strategy pattern doesn't even have to use LVOOP; it can be done with vanilla LabVIEW (but I'm not sure that would make sense).

 

The Strategy pattern is NOT an architecture framework; it's an approach to solving a problem, the scope of which is left up to the programmer.


Hi,

 

What I want to do now is remove the enum and make it a class. (It is my understanding that having type-defined controls inside a class can lead to some weirdness.)

 

It sounds like the primary problem that you want to solve is the problem of a misbehaving IDE, and you have chosen LVOOP as your mechanism for working around that problem. Is this correct?

 

Some good questions to ask yourself are:

  1. From your primary problem's point of view: Will replacing enums with classes make your IDE problem go away, without introducing new IDE problems?
  2. From an architectural point of view: Which approach feels more sensible to you -- your existing enum-based system, or your proposed class-based system?

 

So I figure I create a Mode class (and child classes corresponding to the different modes my instrument can be in), and then change this object at run time. Each of these mode child classes would implement a Read function and would know exactly how many bytes to read for its specific mode. This seems a bit weird, as I would be implementing the Read function in the Mode class, which does not feel like the right place for it. Alternatively, I could implement a BytesToRead function in each of the Mode classes, and then also a Parse method.

 

I agree that your first proposal feels weird: a "Mode" sounds like it should contain config information and parameters, but it shouldn't perform any actions itself. Thus, I'm also not convinced that the Parse method belongs in the Mode class. (Of course, this is also a matter of taste -- I'm sure there are others who are happy to use this approach.)

 

Having each Mode subclass report BytesToRead back to the caller sounds quite reasonable, but only if every current and future Instrument subclass is expected to read the same number of bytes for a particular Mode. (Would there be any cases where different Instruments read a different number of bytes for the same Mode?)
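A minimal sketch of that division of labour, with invented names and byte counts: the Mode only reports its frame size and parses, while the Instrument performs the actual read:

```python
from abc import ABC, abstractmethod

class FakePort:
    """Stand-in for a serial port: read(n) returns the next n bytes."""
    def __init__(self, data):
        self.data = data
    def read(self, n):
        chunk, self.data = self.data[:n], self.data[n:]
        return chunk

class Mode(ABC):
    """A mode is configuration plus interpretation: it reports how
    many bytes a frame holds and how to parse one, but does no I/O."""
    bytes_to_read = 0

    @abstractmethod
    def parse(self, raw):
        ...

class ShortFrame(Mode):
    bytes_to_read = 4
    def parse(self, raw):
        return int(raw.decode())

class LongFrame(Mode):
    bytes_to_read = 9
    def parse(self, raw):
        return raw.decode().split(",")

class Instrument:
    """The instrument performs the read, sized by the current mode."""
    def __init__(self, port, mode):
        self.port = port
        self.mode = mode

    def read(self):
        raw = self.port.read(self.mode.bytes_to_read)
        return self.mode.parse(raw)
```

If different Instrument subclasses ever need different byte counts for the same Mode, `bytes_to_read` would have to become a function of both objects, which is the caveat raised above.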


Can't find the diagram, but I used composition for communication with a family of devices. Device class contained "transport", "firmware version", "key map", etc. Transport (serial VISA, FTDI dll, custom .net dll) had read and write methods - no abstract parent. Firmware Version parent class had specific functions (one per Byte, kind of thing) for version 1. Version 2 inherited from version 1 - some overrides, some extras. Version 3 inherited from version 2, etc. Key map turned scanned ADC arrays into other things. It worked very well, even at six firmware versions. Selection of each type of object was done via enum/case statement. This allowed direct selection and trial-and-error detection of each object in the device.
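A rough Python rendition of that composition, with all commands and version differences invented: the device holds a transport object and a firmware-version object, and the firmware versions form their own inheritance chain:

```python
class LoopbackTransport:
    """One transport variant: read/write methods, no abstract parent.
    Here it just records writes and replays canned reply bytes."""
    def __init__(self, reply=b""):
        self.reply = reply
        self.written = []
    def write(self, data):
        self.written.append(data)
    def read(self, n):
        chunk, self.reply = self.reply[:n], self.reply[n:]
        return chunk

class FirmwareV1:
    """Version 1 defines the baseline command set."""
    def id_command(self):
        return b"ID?"

class FirmwareV2(FirmwareV1):
    """Version 2 inherits version 1: one override, one extra."""
    def id_command(self):
        return b"*ID?"        # overridden in v2
    def temp_command(self):
        return b"TEMP?"       # new in v2

class Device:
    """The device composes a transport and a firmware-version object;
    each can be selected independently (e.g. via an enum/case)."""
    def __init__(self, transport, firmware):
        self.transport = transport
        self.firmware = firmware

    def request_id(self, nbytes=8):
        self.transport.write(self.firmware.id_command())
        return self.transport.read(nbytes).decode()
```

Because each axis (transport, firmware version) varies independently, supporting a new combination means swapping one composed object rather than subclassing the whole device.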

