Object-Oriented Programming — The Trillion Dollar Disaster



Some food for thought:

https://medium.com/better-programming/object-oriented-programming-the-trillion-dollar-disaster-92a4b666c7c7

Note that LVOOP is indeed the "C++, Java and C# variety of OOP" too, and is taught the same way, complete with "cats and dogs", all the same "Gang of Four" design patterns, S.O.L.I.D (https://scotch.io/bar-talk/s-o-l-i-d-the-first-five-principles-of-object-oriented-design), etc.

" The bitter truth is that OOP fails at the only task it was intended to address. It looks good on paper — we have clean hierarchies of animals, dogs, humans, etc. However, it falls flat once the complexity of the application starts increasing. Instead of reducing complexity, it encourages promiscuous sharing of mutable state and introduces additional complexity with its numerous design patterns. OOP makes common development practices, like refactoring and testing, needlessly hard."

I don't agree with everything he says, in particular with his definition of state. The vast majority of systems/applications and their components built with LabVIEW are reactive and hence best modeled as such (https://www.reactivemanifesto.org/). In my opinion, a variable is a state variable in a reactive module/system/object only if the reaction to some event/message (the transition, the actions run in response to the event, and/or the target state) can differ depending on the value of that variable. And each object has its own state. The states of all the objects in the program combined constitute the state of the entire program, of course. But the very purpose of breaking a program into objects with their own ("isolated") states is to not have to deal with that combined state. The state of each object can and should be "mutable" in a sense, because real-life objects do change their state over time, and that can affect not only themselves but their interaction with others. It should not be shared explicitly with other objects, though, and on that I totally agree with him.
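To make that definition concrete, here is a minimal sketch (in Python, purely illustrative; the `Valve` class and its names are my invention, not from any framework): a variable qualifies as a state variable precisely because the reaction to the same event differs depending on its value.

```python
# Hypothetical reactive object: "state" is a state variable in the sense
# defined above, because the reaction to the same "toggle" event depends
# on its current value.
class Valve:
    def __init__(self):
        self.state = "closed"

    def on_toggle(self):
        # Same event, different reaction and different target state,
        # depending on the current value of the state variable.
        if self.state == "closed":
            self.state = "open"
            return "opening"
        else:
            self.state = "closed"
            return "closing"

v = Valve()
print(v.on_toggle())  # opening
print(v.on_toggle())  # closing
```

A variable that merely stores data but never changes which reaction runs would not, by this definition, be part of the object's state.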

Personally, I came to developing my LabVIEW applications as collections of asynchronous modules/active objects/actors, backed by (hierarchical, if needed) state machines and communicating with each other only via queued messages, long before LVOOP was created, let alone before "THE Actor Framework", DQMH, JKI's "State Machine Objects", etc. showed up. Search for "LabHSM" and "EDQSM"; they are both circa 2004-2006.

Those two templates/toolkits/design patterns/frameworks of mine, just like some other similar "asynchronously communicating modules" frameworks, have no explicit sharing of state and are closer to Alan Kay's/Smalltalk's idea of objects, where messaging between objects is "the big idea" and where the state of the receiver object (and, hence, the actual code/action/method that will run as the reaction to a particular message from the sender) is none of the sender's business. (Now recall the Do method in AF's message class and what it takes to decouple actors exchanging such messages!)

"Active" objects and actors are not just data and methods that need to be called externally, by some other methods of some other objects and ultimately by the "God" main loop. (Well, technically, for active objects as opposed to actors, their methods are still called, but those "method calls" are really just messages sent to the object's process.) The main idea, in contrast to "regular" OOP, is that each "active" object (actor, module) has its own "life", in which it reacts to various events, both internal (resulting from its own actions) and external (messages it receives from the environment/other objects). It can, and often does, react to the same event differently depending on its current state. The reaction consists of running some actions (which may include sending messages to other objects) and changing its state (transitions). Ideally, it is true parallelism, not just concurrency: while communicating with others (within the same physical machine or over the network), it runs in its own thread, process, container, virtual machine, or even its own CPU core or an entire physical machine, because each such object is a "microservice". It is like a complete computer in a sense, just as Alan Kay imagined it.
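Transposed from LabVIEW's parallel loops into text form, the idea can be sketched roughly like this (Python, assuming only the standard library; the `ActiveCounter` class and its message names are hypothetical, not taken from LabHSM, EDQSM, or any other framework): the object owns a private message queue and its own loop/thread, and the outside world only ever enqueues messages.

```python
import queue
import threading

# Sketch of an "active object": it has its own "life" (a private loop in
# its own thread) and reacts to queued messages instead of having its
# internals manipulated by callers.
class ActiveCounter:
    def __init__(self):
        self._inbox = queue.Queue()
        self.count = 0
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        # The object's own process: dequeue and react until told to stop.
        while True:
            msg = self._inbox.get()
            if msg == "stop":
                break
            elif msg == "increment":
                self.count += 1  # state is encapsulated, never shared

    def send(self, msg):
        # "Calling" an active object is really just sending it a message.
        self._inbox.put(msg)

    def join(self):
        self._thread.join()

c = ActiveCounter()
for _ in range(3):
    c.send("increment")
c.send("stop")
c.join()
print(c.count)  # 3
```

The sender never sees, nor needs to see, how the receiver reacts; that is the Smalltalk-style decoupling described above.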

I counted at least three more "actor frameworks" built on similar ideas presented at the latest NI Week and GDevCon. Some of them even use LVOOP :-)

Maybe it is time for many other LabVIEW developers to look at LVOOP and "regular", "passive", single-loop/thread (C++, Java, C# style) OOP more critically too.

Some more on the topic:

https://medium.com/@richardeng/the-object-oriented-schism-9dedbbc3496b

 


It's a rubbish post.

OOP is not a framework, as the author claims it is. He / she has clearly not understood what OOP is, what it's supposed to do, how it's supposed to work and when it's NOT best suited to the job.

Yes, OOP needs to be used wisely. No, it's not a band-aid for people who still don't know what they're doing.

And the claim that OOP does not map to the human mind... well, darling, back when OOP was introduced, it was still better than completely unstructured code.

As with all software tools, if you don't understand it, it will hurt you.

Statefulness, messaging and concurrent code organisation actually have nothing to do with OOP. Yes, OOP can be used to get that working, but the two are completely unrelated topics. Each can exist without the other.


😂He knew what was coming:

" I expect some sort of reaction from the defenders of OOP. They will say that this article is full of inaccuracies. Some might even start calling names. They might even call me a “junior” developer with no real-world OOP experience. Some might say that my assumptions are erroneous, and examples are useless. Whatever. "

I think we can all still benefit from a more substantive discussion of his particular points rather than from accusations of "not understanding", let alone name-calling and insults.

The "straw man" demagogic trick is no good either. The author opposes not OOP in general, or as conceived by Alan Kay, but rather the C++/Java "kind" of OOP, especially inheritance hierarchies and "promiscuous sharing" of state. Nor does he advocate "unstructured coding". And the C++/Java/C# style of OOP is not and never has been the only (let alone the "best") alternative to "unstructured coding". The actor model is circa 1973. Smalltalk is even older.

Statefulness, messaging, and concurrent code organization have everything to do with OOP if/when they need to be present in the same program in which you use OOP, if only because you then have to figure out how to implement them in your OOP code. And, to put it very mildly, they are not easily implementable with "regular"/"passive" objects consisting only of data and methods that must be called from "outside" the objects themselves.

 


I had this discussion with a good friend of mine who is a senior developer in a text-based language. He has 15 years of experience in real-world development, probably 10 developers under him, and he keeps up with all the latest in the tech and software development world. He has mentioned to me a few times that in his experience the projects with many, many layers of abstraction, and code hiding behind code, turn out to be difficult to debug. Even searching for what a function actually does leads down many holes of things calling things. Whereas the intermediate developers make a program that is straightforward and does what it needs to.

His conclusion is that his years of experience seeing when things work well and when they don't help guide him in how complex or how simple a set of code needs to be. And when he talks about OO he is very much open to the idea that he just doesn't fully get it, but all of his experiences are summarized with "It looks great and it sounds like it solves my problems, but then in practice it falls apart." I followed that up with a reply I've heard before: "Well, maybe you just don't fully grasp the right way to use it." And his reply was "That's what OO experts tell me."

When it comes to LabVIEW I feel like I have a good mix of OO and non-OO code. Having no classes in a large project is probably a bad sign, and having every cluster be a class is also probably a bad sign. Hardware abstraction and plugin architectures are a couple of places where OO just fits in really well in my mind. Reuse code in general also works well. Everywhere else I'm not opposed to it, but I can see some drawbacks.


I think it is important what exactly we call "OO code". If objects are not simply "dead" combinations of data and methods, but rather have their own "life"/"process", exchange messages with the user, the "environment", and other objects the way actors do, and react to events, then I am all for such "OO code". The "regular"/"passive"/"C++/Java/C# style" objects indeed have a very limited use, and I agree that attempts to build large applications using only them, by constructing and utilizing complicated inheritance hierarchies and design patterns, can lead to a disaster.

6 hours ago, styrum said:

I think it is important what exactly we call "OO code". If objects are not simply "dead" combinations of data and methods, but rather have their own "life"/"process", exchange messages with the user, the "environment", and other objects the way actors do, and react to events, then I am all for such "OO code".

I would say here you are just describing what an Actor does and nothing specific to OOP. An Actor can be an object but doesn't have to be, from my experience. I personally think that this use of inheritance can be the most difficult to actually understand/maintain as well. If you've ever used NI's Actor Framework where the inheritance reaches depths of 5+, trying to understand what the Actor does can be quite difficult (though someone might claim this is because SRP was violated).

Perhaps I'm missing some historical context though and should read some older papers.

8 hours ago, styrum said:

I think we all still can benefit from a more substantive discussion of his particular points rather than from accusations of "not understanding", let alone "calling names" and insults. 

His arguments are a bit woolly. There are much better articles about the deficiencies of OOP in general (fat pipes, the view-port problem, etc.). But all of them are about non-dataflow languages, and one of my main arguments is that LVPOOP solves a non-existent problem, due to LabVIEW's inherent state control. If you want to upset some people, just say that a normal subVI is inheritance by composition ;)

There are reams and reams of posts on here (Lavag) somewhere with me arguing about LVPOOP. Nowadays I use it for mainly two reasons.

  1. Custom Wires.
  2. Session memory.

But that's more to do with the implementation in LabVIEW than anything OOPy.

7 hours ago, jacobson said:

I would say here you are just describing what an Actor does and nothing specific to OOP. An Actor can be an object but doesn't have to be from my experience. I personally think that this use of inheritance can be the most difficult to actually understand/maintain as well. If you've ever used an NI's Actor Framework where the inheritance reaches depths of 5+, trying to understand what the Actor does can be quite difficult (though someone might claim this is because they violated SRP).

Perhaps I'm missing some historical context though and should read some older papers.

Exactly! To implement an actor in LabVIEW one doesn't need LVOOP! Quite a few "asynchronously communicating modules" producer-consumer design patterns, developed by different people independently of each other, including my LabHSM and EDQSM, show that very clearly. After all, LVOOP didn't even exist when mine were developed. One can get away with just a cluster in a shift register to access all the data from any action case; but, OK, let's instead make an instance of a "regular" (LVOOP) class and have each action case just call a corresponding method of that class. It will look a little cleaner.

The main point of creating an actor is to make an object more than just a bunch of data and methods: to give it a "process", a "life", to encapsulate its state, and to enable it to exchange messages with other objects while living that own "life", i.e. Alan Kay's original idea of what an(y) object is (or should be). If your objects are like that (actors), and you build your application with (out of) them instead of "regular" (data plus methods) objects, then you don't need all those complex "Gang of Four" design patterns, which you indeed had better use if you stick with "regular" objects.
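For readers outside LabVIEW, the "cluster in a shift register, with each action case calling a method" pattern described above might be sketched in text form like this (Python; `OvenState` and its messages are invented for illustration, not taken from LabHSM or EDQSM): the consumer loop privately holds one state object, and each message case just invokes the corresponding method on it.

```python
# The state object plays the role of the cluster in the shift register:
# it lives only inside the consumer loop, so no other loop can touch it.
class OvenState:
    def __init__(self):
        self.temperature = 20

    def heat(self):
        self.temperature += 10

    def cool(self):
        self.temperature -= 10

def consumer_loop(messages):
    state = OvenState()       # one instance per loop, never shared
    for msg in messages:      # stands in for dequeuing in a while loop
        if msg == "heat":     # each "action case" calls one method
            state.heat()
        elif msg == "cool":
            state.cool()
    return state.temperature

print(consumer_loop(["heat", "heat", "cool"]))  # 30
```

Whether `state` is a bare record or a class instance is cosmetic; the encapsulation comes from the loop owning it, which is the point being made above.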

IMHO the main harm from "THE" Actor Framework is that it scares LabVIEW developers away from the actor model.


There are lots of design decisions with C++, C#, and Java that in hindsight are poor, but I don't feel like the author focused on the problematic parts, and maybe confused/conflated encapsulation and abstraction.

In my opinion, this style of OO can be best described by:

  • the ability to bundle data (something like a struct or record) and operations on that data (methods, which are essentially functions with an implicit "this" argument) into classes
  • the ability to extend classes via inheritance, and automatically and irrevocably get subtype polymorphism, which can be ad-hoc through overrides
  • the ability to hide implementation details within a class (private fields & methods)
  • Additional ad-hoc polymorphism via interfaces

I don't think I have a problem with a language having any of those features.

I don't know if bundling data and operations is helpful, but I certainly don't think having the option is harmful. I do appreciate that the <class instance>.<method>() syntax is ergonomic. This can also be achieved by "." being syntax sugar where <expression>.<function>(...args) is equivalent to function(expression, ...args).
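Python happens to demonstrate this desugaring directly, since a method is literally a function with an explicit first argument (the class and names below are just for illustration):

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def scaled(self, k):
        # A "method" is a plain function whose first parameter is the instance.
        return Point(self.x * k, self.y * k)

p = Point(1, 2)
a = p.scaled(3)         # ergonomic "dot" form
b = Point.scaled(p, 3)  # the same call, desugared: function(instance, args)
print(a.x, a.y, b.x, b.y)  # 3 6 3 6
```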

I do think it's a problem when developers shoehorn their data types into convoluted inheritance hierarchies, but this isn't an OO problem. It's equally problematic when developers shoehorn data types into any convoluted data representation. I acknowledge that there may be times when type extension along with subtype polymorphism is an elegant representation.

Hiding implementation details is not something I see as problematic at all, and it's not unique to OO languages. Other languages allow private struct fields and other definitions that are only usable inside the module where they're defined.

It's nice when languages have something more powerful than this style of interfaces, like Rust's traits or Haskell's typeclasses, but I don't think that makes interfaces bad, maybe just less useful.

The big problems with these languages are:

  • null - Anywhere you expect a class instance you must also expect the possibility of null. Terrible!
  • unchecked exceptions - effectively an implicit potential return type of every function you write. Terrible!
lack of type-safe sum types/discriminated unions/tagged unions. The OO equivalent, an abstract parent class with multiple concrete children, is comparatively clunky. Scala's case classes are a nice implementation with an OOish flavor.
  • lack of robust treatment of functions as data (this is improving as functional concepts are making their way into other languages)
  • inability to extend the functionality of classes without subtyping
  • Unsoundness in Java's type system via inappropriate generic covariance
  • C++ being enormous and complicated
  • loads of safety issues in C++, but it should get a partial pass for being a lower level language
  • C++'s lack of package manager - this is an ecosystem problem, not a language problem
  • verbosity

What are the alternatives?

  • Maybe<T> instead of null (a generic sum type which is either T or nothing. It must be cased on)
Result<T, Error> instead of exceptions (another generic sum type that is either T or an error; it must be cased on, and what an "error" is is a separate language decision)
  • Give the option for sum-types! Please LabVIEW!
  • Give facilities for functions as data, lambda functions, partial application, etc.
  • Rust-style traits, typeclasses
  • Appropriate generic covariance/contravariance (invariance if you can't be sound otherwise - restrictive is better than unsound)
  • Good package managers & build systems
  • type inference

What languages do these things well that are easy to use? In my opinion: TypeScript, F#, and Scala, which all also support OO stuff. Rust is probably my favorite language, but it is not easy. Haskell is great as a learning experience, but it is definitely difficult, and it is so high-level that laziness makes it hard to reason about when your program will do what at runtime.

57 minutes ago, DTaylor said:

There are lots of design decisions with C++, C#, and Java that in hindsight are poor, but I don't feel like the author focused on the problematic parts, and maybe confused/conflated encapsulation and abstraction.

He confuses OOD with OOP.

 

17 hours ago, styrum said:

Quite a few "asynchronously communicating modules" producer-consumer design patterns, developed by different people independently from each other including my LabHSM and EDQSM show that very clearly.

Again. OOD.

On 9/3/2019 at 2:42 PM, styrum said:

😂He knew what was coming:

" I expect some sort of reaction from the defenders of OOP"

I refuse to accept the OP's idea that all people opposing his childish rant are "defenders of OOP". I'm a defender of intelligence and critical thinking. I'm an engineer. The OP is a petulant fool. Of course some things are true: some things about OOP are good, some are bad. Judgment is required. Being intellectually lazy and claiming "OOP is a framework" is just not worth even arguing against. If he were working with me, I'd make sure the guy was fired. Dead weight.

I've had several beers. This means this post is completely honest. If possibly not accurate.... ☺️

  • 10 months later...

A much longer "nail in the coffin" 😜 of THAT KIND of OOP (written 5 years earlier than the subject rant/"opinion"). Or rather an entire box of nails. And even the interfaces introduced in 2020 don't help LVOOP, considering all those "nails".

Even if you don't read the whole thing, notice the same conclusions (recommendations of alternatives) at the end regarding the paradigms and languages to use instead: the actor model (Erlang) and/or functional programming (Haskell, Clojure). Again, it was written in 2014.

Also notice that some comments above come awfully close to "No true Scotsman..." 😜

http://www.smashcompany.com/technology/object-oriented-programming-is-an-expensive-disaster-which-must-end

