Object Orientation Isa Hoax

An interesting phrase used by AlexanderStepanov in the following context:

From http://www.stlport.org/resources/StepanovUSA.html

Question:

I think STL and Generic Programming mark a definite departure from the common C++ programming style, which I find is almost completely derived from SmallTalk. Do you agree?

Answer:

Yes. STL is not object oriented. I think that object orientedness is almost as much of a hoax as Artificial Intelligence. I have yet to see an interesting piece of code that comes from these OO people. In a sense, I am unfair to AI: I learned a lot of stuff from the MIT AI Lab crowd, they have done some really fundamental work: BillGosper's Hakmem is one of the best things for a programmer to read. AI might not have had a serious foundation, but it produced Gosper and RichardStallman (Emacs), Moses (Macsyma) and GeraldSussman (Scheme, together with GuySteele). I find OOP technically unsound. It attempts to decompose the world in terms of interfaces that vary on a single type. To deal with the real problems you need multisorted algebras - families of interfaces that span multiple types. I find OOP philosophically unsound. It claims that everything is an object. Even if it is true it is not very interesting - saying that everything is an object is saying nothing at all. I find OOP methodologically wrong. It starts with classes. It is as if mathematicians would start with axioms. You do not start with axioms - you start with proofs. Only when you have found a bunch of related proofs, can you come up with axioms. You end with axioms. The same thing is true in programming: you have to start with interesting algorithms. Only when you understand them well, can you come up with an interface that will let them work.


Anybody care to translate these for me? (Or indicate some concise references that might discuss what the heck he is talking about)

I think he is saying that OO is SingleDispatch, not MultipleDispatch.

Types are usually either tree-based or a list of mutually-exclusive options. It is possible to have multiple orthogonal dimensions by which to group or divide things, and this tends to work against types. For example, types become messy if we try to make Employee types. We might have Manager and Non-manager. But we could also have Part-time and Full-time. These dimensions are orthogonal: we could have a part-time manager, for example. Thus, either we have a jillion competing "types" (MultipleInheritance?), make a goofy tree (see the soda example at LimitsOfHierarchies), or we turn them into attributes of an "instance" of employee and toss the idea of types altogether.
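To make the orthogonality problem concrete, here is a minimal C++ sketch (the Employee/EmployeeRecord names are purely illustrative, not from any real codebase): one version models each dimension as a subtype and needs a class per combination; the other keeps a single class and turns the dimensions into attributes.

  #include <string>

  // Subtype-per-combination: two orthogonal dimensions already force four classes,
  // and adding a third dimension would double the count again.
  class Employee { public: virtual ~Employee() = default; std::string name; };
  class PartTimeManager    : public Employee {};
  class FullTimeManager    : public Employee {};
  class PartTimeNonManager : public Employee {};
  class FullTimeNonManager : public Employee {};

  // Attribute approach: one class, the orthogonal dimensions become plain data.
  class EmployeeRecord {
  public:
      std::string name;
      bool isManager  = false;
      bool isFullTime = false;
  };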

Confusing types with attributes in this way is the kind of thing I've learned to expect from someone just starting on OOP. It imposes on Object the Relational idea that identity is tied to attributes; it's not. That's why DBAs are the last to grasp Object concepts. --mt

AspectOrientedProgramming is one attempt to remedy this. TableOrientedProgramming is another such attempt. IBM has a project called "hyper-spaces" or the like to try to deal with this. I will see if I can find the link.

ThereAreNoTypes


There is no common understanding of just what ObjectOrientation is. Hoaxes are things people do to deceive and usually rob others. C++ does not present itself as only an ObjectOrientedLanguage. It would be best here to insert a quote from the designer of C++ on this very subject (from an interview: http://www.research.att.com/~bs/ieee_interview.html):

"So what is OO? Certainly not every good program is object-oriented, and not every object-oriented program is good. If this were so, "object-oriented" would simply be a synonym for "good," and the concept would be a vacuous buzzword of little help when you need to make practical decisions. I tend to equate OOP with heavy use of class hierarchies and virtual functions (called methods in some languages). This definition is historically accurate because class hierarchies and virtual functions together with their accompanying design philosophy were what distinguished Simula from the other languages of its time. In fact, it is this aspect of Simula's legacy that Smalltalk has most heavily emphasized.

Defining OO as based on the use of class hierarchies and virtual functions is also practical in that it provides some guidance as to where OO is likely to be successful. You look for concepts that have a hierarchical ordering, for variants of a concept that can share an implementation, and for objects that can be manipulated through a common interface without being of exactly the same type. Given a few examples and a bit of experience, this can be the basis for a very powerful approach to design.

However, not every concept naturally and usefully fits into a hierarchy, not every relationship among concepts is hierarchical, and not every problem is best approached with a primary focus on objects. For example, some problems really are primarily algorithmic. Consequently, a general-purpose programming language should support a variety of ways of thinking and a variety of programming styles. This variety results from the diversity of problems to be solved and the many ways of solving them. C++ supports a variety of programming styles and is therefore more appropriately called a multiparadigm, rather than an object-oriented, language (assuming you need a fancy label)." -- BjarneStroustrup


I find the quote from Stepanov to be typical of programmers who understand the technology of object orientation, but do not understand the way of it (sorry for the eastern reference). In principle, OO is simple. If I have those wonderful five properties listed in Booch's book (inheritance, polymorphism, encapsulation, etc.), I have object orientation. However, Stroustrup's comment is much closer to the point. OO is not about creating a set of classes with interfaces defining their actions. In fact, that is closer to *modular* programming than OO programming. OO is about creating a set of data-centric objects whose behaviour is defined by their mutual interactions. People ask me how I can write large C++ programs where no method is larger than 10 lines. The answer is that the functionality is defined by the structure of the objects, not in the methods. There is a huge set of misconceptions about OO that arose from our naive understanding in the 1980s and early 1990s. People who cling to these misconceptions are not stupid, but they are self-limiting. It takes a long time and a lot of effort to grok this stuff. After 13 years I feel like I've only scratched the surface. In fact, I feel like there is a huge universe out there waiting to be discovered.

Very well said. I came to the same attitude towards OO by reading Booch (and combining it with my own experience in building software), and only had it confirmed by Stroustrup after the fact. As for "...long time and a lot of effort to grok this stuff", see SoakTime. -- AndyMoore

And very wrong, too. I have four of Booch's five properties in FunctionalProgramming; only inheritance is missing. Does that make FP the same as OOP? Of course it doesn't; I'd have to add in inheritance, too, wouldn't I? But that's pointless, since delegation works so much better. It follows that OOP is crap, quite probably defined in an ad hoc way just to extract more money from gullible programmers. I rest my case.

Absolutely. Stepanov's reliance on the worst of the OO commercial brochures is sad. --PhlIp

An exact consensus definition of ObjectOrientedProgramming and/or its "true nature" is highly elusive. I am not sure I would blame Stepanov for this "sin" if it is indeed one. It is not his job to clean up the definition of OO. Being vague is a criticism of OO in itself even if not the one he had in mind. Some long-time practitioners believe that "types" are central to OOP, while others shift toward the "physical modeling" camp, where OO was born. They argue for days with each other in some e-forums.


Anybody care to translate these for me?

It attempts to decompose the world in terms of interfaces that vary on a single type.

He must be thinking of CeePlusPlus or JavaLanguage here. It makes no sense to connect interfaces to 'types' in untyped languages. In C++, each type (hierarchy) defines an interface, but a separate type with the same interface is completely unrelated.

It has nothing to do with "typed" or "untyped" languages. He says that OO doesn't support MultipleDispatch nicely.

To deal with the real problems you need multisorted algebras - families of interfaces that span multiple types.

"Families of interfaces" is a misnomer. It's still the same interface, it's just that the algorithm doesn't depend on static types any more.

Same here. He is speaking about interfaces involving several objects (like multiplication of two matrices, where each of them can be sparse, tridiagonal, etc., or implemented as a plain 2-dimensional array).
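Here is a sketch of what that looks like when forced into single dispatch (hypothetical Dense/Sparse classes, not anyone's real library): a virtual call selects behaviour on the runtime type of only one operand, so an operation that should vary on both operands needs the double-dispatch workaround, and every new representation means touching every existing class.

  #include <iostream>

  struct Dense;
  struct Sparse;

  struct Matrix {
      virtual ~Matrix() = default;
      // First dispatch: on the left operand.
      virtual void multiplyBy(const Matrix& rhs) const = 0;
      // Second dispatch: one method per concrete *left* operand type.
      virtual void multipliedByDense(const Dense& lhs) const = 0;
      virtual void multipliedBySparse(const Sparse& lhs) const = 0;
  };

  struct Dense : Matrix {
      void multiplyBy(const Matrix& rhs) const override { rhs.multipliedByDense(*this); }
      void multipliedByDense(const Dense&) const override  { std::cout << "dense * dense\n"; }
      void multipliedBySparse(const Sparse&) const override { std::cout << "sparse * dense\n"; }
  };

  struct Sparse : Matrix {
      void multiplyBy(const Matrix& rhs) const override { rhs.multipliedBySparse(*this); }
      void multipliedByDense(const Dense&) const override  { std::cout << "dense * sparse\n"; }
      void multipliedBySparse(const Sparse&) const override { std::cout << "sparse * sparse\n"; }
  };

  int main() {
      Dense d; Sparse s;
      const Matrix& a = d;
      const Matrix& b = s;
      a.multiplyBy(b);   // runtime types of both operands decide: prints "dense * sparse"
  }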

Nikita, you're probably right but I'm not sure I understand. Does he mean that template <class T1, class T2> void foo(T1 t1, T2 t2) defines an interface that varies on two types? -AQ

I think so. But I cannot say that C++ provides an elegant algebra of template dispatches, let alone the runtime-deferred dispatch issues. --NikitaBelenki
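For contrast, a sketch of the generic-programming side of this (hypothetical function name, needs C++14 or later): the "interface" is simply whatever expressions the algorithm uses, and it spans both type parameters at once, with no common base class required.

  #include <iostream>
  #include <vector>

  // The only "interface" required of T1 and T2 is that x * factor is a valid expression;
  // the algorithm varies on both types, not on a single receiver.
  template <class T1, class T2>
  auto scaleAll(const std::vector<T1>& xs, const T2& factor) {
      std::vector<decltype(xs[0] * factor)> out;
      for (const auto& x : xs) out.push_back(x * factor);
      return out;
  }

  int main() {
      std::vector<int> v{1, 2, 3};
      for (auto d : scaleAll(v, 2.5)) std::cout << d << ' ';   // T1 = int, T2 = double
      std::cout << '\n';
  }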

Stepanov seems to believe that generic programming in the C++ sense is the only true path. The interview is pretty dated and the C++ community has enthusiastically embraced generic programming by now. Read ModernCeePlusPlusDesign to get a more current view of C++ state of the art. --AndrewQueisser

see also GenericVsObjectOrientedProgramming


Let me drop out of theory and into practicality. I have done functional programming, and I have done OO programming. When I program in OO, I can compartmentalize the concepts better in my mind. The code is more readable. In general, my skills are enhanced by OO. Thus, I don't believe it is a hoax. --RobMandeville

But if the OO BenefitsAreSubjective (useful to your particular mind, but maybe not others), then it could still be a hoax to imply that OO is *universally* better, when in fact it may not be. Thus, being good for you does not necessarily exclude a hoax of sorts.

Using the word hoax is ridiculous. It is unnecessarily inflammatory rhetoric. [It is equivalent to trolling.]

This is about an article that happened to use that word. Perhaps StepanovControversy? or StepanovArticle? is a better name?

A whole area of knowledge about programming (or any other endeavor) cannot be dismissed simply because you encountered people who made bigger claims for it than it could satisfy. Or you were personally, deeply betrayed by it somehow, emotionally.

There is no Hoax. There is only human knowledge stumbling in the general direction of Progress. Please quit the grandstanding.

Also, there is absolutely *NO* programming tool or methodology that could be said to be "universally better". Anyone who uses such language at all is betraying a deep foolishness. Either for or against, regardless.

There is no VastConspiracyOfOo?. TheyreNotOutToGetYou?. TakeItDownaNotch?.

The Stepanov allegation seems to be that the industry over-hyped something to make a profit rather than out of a desire for meritorious software. Whether that is a "conspiracy" or not is yet another fuzzy definition debate.

No fuzzy definition necessary.

People originate new ideas and new techniques. If useful, and helpful, these ideas spread. People get excited about them. When marketers get excited, they want to sell. When they want to sell, they produce hype.

I don't think being useful or helpful is a prerequisite for hype bubbles.

The word conspiracy doesn't work. Unless you're trying to say that marketers of "programming solutions" all conspired to be greedy short-sighted bastards. But for that matter, did they act at all out of character with respect to OO, as a special case? No.

But then that is neither here nor there, in relation to the knowledge itself.

[deleted "vacuous" insult]

Regarding what "conspiracy" means, are you suggesting that if marketers and opportunists (M&O) individually, but in temporal unison, knowingly exaggerate the truth it is not a conspiracy, but if they got together first and agreed to exaggerate the truth, *then* it is a "conspiracy"? The irony is that M&O don't have to agree, because they do that stuff out of habit. Either way, it is mostly splitting hairs IMO. Would you be satisfied if he said "massive hype bubble" instead of "conspiracy"?

Are we arguing over the definition of "hoax" or "conspiracy" here? He said "hoax", no? Hoaxes do not require "getting together and planning", do they? M&O's naturally ride on each other's hype without formal declarations.


I don't believe there is a conspiracy or a hoax. I think Object Orientation helps you break down a problem into chunks more easily than past methods and I think it can be a big win in the hands of a good programmer.

Of course critics, and probably Stepanov, would want objective evidence of such a statement. (OoEmpiricalEvidence, ArgumentsAgainstOop). It could also be argued that BenefitsAreSubjective and that OO simply fits your own mind better. The hoax was in the hype that it would solve all your problems and let you replace your programming teams with a bunch of trained monkeys. I also know that OO code written by a poor programmer can be much, much worse than structured code written by a poor programmer. The sword cuts both ways. --BruceIde

What about average programmers? If the best OO is better than the best procedural, but the worst OO is worse than the worst procedural, then it would follow that in the middle they probably break even. Thus, under this reasoning, the average programmer will not see OO benefits.


Stepanov's understanding of axioms is exactly backwards. So if his class/axiom analogy is correct, he disproves his own point.

Stepanov is speaking from a mathematician's perspective, not a layperson's perspective, and this analogy is entirely appropriate. Which came first, the axioms for Euclidean rings or the Euclidean algorithm? Only after millennia of proofs about arithmetic did we begin to abstract them into things like groups and fields.

Working mathematicians do not start with a set of axioms and see what interesting things they can prove. Such a strategy is intractable for any nontrivial set of axioms. Rather, they take existing proofs and map them into algebra.

Yes, but once we have 'things like groups and fields' it is more effective to talk in terms of those abstractions, and casually elide the millennia of effort that went into producing them.
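For a concrete flavour of "algorithms first, axioms later" in the C++ the rest of this page is about (a sketch, not Stepanov's own code): Euclid's algorithm is written for a concrete type first, and only after noticing which operations it actually uses do we name the requirements and make it generic.

  // Written first, for a concrete type:
  int gcd(int a, int b) {
      while (b != 0) {
          int r = a % b;
          a = b;
          b = r;
      }
      return a;
  }

  // Abstracted afterwards: the "axioms" are read off the working algorithm.
  // T needs %, copying, comparison, and construction of a zero value from 0
  // (roughly, T models a Euclidean domain).
  template <class T>
  T gcd(T a, T b) {
      while (b != T(0)) {
          T r = a % b;
          a = b;
          b = r;
      }
      return a;
  }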


This is really interesting, because Stepanov is saying that using class hierarchies is like detecting axioms first, and writing the final program is like finding the useful proofs later. He, on the contrary, is in favor of finding the useful program first (the proofs) and only later deciding what the classes should be (this sounds a lot like refactoring to me). -- GuillermoSchwarz


To understand Stepanov's mind, you should read "Notes on Programming", available as a PDF on his wiki page. The OO way of thinking is actually good for conception, but computers don't work with concepts. Information is processed procedurally. To be economical and efficient, one must code close to how the computer works. Objects are not a competitor to procedural; they are a (heavy) superset. They have a memory cost, and consequently a processing cost. Compiling OO code can never result in machine code as clean as compiling procedural code (given the same quality, obviously). OO might help you think (and even that has to be proved: how do you think about your next day's activities? By describing objects, or procedurally? How are calendars made? By listing the objects that will compose your day, or by successive hours? Maybe you think of a 3pm meeting in terms of its attributes? OK, you can, but what's the point?). Can't you see that object orientation is all based on procedural code, with added conceptual rules that have no intrinsic value other than being a burden? Just like every other paradigm, for one good reason: procedural is the way a computer actually works. From that point of view, object orientation is a hoax. It's a conceptual tool with no practical interest other than giving you a ready-made organizational pattern.
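A minimal illustration of the memory cost mentioned above (exact sizes are implementation-defined; typical 64-bit values shown): adding a single virtual function makes every instance carry a hidden pointer to its vtable, and virtual calls go through an extra indirection.

  #include <iostream>

  struct PlainPoint {                 // plain data, no OO machinery
      double x, y;
  };

  struct VirtualPoint {               // same data plus one virtual function
      double x, y;
      virtual double normSquared() const { return x * x + y * y; }
  };

  int main() {
      // Typically prints "16 24" on a 64-bit implementation:
      // the second struct carries a hidden vtable pointer per instance.
      std::cout << sizeof(PlainPoint) << ' ' << sizeof(VirtualPoint) << '\n';
  }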

Whether OOP matches the way people think probably depends on the individual. No two people think alike. As far as performance/speed, OOP appears to be "good enough" for many applications, and is thus not always a barrier to use. I have a lot of criticisms of OOP, but performance wouldn't be near the top.


I feel that an important concept is being overlooked. There are rich frameworks available to modern software developers, for example Java and .NET, and within these communities there are vast numbers of frameworks that specialise in solving particular problems. Further, there is a large volume of third-party commercial frameworks that can be purchased and integrated.

My point is that OO may not be powerful or particularly useful for decomposing problems within a particular vertical problem domain, but it is extremely useful in allowing frameworks, types, and functionality to be used across problem domains. It is in the horizontal integration across problem domains that it stands out.

The power of OO is not about building class hierarchies like "a Cat is an Animal", creating an Animal base class and inheriting from it to create a Cat class, so that now and then we can treat our cat as an animal and get all excited.


I wouldn't say OOP is a "scam", but I would probably say "overly hyped". The idea of reusing code is great, but you can do that with old-fashioned procedural "functions" or "subroutines". If you want to think of them as "objects" that "do" something, that's great, more power to you. But all the contortions of syntax and theory that go into creating and "instantiating" classes have given me a headache that's lasted for well over a decade.

Note that "reuse" has generally fell out of grace as the top benefit of OOP. (See ReuseHasFailed). However, I don't see anything close to a consensus on what the new top benefit actually is. This is a bit strange. Some claim "a little bit of everything". -t


CategoryCpp, CategoryOopDiscomfort, CategoryRant

