Object Oriented Design Is Difficult

From ObjectOrientationIsDead came the question of whether OO design is "difficult".


Well this is a fascinating topic and there is so much to answer both on this page and the previous page, ObjectOrientationIsDead. One thing I've noticed is that a lot of failures are blamed on OO or Java or C++ when, in fact, they are just due to plain old bad project management practices. -- MikeCorum

DoesOoRequirePristineEnvironment?? (or TechniqueWithManyPrerequisites) If so, isn't this a weakness?

My answer to that would be that OO does not require a pristine environment but it won't save a project from bad practices that would have made a non-OO project fail. I don't see that as a weakness since I don't know of a programming paradigm that would do better in the presence of a bad environment. However, one possible weakness might be that OO takes some time to "get" and some personality types will have a more difficult time than others. This is something I've observed. However, I suppose I am converted. I can't really imagine programming without OO now. Part of my job is to help others get to that place. It takes time. -- MikeCorum


OO doesn't make bad procedural programmers into good OO programmers. Good procedural programmers usually become good OO programmers, but for some it takes longer than others. As to why, who knows?

Perhaps your judgement of a "good procedural programmer" is one who makes their code OO-like. Thus, this is perhaps a kind of SelfFulfillingProphecy.

Some say that OO proponents don't "get" relational, either: See ObjectRelationalPsychologicalMismatch. Most complaints about OO come from functional and relational enthusiasts.

I have to strongly disagree with this thesis, particularly that there is some ObjectRelationalPsychologicalMismatch;

I get OO, I can visualize and translate between static and dynamic models mentally. I also get Relational Models (I can normalize and denormalize models mentally). I know there is an excellent match between Object and Relational models (I can perform an Object to Relational mapping mentally and vice versa). The "mismatch" that many perceive is merely an 'inversion' of the navigability and multiplicity of the links (relations/associations) between things (tables & schemas/objects & classes). AIH I also have good understanding of Hierarchical models such as Jackson, but think these offer the least useful model. I would struggle to explain any of this to anybody that doesn't already get it.

Discussion moved to TablesAndObjectsAreTooDifferent

I think mental processes inherent in personality types are the crux of this issue, but that Psychological Mismatch is actually between good architects/designers/programmers/coders and good communicators. -- MartinSpamer

Perhaps. Each camp is doing a lousy job of communicating their preference. HolyWarWall

Are you saying that good architects/designers/programmers/coders hardly ever make good communicators?

{I do see what appears to be a reverse correlation between communication ability and OO promotion intensity. Those who are better communicators are more likely to either agree that OO may not be the best tool for all problems, or agree that their preference depends on certain assumptions which others may reject. The more "aggressive" OO'ers seem to accept ArgumentFromAuthority as fully legitimate evidence. But this may be because zealots in anything usually rely heavily on ArgumentFromAuthority. However, a bigger issue here is whether the benefits of OO are psychological or universal. Until we answer that, communication probably will not improve. -- top}

MindOverhaulEconomics


OO languages have more built-in facilities that help manage complexity than procedural languages. So a good programmer in a good OO environment will spend less time fiddling with the mechanics of the programming environment and language, and more time wrestling with the problem domain. This is where the difficulty lies.

I would like to see more specifics on this claim. Past attempts seem to result in PersonalChoiceElevatedToMoralImperative

OO design is difficult in part because design is difficult, and in greater part because, used correctly, OO techniques put you much closer to the human problem being addressed than procedural languages do. And that's as it should be, since software builds upon itself. But human problems are much more complex than programming problems.

Contrast the fact that revenue management systems used to do their calculations overnight with the fact that today, airline systems can work out the optimum offer price for a given seat to fly on a given aircraft on a given route, etc., etc. - and do it in soft real-time. Contemporary problems are harder, and bigger, and have tighter constraints. More is being asked of us.

One criticism of OO is that it doesn't have a sound mathematical basis. But the kinds of problem that it gets applied to these days don't have a sound mathematical basis either. Take revenue management. RM systems do involve a great deal of arithmetic, and the schemes for doing so were derived using mathematical techniques, but the problem that RM addresses is: given what we know about people's historical behaviour in purchasing, let's say airline tickets, what can we predict will be an offer price acceptable to this person on the phone right now?

The way that procedural problem solving tends to proceed is to take a given mechanism that will be used to build the solution, amenable to description in a procedural language running on a von Neumann machine, describe the problem in suitable terms and join them up. OO problem solving, done well, tends to describe the problem in terms that are closer to the client's understanding, then invent computational entities to fit. But most of the people who do programming for a job aren't very good at, or interested in, dealing with people's problems in their terms. Since they'd have to be doing that, a little at least, to do OO well, they tend not to.

Measuring how or if a tool or technique "better fits the problem space" or "better fits the customer's mind" has proven difficult. This gets into the fuzzy depths and opinions on psychology and philosophy. It is more art than science.

[Not true. Hammer users can easily tell if a hammer fits their hand by using it. Hammer makers survey and measure hammer users to refine their hammers. The same principles can be applied to programming techniques. Try them all and use the ones that work best for you.]

That seems to agree that it may all be subjective. Different carpenters may select different hammers. Jimi Hendrix held his guitar "wrong", but he could make it do things that nobody knew were possible at that time.

["Subjective" just means a human is involved. Yes, different carpenters select different hammers. Different programmers select different paradigms, techniques, patterns, etc. But look at how a company like Stanley develops hammers. There is science to it.]

They probably use "focus groups" and user sampling surveys. In other words, it is mostly the "science" of marketing. However, marketing tends to have a faddish element to it because humans like status symbols and marketers play into this. Hammers with vertical rubber strips might be "in" for a while, and then horizontal strips for a while, then completely covered with rubber, then back to mostly chrome, etc. Otherwise, they would have gotten it "right" in the early 60's and never had to change much.

[Nonsense. A focus group didn't develop a hammer with a tuning fork in the handle.]

I don't understand how your tuning fork inclusion relates to the above.

[It's one example of hammer makers using science to develop hammers. Vibration is a major cause of long-term injury from tool use. Stanley found that placing a tuning fork inside the handle reduced vibration transmitted to the user's arm. This is physics and medicine, not focus groups.]

Okay, some of it is science, and some is marketing. But I don't see anything close to such rigor with OOP. See also DisciplineEnvy.

[It isn't more of an art than a science as you claim above. It can be observed, models can be built and tested.]

Perhaps, but until it is tested, I hope the industry stops shoving OO down our throats. Prove first, then shove.

Perhaps you could try the same with RDBMSs? NickKeighley

I agree 100%. But it's too late; RDBMSs have already been accepted as "standard tools" since around the early 1990s, and thus are heavily road-tested. (They are not perfect, but better than existing alternatives such as IMS and file-systems-as-DBs.) Similarly, the NoSql movement needs road-testing also. If pioneers want to test them for us and risk arrows in their backs, that's their prerogative (as long as they don't claim they are ready for mainstream). I'd argue OOP GUIs are as entrenched as RDBMSs, at least for desktop apps, and so far nobody has shown a production approach that works better for desktops. (I'd like to see table-oriented GUI systems tried by pioneers. The HtmlStack is a possible alternative, but most agree it sucks from the developer's perspective. Delphi, VB, etc. are nicer than HtmlStack.)

{"Similarly the NoSql movement needs road-testing also."}

{Google have been "road-testing" it for a decade and a half. The Berkeley DB crew have been "road-testing" it for almost thirty years. Looks like it works.}

For large companies with a similar usage profile to Google, okay, I'll agree. But some of the hype seems to imply they are ready for "mainstream". I still consider it a niche tool at this point. There are not a lot of Googles around, quantity-wise.

{What about the Berkeley DB, then? It's been a foundation of devices like network routers and PBXs for decades.}

I googled around a bit, and couldn't find descriptions of practical uses for it, other than LDAP or LDAP-like projects. I will agree that LDAP is "mainstream", but it's mainstream for a niche need. (I am not fond of LDAP, but that's another topic.)

{Have you heard of ActiveDirectory? (If you haven't, you've been living in a box.) LDAP underpins it. Anyway, who cares whether it's "mainstream for a niche need" or not, and who cares whether you're fond of it or not? All that matters is that NoSql has been well road-tested. By the way, every SQL DBMS is a language parser built atop a NoSql data store, by definition. That's pretty "road-tested", too.}

Active Directory is covered under "LDAP-like projects". Berkeley DB is a kit for building databases, usually roll-your-own special purpose database, and thus is not directly comparable to an RDBMS. If you want to argue it's road-proven for special purpose databases, I won't disagree. Such databases are typically hard-wired for a predetermined purpose. Anyhow, it seems there may be a difference between what we each call "No Sql".

{I mentioned ActiveDirectory because it's nearly ubiquitous. NoSql in general isn't comparable to an RDBMS, and the Berkeley DB is precisely the sort of thing typically called "NoSql". Tools like MongoDb?, Cassandra, Hadoop and so on, may provide varying functionality but are conceptually the same: a data store -- very often a key/value or "column" store -- accessed via an API rather than a database language.}

Some have optional query languages, such as "Pig". Anyhow, it's still comparing apples and oranges because the boundaries have yet to settle. They are kits, not really products, although that may be gradually changing. But we are wandering off topic.

{Yes, some have query languages. How is that relevant? How does a kit differ from a product? How is discussing NoSql wandering off the topic of NoSql?}

Why is it off-topic? Well, because, this is an OOP topic, in case you forgot.

{You started it, so don't blame me.}

I didn't blame anybody, only pointed out that "we" are wandering off topic. Perhaps it should be split out, but I am not in the mood for a LaynesLaw dance over "mainstream" etc. any time soon. We don't have literature mentions, studies, etc., so it would merely become yet another AnecdoteImpasse.


One of OO's advantages is that most OO programming environments have known good practice built in to them, much as mathematical notations do.

Example?

The other is that they enable, even encourage, this building upon earlier results (i.e., "reuse"). But more powerful tools require more skill to use. Which is another reason that OO design is hard.

Saying "OO is hard because it is good, and good things are harder" is only inviting HolyWars. If the hardness of OO was really because it (allegedly) has better reuse, would OO be easier if one did not take advantage of reuse? (Related: OoIsNotAboutReuse.)

[That isn't what he said, though. He said more powerful tools require more skill to use.]

Bad tools also require more skill. I am just pointing out that requiring higher skill is not evidence of betterment.


One criticism of OO is that it doesn't have a sound mathematical basis. Well, for one thing, it's rare for those bits of software that do have a sound mathematical basis to be used in accordance with it (e.g. database schemas that get "denormalized for speed efficiency" or "normalized for space efficiency" long before they've ever been deployed on a machine).

Database efficiency is an example of the classic twofold trade-off between size and speed, though instinct and experience can provide a good measure without the need for a hard metric. Soft or non-functional requirements should give a good indication of which of these should be targeted.

See also: IsProgrammingMath, OoLacksMathArgument


When you move from procedural to OO code you make your focus smaller. In procedural code, you have much more coupling, so you have to keep more of the code in your head. In OO, you can focus on a few small objects at a time, and trust that the rest of it will be taken care of by somebody else, even if that somebody else is you tomorrow.

For example, say you're writing code that parses data from a text file and then feeds it into a report. A procedural process for writing this code might be all-or-nothing, so you don't feel finished until the whole thing's done. Object orientation, however, lets you write just one part, say the Parser object, and you can write a stub Report object that has an interface but only enough implementation to compile. You can try to make the Parser robust, if you like, before you even make the Report useful.
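
A minimal sketch of that idea (in Python, with made-up Parser and Report names): the Parser can be exercised while the Report remains a stub that merely satisfies the interface.

  class Report:                      # stub: just enough to satisfy the interface
      def add_row(self, fields):
          pass                       # real formatting comes later
      def render(self):
          return "(report not implemented yet)"

  class Parser:                      # the part being made robust first
      def __init__(self, report):
          self.report = report
      def parse(self, lines):
          for line in lines:
              line = line.strip()
              if not line:
                  continue           # skip blank lines
              self.report.add_row(line.split(","))
          return self.report

  # usage: the Parser can be tested before Report does anything real
  print(Parser(Report()).parse(["a,b,c", "", "1,2,3"]).render())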

What stops one from doing the same thing in procedural? It sounds like regular old SeparateIoFromCalculation to me. One may want to use an intermediate structure, perhaps a database, to store the parsed info. That can be analyzed and tested as both input (2nd stage) and output (1st stage). One could argue it is easier to query and view an intermediate table than a bunch of allocated objects.
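
For comparison, a rough procedural sketch of the same split, with a plain in-memory list standing in for the intermediate table mentioned above; stage one can be tested on its own simply by inspecting the intermediate data.

  def parse_lines(lines):            # stage 1: text -> intermediate rows
      rows = []
      for line in lines:
          line = line.strip()
          if line:
              rows.append(line.split(","))
      return rows                    # easy to inspect, query, or store in a table

  def make_report(rows):             # stage 2: intermediate rows -> report text
      return "\n".join(" | ".join(r) for r in rows)

  print(make_report(parse_lines(["a,b,c", "", "1,2,3"])))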

But this thinking - that we can solve problems as we encounter them, instead of planning for all of that at once - makes some people uncomfortable. Is this part of the psychological mismatch we're groping for? Do programmers like complete control too much?

I find the opposite. Procedural tends to let me focus on just one task at a time. I don't have to worry about dividing the algorithm up by some noun taxonomy or anything else. It queries the database to get the info it needs and then optionally writes the results back to the database, and then (usually) returns control to the caller. A variation of the ol' input-process-output. OO gets your foot tangled in a bunch of different class protocols and rather arbitrary taxonomies that you must grok before effective use. Talking to the database is more consistent. OO seems to invite too much "creativity". The approach to query and talking to the DB is more established, whereas in OO every class seems to reinvent its own version of add, change, delete, find, sort, join, lock, save, etc. in different ways.


Perhaps it's just as easy to write SpaghettiCode with procedures as it is with objects. -- RobHarwood

I would regard procedures as a technique of StructuredProgramming, which is the antithesis of SpaghettiCode. I also think both StructuredProgramming and OO are easier because they aim to allow the complexity to be reduced or localized. -- MartinSpamer

The big difference is that procedures are familiar, objects are novel, so LessAbleProgrammers? who've grown up with procedures have a learning curve to figure out where the dials are. It's just as easy to crash a fighter jet as it is to crash a helicopter. Just as you wouldn't send a novice jet pilot to fly a helicopter without proper training... -- RobHarwood

Some percentage of people who learnt either method first can successfully make the transition; others cannot.

I often learn best by being shown how something is superior to alternative approaches. However, I have not seen such external OO evidence. It seems to be more of an emotion or mind-fit thing than something that improves some externally observable quantity or metric. Time (raw exposure) is not always the best teacher. Often it just solidifies bad practices due to the comfort of familiarity.

What I've found in about ten years of OO programming is that OO code is much easier to create, but is much harder to reuse, and that it needs more continuous refactoring to be as maintainable as procedural code. I started coding just about 17 years ago, so I don't think that's because I am too old to learn. ;) -- NikitaBelenki

It seems that "good OO" is just too hard to demonstrate, document, and articulate. There are no (relatively) simple design principles that most OO celebrities will agree on. It seems like you need to hire a full-time personal OO Zen Master in order to convey proper OO to OO novices or people doing it wrong. Conveying the knowledge of how to do good OO is just too tough right now. The training for "reasonable" procedural-relational design seems a more obtainable goal.

What, then, are we to make of the many programmers who believe deeply that they are more effective with OO code and came to this state without the benefit of any Zen Master consultants?

Perhaps some just find OO difficult and some don't. Either those who don't are just smarter, or there is something about OO that fits their particular mind better.


I see a majority of programmers say they do OO because they program in a language with OO features. Almost none of those programmers really understand OO. In my experience, the ratio of those who truly understand OO to those who only say they do is 5%. And all of that is assuming that I have any real understanding of OO. -- MikeCorum

That is because there is no agreement on what OO and OO modeling is. NobodyAgreesOnWhatOoIs. It depends far too much on ArgumentFromAuthority.


Again, you make an ObjectOrientedCulturalAssumption. The fact that you can't do without inheritance doesn't mean that others can't also. I wouldn't be so judgemental about procedural code considering that all the major OSes are written in C, they are very maintainable, well designed, stable and everything, and we have yet to see an OS either designed or implemented as OO. -- CostinCozianu

Actually, BeOS was written entirely using C++ (as in object-oriented, not procedural with class-like structs). The only exception was at the device-driver level. Be, of course, is dead. That shouldn't diminish the fact that BeOS was a fantastic operating system, way ahead of its time, and implemented using object-oriented techniques. In fact, ask anyone: BeOS is, perhaps, one of the fastest operating systems ever implemented.

Oh yeah, and I also forgot: Around 1991, IBM started reimplementing OS/400 using object-oriented techniques with C++. So that's two that I'm aware of. -- JeffPanici

Here's a third: Symbian OS (formerly Epoc) is written entirely in C++ too. -- MattBriggs?


They are also written with 'dials' equivalent to inheritance, such as table-driven programming with function pointers. Six of one, half a dozen of the other. I can do without inheritance, if I can use table-driven programming. But that doesn't make the design any easier, in fact it makes it harder since I have to deal with all these messy tables. If you want to claim that ObjectOrientedDesignIsDifficult, be fair and also claim that ProceduralDesignIsDifficult?. Refactor that to DesignIsDifficult.
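
A rough Python sketch of the two 'dials' being compared (hypothetical Shape/Circle names; a dict of plain functions plays the role of the table of function pointers):

  # Polymorphic dispatch via inheritance ...
  class Shape:
      def area(self):
          raise NotImplementedError

  class Circle(Shape):
      def __init__(self, r): self.r = r
      def area(self): return 3.14159 * self.r ** 2

  # ... versus a dispatch table of plain functions keyed by a type tag
  def circle_area(rec): return 3.14159 * rec["r"] ** 2
  def square_area(rec): return rec["side"] ** 2

  AREA = {"circle": circle_area, "square": square_area}

  def area(rec):
      return AREA[rec["kind"]](rec)   # "function pointer" lookup

  print(Circle(2).area(), area({"kind": "square", "side": 2}))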

Rob, interface implementation (as in "table-driven programming with function pointers") is not equivalent to inheritance. Inheritance is by definition a partial order. Interface implementation is not an order at all, because there is no transitive relation. -- nb

From a pragmatic point of view, they are equivalent. They are two ways of solving the same problems.

No. Being followed further, they produce different results. With inheritance the dependencies you introduce are explicitly transitive. So to get rid of some coupling in one class, you will probably need to refactor all its relatives.


I've seen developers have trouble with OO because they have trouble thinking abstractly: Without a good grasp of abstractions, OO is just a huge number of tiny subroutines all tied together in a confusing matrix. Without a good grasp of where functionality "belongs," one has little guide as to where to find things or where to make changes (and once having changed, what the change will break).

I recently had an argument with a project manager: The "communication" component, which conveyed data between the different tiers of the system was doubling the apostrophe (') characters in strings, so that things would "come out right" in the database. "No," I said, "that doubling should be done in the data layer." "But it works fine the way it is." he said, and therefore "I can't let you change it." "Aside from being a bad idea, from the OO perspective," says I, "it complicates the business logic, which would have to undo and then redo the doubling, and it makes it impossible to transform this from a 2-tier to a 3-tier system." Finally, he let me change it. (Or maybe I just went ahead and changed it while he wasn't watching. ;-)
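
A minimal sketch of the layering point, assuming made-up names (record_note, DataLayer): only the data layer knows about quoting, so the business logic and communication tiers never see doubled apostrophes. (Real code would normally use parameterized queries; literal quoting is shown only to mirror the example above.)

  # Business logic and the communication tier pass strings around untouched.
  def record_note(customer, text, db):
      db.insert("notes", {"customer": customer, "text": text})

  class DataLayer:
      """Only this layer knows how values must be quoted for the database."""
      def __init__(self):
          self.statements = []
      def insert(self, table, fields):
          cols = ", ".join(fields)
          vals = ", ".join("'" + str(v).replace("'", "''") + "'" for v in fields.values())
          self.statements.append("INSERT INTO %s (%s) VALUES (%s)" % (table, cols, vals))

  db = DataLayer()
  record_note("O'Brien", "can't attend", db)
  print(db.statements[0])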

I think this is a good example where lack of understanding of the abstractions being implemented causes one to put code in the wrong places, making the system brittle, difficult to maintain and unreliable. -- JeffGrigg

I find that OO proponents define abstraction as *being* OOP. A tautology. I consider myself a highly abstract thinker, but I don't find OO an abstraction nirvana. If anything, I find it fails to abstract commonalities I see in dealing with DatabaseVerbs. I see interface repetition in OO. Repetition is often a sign of bad abstraction. To me OO is a pointer nightmare without consistency and reason. It is the Goto of modeling. -- top


I think the problem with OOP is not the method, but the people using the method. Several possibilities exist. The ones I think likely are: Lack of training, lack of familiarity, psycho-technological mismatch, wrong paradigm, misleading hype. Out of all of those, I reject the 'psycho-technological mismatch' hypothesis because I myself, and others I know, including novices and LessAbleProgrammers, have grown to strongly prefer OOP over procedural programming. I'd also say that 'misleading hype' can be subsumed by 'wrong paradigm'. Likewise, since I was formally trained in university for OOP, but still didn't really get it until I read ReFactoring, I'd say that the 'familiarity' and 'training' hypotheses are possibly valid but dwarfed by the 'wrong paradigm' effect.

Overall, my interpretation is that the prevailing paradigm is that OOP is something you do because 'that's the way it should be done'. I used to think that way myself. A much more useful paradigm is that OOP is something you do to help you work faster, given that changes will happen frequently. If it's not helping, don't use it. This goes for the whole method, as well as each individual practice. That's what I mean when I say OoIsPragmatic. Use inheritance to reduce duplication, use polymorphism to simplify logic, use objects to make thinking about your problem more natural, etc. So OO isn't difficult because it's inherently tough, but because we've been learning it in an overly complicated and generally irrelevant fashion. -- RobHarwood
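
A small, hypothetical before-and-after of the "use polymorphism to simplify logic" advice (the Letter/Parcel names are made up):

  # Conditional logic at the call site ...
  def postage(order):
      if order["kind"] == "letter":
          return 1.0
      elif order["kind"] == "parcel":
          return 5.0 + 0.5 * order["kg"]

  # ... versus each case carrying its own behaviour
  class Letter:
      def postage(self): return 1.0

  class Parcel:
      def __init__(self, kg): self.kg = kg
      def postage(self): return 5.0 + 0.5 * self.kg

  for item in (Letter(), Parcel(3)):
      print(item.postage())           # no branching at the call site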

"A much more useful paradigm is that OOP is something you do to help you work faster, given that changes will happen frequently. If it's not helping, don't use it." I'm on your side. I chafe at OO languages when programming a finite state machine... FSM programmers work from tables, not methods. I skip objects when writing quick scripts or file manipulation programs. I question objects when doing screen-to-database programming. I like objects when the program is going to stick around and I have to live with it, or it's bigger and harder. OoIsPragmatic is a fine phrase. (You're also right that people do OO 'cuz it's being done, but that's to the side of the question.) -- AlistairCockburn -- IMO: What is hardest about OO is that too many trainers have claimed for too long that OO is a silver bullet, when its only claim to fame is that it is not a poison-coated lead bullet. OO has shiny bits and can be made too work trivially on some simple examples in a laboratory.

This presents learners of OO with the illusion that they are capable of designing a program (the answer) in at least one paradigm. The problem really is that in the real world, where problems start out pear-shaped, they will fail to understand the question.

If, in contrast (a control group), the same sample of randomly selected humans attempted to become procedural programmers, my hypothesis is that self-culling would occur. Thus in the wild there are more incompetent programmers and designers who think they can do OO than who think they can do functional decomposition.


Failures of OO projects shouldn't be blamed on language, since TuringEquivalent languages are basically the same. -- RobHarwood

Turing equivalence of the languages isn't the issue - Turing equivalence is at the level of mathematical power. Difficulty in programming is at the cognitive level, and at the code maintenance level. The latter means, "How many lines of code do I have to touch to make this change?" Any kind of badly designed code is more difficult at the code maintenance level, but still equivalent at the Turing level. The question of is OO design more difficult (to me) has to do with the "this mechanism or that?" questions running through the programmer's head while programming, what you call the "more in quantity and more in kind of decisions to make at each step". If, in procedural programming, I had to decide at each step whether to involve a table-lookup or a switch or a FSM or a quick neural net or rule-system reference, then I'd complain about the cognitive complexity of procedural programming. -- AlistairCockburn


I'm not good enough at procedural programming to detect whether a program is full of random "stuff" (in lieu of design) that will make maintenance harder. I am good enough at OO programming to detect that, and I see it a lot these days. I attribute it to OO being harder. A third possibility is that there's the same amount of equally bad rubbish being produced in procedural code, and I've just not had a chance to notice it. -- AlistairCockburn


Or perhaps that's the factor that makes some people not realize that OO is one good method amongst many? It's not just "OO or Pascal" you know.

Quite. It's OO and Pascal: DelphiLanguage

On this page do we mean ObjectOriented as Java and C++? And "procedural" as C and Pascal? If so then I think Smalltalk programmers (and others) would thank us for being more specific. -- LukeGorrie


I remember when I got into OO. I was programming in C and trying to do a direct-manipulation graphics program, something like a UML tool but it wasn't UML. I couldn't handle the complexity of the problem until I started to structure it like objects-in-C. Then it became instantly much more manageable. So I think the kinds of problems people were trying to solve (e.g. GUI programming) influenced the adoption of OO. And at any time, the problem to solve will influence the methods of solution. I suspect there are some other problems emerging now influencing people I know to consider functional languages. -- BobHaugen

GTK (the GuiToolkit) is an example of this: written in C, thoroughly object-oriented, and (it seems to me) widely considered to be very well designed. -- LukeGorrie

Many years ago I had occasion to work with XWindow and Motif, and they appeared the same (object oriented, written in C). It was kind of cool, because every C structure had an 'extension' void pointer, the need of which is now obvious, and you could see the search along the "inheritance" chain for the function that would handle your call. The art of writing a Widget consisted partly of following the conventions so that the OO wouldn't fall apart on you. Grass roots stuff. -- WaldenMathews

Yes, indeed, writing a UI library many years ago was when I first started seeing the use of objects. OO is great for windowed graphics systems, or indeed any setup where you have lots of independent entities hanging around with their own state, and with events zipping between them. Problems start when people try applying OO to inappropriate domains. I have seen drivel from a supposedly prestigious "strategic consultancy" stating that OO is the only programming paradigm worthy of consideration! What tosh. Numerics is a good example of a domain that is often not a great fit to OO. Functional programming is definitely gaining ground there - even in non-dedicated languages (C++). -- anon


My head seems to be screwed on differently. I wrote procedural code for a while, in C, Assembler, Pascal, and more C. The maximum complexity height I could reach was X. I always failed to achieve enough decoupling, and eventually a task that was X+1 complex went belly up, while code that was X-1 devolved under maintenance fairly quickly.

Once I started OO, I was one of the easy converts. I wrote in C++ and a few others, and I suddenly found I could build projects at least 2X in complexity. I tried things that were simply not on the map before. Some of them failed.

-- White Hat

I would like to inspect a specific example. As a reader, I cannot tell if OO is simply superior, if you were not good at procedural design, or if you had trouble grokking procedural. Any one of these 3 could be the case.


Now whether that sudden increase was simply coincidental with me learning something else, like 'design by contract', or 'works like a', or 'Group Theory', is unknowable, as my experience and its account here are ANECDOTAL. Does anyone know of ANY science that backs any claims on this page, or are we, like the psychologists who used to practice shock therapy, simply charlatans? (Apologies to Psychologists for comparing you with our level of shamanism.)

IMO: All rational statements on topics like this start with "What works for me ...". When you find that N people agree, the statement is "What works for us ...". When N is large, the statement is "What works for us, apparently with dissent in this forum, ...".

-- White Hat

As far as I can tell, there is very little science *at all* on effective development. Instead, what you get is religious wars based on shaky ground. Fundamentally, any design or programming language or system is there simply because human beings are limited creatures and cannot take user requirements and instantly spit out binary code that machines can understand. And because we are all different we each work better with some tools than others. What is needed (besides some investment into basic research in this area!) is a mechanism that allows each of us to develop our "units" in whatever system fits with the way our brains work. Unix pipes gluing IO streams together was one of the first, and I have no doubt that XML and .Net are the latest attempts.

I'd like to get away from the usual "OO is good", "procedural is great", "I hate Java", etc. and start focusing on *why* OO/procedural/functional/table-oriented is difficult for some and not for others. Then perhaps we can make some headway on the productivity issue that still dogs development. -- NeilWilson

Perhaps we should distinguish between the types of "failures" being discussed.

In my opinion, people tend to find the paradigm or tools that best match the way that they think. Perhaps productivity can be increased by letting people use tools that they like instead of those dictated by trade magazines. Of course, it makes sense to perhaps settle on a limited set as "corporate standards" that cover fairly wide styles rather than every variation. For example, Ruby fans could probably live with Python, and vice versa. Maybe choose the best (oh oh, fight) dynamic and static language from each paradigm.


TentativeSummary (add more to this)

Reasons ObjectOrientedDesignIsDifficult


Object Oriented Design is a Two Step Process

It is having to take the second step that makes Object Oriented Design more difficult (Clarification: This is only saying it is more difficult, it does not say anything about whether or not it is beneficial).

When discussing a new or existing software program, the natural description is of the things it will do or it currently does, rather than what will be or what is operated on. People describe the functions of the software. In design or implementation, the first step is to address the first function, then the second function, etc. If one sets out to design objects first, he must step back and look at multiple functions at one time and determine where they intersect to create objects.

But the relationship between nouns and verbs is often many-to-many in the long run. See PrimaryNoun. Resolving this reality in OO gets messy because OO has no built-in many-to-many helpers.

An alternative approach is to begin with the first function, implement it, add the second function, and derive objects when appropriate, i.e., refactor. This still requires a shift from implementing each desired function to creating objects after each function is implemented.
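
A rough sketch of that two-step path, with made-up names: implement the functions first, then fold them into an object once they clearly share data.

  # Step one: just implement the functions.
  def load_prices(path):
      return {"widget": 2.5, "gadget": 4.0}      # stand-in for real file parsing

  def total(prices, order):
      return sum(prices[name] * qty for name, qty in order.items())

  # Step two (the extra step): notice both functions revolve around the same
  # data, and fold them into an object.
  class PriceList:
      def __init__(self, path):
          self.prices = load_prices(path)
      def total(self, order):
          return total(self.prices, order)

  print(PriceList("prices.txt").total({"widget": 2, "gadget": 1}))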

The desired result, for many reasons, is to have object oriented code. To get there, however, requires a two-step process. Without the second step, one is left with procedural code.

The users of software will describe it in terms of functions. It is the responsibility of the developer to translate the users' descriptions into an object-level description. A procedural description is a more straightforward translation of the users' descriptions, but there are other reasons that make an object-based design desirable. Dr Joseph Juran describes this as "language translation": translating the language of users to the language of technology.

The language of the users usually revolves around inputs and outputs in my observation. This is a natural "consumer" point of view: What do I have to feed it, how often, and what does it give me in return?


The above sounds like the classic battle over tasks-first versus nouns-first (or something else first). Plus every methodologist will probably have their own OO techniques. Lack of consistency between OO celebrities is one reason why some find ObjectOrientedDesignIsDifficult.

Perhaps because consistency is the hobgoblin of small minds?

Regardless, without consistency we have an art instead of a science, resulting in endless debates and fads.

You want a field where all experts say the same thing? That's not science, that's a cult.

Or a mature technology way behind the front lines of science.

Can there be such a thing? Anyhow, consistency, or at least the documenting of the differences, is sorely lacking IMO. Is OO nothing more than a BigSoupOfClasses with no larger-scale structure or discipline?

There are plenty of mature technologies. I didn't mean to imply that either OO or software development is one of them. I certainly agree with the thrust of Francis' "cult" statement above. -- TomRossen

Unless there are clear side benefits in the other, I will take the consistency route.


The concepts of OO are relatively simple (polymorphism, inheritance, wrapping data with its operators, etc.) It is just the application of these concepts to the real world that is the messy part. In my opinion, the concepts of OO are too simplistic to match the multiplicity of the real world except in narrow circumstances. OO designs just keep adding more indirection layers of OO concepts on top of each other until it satisfies somebody; but the end result does not match the simplicity of the base concepts. The end result is a shanty town. The beauty of the base concepts is nowhere to be found in the result except in the building blocks. There is no FractalNature of the base concepts bubbling up to the big plan. One ends up feeling betrayed by the simplicity of the building blocks. The parts are simple, but the result is a BigSoupOfClasses, a graph. We need something that improves on graphs, not something that is a graph. -- top

This is why many programmers prefer to stick to writing public APIs and work with base algorithms that all the others can then wrap in more private OOP. Consider how hard it is to work with the procedural Windows API, and compare it to an object wrapper like Delphi or Visual Basic. Would you use the plain procedural API each time? Some OOP makes code terser and speeds up development, since it wraps verbose public APIs into a terser form. Believe it or not, one of the advantages of some OOP code, if written well, is that it is terser than procedural code and neater.
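
A toy sketch of the wrapping idea (the window_* functions are hypothetical stand-ins for a verbose procedural API, not the real Windows API):

  # A verbose procedural API (hypothetical stand-ins)
  def window_create(title, w, h): return {"title": title, "w": w, "h": h, "visible": False}
  def window_set_visible(handle, flag): handle["visible"] = flag
  def window_set_title(handle, title): handle["title"] = title

  # A thin object wrapper that makes call sites terser
  class Window:
      def __init__(self, title, w=640, h=480):
          self._h = window_create(title, w, h)
      def show(self):
          window_set_visible(self._h, True)
          return self                      # allow chaining
      def title(self, text):
          window_set_title(self._h, text)
          return self

  Window("Report").title("Monthly Report").show()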

When it comes down to gadgets and widgets, even procedural coders like Linus Torvalds agree that some OOP is better (even files in Unix, or files in old Pascal, are kind of OOP in a way - this may also be a flaw, though, if for example a relational file system could be used in their place. How would people send stuff to /dev/null? With an INSERT INTO DevNull command?).

When filtering inputs and sending data across a pipe, sending some quick text to a web browser, sending SQL text to a database: this is where objects don't necessarily make code neater, quicker, or easier immediately, especially in prototype applications where one needs input/output sent and received right away. Some people spend more time writing programs that send data through the internet or send data to a console. These people don't see as many of the OOP benefits because they don't work in the gadget arena (buttons, windows, edit boxes), or an arena where OOP would be more useful (business objects do bother me, and I'm not sure OOP is really needed so much there). CSS and HTML are sort of OOP since they allow you to align your gadgets and inherit from them. Do you ever find CSS/HTML useful, with its inheritance abilities? What would be an alternative? A procedural or relational markup language?

Consider what industry you work in. Maybe it is affecting your view too, Top. My hatred toward OOP is partly because I don't work in an industry that needs to reuse as much code. I have a hard enough time creating more and more NEW code since there are so many projects to complete... rarely do I tactically reuse an old structure over and over and over again. I reuse algorithms and lists/databases/arrays/buttons/html often, sure. But when writing batch/console/database programs, or web programs that spit out different text for each application, I often find there is no need for all of the OOP concepts. Since I write a lot of prototypes and applications that do batch tasks, my view will be biased... compared to someone who writes GUI gadgets and needs to reuse GUI gadgets over and over again. I suspect that you write a lot of DB apps that may not require OOP, top, and you don't see the need for it due to the industry you work in... but that is just a guess.

This assumes that OOP is about reuse. Even many OO proponents do not accept this as a key reason to use OOP. See ReuseHasFailed.


Hard to Teach?

Maybe the problem is not that OO is difficult per se, but teaching OO is difficult. If somebody could take my hand (figuratively, please) and show me clear benefits, I might finally "get it". OO's benefits and beauty just seem too difficult to turn into words and practical-oriented examples. It is relatively easy to get OOP programs to run based on requirements, but much more difficult to convey why OO is better or how to do it "right". It all comes off as Kungfu-TV-series-like vague mysticism. Something about OO is textbook-elusive.

Yes, that could be it. But then, learning any programming language (unless it is *very* similar to one you already know) is difficult. -- DavidCary

Learning languages is about "how", but often not "why". Slapping classes together until the program runs is not the issue here. It is relatively easy to get OOP to run, but seeing the benefits and perhaps not making a mess with it is tough to teach. Some just seem to "get it" (agree with the benefits) and some don't. Those who don't "get it" find existing training material and techniques vague, elusive, and non-committal. OOP just seems to alienate certain kinds of minds, and this can create a backlash.


What makes OOP hard is the need to generalize everything regardless of the pressing need. In Math and Science this is second nature, and no Mathematician or Scientist would consider it sufficient to do otherwise. However, Computer Science, and even more so IT, have a strong taint of excessive pragmatism. This is compounded by the way many of us make a living by 'getting it done'. Today the ubiquity of computers in the home and work environment has bred a generation or two of programmers who learned only by doing. They would no more likely see computer programming as the end product of a scientific discipline than they would see driving a car as an act of Mechanical Engineering.

To do OOD/OOP right you need to have at least an intuitive feel for what a type is, what it means to define operations on a type, and the extent to which marking something as a type is allowing the language system to make inferences about how the type will behave in all cases. Now it is true that all good programming requires this, which is why it is taught to Computer Science students; however, the power of inheritance allows poorly defined and inconsistently defined types to propagate very easily. This means that much more damage can be done. It also means that persons with little or no formal training in Computer Science or Math are defining what amounts to language systems (or at least extending them). These very pragmatic IT people, who at best may have taken a few practical lab-oriented courses or at worst are self-taught (miseducated), are in my opinion the cause of 'OOD/OOP failure'.

This is not to say that OOP is a silver bullet or that it should be used everywhere, but at least it should be used well before it is condemned.

The current state of the practice is as if we let Medical Doctors practice without teaching them the basics of Anatomy and O-Chem. At one point such charlatans did practice 'Medicine' all over this country and the world, until licensing helped stem the problem. I hope we can police our own, but sometimes I am not so sure.

-- MarcGrundfest

I disagree with some of this. First, the need to "get it done" is often forced upon us by those who pay us. Finance theory even suggests that paying too much attention to the long-term is often a poor investment choice, and I have not seen a well-formed argument against it yet. A better model has yet to dethrone it. We are paid to make a profit for the owners, not produce works of art. Needless to say, it's a more complex issue than you seem to be making it.

Second, OOP is often not the best "abstraction technique" (what you call "generalize"). Its abstractions are simply not powerful and flexible enough in my opinion. Set theory and CollectionOrientedProgramming offer more potential in that area. Outside of abstractions that fit nicely into hierarchies (see LimitsOfHierarchies), OOP starts to get ugly, or at least no better than its competitors. Please be clear that I'm not saying OOP "can't do" non-tree abstractions, but rather that it cannot do them better. (And others have their own pet abstraction techniques that they promote.) --top

My claim is a bit weaker than 'OOP is the best abstraction technique'. It is merely that good abstractions are hard. I do not even claim that we should only use OOD/OOP. Most programmers approach OOP from the procedural world and are often required to interface with a large installed base with very poor abstractions. The difficulty of creating good and useful abstractions, coupled with the need to interface with such systems and the need to 'get it done', makes the conversion to OOP hard.

I also do not claim that 'getting it done' is inherently bad (though I can see how you may think that I disdain it - I do - even as I practice it on a regular basis). I only claim that it has implications for the long-term success or failure of a project. Now it may well be that any paradigm that requires a long-term view is a bad idea for that reason alone -- an interesting perspective, but one which I fear condemns Software Engineering to the realm of myth. No Scientific or Engineering discipline can develop if there is no systematic attempt to learn from the past and encode those lessons for the future. Perhaps you only mean to suggest that it is not the role of practitioners to perform this role, but if so, who should? And how will it benefit the practice if practitioners are compelled to routinely ignore best practice in the name of expedience? I suspect that you would not make that claim, and I think we may agree more than we disagree. My apologies for typos and spelling issues; I am not always able to take the needed time to clean up my comments until sometime later.

-- MarcGrundfest

{As usual, you seem to be conflating some antique notion of (perhaps) object oriented databases and domain modelling ("abstractions that fit nicely into hierarchies" et al) with how OO programming is actually used, and you've yet to demonstrate that "[s]et theory and CollectionOrientedProgramming offer more potential" or define what "offer more potential" actually means. Your explanations appear to follow your own amorphous and unrigorous pet definitions ("TableOrientedProgramming"), and you back away when challenged (PayrollExampleTwo) to show your techniques are as superior as you claim.}

I did not find your original material very specific either. Thus, the complaint of vagueness comes with a dose of irony. But you did specifically mention inheritance and "types", and that is primarily what I was addressing.

{I don't know which of my "original material" you're referring to. I hadn't written anything on this page until a single paragraph that starts with "As usual ...", above.}

As far as PayrollExampleTwo, I think it's clear that the "benefits" largely depend on the domain, environment, and its ChangePatterns. There is insufficient information on both an historical level and language-problem-level to know what allegedly went wrong and why with case/switch. PayrollExampleTwo raises more questions than answers, which I won't repeat here. At best, it's the starting point for further analysis. And, I made no mention of TableOrientedProgramming above.

{Isn't TableOrientedProgramming your term for a coding style that emphasises "[s]et theory and CollectionOrientedProgramming"?}

I never claimed it's the only way to apply set theory and COP. As far as "back away when challenged", I never saw an OOP version where a power-user manages payroll formulas/calculations instead of programmers. My example was first. In other words, think like SAP instead of a one-off payroll programmer. -t

{Having not clearly defined TableOrientedProgramming, it appears you can claim anything you like. As for an "OOP version where a power-user manages payroll formulas/calculations instead of programmers", an obvious way to implement it is to use a spreadsheet package. OO is ideal for creating spreadsheet packages. QED.}

Spreadsheets? You are kidding, right? How many companies with more than say 25 employees run their payroll on spreadsheets? As far as the definition of TableOrientedProgramming, I don't think I can offer a rigorous definition, anymore than OOP can (other than squeezing it through a made-up model). -t

{Excel on its own might not be the best choice for a large-scale payroll package, but the spreadsheet approach has certainly been successfully used as the basis for scalable accounting packages. See, for example, NewViews: http://www.qwpage.com/ }

Based on the FAQ, it uses databases under the hood. Spreadsheets by themselves are not very good at doing things like storing and querying employee info, payroll histories, etc. You could add it, but you'd end up reinventing a database. I would note that my example sort of reinvents a spreadsheet. Thus, there is a partial truth to your statement. Further, spreadsheets are pretty close to TableOrientedProgramming. Perhaps there's a larger concept here: grid-oriented-programming? Tables can be viewed as a constrained grid. -t

{The underlying storage mechanisms are wholly irrelevant to the debate at hand, and you know it. By the way, the "larger concept" you seek is called "spreadsheet programming", and is an established area of academic research.}

Re: "Having not clearly defined TableOrientedProgramming, it appears you can claim anything you like." - This has been a problem in past OOP discussions where it appeared one party was giving OO credit for just about every software invention. But when they are "stretching it", I point it out and then LetTheReaderDecide whether it's really OO or not rather than accuse the writer of conscience manipulation of terms. We live in a fuzzy world; deal with it. -t


Key Piece of Puzzle Missing?

Perhaps the problem is that the description of what OOP "should be" is not sufficient. Most seem to agree that using polymorphism, inheritance, and encapsulation does NOT by itself make software "good". But what does make an OO app "good" has been difficult to describe. OO may describe the brush and the paint can, but it's not describing how to paint a room well yet.

What can make OOP software "good" is the maintenance of the code base (not just how the application looks, feels). OO can be emulated using procedural code (i.e. the WinAPI), sure. However, sometimes it's better just to use OO with an OO capable language, so that you don't end up emulating OO. On the other hand, API's often need to be public and less private. An API author can't always anticipate what should be private and what should not, due to developers using that API for more interesting tasks than originally planned by the IvoryTower.

Moved some discussion to BenefitsOfOo and OopGoesHalfWay.


PickTheRightToolForTheJob

If your OO design is getting messy, perhaps you are using it where it doesn't belong. I find OOP useful for modularizing certain "function" groupings in terms of putting related functions together and managing the variable/attribute scope related to these functions, but bad things happen if I try to force it on every aspect of the application. Sometimes it's just the wrong tool for the job. See OopNotForDomainModeling.

In addition to domain modelling difficulties, there are also scaling problems. When working with GUI code, sometimes one needs to group code by widget type, sometimes by widget position, sometimes by event type, etc. There is no One Right Grouping for non-trivial GUIs. This suggests that TableOrientedProgramming may be better for managing the parts of involved GUIs, just like a relational database is better for managing, say, car parts for a manufacturer than, say, a strict hierarchy. Sometimes you want to group by cost, sometimes by location in the car, sometimes by vendor, sometimes by weight, sometimes by material (metal, plastic, and so on), etc.

GUIs are not much different; you have event type, widget location (nesting and/or physical location), widget type, attribute properties (such as viewing all titles together), and so forth. However, TableOrientedProgramming is still experimental in terms of integrating behavior (event code) with GUI attributes. It may take a different language design approach to pull it off. A ripe area for research projects. See also PowerfulCodeEvalDiscussion. (Note that a "live" table may not necessarily be needed during run-time; it could perhaps be used for "static" GUI code generation, although a run-time table may offer more flexibility.) -t
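
A toy illustration of the "grouping is just a query" idea (made-up widget records; not a working GUI):

  # One table of widget descriptions; "grouping" is just a different query.
  widgets = [
      {"id": "ok",     "type": "button", "panel": "footer", "event": "click"},
      {"id": "name",   "type": "text",   "panel": "form",   "event": "change"},
      {"id": "cancel", "type": "button", "panel": "footer", "event": "click"},
  ]

  by_type  = [w["id"] for w in widgets if w["type"] == "button"]
  by_panel = [w["id"] for w in widgets if w["panel"] == "footer"]
  by_event = [w["id"] for w in widgets if w["event"] == "change"]

  print(by_type, by_panel, by_event)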


See also:

IsObjectOrientationMoreComplex, OoEmpiricalEvidence, ArgumentsAgainstOop, PeopleWhoDontGetOo, OoIsPragmatic, LearningProgrammingLanguages


CategoryOopDiscomfort, CategoryObjectOrientation



