Oo Matches Thinking

This is really intended as a question: OO Matches Thinking??

Les Hatton wrote a fascinating paper in the May/June 1998 IEEE Software, called "Does OO Sync with How We Think?", in which he shows corrective-maintenance data from two related projects, written by the same people, one in C and one in C++. He presents time-to-correct curves. The interesting graph shows the C++ curve right-shifted relative to the C curve -- i.e., corrections took more time. 60% of the C corrections were done in less than 2 hours; only 30% of the C++ corrections were done that quickly. 3/4 of the C corrections were done in less than 5 hours, but the 3/4 mark for C++ was only reached at around 18 hours.

He cites other data from Watts Humphrey and has a nice discussion. -- AlistairCockburn

CeeIsNotThePinnacleOfProcedural


For those of us without easy access to IEEE publications, does anyone have a URL so we can read this paper online?

Which project was done first, or were they concurrent?

No URL for free reading -- you have to pay IEEE or visit a library. The C project was done first.

Was the second project (the one implemented in C++) a real OO project or just a C++ project?

How many corrections were there in the C vs the C++ project? Some convenient measure like corrections per thousand LOC or corrections per function point (or something along those lines) would be fine.


While this study is an important indictment of C++, I disagree that it has anything whatsoever to do with the title of this page. I would say that if OO is bad, it is bad because it matches thinking, not because it doesn't. An OO coder can construct a nice mental model of the system and implement it using classes... which is useless if his model is wrong, and so he must debug both his code and his model. A procedural coder has much less "model" between him and the code. So even if OO matches thinking, the real question is: does OO-related thinking match the solution to the problem?


I'd first suspect bad C++ (i.e., non-OO). There's plenty of really bad C++ coding going on out there: bad OO design done by C programmers who don't really know what they're doing.

I'd also look at changes to user requirements: when the users change their mental models of the business, that invalidates what you hoped was a stable OO model. A procedural programmer might tend to hack out a quick-and-dirty workaround, whereas an OO developer would be inclined to redesign, hoping that the new mental model will be stable. So OO development may not help as much in an organization that suffers frequent reorganization and changes in business plans.

Which is the majority of organizations I encounter.

We also have to realize that powerful modeling tools can be seductive: It's tempting to invest in powerful abstractions and more general implementations of code, in the hope that we'll reap the benefits of reuse. But this only saves time if the code is actually reused. So you have to choose your investments carefully. -- JeffGrigg


That paper aside, for some people the claimed benefit of OO is superior dependency management. Whether or not OO matches the way we think (or, as is sometimes claimed, the "real" world) is beside the point. You can always get people to think in funny ways (consider mathematicians). You cannot build big software systems without dependency management. -- DaveHarris

To clarify, are you saying that OO is really about dependency management rather than matching human thinking? (Or are they related somehow?) If so, RelationalWeenies are going to suggest that "managing boatloads of relationships is what databases do best."


This makes me wonder a bit. I know that in compiled languages we have to be conscious of dependency management so that we can avoid protracted builds. Do Smalltalk programmers have as much motivation to think about dependencies? I guess they'd need to for clarity's sake, or if they are deploying things piecemeal? But it doesn't seem that they'd have to be as concerned as C++ programmers are. -- MichaelFeathers

I've heard it put, and have come to believe, that AclassIsNothingButaCyclicDependency. Any time you spot a dependency cycle in a class hierarchy, you have found a class. It follows that Smalltalk programmers ought to be as concerned about dependencies as C++ programmers are, not for reasons of build-time optimization, but for efficient factoring. -- PeterMerel
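To make that concrete, here is a minimal C++ sketch (hypothetical names, my example rather than PeterMerel's): two free functions that mutually depend on shared state form a dependency cycle, and rolling the cycle into a class confines it behind a single interface.

 // Before (hypothetical): a cycle among free functions and shared state.
 //   append() writes to a shared buffer and calls flushIfFull();
 //   flushIfFull() reads the buffer and resets it for append().
 // Each piece must change whenever either of the others does.
 //
 // After: the cycle becomes one class, so the mutual dependency is
 // private to a single unit of code.
 #include <iostream>
 #include <string>

 class Buffer {
 public:
     void append(const std::string& s) {
         data_ += s;
         if (data_.size() > capacity_) flush();  // the old cycle, now internal
     }
     void flush() {
         std::cout << data_;
         data_.clear();
     }
 private:
     std::string data_;
     std::size_t capacity_ = 256;
 };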

In my (limited) Smalltalk experience, I observed that in a conventional Smalltalk development environment build time is not an issue: methods are tokenized when saved; the "compile" and "link" steps typical of C++ are not done. Thus one is primarily concerned about maintainability issues, not "cost of build" issues. -- JeffGrigg

I'd be grateful for clarification of what is meant on this page by "dependency", and for an example of AclassIsNothingButaCyclicDependency. Thanks! -- RonJeffries


A depends on B if a change to B requires a corresponding change to A.

That's a fairly general definition. In C/C++, the compiled forms of modules include many assumptions about other modules, so a change to one module often forces a recompilation of another even if that module's source code doesn't need touching. In this case the changes are made automatically, but for large projects they can take significant time (hours, or even days).
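As a concrete sketch (my example, assuming the usual C++ build model, with hypothetical names): changing a class's private data in a header forces every client that includes the header to recompile, even though no client source changes. The well-known PimplIdiom breaks that compile-time dependency by hiding the representation behind an opaque pointer:

 // widget.h -- the only file clients include. Because Widget's
 // representation lives behind an opaque pointer, changing the private
 // data no longer changes this header, so clients are not recompiled.
 #include <memory>

 class Widget {
 public:
     Widget();
     ~Widget();                     // defined where Impl is complete
     void draw();
 private:
     struct Impl;                   // representation hidden...
     std::unique_ptr<Impl> impl_;   // ...behind a pointer (PimplIdiom)
 };

 // widget.cpp -- the one translation unit rebuilt when Impl changes.
 struct Widget::Impl { int x = 0, y = 0; };
 Widget::Widget() : impl_(new Impl) {}
 Widget::~Widget() = default;
 void Widget::draw() { /* draw using impl_->x and impl_->y */ }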

When I first came across the notion, it was about bug fixes: you'd fix a bug in one module and three other bugs would appear in apparently unrelated modules. You can't make progress like that. In my view this is the basic evil that OO combats, with all its encapsulation and polymorphism. And it's largely successful, to the point where you youngsters who can't remember pre-OO days probably can't conceive that it was ever a real problem. (Try reading the MythicalManMonth.)
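A small sketch of that countermeasure (my example, not DaveHarris's): clients depend only on an abstract interface, so a fix inside one implementation cannot ripple into "apparently unrelated" modules.

 #include <iostream>

 // Clients see only this abstract interface...
 class Logger {
 public:
     virtual ~Logger() = default;
     virtual void log(const char* msg) = 0;
 };

 // ...so doWork() depends on Logger, never on a concrete class.
 void doWork(Logger& logger) {
     logger.log("working");
 }

 // A bug fix inside ConsoleLogger (or a brand-new FileLogger)
 // leaves doWork() and every other client untouched.
 class ConsoleLogger : public Logger {
 public:
     void log(const char* msg) override { std::cout << msg << '\n'; }
 };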

Bug fixes aren't the only source of change, of course. Adding new features counts. Even with OO, I'd say dependency problems are the reason we can't throw money at software and have 1000+ programmers per team. -- DaveHarris

Example? Perhaps the procedural modules were just poorly coded. Also, sometimes there is a trade-off such that if you group by X you have to ungroup by Y, and vice versa for the other paradigm. Code generally cannot be "sorted" by more than one major factor at a time. People sometimes remember successes at a different strength than failures under a given choice. Thus I would like to inspect a specific example, to see that there is not such a trade-off going on with regard to allegedly reducing the spots of change. Anyhow, change handling may be off-topic here. This is about human thinking, and I don't see a connection thus far. Any suggestions for where to move it? Perhaps OoReducesImpactOfChangeClaim, or the like. -- top


This topic is moot. If a new language paradigm matched the thinking of a dolphin and cut our maintenance time and bug rate by two thirds, we'd all use it. Development time is maybe 1/8 of the total lifecycle; nearly all of the rest goes to testing, change management, and maintenance. The language, paradigm, IDE, etc. are all irrelevant next to the costs of the other 7/8.

But one could argue that if the code/techniques/model fit the domain better, then maintenance would also be easier, because changes would go with the grain of the domain. Or are you arguing that problem-space abstraction does not help with maintenance and/or reuse?


Contributors: AlistairCockburn, DaveHarris, PeterMerel, JeffGrigg, RonJeffries, MartinZarate


See also: BorgGoldenHammer


CategoryPsychology, CategoryHumanFactors

