Oo Is Pragmatic

Moved from ObjectOrientedProgramming

A few attempts at extending this ProgrammingIsInTheMind/OoIsPragmatic train of thought (for simplicity, I'm defining the term 'objectness' to mean 'combining state and behaviour'; if there's a better word for that concept, please replace it):

Inheritance is useful only if you already use Objectness. If all functions were global, you could factor them just as well. Saying "This function only works on this thing" forces you to duplicate unless you can use inheritance.

I'd say the only required parts of OO would be Objectness and Polymorphism. Lots of OO languages don't have Encapsulation. Classes are a convenience, but not required. Inheritance is a close one, I'm not sure. If we look at OOness as a set of individual practices/technologies, we will be able to see the why instead of just the what. Just like ReFactoring is a lot less useful without UnitTesting, Inheritance is a lot less useful without Objectness. It starts to illuminate what the foundations really are.
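
For concreteness, here is a minimal Ruby sketch (made-up names) of 'objectness' plus polymorphism, with no inheritance at all:

 # 'Objectness': state and behaviour live together; polymorphism: the loop
 # at the bottom doesn't care which class it gets, only that it responds to #area.
 class Circle
   def initialize(radius)
     @radius = radius            # state
   end
   def area                      # behaviour operating on that state
     3.14159 * @radius * @radius
   end
 end

 class Square
   def initialize(side)
     @side = side
   end
   def area
     @side * @side
   end
 end

 [Circle.new(2), Square.new(3)].each { |shape| puts shape.area }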


I object to most of the claims above. In a nutshell:

"Objectness more closely matches human mental concepts": Not to my mind.

Yes it does; it's a fact that there's a lot of cognitive psychology research out there suggesting people naturally model concepts this way. Well, I suppose you could suppress that to some extent, but there's only so much (i.e. very little) you can do with your conscious mind against the rest of your brain.

I am skeptical of that claim. People's thinking is too varied. I have also heard of studies suggesting that culture greatly influences how people think, particularly in societies where complex social structures are key to one's whole life. (ISBN hunt pending.)

OoBetterModelsHumanThinkingDiscussion

"Polymorphism allows you to hide decisions about type": This assumes that the world or model nicely divides into "subtypes" on which to polymorph. In my observation, it generally does not. STL's Stepanov correctly characterized subtyping as using a one-dimensional taxonomy in a multi-dimensional world.

He is right. Enter AspectOrientedProgramming.

{AOP is still in the experimental stage. It seems like a good idea for "meta" operations on code units, but using it for biz logic is still very immature right now.}

"Encapsulation limits the ways state can be changed so that simplifying assumptions can be made...": If one uses a database to store/manage state, this "gate keeper" viewpoint breaks down. Classes don't "own" that state, and in most languages you cannot limit the entity access to one class.

In that case, the database becomes the owner of the data, and imposes its own integrity constraints on the data. What's the difference?

Databases are not necessarily "OO". See Also GateKeeper.

"Inheritance allows code to be factored to reduce duplication.": How does it do it better than shared subroutines? Also, what if change patterns are not really tree-shaped? Further, it is tricky to easily override 1/3 of a method. Sure, "refactoring" can "fix" it, but refactoring just means that your code failed to hold up to the winds of change, and can be used to "fix" just about any paradigm's faults.

A lot of these objections seem to be based on the universality of the claims. They were not intended to be universal. No one technique provides universal benefits in every circumstance. In fact, the title of the page implies that we should look at the pragmatic benefits rather than trying to abstract and universalize everything. E.g. "Objectness more closely matches human mental concepts": Not to my mind. I'm sorry it doesn't match your concepts better, but I've noticed it does so for many of my colleagues; enough to make the benefit worth it for the company in practice.

But to me those come across as being "universal" claims. Your argument seems to be that although it is possible BenefitsAreSubjective, they apply to the majority of software practitioners. I will agree that a large population may find them useful to map the external requirements to their mind or habits, but not necessarily a majority. Plus, like-minded people tend to hire other like-minded people.

One of the things I like about patterns is that they are a good vector for disseminating and discussing pragmatic ProblemSolutionPairs?. Perhaps if the above claims/counter-claims were in PatternForm, it would help focus the debate.

IMO, patterns are over-hyped (PatternBacklash)


Here's a question for people who don't like OO: If there were a language that let you code your stuff in the style you liked instead of OO, but then also let you decide to put a little piece of code into an object, would you find that experience useful? I'm imagining, for example, writing a 5000-line procedural program in Ruby, and then deciding to create a class to encapsulate 50 lines or so into one class, just for convenience's sake.

I am not sure what you mean here. There are multiple ways to partition 5000 lines. Some suggestions at ProceduralMethodologies.

I'm saying: What if you wrote code that was, say, 95% procedural, and then you used classes in the 5% of the code where classes might make sense to you. Much of the debate seems to make things seem all-or-nothing, but maybe that isn't inherent to the problem.
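
A rough sketch, in Ruby with made-up names, of what such a 95/5 mix might look like:

 # Mostly plain procedures ...
 def read_lines(path)
   File.readlines(path).map(&:strip)
 end

 def word_count(lines)
   lines.map { |l| l.split.size }.sum
 end

 # ... with one small class, used only where bundling state and behaviour helps.
 class RunningAverage
   def initialize
     @total = 0.0
     @count = 0
   end

   def add(value)
     @total += value
     @count += 1
   end

   def average
     @count.zero? ? 0.0 : @total / @count
   end
 end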

I find that something that might fit OO early on might change to not fit OO over time. But I guess I could live with 5%. But 5% is hardly enough to bother mixing in a 2nd paradigm unless the benefits are large. It makes the reader have to do a mental shift. (Related: MixingParadigms).

This isn't how most people learn OO, is it? Most people learn it in an all-or-nothing sort of way, don't they? -- francis

I'd say that's one of the biggest problems with OO. When I learnt it, along with Java, it was in an all-or-nothing way. In practice I've found this not to be particularly useful, and it makes many designs overly complex (a nice example from Java itself is the mess of classes you have for file handling). When coding with ExtendedObjectTcl I tend to find that a lot of the code is based around classes and objects, but there is still a lot of general-purpose procedural code involved. OTOH I've also found that OO is useful. Often I start a project in a procedural way because I think, "This is so simple I don't need OO", but in the end OO simply turns out to be the easiest way to solve and structure the problem. --Setok

I do it piecemeal, starting from procedural, creating classes when it is very natural (e.g., nodes, points and lines in a network simulator) and when it shows noticeable improvements. Eventually, someone lent me the GOF book, and suddenly I saw the light :) Well, what I saw then was how I should have used OO approaches in many more places than I had been using them before. -- OliverChung.

This is sort-of similar to my conversion story, too: I came to OO after trying really hard to make procedural code really clean. I kept on thinking "Dang, if I could only pass in these six function arguments as one -- oh, hey, I'll make it a struct!" and then "If only I could associate this function with this struct -- hey, I bet that's what an object is." I wonder if part of the problem is languages that make the OO jump an all-or-nothing decision. Another reason why I'm becoming a foaming-at-the-mouth Ruby junkie: The transition from quick-and-dirty script to insanely objectified universe of classes is relatively smooth. -- francis
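
That progression can be sketched in a few lines of Ruby (hypothetical names, not anyone's actual code):

 # Step 1: several loose arguments become one struct.
 Point = Struct.new(:x, :y)

 def distance(p1, p2)
   Math.sqrt((p1.x - p2.x)**2 + (p1.y - p2.y)**2)
 end

 # Step 2: associate the function with the struct -- and it's an object.
 class Point
   def distance_to(other)
     Math.sqrt((x - other.x)**2 + (y - other.y)**2)
   end
 end

 Point.new(0, 0).distance_to(Point.new(3, 4))   # => 5.0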

A good dynamic language lets you put function references in structures. LISP was born for such. However, with a little code rearrangement and experience, I find that one does not really need it that often.

My understanding of objects proceeded along similar lines, while learning, of all things, how to code for the X windowing system. Collections of related variables, such as window attributes, became structs. Modify the struct to be a union and the same data could be manipulated differently, in a sort of poor-man's polymorphism. Add function pointers to the struct, and things start to look vaguely like this: http://www.accu.org/acornsig/public/articles/oop_c.html --StevenNewton

I personally prefer to put boat-loads of attributes into relational tables, not code-bound thingies. I can view and query them much more easily that way (see DataDictionary). You can't easily view code as rows and columns to see row-wise and column-wise patterns, and you cannot easily query code to customize your view of it. Attempts to query objects resemble the happily forgotten network database query languages of the 1960's, which died for a reason (ModernDinosaur). I just find relational a more powerful attribute structure management tool than code-based approaches. The grid view and query-ability is something I find really helpful for non-trivial structures. I don't want it taken away from me just because it is not in style right now. [Perhaps this paragraph can be factored to CodeAvoidance]

Maybe my SQL is weak, but I've found a relational-centric view to fail me when I'm asked to do extremely complicated queries. In fact, this was what led to my disenchantment with SQL: I was asked to write a reporting app that would show demographic breakdowns of survey respondents by question, demographic traits, and over multiple time windows. For example: "For each of the 17 questions on this survey, show summaries (in number of respondents and percentage) broken down by age, income, nationality, gender, education and profession, both for the previous month and the year-to-date." My predecessors had used 70-line SQL statements to pull all this stuff out, but it proved to be tremendously brittle. When I was asked to change it, I decided to rewrite the whole thing in Java instead. Much much easier. Maybe that means OoIsPragmatic, maybe that means my math skills aren't killer enough for me to hold that much SQL in my head at once. Either way, I have to use what works for me. -- francis

70 lines of SQL is indeed not funny. However, you can break it down using views. If you run a database with OLAP extensions installed, these types of queries are easy.

The problem with views is that they are usually global in nature and may require DBA permission to create and change. What is needed is ad-hoc or temporary views. Some RDBMS offer this, but it is far from standardized. SQL is not the ideal relational language IMO. (SqlFlaws)

I know you can break down large queries into smaller queries with views and such. But just because something is decomposed doesn't mean it's easy to change. SQL, in my experience, is extremely concise when it's applied correctly, but that concision -- combined with a difficulty in UnitTesting and refactoring databases -- can make SQL very difficult to change in response to changing business needs. (I don't have any experience with OLAP extensions, but I haven't heard anything about them that would seem to change this dynamic significantly.) -- francis

Time to explore a specific example

Right. See DatabaseReportExample.


Over on ObjectOrientedDesignIsDifficult, somebody said:

OO gets your foot tangled in a bunch of different class protocols and rather arbitrary taxonomies that you must grok before effective use. Talking to the database is more consistent. OO seems to invite too much "creativity".

I try never to worry about whether I'll get my taxonomy right the first time. Experience shows that past a certain point, thinking harder about my class structure doesn't increase the chances that I'll get it right. And I think inheritance isn't really that important anyhow. I try to use CompositionInsteadOfInheritance, so most of my classes don't have to live in some gigantic Cheops pyramid of classes. I do what's easy -- in the short-term, and the long-term -- even if it's not "correct".
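
As a small illustration of CompositionInsteadOfInheritance, a hedged Ruby sketch with made-up names:

 # Instead of inheriting from Array, wrap one (composition) and expose
 # only the operations a stack should have.
 class Stack
   def initialize
     @items = []
   end

   def push(item); @items.push(item); self; end
   def pop;        @items.pop;              end
   def peek;       @items.last;             end
   def empty?;     @items.empty?;           end
 end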

Then you are essentially creating a NetworkDatabase in code. I would rather organize my system around a relational design than an NDB design. (See the second portion of http://geocities.com/tablizer/core1.htm)

Maybe the flipside of OoIsPragmatic is: OoIsntForPerfectionists?. -- francis

Well, in my opinion relational theory is a few steps closer to perfection than OO. It provides a platform of reasoning and structural consistency that I cannot find in OO.


Moved from ObjectOrientedDesignIsDifficult

More and more I've been moving towards a view that OoIsPragmatic (or ought to be). Ever since I've read ExtremeProgrammingExplained and ReFactoring, I've moved away from 'perfection' and 'purity' towards simplicity and usefulness. OO programming (and design, I suppose) don't have to be difficult if taught as a set of tools for solving real problems. I agree with Stan's ideas about little rules, and I've adopted KentBeck's philosophy of GoodProgrammerGreatHabits. Novices are able to learn OOP much easier if you can explain it in terms of the work it saves them (from reduced duplication in a high-change environment).

-- RobHarwood

Same dial, different label. -- rh

And you get one more coupling for free. -- nb

There is coupling either way. If you use it, you are coupled.


Re: Novices are able to learn OOP much easier if you can explain it in terms of the work it saves them (from reduced duplication in a high-change environment).

I would like to see a specific example. In the past, such comparisons used poorly thought-out procedural code, or reflected the limits of a specific language, usually C.
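
For what it's worth, here is the sort of example usually offered (a hypothetical Ruby sketch; whether it persuades is a separate question):

 # Procedural version: every routine that touches output repeats the case.
 def render(doc, format)
   case format
   when :html  then "<p>#{doc}</p>"
   when :plain then doc.to_s
   end
 end

 # OO version: each format carries its own behaviour; adding a format means
 # adding one small class instead of editing every case statement.
 class HtmlRenderer
   def render(doc); "<p>#{doc}</p>"; end
 end

 class PlainRenderer
   def render(doc); doc.to_s; end
 end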


Inheritance is a powerful tool in the OO toolkit, but it's complicated, and extremely easy to overuse. It can often lead to code that is clever but not elegant. To me it feels, also, that it's not nearly the most important principle in OOP: For that role I'd elect the way that OOP forces you to assign each responsibility to a distinct entity. That's the first question a novice OO programmer should be asking all the time: "Which object does this method belong to?" Not "Which object does this new object inherit from?" -- francis

But I find the need to fit every behavior to one and only one entity rather artificial and arbitrary. In the real world, multiple people and things may participate to get something done. Even in math, add(a,b) seems much more natural to me than a.add(b). Add is a joint venture. Plus, it can scale to things like add(a,b,c,d,e...) without a conceptual overhaul.

Of course you need to be careful; in math a+b and a + b + c +... are very different things. Then again, nothing in math is well reflected by a.add(b). I guess what I am cautioning is falling into the trap of using an OO system to construct objects that are syntactically similar to what they represent, but *not* semantically the same. Languages that allow operator overloading often encourage this failure (although it is easy enough to do without overloading, and most languages include these sort of gotchas in the language primitives).

I'm not an object zealot: I'd never want to use a language that forces me to type "a.add(b)" in place of "a + b". I think JavaLanguage string functions, for example, are completely atrocious. (Though part of the problem simply has to do with how good your SyntacticSugar is: RubyLanguage defines "+" as a method belonging to the Fixnum class, but you don't need to know that, and you don't need to call it with the standard noun-dot-verb notation.) There are always methods that don't go well with a genuine object: I end up just making them class methods, which are basically like non-OO functions, only attached to a certain namespace.

Overall, though, I'd say that most of my methods live comfortably on one stateful object. Maybe the ratio is 80-20 or something like that. When your methods work better attached to an object with state, attach them there. When they work better without state, make them part of a utility class. Do what works. OoIsPragmatic, after all. -- francis
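
A small Ruby illustration of the two points in play -- the variadic add, and the fact that the infix "+" is just sugar over a method call (names are made up):

 # A plain module function scales to any number of arguments ...
 module Arith
   def self.add(*numbers)
     numbers.reduce(0, :+)
   end
 end

 Arith.add(1, 2, 3, 4)   # => 10

 # ... while in Ruby the infix form is itself a method call in disguise.
 1 + 2          # => 3
 1.+(2)         # => 3, same thing without the sugar
 1.send(:+, 2)  # => 3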

If you make enough (or too many) classes perhaps. BigSoupOfClasses. And everybody's "do what works" is different. OO is just hacky to me. There is no consistent big picture. Being hacky is also practical in the shorter term, but makes maintenance difficult in the longer run. There are no consistent rules or guidelines about what to make methods, what to make classes, and what to put in the database.

Similarly, there are also no rules or guidelines about what to make procedures or data structures in procedural programming, or tables, primary keys or foreign keys in a relational database. It comes from experience and CodingStandards agreed by the team.

Database designs will be quite similar from experienced designers who know the concept of repetition factoring. The rules of table normalization generally lead to similar designs. And, procedural programs are usually grouped by task. True, people will divide tasks up somewhat differently, but they are still by task.


What paradigm does not claim to be "pragmatic"? The only exception may be MentalMasturbation languages or techniques.


I thought I'd add my two cents worth on Object Oriented Programming -- I've been around a long time and I've seen all these fads -- "silver bullets," as Fred Brooks might say -- and as far as I'm concerned they've all pretty much fizzled, and OOP is currently fizzling too. I think a big part of the problem is the mind set that adopting something new has to replace the old stuff -- I was doing "Structured Programming" and now I've seen the light and I'm doing "Object Oriented Programming." That's the sort of nonsense, one-dimensional thinking, that you find even among the technical people who ought to know better, but especially among the quasi-technical who often have enormous influence on tools and methodologies compared with their technical competence. Object Oriented Methods are tremendous with things that they match well to, and not so great when they don't. Deep inheritance is at least as bad as, if not worse than, spaghetti code -- at least there you could go to the label and look; here you have to wend your way through all kinds of classes. Tools can make a difference. I'm personally skeptical about anything that comes down the pike with all kinds of universal claims -- they've never been true before ... why now?

That being said -- Object Oriented Programming enshrines Abstract Data Structures and packages operations with their data, and that's important. The ability to do function overloading is important since it substantially reduces the cognitive overload involved in programming. That said, I think functional programming is at least as intuitive as object oriented programming, maybe more intuitive. Data is a passive thing that you operate on, not a dynamic thing that operates on itself -- that's the difference. What is a clear way, like a flow chart, to represent an OO program's architecture? UML??? Not in this century. I think comments like "we lack a good theory" and stuff like that are right and need to be addressed. The title of this page, OoIsPragmatic, is true of almost all of computer science, where there is relatively little theory, especially in the area of design. --RaySchneider

I would like to see demonstrations of "enshrines Abstract Data Structures and packages operations with their data and that's important" for my usual domain: custom business applications. I think there is often value in decoupling data and the operations on them. Data and uses of the data are often different things. Abstraction should include the ability to see the same items in different ways. One cannot anticipate all uses of data up-front, and even if we can, putting all possible different interfaces around that data can get cluttered.

Note that operation overloading is not necessarily part of OO and is controversial in the OO community itself.

I find myself in semi-agreement with the above. The rationale for OO is not in any single technique it supports; rather, it is intended to reduce the complexity of the source code. In my own experience, largely comparing C and C++ programs, I have noticed C++ programs lack the huge methods that occasionally turned up in C programs, even among the inexperienced. Among more experienced developers, OO can lead to reductions in method length, duplicated code, and control logic. I believe OO has some implicit value, in that it helps even inexperienced programmers produce shorter methods. I also recognize there is a substantial learning curve to reap some of the other benefits.

Is that the fault of the paradigm, or the fault of the language? CeeIsNotThePinnacleOfProcedural. I too found my C programs bloated and long-winded. The fix was not OO, but better languages and database tools. Converting to a dynamic language and table-oriented tools resulted in code that was about 1/4 the original size of the C version.

(Note that some people like C because its CPU speed allows you to do highly abstract things that might perhaps run too slowly in higher-level languages. Thus, the C code might be smaller in some cases because you don't have to tweak it for speed.)

Actually, the problem is the standard procedural structural breakdown. There are limited guidelines as to when to decompose a method into submethods; it is pretty much a gut-feel approach. I have seen this problem in many procedural languages, including C, assembly, FORTRAN, COBOL, Unix shell scripts, and database scripts.

It is no more a gut feel than OO design. You generally divide procedural code up by task. Even old COBOL programs from the 60's do it. In OO, some code may be by task, some by a taxonomy that somebody thinks they see in the domain, or some goofy pattern they found in a book, and all kinds of other hodgepodge. In procedural you have just one overriding grouping: task. In OO there is none. The closest is probably grouping by entity, but that is often not closely followed (OoGroupsBetterClaim?).

I have seen no evidence that if we switch a specific programmer from one particular procedural language to another that he will limit his method length. Better programmers tend to write shorter methods in all environments, but an OO language seems to provide more of a bounds for the inexperienced as well.

I know of many OO proponents who agree with each other that good OO is hard to learn properly and that they would prefer bad newbie procedural code over bad newbie OO code. Thus, I am skeptical that your claim that OO is easier for newbies is widely accepted among OO proponents.

As far as procedure size goes, semi-long routines are not necessarily that bad a thing. Not that I recommend them, but they are not the evil that some make them out to be if the code is indented properly and commented to a fair extent. They tend to only be a big problem if there is no factoring, such that similar code is repeated instead of factored into a subroutine and then referenced. I don't see how OO encourages people to factor out duplication. If anything, it makes it harder, because there is usually more code involved in instantiating an object than in calling a function.

[OO adds a shared context for methods in the same object. The lack of that context in procedural programming encourages longer methods and/or global variables.]

I would like to see a coded demonstration of this. I am skeptical.
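
In the spirit of that request, a minimal hypothetical sketch in Ruby of what the "shared context" claim presumably means; whether it settles anything is left to the reader:

 # Procedural style: every helper must be handed the same state explicitly,
 # or the state leaks into globals.
 def net_total(items, tax_rate, discount)
   gross_total(items) * (1 + tax_rate) - discount
 end

 def gross_total(items)
   items.sum
 end

 # OO style: the object is the shared context, so the helpers stop
 # carrying the same parameters around.
 class Invoice
   def initialize(items, tax_rate, discount)
     @items, @tax_rate, @discount = items, tax_rate, discount
   end

   def net_total
     gross_total * (1 + @tax_rate) - @discount
   end

   def gross_total
     @items.sum
   end
 end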


There seem to be at least two different conceptions of OO style in play here, and it's worth it to tease them out and compare their differences. The first might be called an essential style, characterized by an implicit belief in the utility of Platonic forms and essential types. You believe that in your problem domain there is a deep, densely connected catalog of types that are relatively fixed and unlikely to change over time, and you code towards that ideal. This style is characterized by deep inheritance hierarchies, up-front analysis, StaticTyping, and (occasionally) the separation of roles of analysis and implementation.

The second might be called a provisional style, in which you believe that no forms are permanent or fixed. You simply create classes as they make your job easier, and you accept that the types you're working with might change tomorrow as either your understanding of the problem deepens, or the problem itself changes. This style is characterized by avoiding inheritance when necessary (UseCompositionAndInterfacesWithoutClassInheritance), ContinuousDesign, DynamicTyping, and the merging of the roles of analysis and implementation.

Seems to me that OO appealed first to bigger companies because those companies were more likely to dig the essential style. And a language like Java -- in which you'll accept that language's premise that everything must be OO, regardless of whether you want it, regardless of how half-assed Java's implementation of OO is -- seems to play along with that. Personally, as an Agile zombie I favor the second style over the first. I hate getting bogged down in semantics or design of code, and it seems to me that if I spend more than 10 minutes thinking about a problem without writing any code, any further time is just going to waste.

I'd be willing to guess that many people's opinion of OO is influenced by how they're exposed to it. RubyLanguage deepened my personal conviction about its utility; I suppose that exposure to SmalltalkLanguage or PythonLanguage would've done the same for me. But if I thought OO was all Java and dense inheritance hierarchies, then I'd hate it, too. -- francis

Oh, but your analysis is fundamentally flawed, not to say borderline presumptuous, in the way you clump together, for example, StaticTyping with deep inheritance hierarchies; you'd be almost saying BigDesignUpFront, but you put it nicely as up-front analysis. I'd appreciate it if you'd stick to what you know best (dynamic typing, XP, whatever) without denigrating other -- possibly better -- ways of doing OO.

And from a different angle, Java with all its problems is so much better than Ruby. Really, I'm not kidding you. As for the use of delegation instead of inheritance, that is at best patchwork and not a good OO design pattern; it certainly breaks a very fundamental principle of OO (see Cook, Object-Oriented Programming vs. Abstract Data Types: http://www.cs.utexas.edu/users/wcook/papers/OOPvsADT/CookOOPvsADT90.pdf).

So all in all, I guess you might be right: you'd be turned off by Java, but that doesn't make Java flawed, nor does it make StaticTyping flawed; it just suggests you have a flawed understanding of it. So stick to your Ruby. --Costin

I would agree with francis' analysis and found it helpful. I would fall into the "provisional" category, and I adopted OO approaches as they seemed to solve problems I was seeing in procedural coding. I ended up adopting OO in an incremental manner, only adopting the parts that I found useful at the time; however, my overall appreciation of OO has grown. I am also opposed to the "essential" approach, and as soon as I see someone with an office plastered with UML diagrams ranging from "main()" to primitive data classes, I feel I have found someone who missed the underlying rationale for OO. The basic point is that software of a useful nature is too complex for any individual to fully understand. Instead we substitute understanding at the class level, rather than the program level. This is where the "essential" approach starts to fail. -- WayneMack

I don't question your and Francis' experience or the fact that the "provisional" approach is good for you. I do question, however, the boldness with which Francis generalizes his experience and makes the alternative look bad in a very simplistic way (Java's half-assed implementation of OO). It is also unwarranted to lump together in this "methodological pooh" totally unrelated things (like StaticTyping) so as to spread some kind of guilt by association. OO is also very pragmatic in statically typed environments, and even in Java. Java may not be the greatest example of elegance, but it is one great example of pragmatism. --Costin


What paradigm is NOT claimed to be "pragmatic" by its proponents?


See Also: OoVsFunctional, OoEmpiricalEvidence, BenefitsOfOo, GateKeeper (another claim), MixingParadigms

