Benefits Of Oo

ObjectOrientedProgramming provides a different view of programming than ProceduralProgramming or FunctionalProgramming. On the one hand it's about bundling the state and behavior together, but in a larger sense it's about a mindset--a way of looking at programming.

Below is a list of benefits often assigned to or claimed about OO. If you know of any evidence or detailed argument for any of these, feel free to link them in. Please use the discussion area below the list for long content.


EditHint: It would be good to list how you can achieve these benefits using OO. Certainly you don't get beneficial effects magically, but by doing good work with the technique.

It has resisted "external" analysis. It seems ProgrammingIsInTheMind and the benefits may depend on the individual doing the thinking about the program.


EditHint: There's still plenty of material to mine from BenefitsOfOoOriginalDiscussion


Here is an "anecdotes are good enough" viewpoint:

http://lambda-the-ultimate.org/node/view/893?from=200&comments_per_page=200

Those of us who criticize OOP do not necessarily dispute personal or subjective benefits; the real problem is the extrapolated assumption that they are universal or "best practices". The article also suggests that OO better models the real world, but this does not seem to be a universal primary claim even among OO proponents. [insert link to existing real-world debate when found]


OO is good because it makes testing easier.

Example?

You can often test each class in isolation.

Same with modules.

[I'd go further and say that it's no more true for classes than for other units of code organization. Classes that don't depend on any others are in the minority, and often not very interesting.]

False for any kind of modular language I can think of (C, which lacks a formal model of modules, still falls into this category, but so does Oberon(-2); not sure about Modula-2, but I'm pretty sure it suffers the same problems). In my OpenAX.25 C project, I was unable to test a module that used the BSD sockets API because, obviously, it would attempt to link against the BSD sockets library. I couldn't define the sockets API in a module of my own because of duplicate-symbol definition errors. I had to resort to creating a thunk module called IPC, which defined IPC_recv(), IPC_select(), etc. These functions added a layer of indirection that allowed my test code not only to isolate the module under test, but also to alter the implementations of the respective IPC functions depending on the expected state of the application (StatePattern). This made unit testing much easier, but at the rather significant cost of many lines of pointless code.

In an object-oriented language (or, more accurately, a polymorphically-oriented one, which OO languages certainly are), this problem simply doesn't arise. I can trivially mock or stub out the IPC objects as I see fit in my unit-test code.
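The substitution described above can be sketched in Python (a minimal illustration; the names Transport, Node, and StubTransport are invented here, not taken from the OpenAX.25 project):

```python
# Hypothetical sketch: the unit under test depends only on an abstract
# transport interface, so tests can inject a stub instead of real sockets.
class Transport:
    def send(self, data): raise NotImplementedError
    def recv(self): raise NotImplementedError

class Node:
    """The unit under test: talks only to the Transport interface."""
    def __init__(self, transport):
        self.transport = transport
    def echo(self):
        self.transport.send(self.transport.recv())

class StubTransport(Transport):
    """Test double: replays canned receives, records sends."""
    def __init__(self, canned):
        self.canned = list(canned)
        self.sent = []
    def send(self, data): self.sent.append(data)
    def recv(self): return self.canned.pop(0)

stub = StubTransport([b"hello"])
Node(stub).echo()
print(stub.sent)  # -> [b'hello']: the stub observed what the unit sent
```

No thunk module or linker tricks are needed; the indirection is supplied by dynamic dispatch on the transport object.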

To get this same convenience in a modular language, the language must at the very least implement run-time-dispatched generic functions. Only CLOS implements this, to the best of my knowledge. But then again, it isn't called the Common Lisp Object System for nothing.

In languages that support HyperStaticGlobalEnvironments, such as Forth, you can "fake" run-time rebinding of module interfaces by simply loading/compiling the mock/stub module before compiling the tests. This wastes a bit of RAM, but since the tests occupy RAM only transiently, it shouldn't prove to be a burden. The only thing that would happen is it'll somewhat lengthen the time it takes to compile and run the unit tests.

This seems a very language-specific thing. CeeIsNotThePinnacleOfProcedural. I agree that introducing OO into a language probably gives one more name-space management options, but this is simply because the more paradigms you have, the more options you have. But, paradigm potpourri has its own downsides.

Whether C is a pinnacle or not is utterly irrelevant (please re-read where I also said the problem applies to every other modular language I am personally aware of, including Oberon, Python, Haskell, etc.). Show me even one modular programming language that exercises late binding at the module level. It can't be done; as soon as you do, you have all but by definition created an object system. --SamuelFalvo?

I first need to see whether you are claiming that non-OO paradigms will *always* prevent the implementation of such a feature, or merely that it is not common in practice. For example, very few procedural languages I've encountered support nested subroutines (along with nested scope); Pascal is the only one. But this is not a fault of the procedural paradigm. It is some kind of industry habit formed by who-knows-what, for I find nested routines and scopes useful.

Let me attempt to explain it via a bullet list to document my thought pattern. Maybe then my reasoning will become clear.

A module, by all definitions currently known to me, is a static entity: from the point of view of the CPU actually doing the job of running the code, it is just a chunk of code. CALLs into it use absolute addresses, or at best jump tables of absolute addresses. Everything else, such as ensuring the module's interface matches your expectations prior to compiling, is a compiler-offered feature that has no concrete run-time representation; even if it did (e.g., in the form of in-core type descriptors), it would have no bearing on the actual execution of code. Once the binary image is loaded into memory, it's fixed.

As soon as you say, "OK, this is stupid, let's add support for polymorphic module interfaces," well, you've basically re-invented object oriented architecture. More precisely, you've reinvented interfaces, a feature that basically turns your module into a kind of class, because now you've introduced the ability to have multiple concurrent instances of your "module" co-resident with each other, each offering their services to objects that they own.

Languages which I know for a fact require the "modules are static entities" invariant to hold:

Languages where this is not the case:

I just can't think of any other language in mainstream use that offers capabilities comparable, from the point of view of unit testing alone, to what OO provides.

If someone can prove me wrong, I'd love to hear from you. :) --SamuelFalvo?

I'll have to think about this more from a compiler/interpreter perspective. But generally, even IF this were an inherent flaw of procedural programming, one still has two options:


RE: As soon as you say, "OK, this is stupid, let's add support for polymorphic module interfaces," well, you've basically re-invented object oriented architecture. More precisely, you've reinvented interfaces, a feature that basically turns your module into a kind of class, because now you've introduced the ability to have multiple concurrent instances of your "module" co-resident with each other, each offering their services to objects that they own. -- SamuelFalvo?

Is this truly object-oriented? as in: results in ObjectOrientedProgramming? In my own experience, if you squint hard enough, anything is an object: functions in functional programming, whole processes in operating-system programming, whole operating systems in distributed-systems programming, etcetera. DLLs are objects to the dynamic link-loader. It doesn't surprise me that you can consider parameterized modules and such to be objects. A better question is whether that view is sufficient to make the programming object-oriented. I'm not particularly keen on trying to answer this question. (To me, ThereAreExactlyThreeParadigms in use that are truly 'meaningful', and 'object-oriented' is not among them.) But I do note that mere support for parameterized modules would not meet AlanKaysDefinitionOfObjectOriented, would not meet the NygaardClassification for ObjectOriented, does not imply inheritance or any sort of implicit support for delegation (for PolymorphismEncapsulationInheritance), and fails many other various DefinitionsForOo. Of course, NobodyAgreesOnWhatOoIs, so perhaps SamuelFalvo?'s definition is as good as any other.

There are languages that provide parameterized and abstract modules with concurrent existence. Consider: if a programming language is modular with neat, polymorphic, parameterized modules and such, but does not provide modules as first-class entities that can be produced or replaced on-the-fly from within the language, would this really benefit testing? Perhaps it is having 'first-class' component-configurations that is most relevant to making units easier to test. Component configurations are necessary when dependencies go in two directions (i.e. component A depends on something from component B, like a callback, and component B depends on something in component A, like a call).

What "module" means in a programming language really depends upon how processes are glued together from constituent parts. Merely having "modular" components, be they parameterized or polymorphic or not, doesn't make the task of configuring those components easy.

SamuelFalvo? says: This is what I am talking about. You can have support for parameterized modules, but the fact is, you're still invoking that module, not your mock module. MlLanguage supports parameterized modules, for example, but they're utterly useless from a unit testing perspective.


Structures and State

Moved from ObjectOrientedDesignIsDifficult

Some opinions on OOP from a structured programming perspective (not using a true OO language):

Although some of us hate the hype behind OOP, we have found some ideas from OOP to be very useful in structured programming. OOP techniques can save typing, shorten the code, reduce copy-and-paste or include-file tricks, and can make code more readable and less error prone. However, the opposite is also true: when not used carefully, OOP can bloat up code, overly complicate code, cause more errors (especially if dangling objects exist), and cause a lot of unneeded line noise (free/create/new/destroy, etc.). This is why some prefer to use the stack, along with modules, and not just objects. Some are not fond of purely heap-based OOP languages.

Consider a case where some OOP techniques are useful: you have a struct/record that you wish to fool with. Say you need a new experimental structure based on an old one. You could copy and paste the old structure into a new file and play with it, or you could cast the struct/record using alignment tricks in C/Pascal. With old procedural coding you end up having to do these dangerous tricks, or copy and paste to reuse that old structure. Sometimes you can write some procedures to wrap the old struct/record - but then you end up reinventing inheritance and writing boilerplate code! OOP hides this boilerplate code: it hides the "self" or "this" or "me" parameter you would otherwise have passed to the procedures as a pointer or VAR param. In OOP you can safely inherit and play with a struct/record (class) without copy-and-pasting the old one, and without writing procedural wrappers that emulate inheritance!
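The point above can be sketched in Python (a hedged illustration; Point and Point3D are invented examples, not from any particular codebase): inheritance extends an existing record-like class without copy-paste or casting tricks, and the implicit `self` replaces the pointer/VAR parameter you would pass by hand in procedural code.

```python
# Hypothetical sketch: extend a record by inheritance, no copy-paste.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def move(self, dx, dy):       # 'self' is the hidden VAR/pointer param
        self.x += dx
        self.y += dy

class Point3D(Point):             # reuses Point's fields and methods
    def __init__(self, x, y, z):
        super().__init__(x, y)
        self.z = z

p = Point3D(1, 2, 3)
p.move(10, 10)                    # inherited behavior still works
print(p.x, p.y, p.z)              # -> 11 12 3
```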

CeeIsNotThePinnacleOfProcedural, nor is Pascal. For one, their structures are static at run-time. If they were based on SetTheory, then one could use set operations to add or remove elements from an existing set of fields. Set theory is far more flexible for managing variations-on-a-theme than sub-typing. If you don't understand why this is so, we will never agree. LimitsOfHierarchies gives some examples. Without more details about what you are actually trying to achieve (business case), I cannot suggest an alternative language or solution. (Related: DeltaIsolation.) --top

What? Of course you can create dynamic structures at run time in Cee or Pascal. You just can't query them using a relational language so easily. But it could be done using parameter tricks, or a macro preprocessor. Consider an Array of Records or an Array of Structs. You can add and delete items from the array. A record or struct is in fact a Tuple in disguise. An array cannot be queried easily, but one could do it using tricks. Items in an array or list can indeed be inserted and deleted - at run time. Arrays or lists can expand and be saved to and loaded from a storage medium too (usually via files, but we don't have to see them as just files: they could be relationally stored in several different files, split up for optimization - that's a physical issue we shouldn't worry about). The values of arrays or lists can be changed at run time. The trick would be making the items in the array more query-able than what is currently offered, perhaps reinventing RelProject or TQL ideas. Lists and arrays usually offer only ridiculously simple operations such as remove, delete, append, insert, etc. There is no query language available for the array or list. Databases are about queries more than people realize. What's missing in Objects, Structs, and Records are queries.

As for requiring Business Cases, I've provided plenty that you've missed. More concretely, consider a GUI button widget where one wishes to modify the button to have a border around it. Inheriting the button, without damaging the old code or using copy-and-paste, allows us to muck around with a new button. The button does not need to be stored relationally - are you going to store the button's caption and click behavior in a table? And how are you going to inherit this button without resorting to WinAPI-style coding using procedural code, if not using relations?
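The button scenario above can be sketched in Python (Button, its `render` method, and the border decoration are invented purely for illustration; no real GUI toolkit is assumed): the subclass reuses the original widget untouched and layers the border on top.

```python
# Hypothetical GUI sketch: add a border by subclassing, leaving the
# original widget's code completely untouched.
class Button:
    def __init__(self, caption):
        self.caption = caption
    def render(self):
        return f"[{self.caption}]"

class BorderedButton(Button):
    def render(self):
        inner = super().render()   # reuse the original rendering
        return f"+{inner}+"        # decorate it with a "border"

print(BorderedButton("OK").render())  # -> +[OK]+
```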

If you don't mind, I'd like to stay away from GUI engine design here and focus on biz domain objects (customers, accounts, invoices, etc.). For one, most custom app developers do not develop GUI widgets from scratch. Second, GUIs are a very long and involved topic. The answer to the above would likely depend on the details of a specific GUI engine: one GUI engine may have a difficult spot where another doesn't, and vice versa. (A set-friendly GUI engine would generally start widgets out with all possible behavior and features, and filter-based "elimination" would be used to exclude a border from a button, not the other way around as often found in tree-oriented thinking.) I'd suggest you visit ProgrammingLanguageNeutralGui if you are interested in such.

Moved response to DomainNicheDiscussion.


One problem with OOP is that the algorithms become welded into the class, even though we aren't sure that an algorithm will only ever need to be part of that class! This leads to messes such as multiple inheritance and overly complex solutions (IMO). Some algorithms simply need not be welded to a class, and we can't decide this up front! As a structured programmer speaking here, I therefore do not use pure OOP languages or any language that tries to be pure (Java, etc.). However, even in these so-called pure languages one can escape the object system by using the global class (i.e. in Ruby one can DEF a procedure without it being tied to a specific class). Many so-called pure languages still have ways to escape the OOP model when you need to.


From AspectOrientedProgramming

If we look back, using POP (Procedure-Oriented Programming), we must deal with all the concerns in a line. Though we can factor the code out into different functions, the main routine still controls the whole process. This is the linear model. When OOP is introduced, we can present the world in a more natural way by describing different objects and their functions. Connections between different objects form a network, a matrix of type vs. behavior. This can be called the two-dimensional model.

How is "natural" being measured here? I am bothered by that claim.

"Natural" is somewhat informal, but the improvement mentioned above could be measured in terms of the degree to which unrelated concerns can be expressed (and developed & tested) independently of one another. OOP, when introduced, was a step up from POP languages of the same era due to the indirection from interface to implementation (message passing or virtual functions). One may encapsulate unrelated concerns in different objects, and these objects may then interact blissfully ignorant of the concerns encapsulated by their companions. Compare POP, in which the calling procedure must be aware of and test for 'unrelated' concerns so that it can properly select a procedure to call, forcing the system to be coded with a much more 'global' policy and more 'global' data available to procedures.

Of course, OOP still fails grievously when dealing with related concerns such as concurrency management, persistence, logging, optimization decisions, memory management, etc. As a class, these are called CrossCuttingConcerns. If OOP succeeded at these, there would be no need for AOP. Similarly, OOP also runs into the ExpressionProblem - adding new 'verbs' to an interface requires touching every class in the project.

Re: expressed (and developed & tested) independently ... step up from POP languages of the same era ...

I'm skeptical. I'd like to see semi-realistic coded examples.

The simplest examples are available in procedures containing things such as 'baker.bake(ingredients)'. In procedural, the calling procedure must contain the information to locate the correct bake method, such as 'if isBreadBaker(baker) then bakeBread(ingredients) else if isPieBaker(baker) then bakePie(ingredients)'.

In a unit testing framework, or in independent development, it would be easy to create a 'mock' baker that can report which ingredients it has baked. This mock baker could be queried after tests to ensure the caller is working correctly. These tests, and the mock baker, would be able to coexist with the final application code. In the procedural methodologies, independent testing would require mock-implementations of 'bakeBread' and 'bakePie' and so on, and the testing framework would need to be built and maintained independently of the application.
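The baker example above can be sketched in Python (a hedged illustration; the class and function names are invented): dynamic dispatch replaces the procedural if/else chain, and a mock baker records what it was asked to bake so tests can verify the caller.

```python
# Hypothetical sketch of the baker example: dispatch picks the right
# 'bake', and a mock baker makes the caller testable in isolation.
class BreadBaker:
    def bake(self, ingredients):
        return f"bread from {ingredients}"

class PieBaker:
    def bake(self, ingredients):
        return f"pie from {ingredients}"

class MockBaker:
    """Test double: records ingredients instead of baking."""
    def __init__(self):
        self.baked = []
    def bake(self, ingredients):
        self.baked.append(ingredients)

def run_bakery(baker, ingredients):
    # No isBreadBaker/isPieBaker tests here: dispatch handles it.
    return baker.bake(ingredients)

mock = MockBaker()
run_bakery(mock, "flour")
print(mock.baked)  # -> ['flour']: the test verifies the caller's behavior
```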

And to stem a likely objection: I fully agree that one could use ObjectOriented techniques even in a ProceduralProgrammingLanguage?. One is free to call a 'bake' script or 'bakefn' function associated with the baker. One could even put such a script into a database. But most people would just say you're reinventing OO. ObjectOriented programming languages only aim to make it easier and sometimes safer (than in procedural languages) to use these techniques. (page anchor OOP_techniques_impl) {My objection is near that anchor also.}

That's reinventing Lisp, not OOP. And, it only "makes it easier" in simplistic textbook examples or device drivers, not production design.

It is OOP, which was later reinvented in Lisp as CLOS. And read the above again: "make it easier" refers to the use of OOP techniques in OO languages (relative to the use of OOP techniques in procedural languages) - it isn't something I'd expect you to contest. Refer to prior sections of the page if you wish to contest claims about the OOP techniques themselves providing benefits over procedural methodologies.

I don't believe this is correct. That ability existed in early Lisp.

[Object orientation (in Simula I) dates from 1962, which is roughly five years after the invention of LISP, but CLOS and its direct predecessors date from the 1980s.]

The ability to implement CLOS existed in early Lisp, if that is what you mean, TopMind. But, by that reasoning, even SnuspLanguage is OOP. Please don't appeal to the lowest of the FourLevelsOfFeature.

No real effort to support ObjectOriented programming techniques in Lisp existed until the late 70s, well after SmalltalkLanguage and SimulaLanguage had some time to gestate among the people at MIT. (Well, there was at least one other effort, LOOP. Not sure when that one started, or what happened to it.) Anyhow, CLOS is another case of building one language inside another - a rather well-developed feature of Lisp.

I thought it was Simula II that introduced the OO features, not Simula I. Anyhow, since Lisp makes it easy to mix data and code, any data structure can also contain code. Thus, it may have:

 (bakers
    (baker01 (attributes...) (code for baker01...))     
    (baker02 (attributes...) (code for baker02...))     
    (baker03 (attributes...) (code for baker03...))     
    Etc...
  )

Although I agree such may be an "OOP concept", it was not "invented" by OOP nor is it exclusive to OOP. Other paradigms may rightfully claim it as one of their techniques.

Code in structures without any extra support is FunctionalProgramming. It doesn't help with constructors or inheritance, with automatic 'self' reference, with dispatch, etc.
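The distinction can be sketched in Python (a hedged illustration; the record layout and names are invented): a function stored in a plain data structure is first-class code, but there is no implicit 'self', so the record must be threaded through by hand - exactly the explicit pointer/VAR parameter that OO support hides.

```python
# Hypothetical sketch: a "baker" as a plain record holding code.
# The function is data, but nothing binds it to its record.
def bake(record, ingredients):
    return f"{record['name']} bakes {ingredients}"

baker01 = {"name": "baker01", "bake": bake}

# The caller must pass the record in explicitly (no automatic 'self'):
result = baker01["bake"](baker01, "rye")
print(result)  # -> baker01 bakes rye
```

Constructors, inheritance, and dispatch would all have to be built by hand on top of this, which is the extra support the comment above refers to.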

Constructors could also be considered a form of EventDrivenProgramming. It's an "on-create" event. Again, it's a "shared feature". The others may be also upon further analysis.

Ah! I think I understand. You're objecting to the idea that OOP perhaps "owns" certain ways-of-organizing-code, and they can't exist within other orientations or designs. Who is promoting this idea to which you are objecting?

I suspect you'd find that many people do, in fact, believe in the existence of MultiParadigmProgrammingLanguages. I certainly do. Since I believe in MultiParadigmProgrammingLanguages, I don't consider features to be "owned" by paradigms/orientations; rather, I say that features are "necessary" to paradigms/orientations, or that a paradigm/orientation is "supported" by a language having certain non-exclusive sets of KeyLanguageFeatures. Any given language feature might be necessary to supporting many paradigms.

Actually, mixing data and programming code was more or less how things started out in computers. They were separated to help manage code, ironically. Now we are shifting back that way, as modern techniques allow the power of combining them but hopefully without most of the original downsides that made it frowned upon.


Re: Reduces method length

I'd like to see a demonstration. The only example I can think of that made such claim is related to the controversial SwitchStatementsSmell.


Re: Reduces the impact of requirement changes on code.

Moved discussion to OopAndChangeImpact.


Contrast ArgumentsAgainstOop


CategoryObjectOrientation


EditText of this page (last edited December 15, 2012) or FindPage with title or text search