There does not seem to be a consensus about what OO is. There are multiple proposed DefinitionsForOo. The issue often revolves around whether common usage or the originators should define it. If the originators define it, do we go with AlanKay or KristenNygaard? Are their definitions out of date? After all, over time the industry learns more. If we go with more recent practitioners, which celebrity deserves to define it?
Nor does there seem to be a consensus about why we need or want such a definition.
This phrase is a joke: I agree on what OO is. As a matter of fact, it is defined by its inventor, KristenNygaard, a distinguished computer scientist and recipient of the Turing Award, in NygaardClassification. That definition also coincides with the informed opinion and majority agreement of most computer scientists. For example, it is essentially the same as the definition given in StructureAndInterpretationOfComputerPrograms (a definition given from the outside by two of the most reputable functional programmers), as well as the definition given by Luca Cardelli and Martin Abadi in their seminal theoretical book "A Theory of Objects", and of course those authors acknowledge Nygaard. It also follows from a basic intellectual principle: KristenNygaard invented OO and is recognized as such by the scientific community, including with a Turing Award (the equivalent of the Nobel prize for computer science), so no amateur on wiki or elsewhere gets to redefine it. The rest really doesn't matter; let them be in error. -- CostinCozianu
The truth of the matter is that OO, being the most popular kid on the block, gets its fair share of wannabes and AuthorsDontRead, so it is easy to find other definitions of OO. For example, RobertCecilMartin on comp.object every now and then claims that OO is about managing dependencies through polymorphism and inversion of control (the definition varies slightly). Most of the folks who hold such "diverging opinions" have little experience programming in alternative paradigms, and are therefore unaware that such definitions are easily refuted. But the core of the issue is that no amateur gets the privilege to change the basic definition of OO as given by its inventor, one of the most distinguished computer scientists and pioneers. It is as simple as that.
The appropriate title would be "a lot of people don't know what the proper definition of OO is".
Definitions are generalizations, and therefore it should be expected that people will have different levels of agreement on them. People may disagree with certain details that follow from the concept while still agreeing on the concept. Every definition is ambiguous to some degree, and ambiguity increases as generality increases. OO is no different, and many people will have differences in interpretation. What I would find of interest would be to compare and contrast how OO principles affect design in relation to other, specific approaches, such as procedural. -- WayneMack
Do you have several definitions for "algebra"? How about for "Banach space"? It's as simple as this: the inventor of OO established it, and the vast majority of computer scientists acknowledged it, accepted it, and used it as the reference definition. Plus, for all the other definitions it is easy to see how they are technically flawed, in the sense that they do not distinguish OO from, say, functional programming.
Or looking at it the other way around, there is no uncontroversial yet nontrivial and rigorous definition in the field of mathematics for the word "mathematics", nor of "word" in linguistics, nor of "science fiction" in writing, nor of "consciousness" in cognitive science, and on and on, yet this doesn't slow down progress in math at all, doesn't prevent science fiction from being written, etc. (One might now make snide comments about cogsci and linguistics, however a noncontroversial definition wouldn't improve matters.) -- DougMerritt
Well, you have to make a case that this alternative view is warranted. We have a clear paradigm shift in the design of programs, we have a widely acknowledged and acclaimed inventor, and we have his authoritative definition of what he invented (acknowledged and used by his peers), so what more do you need? On the other hand, can you tell me who invented "mathematics", who invented "science fiction", who invented "word" in linguistics, etc.?
You misunderstand, I'm not arguing with you, I'm adding a second point. To summarize, (1) There is an unambiguous primary definition of OO, regardless of secondary definitions and regardless of confusion on the part of non-experts. (2) Even if (1) were not true, it wouldn't matter, just as mathematics proceeds without a rigorous definition of "mathematics".
I think it's possible for there to be multiple meaningful definitions at different levels of abstraction (from different perspectives). These definitions aren't necessarily in conflict.
Certainly. But in the context of a page called "NobodyAgreesOnWhatOoIs", it is useful to point out that there is reason to think that the page title misunderstands the situation.
Absolutely. The page title is an obfuscation. "I may not be able to define it, but I know it when I see it."
Most people who understand and use OO mostly agree on what it is. The people who don't agree are generally the people who don't like OO and are looking for something to attack.
I see definition fights on comp.object all the time among OO proponents.
OO is merely a clever and successful MarketingScheme? which has made GradyBooch a wealthy man. -- DanielBrockman
I largely agree. While OOP has uses under certain circumstances, the buzzword engine borrowed OO-influenced ideas and created a vast bullshit kit around them from roughly 1988 to 2010. This is not to say OOP has no use, only that it was heavily misused for hype/marketing purposes. I smell something similar with FP today (others disagree; a controversial topic). --top
The discussions below, as well as those on related pages, are riddled with amateurish definitions that can only confuse newbies and, all in all, have no value. Such nonsense as "inheritance is a form of delegation", or that OO is defined by encapsulation, or the definitions given in EncapsulationForDummies and ObjectOrientedForDummies, should be either deleted or isolated to a "discussion" page to avoid confusion and clutter. As a matter of fact, this page could be safely deleted unless some authors have some particular hubris about the importance of their writings.
There's no point, however, in having lots of more or less accurate definitions spread endlessly over multiple pages.
A constant problem with many discussions is that nobody agrees on what the "essence" of OO is. Perhaps we can find some working classification system for different OO viewpoints. For example, some think it is merely about organizing code to reduce change rework, but others see it as a way to model physical objects or put a social tilt into designs so that humans can better relate to them.
Is it necessary? Can't we just talk about how to accomplish certain ends without endlessly arguing about what something "truly" is?
It is hard to talk about a "chair" if nobody agrees on what a chair is. There is enough of a common example base in OO (the shape, animal, and device-driver examples) that one can start, but beyond that the nature of OO diverges from person to person.
I'll take that challenge. Find a definition of chair. For any such definition of finite length, there is either an exception to the definition or a thing that is a chair that isn't covered by the definition. And yet, we can still talk about chairs.
[A lot of this is because OO is a broad church embracing everyone from the prototype-based (Self, Io, JavaScript) to the class-based (Java, SmallTalk, etc.) to those who have built OO systems on top of other paradigms (CLOS, OCaml, various Scheme dialects, Python, Perl). Each of these has various flavors of usage as well, so talking about OO without qualifying it usually becomes a meaningless debate about whose definition we shall use.
We can only talk about chairs if we first state that we're only interested in wooden 4-legged chairs.]
I suppose we have beanbag chairs that are borderline "mini-couches". But, this gets back to the need for a working classification system for OO. I don't know if "modeling" can be separated from language or not.
That's why it's more constructive to talk about problems and solutions than it is to try to classify the solution. Whether we agree on the totality of OO or not, we all know it when we see it. It's not the OO guys who are pig-headed; when faced with a better solution, they always seem to listen. It's the procedural guys presenting old failed solutions as better that ruffles the OO guys' feathers. {Failed? Examples?} I don't see the OO guys slamming the functional guys, and it's because they know there's some cool stuff there and some nice solutions that are sometimes superior to OO.
If you are not on the "winning" side, it's easy to be defensive. I would like to see, for example, what the benefit of pattern matching is and how I could make use of it. Is that a functional issue? Don't really care.
Re: Whether we agree on the totality of OO or not, we all know it when we see it.
Sometimes "code in tables" and "lambdas" or "closures" are called OOP by some. Others see that as too wide.
There's a reason for that. Lambdas are used to make closures, and an object is nothing more than a collection of closures. Anyone who finds objects useful will find closures useful for the same reason, and vice versa. Lambdas let us make quick, single-method, one-shot objects without actually having to declare a class, and thus are enormously useful for object-oriented programming. They're great for tying together existing objects via events. They're also great for passing code up to higher-order routines. Were it not for the mutable state, much of this would be considered functional programming, but there's a lot of crossover between OO and functional. SmallTalk, the prime example of OO, considers closures a standard idiom to the point of making them language elements, so they are definitely a feature of OO. They are also a feature of functional programming, but that's not relevant to this argument.
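To make the closures-as-objects claim concrete, here is a minimal sketch in Python (not from the original discussion; the names make_counter and Counter are purely illustrative). The same mutable counter is built once as a dictionary of closures and once as a conventional class, and a client can't tell the difference:

 # A counter "object" assembled from closures: the state (count) lives in the
 # enclosing scope, and each returned function is one "method".
 def make_counter(start=0):
     count = start

     def increment():
         nonlocal count
         count += 1
         return count

     def value():
         return count

     # The "object" is just a dictionary of closures sharing hidden state.
     return {"increment": increment, "value": value}

 # The equivalent written as a conventional class.
 class Counter:
     def __init__(self, start=0):
         self._count = start

     def increment(self):
         self._count += 1
         return self._count

     def value(self):
         return self._count

 c1 = make_counter()
 c2 = Counter()
 c1["increment"](); c1["increment"]()
 c2.increment(); c2.increment()
 print(c1["value"](), c2.value())   # 2 2 -- same observable behaviour

Whether you call the closure version OO is, of course, exactly the question this page argues about.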
I've always thought it was pretty clear. Basically, an Object-Oriented language should at least include Encapsulation, Polymorphism, and Inheritance.
Encapsulation
Encapsulation is also a characteristic of good functional programming and procedural programming.
Data structures can restrict the visibility of their attributes and associated operations. The implementation of these operations is hidden. This happens all the time in Haskell as well as, say, Modula-2. Are these languages object oriented?
Encapsulation of what is performed in procedural decomposition? What procedural methodology performs encapsulation?
This definition depends on the definitions of "visibility" (remember SmallTalk) and "object". In other words, it uses itself to define itself.
No, I don't think it does. I think it depends on the Webster's definition of visible. In other words, can things be hidden and thus protected.
[That's the definition of data hiding, not encapsulation. Encapsulation is "the technique of keeping together data structures and the methods (procedures) which act on them." Closely related but distinct concepts.]
This has already been argued at great length on EncapsulationForDummies. Let's not clone that argument here.
Agreed. But it supports the assertion that nobody agrees on what OO is.
In my view, it supports the assertion that a vocal minority can create confusion by continually challenging concepts that the rest of the community have long since accepted. The definition of Encapsulation offered at the beginning of this section - something along the lines of "separating interface from implementation" - is the commonly-accepted meaning of the term. In particular, this was the meaning that "Encapsulated" had when it was adopted as one of the three or four fundamental characteristics of an object-oriented system. While "keeping together data structures and the methods which act on them" is, in my view, perhaps the most basic meaning of "object oriented", the industry uses the term "Encapsulation" to mean something different from that.
Here's the definition of "Encapsulation" used by Alan Snyder in "Encapsulation and Inheritance in Object-Oriented Programming Languages (OOPSLA 86 proceedings)":
Encapsulation: Encapsulation is a technique for minimizing interdependencies among separately-written modules by defining strict external interfaces. The external interface of a module serves as a contract between the module and its clients, and thus between the designer of the module and other designers. ... A module is encapsulated if clients are restricted by the definition of the programming language to access the module only via its defined external interface.
Here's what AdeleGoldberg and KenRubin? had to say about encapsulation in "Succeeding With Objects (AW 1995)":
Every object has a well-defined interface that specifies the behavior of the object in a manner that is independent of its implementation. This interface defines the collection of services that can be invoked by other objects. The implementation of an object describes how to carry out its services. This includes information private to the object, accessible to other objects only if services exist to provide such access. Similarly, the algorithms that implement services are private to the object. No other object can rely on how another object implements its services. This ability of objects to hide internal structure, thereby defining services independent of implementation, is called encapsulation.
Instead of "Encapsulation", DanIngalls used the term "Modularity" to describe the same concept in his contribution to the seminal August 1981 issue of Byte magazine. Here's his definition:
Modularity: No component in a complex system should depend on the internal details of any other component.
The sources cited here (AdeleGoldberg, DanIngalls, AlanSnyder?, etc.) created object-oriented programming. These definitions are their own descriptions of what they meant when they used the term "encapsulation" as a fundamental aspect of "object oriented". This definition - separating interface from implementation, hiding internal details, minimizing dependencies on internal details - is what the world means by "encapsulation", whether or not a contributor here agrees.
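To illustrate the sense of encapsulation these sources describe (separating interface from implementation), here is a hedged Python sketch. The Stack class and its method names are my own, and Python only hides internals by convention (the leading underscore) rather than by language enforcement as Snyder's definition requires, but the shape of the idea is the same:

 # Clients see only the external interface (push, pop, is_empty); the internal
 # representation is a detail they are not supposed to rely on.
 class Stack:
     def __init__(self):
         self._items = []   # internal detail; could be replaced by a linked list

     def push(self, item):
         self._items.append(item)

     def pop(self):
         if not self._items:
             raise IndexError("pop from empty Stack")
         return self._items.pop()

     def is_empty(self):
         return not self._items

 # A client written purely against the interface keeps working no matter how
 # the internals change.
 def drain(stack):
     while not stack.is_empty():
         print(stack.pop())

 s = Stack()
 s.push(1); s.push(2)
 drain(s)   # prints 2 then 1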
Polymorphism
Also a characteristic of good functional programming and procedural programming.
Identical (identically-named) operations can behave differently in different contexts. The operations that can be performed on an object make up its interface. They enable you to address operations with the same name in different objects.
Depends on the definition of "methods" and "classes". Also depends on Dynamic or Manifest typing, since interfaces aren't required with Dynamic types.
No, it DOESN'T depend on anything like that. It means just what the definition above says: "Identical (identically-named) operations can behave differently in different contexts." If that isn't true in a given system, then that system does not exhibit polymorphism and is not object-oriented. It isn't complicated.
By that definition a function can easily be "polymorphic" if it acts differently because of parameters, database content, etc. A "context" does not tell us enough.
Read the definition again. It uses operations as a plural, not a singular. The difference is significant, because when passing a mode parameter, the caller is selecting the sub-operation to be performed. With polymorphism, there are multiple possible operations that might be performed, but the caller is unaware of precisely which one will be called.
 function getPay(employeeID)
   // note: this is not meant to nec. be good code, only to illustrate concept
   record = getEmployeeByID(employeeID)
   pay = base_pay_constant
   if record.isManager
     pay = pay * 1.4
   else
     pay = pay * 1.1
   endif
   ...
   return(pay)
 end function

Here, the caller may have no idea whether the employee is a manager or not. Further, I don't see where the plurality matters. Also, the above definition depends on the definition of "object", which is what we are trying to define in the first place, and thus the definition appears to be recursive. One is trying to use polymorphism to define objects, but also using objects to define polymorphism.
Isn't the "if record.isManager" line requiring the caller to know whether an employee is a manager and selecting one of two possible operations?
No. That is possibly determined at run time, not at programming time, for an employee's status may change between programming and running.
Aren't do_something() and do_something_else() different names?
{That was originally done to simplify the illustration, so I rewrote it so as not to make it an issue.}
"Manager" is an attribute. There are multiple orthogonal attributes that can be assigned to an "employee" entity. Yes, you can artificially make any given attribute into a "type", but whether that is good or not is another matter.
Note the difference if we rewrite this with a polymorphic class. The getEmployeeByID(employeeID) function is also assumed to become a class factory that returns an instance of either a Manager or an Employee class, both derived from a record class.
 function getEmployeeByID(employeeID)
   if isManager
     return new Manager(employeeID)
   else
     return new Employee(employeeID)
   endif
 end function

 function Manager::getPay()
   ... // equivalent operation to do_something(...)
 end function

 function Employee::getPay()
   ... // equivalent operation to do_something_else(...)
 end function

I don't see how this relates to the original definition and example. I am seeking a clearer text-based definition, not a "how-to" coding lesson, I would note. Under languages with different characteristics than the typical "object.method" syntax, it can become an issue. Otherwise, one would seem to be suggesting that polymorphism is only a syntax convention.
Maybe the definition should say "code things" instead of "physical things". In other words, polymorphism is different code things answering the same message in a way that is appropriate to the given thing. (I use "thing" instead of "object" to avoid making the definition recursive, but still need to define "code thing".)
However, that still does not distinguish from two different functions which can take the same parameter.
Inheritance
You can use an existing type to derive a new type. Derived types inherit from, or delegate to, the data and operations of the super-type. However, they can override existing operations or add new ones.
Inheritance also isn't required in a prototype-based object system. Inheritance is really a form of delegation, as are prototypes, so delegation may be more of a base requirement than inheritance. Inheritance is certainly not required to write object-oriented code, but delegation is.
Inheritance is not a form of delegation
There are several forms of "inheritance"; delegation is one and there are others.
I would disagree with the initial statement. Inheritance and Delegation serve similar and overlapping purposes, with the caveat that I am assuming delegation is implemented by encapsulating a "base" object within a higher level object. Depending upon the operational characteristics I may need, I may choose either approach or combinations for an implementation, but as far as program decomposition, I would do the same method allocations for either approach. -- WayneMack
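As a rough illustration of that overlap, here is a speculative Python sketch (class and method names are mine) that produces the same getPay-style behaviour once by inheritance and once by delegation to a wrapped "base" object:

 # Same behaviour, two mechanisms. The pay factors echo the earlier pseudocode
 # example (1.1 for plain employees, 1.4 for managers).
 class Employee:
     def __init__(self, base_pay):
         self.base_pay = base_pay

     def get_pay(self):
         return self.base_pay * 1.1

 class ManagerByInheritance(Employee):
     # Inheritance: override the inherited operation.
     def get_pay(self):
         return self.base_pay * 1.4

 class ManagerByDelegation:
     # Delegation: wrap an Employee and forward to it, adjusting where needed.
     def __init__(self, employee):
         self._employee = employee

     def get_pay(self):
         return self._employee.base_pay * 1.4

     def __getattr__(self, name):
         # Anything not overridden here is forwarded to the wrapped object.
         return getattr(self._employee, name)

 m1 = ManagerByInheritance(1000)
 m2 = ManagerByDelegation(Employee(1000))
 print(m1.get_pay(), m2.get_pay())   # 1400.0 1400.0

For program decomposition the method allocations come out the same either way, which is the point above; the approaches differ mainly in how the "base" behaviour is reached at run time.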
(See EmployeeTypes for complaints about modeling employees with subtypes.)
Can't remember where, but I've seen the distinction made between object systems and object orientation, where the latter is a code-reuse solution in the form of language features (inheritance, and its consequent polymorphism). The former is any system in which logically cohesive clumps of state are each accompanied by a finite number of defined operations which preserve state invariants. Object form leads to formal analysis in the language of objects. Strictly speaking, object systems need not classify their objects, though, need they?
As a consequence of the above distinction, if you buy it, it is possible to build object systems in pretty much any language, if you're careful. And it's possible to create systems which are not object systems and code that is not object oriented, both using Java. The J2EE design patterns are death to object and OO concepts, as is the indiscriminate use of patterns like CommandObject.
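To make that distinction concrete, here is a speculative Python sketch of an "object system" in the above sense built without classes, inheritance, or any other OO language feature; the account example and its invariant (the balance never goes negative) are purely illustrative:

 # State lives in plain dictionaries; a fixed set of operations is the only
 # code that touches them, and each operation preserves the invariant.
 def make_account(owner, opening_balance=0):
     assert opening_balance >= 0
     return {"owner": owner, "balance": opening_balance}

 def deposit(account, amount):
     if amount <= 0:
         raise ValueError("deposit must be positive")
     account["balance"] += amount

 def withdraw(account, amount):
     if amount <= 0:
         raise ValueError("withdrawal must be positive")
     if amount > account["balance"]:
         raise ValueError("insufficient funds")   # invariant preserved
     account["balance"] -= amount

 acct = make_account("Ward", 100)
 deposit(acct, 50)
 withdraw(acct, 120)
 print(acct["balance"])   # 30

Nothing here classifies its objects into classes, which is exactly the point of the closing question above; whether such a system counts as "object oriented" depends on which definition from this page you favor.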
How a vocal minority can create confusion by continually challenging concepts that the rest of the community have long since accepted.
Unfortunately, the rest of the OO community is ill-informed, does not have a broad understanding of Computing Science, and cannot even be bothered to read Nygaard. Therefore they perpetuate the wrong equation OO = Encapsulation + Polymorphism (+ Inheritance), a definition that is easily refuted.
This surely merits the CategoryPerpetualArgument badge added below. Please do not remove it again.
Perpetual arguments are perhaps a clue that the discipline lacks sufficient formality.
Perpetual arguments have to do with lack of communication and understanding.
Perpetual arguments can be because people value different things and thus choose differently. You can argue all you want that chocolate is better than vanilla, but I'll never agree. Many arguments are in this category.
...and continuing to argue that chocolate is better than vanilla represents a lack of understanding.
Or a lot of extra time. Or an immense need to control those around you.
[In other words, IfYouDontLikeItYouDontUnderstandIt]
It's human nature for some to believe that they know something others don't or that they appreciate some subtlety others don't see. As a result, we can basically argue about anything. Does red really look red? But the basic concepts of OO are pretty well known despite the semantic and shading arguments most people like to have.
Software development is not art appreciation. Precise definitions are possible and desirable in engineering pursuits. People's livelihoods, and sometimes their lives, depend on it.
Says you. At this point in the state of the industry, software development is far more of an art than it is engineering. It should be engineering, and eventually might be, but as it stands, it is an art. [Adding to that, even within established engineering disciplines aesthetics is important. The Chrysler Building and the Golden Gate Bridge are no less safe for being beautiful structures. See also BeautyIsOurBusiness] Those are all interesting and true sentiments that don't make precision in software development one whit less desirable. If it "should be" engineering (and I note that some practitioners already treat it as such), then help make it so by giving vague terminology the boot.
[It should be noted that in structural engineering/architecture, aesthetics is always secondary to safety; anything else is considered professional malpractice. That isn't always true in programming, unfortunately; witness the common obsession here in WikiWiki with the "beauty" of source code and the occasional repetition of screeds (easily disprovable) that "beauty" implies soundness. Many here further claim that programming ought to be an art and not a science. At any rate, admiring the "beauty" of a program's source seems analogous to architectural critics admiring the "beauty" of a building by gazing at its blueprints--both miss the point. Beauty - and functionality, safety, reliability, etc. - should be evident in the finished product; if it isn't there, then nothing else matters.]
The term "object oriented" is vague enough that it's really only useful when thinking about designs at a high level of abstraction; for detailed design decisions and actual coding, software developers had better have a firm understanding of the more concrete concepts (polymorphism, overloading, inheritance, subtyping, etc.) that may or may not be part of OO, depending on your definition of the latter.
If OO were as vague or immature as you suggest, the SoftwareEngineeringInstitute would not be considering it for their roadmap. See http://www.sei.cmu.edu/str/descriptions/ooanalysis.html
[Fundamental to becoming a mature discipline is being able to follow (and contribute to) the guidelines of professional bodies, as Electrical Engineers do with the IEEE. Note other pages on the site, e.g. for three-tier architecture (modified 2000, http://www.sei.cmu.edu/str/descriptions/threetier.html); they refer to OO design (http://www.sei.cmu.edu/str/descriptions/oodesign.html), so they are not only concerned with high-level analysis. They explicitly talk about language dependencies, and under "Maturity" they say:
"Many OOD methods have been used in industry since the late 1980s. OOD has been used worldwide in many commercial, Department of Defense (DoD), and government applications. There exists a wealth of documentation and training courses for each of the various OOD methods, along with commercially-available CASE tools with object-oriented extensions that support these OOD methods."
These pages are introductory; of course they don't need to change much, since the principles and industry practice are in fact well known.]
((Parts of the practice of software are mature enough to call engineering; most are not. As an overarching discipline, "Software Engineering" doesn't exist, and apparently cannot so long as strong AI and a complete cognitive science don't exist, since software is the art of turning thought into algorithms, yet we are still quite far from understanding thought.))
Interestingly, the IEEE itself is all over OO, even for RealTime programming - see http://www.vmars.tuwien.ac.at/isorc2004/cfp_isorc2004.pdf circa 2004. Electrical Engineers are also involved in many aspects of software development.
"... since software is the art of turning thought into algorithms,..." A poetic turn of phrase, but not a definition of software that I'd ascribe to. There's nothing mystical about writing software, and it's no more a direct embodiment of thought than, say, a contoured cam that coordinates the motion of several parts in a machine.
The only thing that sets computers and software apart from other machines is the scale of complexity, the sheer number of "moving parts". It is exactly this tendency to wrap the writing of software in poetry, philosophy, and mysticism that drags the discipline down. It's really about logic, mathematics, and organization. The "art" comes mainly in the latter aspect, organization. How do you arrange millions of lines of code in a fashion that allows you to keep track of its various functions for understanding, maintenance, and extension? It's a hard problem with many partial solutions, but the fact that there isn't a single canonical way to solve it doesn't mean that software development is not engineering.
AI is an interesting subrealm of software development, but does not stand as model in any way for the entire discipline. And a cognitive science is no more necessary (nor less useful) to software engineering than it is to mechanical or electrical engineering.
To clarify my statements above regarding OO, I do not object to it as a method of organizing information and programs. But the interminable discussions about what constitutes a canonical set of OO language features is a waste of time, and worse, often obscures the individuality of those features and leads to neglect of other useful alternatives. I've been working with OO tools for a long time, and they're useful, but there's a lot of other useful ways of organizing code that get short shrift, and it's a real pity.
You said: "..no more a direct embodiment of thought than, say, a contoured cam ..." Exactly! And I didn't say anything about "mystical", but it is exactly as much an embodiment of thought as is numerically controlled machining. They both do that. The only real difference is that machining is limited to the things we know how to lathe/cut/weld/etc, and by now, that's a pretty solid engineering discipline, in large part because it has strict limits in what it can do.
Software is very much the same, except that we are in the unfortunate position of having universal machines, so we can do anything we can figure out how to do; we're not limited by our machines. This is unfortunate because it takes away the boundaries that, as with machining, make it possible to 100% figure out a discipline.
I wasn't being poetic at all; I meant it quite literally, and I stand by that.
And that's why I mention AI and cogsci. Not because they're all that important to the breadth and depth of software right now, but because they will be relevant to figuring out the entirety of thinking, which I'm arguing is the basis of software.
"... and by now, that's a pretty solid engineering discipline, in large part because it has strict limits in what it can do." Engineering is usually defined, loosely, as applied science, i.e. scientific knowledge applied to practical problems and useful solutions. It is most decidedly not defined as "an area of research that is 100% understood" (to paraphrase your apparent thinking). By definition, engineering adopts scientific knowledge that is well-enough understood to be applied to solving problems, but that doesn't mean that engineering disciplines are static - as new, relevant, knowledge is gained from research, it is incorporated into engineering practice. I don't think any of the major engineering disciplines could be considered to have defined all its possible applications, or to have incorporated all possible relevant scientific knowledge into its practice.
Beyond that, I'm not sure anymore what your point is. Developing software for practical purposes right now can be, and often is, considered an engineering activity. Another commentator's point above about aesthetics versus design integrity is very relevant; when building practical products, engineering practices and mindsets are valuable.
I guess an interesting related question is what the debates of OO-ness of languages are really about, especially here on WardsWiki. Are they idle musings among casual practitioners of software development? Are they discussions trying to codify terminology and practice? Are they the sort of discussions that examine concepts from pure research with the goal of choosing and refining for application to engineering activities? Or are they just a complete waste of time? I find some of the discussions on OO analysis and design to be interesting, but when it comes down to OO coding details and language features, I find them mostly in the latter category these days - a waste of time.
RE: It is my understanding that Nygaard did not coin the term "object oriented", but that AlanKay did. But Kay admits he was heavily influenced by Simula-67, Nygaard's area of specialty. Thus, the question remains, does Kay get to define the term or does Nygaard? -- AnonymousDonor
(See: HeInventedTheTerm, HeDidntInventTheTerm, DefinitionsForOo. Essentially, Kay was the first (on record) to say "object oriented"; Nygaard spoke of "objects" in the Simula literature but didn't add the "oriented" part. As for who should get the privilege, that's part of this particular PerpetualArgument. AlanKay is still alive; Nygaard, alas, is not.)
To what extent does object-orientation require reflection, and objects as the only first-class objects? My understanding is that only a few languages (especially Smalltalk and Java) are truly object-oriented, and C++, Python, and Javascript, for example, are just object-based.
You are not likely to get a consensus answer on that, because NobodyAgreesOnWhatOoIs.
Well, shoot. Who would've thought that the page title reflected the actual state of affairs. :)
Because it rarely happens on this wiki :) I would suggest wording the question something along the lines of, "How would each of the different viewpoints or definitions treat the following features/aspects...".
See also PrinciplesOfObjectOrientedDesign, ObjectOrientedForDummies, DefinitionsForOo