People Who Dont Get Oo

About people who don't seem to see BenefitsOfOo after what should be a sufficient training or exposure period.

As opposed to people who understand OO deeply, but aren't very impressed with it?

If so, how does one tell the difference? Is there a "get it" test?

Reading the objections on this page and on OoGroupsBetterClaim?, it's clear that, at least for the subset of criticisms raised on those pages, the understanding of objects behind them is shallow at best.

I suggest you fix them at the source using reasoning and links instead of making a general statement here.


Regarding the suggestion to delete this page and merge it into LearningObjectOrientedProgramming

I kind of disagree with the merging. Trainers often claim that some people *never* seem to "get it"; this topic was meant for dealing with those people. The merging assumes it is all about training, but there may be some fundamental brain differences. Whether or not you agree that the problem is merely a lack of further training, some people believe there is a permanent barrier. This can also be the place to debate whether such a permanent barrier exists.

(Maybe this should be FrustrationWithOo? or NotGrokkingOoBenefits?, some topic that conveys the sense that some people cannot figure out the popularity of OO despite trying for a long time.)


Things that "bother" me about OOP (cf. ArgumentsAgainstOop):

OoIsPragmatic has a similar "complaint list".

My dislike for OO can perhaps be summarized as three things:


What should such people do? Is there enough procedural or functional work for them? Should they change careers despite being good at their favorite paradigm? Is excess OO hype doing them injustice?

First I would try to find out whether I can be any good at doing OO even if it's not my favorite paradigm (PlayHurt). After all, most programmers have to work in a different language in their job than the one they would pick if there were no restrictions. Also, I would look into AspectOrientedProgramming and see if that doesn't fix some of my gripes (one-dimensional vs. multi-dimensional, etc.). Otherwise, I would try out functional for a change, ideally with a language where you can mix and match styles (e.g. Python). You might want to read LanguageTrends and make your own guesses about where the market might be going next. If all else fails, I would look at other domains (e.g. EmbeddedSystems) before looking at other careers, as others have done before you on AlternativeJobsForProgrammers. Oh, and: now that both Cobol and Fortran have features to support OO, the OO hype is officially over. ;) -- FalkBruegmann


Re: What should such people do? Is there enough procedural or functional work for them?

Become a database administrator. OODBMS don't look like they will "interfere" much with that niche anytime soon, and many non-OO-getters seem to like relational.


Who is more dangerous? PeopleWhoDontGetOo, or PeopleWhoThinkTheyGetOo?? I think I've spent some time in each category, and the latter is pretty scary.

-- JoelRosenberger


Moved from OoGroupsBetterClaim?

OK, my apologies; structs are standard practice in a lot of procedural shops. I guess my point is: if procedural guys understand and use structs, why not just take the next logical step to objects? I only mention this because I'm the lone OO guy in a procedural shop, so this issue is dear to me. I really am trying to understand what the mental block to objects is. I feel constantly held back because I'm always defending things like objects, closures, abstraction, interfaces, and pretty much anything that steps outside the procedural norm. I get bombarded with silly arguments about overhead and complexity, even though we only write simple business systems. Worse, procedural guys, while arguing about complexity, seem perfectly willing to write three-page-long functions full of nasty switch statements and six levels of nested ifs. I could certainly be wrong, but I've found very few, if any, procedural programmers who understand the value of simplicity, abstraction, and an open mind.

[That is an interesting statement, because I feel the same about OO. I don't really believe in AbstractDataTypes for reasons described there. In practice behavior and data are not naturally tightly coupled. If you cannot turn your claims into something externally measurable, then you may not get very far. Does OO make less code? Does it reduce the quantity of code that has to be changed when requirements change? If you think so, then you have to demonstrate this side-by-side with comparable code of the other paradigm. If you just go around saying, "my code is more abstract and high-level than yours, neener neener", then you won't have much credibility with them. I personally now believe that paradigm and tool preferences are largely subjective. We pick tools that best fit our own minds. Assuming this is the case, you cannot convince others to think like you unless you can show some external, quantifiable benefit.]

I agree with you on that point; we do all pick a paradigm that fits our own minds. But how do you know what fits your mind unless you actually learn and try all the major paradigms? I try to demonstrate the benefits to them on a daily basis; they have to work with the code, so they see the benefits. Mostly that's through seeing methods that rarely have more than 10 lines of code and can be understood at a glance. Fortunately I write most of the code, so I tend to shape it the most. But back to my point: the problem isn't OO per se, it's that procedural programmers tend to have a "prove it to me" attitude. With that attitude, it's very hard to learn. They don't seem to understand that to learn you must put aside what you think you know, adopt the new practice, learn how to do it the new way, and only after learning the new way are you in a place to judge its merits. If it doesn't pan out after that, then fine; it's not like the extra knowledge hurts you or anything. But you can't effectively learn anything new if you question every step of the way. I think it comes down to MappersVsPackers. I'm a mapper; packing is too slow.

I had a different experience. Before I learned OO, I was using OO techniques in my procedural code. I used C headers to define the public interface to my code. I passed pointers to context structures to the exposed functions, just like C++ does. C++ just added language support to the stuff I was already doing. I could see immediately how it would simplify my life. The procedural programmers I was working with at the time had the same experience, since these techniques were part of our shared "best practices". We'd been burned too many times by tightly coupled spaghetti code and had drifted to these techniques out of necessity.

{{I would like a demonstration of such techniques that allegedly fixed "tightly coupled spaghetti [procedural] code". I am skeptical. Most prior such demonstrations simply had different views/estimations of future change patterns, or had no database to use.}}

You won't get a demonstration out of me. It's been 12 years and the code belongs to previous employers. It has nothing to do with database use. Instead of letting every function call every other function (the procedural equivalent of using GOTO), one groups related functions and the data on which they operate. High-level functions are exposed while low-level functions can be hidden. That lets you limit the number of entry points without having to write three-page functions, and defines a module's interface to the rest of the program. It's called encapsulation, and it's a proven technique for organizing code.
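
(A minimal C sketch of the technique just described: related functions grouped with the data they operate on, high-level entry points exposed through a header, low-level helpers hidden. All names here are hypothetical, for illustration only.)

 /* counter.h -- the public interface; callers see only this. */
 #ifndef COUNTER_H
 #define COUNTER_H

 typedef struct Counter Counter;           /* opaque: layout hidden from callers */

 Counter *counter_new(int start);          /* exposed high-level entry points */
 void     counter_add(Counter *c, int n);
 int      counter_value(const Counter *c);
 void     counter_free(Counter *c);

 #endif

 /* counter.c -- the grouped implementation. */
 #include <stdlib.h>
 #include "counter.h"

 struct Counter { int value; };

 static int clamp(int n) { return n < 0 ? 0 : n; }   /* hidden low-level helper */

 Counter *counter_new(int start) {
     Counter *c = malloc(sizeof *c);
     if (c) c->value = clamp(start);
     return c;
 }
 void counter_add(Counter *c, int n) { c->value = clamp(c->value + n); }
 int  counter_value(const Counter *c) { return c->value; }
 void counter_free(Counter *c) { free(c); }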

((Not all procedural languages make every function global. It sounds like a case of CeeIsNotThePinnacleOfProcedural. Generally I try to keep each task independent from other tasks, except for shared libraries, by communicating between tasks mostly via the database. This greatly reduces the issues of "naked global stuff" that I often hear about in procedural code. It is EventDrivenProgramming in a way: each event is treated more or less independently. Thus, you don't have one big EXE, but little "programlets". DivideAndConquer. Show me a good example of this technique failing for typical custom business projects, and I will reconsider my disdain for OO. Perhaps the OO crowd just needs a good dose of modern procedural techniques, not the tree-happy crap of the 1970's that gave it a bad name.))

An example of your technique succeeding or failing has no bearing on the merit of encapsulation or general OO techniques. You're creating a FalseDichotomy. I have no idea what "tree-happy crap" means.

((Some of the "top software engineers" of the 70's proposed dividing everything up into a hierarchy. JacksonStructures. It was sometimes called the "top-down" model. Anyhow, I doubt we can settle this without specific code to look at. We may just have to AgreeToDisagree until code can be found to co-study. You seem to define encapsulation as "good" by definition. To me, it is an artificial IS-A and PrimaryNoun grouping. Almost any one-size-fits-all taxonomy is a smell unless it is local, light, and temporal, IMO. It is weak at unanticipated requests (such as ad-hoc queries), concurrency, and multi-noun interaction. If you want, we can start a DrawBacksOfEncapsulation? topic to explore this further.))

I'd love it if you would agree to disagree. Instead you're skeptical of my experience.

((For one, it contradicts others' experience, and second, if it is a personal preference, then experience does not extrapolate very well. I would like to see something more concrete, such as simpler code or simpler handling of requirement changes without side effects. The examples in the textbooks often give one-sided change scenarios.))

My experience can't contradict others' experience, because it is mine and has no bearing on theirs. It isn't a personal preference; it's what really happened to me. I'm not lying to advance the international object oriented conspiracy.

((I don't mean to imply that you are purposely deceitful. I meant that your experience is that OO maps well to your own mind. I don't dispute that because I can't really measure that. The alleged benefits of OO seem to escape external measurement or clear demonstrations via code. My theory is this is because OO is a psychological view, not a technical one.))

Then why are you skeptical of my experience?

((Experience that you are more comfortable and productive with OO, or that it makes everybody else more productive?))

Neither. The experience I shared above in the statement that begins "I had a different experience."

((I don't question that OO may map better to your own mind. I am not really skeptical of that. I generally listen most to those who can carefully articulate their claims of betterment using examples and careful reasoning. Saying it is "cleaner" or "better maps to human thinking" is too imprecise to analyze carefully.))

I didn't say those things. I said I was already using OO techniques in my procedural code before I learned OO. Go back and read it.

[Almost every technique that I have adopted over the years and liked in the end had some clear early benefits. Other more subtle benefits may appear later, but after understanding the basics there were definitely some early good points to focus on, to motivate one to stick with it before the minor or subtler benefits made themselves visible. Why would OO be different? Why would only it have a "U"-shaped "get it" curve? Generally, the harder it is to articulate the benefits of something, the more likely the benefits are a subjective, mind-specific thing. Or the benefits are so minor that they are hard to see. Also, how come you are not able to articulate why your methods can be "understood at a glance" while their routines cannot? Maybe their own routines are also "understood at a glance" to them.]


Let me propose one class of programmer who does not seem to "get" OO. These are programmers who rely on the debugger and single step through code.

From this perspective, code structure has no meaning; as long as the computer can get to the next line of code to be executed, it does not matter where the line of code is placed. As long as the computer can find a variable or method, it does not matter what they are named. A global variable is really no different than a local variable; they are both just variables to be updated.

It is almost impossible to have a rational discussion between someone who is a "code reader" and someone who is a "single stepper." These two types of people seem to have completely different views of the software.

--WayneMack

I'm a single stepper and I "get" OO. -- EricHodges

Eric, I would greatly appreciate hearing what helped you understand OO. I have been having difficulty in reaching several of my programmers, and I would love to borrow any approaches you may have found useful. -- WayneMack

I'm not Eric, but I am another single-stepper who "gets" OO. One of the ways that I've helped "single-steppers" understand OO is to explain to them how an "object" can be modelled (in C or assembler) as a struct (or even array, since the two are so similar) with a pointer to an array of function pointers (a "Class") to "methods", where a "selector" is an offset into the array. Since every "Point" (a good starting point) shares the same array, they all share the same methods. Those arrays can be chained together with pointers and - voila - we have inheritance. This is the basic idea behind CeePlusPlus and ObjectiveCee, and single-stepping through the code can be enormously revealing. It typically doesn't take long for the subject to begin to appreciate, in this context, how stack frames, temporaries, and literals fit in. The most common reaction I've heard, especially from UI and Game programmers with backgrounds in assembler and C, is "Oh, I've already been doing this for a long time". CeePlusPlus and ObjectiveCee are basically syntactic sugar (more or less) around coding patterns that many of us already "have". -- TomStambaugh
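
(A minimal C sketch of the idea Tom describes, using a struct of function pointers in place of a raw array; the names are hypothetical, for illustration only.)

 #include <stdio.h>

 typedef struct Point Point;

 typedef struct Class {                  /* the shared method table */
     const struct Class *parent;         /* chain these for inheritance */
     void (*print)(const Point *);       /* "selector" 0 */
     void (*move)(Point *, int, int);    /* "selector" 1 */
 } Class;

 struct Point {
     const Class *klass;                 /* every Point shares the same Class */
     int x, y;
 };

 static void point_print(const Point *p) { printf("(%d,%d)\n", p->x, p->y); }
 static void point_move(Point *p, int dx, int dy) { p->x += dx; p->y += dy; }

 static const Class PointClass = { NULL, point_print, point_move };

 int main(void) {
     Point p = { &PointClass, 1, 2 };
     p.klass->move(&p, 3, 4);            /* dispatch through the pointer table */
     p.klass->print(&p);                 /* prints (4,6) */
     return 0;
 }

Single-stepping through main makes the dispatch visible: the call goes through p.klass before it reaches point_move.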

The low level approach Tom recommends helps a great deal. Single steppers will want to know what instructions all of this object froo-frah will generate. It may also help to explain how an object provides shared context for its methods and can optimize the number of decisions made during dispatching. Single steppers are often premature optimizers, so these can get them over their objections to additional levels of indirection and object creation time. It also helps to understand how a good garbage collector can optimize memory allocation time by keeping the heap contiguous and all of that. If you know of any particular obstacles these programmers are facing, let us know and we'll see what we can do. -- EricHodges

Tom and Eric, thanks for your responses. The approach you describe largely reflects mine, starting from an assembly and C base line. My personal revelation came from reading "Class Construction in C and C++" by RogerSessions. The particular group I am trying to reach consists of Visual Basic programmers who have never declared a structure beyond the rare collection. When I describe "single-stepper" I mean someone who really does not see any overall code structure. [Thanks again for your suggestions. Please recognize any terseness you may see in my response as being due to my frustration in applying suggestions and is not a rejection of the suggestions.] -- WayneMack

Oh, I didn't know they were VB programmers. Start by teaching them a good assembly language. Make them use that for big projects for 3 or 4 years. Then teach them C and make them work in that for another 3 or 4 years. Then they'll come running to use an OO language. :-) Seriously, I've never tried to convert a VB developer. Most of them seem happy with the tools they have for the jobs they do. Perhaps they should be left alone. -- EricHodges

<Tongue in cheek>Even VB "programmers" can be given remedial instruction.</Tongue in cheek> I seem to recall (though it's been many many years) that even VisualBasic has operations for constructing and indexing arrays and types - things like "DIM", "Type", and so on. The following example (copied from a google hit, http://www.other-space.com/vb/part2/files.html) allegedly constructs a type in VisualBasic:

 Type Info
  name As String * 20
  address As String * 50
  city As String * 20
  state As String * 2
  zip As Integer ' 2 bytes
 End Type

This type has four string variables, sized 20, 50, 20, and 2 bytes, and an integer which is 2 bytes in VB. The resulting type has 94 bytes for each instance. If I make an array of these, as follows:

 Dim xyz(1 To 100) As Info

I thus consume 100 * 94, or 9,400 bytes of space. It takes only a little bit of handwaving to explain the relationship between "records", "types", and structs, and only a little more to explain how arrays and array indexing work. At this point, we're really talking about "indirection", another abstraction that is often hard for people to grok. Many of the PeopleWhoDontGetOo that I've worked with don't get indirection. They don't get pointers, they don't get arrays, they don't get JSR, they don't get the distinction between the name of a function and an invocation of that function - they simply don't get indirection.
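
(A tiny C illustration of that last distinction, the name of a function versus an invocation of it; hypothetical example.)

 #include <stdio.h>

 static int twice(int n) { return 2 * n; }

 int main(void) {
     int (*f)(int) = twice;    /* the *name* denotes the function itself... */
     printf("%d\n", f(21));    /* ...the parentheses perform the invocation: 42 */

     int a[3] = { 10, 20, 30 };
     int *p = a;               /* the same idea with data: p refers into a... */
     printf("%d\n", p[2]);     /* ...indexing is indirection plus an offset: 30 */
     return 0;
 }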

I write all this to emphasize that even if you have to use the vocabulary of VisualBasic, the basic ideas can still be described. Many of your programmers will "get" what you're saying. Some will not. -- TomStambaugh

I've used a lot of indirection over the years, but don't seem to "get" the benefits of most OO still. The above is merely using VB to make a RAM database-like table. What does it have to do with OOP? OOP is not about DB-like tables. Note that the above can also be modeled via a nested or 2D associative array in a more scriptish language than VB. -t


I suggest that some people have a natural talent for abstraction. Others find abstraction enormously difficult. For those who are facile with abstraction, OO feels natural and obvious (although not always simple). For those who are challenged by abstraction, OO feels artificial and opaque. I think a similar split occurs regarding recursion -- some people get it, and some do not.

This claim is way too general and needs to be debunked. For example, if I don't feel OO is natural and obvious, you infer that I have a problem with abstraction. Oh, well. Do you want to shame me into not using other paradigms?

P = "for those who are facile with abstraction." (x is facile with abstraction)
Q = "OO feels natural and obvious." (OO feels natural and obvious to x)

Your statement: P2 --> Q2 -Q2 = "you don't feel OO is natural and obvious." (OO does not feel natural and obvious to y) -P2 = "you infer about me that I have problem with abstraction." (y insults z)

While (-Q ==> -P) <=> (P ==> Q), (-P1 != -P2) and (-Q1 != -Q2). --AnonymousDonor

From the (large or small?) category of programmers who have a natural talent for abstraction, some embrace OO. Others don't. And yet another group of "abstraction-enabled programmers" is just about neutral. OO is just one of several good ways to abstract.

I would suggest that there are some very apt musical analogies. Almost all of us are either right-handed or left-handed. Those of us who are right-handed experience playing a piano score in a completely different way than those of us who are left-handed. Because of the physical layout of the piano keyboard, and because of the different roles that bass, tenor, alto, and soprano play in music, there is a fundamental connection between the innate nature of the performer and the music that results. Right-handers have a far more difficult time "getting" a bass line than left-handers. Left-handers have a far more difficult time getting a descant.

Some of us are, for better or worse, wired so that we naturally think abstractly - in the same way that some of us are left-handed. Some of us are not. Big deal.

Now how convenient for you to equate thinking abstractly with using OO. Big confusion.

[Agreed. OO use is not a good indicator of abstract thinking ability. Problem solving requires abstract thinking. OO is, above all else, a convenient way to organize code.]

I'm talking about PeopleWhoDontGetOO - let me emphasize get. I've said nothing about those who "get" OO and still prefer other abstractions. I never said that OO use is a good indicator of abstract thinking ability. Some programmers may well "get" OO, "get" relational, or "get" functional abstractions - and may choose some mix of all of them.

Let's try inverting the assertion. People who get abstraction get OO - whether or not they choose to use it. I believe that I "get" the relational paradigm, I "get" the OO paradigm, and I "get" most other software paradigms. I suggest that PeopleWhoDontGetOO also don't "get" the other paradigms. They may use them - but they don't "get" them. There is a difference.


Re: Others find abstraction enormously difficult. For those who are facile with abstraction, OO feels natural and obvious....

As a RelationalWeenie who does not see benefits in OO, this is a rather sweeping brush. Relational is a powerful abstraction. Its power is that it is more consistent and better understood and defined than OO. It is closer to math. OO has the chaos of NavigationalDatabases. (Not that I like all current RDBMS products on the market.) Also, some of OO's abstractions often seem artificial to me (LimitsOfHierarchies and PrimaryNoun.) A bad abstraction is often worse than no abstraction.

PeopleWhoDontGetOo find it impossible to understand, chaotic, artificial, and poorly defined. This is not a statement of value or judgement; it is a simple reflection of your own words. QED.

I have yet to find a consensual way to measure abstraction. It tends to be one of those terms that triggers a HolyWar. Lacking a decent ruler, my subjective statements about abstraction should be considered on par with your subjective statements. People's favored abstractions tend to be those that map to one's way of thinking about things.

The problem with RelationalWeenies is that they think there should be only one overall guiding abstraction. OOWeenies like to have the abstraction fit the domain of the problem.

If the tables are not fitting the domain, then what the heck are they fitting? All those attributes are just random column names? Tables contain facts about entities of the domain.

Consider the following tasks:

 * Build Coke machine simulator
 * Build Shopping cart
 * Build Accounting Package
 * Build Reservation System

No matter what you say, the relational answer is always a table; the OO answer will always be domain abstractions. Who's right? Who knows, but I'll say the OO code is sure a lot more fun to program!

I disagree. He uses schema to build abstractions. (Let's be honest. There's only one RelationalWeenie. {see topic if that is supposed to be a vote}) He makes reservation tables, guest tables, etc. The difference is that he doesn't encapsulate behavior with those abstractions. I think that's because he's never needed to, due to the sort of work he does.

Come on now. The above is lopsided, guys. Here is the normalized version:

  Build Coke machine simulator
Yeah, but let's see you do an ad-hoc query on a Coke-machine :-P

You assume that requirement; not everything needs the ability to do ad-hoc queries on its data. A coke machine takes money and spits out drinks when a certain amount is input. That's all behavior, and requires no ad-hoc queries, just definable behavior. Putting a coke machine simulator in a database would be an exercise in ignorance.
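
(For illustration, a minimal C sketch of that "all behavior" view; the names and the 75-cent price are made up.)

 #include <stdio.h>

 #define PRICE 75                           /* cents; arbitrary */

 typedef struct { int credit; } CokeMachine;

 static void insert_coin(CokeMachine *m, int cents) { m->credit += cents; }

 static int try_vend(CokeMachine *m) {      /* returns 1 when a drink drops */
     if (m->credit < PRICE) return 0;       /* not enough money yet */
     m->credit -= PRICE;
     return 1;
 }

 int main(void) {
     CokeMachine m = { 0 };
     insert_coin(&m, 25);
     insert_coin(&m, 50);
     int vended = try_vend(&m);
     printf("vended: %d, remaining credit: %d\n", vended, m.credit);
     return 0;
 }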

{Behavior and "records" are often not one-to-one. See PrimaryNoun. OO often falsely assumes or forces a one-to-one relationship between the two. It is a lie about reality.}

It was meant mostly as humor.

As far as "virtual machines", see ClassicOoModelingApproach?. -- top [link is no longer valid. topic deleted or moved.]

[That page has nothing to do with virtual machines. Every program is a virtual machine, whether it imitates an existing real machine or not. The reasons we create virtual machines instead of real machines is because a) the real machine would be physically impossible or b) the real machine would be more expensive.]

But the real world is limited. I would rather go beyond its limitations if possible. Dewey Decimal was limited because it had to match the physical placement of books. Now we can go beyond it and search multiple orthogonal categories without forced, artificial nesting. I have seen computer automation that tried to recreate the manual approach, and it was not very effective. Sure, it probably reduced training, but did not result in much productivity gain. (I would note that user interface and internal structure are not necessarily related.)

[You aren't reading what I'm writing. The real world is limited. That's why we don't use the approach described on ClassicOoModelingApproach?. We create any virtual machine we can describe, not any virtual machine that could exist as a real machine.]

Make up your mind. If we are able to divorce from real-world limitations, then how about a relational machine? Relational is consistent and powerful, and tables are easy and compact to browse eye-wise. It is friendly on the brain (especially if we can replace SQL with something better), and friendly on the eyes; what more can you want? It fits most apps I have encountered. I can even envision a table-oriented interpreter. It would make debugging easier in my opinion. One's view of the internal state (variables, routines) is not limited to what the original product makers were able to conceive. If you are going to argue that relational rules are too restrictive compared to OO, then present an example at WhenAreStandardsRestrictive.

[Read what I write. I've never advocated limiting ourselves to real world machines. I already use relational machines. I also use maps, factories, trees, proxies, threads, priority queues, and many other imaginary machines. I'm not arguing against using relational databases. I just see no compelling reason to use them more than I already do. I've written many apps that used relational databases, but they needed much more than that. If all they needed was a relational database I wouldn't write the app. I won't argue that relational's rules are too restrictive compared to OO because they are orthogonal. OO and relational happily coexist in my work. I don't have to pick one or the other.]

Good. Then all we have to do is agree on WhenToUseWhatParadigm. Note that I have used NimbleDatabases such that I never had to use dedicated maps, trees, and priority queues because they were so easy to make with the existing table engine. Instead of threads, I would spin off a separate process that communicates with the current one via tables, which have locking and/or transaction handling built in. I do consider relational a SwissArmyKnife. No, it cannot do everything, but it does a good portion. Perhaps OO is a consolation prize for when the boss won't let you have your favorite table tools.
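
(A rough sketch of the spirit of that claim, in C: a plain in-memory "table" plus a scan-style query standing in for a dedicated priority queue. A real NimbleDatabase would do this with an ORDER BY query instead; the names here are made up.)

 #include <stdio.h>

 typedef struct { int id; int priority; } Row;    /* one "table" row */

 static Row table[] = { {1, 5}, {2, 1}, {3, 3} };
 static const int nrows = sizeof table / sizeof table[0];

 /* the moral equivalent of SELECT ... ORDER BY priority LIMIT 1 */
 static const Row *next_by_priority(void) {
     const Row *best = NULL;
     for (int i = 0; i < nrows; i++)
         if (best == NULL || table[i].priority < best->priority)
             best = &table[i];
     return best;
 }

 int main(void) {
     const Row *r = next_by_priority();
     printf("next: id=%d priority=%d\n", r->id, r->priority);  /* id=2 */
     return 0;
 }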

Further, trees, maps, and queues are, or can be, merely viewpoints of given information. You get much more adaptability in a system if you can use HAS-A instead of IS-A. I've written many queries, reports, and processes that produced a non-tree view of what may initially seem to be only a "tree". Relational helps one transcend a physical, mechanical view of things where each part is and does one and only one thing. The relativism aspect of relational is not only a nice bonus, but often a necessity in an org where the same info must be used by different parties interested in different aspects at different times, for uses that often cannot be anticipated up-front. (Added after material below was written.) -t

[I've done the same as you, when performance permits. When you can't afford to keep your maps, trees and priority queues in a NimbleDatabase, don't use one. When using a NimbleDatabase complicates code more than using a class, use a class. When separate processes introduce too much overhead, use threads.]

But this topic is about "getting" OO benefits. Is OO all about working with small chips and 1980-era RAM/disk sizes? OO is rarely associated with small and quick executables anyhow. I suggest we focus on code design and maintenance rather than CPU usage. However, if you have a specific example, it would be interesting to see how it stands up to an RDBMS or FoxPro. Related: AreTablesGeneralPurposeStructures.


Rereading the above debate, one question has never really been answered. Does OO allegedly fit the real world better, or fit the human mind better? And when this is answered, how does one tell if it fits such better? It is unclear whether the alleged benefits are psychological in nature, or mechanical. --top

The real world can only be known through the human mind. We are not in a position to differentiate those views. -- MichaelSparks

Top, I agree with a lot of your comments on various wiki pages, but disagree with your premise that the primary utility of a paradigm is how closely it maps to reality. With the advent of refactoring tools, mature SourceControl?, NUnit tests, and so forth, the cost of change is IMO the primary consideration. OOP code, when done well with lots of tests, is very easy and safe to change. Tables are not. The problem is a lack of sophistication among tools to integrate data and code.

In OOP if I want to change behavior I make the change in the code and check it in (say to add a behavior attribute and implement it). In relational, I make the change to both table and code. I have to generate SQL scripts for the difference, check those in, others have to run those scripts and it has to be deployed. If there is an error, rolling it back is a nightmare (as opposed to one-click undo in OOP source control). --BrianG

First of all, I never claimed that "the primary utility of a paradigm is how closely it maps to reality". In fact, it appeared to be OO proponents that claimed that, not me. As far as code changes, I'd have to look at how it's constructed. I've used DataDictionary-based designs such that zero app code had to be changed to add or change most column-related info, for example. OO-ers are too quick to create a MirrorModel, generating OnceAndOnlyOnce problems. It's a lot easier to read and compare a table than OOP set/gets, at least for my eyes (see FastEyes). But I do agree that existing tools and techniques make it difficult to map between the text-file source world and the DB world. However, this is probably an incomplete-tool issue and not an inherent fault of the paradigms being discussed. (FileDatabaseImpedanceMismatch??) In this perspective, your argument is a case of QwertySyndrome. Much more thought and effort has been put into OO-centric and file-centric tools than data-centric tools, out of industry focus/habit. Electric car technology might be more viable today if the same effort had been made to improve it as has been made for gasoline-powered engines. Most of the incremental innovation has been for gas engines because they have been the de-facto standard. -t


I'm actually opposed to OO for broad spectra of applications on the premise that it 'fits the human mind'. That isn't to say I disagree: the whole concept of 'object' does fit the human mind. It's just a shoddy justification for using OO for programming.

PageAnchor: MultipleViewsOfObjects?

Humans model the world via use of conceptual 'objects'. Note the emphasis on the word model. Models, even those in the human mind, are best used for prediction and simulation (computational tasks that are very closely related). But ObjectOrientedProgramming isn't about modeling the world. ObjectOrientedProgramming creates real objects in a computed world. But 'objects' aren't supposed to be real. 'Objects' are supposed to be in the mind - patterns over percepts that help us make predictions. In ObjectOrientedProgramming, we're forced to take a particular 'view' or 'model' of reality (an object model) and make it reality (in the computer). And that is where it fails: despite us understanding 'object', we also benefit from multiple simultaneous models and views. Being stuck with just one 'view' is rigid, inflexible, and not particularly well suited to the broader possibilities of modeling and simulation. We can easily switch from thinking that "all clouds are part of one entity that spreads out and gets thin in places" to "each cloud is a separate object" to "there is just one atmosphere with spatial distributions of water vapor". ObjectOrientedDesign cannot do this. When programmers impose a particular object model upon their program, they enforce properties upon the 'reality' of the program that should instead be attributed to a 'view' of the program. On his website and in various pages on this wiki, Top has done a very good job of showing many of the pitfalls of enforcing particular views on data.

ObjectOrientedProgramming based on 'fitting the human mind' would probably work well enough for simulation purposes. Our minds 'simulate' by 'imagining' objects in an environment then stepwise calculating how they interact... a process that is essentially the same as is done by computer simulators. Unfortunately for OO, even for this simulation we are better served by a system that can take multiple views over the data when inferring the future based upon it. I've rarely been satisfied with any object-based simulators with which I've worked. In the systems that have proven the most powerful, making the broadest spectrum of useful predictions, the 'objects' of interest are the 'facts' and the 'inference rules' - objects far less associated with 'fitting the human mind' than with 'simulating' it. Many of the most effective simulators today are expert systems and knowledge-engines, and as progress in the so-called 'weak AI' advances (and becomes more efficient), this will only become more true.

Using ObjectOrientedProgramming to do systems software is tolerable; like a clockwork engineer, the programmer can build pieces (domain objects) and fit them together piecewise until he has a working system. This might appeal to programmers who came out of another engineering discipline where they put objects together with their hands. It is rather like constructing a real object inside code. It goes well with the whole simulator angle, except that you might be able to put it to real work afterwards. But I'm a computer scientist. I've put together my fair share of things with my hands, but it doesn't 'fit my mind' especially well.

My understanding of computation far exceeds my understanding of mechanics, and when I use ObjectOrientedLanguages? for programming, the 'objects' I prefer to model are computational components: values, transformers, ports, message-queues, procedures, continuations, actors, expressions, messages, commands, facts, inference-rules, state-machines, grammars, parsers, codecs, presentation-layers, etc. All other domain objects will just be poor simulacra of the real things, but computation-objects are the real thing. I've applied this approach well, and seen it applied, in a broad spectrum of application domains. Other people have discovered the same... and have built libraries like Boost that are dedicated to supporting this semi-functional 'style' of ObjectOrientedProgramming. As to whether it is really ObjectOriented is an open question - I'm hesitant to call it that after the set of 'domain objects' is reduced to particular facts, tasks, and messages.

The best reasons I've seen for ObjectOrientedProgramming have nothing at all to do with it 'fitting the mind'. Of greater relevance is the ability to hook together arbitrary computations (via polymorphic methods), the ability to easily build configurations for purposes of unit-testing, and the ability to encapsulate code to prevent accidental coupling (which eases maintenance and reduces the blame-game). And all of these features can be provided by other mechanisms than ObjectOrientedProgramming. I'd choose lexical closure + first-class processes.
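
(A sketch of that last alternative: a lexical closure emulated in C as a function pointer paired with its captured environment. All names are hypothetical.)

 #include <stdio.h>
 #include <stdlib.h>

 typedef struct {
     int (*call)(void *env, int arg);    /* the code... */
     void *env;                          /* ...plus its captured state */
 } Closure;

 typedef struct { int step; } AdderEnv;

 static int add_step(void *env, int arg) {
     return arg + ((AdderEnv *)env)->step;
 }

 static Closure make_adder(int step) {   /* "captures" step in the environment */
     AdderEnv *e = malloc(sizeof *e);
     e->step = step;                     /* (error handling elided) */
     return (Closure){ add_step, e };
 }

 int main(void) {
     Closure add5 = make_adder(5);
     printf("%d\n", add5.call(add5.env, 37));   /* prints 42 */
     free(add5.env);
     return 0;
 }

The encapsulation is the same as with an object; the difference is that nothing forces a class taxonomy around it.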

I'd certainly benefit more from a language where I didn't need to repeatedly reinvent these computation-components using objects (I'm stuck using C++). Even Boost is an example of GreenspunsTenthRuleOfProgramming: it's an expensive, massive reimplementation and reinvention of half of Common Lisp. AreDesignPatternsMissingLanguageFeatures? I predict that the FutureOfProgrammingLanguages won't be ObjectOriented. You'll be able to simulate ObjectOriented, of course. But, excepting design and implementation of simulators (and games), you'll have very little need (or desire) to do so. And even for simulators, you'll eventually be able to choose instead a generic knowledge-system with data and prediction rules, which will provide a far more flexible and powerful simulator than you'll likely ever get by hard-coding a particular object-model. I predict that, in the long run, ObjectOrientedProgramming will be a passing fashion from which we mostly learned a few good lessons about the virtues of encapsulation and unit-testing. Either that, or ObjectOriented will be one more (usually premature) optimization for making interactive simulations fast enough for realtime gameplay.

I second this. Especially the part [PageAnchor:MultipleViewsOfObjects?]. I'm working on that though. -- .gz


EditHint: There is already a topic somewhere on OOP modeling the real world, if I remember correctly. Perhaps the above should be moved there.


See Also: LearningObjectOrientedProgramming, ObjectOrientedDesignIsDifficult, OldDogsNewTricks, MindOverhaulEconomics, FrustrationOverNotGettingOopDiscussion


CategoryOopDiscomfort, CategoryObjectOrientation

