Oop And Human Thought Process

Discussion based on an item listed under ArgumentsAgainstOop.

OOP does not model the human mind. The human mind is delightfully capable of holding many 'views' or 'models' of our perceptions at once - based on the patterns we need to emphasize for whichever problem we are solving. In a sense, an object in human perception is just one way of viewing the world. OO forces us to pick just one 'model' and entrench it in our program. Stateful objects and encapsulation both prevent us from effectively supporting multiple 'views' via functional or logical transforms. In a very real sense, objects are often 'intuitive' in a bad way that can easily lead us into error.
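To make the 'multiple views' claim concrete, here is a minimal sketch in Java (assuming Java 16+ for records and streams); the names Reading, byHour, and bySensor are hypothetical illustrations, not anything from this discussion. Plain immutable data can be re-projected into whichever model the problem at hand needs, whereas an encapsulated, stateful class would commit us to one decomposition up front.

 // Hypothetical sketch: two 'views' of the same data via pure transforms.
 import java.util.List;
 import java.util.Map;
 import java.util.stream.Collectors;

 public class MultipleViews {
     // Plain immutable data: no single decomposition is entrenched.
     record Reading(String sensor, int hour, double value) {}

     public static void main(String[] args) {
         List<Reading> readings = List.of(
             new Reading("a", 9, 1.5),
             new Reading("b", 9, 2.0),
             new Reading("a", 10, 1.7));

         // View 1: a time-oriented model, grouped by hour.
         Map<Integer, List<Reading>> byHour =
             readings.stream().collect(Collectors.groupingBy(Reading::hour));

         // View 2: a device-oriented model, grouped by sensor.
         Map<String, List<Reading>> bySensor =
             readings.stream().collect(Collectors.groupingBy(Reading::sensor));

         System.out.println(byHour);
         System.out.println(bySensor);
     }
 }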

Besides this, how we view things is primarily a function of how we learn to view things - i.e. the patterns that prove most useful to us are the ones that stick in our brains. Because of this, even though fundamental cognition in humans is relatively consistent across the species (with rare exceptions - just as there are exceptions to the '2 arms, 2 legs, 1 heart' rule), such things as culture and prior work experience can have a significant impact on whether OO objects feel so natural that one makes hand-wavy allegations that OO models the human mind, or so unnatural that one vehemently denies it.

I'd like links to some evidence that "cognition in humans is relatively consistent across the species" if the environment is filtered out as a factor. I'm not even sure that is legally testable outside of Nazi-Germany-like techniques. However, for the sake of this discussion, perhaps the nature-vs-nurture issue should be put aside, merely stating that developers, as they are, think differently. The cause of the difference is mostly irrelevant to this topic and perhaps a distraction. It's TMI for the topic at hand. -t

Basic human thought processes - skill acquisition, recognition, learning, and so on - are the same across the species, based on laboratory studies, including work under MRI. Most are even common across many animals. Study the subject in your own time if it interests you. Different human developers will think about different things - i.e. approaching a common problem from different perspectives. They may have different skill sets to apply, and use different tools to aid their actions. But they do not think differently - not in terms of fundamental cognitive mechanisms.

I'd have to question that. There are many ways to model processes and abstractions in one's head. Even notation-wise, sets can be modeled as graphs and vice versa. Music scores can be modeled as d-shaped marks or as time-versus-frequency bars (sometimes called "piano-roll view"). Neither is necessarily or summarily "wrong", just different. There is no good data on whether in-born individual traits will make one gravitate toward one or the other in the absence of environmental influences. Studies of separated-at-birth twins suggest that some of it may be in-born. -t

The topic line here is "OO Does Not Model The Human Mind." This claim is very different from "The Human Mind Does Not Model OO." An ability to model an OO process in our heads is a skill. Whether OO models our heads is a question of hard, low-level, cognitive psychology. People do have predispositions for different skills, but that isn't relevant under the topic line.

When people make statements such as "OO better fits the way humans think", they are generally talking about medium or higher levels of abstraction, not how their neurons are connected, etc.

I believe that fitness and modeling are very different issues. A round peg does not model a round hole, but fits one just fine.

They often talk about OO modeling some variation of "independent actors" that have dedicated responsibilities and externally-observable traits and behaviors. The implication is that humans relate better to thinking in terms of specialized workers, farm-work animals, specialized robots, etc. - animated "things" that perform fairly specific tasks or roles. Perhaps we can call them "specialized avatars". It's sometimes implied that this thinking model is shaped by our social nature as a species and by our association with "beasts of burden" such as mules, cows, and hunting dogs, which have provided food and shelter for us for tens of thousands of years, at least. It may even apply to the game we have hunted for millions of years. Even our religions are based around this avatar model: demons and deities with fairly specific characteristics and features.

There are some convincing arguments in evolutionary psychology that human intelligence evolved most significantly due to social and political pressure, as opposed to tool use. Look up 'Machiavellian intelligence' and 'Chimpanzee Politics' if you're interested. From this hypothesis, we would conclude that humans are well adapted to thinking in terms of social relationships, contracts, processes, and so on. But getting from there to OO is stretching too far. MultiAgentSystems would be a much closer 'fit' under this hypothesis.

While there may be some truth to this in general, it may not be universal, and also may not be the best way to organize software even if it does fit common initial notions. As you stated, humans are generally capable of using multiple different models as needed or in conjunction. -t

If you study evolution, you will learn that complex features (such as sexual reproduction, lungs, nervous system, organization of bones) are universal within a species, while the simple features (such as height and skin color) vary a great deal. It is universal that human brains recognize and hold multiple patterns (models) of percepts. The human brain is basically a big, parallel, pattern-matching machine - holding many views is what it does, period. Temporal-patterns (such as 'storm' and 'running') aren't significantly distinguished from spatial-patterns (such as 'lake' or 'cloud') - it all uses the same mechanism.

We don't notice because those views are filtered, rarely reaching conscious thought. Indeed, the more expertise we have in a skill, the less reaches conscious thought. Under an MRI, the brain of a neophyte chess player will glow like a lightbulb when contemplating a move, whereas the expert engages just a tiny little spot and makes killer moves. Good drivers often don't remember driving because all the important decisions are made subconsciously. Confabulation is a result of this process: if asked to remember what you ate for breakfast or how many red cars you passed driving on the way to work, your brain will make up all sorts of details to fit a scenario that barely entered long-term memory.

Anyhow, there are at least three distinct questions:

* Does OOP model the human mind?
* Does the human mind readily model OOP - i.e. is it an easily acquired skill?
* Would it even be desirable for software to act like a human mind? (We presumably don't want programs that behave as selfishly as people sometimes do. [1])

These questions have different broad answers. It might be interesting to seek something at the intersection of all three.


I don't see how OO forces the picking of just one 'model' to entrench in our programs any more or less than other paradigms.

OOP entrenches our decisions due to the combined use of mutable state and encapsulation. Mutation raises the covariance/contravariance issue: e.g. when writing into a collection of objects you may only supply a subtype, but when reading from the collection you may only assume a supertype. Views and models are a much broader issue than supertype and subtype - an OO model is really the decomposition of the problem domain into classes or interfaces and relationships, so the ability to support 'multiple models' would require the ability to easily switch between different class and relationship decompositions. In general, achieving this would require violating encapsulation. If concurrency is introduced, the messy OO multiple-models story becomes a whole lot messier.
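For readers who want the covariance/contravariance point in concrete form, here is a minimal Java sketch (the Animal/Cat names are hypothetical): a reader view of a collection may only assume a supertype, while a writer view may only accept a subtype, and the compiler enforces the split.

 // Hypothetical sketch of reader (covariant) vs. writer (contravariant) views.
 import java.util.ArrayList;
 import java.util.List;

 class Animal { String name() { return "animal"; } }
 class Cat extends Animal { @Override String name() { return "cat"; } }

 public class VarianceSketch {
     // Viewing: accepts List<Cat> or List<Animal>, but adding is forbidden -
     // we only know the elements are *some* subtype of Animal.
     static void view(List<? extends Animal> animals) {
         for (Animal a : animals) System.out.println(a.name());
         // animals.add(new Cat());  // compile error: cannot write
     }

     // Assigning: accepts List<Cat> or List<Object>; adding a Cat is fine,
     // but reading yields only Object - the element type is unknown.
     static void assign(List<? super Cat> cats) {
         cats.add(new Cat());
         // Cat c = cats.get(0);     // compile error: cannot read as Cat
     }

     public static void main(String[] args) {
         List<Cat> cats = new ArrayList<>();
         assign(cats);  // write through the contravariant view
         view(cats);    // read through the covariant view
     }
 }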

Here are a few paradigms that do much better than OO with regard to multiple models:

* FunctionalProgramming - immutable values can be freely re-projected into alternative 'views' by pure transforms.
* LogicProgramming - a single base of facts can be queried under many different decompositions.

The activity of writing a program is mostly about designing a model and entrenching this design in the program. If you want multiple views, you can do that by transforming objects just as much as in other styles.

Not every paradigm is programmed that way. It is an error to assume that OOP objects are as easy to transform as the modeling elements in other styles. And while support for multiple views is an important part of the question, another part is whether the different models have 'equal rights', e.g. with respect to mutation or binding changes from concurrent network updates. One way to get equal rights is to remove rights - make everything immutable, for example, or at least keep mutation outside the models.
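As one hedged illustration of "removing rights" via immutability (again in Java 16+, with hypothetical Account/ledger names): every 'model' below is a pure function over the same immutable value, so no view can be invalidated by a mutation happening behind its back; a 'change' is simply a new value.

 // Hypothetical sketch: equal-rights views over immutable data.
 import java.util.List;
 import java.util.stream.Stream;

 public class EqualRightsSketch {
     record Account(String owner, long cents) {}

     // Two 'models' of the same facts; neither owns or mutates the data.
     static long totalCents(List<Account> accts) {
         return accts.stream().mapToLong(Account::cents).sum();
     }
     static List<String> owners(List<Account> accts) {
         return accts.stream().map(Account::owner).toList();
     }

     public static void main(String[] args) {
         List<Account> ledger = List.of(
             new Account("ann", 500), new Account("bob", 250));
         // A 'change' is a new value; the old ledger and all views of it
         // remain valid, which is what makes concurrent views safe.
         List<Account> updated = Stream.concat(
             ledger.stream(), Stream.of(new Account("cid", 100))).toList();
         System.out.println(totalCents(ledger) + " -> " + totalCents(updated));
         System.out.println(owners(updated));
     }
 }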


[1] Actually, many software products are that way. For example, they may try to hog as much CPU as possible to make themselves seem more responsive than the competition, or stick their launch icons in every conceivable OS UI place without asking. -t


See Also: MindOverhaulEconomics


CategoryObjectOrientation, CategoryHumanFactors

JanuaryEleven

