We have four or so titles on this topic. I think it is time to consolidate them, or at least to make clear divisions.
People anthropomorphize. They treat inanimate things as if they were people. Programmers, good, clear-thinking, logical programmers, anthropomorphize about programs. They say things like, 'The program thinks that you want to . . . '. Well, strictly speaking the program thinks no such thing. It doesn't think; it merely executes instructions. Similarly, classes don't have responsibilities. They're bits of code. People have responsibilities; code just executes.
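For concreteness, here is the same branch described both ways (a minimal Java sketch; the names are invented for illustration, not taken from any real program):

  // Nothing here "thinks": a boolean is tested and one of two
  // instruction sequences executes.
  public class SavePrompt {
      static boolean modified = true; // hypothetical state flag

      public static void main(String[] args) {
          // Anthropomorphic: "the program thinks you want to save."
          // Mechanical: "the modified flag is set, so this branch runs."
          if (modified) {
              System.out.println("Save changes before exiting?");
          }
      }
  }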
It might work for simple things, but for heavy-duty stuff it is most likely a distraction. Here's what a HumbleProgrammer had to say:
"The use of anthropomorphic terminology when dealing with computing systems is a symptom of professional immaturity." (EwDijkstra, EWD 709)
Surviving in groups is "heavy-duty stuff". Increases in human group size co-evolved with the brain structures for language and symbolic reasoning. There's nothing immature about using those brain structures to write software. -- EricHodges
There's something immature about remaining anchored in anthropomorphic symbols when it has long been proven that humans can perform at a high level of abstraction. It is immature when better approaches are available for tackling particular problems. For example, in concurrent programming humans have a natural tendency to impose an observation order on the history of things, and therefore to reason in terms of execution histories. As Dijkstra observed, this is not very effective and is most of the time misleading. The claim that "this is how humans think" is in the general case just bunk. Humans also think in terms of algebras, logic(s), relational calculus, Petri nets, predicate transformers, and in so many other ways. -- CostinCozianu
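To make the concurrency point concrete, here is a minimal Java sketch (my own example, not Dijkstra's): reasoning by execution histories means enumerating every interleaving of the increments, while reasoning by invariant settles the question in one line.

  import java.util.concurrent.atomic.AtomicInteger;

  // Two threads each perform N atomic increments. There are astronomically
  // many interleavings, but the invariant "the counter equals the number of
  // incrementAndGet calls completed" holds in every one of them.
  public class CounterInvariant {
      static final AtomicInteger counter = new AtomicInteger(0);
      static final int N = 100_000;

      public static void main(String[] args) throws InterruptedException {
          Runnable work = () -> {
              for (int i = 0; i < N; i++) {
                  counter.incrementAndGet(); // atomic read-modify-write
              }
          };
          Thread a = new Thread(work);
          Thread b = new Thread(work);
          a.start(); b.start();
          a.join(); b.join();
          System.out.println(counter.get() == 2 * N); // always true
      }
  }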
You're assuming that anthropomorphing prohibits thinking at a high level of abstraction. Applying social intelligence doesn't necessarily prevent us from using other kinds of intelligence. And evolutionary psychology suggests that these other ways of thinking are rooted in social intelligence. Calling it "immature" seems like an attempt to shame people away from thinking about it or doing it. There's a lot of research about social intelligence. I don't think it's "just bunk". -- EricHodges
"A lot of research" doesn't prove a thing. There's lots and lots of research all over the place. EwDijkstra had results to show for his theories, and you bet he attempted to shame people away from using antropomorphic terminology. He took his mission as an educator very seriously. Furthermore, see OoIsNotAnthropomorphic. -- CostinCozianu
Calling it bunk doesn't prove anything, either. Shame isn't needed. If it doesn't work we should be able to express that without resorting to emotional tactics. I'm glad Dijkstra wasn't my educator. -- EricHodges
Comparison with other fields may be helpful. Some biologists avoid anthropomorphic language at all costs, and the end result is that they say the same thing in a more technically accurate but far less direct and understandable way. Computer science would do well to skip that movement. When we say the computer wants input, we all know we mean it has been instructed to request and receive input before its operation will continue, so why would we use the longer phrase? -- AnonymousDonor
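The long phrase and the short one describe the same blocking read (a throwaway Java fragment, purely illustrative):

  import java.util.Scanner;

  // "The computer wants input", unpacked: execution is suspended inside
  // nextLine() until a line arrives; nothing after that call runs first.
  public class WantsInput {
      public static void main(String[] args) {
          Scanner in = new Scanner(System.in);
          System.out.print("Name? ");  // the request
          String name = in.nextLine(); // blocks until input is received
          System.out.println("Hello, " + name);
      }
  }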
There's a fear that anthropomorphizing will lead people to imagine software is more intelligent than it really is. That fear is well founded, and people should guard against it, but my experience is that most programmers are well aware of how intelligent their software can be. It's usually programmers who provide a reality check for customers in this area. -- EricHodges
I think invoking EWD may be an example of QuotingNotThinking, or at least quoting out of context; either way, the original poster hasn't supported his thesis that OO is anthropomorphic. The only examples cited so far are 'the program thinks that you want to . . .' and 'classes have responsibilities'.
I stick with what I say: 'classes don't have responsibilities'. There is, in my mind at least, a difference between 'an expected behavior' and a 'responsibility'. Legal entities, people, or collections of people like corporations, have responsibilities; animals and inanimate objects don't. To say 'a mug has a responsibility to hold coffee' might be fine in a poem, but in ordinary prose it stretches the meaning of the word responsibility beyond breaking point; in fact, as a use of language it's downright irresponsible! ;-) -- StephenHutchinson
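To put the distinction in code (my example, not Stephen's): whichever word we choose, what exists in the program is only an expected behavior, a method that can be called.

  // Whether we say this class "has the responsibility to hold coffee" or
  // merely "exposes fill/drink behavior", the code is identical. Any
  // responsibility belongs to the people who write and call it.
  public class Mug {
      private int millilitres = 0;

      public void fill(int ml) { millilitres += ml; }

      public int drink(int ml) {
          int taken = Math.min(ml, millilitres);
          millilitres -= taken;
          return taken;
      }
  }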
There seem to be two or more somewhat distinct issues here. First, does OO better allow "social-centric" models? Second, do social-centric models make for better software? The second invites the question of whether social modeling is a crutch for those who cannot grasp more powerful concepts. In other words, should software fit our minds, or should we alter our minds to fit software's needs? On the flip side, one could also argue that to make applications more user-friendly, we need to use social models that better map to user thinking. But what about non-human applications, such as chemistry modeling with no UIs, or batch jobs? Perhaps if we better divide these issues, the arguments might be better organized. Just a thought.
I think OO enables the application of social intelligence to software development more than other paradigms I've used. I don't think that makes better software. Instead it makes software development easier because we have brains well suited to modeling and predicting social scenarios. Compare it to running across a field using the long muscles of the legs versus pulling yourself across by your fingertips. Both methods accomplish the same task, but one requires less effort because of the way our bodies operate. We evolved legs for running. We evolved brains for surviving in groups. -- EricHodges
Is ResponsibilityDrivenDesign part of OoIsAnthropomorphic? I remember somebody suggesting it was but I cannot find the original suggestion. I did a write-up that was removed from this topic, probably because I failed to establish a relationship. -- top
No, ResponsibilityDrivenDesign is not "part of" OoIsAnthropomorphic. ResponsibilityDrivenDesign can be used without anthropomorphing OO. One can anthropomorph OO without using ResponsibilityDrivenDesign. They are orthogonal.
Then what are the anthro features of OO? You have "self-handling nouns", which are kind of like little machines or even little people, but is that it?
Some OO features encourage the application of evolved social intelligence to design: objects have identity, they hide internal state (what they "know"), and they respond to messages (what they can be "asked" to do).
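In code, the "self-handling noun" reading looks like this (a Java sketch with invented names): the anthropomorphic sentence "ask the account to withdraw fifty dollars" maps directly onto the call syntax, where a procedural rendering would read as a verb applied to passive data, withdraw(account, 5000).

  // The object carries its own state and is "asked" to act on it.
  public class Account {
      private long balanceCents;

      public Account(long openingCents) { balanceCents = openingCents; }

      public boolean withdraw(long cents) {
          if (cents > balanceCents) return false; // the account "refuses"
          balanceCents -= cents;
          return true;
      }

      public static void main(String[] args) {
          Account account = new Account(10_000);
          account.withdraw(5_000); // "ask the account to withdraw $50"
          System.out.println(account.balanceCents); // prints 5000
      }
  }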
The ability to think fuzzily and to draw correlations that don't actually exist is one of the great strengths of the human mind, and it's why we aren't computers. Taking advantage of that strength is useful. TerryPratchett makes this a fairly common theme in his books, where various entities are repeatedly surprised by the ability of humans for self-delusion, and by how that ability actually *works* for them. Bet you didn't think that page would be OnTopic any time soon! -- ChrisMellon?
But that does not answer the question, "Is OO more anthropomorphic than other paradigms?" Most paradigms can model UsefulLies. Or, am I misreading the implications of the topic title?