A great deal of theoretical criticism of OOP as an abstract concept exists already. The HumanFactors perspective on OOP, on the other hand, looks at what sort of behaviors and results OOP encourages. To me, OOP seems vulnerable to criticism on grounds related to HumanFactors in several ways.
First, OOP proposes an undesirable dichotomy between class designer and application programmer. The more senior (and presumably more skilled) class designer is artificially separated from the task of actually binding his work into a useful product, allowing him to simply declare victory in the form of a BlackBox of only hypothetical usefulness. This is sub-optimal because the more skilled programmer should actually be participating in the difficult task of final delivery.
- I would argue that it's OOD that promotes an undesirable dichotomy between class designer and application programmer. The designer, who may not even have a programming background, often uses OO concepts to create a class design which may (if lucky) or may not (typically) reflect what a skilled programmer would implement. In shops where I've seen this work well, the designer's class design serves only as a conceptual model, which the implementation may not necessarily reflect. In shops where this works poorly, the programmers are forced to adhere to what may be a deeply flawed class design. This isn't a flaw of OOP per se; rather, it is a flaw in certain development methodologies that employ object orientation.
- That dichotomy exists, and I would say this is a similar but distinct HumanFactors problem. Also, I agree that this particular bullet is not really a 'programming' problem per se. In fact, I think that in the title of this article, and in many places where I reference 'OOP' in its body, I am using this acronym as a bit of a catch-all, which seems sloppy. But I do think that hiding-behind-the-black-box is an antipattern fostered by object orientation in general (which I will call 'OO' - how 'bout that?). A related problem is the antipattern in which the intractability or unavailability of the class designer results in necessary changes being applied to the class consumer instead of the class itself.
Second, OOP encourages open-ended, subjective debates about such things as class hierarchy. Should "Manager" be a subclass of "Person," "Vendor," or both? These questions sprout like a cancer from OOP's attempt to shoehorn what amounts to relational data into a hierarchical model. There is no right answer to such questions as these, and their abject irrelevance to reality seems, to me at least, to be self-evident.
- Increasingly, experienced OO developers subscribe to the belief that classes in an information system should not model the problem domain. Rather than use classes to create a simulation of the problem domain, they use classes to define computational machinery that processes data about the problem domain. Classes model the computational domain, i.e., an abstract machine, not the problem domain. Rather than having classes like "Manager" and "Person" and "Vendor", there are classes like "Query", "Form", "RecordSet" and "DatabaseConnection". Notions of "Manager" and "Person" may exist dynamically (say, as instances of RecordSet), but they do not belong to some static hierarchy. Most information systems are computational machinery for processing data about the real world; they are not (or should not be) simulations of the real world. The distinction between the two is significant, and an explicit, intentional awareness of the former is arguably a superior basis for effectively employing OOP than a naive implementation of the latter.
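The distinction drawn above can be sketched in code. This is a hypothetical, minimal illustration (class and field names are invented, not taken from any real system): the classes model computational machinery, while "Manager" and "Person" exist only as data flowing through that machinery, never as a static class hierarchy.

```python
# Hypothetical sketch: classes model the computational domain (an abstract
# machine), not the problem domain. Domain notions like "manager" appear
# only dynamically, as record data.

class RecordSet:
    """Generic container for rows of data about the problem domain."""
    def __init__(self, rows):
        self.rows = list(rows)

    def where(self, predicate):
        # Returns a new RecordSet holding only the matching rows.
        return RecordSet(r for r in self.rows if predicate(r))

class Query:
    """Computational-domain class: knows how to filter a RecordSet."""
    def __init__(self, field, value):
        self.field, self.value = field, value

    def run(self, records):
        return records.where(lambda r: r.get(self.field) == self.value)

people = RecordSet([
    {"name": "Alice", "role": "manager"},
    {"name": "Bob", "role": "vendor"},
])

# "Manager" is not a class; it is a dynamic property of the data.
managers = Query("role", "manager").run(people)
```

Note that adding a new domain notion (say, "Vendor") requires no new class and no hierarchy debate; only the data changes.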
- I know. But I think there are plenty of people who haven't gotten the message and (more importantly), this transition really represents a surrender-to-reality by OO proponents. In essence, they are abandoning the original spirit of OO and yet still wanting to be OO developers. It's analogous to a situation I observed back in the 1980s, where a local business bought an Apple Lisa to meet all their computing needs. Eventually, this proved to be wildly impractical. But they were able to use the Lisa to print letterhead stationery, and "Happy Birthday" style banners. Is this a victory for the proponents of the Lisa? No. Reduced expectations imply failure, and this is true of OO as well. It was supposed to model CPerson, CApp, and so on. I and many others "called B.S." Now, you are using OO to model CNewHireDataEntryFactory, CSavitzkyGolayConstantProvider, etc., and frankly I feel this proves right many things I've been saying all along.
- OOP was always intended to make it easier to build and maintain programs, not domain models. (OOA is something else entirely.) Some vastly naive misinterpretation of OOP, mainly by textbook authors who barely understood it, suggested that OOP was about creating pointless world-simulators via CPerson, CApp, and so on. Do we create world simulators when creating information systems using traditional procedural languages, logic programming languages, or functional programming languages? Of course not. Likewise, no genuine OO expert (this does not include the majority of OOP text authors and the designers of some popular component architecture platforms) advocates such an approach when using OOP languages.
- How it significantly makes it easier to "build and maintain programs" on a wide basis escapes me. I'd like to see semi-realistic coded examples for a domain comparable to mine. But something to consider is that making one's idioms match the domain, that is some form of DomainSpecificLanguage (including APIs and services), is something that can greatly contribute to "building and maintaining programs". The closer your building blocks and lingo match the concepts of the domain, or help one relate and interact with the concepts of the domain, the easier creation, testing, and maintenance will generally be. Whether OO's noun-centric[1] domain modeling approach is the best way to achieve such or not is still an open issue. While it may be useful for object quantities of a few dozen, it is setup/config overkill for a dozen or less and not powerful enough in the collection-management department for hundreds or more. Its sweet-spot is too narrow, reminiscent of Earth's environment situated in the solar system with regard to conduciveness to complex life. --top
- I'm curious what your source is for "[w]hile it may be useful for object quantities of a few dozen, it is setup/config overkill for a dozen or less and not powerful enough in the collection-management department for hundreds or more." Having personally developed OOP projects varying from less than a dozen objects, to some that trivially maintain millions of instances at run-time, I've not experienced this. Nor have any of the OOP developers I know, and I've met many over the years. Nor am I aware of anything in the literature to back up your view.
- Without examining specific applications it's difficult to verify your claim. But generally if you have millions of instances, you would want some way to analyze, study, and monitor them. An RDBMS makes it easy to do this almost out-of-the-box via query languages, constraints, ACID, and referential integrity. You can't just dump millions of them in RAM without some way to manage them. Perhaps you've been doing it the hard way for so long that you just live with the downsides. It's usually good to be able to X-ray the intestines of the patient. Out-of-the-box, OOP does not provide tools for managing large volumes of dynamic stuff. You can hand-add it incrementally, but I call GreencoddsTenthRuleOfProgramming.
- I think you've misunderstood me. I am asking you to verify your claim. That said, if you have millions of instances, you have no need to analyse, study, and monitor them. Some instances are containers, which reference collections of other instances. Instances manage each other. Thus, there is no "hard way" that I've been doing. Note that one of the things I have done and continue to do, since I started OOPing in the late 1980s, is develop relational DBMSes in OO languages. Therefore, I am very familiar with tools for managing large volumes of dynamic stuff.
- SystemsSoftware may have very different software change profiles than domain apps. And, it's not a claim, but an observation. That OOP may be great for building RDBMS engines may not translate into something comparable for domain-heavy apps. Others have also noticed OOP working better in "computation space" over "domain space" (see OopNotForDomainModeling). I suspect this is because engineers have more control over their "business rules" and/or a greater desire to apply logic and consistency than those who make the domain rules, such as politicians and marketers. OOP is very powerful *if* you can stick to its favorite ways of managing variations-on-a-theme. But if you cannot stay within its goldilocks zone, it's worse than the alternatives, such as IF statements, predicates, and set theory. --top
- I am a strong advocate of using OOP to model "computation space" over "domain space", but this is obvious -- any software development, regardless of language or paradigm, is employed this way unless it's being used to create simulations. I don't know what you mean by IF statements, predicates, and set theory being alternatives to OOP. OOP is merely a collection of specific extensions to procedural programming, particularly to facilitate managing complexity by effectively reducing large programs to a collection of interacting small ones. There is nothing in OOP that precludes using IF statements, predicates and set theory where these are appropriate.
- I am also for "reducing large programs to a collection of interacting small ones". But I'm skeptical that OOP is usually the best way to do it. Functions and EventDrivenProgramming are examples of other ways.
- Certainly, FunctionalProgramming is of considerable value, and I would not be surprised to see it eventually largely replace OOP, much as OOP has largely replaced pure procedural programming. As for EventDrivenProgramming, I'd like to see you implement PayrollExampleTwo using EventDrivenProgramming in a conventional procedural language. I suspect you're comparing apples and oranges, unless you're referring to something like languages based on the ActorsModel.
- I don't think EventDrivenProgramming is appropriate for PayrollExampleTwo. It's mostly batch processing, for one. Traditional procedural works pretty well for typical batch processing, and this is partly why COBOL lives on and on. However, a sister of EventDrivenProgramming is described in PayrollExample. Different programs can perform custom tasks and communicate with the rest of the system via the database model. Thus, it splits up duties into different programs or sub-programs, perhaps even in different languages. Let's see OO do that well!
- As for FP, it suffers a problem similar to OOP's: you get a lot of leverage *if* you can fit things to its style, but it can be difficult to find or get good fits. Whether it's an education/skill issue or something else, I cannot say yet. So far, it appears that procedural is the best for dealing with the EightyTwentyRule. It bends easier with the wind overall. It's less "leveraged", to rip off a financial term. In my programming style, I try to let the DB do most of the heavy lifting, but often end up using procedural for the final tweaking for the stuff that doesn't fit nice abstractions or DB abstractions. -t
- I'd be interested to see your implementation of PayrollExampleTwo using traditional procedural programming.
- I'd probably just use plain-jane case statements, but wouldn't balk at polymorphism being used for that particular section. The difference in maintenance issues is so minor as not to be worth quibbling over, and all the evil things that OO fans say happen if you use CASE statements are generally exaggerations (except maybe in C-style languages, which have a stupid case statement syntax: IsBreakStatementArchaic).
- An early version of the production code from which PayrollExampleTwo was derived was not OO and used CASE statements in the manner you suggest. The number of errors encountered during development and the time taken to implement changes was, on average, approximately double that of the OO version. The isolation between subclasses provided by polymorphism was of considerable value in helping to avoid mistakes, and the need to re-work the hierarchy -- which you've touted as a reason to avoid polymorphism -- has required negligible effort despite the fact that changes to the payroll formulae occasionally necessitate such re-working. Most applications of OO don't necessitate such re-working, by the way.
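The two styles being compared can be sketched side by side. This is a hypothetical payroll fragment, not the actual PayrollExampleTwo code; the pay formulae and names are invented for illustration.

```python
# CASE/if-elif style: one dispatch point, all formulae in one function.
def gross_pay_case(kind, hours, rate):
    if kind == "hourly":
        return hours * rate
    elif kind == "salaried":
        return rate  # rate is a fixed periodic salary
    else:
        raise ValueError("unknown employee kind: " + kind)

# Polymorphic style: each formula isolated in its own subclass, which is
# what the respondent credits with reducing errors during changes.
class Employee:
    def __init__(self, rate):
        self.rate = rate
    def gross_pay(self, hours):
        raise NotImplementedError

class Hourly(Employee):
    def gross_pay(self, hours):
        return hours * self.rate

class Salaried(Employee):
    def gross_pay(self, hours):
        return self.rate
```

Adding a new pay scheme touches one `elif` branch in the first style, versus one new subclass in the second; the disagreement above is about how much that difference matters in practice.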
Third, InformationHiding is sub-optimal from a HumanFactors perspective. The greatest benefit of software is its adaptability over time. Bugs can be easily fixed, and unforeseen applications effortlessly developed for a piece of hardware. Comparatively little is set into hardware, and even the motherboard BIOS of a PC is really reprogrammable software. Hardware, firmware, and OS designs, being more or less set in metaphorical stone, are carefully thought-out and made as extensible as possible.
InformationHiding, unfortunately, encourages every rank-and-file programmer to immortalize his or her output into a black box consisting of private members operating in unknowable ways. This is not that different from hardware design, in its (purported, at least) inaccessibility. In so doing, OOP misses the whole point of the software revolution - the accessibility and mutability of its stock-in-trade, even for future collaborators from different organizations.
- Private members should only be used to implement obviously internal mechanisms that have no conceivable external value outside the class in which they're defined. Since they're rarely used in this restricted fashion, and because "no conceivable external value" is highly subjective, arguably there should be no notion of 'private' members.
- I'm inclined to agree with you. The question calls to mind a discussion I used to have with an architect at one of my former employers. He would share technical designs with me and solicit my approval, or at least my commentary. Quite often, I would say something like 'I've got no problem with the technical details you've outlined: the classes and interfaces declared, the associated guidelines, and so on. But I cannot approve of this as a 'black box' you maintain exclusively. These things must all be worked on by everyone or I just don't agree with the breakdown-of-labor that would imply.' Amazingly, this guy would change basically any aspect of design except my suggestion that he blur the lines separating his 'black box' from the rest of us. I submit that this anecdote is an example of a HumanFactors problem with OO.
- It appears the scope of your goals were different. He wanted to build a better black box, while you were willing to challenge that very premise, suggesting a gray box may be a better compromise. The only way I see to break that deadlock is to collect examples where heavy black-boxing caused actual difficulties. Faced with enough evidence, it may finally change his/her mind. However, he/she may simply have a mental craving for hard black-boxing, a sort of encapsulation-obsessed version of the GrammarVandal. They somehow mentally associate tight encapsulation with some destiny-requested earthly cleansing. I would point out that black-boxing is not necessarily a feature of just OOP. For example, in RDBMS, one may only expose stored procedures instead of the actual tables. It gives tight control, but also limits what one can ask the DB to help with. (Semi-related: HelpersInsteadOfWrappers.) --top
- [Contrary to BeauWilkinson's argument, InformationHiding (implementation hiding, really) does not hinder sharing or extensibility; use of DependencyInjection (which is not special; it just means passing objects as arguments to other objects at time of construction) allows interested parties to hold reference to shared resources without a need to make them public to all users of an object. I would agree that the fact OOP developers sometimes think DependencyInjection a novel thing is not a good sign for the education they've received. Anyhow, InformationHiding serves as the basis for a powerful, provable, and fine-grained SecurityModel: ObjectCapabilityModel. Any HumanFactors consideration must not exclude concern for security. Weak security in the language layer forces system administrators and OperatingSystem developers to lock everything down in higher layers. This lock-down historically has resulted in 'flexibility and extensibility' for any given service being forbidden to all but a privileged few administrators. Better support for fine-grained authority management, security policies, and even economics policies in the language and protocol layers would greatly improve accessibility and mutability for the less privileged and far more numerous userbase.]
- [It is unclear to me what InversionOfControl has to do with anything written above, nor why credit would be relevant.]
- I mentioned it because you mentioned DependencyInjection. I think DependencyInjection is a specific instance of InversionOfControl. 'Credit' is relevant to me because it defines the scope of the discussion at this page. This is HumanFactorsAndObjectOrientedProgramming. To recap: I proposed the existence of a HumanFactors problem with OO, you replied with a mention of DependencyInjection. I then raised the issue of 'credit' to say, in essence, that DependencyInjection is not really part of OO. You say that DependencyInjection can be used with OO to mitigate the HumanFactors problem I proposed, and I agree. But what I'm saying is that this does not remove the problem I pointed out if one considers OO, not 'OO plus selected other stuff designed to make things better.'
- [DependencyInjection is distribution of control, not inversion of it (though IOC can be supported by use of DI). That is, given some object D, which you control, you give reference to that object D to a new object E, which you control. InversionOfControl usually involves passing an object to a system that decides when to call back without any input from you. And I mentioned DependencyInjection because it is the basis for abstraction in OOP: parametrically abstracting an object using other objects in its constructor. There is nothing special about it. It isn't an addition to OOP. It is the fundamental mechanism for parametric abstraction in OOP. Any computer scientist will tell you there are two basic forms of abstraction: implementation hiding and parametric. If there is a HumanFactors issue, it is that users often aren't taught how to use both of them.]
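The "object D given to object E" description above can be made concrete with a minimal sketch. All class names here are illustrative inventions, not part of any framework:

```python
# DependencyInjection as parametric abstraction: E (PaymentService) is
# abstracted over D (a logger) by receiving it in its constructor, rather
# than constructing or globally locating it itself.

class Logger:
    def __init__(self):
        self.lines = []
    def log(self, msg):
        self.lines.append(msg)

class PaymentService:
    def __init__(self, logger):   # the dependency is injected here
        self.logger = logger
    def pay(self, amount):
        self.logger.log("paid %d" % amount)
        return amount

shared = Logger()                 # the caller controls D...
service = PaymentService(shared)  # ...and distributes it to E
service.pay(10)
```

Note that `Logger.lines` stays an implementation detail of `Logger`; the caller holds a reference to the shared logger without `PaymentService` having to make it public to all its users, which is the point made above about DI coexisting with InformationHiding.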
- Beyond that, I don't think that 'enabling the ObjectCapabilityModel' represents a worthwhile goal for an entire paradigm.
- [Why not? Do you mean to say that it shouldn't be the only goal? Or do you mean to say it is not a worthy goal at all?]
- I think that would be an acceptable goal for, perhaps, a library or an IDE. But I just think it's too narrowly construed a goal to be worthy of a high-level paradigm like OO. Besides, it's off-topic. This page is not a catch-all for everything bad/good about OO.
- [If the 'goal' is to offer flexibility and extensibility, then security is a necessary HumanFactors requirement. Extensibility cannot be offered to users of any service - especially not a shared service like a database or operating system - without security. If you think it off-topic, then you shouldn't have bothered ranting against InformationHiding on the basis of extensibility or flexibility (or at least should avoid mention of OperatingSystem design). Anyhow, security is not an easily separable concern; I couldn't even imagine how to get security via a 'library or IDE'.]
- I also think you're picking out specific examples of (maybe) OO techniques which you happen to like. That's not what I want the acronym OOP to mean in a HumanFactors discussion. I think it's best for OOP here to mean 'OOP as usually taught,' not any one person's favorite variant of OOP. Similarly, OOD ought to mean 'design practices prevalent in shops with a professed emphasis on OO techniques.' I think this is fair; this is a HumanFactors page, so by its nature it deals with the way in which engineering decisions (e.g. the pursuit of OO goals by the senior developers in a shop) influence actual human behavior down the line.
- [Ah. Well, be sure to get some examples of how these are really used in practice before you attack them. I've never known a shop that would create 'Person' and 'Manager' classes... except, perhaps, for a shopping mall simulation or the like.]
- I mentioned a few problematic real-world practices encouraged by OO in my other points. I think OO encourages open-ended, non-productive, BDUF-style discussions. I think that it encourages a tendency for the class author to declare victory and either hide or move on, even while the important customer-facing work remains to be done. I think it distorts the location selected for maintenance programming, with its concept of a black box. None of this comes from a theoretical reading of anything. It's all based on observation. And while it's true that self-proclaimed modern OO shops have changed the way they enumerate objects, isn't this an embarrassing change for OO proponents? I mean, I remember being told somewhere, about 10-15 years ago, that class names should be the nouns in the problem domain and methods should be the verbs. That seemed pretty ambitious and high-level for me. But I guess that didn't work, and now OO settles for a bunch of very specific classes that are really just containers for code, similar in role to a namespace or a DLL. This change is loudly proclaimed to me whenever I point out noun-hierarchy-related problems, but is it really an argument in favor of OO? I don't think so. OO has traded an ambitious but unworkable vision for a much less ambitious vision. And believe me, there are still plenty of people who look for noun hierarchies anyway.
- Whilst classes can be simplistically used as mere code containers, similar to namespaces or DLLs, their real value lies in being able to create and manipulate multiple, independent, stateful instances of those classes. Good OO is analogous to individual programs, each independently constructed, tested, and dedicated to a given task, which communicate with each other to accomplish an overall goal or set of goals. This is what real OO has always been, apart from the imaginative but unrealistic (and foolish) visions peddled by naive OO pundits. This strategy, by and large, cannot be implemented effectively with namespaces or DLLs alone.
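The instances-versus-containers point above can be shown in a few lines. A minimal sketch, with an invented class name: a namespace or DLL gives you one copy of some code and (at most) one set of module-level state, whereas a class yields any number of independent stateful instances that can be constructed and tested in isolation.

```python
# Each Counter instance carries its own state; a module-level counter
# variable in a namespace/DLL would be shared by all callers.

class Counter:
    def __init__(self):
        self.n = 0
    def tick(self):
        self.n += 1
        return self.n

a, b = Counter(), Counter()
a.tick()
a.tick()
b.tick()
# a and b evolve independently: a.n == 2 while b.n == 1
```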
- [I grant that you've done much speculation on the HumanFactors problems of OO. And the noun/verb relationship still holds; you're simply mistaken on the domain. The oldest definitions of OOP, such as the NygaardClassification, focus very much on the "build programs as from Lego bricks" philosophy: the nouns, and the verbs, describe pieces of the program model. This has always been the core of OOP. Of course, decades after OOP was invented, it became a BuzzWord and Fad and a bunch of people misinterpreted it, in part because of poor (or too heavily simulation-focused) examples in 'OO for Dummies' educational material (i.e. involving animals talking, cows mooing, ducks quacking). And now people like you are pointing their fingers at the mockeries their misinterpretations and fad-crazed speculations made of OOP and crying "OOP has failed us!". Someone failed you, certainly. But the "change" to OOP you see as a failure is properly a return to its original roots and definitions, its original vision.]
- As far as cows and ducks, see OverUsedOopExamples. You are generally confirming that OopGoesHalfWay. It may be the building blocks to something arguably better, but by itself, OOP is not competitive. --top
- [I agree. OopNotForDomainModeling, OopGoesHalfWay. Even for the domains in which OOP works, OOP by itself often fails to be competitive (a big challenge is hooking into and monitoring external resources, such as mouse, joystick, webcam, filesystem, database). Any OOPL would benefit from a companion paradigm to support domain modeling (including those cows and ducks and widgets and UI), complex messaging (without CompositePattern), and to eliminate BoilerPlateCode for ObserverPattern and InversionOfControl. I believe FunctionalReactiveProgramming to be a high quality candidate companion. (Functions may include set operations, such as joins, unions, intersect. Support for sets allows a lot of data parallelism.)]
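The ObserverPattern boilerplate referred to above might look like the following hypothetical sketch (all names invented): explicit subscribe/notify plumbing that a reactive style would express as a single dependent expression.

```python
# Manual ObserverPattern plumbing: an Observable keeps a list of callbacks
# and notifies them on every change. FRP-style systems aim to generate or
# eliminate exactly this kind of code.

class Observable:
    def __init__(self, value):
        self._value = value
        self._observers = []

    def subscribe(self, fn):
        self._observers.append(fn)

    def set(self, value):
        self._value = value
        for fn in self._observers:   # manual change propagation
            fn(value)

seen = []
temperature = Observable(20)
temperature.subscribe(seen.append)   # register an observer
temperature.set(21)
temperature.set(22)
```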
The thesis of this page is that "OOP seems vulnerable to criticism on grounds related to HumanFactors". OOP is nothing but a programming paradigm. Programming paradigms should have no impact whatsoever on HumanFactors. Programming paradigms are but one aspect of the mere NutsAndBolts by which programs are constructed. Programs may have HumanFactors issues, but programming paradigms do not, because given a set of programming languages of sufficient capability, all can be used to create equivalent programs regardless of the paradigms they employ. As such, any relationship OOP has with HumanFactors is a failing in the development methodologies that employ OOP, rather than a failing of OOP itself. Those same methodologies would be just as subject to criticism, on a HumanFactors basis, if they were used with any other programming paradigm.
By way of analogy, if it is valid to claim that OOP should be criticised on grounds related to HumanFactors, then it must be valid to claim that the colour of the wiring in the walls of a poorly-designed house should be criticised on grounds related to HumanFactors.
RE: Programming paradigms should have no impact whatsoever on HumanFactors. Programming paradigms are but one aspect of the mere NutsAndBolts by which programs are constructed. Programs may have HumanFactors issues, but programming paradigms do not, because given a set of programming languages of sufficient capability, all can be used to create equivalent programs regardless of the paradigms they employ.
I disagree with this position. The nuts and bolts by which programs are constructed must be influenced by the economic realities of our world. This includes the common involvement of multiple interests in large projects, and concerns for protection of intellectual property and sensitive information. Flexibility and extensibility of services by end-users, ability to share services (databases, GUIs, etc.), support for mash-ups or accessibility transforms (e.g. screen readers, language translators) of GUIs, etc. - the ease with which these things may be done (and whether they may be done at all) depends heavily upon the nuts and bolts used to construct programs. You can't easily write a screen-reader for a program that displays text via captcha-style postscript as a texture on the wall in a 3D environment, for example.
Paradigms are sold on their presumed ability to support HumanFactors issues. During the OO market fad (a couple decades after OOP was invented) a lot was promised (e.g. it was intended that CORBA and COM would easily allow a federation of distributed objects for extensible multi-enterprise development), but OOPLs failed to deliver (usually due to issues of concurrency control, performance, persistence, disruption tolerance, handling of partial failure, and security). A lot of naive programmers fell for that propaganda. FP has gone through similar phases, mostly involving the not-entirely mythical SufficientlySmartCompiler.
I believe OOP or any other paradigm should be criticized in terms of HumanFactors. But, as much as possible, HumanFactors should be phrased in terms of technical problems or UserStories. If a paradigm is too complex to explain or educate most users in, that's a problem. If a paradigm doesn't allow one to contract development labor off to different groups and recombine it into a working product, that's an issue. If a paradigm does not lead easily to programs that can be safely or securely extended, upgraded, or modified after fielding, that's something to consider. If a paradigm hinders reuse of code or encourages monolithic products that reinvent various wheels, that's a problem. Without HumanFactors, I could point you to a TuringTarpit and call it as good as any other language. (It ain't a failing of the TuringTarpit... it's a failing of your SelfDiscipline or development methodology! See FourLevelsOfFeature.)
I don't disagree, in principle. I disagree only in terms of degree. Programming paradigm seems to have relatively little, if any, impact compared to innumerable other factors, assuming languages of otherwise equal capability. I would argue strongly that given two languages of equal capability in terms of environmental manipulation (which is what I meant by "sufficient capability", intending to exclude TuringTarpit languages and languages of widely differing capability, like Java vs BrainFuck) -- say C# and F# -- the choice of one over the other, assuming developers are allowed to pick which one they prefer, is of almost negligible impact in terms of HumanFactors. That said, I do see your point, and I should have been clearer in making mine.
- I certainly agree there are many factors. Languages rarely correspond directly to paradigms, which makes the 'paradigm' concept difficult to measure. Many languages are hybrids which, while they might be construed as the best of both paradigms in some respects, are often the worst of both paradigms in various other respects. An "impure" FunctionalProgramming/procedural hybrid, for example, cannot be subjected to functional abstractions, lazy evaluation, parallelization, optimizations, etc. And an "impure" OOP/procedural hybrid (which has ambient authority to influence the environment and synchronous message passing) hurts unit-testing, security, distribution, extensibility, sharing, concurrency, and code reuse: it makes a PathOfLeastResistance of favoring reinvention of services and use of GlobalVariable/SingletonPattern instead of proper DependencyInjection. You shouldn't think of a paradigm merely in terms of which features it supports, but also in terms of how it constrains you. Those constraints are often critical to the features offered by the paradigm. You can always constrain yourself with SelfDiscipline, but it is very difficult to constrain library developers (to achieve features and optimizations even throughout the libraries and modules and extensions) without language support.
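The GlobalVariable/SingletonPattern versus DependencyInjection contrast above can be sketched briefly. All names here are hypothetical; the point is only that the singleton form bakes its collaborator in, while the injected form accepts a test double.

```python
import time

class RealClock:
    def now(self):
        return time.time()

CLOCK = RealClock()   # GlobalVariable/SingletonPattern: hard to substitute

def timestamp_singleton(msg):
    # Reaches for ambient global state; unit tests cannot easily control it.
    return (CLOCK.now(), msg)

def timestamp_injected(msg, clock):
    # DependencyInjection: the caller supplies the clock explicitly.
    return (clock.now(), msg)

class FakeClock:
    """Test double; makes the injected form trivially unit-testable."""
    def now(self):
        return 1234.0
```

The singleton form is the PathOfLeastResistance (one fewer parameter to thread through), which is exactly the complaint being made about impure OOP/procedural hybrids.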
That said, what Beau offers isn't a HumanFactors analysis of ObjectOrientedProgramming. It's a rant against OOP from someone who bought into the 90s OO propaganda, who received less than was promised by a crowd of bright-eyed geeks and unscrupulous XYZ-for-dummies marketers. The cursory claim that InformationHiding is an extensibility issue is indefensible. The claim that OOP, more so than other paradigms (such as FP or procedural), leads to a power-developer writing libraries rather than integrating components is unsubstantiated. The argument about class hierarchies seems presumptive - it's hardly an analysis of how OOP is actually used in practice by established OO shops.
Oh, and color of wiring in the walls of a house could be criticized on grounds related to HumanFactors; there are reasons for standard coloring (USA: white or green is ground, black and red and other colors typically hot). A house whose wires were all the same color (e.g. white), some being hot and others being grounded, is certainly possible.
I meant from the viewpoint of the inhabitants of the house, i.e., the wiring is internal, unseen by the users of the dwelling.
Indeed. Except, of course, when they want to upgrade the house...
Footnotes:
[1] It may not be as noun-centric outside of domain modeling.
JanuaryTen