Oop Not For Domain Modeling

Some hold the view that OOP is not well-suited for domain-modeling. For example, OOP is allegedly not well-suited to modeling employees, invoices, store products, etc. However, this view holds that it is still well-suited to modeling and managing the "computational domain" or "computational space": GUIs, report-writers, sockets, file-system APIs, etc. Recognition of this is allegedly a growing trend.


(Moved from ProblemsWithExistingOopEvidence)

A great many expert OO programmers agree that OO isn't for domain objects. To say OO is therefore not for the "business domain", however, is silly. Business domain needs plenty of SystemsSoftware.

I meant from the perspective of a custom biz app developer. Sorry if I wasn't clear on that. Also, if there is a "split camp" within the OO community around the domain-modeling-vs-compute-space-modeling issue, then perhaps we should make such a distinction explicit so that we don't keep reinventing this scope issue. By the way, other than this wiki, I don't see a lot of literature on this topic out on the web. I am thus a little skeptical that there is a large group of OO proponents who hold the non-domain view. -t

I doubt you've looked. I see them everywhere I look, LambdaTheUltimate being but one example. I will note, though, that many of these OO proponents (including myself) favor approaches more along the lines of ActorsModel, ProcessCalculus, CommunicatingSequentialProcesses or JoinCalculus... i.e. an OOP paradigm that includes also some concepts for concurrency and possibly distribution. As such, we tend to deprecate inheritance (and especially inheritance-based typing) and see OOP as more defined by MessagePassing polymorphism + MessagePassing encapsulation + MessagePassing concurrency (related: AlanKayOnMessaging).

That may be an "internal trend", but it's not leaking out to the public face of OOP so far. It would essential amount to heresy if a popular OOP author stated such in a book. So far, nobody has the balls.

Now you speak from ignorance {RudenessObjection}. Cardelli, Stroustrup, Hewitt, and others have written books and documents indicating this. Kay has given speeches about it at OOPSLA. These people are undeniably popular. It isn't as though their ideas are not accepted, either. There is a reason STL is functional rather than OOP. The objects and protocols favored by ObjectManagementGroup are clearly more along the lines of processes with MessagePassing (polymorphism+concurrency+encapsulation) and data-distribution. Languages such as Scala and CeeOmega that have far more support for concurrency abstractions are becoming more popular in both practice and research, as are application servers and frameworks that wrap Java, Ruby, and Python objects. Many of these languages don't use inheritance-based typing at all, including Ruby, Python, and the statically typed Scala. If you don't wish to be accused of HandWaving and lies, I suggest you avoid speaking from ignorance.

What we have here is a quantity dispute. I don't dispute that some authors reject domain modeling. The issue is how many. I have not seen this viewpoint often in the OO materials I survey. I have not seen BertrandMeyer, RobertMartin, or JamesGosling comment on it, for example, let alone endorse it. It would be a huge industry-shaking admission. There's no evidence that it is a mainstream opinion.

[I doubt it would be a "huge industry-shaking admission". I suspect it would, at best, garner a disinterested shrug. From almost all the OO developers I've worked with, I get the impression that using OO for computational modelling is the default position. Textbook examples are regarded as just that -- textbook examples. They are conceptually illustrative, but are not intended to be directive any more than the similarly trivial and unrealistic examples typically found in introductory texts on procedural programming. Obviously, there is promotion of DomainModelling where it (sort of) "works" -- when using ObjectRelationalMapping layers, for example, or in ObjectOriented databases -- but these are clearly attempts to make OO DomainModelling work where it otherwise wouldn't be appropriate.]


"One of the best leaps a programmer makes during their path from newb to godhood is to realize that 'objects' don't map well to real world objects. Programs work better, designs are more clean... code sucks less when a object in code represents the embodiment of a singular concept; a singular set of invariants rather than an object." - http://lambda-the-ultimate.org/node/3265#comment-48034.


Even Bertrand Meyer agrees that OO does not reflect the real world more than its competitors. (He questions the wisdom of trying to model the real world, to be more precise.) -t


Object-Relational Mappers

How does the non-domain view perceive big, expensive ObjectRelationalMappers?

Big, expensive, and largely pointless.

Generally these are for mapping domain objects, not computational-space objects, which generally don't have a counterpart in the database. I can see this view supporting objects such as "row" or "result-set", which are generally considered computational-space objects, but not the usual domain-object mapping such as employees, invoices, products, etc. The headaches and expenses of ORMs may largely go by the wayside if the non-domain viewpoint is adopted. However, ORM vendors may have a vested interest in protecting the pro-domain-modeling view, and thus lobby for it in various guises. If the non-domain viewpoint starts to catch hold, I expect a fight comparable to SCO's "Linux" battle.

Perhaps, but such a fight is unlikely to have the legal basis that fueled SCO's Linux battle. It's probably closer to the ObjectOrientedDatabase vs. RelationalDatabase debate of the 1980s. We know how that turned out. When was the last time you saw a dynamic Web site, or any database-driven client/server application, underpinned by an ObjectOrientedDatabase?


It seems the database still plays a role in the separation here. If the "thing" is too small or too isolated or too performance-controlled to have a reasonable chance of being part of a database, then it is a good candidate for OOP use (assuming it's not so trivial as to be a mere function or plain array). This lack of "databaseness" just happens to correspond to "computational space" in a good many cases. Or is there a connection? -top

Please clarify: where, precisely, is the "database playing a role in the separation"? What do size and isolation and performance-control have to do with being a "good candidate for OOP use"? I don't recall that size has been a limiting aspect for OOP. Even full-sized databases (units of data storage, query, subscription) can be modeled as objects in the "computational space", so size seems not to be an issue. Performance is achieved by design - by carefully choosing and arranging many implementation components of a database (storage model, action queue, active transactions, query objects, publish-subscribe model, stored-procedure objects, etc.), and so performance control isn't reasonably a 'per-object' concern. And having many facets to a database, as well as shared access to facets, is also not a problem for OOP - at least not in general, though shared concurrent access does require an effective concurrency control - so 'isolation' seems at the very least an orthogonal issue. Please clarify your line of reasoning, because I don't see how you came to any of those premises upon which you base your conclusion. I do, however, have suspicions: are you, perhaps, assuming that objects compose primarily through encapsulation and thus become very 'big' internally? Encapsulation is a property between interface and implementation, not a means of composition.

RE: full-sized databases modeled as objects: In that case, "performance control" would be the deciding factor, not size. OOP is generally closer to "hardware space" than databases.

Please explain what "performance control" has to do with that decision. Exactly how is it "the deciding factor"?

Perhaps AreRdbmsSlow explains it better.

I see no reason to believe so. A database being modeled as an object is still just as fast as the same database not modeled as an object, is it not? Will you please explain what, if anything, "performance control" has to do with this decision?

Somewhere there's a miscommunication.


Outside of OOP's Comfort Area

SystemsSoftware and computation-space may have very different software change profiles than domain apps. That OOP may be great for building and managing computation-space tools may not translate into comparable success with domain-heavy apps. I suspect this is because engineers have more control over their "business rules" and/or a greater desire to apply logic and consistency than those who make the domain rules, such as politicians and marketers. OOP is pretty powerful *if* you can stick to its favorite ways of managing variations-on-a-theme. But if you cannot stay within its goldilocks zone, it's worse than at least one of the alternatives, such as IF statements, predicates, and set theory. OOP thus offers a contractual obligation: stick with these variations-on-a-theme norms and wonderful things happen. Deviate, and you just purchased the pottery in the bag you just dropped. --top
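A small sketch of the "goldilocks zone" point, in Python (the invoice classes and the tax/discount rules are hypothetical, purely for illustration): variation that follows a type hierarchy dispatches cleanly through polymorphism, but a rule that cuts across the hierarchy ends up as IF statements tacked on outside (or scattered inside) the classes.

 # Inside the comfort zone: each invoice variant supplies its own behaviour,
 # and callers dispatch polymorphically with no conditionals.
 import datetime

 class Invoice:
     def total(self): raise NotImplementedError

 class RetailInvoice(Invoice):
     def __init__(self, amount): self.amount = amount
     def total(self): return self.amount * 1.08      # made-up sales tax

 class WholesaleInvoice(Invoice):
     def __init__(self, amount): self.amount = amount
     def total(self): return self.amount             # tax-exempt

 # Outside the comfort zone: a marketing rule that cuts across the hierarchy
 # ("10% off any invoice over 500, except wholesale orders in June") belongs
 # to no single class, so it shows up as conditionals like these.
 def billed_amount(invoice):
     amount = invoice.total()
     if amount > 500 and not (isinstance(invoice, WholesaleInvoice)
                              and datetime.date.today().month == 6):
         amount *= 0.9
     return amount

 print(billed_amount(RetailInvoice(600)))     # discounted
 print(billed_amount(WholesaleInvoice(600)))  # discounted except in June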

I find the 'comfort area' concept agreeable. It is certainly true that SystemsSoftware engineers have an easier time managing 'variations' because one is coding to well-defined 'interfaces'. Domain modeling doesn't fit into OOP's comfort area because the 'rules' of the model necessarily become scattered across the object code (which violates whatever modularity benefits OOP may otherwise have offered), and the primary change pattern in domain modeling is tweaking these rules. For example, what you can do with a red ball does not depend on the red ball, but rather depends on the rules of the model: perhaps the rules say you can put red things in basket A, round things in basket B, so a red ball can go into either, but a green ball may only go into basket B. You might later change those rules. A domain model must deal with collections of simulation-elements affecting one another in accordance with a set of 'rules', and so RuleMetabase? or LogicProgramming designs are more in the comfort area. (RelationalModel is closely related to LogicProgramming.)
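The red-ball example above can be made concrete with a minimal sketch, again in hypothetical Python: the basket rules are held as a table of predicates, so changing the rules of the model is a one-line edit to the table rather than edits scattered across item classes.

 # Hypothetical sketch: basket rules as data plus predicates,
 # rather than behaviour scattered across object methods.
 ITEMS = [
     {"name": "red ball",   "color": "red",   "shape": "round"},
     {"name": "green ball", "color": "green", "shape": "round"},
 ]

 # Each rule is a (basket, predicate) pair. Tweaking the model's rules means
 # editing this table, not editing every item class.
 RULES = [
     ("basket A", lambda item: item["color"] == "red"),
     ("basket B", lambda item: item["shape"] == "round"),
 ]

 def allowed_baskets(item):
     """Baskets this item may go into under the current rules."""
     return [basket for basket, pred in RULES if pred(item)]

 for item in ITEMS:
     print(item["name"], "->", allowed_baskets(item))
 # red ball   -> ['basket A', 'basket B']
 # green ball -> ['basket B']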

There is a reason many languages are designed to support a few different paradigms. There are 'sweet spots' where the comfort zones of different concepts effectively cover one another's weaknesses without overlap. For Procedural, TopMind claims good access to a Database is such a sweet spot (TableOrientedProgramming). After having done a great deal of SystemsSoftware work, I would suggest that the equivalent sweet spot for OOP is good support for FunctionalReactiveProgramming, which replaces a ton of awkward ObserverPattern BoilerPlateCode, handles domain-modeling tasks, and makes it easy to keep data outside the OOP graph (caching can be performed implicitly, it is easier to integrate databases). A lot of SystemsSoftware involves monitoring external resources and reacting promptly to changes or events, and FRP helps a huge amount with the 'hooking up' and 'when to react' bits. The performance (parallelism, lazy searches, etc.) of the FRP aspect is itself aided greatly if the functions include FirstClass support for sets and set operations (joins, pattern matching, logical unions). Additionally, the functional layer makes message transport easier to distribute (a'la ErlangLanguage).
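For the FunctionalReactiveProgramming point, here is a rough Python sketch (the Cell class and its names are invented for illustration, not any particular FRP library): a derived value declares its inputs once and recomputation propagates automatically, instead of hand-writing ObserverPattern subscribe/notify code at every site.

 # Hypothetical reactive "cell": dependents recompute automatically when a
 # source changes, replacing explicit observer registration and notification.
 class Cell:
     def __init__(self, value=None, formula=None, inputs=()):
         self.formula = formula
         self.inputs = list(inputs)
         self.dependents = []
         for source in self.inputs:
             source.dependents.append(self)
         self.value = value if formula is None else formula(*[s.value for s in self.inputs])

     def set(self, value):
         self.value = value
         self._propagate()

     def _propagate(self):
         for d in self.dependents:
             d.value = d.formula(*[s.value for s in d.inputs])
             d._propagate()

 price    = Cell(10.0)
 quantity = Cell(3)
 total    = Cell(formula=lambda p, q: p * q, inputs=[price, quantity])

 quantity.set(5)
 print(total.value)   # 50.0 -- recomputed with no observer wiring at the call site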

We seem to be kind of agreeing that a yin-yang relationship between two different paradigms/tools can be a nice thing because one is likely to fill in the weak areas of the other. One could extend that and argue that lots of paradigms is even better. However, it's difficult for the average developer to master more than two, or at least there's insufficient motivation, because they are often focused on keeping up with specific high-demand languages and tools.

Thus, two complementary paradigms is kind of a magic peak. One alone has too many weak areas, and with three or more you get diminishing returns and fewer developers who can relate to and follow your stuff. Put another way, the benefit change of going from 1 to 2 is greater than from 2 to 3 (if good pairs are chosen). Related: ParadigmPotpourriMeansDiminishingReturns. --top

Indeed. There may be combinations of more than two paradigms out there that would serve quite well, but chances are the paradigms themselves would be smaller to compensate (to avoid overlap). I expect a skilled LanguageDesigner might tweak and trim paradigms to eliminate overlapping concepts, i.e. put three partial-paradigms together to form a sweet gestalt.


See also: ComputationalAbstractionTechniques, OopBizDomainGap


CategoryOopDiscomfort

