Fundamental Flaws In Procedural Designs

Some OO developers believe that T+DB (tasks and database) design is riddled with obvious flaws and OO helps avoid them. A database can be viewed as a big global variable and comes with all the problems of global data. It also allegedly has no decent way of encapsulating varying behavior dynamically. OO allegedly addresses these problems.

Possible or alleged flaws:

The only possible "flaw" I would see with Procedural Design is qualitative, and that is scalability. I find OO design tends to provide the ability to write larger programs with less difficulty. I don't believe that anything written with an OO approach is impossible to write with a procedural approach, merely that the OO approach simplifies the development and maintenance of the program. Interestingly, as systems scale up to necessitate multiple programs, we tend to have a procedural grouping of OO based programs. --WayneMack

It should be pointed out that not every OO practitioner agrees that procedural has obvious flaws. Many believe that it has many subtle flaws that add up. However, it has proven tough to isolate and communicate these kinds of things, resulting in a fair amount of resentment and confusion.


If OO design relies on mere convention, then one can make conventions for procedural also. Plus, databases have referential integrity, triggers, etc. to enforce data rules. If it is data that is too "small" to be put into a database, then most likely it is not "global" anyhow.

It's not about data protection so much as data packaging. In procedural code, one passes data around in structures or records. This provides no place to put the code that goes with that data. So one puts code into modules and passes the data to it, over and over and over. OO lets one package the code and the data that it's meant to work with into a single entity, the object. Now one can pass around the object, and no matter where it is, all of its methods are available and easy to find. With records and structures, one has to know where the functions are and remember them, for example, method(data); with OO, one doesn't, for example, data.method(). Beyond that, OO lets one dynamically change the skin of that data with different versions of those methods, allowing lots of dynamic flexibility. OO is convenient, plain and simple, it makes things easier. These things were and are done in procedural code too, by packing function pointers into structures along with the data... but then, that's how OO got invented. OO is just a formalization of those advanced procedural techniques.
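The "function pointers packed into structures" technique mentioned above can be sketched in Java, using a function-valued field in place of a raw C function pointer (all names here are invented for illustration):

```java
import java.util.function.Function;

// Plain record: data only, with a slot for code packed alongside it.
class AccountRecord {
    String owner;
    double balance;
    // Procedural precursor to a method: a function value stored with the data.
    Function<AccountRecord, String> describe;
}

public class PackagingSketch {
    // Procedural style: a free function; the data is passed in explicitly.
    static String describe(AccountRecord r) {
        return r.owner + ": " + r.balance;
    }

    public static void main(String[] args) {
        AccountRecord r = new AccountRecord();
        r.owner = "Ada";
        r.balance = 10.0;
        r.describe = PackagingSketch::describe;   // pack the code with the data

        // The caller no longer needs to remember where the function lives:
        System.out.println(r.describe.apply(r));  // analogous to r.describe()
    }
}
```

In this sketch, `r.describe.apply(r)` is the "advanced procedural" form of `data.method()` that the paragraph says OO later formalized.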

Data and operations are generally loosely associated in practice unless one is a fan of multitudes of dedicated structures (I am not). OO designs seem to force this association artificially. Often multiple unrelated operations operate on the same data and vice versa. The relationship between data and operations is generally many-to-many in the longer run (if one factors properly). See below for more on this many-to-many claim.

OO allows one to attach functions to recordsets, so one doesn't have to pass the record around so much.

One does not need to pass records all around. We may have different experiences. Perhaps a poor language forces one to do odd things. I had to do funny things in C that I would not ordinarily have to do in other procedural languages.

Ugh, and how, pray tell, does one pass state around from one procedure to the next? Whether you pass around a dynamic array as a parameter, or a string, or a record, or whether you map your database to your associative arrays or records... you still have to pass them around. Unless you use a single global variable that all procedures can access - which doesn't scale. -- Structured Programmer

Without an example, one cannot demonstrate anything objective. Is there less code, fewer change points, or what? There is nothing countable. This might as well be an argument over art or music. There is no "science" in OO computer science. One cannot assert there are "fundamental flaws" unless there is a way to objectively demonstrate them.

Equally, please show an example of how you pass your state around from procedure to procedure without using a record parameter or an array parameter or similar. This is not a language issue - this is a theory issue. It's common sense that state somehow has to be passed around from procedure to procedure - no examples need to be provided to understand this basic concept. An object's "self" is a hidden parameter.
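The "hidden parameter" point can be made concrete with a minimal Java sketch (the Counter names are invented for illustration): the same state is threaded from call to call either way, implicitly as "this" or explicitly as a record parameter.

```java
public class SelfParameterSketch {
    // OO style: state travels implicitly as "this".
    static class Counter {
        int count;
        void increment() { this.count++; }   // "this" is the hidden parameter
    }

    // Procedural style: the same state passed around explicitly as a record.
    static class CounterRecord { int count; }
    static void increment(CounterRecord c) { c.count++; }

    public static void main(String[] args) {
        Counter a = new Counter();
        a.increment();                        // a is passed implicitly

        CounterRecord b = new CounterRecord();
        increment(b);                         // b is passed explicitly

        // Either way the state got where it needed to go; only notation differs.
        System.out.println(a.count + " " + b.count);
    }
}
```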


Having done both Procedural and OO, I do not agree that "T+DB (tasks and database) are riddled with obvious flaws."

I tend to view OO and Procedural as different ways to organize code rather than anything more significant. Procedural design tends to have fewer but larger methods than OO and does not really provide a mechanism to help segment methods into smaller parts. The methods are usually combined into a smaller number of large modules or files. OO design tends to result in larger numbers of smaller methods combined into larger numbers of smaller classes.

Procedures/functions can be as small or large as desired. You can break a task into ever smaller tasks. See StepwiseRefinement. Whether that is helpful or not is another matter. I will agree that perhaps some people think better with smaller methods/functions. We all have different minds that are tripped up and helped by different things. But, we must be careful not to extrapolate our preferences into others.

How does one share context (the data the method can read and/or write) between ever smaller tasks? Is the context passed to each method or stored as global variables?

See the lower part of ProceduralMethodologies.

OO gives some guidance on how and when methods are to be broken down, while Procedural does not. Procedural code tends to be initially written as a few very long methods intended to perform a function, while OO tends to be initially written with lots of small methods that must be combined to perform a function.

Procedural division is usually done by dividing a task into subtasks. At the larger scale, tasks tend to be (or should be) considered independent, but on a smaller level tasks are often broken down more or less hierarchically. And, I see nothing which forces one to make small methods in OO. If you could provide an example, that would be helpful. Plus, large functions are not inherently bad, although it appears that what "bothers" people tends to vary per individual. See LongFunctions.

The guidance is from encapsulation and cohesion. Encapsulation basically requires one to work with methods rather than data. By applying encapsulation, one tends to segregate operations on a more granular level, with a higher level method coordinating the overall operation. Cohesion helps define a common "home" for all of these newly created methods by centering them around common data elements. Procedural designs, in general, tend to be more ad hoc in the allocation of methods to modules.

The allocation of methods in Procedural design can be considered ad hoc in the sense that it is not supported by the design approach. Individual designers may use private patterns for self-consistency, but these patterns may only be conveyed by task design documents. OO provides guidance at a larger scope.

One observation I have made is that one often finds OO-like design in Procedural code, particularly in difficult or troublesome areas of the code. Encapsulation, partial if not full, tends to be a solution applied to handle these situations. When strange things are happening to some set of data, the natural inclination seems to be to get control of the manipulations on the data.

{Examples of both of these claims would be helpful.}

By "get control," I am referring to the situation where one sees anomalous data or anomalous operation based on data. A typical response is to pull at least all operations doing a write to the data into a common module and use only the common methods. Sometimes, operations requiring a read are also consolidated. Instead of having duplicate inline operations scattered throughout the code, there are a limited number of centralized access points.
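The consolidation described here can be sketched as follows, with a hypothetical "status" field standing in for the anomalous data (all names invented for illustration):

```java
public class StatusStore {
    // Before: writes to "status" were inlined throughout the codebase,
    // making anomalies hard to trace. After: all writes funnel through here.
    private String status = "NEW";

    // The single centralized write path; a natural place for checks and logging.
    public void setStatus(String next) {
        if (next == null || next.isEmpty()) {
            throw new IllegalArgumentException("bad status: " + next);
        }
        status = next;
    }

    // Reads may also be consolidated, as the text notes.
    public String getStatus() { return status; }

    public static void main(String[] args) {
        StatusStore s = new StatusStore();
        s.setStatus("APPROVED");
        System.out.println(s.getStatus());
    }
}
```

Note that nothing here requires an OO language; the same centralization can be done with a procedural module that owns the data, which is the "partial encapsulation" the paragraph describes.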

There is no one right grouping. Different groupings help with different kinds of debugging. If one groups behavior around nouns, then debugging that relates to how behavior and nouns interact may indeed be a bit easier. However, it may make other kinds of debugging harder in exchange. Perhaps it depends on the kind of mistakes that individuals tend to make. Different people make different proportions of certain mistakes. The grouping of code is simply a compromise to select what one sees as the least evil.

<I think we somewhat concur on "no one right grouping." The original point I tried to express was that there were differences in method grouping between Procedural and OO, but nothing that could be considered either a fundamental or obvious flaw. I, personally, find that OO provides an adequate grouping most of the time, but I recognize times where it doesn't seem to work. I may not always agree with others on exactly when those conditions occur, but I do find it interesting to try to understand why the approach falls short.>

On the other hand, when crossing program boundaries (one program to another, or a program to a third party or independent component), it seems more appropriate to revert to a procedural organization. The methods tend to be organized procedurally, for example, a component that handles the GUI display aspects of a data element, but really provides no "business" functionality. At these interfaces, we tend to decapsulate the data, pass it across the interface and re-encapsulate it.

Given that there are many successful applications that have been built using Procedural Design, I can't agree that those designs are "riddled with obvious flaws." If these programs have flaws, the flaws are probably not obvious. Moving to an OO approach will sometimes help to isolate a flaw and help developers correct it. I would tend to a pragmatic approach of using the design approach that best fits the current skill level, knowledge, needs, and development tool set of one's development team. Also, recognize that these will certainly change over time, and the code can be moved between a Procedural design and an OO design and vice-versa to reflect those changes.


If OO were just about encapsulation of data, then it would be qualitatively no different than procedural. There is absolutely no fundamental difference between the two alternative examples, one being OO style and the other being procedural style:

Example 1, in Java:

  Hashtable x = new Hashtable(20);
  x.put("A", "A value");
  String value = (String) x.get("A");

Example 2, in ObjectiveCaml:

  let x = Hashtbl.create 20;;
  Hashtbl.add x "A" "A value";;
  let value = Hashtbl.find x "A";;

The only difference between the two is in notation:

  data.procedure( <parameters> ) 
versus
  procedure( object, <parameters> )

{Some might argue that the implementation will look different if one used more than one implementation of a hash-array in a given application.}

In the ObjectiveCaml example, the internal data of the hashtable can only be manipulated by the functions in the Hashtbl module. The type of the internal data is totally hidden from clients (unlike in C, where somebody using stdio.h has access to the definition of the FILE structure). So encapsulation is perfect.

The essential difference between ObjectOrientation and procedural is something different altogether, and is defined by KristenNygaard as per NygaardClassification. It is absolutely correct to say that the OO paradigm has no more nor less encapsulation than the procedural paradigm.

What is more essential is that OO developers should take the procedural paradigm seriously (for example, avoiding such nonsense as FundamentalFlawsInProceduralDesign?); there is a lot of good knowledge that should not be lost. OO developers should be studying procedural means of abstraction and composition very carefully, so that they do not get into "all I have is a hammer and everything looks like a thumb". There are lots of procedural tricks that most OO practitioners are blissfully unaware of, and the truth of the matter is that there are no fundamental flaws whatsoever related to procedural programming.

For better understanding, people should consult such fine books as StructureAndInterpretationOfComputerPrograms or the more recent ConceptsTechniquesAndModelsOfComputerProgramming by PeterVanRoy and SeifHaridi?; these will provide an unbiased and very useful exposition of all the major paradigms. To quote van Roy and Haridi: "We show that multiparadigm programming is natural and that the conventional boundaries between paradigms are artificial and limiting."

Please note that neither example shows any sort of class (there are no methods!) nor data encapsulation. The example shows two different ways to store and retrieve data from a data structure.

The issue seems to be that NobodyAgreesOnWhatOoIs.


I'm currently working with some code in an OO language that was written in a procedural style. The primary flaw with this approach is that it forces us to talk about the code in a different language than the domain. As a result, we have to make a mental transition between what our domain expert says and what we put in the code. That will inevitably lead to defects when we misunderstand the domain expert but don't realize it. The domain expert won't realize it either, because when we talk about the design, it makes no sense to her. I'm helping the team refactor the code to a DomainDrivenDesign to help resolve this problem.

--JimShore

I think there is/was a debate topic about this somewhere. How exactly OO allegedly better "fits the domain" I don't exactly know. Traditional Simula-67 type of models don't apply to my domain because it deals mostly with intellectual or virtual property: taxes, money, reports, approval status, etc. One is not emulating things but rather processing and managing them. For example, the physical approach to dealing with accounting is obsolete with transaction-oriented databases. We don't need DoubleEntryBookkeeping anymore. And, we can multi-index library books so we don't need to model Dewey Decimal trees or book or card physical proximity anymore. So what are you looking at to compare? Perhaps you really mean OoBetterModelsHumanThinkingDiscussion? -- top

I strongly agree that OO is not appropriate for all domains. OO is particularly bad for math, and it's not so good for database-centric applications either. I think it is good for domains that have complex rules, particularly if they're non-regular (i.e., typical business rules). You seem to imply that OO is best for simulating physical objects; while that was its original purpose, I find that it works fine in other domains as well. In the example above, the domain is very conceptual, not physical at all. --JimShore

It seems to me OO would be worse for "non-regular" rules, because linear and hierarchical taxonomies will tend to fail, and OO gets messy outside of those. By "messy" I mean it no longer has the catchy-but-simple code and change patterns found in toy shape, animal, and device-driver examples. I did not mean to imply that OO was about physical modeling, I was just trying to address what I thought you were saying or implying. If I got it wrong, I apologize. It is hard to compare how well something matches to non-physical things. -- top

Well, let's be sure to distinguish the way OO is taught and the way experts use it. OO is taught... poorly... as these rigid inheritance hierarchies. You're right, it's silly and only works for toy examples. But experts use OO by creating a mesh. Each class represents a concept in the domain and encapsulates processing related to that concept. The classes have relationships to other classes that correspond to how the concepts relate to each other in the domain. Inheritance is used sparingly. This approach does map to non-regular domain rules fairly well, not that it's ever easy to unambiguously describe the rules in the first place. --JimShore

But the relationship between "concepts" and "processes" is often not one-to-many, but more like many-to-many, especially over the longer run. If the goal is to relate (track) the relationships, then a database beats code in most cases in my opinion. If you get away from mere hierarchies and animal-like code shapes, then OO just reinvents databases the hard way. -- top

Yes, concepts and processes are orthogonal. That's what makes OO difficult for so many people. Procedural code is easy because when the customer says, "take this thing that we need done and do it," you go write a routine that does it, probably decomposing [tasks?] along the way.

Well, at least you seem to agree that procedural makes *some* things easy.

Object-oriented code is harder because when the customer says, "take this thing that we need done and do it," you say, "what do I add to the various pieces of my domain model to do this thing?" and "how does this change my overall model?" The procedural scripts get smeared out into bits of little code fragments on various classes in the domain model.

This is more or less EventDrivenProgramming, and it is good. It's all part of DivideAndConquer. -- top

The first approach is easier and more intuitive, but it becomes difficult to manage when you have lots of things to do and complex domain logic. Eventually it becomes simpler to manage a domain model [in OO].

"Lots of things", yes. But that is why we have databases. How are lots of event snippets worse than lots of classes and lots of methods anyhow? As far as this not handling "complex" logic, I would sorely like a specific example.

For people who do a lot of OO work, like myself, that time comes sooner rather than later, because we've been trained to think in terms of domain models.

Well all these years I thought I was dealing with a "domain model" also. How can we objectively see if your head domain model is better than my head domain model?

As I look at your other comments, I think you may think that a class corresponds to a UserStory/UseCase. That's not at all true... that's the procedural approach.

No, I did not say that nor mean to imply it. However, some OO design philosophies probably propose that. It is all part of the OoLacksConsistencyDiscussion complaints. -- top

A DomainDrivenDesign has a class corresponding to a concept in the domain--what you're calling an entity--and a single UserStory may involve multiple classes, just as in your table below. Although a UserStory probably wouldn't be as low-level as your table implies.

"Concepts" are multi-faceted by my observation. EverythingIsRelative. They intertwine in the real world. One cannot draw clean *global* circles around "concepts" and have that be sufficient. Thus, I strive for local or ad-hoc "circles". OO's abstractions of concepts tend to try to be global, and that is a flaw in my opinion.

I can't agree that OO reinvents databases; the goal of OO isn't to track relationships. A domain model reflects the relationships between concepts, but that's a side-effect, not the goal... and it isn't necessarily a complete or accurate representation.

--JimShore

Polymorphism, inheritance, and composition/aggregation are all about "relationships".

(Responding to Top's inline comments): This discussion seems to be veering off into a discussion of databases versus OO which isn't what I'm interested in discussing. (Although I have to admit that an event-oriented approach using database tables, triggers, and stored procedures, assuming that's what you're suggesting, sounds interesting.)

{I believe tables/DB is the *key* to procedural design's success. OO relies too much on code, and that is its main downfall. This seems to be a constant sticking point between us. I learned to stop thinking in code (as much) and quickly started thinking in tables. It felt liberating. It is declarative, with code being the low-level detail that carries out orders and requests based on data. With regard to GUIs, DB triggers would play little or no role in the designs I tend to follow. The framework already has "outer control", so we don't need triggers. However, that is mostly a framework implementation issue rather than something an application developer has to deal with directly. -- top}

To recap my position, there's no such thing as a fundamental flaw in the procedural design approach. I don't see a fundamental flaw in any mainstream design approach; just fit for a particular purpose or not fit for a particular purpose. Which, if I had been willing to keep my mouth shut, is exactly the point of the WhenToUseWhatParadigm link down at the bottom of the page.

WikiGnomes, please feel free to do anything you like with my end of this discussion.

--JimShore


"Fixel" Domain Matching Example

An example might be more clear: In one case, the domain expert says, "you need the first fixel." We say, "we'll use the fixelFinder on the dataArray to get an array of fixels and then we can get the first fixel from that." Our domain expert says, "huh?" We reply, "well, a dataArray is how we represent the data points in the blodget, so using fixelFinder allows us to find all the fixels in the blodget. Once we have that list of fixels, we can just take the first one." And the domain expert says, "that doesn't make any sense, but okay... whatever..."

  firstFixel = fixelFinder.find(dataArray)[0]

After refactoring, we say, "We have the blodget object, so we'll ask the blodget for its first fixel." Our domain expert says, "yeah, that's right, you want the first fixel from the blodget."

  firstFixel = blodget.fixel(0)

The new code matches the domain, making it easier for us to talk to the domain expert and reducing the likelihood of errors.

This is a real story from last week. I've just changed the domain terms to keep things mysterious.

--JimShore

Why not just have a function called "getFirstFixel"? I suspect you will argue that OO forces one to group all fixel-related stuff into the same class and that procedural relies on convention only to do that. However, there are at least 2 problems I see with that reasoning.

<Probably an appropriate refactor of the above code would produce Blodget.FirstFixel(). I suspect a more in-depth refactoring would replace the FirstFixel() method with one that actually performs an action; instead of something like Modulate(Blodget.FirstFixel()), one would have Blodget.Modulate(). Remember, there is probably not a need to "get" the first fixel; there is probably a need to take some action that just happens to involve the first fixel.>

First, many tasks or scenarios involve multiple nouns. A single PrimaryNoun is artificial. It could involve 2 nouns, 5 nouns, or 1 noun. It may later grow to getFirstFixelThatHasLessQuantityThanRelatedSnarkle. This now involves both fixels and snarkles.

Second, as the OO design matures it often involves splitting ever growing classes into multiple smaller classes. At this point any grouping of related classes is now a convention, just like subroutine grouping.

Besides, why should the domain expert dictate how the code looks? If you want to track UserStory's that is fine, but to force them to be associated with just one entity is arbitrary in my observation. There is no Noun Police I know of in the universe that enforces that. At best it is a UsefulLie. A better visual way to track might be a grid with UserStory/UseCase on one axis and entity on another. We may then have something that looks like this:

  UserStory/UseCase      | Snargle | Flaggle | Groggle | Etc...
  -----------------------+---------+---------+---------+-------
  getFirstSnargle        |    X    |         |         |
  removeDuplicateGroggle |    X    |         |    X    |
  fooTheBar              |         |    X    |         |

--top

If the domain has a Snargle, it makes sense that the program has a Snargle, then the domain expert and the programmer can discuss things without having to do a mental translation every time Snargle is mentioned. That applies even if Snargle is just a concept in the domain.

Why can't the Snargle be a DB table? I see no advantage to make it a global code thing. Database entities and noun attributes are one-to-one, but often not behavior. Behavior and entities tend to be orthogonal enough to avoid hard-wiring a large-scale relationship into code.

Simple, because a table isn't code. Domain models are ways of organizing code, not data. The domain object may be put into a table when it's not being used. We have to write code, so we need a way to organize it.

Organize it by minimizing it. Factor noun-ness out of its structure and move it into tables so that we are dealing more with tables and less with code because tables are more flexible. You can't query code very well. Plus, limiting code to almost only task-ness makes its organization more consistent so that there is less Picasso-ing going on with different developers. See CodeAvoidance.

We can organize it procedural style, by task, or OO style, by related behavior. You choose task because you think it's easier, we choose OO style because we think it's easier.

But that is then a style preference without any objective external evidence. Do you want Software Engineering (and Wiki) to be like science or like marketing?

[Programming style preferences are subjective. I write OO code because I enjoy it.]

{Moved some to OoVersusTablesRants}

Plus, I am fascinated by OO thinkers. It is like trying to communicate with an alien being with a totally different way of thinking. The problem is that it is hard to convey solid thoughts back and forth. (I don't mean "alien" in a bad way. It is just an analogy to convey the feeling it gives me.)

I am not sure I follow the gist of the previous discussion. Would it not be fair to say that the operations performed on "Snargle" need to go (i.e., be stored) somewhere? I do not see how use of "tables" reduces the number of operations to be performed; to me the only change would be in the implementation of the operation.

I don't think anybody suggested that it reduced total operations. It is about managing them (finding, changing, listing, etc.) However, in my opinion OO designs do tend to re-implement DatabaseVerbs in each class.

Would we be in agreement that the quantity of operations is the same in either approach? Also, I would like to confirm the definition of "operation" I am using: a segment of code that could either be implemented as a section of code contained inline within a large method or as a stand-alone method called by the larger method. My opinion is that some basic (though fuzzily defined) operations need to be performed in whatever approach one takes, and the issue is primarily differences in implementation and packaging. Does this seem acceptable?

Yes, I generally agree, except perhaps in cases where the OO version appears to be reinventing things that databases already do out-of-the-box. But, for the sake of argument, let's put that issue aside right now. -- top


In addition to the discussion above, it is essential for people who perceive fundamental flaws in procedural to come to terms with NygaardClassification, because otherwise they might be programming procedurally without even realizing it.

The perception is primarily rooted in the fact that in stock procedural languages (especially C and COBOL) it was very easy to write spaghetti code, as well as the fact that procedural languages were in vogue in an early period when the maturity of the software engineering work force was a little bit less than it is now, when OO is in fashion. Therefore the flawed perception is that simply by applying good engineering practices (like InformationHiding, avoiding code redundancy, etc.) one has made the jump from procedural land into OO land, while in reality one is simply writing good procedural code in Java or whatever OO language. See the vacuous claim that OoIsJustGoodEngineering.

To illustrate this phenomenon I'll bring up the book RefactoringImprovingTheDesignOfExistingCode, considered by many a "bible" of good OO programming practice, where most examples are not ObjectOriented as per NygaardClassification but are mostly concerned with better organizing snippets of code - and those really are procedural/functional snippets. The mere fact that they are grouped in classes does not make them ObjectOriented.

I completely disagree with the above conclusion. RefactoringImprovingTheDesignOfExistingCode is full of OO code as per NygaardClassification. Firstly, all OO code is made from procedural/functional snippets; putting them into classes and having those snippets work on instance variables of that class is what makes it OO code. Almost every sample in the book shows you how to improve bad OO code, or procedural code, into good OO code. OO code is, after all, just better-organized procedural code in many, many circumstances.

["Refactoring..." is full of examples of how to turn procedural-ish code into OO and vice versa - it doesn't explicitly favor a paradigm. If I read the Fowler book correctly, most of the refactorings worked to make each class responsible for a single responsibility, and so the class would become more Nygaard-like and less C-with-classes-like. There were also several refactorings to go the other way, to turn a mostly OO class into a more procedural version.

And OO code is not just "better organized procedural code" under the Nygaard/Kay definitions. It's a fundamentally different way of thinking. -- JonathanTang]

I am generally not a fan of Fowler's work -- top

A nice exercise would be to take some longish example like http://www.refactoring.com/rejectedExample.pdf and translate it from Java to a good procedural language such as Scheme or ObjectiveCaml (make sure you don't use the objects in the camel), and I guess even Pascal or Ada without objects (Ada'83) is up to the task. It really is nothing but a small change of notation! Of course, some may argue the reverse: that the Scheme version would be OO programming in Scheme rather than the Java version being procedural programming in Java. To resolve this dilemma we'd have to appeal to the wisdom of KristenNygaard: Is the design of the system focused on simulating autonomous entities that react to signals by changing their state and sending other signals? Or is it just focused on how to best organize a series of calculations as a succession of procedure calls (even when those procedures are methods)? A closer look will show that the latter is the case.

[Scheme and OCAML are functional languages. Did you mean "good functional language..." or did you mean to use Pascal/Ada for the example. -- jt]


By the way, I have no doubt that TopMind is able to produce code at least as good on that [above] example as MartinFowler. What do you think, Top?

It sounds suspiciously similar to the example analyzed at http://www.geocities.com/tablizer/mellor.htm . In the end the disagreements circled around the probability of certain change patterns. In my opinion those proposed by the OO debaters were artificially regular in the "shape" of change. Perhaps polymorphism fans and procedural fans simply perceive change differently. -- top


Is the design of the system focused on simulating autonomous entities that react to signals by changing their states and sending other signals? Or is it just focused on how to best organize a series of calculations as a succession of procedure calls (even when those procedures are methods)?

What's the criteria used to make this distinction? A procedure call is a signal.

I would say neither. The design of the system should be focused on automating a manual task. It should not be an attempt to throw a lot of techno-babble at a client (as was done in the question).

Be careful not to project experience from some problem domains onto others. Many systems do not automate existing manual tasks, and even in systems that do, strict adherence to existing roles and responsibilities can be counter-productive. I'm still curious how we can tell the difference between simulating autonomous entities signalling each other and organizing a series of calculations as a succession of procedure calls.

(moved rest of discussion to AutomatingExistingProcessVersusImprovingIt)


The relationship between data and operations is generally many-to-many.

Could someone expand upon this statement? I have a hard time picturing even a one-to-many relationship between a single operation and multiple data types. I can only think of a few restricted places where this is even remotely implementable, and those involve hidden type changes by the compiler (for example, char to int in C). How can a single operation apply to more than one data type?

I generally divide operations into two kinds: DatabaseVerbs, and domain-specific. Keep in mind that the division may be a little fuzzy in some cases; it is just a conceptual classification. But in my opinion DatabaseVerbs belong to the database for the most part. One should not have to repeat Add, Change, and Delete for each and every entity class, for example. That is poor InterfaceFactoring. But beyond those, many "operations" do or can involve multiple nouns. For example, printing an invoice involves customer data, invoice data, and line-item data, as well as possible tax data. Thus, at least 4 entities are involved. It is true that at original development time, one particular noun may be the PrimaryNoun, but over time this is far from guaranteed. Thus, the coupling between operations and nouns should be light, and procedural fits well with this policy. Polymorphism is too heavy a coupling statement in my opinion. -- top
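The invoice example above can be sketched in procedural style (a hypothetical toy with made-up entity data; the table and field names are assumptions): the operation reads from several entity tables without being "owned" by Customer, Invoice, or LineItem.

```python
# Hypothetical sketch: "print invoice" is an operation, not a method of
# any single noun. It touches customer, invoice, and line-item data.

customers = {1: {"name": "Acme Corp"}}
invoices = {10: {"customer_id": 1, "tax_rate": 0.05}}
line_items = [
    {"invoice_id": 10, "desc": "Widget", "qty": 2, "price": 4.00},
    {"invoice_id": 10, "desc": "Gadget", "qty": 1, "price": 10.00},
]

def print_invoice(invoice_id):
    inv = invoices[invoice_id]
    cust = customers[inv["customer_id"]]
    items = [li for li in line_items if li["invoice_id"] == invoice_id]
    subtotal = sum(li["qty"] * li["price"] for li in items)
    total = subtotal * (1 + inv["tax_rate"])
    print(f"Invoice {invoice_id} for {cust['name']}")
    for li in items:
        print(f"  {li['desc']}: {li['qty']} x {li['price']:.2f}")
    print(f"Total (incl. tax): {total:.2f}")
    return total

print_invoice(10)
```

Making the invoice the "primary noun" and attaching this as a method would be one choice among several; the procedural form leaves the operation equally distant from all four entities.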

How would one write such shared methods? For example, how would one create a single Add() method for both a name and a birth date? The only options I see are to write multiple Add() methods with some means to discriminate between them, or to create an Add() method with a large parameter list and select the particular Add operation one wishes to perform based on the parameters passed. Is there some alternative to create a shared method I am not aware of?

Most database interfaces seem to solve the problem by using data or meta-data instead of a coded approach. Schemas are stored as data, so what an "add" needs for a given entity is calculated/checked as needed. It would be similar to using a DataDictionary for validation and other purposes.

Could you expand further? Unfortunately, I still can't see what you are describing. Are you implying a code generator or something similar (or am I totally in left field)? For the Add() example, how does the offered data get placed in the correct location?

It depends on how you define "location". Your "add a birth-date" text is a bit confusing to me. Generally you "add" a person object or record. If there is already a record, then you "replace" the birth-date in the record/object, not "add" it.
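The metadata-driven approach described above can be sketched as follows (a hypothetical toy, assuming a DataDictionary-like schema stored as plain data; the field names and rules are made up): one generic routine serves every entity, so Add is not re-coded per class.

```python
# Hypothetical sketch of a metadata-driven "add": validation rules live
# in a schema (data), and one generic routine checks any record against
# its entity's rules before storing it.

import datetime

schema = {
    "person": {
        "name": {"type": str, "required": True},
        "birth_date": {"type": datetime.date, "required": False},
    },
}

tables = {"person": []}

def add(table, record):
    for field, rule in schema[table].items():
        if rule["required"] and field not in record:
            raise ValueError(f"{table}.{field} is required")
        if field in record and not isinstance(record[field], rule["type"]):
            raise TypeError(f"{table}.{field} must be {rule['type'].__name__}")
    tables[table].append(record)

add("person", {"name": "Ada", "birth_date": datetime.date(1815, 12, 10)})
add("person", {"name": "Grace"})  # optional field omitted
print(len(tables["person"]))      # 2
```

Adding a new entity means adding a schema entry, not writing another Add() method.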


Why should the domain expert dictate how the code looks?

I think this might be an interesting question to explore. I, for one, have always just accepted that code organization becomes "better" as it more closely reflects the problem domain. I also realize that I cannot justify that belief. I would suggest that we might look at some lower-level questions as well.

-- AnonymousDonor

When I first learned OO, closely modeling the problem domain was in vogue. I was told that would facilitate communication with end users. I've drifted far away from that position over the years. My users do not care what my code looks like. I try to use their jargon as much as I can, but I do not try to maintain a one-to-one model of their processes and roles in my code. They want the code to meet their requirements but (for the most part) they do not care how it does so.

OnceAndOnlyOnce drives most of my code organization decisions. I often write a new feature in a verbose, repetitive or brute force way ("make it work"), then refactor the common code ("make it small"). The problem domain determines the requirements the code has to satisfy, but the solution domain determines the structure of the code.
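The "make it work, then make it small" cycle can be sketched like this (a hypothetical toy; the order records and helper names are made up): the first version repeats an expression, the refactored version pulls it into one place.

```python
# Hypothetical before/after refactoring toward OnceAndOnlyOnce.

# "Make it work": the amount calculation is repeated.
def report_verbose(orders):
    lines = []
    for o in orders:
        lines.append(f"{o['id']}: {o['qty'] * o['price']:.2f}")
    total = 0.0
    for o in orders:
        total += o["qty"] * o["price"]
    lines.append(f"total: {total:.2f}")
    return lines

# "Make it small": the repeated expression factored into one helper.
def order_amount(o):
    return o["qty"] * o["price"]

def report(orders):
    lines = [f"{o['id']}: {order_amount(o):.2f}" for o in orders]
    lines.append(f"total: {sum(map(order_amount, orders)):.2f}")
    return lines

orders = [{"id": 1, "qty": 2, "price": 3.0}, {"id": 2, "qty": 1, "price": 4.5}]
assert report(orders) == report_verbose(orders)
```

Note that this particular refactoring needs no objects at all; the question is whether OO makes further factorings of this kind easier.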

I have done this in procedural languages, but I still use object oriented techniques. Object oriented languages make those techniques easier, so I prefer them.

-- EricHodges

Can you provide an example of OO facilitating OnceAndOnlyOnce?

One alternative to domain-driven code organization would be "component driven" organization. This approach views software as consisting of existing components that are merely "wired together" to create a new application. I still see occasional references to this "absolutist" belief, but do not feel it will ever be practical. On a lower level, however, I see there is always a trade-off between using an available generic module and creating a domain-specific module. --WayneMack


Re: Database is "global" and "naked"

See GateKeeper


Re: Inheritance is an example of OO facilitating OnceAndOnlyOnce.

Yes, but I find problems with using inheritance to factor out duplication.

Subroutines/functions can also be used to factor out duplication, and appear to be nearly as powerful as inheritance in that regard. I agree that it does require a little bit more code than inheritance, but that is the cost of not buying into a tree-shaped view of change and differences. In other words, functions are more change-friendly if the world is not tree-shaped in reality. If the world is tree-shaped, then inheritance pays off a bit better than functions in terms of OnceAndOnlyOnce; if not, functions win.
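The two ways of factoring out the duplication can be put side by side (a hypothetical toy; the Shape/Square names are made up): shared behavior in a base class versus shared behavior in a free function.

```python
# Hypothetical sketch: inheritance versus a plain function for sharing
# one piece of behavior.

# Inheritance: the shared behavior lives in a base class, and anything
# that wants it must join the tree.
class Shape:
    def describe(self):
        return f"{self.name} with area {self.area():.1f}"

class Square(Shape):
    def __init__(self, side):
        self.name, self.side = "square", side

    def area(self):
        return self.side ** 2

# Free function: the shared behavior lives outside any hierarchy, and
# any caller with the right values can use it -- no tree required.
def describe(name, area):
    return f"{name} with area {area:.1f}"

print(Square(3).describe())        # square with area 9.0
print(describe("square", 3 ** 2))  # square with area 9.0
```

The function version passes a little more at each call site, which is the extra code mentioned above; the inheritance version commits the design to the tree.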


See also: WhenToUseWhatParadigm, SwitchStatementsSmell, OoLacksConsistencyDiscussion, ImprovingProceduralLanguages

