Premature Generalization

This is really going to be a clean framework. I'll make an abstract class out of this part so that folks can subclass it later, and I'll put in a bunch of well-commented overridable hooks in the concrete subclasses so that folks can use them as templates, and just in case somebody ever needs to build special debug subclasses, I'll put in extra stubs over there (somebody will thank me for 'em one of these days). Yeah, this is really going to be neat.

Thus is bloated software produced when our artistic sense gets the better of us. -- DaveSmith


Here are some generalizations in programming:

Here are some examples of inappropriate generalizations:

It's interesting to note that all these maladies can be cured by refactoring. But refactoring isn't always possible, particularly when you're dealing with somebody else's immutable code. (Maybe the real problem is the immutability of the code, and not the generalization at all.)

Here are some times at which a programmer can generalize:

Contributors: EdwardKiser

I would consider speculation as different from generalization. Until you have the information necessary, you are just guessing at what is needed and your success is determined largely by luck. I also believe you may be grossly underestimating the costs of speculation. -- WayneMack


When the generalization turns out to be inappropriate, we call it premature.

I disagree, though perhaps only with this sentence taken out of context. "Premature" has to do with when, not what. It is premature to posit a generalization until you can base it on concrete solutions to real problems.

Given specific solutions to base a generalized solution on, it's still possible to botch the generalization, but at that point, it's not a premature botch. -- DaveSmith


Edward,

First, generalization is not a set of tools. Generalization is a process of making code more general. The tools and methods you describe can be used for generalization, but they can also be used for specialization, depending on how you apply them to your code. Moreover, generalization doesn't require such tools and methods; removing some unnecessary function parameters or ASSERTs sometimes constitutes a generalization, too.

Second, generalization is not about comprehension/inclusion or concretion. Generalization is about removal of excessive details. In other words, generalization is simplification.
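
A minimal sketch of this "removing excess detail" sense of generalization (the names are hypothetical, not taken from any code above): deleting a parameter the function never uses reduces what callers must supply, and so widens the contexts in which the function can be used.

  #include <iostream>
  #include <string>

  struct Session { };  // hypothetical stand-in for some heavyweight context

  // Before: greet demands a Session it never actually reads.
  void greet_old(const Session&, const std::string& name) {
    std::cout << "hello, " << name << "\n";
  }

  // After: with the unused parameter removed, greet can be called from
  // code that has no Session at all. Fewer details required of the
  // caller; a generalization by simplification.
  void greet(const std::string& name) {
    std::cout << "hello, " << name << "\n";
  }

  int main() { greet("world"); }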

Concerning your examples:

Could you restate this? I am having trouble understanding this argument.

The code

  enum command { start, stop };
  void f_command(command cmd) {  // "do" is a C++ keyword, hence "cmd"
    if (cmd == start)
      f_start();
    else
      f_stop();
  }
  void f() {
    f_command(start);
    f_command(stop);
  }
isn't more general (less detailed) than the code

  void f() {
    f_start();
    f_stop();
  }
Isn't overloading a generalization of a method's signature? Why do you say this is not a generalization?

Is overloading of operator<<(ostream&,int) a generalization of operator<<(int,int)?

Creating an abstract base class is a generalization regardless of whether it is used or not. If it is not used, then it is most definitely a PrematureGeneralization.

Creating an abstract base class doesn't make the code more general (just the opposite, I fear). Writing code that doesn't need the full class interface does, and it doesn't matter whether such a limited signature is defined as a class itself or not.

An abstract base class is less specific than a usable class, as the abstract class does not provide any implementation of the methods, only names and signatures. The abstract base class only defines intent and allows the intent to be satisfied in multiple ways. This is more general than a method which will return a fixed result for a specific set of calling parameters. If the abstract class does not result in more general calling code, then it is an inappropriate (and perhaps premature) generalization.

An abstract base class is less specific than a usable class, so if you replace your "usable class" with your "abstract base class" and your code still works, then you have generalized your code. The question is, do you always generalize the existing code if you just add new code, however general it is?

As an example, class c001 {}; is the most general class I can imagine. If I add this class to my code, will I therefore generalize my code? I don't think so.

-- NikitaBelenki
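
(A sketch of the signature point above, with hypothetical class names: the generalization comes from calling code that demands only the narrow abstract signature, not from the mere existence of the abstract class.)

  #include <iostream>

  struct Shape {                     // abstract: names and signatures only
    virtual double area() const = 0;
    virtual ~Shape() {}
  };

  struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
  };

  // report needs only the narrow Shape signature, so it works unchanged
  // for any subclass, present or future. The generality lives in this
  // calling code, not in the abstract class by itself.
  void report(const Shape& s) { std::cout << s.area() << "\n"; }

  int main() {
    Square sq(3.0);
    report(sq);  // report never needed to know about Square
  }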

I hate to say this, but the attributions here are very confusing. What is Nikita's, and what is not? The bit that starts in italics and then suddenly stops is particularly confusing. Can someone please clean this up?


In reading this page so far, I get the impression that we don't have a clearly defined notion of WhatIsGeneralization, so then it's doubly hard to say when it's premature or otherwise faulty. If the participants here are interested in clarifying this fundamental question, some starting points might be

-- WaldenMathews


To make a piece of machinery "more general" you have to modify it to be capable of doing something other than what it already does, while making sure it can still do what it does now. "Premature" in this context would be adding a capability that does not improve the system as a whole, so it depends on the opinion of the users or future developers. For example, adding a saw to a SwissArmyKnife would be useful to some people, but for others it would just add weight and complexity.

An example of "ultimate generalization" could be DWIM(x) - "Do what I mean to X" (http://ars.userfriendly.org/cartoons/?id=20011121). An actual implementation would be able to do literally anything, and would be useful for any X.

An example of the opposite ("totally free of generalization") could be code that takes no inputs and returns a hard-coded value.

-- VictorEngmark


PrematureAnything means done before it is ready. By definition, PrematureGeneralization is generalization done before you are ready to do it. It may also be a case of YouArentGonnaNeedIt; see YagniMightLeadToPrematureOptimization.


I'm not quite yet to the point of stating that PrematureGeneralization is the root of all evil, but I've come close. This is a sin common to ObjectNewbies who either tend to try to create everything as an abstract class, or take the opposite approach, and have all isolated classes with the occasional ShallowHierarchies.

On the other hand, it also seems to be a common sin for a certain set of ObjectExperts? who tend to believe that they can successfully generalize from one success into a framework that will be universally useful. Unfortunately, these people tend to congregate together into StandardsCommittees?. Witness the CORBA Persistence Service...

-- KyleBrown


Actually, I think it's the approach taken to generalization that matters. If you design something that isn't general, and then add lots of hooks and conditionals to make it general, that isn't a very good approach.

What does make generalization handy is the removal of special-case conditionals and hooks. Sometimes it's difficult to come up with a truly general solution, and in that case you may have to live with specialized code.

Of course, when you do have a general solution that covers all cases (even, potentially, the ones you haven't thought of), it usually ends up being quite elegant and maintainable.

Just my $0.02 worth. -- MarkGrosberg
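
A deliberately tiny, hypothetical sketch of the removal Mark describes: the general loop already covers the "special" case, so deleting the conditional simplifies and generalizes at the same time.

  #include <vector>

  // Special-cased: the if-branch "handles" empty input, but the loop
  // below already handles it for free.
  int sum_specialized(const std::vector<int>& v) {
    if (v.empty()) return 0;  // redundant special case
    int total = 0;
    for (int x : v) total += x;
    return total;
  }

  // General: the conditional is gone and every input is covered,
  // including ones the special case never anticipated.
  int sum(const std::vector<int>& v) {
    int total = 0;
    for (int x : v) total += x;
    return total;
  }

  int main() {
    std::vector<int> v;  // the empty case, handled either way
    return sum(v) + sum_specialized(v);
  }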


Oh yeah! I've trained myself for years to be able to do this. I enjoy working my brain. It's what I am. Very hard to avoid. Stop me before I generalize again.

On C3, we have a special Artifact which is awarded to developers when they do things that are too cool. It's a lovely red, yellow, and blue cap with a propeller on the top.

Thanks, Kent.

-- RonJeffries


Whenever I'm afraid I'm about to generalize prematurely, I think of Kent, and the urge goes away.

Thanks, Kent.

Of course, if you defer generalization until the time is right, the result can be profoundly satisfying.

-- BrianFoote

Yep, there's nothing like hearing him say "Of course you're much smarter than I am, so it will probably be OK for you to do it that way ..." -- RonJeffries

"Premature generalization", "deferred generalization" - sounds kinda Tantric!


Much agreed on the pleasure of deferring generalization! In ExtremeProgramming, we discover generalizations. "Hmm, these two classes have a lot in common, let's make one a subclass of the other." "Hmm, look, there's this other one, would an abstract superclass make for less code?" "Hmm, we seem to be making collections sorted by date a lot, maybe we need a HistoricalCollection? object." It has been very hard for me to stop foreseeing what we might need and to wait until we need it. But dammit, it does work better. -- RonJeffries

Doesn't it sometimes happen that, to add a new feature, you need to rethink/rewrite a lot of code, while if you had done a little premature generalization everything would have been easier? I have no example at hand, but I imagine that "premature generalization" can sometimes keep doors open, making it easier and faster to add some kind of feature later, while without it that feature could be much harder to add. How do PrematureGeneralization and BuildForTodayDesignForTomorrow relate? And doesn't PlanAheadForReuse? (the sixth principle in SevenPrinciplesOfSoftwareDevelopment) indirectly suggest generalizing (too much?) in order to make reuse easier? -- MauroPanigada

It is more common that you'll generalize in the wrong direction and later have to undo your generalizations and redo them in a better direction. It is much easier to make good generalization decisions when you have concrete instances from which to generalize (thus rules such as ThreeStrikesAndYouRefactor). I think the exception to this rule would be when creating an ApplicationProgrammerInterface? or ProgrammingLanguage where you expect developers outside your house to also use your model. In that case, you are constrained by BackwardsCompatibility in how you can later rethink/rewrite your code. But good API or LanguageDesign is not a trivial task, and you might spend years on it... so, for productivity, you should avoid sharing APIs with other projects or developing languages until you have several projects needing the same API (ThreeStrikesAndYouRefactor at the project level).


"I Have This Pattern."

If PrematureGeneralization is an AntiPattern, then maybe the corresponding pattern is "JustInTimeReuse"? (I've been meaning to write up something like this for a while now, and hope to in my Copious Free Time.) Or, in the spirit of what Ron said above, maybe "JustTooLateReuse"? [see CodeHarvesting]

Someone in my group put together a reusable framework. It had enough generalization that the average size of our apps shrank by a factor of ten; a Good Thing(tm). But it had 'way more generalization than we needed, and brother, was it Byzantine. -- PaulChisholm


I used to do this kind of PrematureGeneralization all the time. It's part of what I now call "Too many toppings on my ice cream" syndrome: if, atop your ice cream serving, you put strawberries, hot fudge, marshmallow, caramel, chopped nuts, whipped cream, bananas, pineapple, coconut, butterscotch, etc., at some point you can't taste the ice cream anymore; you only taste the toppings. And you have so many of them that they don't mix well together. I suppose this is the kind of software bloat that Brooks talks about in his "second-system effect" principle. But with some of us, it happens for our first system too ;-) Maybe it has something to do with trying to make a "Master Plan."

Anyway, whenever I get such ideas to generalize this way, I don't necessarily do it. What I do is try to make sure I don't make any other design decisions which would preclude me from adding it later on. If I can't do that, then I at least try to do it in a way that doesn't require a complete overhaul if I need to go back and change it.

-- BradAppleton


How does this relate to RoomFullOfMeccano? If we're going to BuildaSkyscraper?, don't we need to start with some kind of "Master Plan"?

I don't see that a Master Plan is inevitably connected with Generalization. In terms of RoomFullOfMeccano, I can interpret the above conversation in the following way: The people on the right side of the room are working bottom-up, the people on the left side of the room are working outside-in. The people on the right just put pieces together. Periodically one of them says, "hey look at this... it is a stable right angle!" and other people start incorporating that. Another says, "I finally got a three-tube-connector to work!" and people start using that. The point of the above PrematureGeneralization talk is to consider what it would mean if, at the beginning, one of them were to say, "Look guys, I have a 5-tube-connector, which we can use anywhere we need a 2-tube connector, and then we'll have three tube entries free in case we should ever need them!" Someone on Ron's team would at that point start singing the propellerHead song and bring in the beanie. The point is that no one has identified the need for a 5-tube connector, and they have built this more than sneaky feeling that if you haven't identified the need, YouArentGonnaNeedIt. -- AlistairCockburn

Oooh, can I download the PropellerHeadSong? for my cell phone ring?


A lead in here from SimpleSuperclassName. It seems that if we stick to simple domain abstractions, the natural tendency is to form specializations as subclasses. When we start to search for commonality on top of our domain concepts, we can get into trouble. I recognize this as a powerful heuristic for OO, but the other side of me leans towards BlackBoxComponentry where we specifically look for non-domain abstractions for very utilitarian reasons. I suppose that it is cool to do both but make a strong separation between them.

By the way, I have to ask this. I heard that once there was MultipleInheritanceInSmalltalk, and it was even implemented in Smalltalk but it was retracted. Will anyone relate the story? -- MichaelFeathers

After a quick web search - I'm no expert on Smalltalk history (in other words, I don't know Smalltalk at all) - I found an article about adding MI to Smalltalk-80. Quoting from the end of said article: "The changes required to add multiple inheritance to Smalltalk-80 are only a few pages of Smalltalk code". The whole text can be found here: http://www.unicavia.com/Squeak/MultipleInheritanceInST80.html


Friends of mine said they found the hooks still lying dormant in the current Smalltalk and they experimented with them themselves. So I suspect it is all true and all you have to do is go in and look around. -- AlistairCockburn


I found that no one but the author ever uses the "hooks." While I had built an extremely powerful object model framework, I doubt anyone else will ever fully understand it. Heck, people had problems adding a new attribute effectively. I did grow it from just-in-time requirements, but every time I added a "feature" whose callers got refactored out, I left it in, especially template virtual methods (template a la the TemplateMethodPattern). I don't think they'll be used now that I've left. On the other hand, while I was still there, I could get magic to happen quickly, which is what counted. -- SunirShah

On a similar note, consider the following from ForthValues:

The main enemy of simplicity was, in his (ChuckMoore's) view, the siren call of generality that led programmers to attempt to speculate on future needs and provide for them. So he added a corollary to the Basic Principle: "Do not speculate!"

Do not put code in your program that might be used. Do not leave hooks on which you can hang extensions. The things you might want to do are infinite; that means that each has 0 probability of realization. If you need an extension later, you can code it later and probably do a better job than if you did it now. And if someone else adds the extension, will he notice the hooks you left? Will you document this aspect of your program?

Now THAT's a great example of wisdom from pre-OO RespectedSoftwareExperts, as somebody was asking for recently (but where, O RecentChangesJunkie's?)


Isn't this what SteveMcConnell calls GoldPlating in RapidDevelopment?


Maybe I'm the only dissenter here (which should ideally give me pause, but what the hell). I think a lot of times you can prematurely generalize, although of course I wouldn't call it that. I think you can install hooks in your code. I guess the trick is to have the discipline to not do so arbitrarily, but to come up with a unifying reason for why you're doing so. And this is hard, which leads to lots of "Don't do that" types of comments.

That is, if you decide to install a hook of some kind in one situation, then you should be able to generalize the reason that you installed that hook in the first place, so that you will know a priori where to install others as you move on. If it's a specific hook to solve a specific problem, then it will never be used again, so don't do it. But if it's a hook that's there because it's an instance of a completion of a business activity, then you should install hooks at all points where a business activity completes. Judging when and when not to do this is an art, and I think presenting it as an antipattern is therefore overly simplistic.

To put my money where my mouth is, I tend not to put hooks in my code, but to "fire" lots of events. If I've done something interesting in a given method, then I like to alert potential listeners about it. That leaves a couple avenues open for specialization: subclassing or composition. Deciding what's "interesting" is, of course, generalization, and would be judged "premature" by this discussion, I fear. I don't think it is. Thoughts?

-- LairdNelson
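
A sketch of the event-firing style Laird describes, with hypothetical names throughout: the method announces that something interesting happened, and any number of listeners (including none) may react, leaving specialization to subclassing or composition.

  #include <functional>
  #include <iostream>
  #include <string>
  #include <vector>

  class OrderProcessor {
  public:
    using Listener = std::function<void(const std::string&)>;

    void addListener(Listener l) { listeners.push_back(std::move(l)); }

    void process(const std::string& order) {
      // ... do the actual work, then announce the interesting event ...
      fire("processed: " + order);
    }

  private:
    void fire(const std::string& event) {
      for (const auto& l : listeners) l(event);
    }
    std::vector<Listener> listeners;
  };

  int main() {
    OrderProcessor p;
    // Zero listeners costs almost nothing; specialization by composition.
    p.addListener([](const std::string& e) { std::cout << e << "\n"; });
    p.process("order-42");
  }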

But would it hurt to leave the events out until you need them? If you're not using them now, why don't you put them off? You can't be sure that you'll need the functionality later, but you can be sure that you don't need it now.

You also need to consider HowPeopleReallyAre... [Continued on that page.]


It's actually a lot easier to remove stuff nobody used than add useful stuff later.

I find it extremely difficult to discover methods or conditional branches that are not being used. As another example, I recently found two classes that had been arbitrarily split in two (creating a three-level hierarchy where only a two-level one was justified). These things are very difficult to find, time-consuming to verify, and, in the last case, time-consuming to correct.


Is there another risk, as well? What if somebody uses it, but the use they put it to isn't a good match to the use you had in mind? After all, you're no clairvoyant, and if you guess what people will need in the future you're going to be wrong quite a bit. The result could be that you have code that ends up being used, so you can't delete it, but it's nasty and crufty because it's being misapplied.

Anything you do has a risk. You have to match the risk against your context, not some ideal context. These aren't guesses. They should be from use cases and be tested. There's no reason for the code to be crufty or misapplied. You are also being clairvoyant by predicting that developers will adapt the class in the future instead of writing wrapper code or writing new code. This type of clairvoyance doesn't seem to bother people here.

It isn't clairvoyance if you've also implemented other practices (UnitTests, CollectiveCodeOwnership, etc., etc.) that enable your coworkers to adapt your work without fear. Yes, this is more ExtremeProgrammingDogma?. And, yes, if you say that can't be done where you work, someone will tell you to ChangeYourOrganization.

"It is dogma of the most condescending kind."


I don't know of many compilers that can correctly force-feed SlimFast? down the throat of a program, even if they are documented as doing this at some level. The hooks and factored out functions will eat up executable space and possibly slow down the system. To reiterate the above statement, any process or hook that is misunderstood can be detrimental to the system. IMHO, if it's not in the design at the end of the lifecycle, it is misunderstood (conversely, if it needs to be added to the design, do so in a way that the process or hook is clearly understood by its user). What good does it do for a Dog Catcher to call a class that involves cats just because we might want to implement Animal Control in the future even though it is not specified in our RequirementsAndGoals or RequirementsAndDesign? -- WyattMatthews


"... and I'll put in a bunch of well-commented overridable hooks ..."

I still don't understand what hooks are. They are the ultimate strawman, I know that. Are they called hook1, hook2, ... hookN? If they are derived from use cases, they aren't hooks.

See TemplateMethodPattern, TemplateMethod, and HookMethod.

If a method is used in an algorithm, it is hardly a hook. The algorithm is abstracting a process, so the method is central and required. There's nothing unneeded or extra about it.

If you make a hook method there because you *think* someone might use it someday, then it is extra and unneeded. You're generalizing prematurely based on an imagined future need. If you implement an algorithm and discover through refactoring that you can push a hook method up to a superclass, then you have a real need, and the hook method isn't extra.

TemplateMethodPattern says there's a method for the algorithm. How can a hook just be added into the algorithm if it means nothing to the algorithm? There's no imagined future need in an algorithm. Can you provide specific examples? I still think this is just a strawman. Nobody has really provided anything other than generalizations.

O.K., an example: say you have an immediate need (e.g., a UserStory) for writing a class to scan a set of logfiles. The story calls only for tallying certain attributes. You think, "Ah, but it'll be really convenient for somebody someday if there's a hook method that gets called whenever we begin to scan a new logfile." But there's no immediate need for such a feature. If you implement it, you're generalizing prematurely.

I've seen lots of code like this that was produced by well-meaning people with the thought that one day, someone would find the extra methods convenient. Meanwhile, that extra code has a carrying cost. It has to be tested. It has to be maintained. It occupies intellectual space in the head of whoever has to grok the class.

I have not seen lots of methods like CallThisAfterReadingLineFromLogFile?. Just have never seen it. Imho it is still a strawman. In fact, I see quite the opposite: programmers rarely think of clean algorithms. In this case, you probably have a method to read a line from the log file. Make that virtual; then people can override it if they wish, and there's no hook, because it is part of the algorithm.
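
A sketch of the two positions side by side, using the hypothetical logfile scanner from the story above: a do-nothing hook invented "in case", versus a step that is virtual simply because the algorithm genuinely performs it.

  #include <fstream>
  #include <istream>
  #include <string>

  class LogScanner {
  public:
    void scan(const std::string& path) {
      std::ifstream in(path);
      onFileOpened();              // the speculative hook, see below
      std::string line;
      while (readLine(in, line))
        tally(line);
    }
    virtual ~LogScanner() {}
  protected:
    // Speculative hook: does nothing, called "in case somebody someday
    // wants to know". Nothing in the current story needs it; this is
    // the premature kind.
    virtual void onFileOpened() {}

    // Part of the algorithm: scan genuinely reads lines, so making this
    // step virtual adds no speculative surface. It can be overridden,
    // but it is not an extra, unneeded method.
    virtual bool readLine(std::istream& in, std::string& line) {
      return static_cast<bool>(std::getline(in, line));
    }
  private:
    void tally(const std::string&) { /* tally attributes per the story */ }
  };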


How does this relate to a design situation where some developers say "let's factor the code out into more layers of abstraction because it may be used in different ways later" and others say "we use it in one way now and that's probably the only way it will ever be used, so there's no need to abstract"?

A concrete example from my experience would be separating the data access layer from the actual data/logic model itself. If you're loading your widgets from a DB and you're only ever likely to load them from a DB, is it worth the overhead of separating the loading of the widget out from the widget itself?

I tend to say yes, as I like the function of classes to be clear and distinct. I also think that refactoring can sometimes be more work than building a flexible and healthy structure in the first place. Many of my coworkers disagree (especially when working on client billable time).

What's a healthy middle point? Where's the distinction between good coding practices and PrematureGeneralization? Is it something that simply has to come from experience?

Thoughts?

-- Aaron Liebling
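
A sketch of the separation Aaron describes, with hypothetical names: the widget knows nothing about persistence, and the data access sits behind its own small interface, swappable even if only the DB version ever exists.

  #include <string>
  #include <vector>

  struct Widget {        // pure domain object: no persistence code here
    int id;
    std::string name;
  };

  struct WidgetLoader {  // the separated-out data access interface
    virtual std::vector<Widget> loadAll() = 0;
    virtual ~WidgetLoader() {}
  };

  struct DbWidgetLoader : WidgetLoader {
    std::vector<Widget> loadAll() override {
      // ... run something like "SELECT id, name FROM widgets" here ...
      return {};
    }
  };

  // The business logic depends only on the interface. Whether the extra
  // layer is worth it when DbWidgetLoader will be the only implementation
  // is exactly the judgment call being debated here.
  int countWidgets(WidgetLoader& loader) {
    return static_cast<int>(loader.loadAll().size());
  }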


To answer the last question: any coding practice that makes code more readable cannot be faulted on the grounds of PrematureGeneralization. PrematureGeneralization occurs when the code becomes more difficult to maintain or understand. -- JaredLevy


"any coding practice that makes code more readable cannot be faulted on the grounds of PrematureGeneralization"

What if the goal is to support flexibility (an extra level of abstraction) when it is unknown if the flexibility will ever be used? Obviously, going overboard with normalization (abstract base classes for everything) can make code less understandable rather than more, but when does it cross over into PrematureGeneralization?

Would it be argued that NO abstraction/normalization should be performed unless it is based on already defined needs or improves readability/maintainability?

-- Aaron Liebling


Avoiding PrematureGeneralization is a good rule of thumb, but it's not an absolute. Sometimes you can anticipate future requirements and design accordingly. Still, once you're aware of the dangers of PrematureGeneralization, you can intelligently weigh the pros and cons of making your code more general.


I'm glad to read about the problems and trouble I've had discussed in a scientific way! Generally, I find myself generalizing after the first three times I implement something. (RuleOfThree) -- StijnSanders



I think the discussion is a bit too general. When writing a program, it's easier to know when to generalize, but what if you're writing a library? You might not need a given generalization yourself (for example, if you're not planning on using your own library), but someone else might.


See Also: WhatIsGeneralization, BigDesignUpFront, CanAnArchitectureEmerge, DoTheSimplestThingThatCouldPossiblyWork

Contrast: WhatWillThisBecome

CategoryAntiPattern

