Cost Of Future Change

Let's figure out a model relating the cost of anticipatory design to the cost of future changes made without that design. -- RonJeffries


Here's a simple start from Ron:

Let p be the probability that a change we think we're gonna need will actually be needed.

Let the cost of making the first simple feature be C0, the cost of the anticipatory design be F, the cost of the later change without anticipatory design C1, and the cost of the change with anticipatory design C2. We would like to assume C2 <= C1. (What could make C2 > C1?)

Then the expected dollars spent with anticipatory design are

  designed = C0 + F + p*C2

and the expected cost of doing without the anticipatory design is

  undesigned = C0 + p*C1
Note that the cost of the change is multiplied by the probability that we were right when we guessed the change was needed.

Now for designed < undesigned, we have to have

  C0 + F + p*C2 < C0 + p*C1

or

  F + p*C2 < p*C1.

Subtracting p*C2 from both sides (which always preserves an inequality) gives

  F < p*(C1 - C2)

No surprise: this says that in the simple case, the cost of the anticipatory design has to be less than the savings in implementation costs times the probability that we really need the change.

In other words, if we wind up needing half the features we foresee, the saving on each change, C1 - C2, must be more than twice the cost of the anticipatory design before the design pays off.
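
Here's a minimal sketch of that model in Python; the numbers are purely illustrative, not data:

  # The cost model defined above, with made-up numbers.
  C0 = 10.0  # cost of the first simple feature
  F  = 3.0   # cost of the anticipatory design
  C1 = 12.0  # cost of the later change without the design
  C2 = 4.0   # cost of the later change with the design
  p  = 0.5   # probability the anticipated change is actually needed

  designed   = C0 + F + p * C2  # expected cost with anticipatory design
  undesigned = C0 + p * C1      # expected cost without it

  # The design pays off exactly when F < p * (C1 - C2).
  print(designed, undesigned, designed < undesigned)  # 15.0 16.0 True

With these numbers the design wins by a dollar; drop p to 0.25 and it loses.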

Or is that the whole story even with this simple model? What about the features we designed for and didn't implement? In those cases, isn't F a net loss? Do we need a stronger model, one that shows both sides, i.e. the cost of wasted design? What would that look like?
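
One reading of the same model (an observation, not an extension): F is paid whether or not the change arrives, so the wasted design is already in the designed total. Decomposed by outcome:

  # With probability p the change is needed:   cost = C0 + F + C2
  # With probability 1-p it never arrives:     cost = C0 + F
  #   (there F buys nothing: that is the wasted design)
  # Expected cost: p*(C0+F+C2) + (1-p)*(C0+F) = C0 + F + p*C2, as above.
  p, F = 0.5, 3.0                # same illustrative values as before
  expected_waste = (1 - p) * F   # expected dollars of design bought for nothing
  print(expected_waste)          # 1.5

On this reading the simple model already charges for the waste; what it leaves out are the timing questions below.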

And what other dimensions should the model include? In which mode will the system deliver value sooner? In which mode will the rest of the work go faster?

What say you?


One thing that this sort of model misses is the value of thinking about things that won't be built. Sort of a MeasureTwiceCutOnce aspect of software - I'd guess that I "design" about twice as much software as I actually build. And, by and large, I don't consider it wasted effort - a large part of that "excess design" is mapping out the tradeoffs and thinking about the potential shape of the software in revision 2 and just, in general, becoming informed about the possibilities and tradeoffs inherent in the thing I'm building.

I view it like this: there's the thing that gets built. And there's sort of a fuzzy cloud of "possible programs" that are somehow "nearby" in "program space." And I think that exploring this space, and doing so fairly early in the lifecycle of a piece of code, is a good thing.

XP, on the other hand, seems to discourage abstract exploration in favor of exploring-by-refactoring (and using UnitTests to help guide the exploration). -- WilliamGrosso

I think that it's not so much exploring-by-refactoring, but rather exploring-by-coding. -- KielHodges

How might we quantify the value to the people funding the thing that is to get built, of exploring this space? -- RonJeffries

See TheValueOfResearch


A more complex model of the process is displayed at http://www.armaties.com/AnticipatoryDesign.htm. I'd welcome alternative models, and will be happy to post them on my site if their authors wish.

Mind you, I'm not saying that the analysis above or the one referenced here are the definitive and final answer. What they do suggest, however, is that there is at least a substantial range of project parameters where anticipatory design does not directly return the cost of doing it. Food for thought? -- RonJeffries


In your first post you asked:

(What could make C2 > C1?)

It is possible that the more designed software is more elaborate (and perhaps more complex). This would raise the total cost.

For example:

People often feel that my initial design is "over-engineered" - I just feel that I am anticipating unarticulated needs. Usually, the users don't know what they will get - so they don't know what problems they will encounter. I usually have some idea because I've "been there and done that". -- ShalomReich


If people often feel the initial design is over-engineered, it might be time to listen. How often is the elaboration used, and at what stage in the project? What percentage of the time is the elaboration not used at all? What is the effect of elaboration on the initial release date? As you say, elaborating the software and making it more complex raises the [initial] cost. The question is when, if ever, is that cost returned. XP suggests that it's much less often than we are inclined to think when we get those good ideas.

Some contributors here and in person suggest that experienced developers can develop with extreme simplicity because they "automatically" avoid painting themselves into corners, and that less experienced developers might make more mistakes trying to do things simply.

My experience is that a little architecture goes a long way toward making the system flexible; refactoring goes a long way ditto.

Putting in unused code in anticipation that it will someday be used certainly does not make the system more flexible: it might possibly make some evolutions easier, but it makes others more difficult. YouArentGonnaNeedIt. -- RonJeffries


Hmmm. The only unused code I put into a system is the code that tells me when some basic assumption was violated (assertions, preconditions etc.). OTOH, the design I come up with is usually more elaborate than what the user asked for - because they did not have a chance to think through all of the ramifications of what they asked for. When I explain what I have done they may ask for some modifications or they may feel my solution is OK. Either way, the solution is more complex than the original problem statement.
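
For what it's worth, here is an illustrative fragment (mine, not from the discussion) of that kind of assumption-checking code, the one sort of "unused" code mentioned above:

  class Account:
      def __init__(self, balance):
          self.balance = balance

  def withdraw(account, amount):
      # Preconditions: code that runs only to announce a violated assumption.
      assert amount > 0, "withdrawal amount must be positive"
      assert account.balance >= amount, "insufficient funds"
      account.balance -= amount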

I previously worked in a shop that had a few massive classes. When I needed to do something that went against the basic architecture of these classes, it was a real pain. I felt that a major part of the problem was that the classes weren't flexible enough because they were so big. In my current job, I've set up lots of smaller classes that could be used to implement both sets of solutions. These classes are smaller and much more fine-grained (is that what you mean by refactoring? similar to function refactoring in procedural code?). This definitely is an architecture issue - but I'm usually the person defining the architecture.

Anticipation of future changes is handled (mostly) by thinking about what directions the system may take in the future and how that would impact the static and dynamic structures. I try to understand where the system needs to be most flexible so that I can encapsulate it. -- ShalomReich


Anticipation of future changes is handled (mostly) by thinking about what directions the system may take in the future and how that would impact the static and dynamic structures. I try to understand where the system needs to be most flexible so that I can encapsulate it.

The question is when do you do this thinking. I do it immediately (seconds, minutes) before using the flexibility in question. I find I achieve better results faster with lower risk than if I ask the same questions months or years before needing the same flexibility. This implies that I have to restructure existing code and data to match the new structure. If I had made the change long before, I wouldn't have this overhead, but I might be wrong. I'm wrong a lot. -- KentBeck


This implies that I have to restructure existing code and data to match the new structure. If I had made the change long before, I wouldn't have this overhead, but I might be wrong. I'm wrong a lot.

Restructuring code doesn't bother me. Restructuring data (especially relationships captured by the data) bothers me. It implies that my view of the world (the model that my application represents) was incorrect. A bad model will cause problems for the users of the system because it does not match the world that they have to deal with. Fixing these problems can be really expensive for the company (even if it doesn't bother the programmers).

There are future changes that I can see coming (or that have been deferred) for Release 2 even while I am coding Release 1. I don't think it is wrong to plan for these. Do you?

If by planning we mean thinking, we probably can't help it. If by planning we mean doing, then XP says don't do it until you need it. More below.

There are future changes that cannot be planned for.

  1. Changes that are driven by changes in the world external to the company.
  2. Changes in the users' view of the world (model) that come through use of the system.
I agree that I do not try to anticipate these.

The changes that I try to anticipate fall somewhere in between. For example, if you look in David Hay's "Data Model Patterns" book, you can see a progression from a simple representation of "Parties" to more elaborate representations of these ideas. I have often seen systems evolve from the simple representations to the complex ones. So when a user asks for a system that uses a simple representation should I 1) give him what he wants or 2) ask him if he really wants a more elaborate system? I consider the second approach "planning for the future" and I do it regularly. How do you feel about it?

-- ShalomReich


We really do recommend not doing anything before you need it. (But see YagniExceptions.) In particular, using Hay's book or MartinFowler's, I would recommend going with the simplest model that will fit my current need. This will deliver the currently-requested business value to the user sooner. It will ease maintenance and enhancement in the interim, by reducing code volume, test volume, system size. And it will be easy to change over when the time comes, since the models shown in these books readily evolve from one to the next. -- RonJeffries
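
To make "readily evolve" concrete, here is a toy sketch. The class names loosely follow Hay's Party pattern and are illustrative only; Release 1 ships the simple representation, and the split happens only when a real requirement arrives:

  # Release 1: the simple representation the user actually asked for.
  class Customer:
      def __init__(self, name):
          self.name = name

  # Release 2, only if the need materializes: the more elaborate
  # Party pattern from the data modeling books.
  class Party:
      def __init__(self, name):
          self.name = name

  class Person(Party):
      pass

  class Organization(Party):
      pass

  # The changeover is mechanical: every existing Customer becomes a Person.
  legacy = Customer("Ada")
  migrated = Person(legacy.name)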


It may be easy to change code over time, but data - particularly if you're constrained to using a conventional RDBMS, and more so if your database is distributed - is far less malleable. I suspect that any problems that C3 may have had with incremental schema updates were mostly technical; GemStone is wonderful in that regard. In the relational world (in my experience, at least), migration problems may seem small on paper, but can present huge logistical headaches, with many non-technical (e.g., political) overtones. In Ron's formula above, this means a much larger value for the variable C2 than experience with C3 might suggest. -- DaveSmith
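
A toy illustration of why data inflates C2 (sqlite3 stands in here for the conventional RDBMS, and the schema is invented): the code change is one line, but every existing row - and, in a distributed setting, every copy of the database - has to be migrated in step:

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
  db.executemany("INSERT INTO customer (name) VALUES (?)",
                 [("Ada",), ("Grace",)])

  # The anticipated change finally arrives: customers become parties.
  db.execute("ALTER TABLE customer ADD COLUMN party_type TEXT")
  db.execute("UPDATE customer SET party_type = 'person'")  # backfill every row

  print(db.execute("SELECT id, name, party_type FROM customer").fetchall())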

This smells like a database HolyWar. Could you please provide more specifics on when and why relational allegedly flops?


This may be a good area to involve the client in. Some clients are good about asking for things upfront, but some explicitly declare ignorance of their needs. I've never met the vague requirement that was easy to satisfy. DoTheSimplestThingThatCouldPossiblyWork means having a good idea what it means to "work". I'd like to hear any theories on estimating the cost of change both now and in the future. Is it useful to provide such an estimate to a client? Is there a way to build a development contract around the idea that requirements gel as the system forms? It seems that anything based too directly on effort will result in cut corners, an AntiPattern so obvious that nobody bothers to mention it. -- IanKjos


Some of the biggest costs of anticipatory design arise when the anticipated design is just wrong. These costs include help desk calls for partially thought-out solutions, and the cost of removing, maintaining, or retrofitting the code. A lot of testing and development time is wasted repairing functions that are never used by the end users. This, to me, is reason enough not to add something before it is needed. -- WayneMack


One must not forget time discounting from the investment world. It basically says that a dollar gained today is worth more than a dollar gained tomorrow, and not just because of inflation. There is some good theory and reasoning behind the concept, so I don't think we can exempt I.T. at this point.
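
A sketch of how discounting might enter the model above; the discount rate r and the delay t are assumptions of mine, not from the discussion:

  # Same illustrative numbers as before, with future costs discounted
  # to present value.
  C0, F, C1, C2, p = 10.0, 3.0, 12.0, 4.0, 0.5
  r = 0.10  # assumed annual discount rate
  t = 2     # assumed years until the anticipated change lands

  def pv(cost):
      """Present value of a cost paid t years from now."""
      return cost / (1 + r) ** t

  designed   = C0 + F + p * pv(C2)  # F is paid today, at full price
  undesigned = C0 + p * pv(C1)

  # Break-even becomes F < p * (C1 - C2) / (1 + r)**t: discounting
  # shrinks the future saving, so the same F is harder to justify.
  print(round(designed, 2), round(undesigned, 2))  # 14.65 14.96

On these numbers the design still squeaks by; raise r or t and the upfront F loses.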


CategoryPlanning

