Nano Increments

A nanoincrement is an increment (deliverable functionality) developed and integrated in less than a day, perhaps on the order of an hour.

You ... you mean there's any other kind??? -- PhlIp

Some people think that increments have increasing economic cost as they get shorter: that 6-16 weeks is the optimal range, and that the overhead of getting into and out of an increment grows as a percentage of the whole as the increment shrinks below 6 weeks.

The ChryslerComprehensiveCompensation and other ExtremeProgramming people seem to believe otherwise, and are working in what I consider valid increments on the order of 1 hour to 1 day. In the first nanoincrement, a small piece of function is put in, tested, integrated and released. In subsequent nanoincrements, the code is refactored to retain the same external behavior but take on a different internal structure.

I am still getting over my surprise that incremental development doesn't completely fail from the overhead of such short increments, and waiting to learn more about these new economics.

-- AlistairCockburn
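
To make that put-in, test, integrate, refactor rhythm concrete, here is a minimal sketch of two nanoincrements in Python. It is only an illustration: the payroll-flavored function and all of its names are invented here, not code from C3, and the "before" and "after" versions are shown side by side in one file where in practice you would edit the same function in place.

  # Nanoincrement 1: the simplest thing that could possibly work, test first.
  def gross_pay(hours, rate):
      """Pay is hours times rate; overtime is ignored for now (YouArentGonnaNeedIt)."""
      return hours * rate

  assert gross_pay(40, 10.0) == 400.0   # written before the one-liner above

  # Nanoincrement 2: refactor -- same external behavior, different internal structure.
  class PayCalculator:
      """The same calculation, moved behind an object so later stories have a home."""
      def __init__(self, rate):
          self.rate = rate

      def gross_pay(self, hours):
          return hours * self.rate

  def gross_pay(hours, rate):           # signature and behavior unchanged
      return PayCalculator(rate).gross_pay(hours)

  assert gross_pay(40, 10.0) == 400.0   # the original test still passes, untouched

Each of those two steps is small enough to test, integrate, and release on its own, which is the point of the increment size.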


What is the CostOfAnIncrement? Maybe ExtremeProgramming eliminates this cost, too.


As well, there is a tension in my understanding of ExtremeProgramming practices. As the increments get smaller, is it easier to get trapped in local minima in the solution structure? That is, small amounts of work that move you toward an immediate goal (after all, YouArentGonnaNeedIt) but that end up requiring massive refactorings because no one wanted to invest in solution infrastructure along the way.

I think that the idea is that the massive refactorings are better than the more massive solution infrastructure investment. That is, refactorings create the infrastructure, but the code tells you how the infrastructure is to be built, so that you don't have to spend time up front building a bad infrastructure and tearing it down again. -- RobMandeville

I'd like to observe ExtremeProgramming in practice to see just how the foresightful-architecture vs. instant-gratification dynamic plays out.

Come visit C3 any time: we love to be observed. I'm serious.

I know that in my recent feeble attempts at ExtremeProgramming, I have two things in my head: what I need to do immediately to get the system running, and what I want the system architecture to look like eventually. I serve the first goal primarily, but I apply small doses of work that move the architecture toward what it should be over time. My conception of the architecture sometimes changes, but I am content to move toward it a little at a time. I attempt to evade the local minima this way. -- MichaelFeathers


Ward gave me the ParableOfThePainter. While adding a touch of white to a small region of the canvas, the painter understands its effect on the whole canvas. -- AlistairCockburn


I guess you can avoid the LocalMinima because RefactoringEqualsReparametrization. -- TomHoulder


It's definitely the case that an ExtremeProgramming project starts from a "metaphor", which amounts to an architectural overview of how the system works. As well, when we go to do some new thing, we do a CrcSession to see how the objects are supposed to fit in. Finally, and this is a hypothesis, I am coming to believe that when you RefactorMercilessly, you don't wind up at a local minimum, you find the real one ... because when you refactor, you are reducing duplicate code in the whole system, not just where you start looking.

That said, I see no reason to believe that you would ever go from one architecture to a more favorable one just by refactoring. And IRL we sometimes see that we've screwed something up and replace it. That's where patterns like ArchitecturalSubstitution come in. -- RonJeffries


Yes, I can see that you will end up at the real minimum. The question is how much you have to undo to get there. If you proceed in 10 small increments and undo 10% at each increment, you end up with one whole unit of work redone over time. And that isn't even a cumulative calculation.

This isn't quite what happens. Each "upgrade" is a generalization, the simplest generalization you can use. This process converges in practice: it rarely takes more than one or two generalizations to give you what you need "forever". And the really good thing is that you wind up where you need to be, not with more than you need.
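
For the record, here is the back-of-the-envelope arithmetic behind the two positions above (the 10% figure and the "one or two generalizations" figure both come from this exchange; nothing here is measured data):

  increments = 10

  # Position 1: every increment forces you to undo 10%.
  rework_if_constant = increments * 0.10      # = 1.0, i.e. one whole unit redone

  # Position 2: generalizing converges, so only the first couple of increments
  # (say two of them) force a comparable amount of undoing.
  rework_if_converging = 2 * 0.10             # = 0.2

  print(rework_if_constant, rework_if_converging)   # 1.0 0.2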

What I am getting at is this: if in XP we are to assume that we are not as smart as we think we are, just how unsmart can we afford to play? How much judgement do we discount in favor of empiricism? I recently chose a simplified version of MultiCaster as an architectural base for some software. If I hadn't been aware of it, I probably would have ended up with something that fit the problem at hand more tightly, yet was less easily adaptable over time. As it stands, I've been consistently surprised at how easily emergent requirements can be accommodated using this architectural base.

We work hard to be as simple as possible, as unsmart as possible. We have never once been bitten by putting in too much simplicity. And frankly, I was very surprised by this: I've always felt that it was important to be smart.

It seems to me that in XP there are experience-based judgement calls being made all the time. If there is a line between YouArentGonnaNeedIt and building infrastructure that you feel you are really going to need based on experience, the ExtremeProgrammingMaster skates slightly toward the more conservative end of that line (i.e., toward simplicity), but not so far that development thrashes on issues that should be obvious. Or am I getting this wrong?

Actually we try very hard to DoTheSimplestThingThatCouldPossiblyWork, and to assume YouArentGonnaNeedIt. We really do skate as close as we can to the simplicity end of the line, subject to "it could possibly work". Development doesn't thrash because we're going forward, so things don't often get moved back and forth. And because we RefactorMercilessly, the process doesn't hurt. The reason, we believe, is that you're slowed much more by over-general code than by small-scale refactoring. See also SimpleIsntEasy.

I hope that XP is documented with plenty of stories, so that this judgement process is laid bare and so that the culture can be shown and emulated. I have a feeling that the culture you all have developed, and the way the practices serve it, is the crux of everything. -- MichaelFeathers


I think part of the economic advantage of NanoIncrements is that they are a form of RiskManagement, and something one of my clients actually insists on. If a step in a project is going to take three weeks, you have a potential problem: how will you know, two weeks in, how close to schedule you are? What do you do three weeks later when you discover that you still have three weeks of project to go?

If you are working in NanoIncrements, it becomes much easier to see when you are making progress. Problems become apparent sooner and can be communicated and worked out (or around) by the group.

This is a somewhat odd way of looking at things, because it implies there is a cost to using NanoIncrements, at least at the level of the increment, and that the advantage is that risk is better managed and avoided (which probably decreases cost at a higher level).

-- BillTrost (pretending to be an economist)


As the increments get smaller, is it easier to get trapped in local minima in the solution structure? That is, small amounts of work that move you toward an immediate goal (after all, YouArentGonnaNeedIt) but that end up requiring massive refactorings because no one wanted to invest in solution infrastructure along the way. -- MichaelFeathers

What happens here is that if you don't do your little refactorings, you'll likely wind up with one that is too big to swallow readily. C3 has had this problem a time or two. But it doesn't seem to happen when you refactor as you go along, only when you get so "busy" that you don't clean up after yourself. That's our experience, anyway. -- RonJeffries

C3 did make a design mistake that we had to back out after we were a long way along: we were recording deductions as negative numbers added to entitlements, and users wanted them to be positive numbers subtracted from entitlements. That change (not really a refactoring, since it changed function) took us a week or a week and a half: we broke a bunch of AcceptanceTests and took that long to get them all back up to green. Even in that case, we did the actual releases by NanoIncrements; we just told everyone not to panic about the tests dropping for a while. -- RonJeffries


Ron, there is something very important you just wrote (some months ago) that my newly culturally tuned mind is finally able to detect: "just told everyone not to panic about the tests dropping for a while" ... and they did that (not panic, that is). I'm learning to notice just how rare and how critical that is. Cheers -- AlistairCockburn 3/28/00 (can I use two digits again, now?)


The non-linear cost-of-change curve is present in all aspects of development. When refactoring, changing twice as much code before testing exposes you to more than twice the cost of finding and fixing any bugs exposed by your test. When integrating, changing twice as much code before integrating exposes you to more than twice the cost of finding and fixing conflicts with other developers' work. When measuring, going twice as long between measurements exposes you to more than twice the time to recover if the project is going in the wrong direction. When planning, going twice as long between planning sessions exposes you to more than twice the time to recover when (not if) the customer changes the stories. By changing tiny pieces of code, integrating several times a day, measuring daily, and planning every few weeks, you never have to start climbing the steep part of the curve. NanoIncrements are how XP keeps the cost-of-change monster at bay.
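
A toy model of that claim, just to show the shape of the argument: suppose the cost of testing or integrating one batch of changes grows faster than linearly with the size of the batch (the exponent below is an assumption, not a measurement; any exponent above 1 gives the same shape), and compare the total cost of delivering the same amount of work in small batches versus large ones.

  def batch_cost(changes, exponent=1.5):
      """Assumed cost of testing/integrating one batch of this many changes."""
      return changes ** exponent

  def total_cost(total_changes, batch_size, exponent=1.5):
      """Total cost of delivering total_changes in batches of batch_size."""
      batches = total_changes // batch_size
      return batches * batch_cost(batch_size, exponent)

  work = 120  # some fixed amount of small changes to deliver
  for size in (1, 10, 40, 120):
      print(f"batch size {size:3d}: total cost {total_cost(work, size):7.1f}")

  # batch size   1: total cost   120.0
  # batch size  10: total cost   379.5
  # batch size  40: total cost   758.9
  # batch size 120: total cost  1314.5

Under any superlinear per-batch cost the total works out to N times b^(e-1) for batch size b, so smaller batches always cost less in this model; the paragraph above is the empirical version of the same point.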

