Adaptive Discipline

On WhenXpFails the following assertion is made: Any adaptive software development discipline, relentlessly applied, is unlikely to fail.


How unlikely? More likely to fail (in the large) under what circumstances?

There are several practices that use this principle: the various modified waterfalls, BarryBoehm's SpiralModel, and, most visibly on Wiki, XP.

XP is the interesting one because of its fine-grained nature.

Talk of attempted solutions corrected on the basis of experimentation brings to mind the class of numerical methods called PredictorCorrector.
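For concreteness, here is a minimal sketch of one such method (a Heun predictor-corrector step, written in Python). It is only an illustration of the numerical analogy, not anything drawn from XP itself:

 def heun_step(f, t, y, h):
     """One predictor-corrector step of size h for dy/dt = f(t, y)."""
     y_pred = y + h * f(t, y)                             # predictor: cheap explicit Euler guess
     y_corr = y + (h / 2) * (f(t, y) + f(t + h, y_pred))  # corrector: fold feedback from the guess back in
     return y_corr

 # Example: integrate dy/dt = -2*y from y(0) = 1 with ten steps of h = 0.1.
 y, t, h = 1.0, 0.0, 0.1
 for _ in range(10):
     y = heun_step(lambda t, y: -2.0 * y, t, y, h)
     t += h
 print(round(y, 4))  # ~0.1374, vs. the exact exp(-2) ~ 0.1353

The shape is the one being claimed for XP: make a cheap guess, measure how wrong it was, and fold that measurement back into the answer before moving on.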

IF XP and its relatives constitute PC methods for software, then the analogy prompts two questions:

On some of the XP pages there seem to be two implied assumptions: that feedback always reduces errors (solves any problem), and that fine-grained feedback is better than large-grained (iteration is always stable).

This isn't a perfect analogy, of course, but in numerical analysis these two assumptions are never made.
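A toy illustration of the stability point (my example, not anything from the pages above): whether repeated correction converges at all depends on how aggressively each correction responds to the measured error.

 # The same feedback loop, x <- x - gain * error, converges for a small
 # gain and diverges (oscillating ever more wildly) for a large one.
 def iterate(gain, x=5.0, steps=20):
     for _ in range(steps):
         error = x - 2.0       # the target is x = 2
         x = x - gain * error  # apply the corrective feedback
     return x

 print(iterate(gain=0.5))  # ~2.000003: converges toward the target
 print(iterate(gain=2.5))  # ~9977.8: each correction overshoots worse than the last

"Iteration is always stable" is, in this picture, an unstated assumption about the gain of the loop.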

So, how would one do a stability analysis of a software method?


WorstThingsFirst isn't fine-grained in its experimentation. You don't do an experiment and then probe close to the same area. You pick the "worst" problem you have and experiment until you have learned about that problem and reduced its risk (or failed to, in which case the project probably gets cancelled).

DTSTTCPW might seem to be based on just this kind of local search, but in fact it isn't. Simplest isn't about searching: it says to start with the simplest solution you can think of, and replace it with something more complex only if you need to. The more complex thing is probably NOT a probe near the original.

Then you pick the "worst" [remaining] problem and work on it. The effect of this is that you are all over the domain, probing. You learn the global terrain, not just the locale.

Fine-grained feedback, if I understand the term, could leave you at a local maximum.

Or diverge and blow up completely, oscillating between several no-good solutions. But humans don't make this mistake (in software) very often. It's not likely that you'll oscillate between binary search and hash tables.

Many adaptive search techniques probe locally for a while to find a local maximum, then jump "randomly" to see whether they find better ground.

To the extent that the numerical analogy holds at all, this might be a better model of WorstThingsFirst.

You mean something like SimulatedAnnealing? That's an interesting thought. Hasn't someone else made that comparison somewhere?
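For reference, here is a bare-bones SimulatedAnnealing sketch in Python, run on a made-up one-dimensional objective rather than anything software-specific. It only pins down the "probe locally, occasionally accept a worse move, cool down over time" structure being compared to WorstThingsFirst:

 import math
 import random

 def objective(x):
     return math.sin(3 * x) + 0.5 * math.sin(x)   # a bumpy landscape with several local maxima

 def anneal(x=0.0, temp=1.0, cooling=0.99, steps=2000):
     best = x
     for _ in range(steps):
         candidate = x + random.uniform(-0.5, 0.5)   # local probe
         delta = objective(candidate) - objective(x)
         # always accept improvements; accept worse moves with probability exp(delta / temp)
         if delta > 0 or random.random() < math.exp(delta / temp):
             x = candidate
         if objective(x) > objective(best):
             best = x
         temp *= cooling                             # cooling: risky jumps become rarer over time
     return best, objective(best)

 random.seed(0)
 print(anneal())  # ends up near a high peak rather than stuck on the nearest bump

Whether the comparison holds for a development process is exactly the open question above; the sketch only shows what the search-theory side of the analogy looks like.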


CategoryExtremeProgramming

