Improving Extreme Programming

ExtremeProgramming provides many forms of feedback to observe project quality. Test results, task tracking, commitment schedule, and open human communication are some of these.

When something goes wrong on an Extreme project, you tend to find out right away. Conversely, if everything is getting done on time, the tests all run correctly, and no one knows of any problems, you are probably in good shape.

When something does go awry (and it will), an ExtremeProgrammingMaster will look at the situation, and with her team, try to make things simpler so as to make them better. Her first step would be to improve the code:

We found that if the production support people forgot to reload certain class variables after pushing to GemStone, bad things happened. We changed the system to do it automatically.
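
The page gives no details of that fix, and the real code was Smalltalk/GemStone, but the shape of it can be sketched in Python: fold the forgettable manual step into the one command people already run. All names here (push_to_gemstone, reload_class_variables, deploy) are invented.

 def push_to_gemstone(code_bundle):
     """Stand-in for the real code push (details are not on this page)."""

 def reload_class_variables(classes_with_state):
     """Stand-in for reinitializing class-level state after a push."""

 def deploy(code_bundle, classes_with_state):
     push_to_gemstone(code_bundle)
     # The step people used to forget is now part of the one command they run.
     reload_class_variables(classes_with_state)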

Then she would improve the UnitTests:

We found that certain classes that implemented one method also needed to implement another (#= and #hash, for example). We wrote some UnitTests to machine-review all the classes.
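
For illustration, a minimal sketch of such a machine-review test, translated to Python: the original pair was Smalltalk's #= and #hash, and the closest Python analogue is __eq__ and __hash__. The package name myapp is invented.

 import inspect
 import unittest

 import myapp  # hypothetical package under review

 class PairedMethodTest(unittest.TestCase):
     def test_classes_defining_eq_also_define_hash(self):
         for name, cls in inspect.getmembers(myapp, inspect.isclass):
             if '__eq__' in cls.__dict__:
                 # Overriding __eq__ without also defining __hash__ leaves
                 # __hash__ set to None (the class becomes unhashable); the
                 # same kind of mismatch the original test caught for #= and #hash.
                 self.assertIsNotNone(
                     cls.__dict__.get('__hash__'),
                     '%s defines __eq__ but not __hash__' % name)

 if __name__ == '__main__':
     unittest.main()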

Then she would improve the AcceptanceTests:

We found that defects showed up at the boundary where our output files go to COBOL. We changed the acceptance tests to compare the actual file produced, not the data written to the file.
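
Something along these lines, sketched in Python: the export entry point and file paths are invented, but the point stands that the assertion runs against the bytes that actually reach the COBOL side.

 from pathlib import Path
 import unittest

 from myapp.export import run_nightly_export  # hypothetical export entry point

 class CobolFeedAcceptanceTest(unittest.TestCase):
     def test_exported_file_matches_golden_copy(self):
         out_path = Path('/tmp/feed.dat')        # invented output location
         run_nightly_export(out_path)
         golden = Path('tests/golden/feed.dat')  # invented reference copy
         # Compare the bytes actually on disk, not the records we intended to
         # write; padding, encoding and line-ending defects only show up here.
         self.assertEqual(out_path.read_bytes(), golden.read_bytes())

 if __name__ == '__main__':
     unittest.main()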

She would do those things first because they are all automated. Once in place, they check for that quality issue forever at no additional cost.

Then, and only then, would she turn to human-based improvements. She would first examine improvements that didn't give people more to do.

We found that more errors were showing up in acceptance tests rather than being caught by unit tests. We were also slowing down. The team was feeling pressure and had slacked off on unit testing "to get more done". The coach reminded the team that unit testing gets more done, not less. People went back to doing it right, and things got better. A reduction in effort with an improvement in speed and quality!

If she found no improvements without additional work, she would consider improvements that might increase the actual workload, but that were based on XP practices already in place.

For certain complex stories, we found that we were slowing down and didn't like the code we got. We increased the frequency of CRC sessions before coding, and the problem went away. That improved communication and quality at the same time, without requiring a new process.

At that point, if there was still no solution, she'd call an ExtremePowwow and some new process element would be invented. She'd make sure people knew when, and most importantly when not, to use it.


You've read about XP. You think it's great. Then you read about the RUP. You think that's great too. Then you read how CarHoare believes the most important thing is to prove correctness. Maybe you think that's great too. The same goes for reviews, DesignByContract, UI prototypes, change management, and so on.

Can't we add some or all of these to XP and make it even better? I don't know, but I believe one thing is extremely important:

If you do, make sure you don't break the feedback loop.

Don't replace RelentlessTesting with correctness proofs; add the proofs if you must. Don't replace ContinuousIntegration with cumbersome change management; add another layer of configuration management (at a different timescale) if you must.

The experience of those doing XP suggests that you usually don't need the extra complications: the benefits don't justify the costs.

-- Martijn Meijering


What might be valid reasons for adding complexity?

Correctness can be so important that it is worth the cost of reviews, DesignByContract, or even correctness proofs. If you're writing software for a nuclear reactor, a jetliner, or a space probe, you should probably do more than just RelentlessTesting.
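
To make "more than just RelentlessTesting" concrete, here is a small DesignByContract-flavoured sketch in Python, with explicit pre- and postconditions layered on top of the tests rather than replacing them. The function and its limits are invented for illustration.

 def set_control_rod_depth(current_cm, requested_cm):
     # Preconditions: callers must already be inside the physical envelope.
     assert 0.0 <= current_cm <= 400.0, 'precondition: current depth out of range'
     assert 0.0 <= requested_cm <= 400.0, 'precondition: requested depth out of range'

     # Rate-limit the move to 10 cm per call (an invented safety rule).
     step = max(-10.0, min(10.0, requested_cm - current_cm))
     new_depth = current_cm + step

     # Postconditions: the move stayed rate-limited and inside the envelope.
     assert abs(new_depth - current_cm) <= 10.0, 'postcondition: step too large'
     assert 0.0 <= new_depth <= 400.0, 'postcondition: new depth out of range'
     return new_depth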

ExtremeProgrammingChallengeFourteen shows some ostensibly simple concurrent code containing a defect. So far, none of the Extreme folks have been able to build a test that exposes the defect.
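
This is not the ChallengeFourteen code itself, just a Python sketch of why this class of defect resists testing: a check-then-act race whose failing interleaving a unit test may simply never schedule.

 import threading

 class Account:
     def __init__(self, balance):
         self.balance = balance

     def withdraw(self, amount):
         # Defect: the check and the update are not one atomic step.
         if self.balance >= amount:
             self.balance -= amount

 def test_never_overdrawn():
     account = Account(100)
     threads = [threading.Thread(target=account.withdraw, args=(100,))
                for _ in range(2)]
     for t in threads:
         t.start()
     for t in threads:
         t.join()
     # This can pass thousands of times and still prove nothing; only an
     # unlucky interleaving of the two withdrawals drives the balance negative.
     assert account.balance >= 0, account.balance

 if __name__ == '__main__':
     test_never_overdrawn()
     print('passed (which proves nothing)')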

[add your reasons here]


CategoryExtremeProgramming

