Extreme Programming Code Reviews

Code reviews are considered important by many large-process gurus. They are intended to ensure conformance to standards and, more importantly, to ensure that the code is clear, efficient, works, and has QWAN. They are also intended to help disseminate knowledge about the code to the rest of the team.

ExtremeProgramming requires that all development be done by two engineers working together. The code is thus reviewed on the fly, to a considerable degree. This ensures that more than one person has intimate knowledge of the code at all times.

ExtremeProgramming requires that all objects have UnitTests. These ensure that the object works, and continues to work as modified.
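For instance, here is a minimal sketch of such a UnitTest, written in Python's unittest framework (the Account class is hypothetical, invented purely for illustration):

  import unittest

  class Account:
      "A hypothetical object under test."
      def __init__(self):
          self.balance = 0
      def deposit(self, amount):
          self.balance += amount

  class AccountTest(unittest.TestCase):
      def test_deposit_increases_balance(self):
          account = Account()
          account.deposit(100)
          self.assertEqual(account.balance, 100)

  if __name__ == '__main__':
      unittest.main()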

Some languages are reflective. In such languages, UnitTests can check directly for conformance to important standards (e.g., objects must implement both #= and #hash, or neither).
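As a sketch of that idea in Python rather than Smalltalk (myproject is a hypothetical module whose classes are being checked; the Python analogue of the #= / #hash rule is that __eq__ and __hash__ be implemented as a pair):

  import inspect
  import unittest
  import myproject   # hypothetical module under test

  class ConformanceTest(unittest.TestCase):
      def test_eq_and_hash_implemented_together(self):
          # Reflect over every class the module defines.
          for name, cls in inspect.getmembers(myproject, inspect.isclass):
              defines_eq = '__eq__' in cls.__dict__
              # Python 3 sets __hash__ to None when __eq__ is defined alone,
              # so test for a real implementation, not mere presence.
              defines_hash = cls.__dict__.get('__hash__') is not None
              self.assertEqual(defines_eq, defines_hash,
                  "%s must implement __eq__ and __hash__ together, or neither" % name)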

ExtremeProgramming practices CollectiveCodeOwnership, which means that objects needing attention will be browsed by many developers. This tends to bring pressure to bear on those producing code that doesn't conform to standards. Visiting developers are encouraged/expected to bring code into conformance when they find deviations. This also ensures that knowledge of code is disseminated beyond the initial pair of programmers that created it.

Therefore, ExtremeProgramming projects do not require explicit reviews. Drop them from your methodology.

Contributors: RonJeffries KentBeck Anonymous


An interesting concept - code reviews are generally intended to:

  1. verify the correct functioning of production code
  2. ensure that agreed-upon standards are followed
  3. promulgate understanding of code throughout a project

I can see that the first point is adequately handled via UnitTests, the second via CollectiveCodeOwnership, and the third through PairProgramming; however, there still seems to be something missing here: how do you ensure that the UnitTests actually test the added functionality? Don't the tests themselves need to be reviewed? -- RussellGold

[A: See CodeUnitTestFirst to ensure that all functionality is tested.]
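To make that concrete, here is a minimal sketch of CodeUnitTestFirst in Python (Account, withdraw, and InsufficientFunds are hypothetical names): the test is written first and fails, and only then is just enough production code written to make it pass, so no functionality exists without a test demanding it.

  import unittest

  # Step 1: write the test first. It fails, because withdraw() does not exist yet.
  class WithdrawTest(unittest.TestCase):
      def test_overdraft_is_refused(self):
          account = Account(balance=50)
          with self.assertRaises(InsufficientFunds):
              account.withdraw(100)

  # Step 2: write just enough production code to make the test pass.
  class InsufficientFunds(Exception):
      pass

  class Account:
      def __init__(self, balance=0):
          self.balance = balance
      def withdraw(self, amount):
          if amount > self.balance:
              raise InsufficientFunds()
          self.balance -= amount

  if __name__ == '__main__':
      unittest.main()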

I'm not sure, do they? The developers are individually and severally responsible for writing good UnitTests. At the same time, the tests don't always get the same level of attention as the production code. In a team that wasn't as cohesive as ours, or that had deadwood that couldn't be weeded out ... one might need reviews. Though I suspect one would do better to get a better team. -- Ron

Quis custodiet ipsos custodes? -- Juvenal

AcceptanceTests ultimately test whether things work. If something slips through into acceptance tests, or (g*d forbid) into production, the ExtremeProgrammingMaster updates his UnitTests to avoid future embarrassment. In XP we rely on the engineer to be doing the right thing, and on pair programming and collective code ownership to pick up the slack. It seems to work ... but does it really? -- RonJeffries

Isn't PairProgramming also used when writing UnitTests and AcceptanceTests? Wouldn't that help eliminate the need to review the tests?


The difference between ordinary code reviews and the extreme sort is that you use a computer with a projection screen and browse the code online with most, if not all, of the team present. As you see problems, you make the changes right then and there. If a problem is big enough, you write your findings on a card and assign a pair to tackle it separately. Perhaps as a tool they are not needed on C3. The VcapsProject has used ExtremeProgrammingCodeReviews most successfully as a tool for getting down the path to being extreme. I myself would say to use them as a tool until you find you don't need them. -- DonWells


Another difference is that code reviews are done either by the machine (some kind of lint) or by human readers. In the latter case, usually not all of the code is reviewed; instead, the reviewer takes random samples. In XP, everything is reviewed by at least one human reader, because of PairProgramming. -- PeterMaier


Shouldn't some mention be made here of how tight the XP feedback loops between coding and testing are? Doesn't that drive a big incompatibility wedge between XP and storing up changes in big chunks, waiting to review them, preparing the review (reading the code), and then conducting the review itself? XP compresses that entire code-review-test cycle into iterations that take minutes rather than days or hours, but at a much smaller granularity of code.


Anyone have any empirical evidence that code reviews do or don't improve XP-generated code?

XP doesn't generate code. [Could someone expand on what is meant or implied by this answer?]


In short, XP doesn't need PostHocReviews, because it uses ContinuousReview.


Even in the best XP projects, forces from outside the team cause big changes in the codebase to happen all at once, so that ContinuousIntegration is interrupted. This is a good time to consider a more traditional code review. ReviewsBeforeBigChanges.

