Take a rule out and run the system. Measure the difference in the expected emergent behavior. Put the rule back in. Measure again.
If there was no change, the rule isn't important.
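A minimal sketch of the procedure in code, assuming a system we can run programmatically. The rule names, the toy model in run_once, and the metric are all hypothetical stand-ins for a real process:

  import random
  import statistics

  RULES = {"pair_programming", "testing", "short_iterations"}

  def run_once(rules):
      # Toy model of the system: each active rule adds some signal,
      # plus measurement noise. A real process would be run here.
      return len(rules) * 1.0 + random.gauss(0, 0.5)

  def measure(rules, trials=100):
      # Average several runs to estimate the emergent behaviour.
      return statistics.mean(run_once(rules) for _ in range(trials))

  def sensitivity(rules):
      baseline = measure(rules)
      for rule in sorted(rules):
          without = measure(rules - {rule})   # take the rule out
          delta = baseline - without          # difference it made
          print(f"{rule:20s} delta={delta:+.2f}")

  sensitivity(RULES)

A rule whose delta is indistinguishable from the noise is, by this measure, not important.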
Applications:
When people object to a particular ExtremeProgrammingPractice, I tell them to get good at it, then try an iteration or two without it. -- KentBeck
The trick with a sensitivity analysis is that for any system you do this to (whether a good old-fashioned statistical model or a real-world software process), you are assuming a linear relationship between the bits you're taking out and the rest: that changing one part a little doesn't perturb the others significantly. And in ExtremeProgramming (for example ;-), isn't it true that the rules depend on one another for their positive effect on the whole? -- BillTozier
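A toy example of that failure mode, assuming a deliberately nonlinear payoff in which two hypothetical rules act as redundant backups for each other:

  def payoff(rules):
      # Either rule alone preserves the effect, so removing one at a
      # time shows no change; only removing both reveals their value.
      return 1.0 if ("testing" in rules or "code_review" in rules) else 0.0

  full = {"testing", "code_review"}
  for rule in sorted(full):
      print(rule, payoff(full) - payoff(full - {rule}))   # 0.0 and 0.0
  print("both:", payoff(full) - payoff(set()))            # 1.0

One-at-a-time removal scores both rules as unimportant, even though together they carry the whole effect.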
Kent doesn't seem to be assuming any such thing. And without making such an assumption, if taking a practice out makes no difference to measured behaviour, you can safely deduce that it did have a linear relationship with the other bits: a zero effect.
A contrario, Kent implies that the experiment was carried out in the process of getting to XP, and that the remaining practices do in fact depend on each other. If you do remove one ExtremeProgrammingPractice and measure no net effect on your productivity, I suppose you should tell Kent ASAP.