Cost Benefit Of Process Elements

Here's a starting thought on applying multiple process elements to address the same risk. Your comments are welcome.

Software development processes are about fear: the very well justified fear that if you don't do something more than just code, things will go wrong. (Some people prefer to say things like "risk reduction": make that substitution if you like.)

Some of the things that can go wrong are: defects, unmaintainability, late delivery, poor performance, and so on. Imagine for a moment that we have a fear whose value to us is one dollar.

Every process element addresses one or more things that could go wrong. Each element, informally, removes some percentage P of some particular concern.

Suppose that we know of two possible process elements, each of which removes 2/3 of the fear and costs 2/3 of a dollar to apply. Either element on its own exactly pays for itself, so it is (barely) worth applying.

As a first approximation (but I think a pretty good one), we can assume that the process elements are independent, i.e. when applied one after another, each still removes its 2/3 of the remaining risk.

So if we apply one element that removes 2/3 of the risk, the remaining risk is 1/3. If we apply the other element that removes 2/3, it will remove 2/3 of 1/3, or 2/9. Total risk removed = 6/9 + 2/9 = 8/9.

What about the cost-effectiveness of these elements? Remember that each element, applied on its own, would exactly pay back its cost: each costs 2/3 of a dollar to apply.

Then if we apply either one, we break even: we reduce risk by 2/3 at a cost of 2/3. But if we apply them both, we reduce risk by 8/9, at a cost of 4/3 or 12/9. We actually lose 12/9 - 8/9 dollars (4/9 dollars!) by applying both elements!

Clearly, for applying both to make sense, the average cost of each must be no more than 4/9 (so that the combined cost matches the 8/9 of risk removed), not 2/3 (6/9).
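Here is a minimal sketch of the arithmetic above, in Python. The numbers (2/3 removal, 2/3 dollar cost) are the ones used in the text, and the stacking model is the "each element removes its fraction of the remaining risk" assumption:

  # Each process element removes a fixed fraction of the *remaining* risk,
  # at a fixed cost per application. Exact fractions keep the arithmetic honest.
  from fractions import Fraction

  def net_value(removal, cost, applications):
      """Risk removed minus total cost after applying the element n times."""
      remaining = Fraction(1)              # one dollar of fear to start with
      for _ in range(applications):
          remaining -= remaining * removal
      risk_removed = Fraction(1) - remaining
      return risk_removed - cost * applications

  removal, cost = Fraction(2, 3), Fraction(2, 3)
  print(net_value(removal, cost, 1))       # 0    -> one element breaks even
  print(net_value(removal, cost, 2))       # -4/9 -> two elements lose money

Running it reproduces the break-even result for one element and the 4/9 loss for two.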

It should be clear at this point that multiple application of techniques against the same risk is likely to be subject to diminishing returns.

So some questions suggest themselves: how do we estimate the fraction of a risk that a given process element removes, what does each element cost, and how independent are the elements in practice?

What other questions do you have? Better yet, what other answers or analyses?


Suppose now the gain factor is 10:1 or 100:1 or 1000:1. That is, the payback for the procedure is 10 or 100 or 1000 times its cost. Run your equation again. The returns, while diminishing, are still great. How many process checks do you run before they stop paying back?
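For what it's worth, here is one way to run that equation. The one assumption beyond the text is how to read "gain factor": I take it to mean that the first application pays back gain times its cost, with each later application still removing 2/3 of whatever risk remains:

  def worthwhile_applications(removal=2/3, gain=10):
      """Count applications whose marginal payback still exceeds their cost."""
      cost = removal / gain            # cost such that application #1 pays back `gain` times
      remaining, count = 1.0, 0
      while remaining * removal > cost:    # marginal payback still exceeds cost
          remaining -= remaining * removal
          count += 1
      return count

  for gain in (10, 100, 1000):
      print(gain, worthwhile_applications(gain=gain))   # roughly 3, 5, and 7 applications

Even at 1000:1, stacking against the same risk stops paying after a handful of applications, which is the diminishing-returns point above.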

One way to think of it is that if 6 people sit around for 2 hours and find one defect, that defect has to pay for 12 work hours. If you include the cost of running regression (or non-regression!) test suites, writing defect reports, and so on to catch and fix that defect later, it is quite likely that the 6 people sitting around for 2 hours achieved payback. With a defect introduction rate of 8-20 per 1,000 lines of code, there is quite a lot of process that can be cost-justified (FullStaffRedundancyWhileProgramming, for example).

So another model is to find out the cost of detecting and fixing a defect at various places in the lifecycle, then work out how much process you can afford to put into place to recover that cost. -- AlistairCockburn
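A hedged sketch of that model, with made-up numbers: only the 8-20 defects per KLOC range comes from the text above; the stage costs and the catch rate are placeholders to be replaced with measurements from your own shop.

  HOURS_TO_FIX = {            # hypothetical cost of fixing one defect, by lifecycle stage
      "review":       1,
      "system_test": 10,
      "production":  40,
  }
  DEFECTS_PER_KLOC = 12       # somewhere in the 8-20 range quoted above

  def affordable_process_hours_per_kloc(caught="review", otherwise="production",
                                        catch_fraction=0.5):
      """Process hours per KLOC that pay for themselves, if the process catches
      catch_fraction of the defects that would otherwise surface at `otherwise`."""
      saved_per_defect = HOURS_TO_FIX[otherwise] - HOURS_TO_FIX[caught]
      return DEFECTS_PER_KLOC * catch_fraction * saved_per_defect

  print(affordable_process_hours_per_kloc())   # 234.0 hours/KLOC under these assumptions

The point is not the particular number but the shape of the calculation: the later a defect would otherwise be caught, the more process up front you can afford.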


Absolutely. And the earlier the better given equal performance?

As an example, evidence exists suggesting that FullStaffRedundancyWhileProgramming may be less costly per line of code than non-redundant methods. Would we say that the point of every process element is to be a less-costly way of attaining some "same" result? I'd guess so. What can we say, then, about how we should choose how many bits of precision we need in our process, or which specific elements to use and which not? Verrry interesting ...

Glad to see a real methodologist chiming in. -- RonJeffries

How would something like standards/compliance (e.g., ISO9000) fit in here? By that I mean, what if your shop is required to jump through the requisite ISO9000 hoops to win a certain contract for a certain project? Is this still related to fear? If so, is it fear at the same level? a level once removed? an extra level? Or is it another matter entirely? -- BradAppleton


I'm not entirely sure how this fits in with the discussion above, but recently it occurred to me that while the previous team in our space at the company had spent a couple hundred thousand dollars on software tools like ClearCase, MS Project, Together, etc., we had replaced those mechanisms with things like CVS and IndexCards. Our cost could be measured in mere tens of dollars [sic], whereas theirs could be measured in hundreds of thousands. Keeping in mind that it usually takes a full-time person to manage MS Project and ClearCase, and keeping in mind the hidden synergistic business value of using IndexCards and the hidden costs of using overly complex, buggy bloatware, the real difference in cost is much greater than that. Since the margins on most software are actually very low (consider that >90% of projects fail), it stands to reason that no board exercising due diligence would permit purchases of RationalUnifiedProcess-stamped bloatware. If I were a shareholder, I'd sue. -- SunirShah

But considering how much software development efforts can (and routinely do) waste in salaries, a few hundred thou for software tools probably doesn't look like much. (And gets justified as an appropriate insurance policy.) No, software companies - as victims - should really sue the vendors because the tools turn out to be SnakeOil - no one has been able to prove a significant effect. A class action suit against M$ would be appropriate. ;)

A few hundred thousand dollars is usually greater than the annual revenue of most products. There's no justification for even having more than a handful of developers. Besides, the sweet spot for development seems to be around four to seven developers. (This is related to my opinion on LargeScaleEqualsFailure.) -- SunirShah

