Developers generally become more productive if they focus on improving quality through techniques such as regular builds, unit tests, and others discussed on Wiki. However, extremely high-quality systems -- pacemakers and aerospace are the canonical examples -- are far more expensive to produce. For most developers, QualityIsFree: time put into writing tests or finding errors early will be richly rewarded, but that doesn't continue forever.
I wonder (with no hard evidence) if perhaps the elbow in the quality-cost curve is at the point of no known bugs: as you approach that point, you get more and more productive. As you go beyond it, making sure that even bugs you'll never test for aren't there (formal methods, etc.), things get much more expensive.
-- MartinPool (I redrafted these paragraphs for clarity, the message is the same.)
High-quality systems are different: they live on the right-hand side of the elbow. Many projects live on the left-hand side. But on the right-hand side, increasing quality increases cost.
A high-quality system is one that is required to have zero critical bugs. Fixing the remaining non-critical bugs costs additional development and testing.
-- EricUlevik [incorporating feedback]
Unfortunately for what would be a powerful argument, I believe the curve is not that simple. More likely, there exist development strategies where the curve has an elbow, and strategies where it does not.
A strategy where I believe there is an elbow is a high-testing, test-up-front strategy like that used in XP. What provides the elbow in cost is the rapid feedback of such a strategy.
A strategy where quality improvement strictly increases cost would be one with Big Bang Testing at the very end. The lowest project cost is attained when you don't test at all: you just code until you think it might work, and then stop. Quality would be abysmal, of course. Thereafter, the more testing you do, the more the project costs.
For the "QualityElbow" notion to be closer to the truth, we need to compare development strategies. Maybe there's a family of curves? --RonJeffries
I can well believe in a family of curves depending on:
I suppose we could draw another curve showing how much customers are prepared to pay for a system at a certain level of quality.
In my limited experience of hardware and embedded systems, the curve seems to get insanely steep near the left-hand side, because if the system doesn't work it's very hard to tell what's going wrong. So, it's obvious to everyone that it has to be kept working pretty well or it will founder.
In this model, pacemakers and phone systems are developed to the right of the elbow because the system couldn't be sold if it were any further left, even though a prototype with "just a few bugs" could perhaps be built for much less money.
If you don't have a minimum quality requirement for safety reasons but just want an economical outcome, then you need to a) choose a curve that is cheap enough and good enough, and b) work really hard to operate near the elbow. In practice, most people are to the left of it, and I think if your process and insight are good enough to get to that level of quality you won't overdo it.
The elbow idea suggests that even when doing a throwaway prototype you should still keep bugs out of the code, because it'll help you get the prototype out faster. Of course, it's fine to have features which are merely unimplemented, and to amputate features rather than debug them. This also covers you in the case where the prototype code finds its way into the final version: you don't have to wish you'd done it properly.
How about a QualityBathtub? curve, similar to the "bathtub" curve used to describe failure rates over time for many physical components? In that curve, one encounters "infant mortality" early on -- usually design flaws that cause the part to fail much sooner than specified. That's the left-hand side. On the right-hand side, one encounters failures due to the part wearing out (reaching the end of its useful life).
For quality, the bathtub curve plots cost (on the y axis) against quality (on the x axis). For low levels of quality, the cost of rework, lost business, and intangibles (such as damage to reputation) dominates the testing costs. You generally don't want to be here -- neither you nor your customers will be happy (only your competitors will be pleased). At the bottom is the ideal "sweet spot" for most products, as postulated above. Testing costs are reasonable; costs due to rework and lost business are far lower. To the right is the appropriate place on the curve for mission-critical products: testing costs are high, but failures of any sort are unacceptable.
This curve shows why many people suggest that QualityIsFree. For a significant range of quality, the MarginalCostOfQuality? is negative: adding more quality DECREASES costs. But eventually the MarginalCostOfQuality? becomes positive, and additional testing should be justified by explicit customer or regulatory requirements. (But of course, the actual cost of quality is never zero, which is why QualityIsNotFree...)
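Here is a tiny numerical sketch of the bathtub idea, with cost functions invented for illustration (not drawn from any real data): failure costs fall as quality rises, appraisal and prevention costs rise, and their sum has a minimum -- the sweet spot -- where the MarginalCostOfQuality? changes sign.

 # Hypothetical cost-of-quality model; the functions are invented for
 # illustration and are not taken from real measurements.
 def failure_cost(quality):
     # Rework, lost business, damaged reputation: large at low quality,
     # shrinking rapidly as quality improves (quality runs from 0 to 1).
     return 1000 * (1 - quality) ** 2

 def appraisal_cost(quality):
     # Testing and prevention effort: grows faster and faster as you chase
     # the last remaining defects.
     return 50 * quality / (1.001 - quality)

 def total_cost(quality):
     return failure_cost(quality) + appraisal_cost(quality)

 quality_levels = [i / 20 for i in range(20)]
 for a, b in zip(quality_levels, quality_levels[1:]):
     # Marginal cost of quality: negative to the left of the sweet spot,
     # positive to the right of it.
     marginal = total_cost(b) - total_cost(a)
     print(f"q={a:4.2f}->{b:4.2f}  total={total_cost(b):7.1f}"
           f"  marginal={'negative' if marginal < 0 else 'positive'}")

With these made-up numbers the total bottoms out around quality 0.7; to the left of that point adding quality reduces total cost (QualityIsFree), while to the right each further step must be justified on other grounds.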
See Poor-Quality Cost by James Harrington (ISBN 0824777433), where the econometrics of the bathtub curve is explored. One major claim is that the curve holds for a given production process; changing the process changes the curve, so the sweet spot can move to the right (toward higher quality) and even downward (toward lower cost). -- DaveVanBuren
See also IsYourCodeThatImportant