I'm thinking that part of the value of testing early and reporting issues early is that the defects are in code the engineer is still familiar with, having modified it very recently. When testing is done at the end of the project, the defects may lie in code that was developed long ago, and the engineer has to re-familiarize himself with that code, which takes additional time to fix and adds the risk of creating new bugs.
I think this can be measured indirectly by recording how long it takes the engineer to fix a defect when issues are reported early compared to when they are reported late.
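For instance, if the defect tracker records when each issue was reported and when it was fixed, the comparison might look something like this minimal sketch (the record layout and phase labels here are invented for illustration, not taken from any particular tracker):

    from datetime import date
    from statistics import mean

    # Hypothetical defect records: (reported_date, fixed_date, phase)
    # "early" = reported during development, "late" = reported during end-of-project testing.
    defects = [
        (date(2003, 2, 3), date(2003, 2, 4), "early"),
        (date(2003, 2, 10), date(2003, 2, 11), "early"),
        (date(2003, 6, 1), date(2003, 6, 9), "late"),
        (date(2003, 6, 2), date(2003, 6, 14), "late"),
    ]

    def average_fix_days(phase):
        """Average calendar days from report to fix for defects reported in the given phase."""
        durations = [(fixed - reported).days
                     for reported, fixed, p in defects if p == phase]
        return mean(durations)

    print("early:", average_fix_days("early"), "days to fix")
    print("late: ", average_fix_days("late"), "days to fix")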
Consider a company that will not believe that defect rates could be lowered as much as some XP projects have reported (hundreds per month down to one or fewer per month). They would need some "proof" to support any change in their process, and the best proof for earlier testing may be in data they have already collected.
Since many companies postpone all acceptance testing until the end of the project (UnconsciousWaterfall), we can try to compare fixing bugs in old code versus fixing bugs in new code: how long ago the buggy code was written should be seen to influence how long the bug takes to fix -- provided we can correlate code-creation dates with the defects fixed in the tracking system.
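A rough sketch of that correlation, assuming the tracking data can be joined with code-creation dates (the joined record layout is hypothetical):

    from datetime import date
    from statistics import correlation  # Pearson's r; Python 3.10+

    # Hypothetical joined records: (code_created, bug_reported, bug_fixed)
    records = [
        (date(2002, 11, 1), date(2003, 6, 2), date(2003, 6, 16)),  # old code, slow fix
        (date(2003, 1, 15), date(2003, 6, 3), date(2003, 6, 10)),
        (date(2003, 5, 20), date(2003, 6, 1), date(2003, 6, 3)),   # recent code, quick fix
        (date(2003, 5, 28), date(2003, 6, 4), date(2003, 6, 5)),
    ]

    # Age of the buggy code when the defect was reported, and how long the fix took.
    code_age_days = [(reported - created).days for created, reported, _ in records]
    fix_days = [(fixed - reported).days for _, reported, fixed in records]

    # A positive correlation would support the claim that older code costs more to fix.
    print("correlation(code age, fix time) =", correlation(code_age_days, fix_days))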
Are there better and easier measurements that could be used to justify moving testing into earlier parts of the project?
Are there papers out there that I could reference?
-- KeithRay
Boehm [1] is your primary source. There is a more recent update [2], which gives current (2001) numbers.
[1] - Boehm, Barry W. and Philip N. Papaccio. 'Understanding and Controlling Software Costs,' IEEE Transactions on Software Engineering, v. 14, no. 10, October 1988, pp. 1462-1477
[2] - Boehm, Barry and Victor R. Basili. 'Software Defect Reduction Top 10 List,' Computer, v. 34, no. 1, January 2001, pp. 135-137