We probably all do testing when we are developing a system. We probably all use a variety of methods. But what is the best way of writing a set of tests and documenting the results when you are trying to satisfy an auditor? The objective here is to prove that testing has taken place on a project and that it was carried out thoroughly. My first thought is that you need a hierarchical set of tests to prove that you have covered the various possibilities offered by the software. If you try to write this type of structure in Word or Excel, you get into a mess very quickly. Does anyone have experience or advice to offer on how best to create a set of formal tests, how to record the results, and what tools are most appropriate for the job? Remember that an auditor generally wants to see a paper document (maybe auditors are becoming more up-to-date somewhere?) and that document needs to be as 'correct' and informative as possible, so writing tests and results in a Wiki will not do.
Would simply printing out the source of a complete set of programmer and user tests satisfy this?
The best strategy is to prepare tests as documents in the first place. These will start as words but then have specific test data included within the document as development progresses. The words may look like a specification, but it is really an argument to the auditor that when the test data passes, the program is ok. Remember this when authoring: write to the auditor.
Now, about record keeping: if it isn't written down, it didn't happen. Use the documents with test data prepared above to actually test the system. Use a program to do this so that you can do it at the end of every iteration. You will run the tests more often, of course, but you save the printed results at the end of every iteration. Print them in color. Put them in fancy binders. Maybe even add a form letter signed by the business unit manager in front of each bundle.
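The "use a program to do this" step can be sketched roughly as below. This is a minimal sketch, not FrameworkForIntegratedTest itself: the IterationTestRun class, the Row record, and the doubleIt function are invented for illustration. The point is only that expected values lifted from the test document are checked mechanically, producing pass/fail lines you can embed back into the printed results for each iteration.

```java
import java.util.List;

public class IterationTestRun {
    // One row of test data lifted from the document: a description,
    // an input, and the expected output the document promises.
    record Row(String description, int input, int expected) {}

    // Stand-in for the system under test (hypothetical example function).
    static int doubleIt(int n) { return n * 2; }

    public static void main(String[] args) {
        List<Row> rows = List.of(
            new Row("doubles a small number", 3, 6),
            new Row("doubles zero", 0, 0)
        );
        for (Row r : rows) {
            int actual = doubleIt(r.input());
            String verdict = (actual == r.expected()) ? "PASS" : "FAIL";
            // Emit a line suitable for embedding back into the document.
            System.out.println(r.description() + ": expected " + r.expected()
                + ", got " + actual + " -- " + verdict);
        }
    }
}
```

Because the runner is a program rather than a person, rerunning it at the end of every iteration costs nothing, which is what makes the iteration-by-iteration paper trail practical.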
Imagine what the auditor is going to think when he sees easy-to-understand documents with test results embedded in place. Then imagine he can leaf through sections corresponding to each iteration. Don't you think he is going to find some test that failed and check to see that it was corrected in the next iteration? Then he is going to flip forward to see if it ever regresses. Of course, you did that yourself when you were running iterations, and you fixed any regressions, so he won't find any. Then he looks to see whether you were really testing things that mattered, so he reads your argument, which addresses everything he can think of. You've had months to think of things, so again you are way ahead of him. How could he not be happy?
This is AcceptanceTesting as prescribed by XP. You will also want to do UnitTests but you won't want to show these to the auditor. Why confuse him?
The FrameworkForIntegratedTest is a Java realization of exactly this that works with documents as varied as Word and Wiki. It is available now in beta form under the GNU GPL license. -- WardCunningham
The precise answer to the question at the top of the page depends on what you are being audited for. CMM, ISO, or some other system will probably have a manual describing what you will be audited on, and perhaps provide some sample templates for the documents. For formal testing, you will probably need a test plan document and a test results document. The test plan should cover both the testing philosophy or approach to be used and a detailed set of short descriptions, each describing an attribute to be tested and the expected result. Actual test procedures or test scripts can be attached as an appendix, but you can also get by without them. AcceptanceTest level (not UnitTest level) is appropriate for this document. The test report should echo the second half of the test plan and include a line for each test: Test Passed or Test Failed. Both documents should include software version numbers and the date(s) of the formal test.
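If you generate the test report mechanically, each line can echo its test plan entry and carry the verdict, version, and date the auditor will look for. The sketch below is one possible layout, not a prescribed template: the TestReportLine class, the format method, and the AT-07 identifier are all assumptions for illustration.

```java
import java.time.LocalDate;

public class TestReportLine {
    // Hypothetical helper: formats one line of the test results document,
    // echoing the test plan entry and adding verdict, version, and date.
    static String format(String testId, String attribute, boolean passed,
                         String version, LocalDate date) {
        return String.format("%s | %s | %s | v%s | %s",
            testId, attribute, passed ? "Test Passed" : "Test Failed",
            version, date);
    }

    public static void main(String[] args) {
        System.out.println(format("AT-07", "Invoice totals round to 2 dp",
            true, "1.4.2", LocalDate.of(2004, 3, 15)));
    }
}
```

Keeping the version number and date on every line, rather than only on the cover sheet, means a single photocopied page still identifies exactly which build was tested and when.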