Functional Test Graph

Below is an example of a graph used to track FunctionalTesting progress. It shows, for each iteration or other time increment, the total number of functional tests in existence, and how many of them are running correctly, running incorrectly, or not yet validated.
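As an illustration of the bookkeeping behind such a chart, here is a minimal sketch in Python. All of the names, the status values, and the data layout are invented for the example; the idea is just to turn per-iteration test statuses into the four series the chart plots.

  from collections import Counter

  # Possible states of a functional test at a given iteration.
  # "passing"/"failing" assume a validated expected result exists;
  # "unvalidated" means no trusted expected result exists yet.
  STATUSES = ("passing", "failing", "unvalidated")

  def chart_series(results_by_iteration):
      """Turn per-iteration test statuses into the four series the
      chart plots: the total test count plus one count per status.

      results_by_iteration: one dict per iteration, mapping test
      name -> status (structure is illustrative, not prescribed).
      """
      series = {"total": []}
      for status in STATUSES:
          series[status] = []
      for results in results_by_iteration:
          counts = Counter(results.values())
          series["total"].append(len(results))
          for status in STATUSES:
              series[status].append(counts.get(status, 0))
      return series

  # Example: three iterations of a growing test suite.
  iterations = [
      {"login": "unvalidated"},
      {"login": "passing", "report": "unvalidated"},
      {"login": "passing", "report": "failing", "export": "unvalidated"},
  ]
  print(chart_series(iterations))
  # {'total': [1, 2, 3], 'passing': [0, 1, 1],
  #  'failing': [0, 0, 1], 'unvalidated': [1, 1, 1]}

Counting "unvalidated" as its own series is what makes the early-project situation described below visible: tests can exist and even run without telling you much until their scores are validated.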

A chart like this can give a lot of information in a small picture. We can see that creation of tests has proceeded fairly linearly throughout the project. However, early on, validated scores (expected results) were not available, so the test results were not very reliable. Then around iteration 6, validation improved and showed that quite a few tests were not running correctly. It took until iteration 9 to catch up on the scores, after which quality improved smoothly up to the present time.

Here should be a link to functest.gif (the original link to http://www.armaties.com/images/functest.gif is broken).

So ... how did you know that testing was going to catch up in time? What method did you use to decide how to resource QA? Or didn't you have a separate QA group? Who wrote the FunctionalTests?

You don't know. That's why you have the graph, so you can read the situation and make decisions.

In the first release, we had two programmers who wrote tests and evolved a framework for tests. In the second, programmers had tasks to write the shell for the tests. In both cases, customers provided the data to test against.

In general, I advise against separate QA groups for AcceptanceTests (as we now call them), because the feedback cycle tends to be too slow. A QA group that checked the software on all the different configurations could be a good thing. If we want to discuss the topic of this paragraph, we need a new page.
