Partially Automated Testing

Some types of software are not amenable to AutomatedTesting, for example code that drives external hardware such as a modem, or whose output must be judged by a human.

For such software, some teams resort to manual testing, in which a person follows a written test script and records the results.

A middle ground is a testing regime moderated by the machine: it displays each test step, waits for a human tester to perform it, and then asks whether the step passed or failed. For example, it might display a dialog box like the following:

        Tests.testModemOffline
        ======================

        Turn off the modem. Click the "Send Message" button in the application window.

        Did the application display a message box indicating that the modem is offline?

        [Yes] [No] [Skip] [End Tests]

Such tests can be specified using the same TestingFrameworks used for fully automated tests. Each test can be as simple as displaying a prompt and then asserting that the tester answered Yes. You can take advantage of those frameworks' test runners and reporting features.
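As a rough illustration, here is a minimal sketch of such a test written with JUnit 4 and a Swing confirmation dialog; the class name, helper method, and instructions are invented for this example and are not from any particular framework.

    import static org.junit.Assert.assertTrue;

    import javax.swing.JOptionPane;
    import org.junit.Test;

    public class ModemTests {

        // Shows the instructions, waits for the human tester, and returns
        // true if the tester clicked Yes.
        private static boolean askTester(String instructions, String question) {
            int answer = JOptionPane.showConfirmDialog(
                    null,
                    instructions + "\n\n" + question,
                    "Partially automated test",
                    JOptionPane.YES_NO_OPTION);
            return answer == JOptionPane.YES_OPTION;
        }

        @Test
        public void testModemOffline() {
            boolean passed = askTester(
                    "Turn off the modem. Click the \"Send Message\" button in the application window.",
                    "Did the application display a message box indicating that the modem is offline?");
            assertTrue("Tester reported that the offline warning did not appear", passed);
        }
    }

Because the test is an ordinary JUnit test, it shows up in the same test runner and reports as any other test; only the assertion is backed by a human answer instead of a programmatic check.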

A partially automated test suite ensures that all steps are followed in a repeatable fashion, although it cannot ensure that the human tester is doing the job properly. The obvious downside to such tests is that they are time-consuming and labor-intensive. It is not cost-effective to run all the partially automated tests after every refactoring, so they should be kept separate from the fully automated tests that you do run after every minor change.

Besides RegressionTests, another use of partially automated tests is as AcceptanceTests. Rather than trying to convince witnesses that a GreenBar proves everything worked, you can demonstrate test steps that visibly exercise the application.

Having external device tests in your regular test suite can significantly slow down progress. You can use MockObjects to UnitTest your code, and keep a separate, much slower suite that tests the actual external interfaces.
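A minimal sketch of that MockObjects approach is shown below; the Modem interface, MockModem class, and MessageSender class are hypothetical names invented for illustration.

    import static org.junit.Assert.assertFalse;

    import org.junit.Test;

    public class MessageSenderTest {

        interface Modem {
            boolean isOnline();
        }

        // Fake modem that never touches real hardware, so the test runs fast.
        static class MockModem implements Modem {
            private final boolean online;
            MockModem(boolean online) { this.online = online; }
            public boolean isOnline() { return online; }
        }

        // Code under test: refuses to send when the modem is offline.
        static class MessageSender {
            private final Modem modem;
            MessageSender(Modem modem) { this.modem = modem; }
            boolean send(String message) {
                return modem.isOnline(); // a real implementation would also transmit
            }
        }

        @Test
        public void reportsFailureWhenModemIsOffline() {
            MessageSender sender = new MessageSender(new MockModem(false));
            assertFalse("send() should report failure when the modem is offline",
                    sender.send("hello"));
        }
    }

Tests like this run in milliseconds in the fully automated suite; the partially automated suite, run less often, checks that the real modem behaves the way the mock assumes.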
