All The Possibilities

In an exhaustive testing procedure, the only sure method of verifying correctness is to test AllThePossibilities. In practice, though, this is not really possible: the scope is never completely understood, and even if it were, it is unlikely that a test procedure covering every known possibility, and every combination of those possibilities, could actually be constructed and executed.

There are questions about just what kind of test is adequate. What is a proper scope of testing that will prove to be sufficient? If we conduct an isolated test for each case requiring confirmation, can we say that we have done sufficient testing? Or must we also test all likely combinations? Who will determine that testing is complete?

This is part of the problem. There are other questions, not stated here, that will require answers.

The practical answer as to whether we can test AllThePossibilities is No.


Just TestEverythingThatCouldPossiblyBreak. That's usually enough.

Ok, now you only have to determine what can "break", but precisely what does that mean? Software breaks, hardware breaks, the user breaks it, the network breaks, the power goes out... etc.? The list would be quite long, and enumerating it nearly impossible as well. If you replace the word possibly with probably, the list becomes more manageable.

Perhaps you could use pair programming here: each partner drafts a list of what might break, the lists are combined and extended, and the result is turned into test requirements. A list would seem preferable to a catch-all phrase; it has the added advantage of serving as a checklist and a future reference.

Presumably you have experience writing software. If so, consider the question: when writing similar code, what usually does break? This should yield a finite set - although it may seem otherwise at times when we reflect on our past experience.

Take "everything that could possibly break" to mean "everything you know might break". That is guaranteed to be finite.

Then it would read: TestEverythingThatYouKnowMightBreak?, which is a limited subset of "AllThePossibilities" and certainly not as comprehensive as the slogan would indicate. I am only trying to be concise and accurate. We should mean what we say, and not a subset of what we say.

Your concern for accuracy reflects well on you. My own objective is to give useful advice to readers of this page. It would be splendid if both goals could be reconciled.


Further limiting AllThePossibilities:


For my last employer, our status reports had to include the test coverage percentage. That seemed kind of ridiculous, since anyone mathematically inclined could come up with an endless list of tests.

I've grouped tests into various levels.

Level 1: Testing all individual features

Level 2: Testing plausible combinations of two features; testing boundary and error conditions

You could devise further levels, but in practice I never completed level 2. In addition, you should create a test whenever someone finds a defect in your code.

This last is very good advice - CaptureBugsWithTests. It gets you well on the way to testing everything that could possibly break. Because in testing and repairing what has already proven to be broken, you may discover some other things that could "possibly" break!
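
As a rough illustration of CaptureBugsWithTests, here is a minimal sketch in Python. The defect and the function name are hypothetical: suppose a user reported that a field-splitting routine silently dropped empty fields. The test pins the corrected behaviour so the bug cannot quietly return.

 # A minimal sketch of CaptureBugsWithTests (hypothetical bug and function).
 import unittest

 def split_fields(line):
     """Split a comma-separated record, keeping empty fields."""
     return line.split(",")

 class BugRegressionTests(unittest.TestCase):
     def test_empty_field_is_preserved(self):
         # Captures the reported defect: "a,,b" must not lose the empty field.
         self.assertEqual(split_fields("a,,b"), ["a", "", "b"])

 if __name__ == "__main__":
     unittest.main()

Each new defect gets a test like this; over time the suite grows toward "everything you know might break".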


Quick note: testing all possible combinations of two features (or all possible combinations of 3, 4, 5, or 6 features) is not as difficult as it seems, nor will it require nearly as many test cases as you would at first suspect. Try a test design tool like Hexawise http://www.hexawise.com, which uses applied-statistics-based Design of Experiments methods to identify the smallest possible number of tests that meet a user's quality and thoroughness objectives. A 10-slide deck currently on the home page explains how 72 billion possible test cases for exhaustive testing can be reduced to only 35 test cases that cover all combinations of two features. Boundary testing is also easy to accomplish with such tools, again without nearly as many test cases as you might at first suspect. The "trick", if that is the right word, is that the Design of Experiments-based algorithms in these tools vary the values in each test case for you, based on mathematical optimization methods that have been proven and refined for 40+ years. - Adam
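
For readers who want to experiment without a commercial tool, here is a minimal sketch of the underlying idea in Python: a greedy all-pairs selection. The parameter names and values are invented for illustration, and the greedy result is only near-optimal, not the provably smallest suite a dedicated tool would produce.

 # A minimal sketch of greedy pairwise ("all-pairs") test selection.
 # Scans the full cartesian product each round, so it only suits small spaces.
 from itertools import combinations, product

 def pairs_of(case):
     """All (position, value, position, value) pairs exercised by one test case."""
     return {(i, case[i], j, case[j]) for i, j in combinations(range(len(case)), 2)}

 def pairwise_tests(parameters):
     """Greedily pick test cases until every pair of parameter values is covered."""
     names = list(parameters)
     values = [parameters[n] for n in names]
     uncovered = {(i, vi, j, vj)
                  for i, j in combinations(range(len(names)), 2)
                  for vi in values[i] for vj in values[j]}
     tests = []
     while uncovered:
         # Choose the candidate test case covering the most still-uncovered pairs.
         best = max(product(*values), key=lambda case: len(uncovered & pairs_of(case)))
         tests.append(dict(zip(names, best)))
         uncovered -= pairs_of(best)
     return tests

 if __name__ == "__main__":
     params = {                      # invented example: 3*3*3*3 = 81 exhaustive cases
         "browser": ["Firefox", "Chrome", "Safari"],
         "os": ["Windows", "Linux", "Mac"],
         "locale": ["en", "de", "ja"],
         "account": ["guest", "member", "admin"],
     }
     suite = pairwise_tests(params)
     print(len(suite), "tests cover all pairs")   # far fewer than the 81 exhaustive cases
     for t in suite:
         print(t)

The greedy heuristic makes no optimality guarantee, but it shows why "all pairs" grows far more slowly than "all combinations of everything".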

