Pre Condition

Every procedure, function, class, et al. has a weakest precondition, often referred to by the shortened form precondition. The precondition specifies what must be true for the function, procedure, et al. to produce valid output. If the precondition fails to hold, UnspecifiedBehavior results, hence the common truism, GarbageInGarbageOut.

Note that the weakest precondition is very often more liberal than the software engineer intended. Hence, one should not be surprised to find documented preconditions that prove to be more restrictive than the weakest precondition.
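For instance (a hypothetical sketch; the function and its documented contract are invented here), a summation routine works perfectly well for an empty array, yet its documentation might demand a non-empty one:

  /* Documented precondition: n >= 1.
     Weakest precondition: xs points to at least n elements --
     n == 0 is perfectly valid and simply returns 0. */
  unsigned int sum(const unsigned int *xs, unsigned int n) {
    unsigned int i, total = 0;
    for (i = 0; i < n; i++)
      total = total + xs[i];
    return total;
  }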


Now consider the function:

  unsigned int multiply(unsigned int a, unsigned int b) {
    unsigned int i;
    unsigned int result;

    result = 0;
    for (i = 0; i < a; i++)
      result = result + b;
    return result;
  }

How do we know that this procedure is correct? One can write UnitTests; but unless you intend to write one test for each combination of a and b (on a 32-bit machine, that is 2^64 unique unit tests!), very often only edge cases are tested. Since no explicit conditionals appear in the code above, UnitTests might not be the best tool for the job.
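For illustration, a typical edge-case suite might look like this sketch (assuming <assert.h> and the multiply() above, with MAX_UNSIGNED_INT standing for UINT_MAX):

  void test_multiply_edges(void) {
    assert(multiply(0, 0) == 0);       /* zero cases */
    assert(multiply(0, 5) == 0);
    assert(multiply(1, 5) == 5);       /* identity */
    assert(multiply(3, 4) == 12);      /* a small, ordinary case */
    assert(multiply(1, MAX_UNSIGNED_INT) == MAX_UNSIGNED_INT); /* boundary */
  }

Five cases out of 2^64; the rest we take on faith.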

(Footnote: it is also said that unit tests can only demonstrate correct behavior for the specific inputs they exercise; they cannot, however, prove the absence of bugs in toto.)

We start with the result: it must be an unsigned integer according to (in this case) C. Since C's unsigned integers follow modular arithmetic, the result of multiply() stands correct if, and only if, the inputs a and b are small enough not to result in wrap-around.

Thus, we know result = (a*b) iff (a*b) <= MAX_UNSIGNED_INT, and our first precondition becomes (a*b) <= MAX_UNSIGNED_INT.
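Note, in passing, that a runtime check for this precondition cannot be written as (a * b <= MAX_UNSIGNED_INT), because the product itself would wrap. A sketch using the division form instead (the b == 0 guard avoids dividing by zero; multiply(a, 0) never wraps anyway):

  int precondition_holds(unsigned int a, unsigned int b) {
    return b == 0 || a <= MAX_UNSIGNED_INT / b;
  }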

Taking the loop next, we see that we're using a simple repeated-addition algorithm to compute multiplication. We can establish its correctness inductively: the loop invariant result = (i * b) holds on entry (i = 0, result = 0) and is preserved by each iteration (result + b = (i + 1) * b); when the loop terminates with i = a, we therefore have result = (a * b).

The loop has thus been proven correct in the absence of modular arithmetic; but it still has preconditions of its own in the real world! Remembering that we are working with modular arithmetic, the addition operation is valid if, and only if, (old result + b) doesn't roll over; thus, result <= MAX_UNSIGNED_INT - b must hold prior to each addition. If this condition is ever violated, the loop will produce incorrect output.

Since result = (i * b) at the start of each loop iteration, then:

 i * b <= MAX_UNSIGNED_INT - b
 (i + 1) * b <= MAX_UNSIGNED_INT
 i <= (MAX_UNSIGNED_INT / b) - 1

for all i in the loop. The loop terminates when i = a, at which point the addition never executes; hence, we can assert the following precondition for the procedure:

 a <= (MAX_UNSIGNED_INT / b)   <--  Note: no 'minus one' here because the addition never executes.

This serves as a useful check against our original expectations: re-arranging the above yields the original expectation that (a*b) <= MAX_UNSIGNED_INT.
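As a concrete check, take 16-bit unsigned ints for readability, so MAX_UNSIGNED_INT = 65535: with b = 100, the precondition demands a <= 65535 / 100 = 655, and indeed 655 * 100 = 65500 fits, while 656 * 100 = 65600 would wrap.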

Rewriting the above code with explicit preconditions and loop invariants as a documentation (and, with assertions enabled, debugging) aid:

  #include <assert.h>
  #include <limits.h>
  #define MAX_UNSIGNED_INT UINT_MAX

  unsigned int multiply(unsigned int a, unsigned int b) {
    unsigned int i;
    unsigned int result;

    /* b == 0 can never wrap; the guard also avoids dividing by zero */
    assert(b == 0 || a <= (MAX_UNSIGNED_INT / b));

    result = 0;
    for (i = 0; i < a; i++) {
      assert(b == 0 || i <= ((MAX_UNSIGNED_INT / b) - 1));
      assert(result <= (MAX_UNSIGNED_INT - b));
      result = result + b;
    }
    /* double (unlike float) represents any product <= MAX_UNSIGNED_INT exactly */
    assert((double)result == ((double)a * (double)b));
    return result;
  }
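A quick usage sketch (hypothetical call sites): with assertions enabled, a violated precondition aborts at the call site instead of returning garbage; with NDEBUG defined, the asserts compile away and serve purely as documentation.

  multiply(3, 4);                   /* fine: returns 12 */
  multiply(MAX_UNSIGNED_INT, 2);    /* precondition assert fires */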

This is, of course, a very simplistic example of deriving and documenting preconditions from existing code. TonyHoare, however, recommended against annotating existing software as a rule; he preferred the technique to be applied only to new code. According to http://research.microsoft.com/~thoare/6Jun_assertions_personal_perspective.htm , he holds that programmers ought to express the invariants first and write the code later. This closely resembles TestDrivenDesign.


Personal observation from Samuel A. Falvo II:

I also want to point out that the use of invariants and pre- and post-conditions is not mutually exclusive with TestDrivenDesign. According to KentBeck, one should TestOnlyWhatCanBreak?. If your software can be proven not to break, there is no need for unit tests for it. Unfortunately, not everyone has the time or patience to explore the logic of their software; hence, many feel that FormalMethods do not scale. I disagree, but I concede that it is cheaper to just write tests for larger functions and procedures. Therefore, I often find myself using FormalMethods for reasonably small procedures/functions and for very abstract notions of my software; for everything else, I tend to use TDD.

(Footnote: Note that TDD exercises specific cases of a procedure's or function's post-conditions.)
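For instance (a sketch; the test is invented here), one TDD case pins a single point of the general postcondition result == a * b, which the assert inside multiply() states for all valid inputs:

  void test_multiply_six_by_seven(void) {
    assert(multiply(6, 7) == 42);
  }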


This is all perilously close to Dijkstra's prescription for code that's provably correct by construction (EWD340). EWD's theory is that testing and debugging are always more expensive than doing it right in the first place. After reading a lot of Dijkstra's work, I ran a software shop on a "first-time-final" philosophy and never found a reason to disagree. It's hard on the programmers at first, but they grow to like it, and then it becomes a competitive thing. A code review where it's your job to prove your code is correct is a lot more fun than the usual everybody-pokes-holes-in-it approach.

Here's an interesting thought: are first-time-final and agile/xp antithetical? --MarcThibault


SeeAlso: HoareTriple, FormalMethods, EiffelLanguage, SatherLanguage

