Proof Optimizer

A proof optimizer can generally be considered an optimizer for automated proofs. Such a tool could be used to optimize TypeChecking, StaticAssert, ConstraintLogicProgramming, DeclarativeMetaprogramming, code optimizers (which must prove that the transformed code has the same observable behavior as the original), and even parsing (which is essentially a proof that a text obeys a set of rules).

To implement AutomatedTheoremProver?s we typically use heuristics to help select some search paths over others. These heuristics can be considered domain-specific proof optimizers of sorts. While they don't adapt well to conditions outside the domain they were designed for, they reduce the amount of wasted search time and help one get to a proof quickly. Heuristics can be wrong, and this is okay: fast search is a matter of probabilities and of shrinking the expected search space, because when a heuristic fails, the prover can always fall back on a more brute-force search.
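
For concreteness, here is a minimal sketch (in Python) of that fallback property. The goal representation, 'applicable_steps', and 'apply_step' are hypothetical stand-ins for whatever a real prover uses; the point is that the heuristic only reorders candidates, so a bad heuristic costs time but never loses a provable goal.

  # Sketch: heuristic-ordered proof search with brute-force fallback.
  def prove(goal, applicable_steps, apply_step, heuristic, depth=8):
      if goal.is_trivial():           # hypothetical triviality test
          return []                   # empty proof: nothing left to show
      if depth == 0:
          return None
      # Try the heuristically most promising steps first...
      for step in sorted(applicable_steps(goal), key=heuristic, reverse=True):
          subproofs = [prove(g, applicable_steps, apply_step, heuristic, depth - 1)
                       for g in apply_step(goal, step)]
          if all(p is not None for p in subproofs):
              return [step] + [s for p in subproofs for s in p]
      # ...but because the loop exhausts every candidate anyway, failure
      # here means genuine failure at this depth, not a heuristic miss.
      return None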


Repetition in Proofs

One thing to note is that most AutomatedTheoremProver?s (type checkers, constraint logic provers, parsers, etc.) end up running the same or similar proofs over and over and over again. The same code is re-parsed and re-typechecked on every pass through the edit-compile-test cycle.

This repetition suggests we would do well to have proof optimizers that adapt to the code or other content being proven, learning and remembering previous proofs so that a small variation in the content yields a blazing-fast "I already know this!" answer that costs only a slight variation in the proof. Essentially, one can cut out most of the failed search paths that recur when the same proofs are repeated over and over. This leads to the:


Adaptive ProofOptimizer

A thought of mine, with the following train of reasoning: since the same proofs recur from one edit-compile-test cycle to the next, the optimizer should remember finished proofs, keyed by the content being proven, and replay them whenever a repeated or slightly varied input comes back.
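
As a minimal sketch of the idea (Python again, wrapping a base prover like the one above; 'normalize' is a hypothetical function that canonicalizes goals so trivially varied code still matches):

  # Sketch: an adaptive layer that remembers finished proofs.
  class AdaptiveProver:
      def __init__(self, base_prover, normalize):
          self.base_prover = base_prover
          self.normalize = normalize
          self.memo = {}                  # normalized goal -> remembered proof

      def prove(self, goal):
          key = self.normalize(goal)
          if key in self.memo:
              return self.memo[key]       # "I already know this!"
          proof = self.base_prover(goal)  # otherwise, full search
          if proof is not None:
              self.memo[key] = proof      # remember it for the next cycle
          return proof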


Biological ProofOptimizer

A programmer can also help direct a failing proof, especially when they wrote the code. With a good IDE they ought to be able to step in and help out whenever a proof is taking a noticeable amount of time. (Of course, the proof can continue in the background, too, unless the programmer stops it.) They can make optimization suggestions, help out with a StaticAssert, etc. Under today's conditions, such a feature would be something of a mistake: the effort would be forgotten on the next edit-compile-test cycle. But with an Adaptive ProofOptimizer that remembers and applies your suggestions in future runs, it isn't such a problem.

As an example, if a StaticAssert were failing to prove that 'get_range::start >= 0', the programmer might suggest proving get_range::start == main::range_start first. That could be proven more easily, and would then serve toward a proof that get_range::start >= 0, because main::range_start >= 0.
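
Sketched against the hypothetical provers above, such a hint can be as simple as a remembered decomposition of a goal into easier subgoals (goals are plain strings here purely for illustration):

  # Sketch: programmer-supplied hints as remembered goal decompositions.
  hints = {
      # "To show this goal, show these subgoals instead."
      "get_range::start >= 0": ["get_range::start == main::range_start",
                                "main::range_start >= 0"],
  }

  def prove_with_hints(goal, prove):
      subproofs = []
      for subgoal in hints.get(goal, [goal]):   # no hint: try the goal directly
          p = prove(subgoal)
          if p is None:
              return None                       # the hint (or direct attempt) failed
          subproofs.append(p)
      return subproofs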

Relatedly, if the proof is failing because the code is bad, then programmers can edit the code, helping the proof along as they do so. This idea could lead to an excellent IDE component (is there a NewIdeWishList?), because one could whip code into working condition while observing proof failures even as the editing is ongoing.


Adaptation Distribution

Suppose your IDE has learned your code, benefited from your suggestions, and can now parse, typecheck, and optimize a huge amount of code in a tiny bit of time, and it's time to distribute that code. On the other machine, the compilation grinds to a halt... Not Good.

When code written with a heavy amount of internal proofs is to be compiled on other people's machines, the effort that went into 'training' the AI to perform these proofs rapidly should be distillable into a smallish file that is distributed along with the rest of the code, and that file should be considered a critical part of the project. It is critical that the AIs on remote machines be able to complete the proofs without advice, and be as fast (or nearly as fast) as on the original build machine.
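
Under the assumptions of the earlier sketches, that distilled file could be as plain as the memo table and hints serialized next to the source (the file layout here is illustrative, not any real tool's format, and assumes goals and proof steps are stored as strings):

  # Sketch: shipping the learned proofs and hints with the project.
  import json

  def save_training(path, memo, hints):
      with open(path, "w") as f:
          json.dump({"memo": memo, "hints": hints}, f)

  def load_training(path):
      try:
          with open(path) as f:
              data = json.load(f)
          return data["memo"], data["hints"]
      except FileNotFoundError:
          return {}, {}               # cold start: prove everything afresh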

Alternatively (or additionally), one could work in something more like a distributed programming environment that naturally shares such training among other active compilations and executions - e.g. via some sort of distributed WikiIde. Certainly, over time the quality of the heuristics and the number of specific corner-case helpers will be polished and improved, by both automated operation and suggestions from smart programmers, and so these 'generic' heuristics should be distributed with the compiler.


Related: a recent (20080815) article about a ProofEngine? competition: http://tech.slashdot.org/tech/08/08/13/2028229.shtml

