Algorithmic Performance

Can theory really say anything absolute about performance, or are such statements tied to current technology?

Theory can certainly make definitive statements about relative performance (e.g., <this> will take twice as long as <that>; or <this> will take time proportional to the number of items, <that> will take time proportional to the square of the number of items, while <yet another> will take time proportional to the factorial of the number of items). By and large, theory cannot make absolute statements about performance -- e.g., that this algorithm will take ten minutes -- unless all relevant machine characteristics are known.
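A minimal sketch of why such relative claims hold independent of the machine: count abstract operations instead of measuring wall-clock time. The function names below are illustrative only, not from any library.

 # Count operations rather than seconds: the counts are the same on
 # any machine; only the cost of one "op" depends on the hardware.

 def linear_scan(items):
     """O(n): touches each item once; returns the operation count."""
     ops = 0
     for _ in items:
         ops += 1
     return ops

 def pairwise_compare(items):
     """O(n^2): visits every pair; returns the operation count."""
     ops = 0
     for i in range(len(items)):
         for j in range(len(items)):
             ops += 1
     return ops

 for n in (10, 100, 1000):
     data = list(range(n))
     # The ratio between the two counts grows with n regardless of
     # the machine -- a relative statement theory can make safely.
     print(n, linear_scan(data), pairwise_compare(data))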


Re: "Second, there is a great deal more that is rigorous than machine performance: there is also AlgorithmicPerformance? (independent of machine)" (From KeepCriticismNarrow).

Can we really ignore the machine? How can we know there isn't an optimization we simply don't know about? A sufficiently smart compiler or interpreter may perform optimizations that we didn't know were possible, or that are not yet practical on current chip technology. We cannot assume it will execute our instructions in the simple one-by-one fashion of a bare-bones interpreter. We can only know our source code, not all possible execution techniques.

In fact, a human studying an algorithm to find a better way to get the same result via an entirely new algorithm is optimizing it by changing the algorithm; that person is acting as a "smart compiler". I doubt there is a clear line between optimizing an existing algorithm and devising a new algorithm that produces identical results. A sufficiently smart compiler/interpreter may take a bubble-sort as input and internally execute it as a quick-sort. The compiler's user couldn't tell the difference if it's a true black box. "God's compiler" may have all kinds of tricks we haven't dreamed of yet.
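A hedged sketch of that black-box point. No real compiler is claimed to do this; the "optimizer" here is hypothetical, standing in for a substitution whose observable output is identical.

 def bubble_sort(items):
     """The O(n^2) algorithm the programmer actually wrote."""
     result = list(items)
     for i in range(len(result)):
         for j in range(len(result) - 1 - i):
             if result[j] > result[j + 1]:
                 result[j], result[j + 1] = result[j + 1], result[j]
     return result

 def optimized_sort(items):
     """What a hypothetical smart compiler might run instead:
     an O(n log n) sort with the same observable output."""
     return sorted(items)

 data = [5, 3, 8, 1, 9, 2]
 # From outside the black box, the caller cannot tell which ran.
 assert bubble_sort(data) == optimized_sort(data)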

Or, it may be relative to factors such as parallelization. The most efficient algorithm for a single-threaded CPU may not be the most efficient for one that uses massive parallelization (and some optimization tricks). Some algorithms may be easier to optimize for parallelization even if they are poor for a single processor.

--top

Parallelization actually decreases algorithmic performance since you pay two additional costs: shipping pieces of the problem off, then later assembling the results - though these might not affect asymptotic costs. The advantage is you can take advantage of greater computational power - the sum of many CPUs or cores rather than just one.
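A minimal sketch of the two extra costs named above: splitting the problem up ("shipping pieces off") and assembling the partial results. The chunk sizes and worker count are illustrative.

 from concurrent.futures import ProcessPoolExecutor

 def partial_sum(chunk):
     return sum(chunk)

 def parallel_sum(items, workers=4):
     # Cost 1: partition the input and ship each piece to a worker.
     size = max(1, len(items) // workers)
     chunks = [items[i:i + size] for i in range(0, len(items), size)]
     with ProcessPoolExecutor(max_workers=workers) as pool:
         partials = list(pool.map(partial_sum, chunks))
     # Cost 2: assemble the partial results back into one answer.
     return sum(partials)

 if __name__ == "__main__":
     data = list(range(1_000_000))
     # Same O(n) total work as sum(data), plus the overhead above;
     # the win comes only from using more than one core.
     print(parallel_sum(data) == sum(data))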

And there is no SufficientlySmartCompiler that replaces algorithms with other ones that work even better. Maybe it will happen some day. But, until that day, we can safely ignore or account for the compiler for purposes of calculating algorithmic performance. Either way, we've already proven some algorithms optimal by formal proof -- for example, no comparison-based sort can beat O(N*log(N)). Further, algorithmic performance is still algorithmic performance: dependent on the language and the guarantees the language offers, and independent of the machine. And for known optimizations in any era, people who compute algorithmic performance can simply account for them -- e.g., the TailCallOptimization is often depended upon.
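A sketch of why TailCallOptimization matters to such calculations. Python itself does not perform TCO (this is a known limitation, not an assumption); the loop version shows what a TCO-capable implementation effectively executes, dropping stack usage from O(n) to O(1).

 def fact_tail(n, acc=1):
     """Tail-recursive: O(n) stack space without TCO."""
     if n <= 1:
         return acc
     return fact_tail(n - 1, acc * n)   # the tail call

 def fact_loop(n, acc=1):
     """What TCO turns the above into: O(1) stack space."""
     while n > 1:
         acc *= n
         n -= 1
     return acc

 assert fact_tail(10) == fact_loop(10) == 3628800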


Re: "Maybe it will happen some day. But, until that day, we can safely ignore..."

It depends on whether we are talking practical or theoretical. In practical terms, I generally agree. In theoretical terms, textual code does not dictate implementation.

Dictate? No. But it does influence the compiled implementation in a very predictable manner. When that changes, we'll have fewer compilers and more AIs we can speak to in English.

I generally agree. But the point stands that we may not know the theoretically smallest optimization (barring a formal proof of a reduction limit). We can only deal with the *current* state of the art when talking about performance/speed. --top


CategoryPerformance

