Outsmarting the compiler refers to any attempt to defeat the optimization heuristics (or other code-improvement techniques) of a compiler, often with the hubris that expert programmers can do better at code generation than modern optimizing compilers. (While this may be true, most expert programmers know better than to try to outsmart the compiler except in specific circumstances; conversely, most programmers who spend lots of time trying to outsmart compilers aren't experts, and often make things worse.)
Unless profiling tells you that your compiler is doing the wrong thing, and that this is preventing the code from meeting its performance requirements, outsmarting the compiler is PrematureOptimization, and as such is an AntiPattern.
OutsmartingTheCompiler is frequently bad because:
- It focuses precious programmer resources on details of the language implementation rather than on solving the customer's problem.
- It usually produces code which is less portable (or not portable at all). Even if no non-portable constructs are used, the result is often code which performs well on one platform and sucks on others. (If you let the optimizer do its job, it can produce good code for any platform.)
- It often produces code which is more brittle. Many common optimization tricks in CeePlusPlus, for example, can contribute to the FragileBaseClassProblem.
- It can interfere with debugging (the same is true of many compiler optimizations in general).
- It can interfere with maintenance. Many JavaLanguage maintenance programmers have yanked out their hair over predecessors who declared methods (or entire classes) to be "final" just to avoid DynamicDispatch and squeeze out a few more cycles.
On the other hand, it is occasionally useful because:
- Sometimes, profiling does reveal bottlenecks that the compiler won't fix.
- Some compilers are flat-out not very good. (In many cases the correct fix is to get a better compiler, but for many languages/environments a SufficientlySmartCompiler doesn't exist.)
- Programmers do have access to information (and can thus enable optimizations) that is difficult or impossible (equivalent to the HaltingProblem) for the compiler to infer or discover. For example, declaring pointers restrict in AnsiCee (C99 and later): the compiler generally won't do this for you, because the semantics of CeeLanguage make the requisite alias analysis impossible in general. Of course, the programmer had better be right! (See the sketch below.)
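
A minimal sketch of the restrict case, using a hypothetical scale_copy routine: qualifying both pointers as restrict promises the compiler that the buffers never overlap, which is exactly the aliasing fact it cannot prove on its own.

    #include <stddef.h>

    /* With restrict, the compiler may assume dst and src never alias, so it can
     * vectorize or reorder the loads and stores freely. If a caller ever passes
     * overlapping buffers, the behavior is undefined: the programmer had better
     * be right. */
    void scale_copy(double *restrict dst, const double *restrict src,
                    size_t n, double factor)
    {
        for (size_t i = 0; i < n; i++) {
            dst[i] = src[i] * factor;
        }
    }
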
Examples of outsmarting the compiler:
- Embedding numerous performance-hint directives (inline, register, restrict in C/C++, for example) in the code. Occasional use of "inline" is generally considered acceptable; see the comment on restrict above.
- Especially true if nonstandard extensions are used (GnuCompilerCollection is full of them)
- "register" is one which is long deprecated, of course, but still is occasionally seen in the wild.
- Use of UndefinedBehavior or UnspecifiedBehavior to get a performance boost (this will likely break if the code is ported, or even if the compiler or OS is upgraded; a type-punning sketch follows this list).
- Use of "final" in Java for performance reasons ("final" is OK if overriding a method would break an abstraction or compromise security)
- Many Java implementations can detect that a method isn't overridden and optimize accordingly.
- Use of macros in C++ on the grounds that you don't trust the compiler to respect an "inline" declaration (a macro-vs-inline sketch follows this list).
- Spuriously taking the address of functions in the hope that this will prevent inlining. (It will make sure a non-inlined copy is present; it won't require that everyone use the non-inlined copy. See the last sketch after this list.)
- Disabling the optimizer altogether on the grounds that optimization is "dangerous", or because someone on the team once got bitten by an optimizer bug.
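
A sketch of the UndefinedBehavior point, using a classic bit-twiddling trick (the function name is illustrative): reinterpreting a float through an unsigned int pointer violates CeeLanguage's strict-aliasing rules and assumes a 32-bit IEEE representation. It often appears to work, but a newer compiler or a higher optimization level is entitled to miscompile it.

    /* "Fast" absolute value that clears the sign bit directly.
     * The cast below is an aliasing violation (UndefinedBehavior); memcpy or a
     * union is the well-defined way to do this. */
    float fast_abs(float f)
    {
        unsigned int *bits = (unsigned int *)&f;
        *bits &= 0x7FFFFFFFu;
        return f;
    }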
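
A sketch of the macro-versus-inline trade-off (names are illustrative): the macro "guarantees" expansion but may evaluate its arguments twice, so MAX_MACRO(x++, y) increments x twice; the inline function has ordinary call semantics, and the compiler is free to inline it anyway.

    /* Macro version: always expanded, but each argument may be evaluated twice. */
    #define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b))

    /* Function version: single evaluation of each argument; "inline" is only a
     * hint, but modern compilers will usually inline it where it pays off. */
    static inline int max_inline(int a, int b)
    {
        return a > b ? a : b;
    }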
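
And a sketch of the address-taking trick (names are illustrative): taking square's address forces the compiler to emit an out-of-line copy, but it does nothing to stop direct calls from being inlined.

    static int square(int x)
    {
        return x * x;
    }

    /* A non-inlined copy must exist so that its address can be stored here... */
    int (*square_ptr)(int) = square;

    int caller(int v)
    {
        /* ...but this direct call may still be inlined by the optimizer. */
        return square(v);
    }
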
CategoryCompilers