Optimizing compiler

Optimization is a collection of heuristic methods for improving resource usage in typical programs.

Worst-case assumptions need to be made when function calls occur or global variables are accessed because little information about them is available.
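For illustration, a minimal C sketch of this effect (the names are hypothetical): because report() is defined in another translation unit, the compiler cannot know whether it modifies the global counter, so it must assume the worst and reload the value after the call.

    /* Hypothetical example: worst-case assumptions around an opaque call. */
    extern int counter;          /* global variable defined elsewhere */
    extern void report(int v);   /* opaque call: might read or write counter */

    int read_twice(void)
    {
        int a = counter;   /* first load of the global */
        report(a);         /* compiler must assume counter may change here */
        int b = counter;   /* so the value must be reloaded, not reused */
        return a + b;
    }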

Peephole optimizations are usually performed late in the compilation process after machine code has been generated.
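As a hedged sketch of the idea, the following toy peephole pass in C examines adjacent instruction pairs and deletes a load that immediately re-reads the address just written by a store to the same register; the instruction set and encoding are invented purely for illustration.

    #include <stdio.h>

    enum { LOAD, STORE, ADD };

    typedef struct { int op, reg, addr; } Instr;

    /* Removes a LOAD that directly follows a STORE of the same register
       to the same address; returns the new instruction count. */
    static int peephole(Instr *code, int n)
    {
        int out = 0;
        for (int i = 0; i < n; i++) {
            if (out > 0 &&
                code[i].op == LOAD &&
                code[out - 1].op == STORE &&
                code[out - 1].reg == code[i].reg &&
                code[out - 1].addr == code[i].addr)
                continue;          /* value is already in the register */
            code[out++] = code[i];
        }
        return out;
    }

    int main(void)
    {
        Instr code[] = {
            {ADD,   1, 0},    /* r1 = r1 + ...              */
            {STORE, 1, 100},  /* mem[100] = r1              */
            {LOAD,  1, 100},  /* r1 = mem[100]  (redundant) */
            {ADD,   1, 0},
        };
        printf("instructions after peephole: %d\n", peephole(code, 4));
        return 0;
    }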

During link-time optimization (LTO), the compiler has visibility across translation units, which allows it to perform more aggressive optimizations such as cross-module inlining and devirtualization.
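A small C sketch of the opportunity this exposes (the file layout and names are illustrative; -flto is the usual way to request link-time optimization with GCC or Clang):

    /* util.c (separate translation unit):
           int square(int x) { return x * x; }
       main.c (this file): */
    int square(int x);   /* only a declaration is visible in this unit */

    int sum_of_squares(const int *v, int n)
    {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += square(v[i]);   /* cross-module inlining candidate under LTO */
        return s;
    }

    /* Typical invocation:  cc -O2 -flto main.c util.c -o prog */

Without LTO, a per-file compilation of main.c must emit a real call to square(); with it, the link-time pass sees both modules and can inline the call.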

Techniques such as macro compression, which conserves space by condensing common instruction sequences, become more effective when the entire executable task image is available for analysis.

[6] A potential problem with clearing a register by XOR-ing it with itself or subtracting it from itself is that the instruction may introduce a data dependency on the previous value of the register, causing a pipeline stall, in which the processor must delay execution of an instruction because it depends on the result of a previous instruction.

[3]: 596 A number of optimization techniques are designed primarily to operate on loops; one such transformation, loop-invariant code motion, is sketched below. Prescient store optimizations allow store operations to occur earlier than would otherwise be permitted in the context of threads and locks.

The purpose of this relaxation is to allow compiler optimization to perform certain kinds of code rearrangements that preserve the semantics of properly synchronized programs.
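To make the loop-oriented techniques mentioned above concrete, here is a minimal sketch of loop-invariant code motion; the function and variable names are hypothetical.

    /* The product scale * offset does not change inside the loop, so it can
       be hoisted out and computed once.  Conceptually, the loop body
       "a[i] += scale * offset;" becomes the version below. */
    void apply_bias(int *a, int n, int scale, int offset)
    {
        int bias = scale * offset;       /* loop-invariant value, hoisted */
        for (int i = 0; i < n; i++)
            a[i] += bias;
    }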

Interprocedural optimization is common in modern commercial compilers from SGI, Intel, Microsoft, and Sun Microsystems.

For a long time, the open source GCC was criticized for a lack of powerful interprocedural analysis and optimizations, though this is now improving.

Due to the extra time and space required by interprocedural analysis, most compilers do not perform it by default.
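As a hedged illustration of what interprocedural analysis buys, consider interprocedural constant propagation in C (all names are illustrative): a compiler that analyzes scale() together with its single call site can deduce that factor is always 10 and fold the multiplication, whereas purely intraprocedural compilation must handle an arbitrary argument.

    static int scale(int x, int factor)
    {
        return x * factor;
    }

    int scaled_total(const int *v, int n)
    {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += scale(v[i], 10);   /* the only call site; factor is always 10 */
        return total;
    }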

An early example is the Portable C Compiler (PCC) of the 1980s, which had an optional pass that would perform post-optimizations on the generated assembly code.

Compiler errors of any kind can be disconcerting to the user, but errors introduced by the optimizer are especially so, since it may not be clear that the optimization logic is at fault.

[19] In the case of internal errors, the problem can be partially ameliorated by a "fail-safe" programming technique in which the optimization logic in the compiler is coded such that a failure is trapped, a warning message is issued, and the rest of the compilation proceeds to successful completion.
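A minimal C sketch of that fail-safe pattern (the instruction type, the pass, and all names are hypothetical): if the optimization pass reports an internal failure, the driver warns and keeps the unoptimized code so that compilation still completes.

    #include <stdio.h>

    typedef struct { int opcode, operand; } Instr;

    /* Toy optimization pass: returns 0 on success, nonzero on internal error. */
    static int optimize_pass(Instr *code, int n)
    {
        (void)code; (void)n;
        return -1;    /* simulate an internal failure in the optimizer */
    }

    /* Returns the code to emit: optimized if the pass succeeded, original otherwise. */
    static const Instr *optimize_failsafe(const Instr *original, Instr *scratch, int n)
    {
        for (int i = 0; i < n; i++)
            scratch[i] = original[i];              /* work on a copy */
        if (optimize_pass(scratch, n) != 0) {
            fprintf(stderr, "warning: internal error in optimizer; "
                            "emitting unoptimized code\n");
            return original;                       /* fall back; compilation proceeds */
        }
        return scratch;
    }

    int main(void)
    {
        Instr original[] = { {1, 0}, {2, 5}, {3, 0} };
        Instr scratch[3];
        const Instr *emit = optimize_failsafe(original, scratch, 3);
        printf("emitting %s code\n", emit == original ? "unoptimized" : "optimized");
        return 0;
    }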

[3]: 740, 779  By the late 1980s, optimizing compilers were sufficiently effective that programming in assembly language declined.

This co-evolved with the development of RISC chips and advanced processor features such as superscalar processors, out-of-order execution, and speculative execution, which were designed to be targeted by optimizing compilers rather than by human-written assembly code.