In general, a computer program may be optimized so that it executes more rapidly, or to make it capable of operating with less memory storage or other resources, or to draw less power.
One popular example is space-time tradeoff, reducing a program’s execution time by increasing its memory consumption.
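As a small C sketch of this idea (the table and function names here are illustrative, not taken from any particular library), a 256-entry lookup table spends a little memory so that counting the set bits of a byte becomes a single array access instead of a loop:

    #include <stdint.h>

    /* Space-time tradeoff sketch: 256 bytes of precomputed bit counts
       make each query a constant-time lookup. */
    static uint8_t popcount_table[256];

    static void init_popcount_table(void) {
        for (int v = 0; v < 256; ++v) {
            int bits = 0;
            for (int b = v; b != 0; b >>= 1)
                bits += b & 1;
            popcount_table[v] = (uint8_t)bits;
        }
    }

    /* After init_popcount_table() has run once, each call is O(1)
       instead of looping over the bits of x. */
    static int popcount8(uint8_t x) {
        return popcount_table[x];
    }

The slower but memory-free alternative is simply to loop over the bits on every call.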
Conversely, in scenarios where memory is limited, engineers might prioritize a slower algorithm to conserve space.
There is rarely a single design that can excel in all situations, requiring engineers to prioritize attributes most relevant to the application at hand.
Furthermore, achieving absolute optimization often demands disproportionate effort relative to the benefits gained.
Consequently, optimization processes usually stop once sufficient improvements are achieved, without striving for perfection.
Fortunately, the largest gains often come early in the optimization process, making it practical to stop once returns begin to diminish.
Typically some consideration is given to efficiency throughout a project – though this varies significantly – but major optimization is often considered a refinement to be done late, if ever.
The degree to which performance changes between prototype and production system, and how amenable it is to optimization, can be a significant source of uncertainty and risk.
At the highest level, the design may be optimized to make best use of the available resources, given goals, constraints, and expected use/load.
Such optimization occurs at the design level and may be difficult to change later, particularly if not all components can be replaced in sync (e.g., old clients).
A good example is the use of a fast path for common cases, improving performance by avoiding unnecessary work.
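A minimal C illustration, using a hypothetical word-counting helper: if empty input is the common case, testing for it first skips the general scanning loop entirely.

    #include <stddef.h>

    /* Hypothetical example of a fast path: if most calls pass an empty
       string, handling that case up front avoids the general loop. */
    size_t count_words(const char *s) {
        if (s == NULL || s[0] == '\0')
            return 0;                      /* fast path for the common case */

        size_t words = 0;
        int in_word = 0;
        for (; *s != '\0'; ++s) {          /* general path */
            if (*s == ' ' || *s == '\t' || *s == '\n') {
                in_word = 0;
            } else if (!in_word) {
                in_word = 1;
                ++words;
            }
        }
        return words;
    }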
Beyond general algorithms and their implementation on an abstract machine, concrete source code level choices can make a significant difference.
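One classic C example of such a choice (the function names are illustrative): calling strlen() in a loop condition re-scans the string on every iteration, while hoisting the call out of the loop does the same work in linear time.

    #include <string.h>

    /* Quadratic: strlen() walks the whole string on every iteration, and
       the compiler typically cannot hoist it here because the loop body
       writes to the string. */
    void uppercase_slow(char *s) {
        for (size_t i = 0; i < strlen(s); ++i)
            if (s[i] >= 'a' && s[i] <= 'z')
                s[i] -= 'a' - 'A';
    }

    /* Linear: compute the length once before the loop. */
    void uppercase_fast(char *s) {
        size_t len = strlen(s);
        for (size_t i = 0; i < len; ++i)
            if (s[i] >= 'a' && s[i] <= 'z')
                s[i] -= 'a' - 'A';
    }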
Between the source and compile level, directives and build flags can be used to tune performance options in the source code and compiler respectively, such as using preprocessor defines to disable unneeded software features, optimizing for specific processor models or hardware capabilities, or predicting branching, for instance.
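A small sketch of the preprocessor side, using a hypothetical ENABLE_LOGGING flag: building with an option such as cc -DENABLE_LOGGING=0 (alongside compiler flags like gcc's -O2 or -march=native) compiles the logging calls away entirely.

    #include <stdio.h>

    /* Hypothetical feature flag, normally set from the build system,
       e.g. cc -DENABLE_LOGGING=0 ... */
    #ifndef ENABLE_LOGGING
    #define ENABLE_LOGGING 1
    #endif

    #if ENABLE_LOGGING
    #define LOG(msg) fprintf(stderr, "%s\n", (msg))
    #else
    #define LOG(msg) ((void)0)   /* expands to nothing at run time */
    #endif

    int main(void) {
        LOG("starting up");      /* disappears when logging is disabled */
        return 0;
    }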
Source-based software distribution systems such as BSD's Ports and Gentoo's Portage can take advantage of this form of optimization.
In practice, however, programmers and compilers don't always take advantage of the more efficient instructions provided by newer CPUs or the quirks of older models.
Just-in-time compilation based on run-time data dates to the earliest regular expression engines, and has become widespread with Java HotSpot and V8 for JavaScript.
For example, consider the task of obtaining the sum of all integers from 1 to N. A straightforward C implementation loops over every integer; assuming no arithmetic overflow, it can be rewritten using a mathematical formula instead. The optimization, sometimes performed automatically by an optimizing compiler, is to select a method (algorithm) that is more computationally efficient, while retaining the same functionality.
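The two versions might look like the following sketch (N is an illustrative constant, assumed small enough that neither computation overflows):

    #include <stdio.h>

    #define N 100   /* illustrative bound */

    int main(void) {
        /* Straightforward version: loop over every integer from 1 to N. */
        int sum = 0;
        for (int i = 1; i <= N; ++i)
            sum += i;
        printf("loop sum:    %d\n", sum);

        /* Rewritten version: the closed-form formula N*(N+1)/2 produces
           the same result without any loop. */
        int fast_sum = N * (N + 1) / 2;
        printf("formula sum: %d\n", fast_sum);

        return 0;
    }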
Optimization will generally focus on improving just one or two aspects of performance: execution time, memory usage, disk space, bandwidth, power consumption or some other resource.
These trade-offs may sometimes be of a non-technical nature – such as when a competitor has published a benchmark result that must be beaten in order to improve commercial success, even though beating it perhaps comes with the burden of making normal usage of the software less efficient.
Optimization may include finding a bottleneck in a system – a component that is the limiting factor on performance.
Processing a file one line at a time, for example, only uses enough memory for one line, but performance is typically poor due to the latency of each disk read.
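A rough C sketch of the two extremes, with process() standing in for whatever per-chunk work the real program does (an assumption for illustration; in practice the C library's own buffering already softens the line-by-line cost):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static size_t total = 0;
    static void process(const char *chunk, size_t len) { total += len; }  /* stand-in work */

    /* One line in memory at a time: minimal memory, many small reads. */
    void process_by_line(FILE *f) {
        char line[256];
        while (fgets(line, sizeof line, f) != NULL)
            process(line, strlen(line));
    }

    /* Whole file at once: a single large read, at the cost of holding
       the entire file in memory. */
    int process_whole_file(FILE *f) {
        if (fseek(f, 0, SEEK_END) != 0)
            return -1;
        long size = ftell(f);
        if (size < 0)
            return -1;
        rewind(f);

        char *buf = malloc((size_t)size);
        if (buf == NULL)
            return -1;
        size_t got = fread(buf, 1, (size_t)size, f);
        process(buf, got);
        free(buf);
        return 0;
    }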
[8]) "In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering"[6]"Premature optimization" is a phrase used to describe a situation where a programmer lets performance considerations affect the design of a piece of code.
Modern compilers and operating systems are so efficient that the intended performance increases often fail to materialize.
It is also true that advances in hardware will more often than not obviate any potential improvements, yet the obscuring code will persist into the future long after its purpose has been negated.
In many functional programming languages, macros are implemented using parse-time substitution of parse trees/abstract syntax trees, which, it is claimed, makes them safer to use.
For example, the task of sorting a huge list of items is usually done with a quicksort routine, which is one of the most efficient generic algorithms.
But if some characteristic of the items is exploitable (for example, they are already arranged in some particular order), a different method can be used, or even a custom-made sort routine.
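For instance, if the list is known to be almost sorted already, a simple insertion sort (sketched below) finishes in nearly linear time on such input, whereas a general-purpose quicksort still pays its full partitioning cost.

    #include <stddef.h>

    /* Insertion sort: roughly linear when the input is already almost in
       order, because each element moves only a short distance. */
    void insertion_sort(int *a, size_t n) {
        for (size_t i = 1; i < n; ++i) {
            int key = a[i];
            size_t j = i;
            while (j > 0 && a[j - 1] > key) {
                a[j] = a[j - 1];   /* shift larger elements right */
                --j;
            }
            a[j] = key;
        }
    }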
Thus code optimizations should be carefully documented (preferably using in-line comments), and their effect on future development evaluated.
A compilation performed with optimization "turned on" usually takes longer, although this is typically only a problem when programs are quite large.