The savings in processing time can be significant, because retrieving a value from memory is often faster than carrying out an "expensive" computation or input/output operation.
Lookup tables are also used extensively to validate input values by matching against a list of valid (or invalid) items in an array and, in some programming languages, may include pointer functions (or offsets to labels) to process the matching input.
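The validation-plus-dispatch pattern described above can be sketched in C as a table of valid inputs paired with function pointers; the command names and handler functions below are purely illustrative, not taken from any particular program:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical command table: each valid input string is paired
   with a pointer to the function that processes it. */
typedef void (*handler_fn)(void);

static void do_start(void) { puts("starting"); }
static void do_stop(void)  { puts("stopping"); }

static const struct {
    const char *name;
    handler_fn  handler;
} commands[] = {
    { "start", do_start },
    { "stop",  do_stop  },
};

/* Return the handler for a valid command, or NULL when the input
   matches no table entry (i.e., the input is invalid). */
handler_fn lookup_command(const char *input)
{
    for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++)
        if (strcmp(commands[i].name, input) == 0)
            return commands[i].handler;
    return NULL;
}
```

A linear scan suffices for a short table; larger tables would typically be sorted for binary search or hashed.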
FPGAs also make extensive use of reconfigurable, hardware-implemented, lookup tables to provide programmable hardware functionality.
[3] In ancient India (AD 499), Aryabhata created one of the first sine tables, which he encoded in a Sanskrit-letter-based number system.
In 493 AD, Victorius of Aquitaine wrote a 98-column multiplication table which gave (in Roman numerals) the product of every number from 2 to 50; the rows were "a list of numbers starting with one thousand, descending by hundreds to one hundred, then descending by tens to ten, then by ones to one, and then the fractions down to 1/144".[4] Modern school children are often taught to memorize "times tables" to avoid calculating the most commonly used products (up to 9 × 9 or 12 × 12).
Early in the history of computers, input/output operations were particularly slow – even in comparison to processor speeds of the time.
It made sense to reduce expensive read operations with a form of manual caching: creating either static lookup tables (embedded in the program) or dynamically prefetched arrays containing only the most commonly occurring data items. Despite the introduction of systemwide caching that now automates this process, application-level lookup tables can still improve performance for data items that rarely, if ever, change.
[5] This has been followed by subsequent spreadsheets, such as Microsoft Excel, which added specialized VLOOKUP and HLOOKUP functions to simplify lookup in a vertical or horizontal table.
[2]: 468 For a trivial hash function lookup, the unsigned raw data value is used directly as an index to a one-dimensional table to extract a result.
For small ranges, this can be among the fastest lookup methods, even exceeding binary search in speed, since it involves zero branches and executes in constant time.
[7]: 282 A simple example of C code, designed to count the 1 bits in an int, might look like this:[7]: 283 Such a bit-by-bit implementation requires 32 operations to evaluate a 32-bit value, which can potentially take several clock cycles due to branching.
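A minimal sketch of the bit-by-bit approach described above (the function name is illustrative):

```c
/* Count the 1 bits in a 32-bit word by examining every bit in turn.
   This always performs 32 loop iterations regardless of the input. */
int count_ones(unsigned int x)
{
    int result = 0;
    for (int i = 0; i < 32; i++) {
        result += x & 1;  /* add the lowest bit (0 or 1) */
        x >>= 1;          /* shift the next bit into position */
    }
    return result;
}
```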
[7]: 284 "Lookup tables (LUTs) are an excellent technique for optimizing the evaluation of functions that are expensive to compute and inexpensive to cache.
For example, a grayscale picture of the planet Saturn could be transformed into a color image to emphasize the differences in its rings.
One common LUT, called the colormap or palette, is used to determine the colors and intensity values with which a particular image will be displayed.
In computed tomography, "windowing" refers to a related concept for determining how to display the intensity of measured radiation.
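A palette lookup of the kind described above can be sketched in C as a 256-entry table mapping each grayscale value to an RGB triple; the blue-to-red ramp chosen here is an arbitrary illustration:

```c
/* Hypothetical 256-entry palette: one RGB triple per grayscale value. */
typedef struct { unsigned char r, g, b; } rgb;

static rgb palette[256];

/* Build a simple blue-to-red ramp as the colormap. */
void init_palette(void)
{
    for (int i = 0; i < 256; i++) {
        palette[i].r = (unsigned char)i;          /* red rises   */
        palette[i].g = 0;
        palette[i].b = (unsigned char)(255 - i);  /* blue falls  */
    }
}

/* Displaying a pixel then costs one table lookup per pixel. */
rgb colorize(unsigned char gray)
{
    return palette[gray];
}
```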
A classic example of reducing run-time computations using lookup tables is to obtain the result of a trigonometry calculation, such as the sine of a value.
An error in a lookup table was responsible for Intel's infamous floating-point divide bug.
The latter case may thus employ a two-dimensional array power[x][y] to replace a function that calculates x^y for a limited range of x and y values.
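Such a two-dimensional power table might be sketched as follows, with the range limits chosen arbitrarily for illustration:

```c
/* Hypothetical precomputed table: power[x][y] holds x raised to the
   power y, for 0 <= x < 10 and 0 <= y < 10. */
#define MAX_BASE 10
#define MAX_EXP  10

static long long power[MAX_BASE][MAX_EXP];

void init_power_table(void)
{
    for (int x = 0; x < MAX_BASE; x++) {
        long long p = 1;
        for (int y = 0; y < MAX_EXP; y++) {
            power[x][y] = p;  /* p == x^y at this point */
            p *= x;
        }
    }
}
```

After initialization, power[x][y] replaces a repeated-multiplication (or pow()) call with a single indexed load.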
While often effective, employing a lookup table may nevertheless result in a severe penalty if the computation that the LUT replaces is relatively simple.
Looking up the nearest precomputed entry (or linearly interpolating between the two nearest entries) will be close to the correct value, because sine is a continuous function with a bounded rate of change.
In digital logic, a lookup table can be implemented with a multiplexer whose select lines are driven by the address signal and whose inputs are the values of the elements contained in the array.