In contrast to extended precision, arbitrary-precision arithmetic refers to implementations of much larger numeric types (with a storage count that usually is not a power of two) using special software (or, rarely, hardware).
There is a long history of extended floating-point formats, reaching back nearly to the middle of the 20th century.
In a few cases the implementation was merely a software-based change in the floating-point data format, but in most cases extended precision was implemented in hardware, either built into the central processor itself or, more often, built into an optional attached processor called a "floating-point unit" (FPU) or "floating-point processor" (FPP), accessible to the CPU as a fast input/output device.
On some of these early machines, floating-point arithmetic operations were performed by software, and double precision was not supported at all.[6] The IEEE 754 floating-point standard recommends that implementations provide extended-precision formats.
The extended format was designed not to store data at higher precision, but rather to allow the computation of temporary double-precision results more reliably and accurately by minimising overflow and round-off errors in intermediate calculations.
To enable intermediate subexpression results to be saved in extended-precision scratch variables and carried across programming-language statements, and to allow otherwise interrupted calculations to resume where they were interrupted, the x87 design provides instructions that transfer values between these internal registers and memory without performing any conversion, thereby enabling access to the extended format for calculations[b] – also reviving the issue of the accuracy of functions of such numbers, but at a higher precision.
As a result, software can be developed which takes advantage of the higher precision provided by this format.
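As a rough illustration of this usage pattern (a minimal sketch, not from the original text, assuming an x86 target where long double maps to the 80-bit extended format; the function name sum_extended is invented for the example):

```c
#include <stdio.h>

/* Keep the running sum in an extended-precision "scratch variable" so
   that round-off accumulates at 64-bit-significand precision, even
   though the inputs and the final result are ordinary doubles. */
double sum_extended(const double *a, int n)
{
    long double acc = 0.0L;   /* extended-precision accumulator */
    for (int i = 0; i < n; i++)
        acc += a[i];          /* each step rounds to the 64-bit significand */
    return (double)acc;       /* one final rounding back to double */
}

int main(void)
{
    double data[] = { 1e16, 1.0, -1e16, 1.0 };
    /* prints 2; summing in plain double loses one of the 1.0 terms
       to round-off and prints 1 */
    printf("sum = %g\n", sum_extended(data, 4));
    return 0;
}
```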
William Kahan, a primary designer of the x87 arithmetic and the initial IEEE 754 standard proposal, notes of the development of the x87 floating point: "An extended format as wide as we dared (80 bits) was included to serve the same support role as the 13-decimal internal format serves in Hewlett-Packard's 10-decimal calculators."[20] An exponent field value of 32767 (all fifteen bits set to 1) is reserved to enable the representation of special states such as infinity and Not a Number (NaN).
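To make the field layout concrete, the following sketch in C inspects the 15-bit exponent field directly; it assumes a little-endian x86 target where long double is stored as the 80-bit image in the first ten bytes of the object:

```c
#include <stdio.h>
#include <string.h>
#include <math.h>

int main(void)
{
    long double v[] = { 1.0L, (long double)INFINITY, (long double)NAN };
    for (int i = 0; i < 3; i++) {
        unsigned char b[sizeof(long double)];
        memcpy(b, &v[i], sizeof b);
        /* bytes 8-9 hold the sign bit and the 15-bit exponent field */
        unsigned exp = ((b[9] & 0x7Fu) << 8) | b[8];
        printf("%-4Lg exponent field = %5u%s\n", v[i], exp,
               exp == 32767 ? "  (reserved: infinity / NaN)" : "");
    }
    return 0;
}
```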
On the x86 architecture, most C compilers support 80-bit extended precision via the long double type, a usage sanctioned by the C99/C11 standards (Annex F, IEC 60559 floating-point arithmetic).
Such compilers also typically include extended-precision mathematical subroutines, such as square root and trigonometric functions, in their standard libraries.
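For example (a brief sketch, assuming a C99 compiler on x86 where long double is the 80-bit format), the standard library exposes these routines through the l-suffixed math functions:

```c
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    /* LDBL_MANT_DIG is 64 and LDBL_DIG is 18 for the 80-bit format */
    printf("significand bits: %d, decimal digits: %d\n",
           LDBL_MANT_DIG, LDBL_DIG);
    /* extended-precision square root and sine from the C99 library */
    printf("sqrtl(2.0L) = %.20Lf\n", sqrtl(2.0L));
    printf("sinl(1.0L)  = %.20Lf\n", sinl(1.0L));
    return 0;
}
```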
As with values in REAL*8, inter-conversion between decimal and binary will involve approximation, except for those few decimal fractions that represent an exact binary value, such as 0.625.
Bounds on conversion between decimal and binary for the 80-bit format can be given as follows: if a decimal string with at most 18 significant digits is correctly rounded to an 80-bit IEEE 754 binary floating-point value (as on input) and then converted back to the same number of significant decimal digits (as for output), the final string will exactly match the original. Conversely, if an 80-bit IEEE 754 binary floating-point value is correctly converted and rounded (to nearest) to a decimal string with at least 21 significant decimal digits and then converted back to binary format, it will exactly match the original.[12]
These approximations are particularly troublesome when specifying the best value for constants in formulae to high precision, as might be calculated via arbitrary-precision arithmetic.
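The output bound above can be demonstrated with a short C sketch (assuming long double is the 80-bit format and the C library's conversions are correctly rounded): printing 21 significant digits and parsing them back recovers the value exactly.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    long double original = 1.0L / 3.0L;  /* not exactly representable */
    char buf[64];
    /* binary -> decimal with 21 significant digits (the stated bound) */
    snprintf(buf, sizeof buf, "%.21Lg", original);
    /* decimal -> binary again */
    long double recovered = strtold(buf, NULL);
    printf("%s -> round trip %s\n", buf,
           recovered == original ? "exact" : "differs");
    return 0;
}
```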
The x86 floating-point units do not provide an instruction that directly performs exponentiation.[26][27][28][c] Instead, they provide a set of instructions that a program can use in sequence to perform exponentiation via the identity x^y = 2^(y·log2(x)). To avoid precision loss, the intermediate results log2(x) and y·log2(x) must be computed with much higher precision, because effectively both the exponent and the significand fields of x must fit into the significand field of the intermediate result.
The exact number of bits of precision needed in the significand of the intermediate result is somewhat data-dependent, but 64 bits is sufficient to avoid precision loss in the vast majority of exponentiation computations involving double-precision numbers.[27]
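A sketch of this recipe in C (using the C99 library functions log2l and exp2l rather than the raw x87 instructions; the helper name pow_via_log2 is invented for the example, and the code assumes x > 0):

```c
#include <stdio.h>
#include <math.h>

/* x^y computed as 2^(y*log2(x)), holding the intermediates in
   long double (64-bit significand on x86) to limit precision loss,
   mirroring the instruction sequence described above. */
static double pow_via_log2(double x, double y)
{
    long double l = log2l((long double)x);  /* log2(x) in extended precision */
    long double p = (long double)y * l;     /* y*log2(x), still extended */
    return (double)exp2l(p);                /* 2^(y*log2(x)), one rounding */
}

int main(void)
{
    printf("pow_via_log2(2.0, 10.0) = %.17g\n", pow_via_log2(2.0, 10.0));
    printf("pow(2.0, 10.0)          = %.17g\n", pow(2.0, 10.0));
    return 0;
}
```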
Another example of calculations that benefit from extended-precision arithmetic is iterative refinement, a scheme used to indirectly remove errors accumulated in the direct solution during the typically very large number of calculations performed in numerical linear algebra.
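As a small illustration in C (the 2x2 system and helper names are invented for the example), the key step is accumulating the residual b - A·x in extended precision while the solves themselves stay in double:

```c
#include <stdio.h>

/* Solve the 2x2 system A*d = r by Cramer's rule (double precision). */
static void solve2(const double A[2][2], const double r[2], double d[2])
{
    double det = A[0][0] * A[1][1] - A[0][1] * A[1][0];
    d[0] = (r[0] * A[1][1] - r[1] * A[0][1]) / det;
    d[1] = (A[0][0] * r[1] - A[1][0] * r[0]) / det;
}

int main(void)
{
    double A[2][2] = { { 1.0, 1.0 }, { 1.0, 1.0001 } }; /* mildly ill-conditioned */
    double b[2] = { 2.0, 2.0001 };                      /* exact solution (1, 1) */
    double x[2];
    solve2(A, b, x);                    /* initial double-precision solve */

    for (int it = 0; it < 3; it++) {
        double r[2], d[2];
        for (int i = 0; i < 2; i++) {
            /* residual r = b - A*x, accumulated in extended precision */
            long double acc = (long double)b[i];
            for (int j = 0; j < 2; j++)
                acc -= (long double)A[i][j] * x[j];
            r[i] = (double)acc;
        }
        solve2(A, r, d);                /* correction solve in double */
        x[0] += d[0];
        x[1] += d[1];
    }
    printf("refined solution: x = (%.17g, %.17g)\n", x[0], x[1]);
    return 0;
}
```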