In computing, a roundoff error,[1] also called rounding error,[2] is the difference between the result produced by a given algorithm using exact arithmetic and the result produced by the same algorithm using finite-precision, rounded arithmetic.
[3] Rounding errors are due to inexactness in the representation of real numbers and the arithmetic operations done with them.
[4] When using approximation equations or algorithms, especially when using finitely many digits to represent real numbers (which in theory have infinitely many digits), one of the goals of numerical analysis is to estimate computation errors.
When a sequence of calculations is applied to an input containing any roundoff error, errors may accumulate, sometimes dominating the calculation.
In ill-conditioned problems, significant error may accumulate.
[8] As an example of representation error in decimal representations, the repeating decimal 1/3 = 0.333… represented with six digits as 0.333333 carries an error of about 3.3 × 10⁻⁷. Increasing the number of digits allowed in a representation reduces the magnitude of possible roundoff errors, but any representation limited to finitely many digits will still cause some degree of roundoff error for uncountably many real numbers.
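The same kind of representation error occurs in binary. A brief Python illustration (the value 0.1 is an illustrative choice; decimal.Decimal exposes the exact value a float stores):

```python
from decimal import Decimal

# 1/10 has no finite binary expansion, so the nearest double differs from 0.1.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```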
[9] Rounding multiple times can cause error to accumulate.
For instance, rounding 9.945309 to two decimal places (9.95) and then rounding that result to one decimal place (10.0) gives a total error of 0.054691. Rounding 9.945309 to one decimal place (9.9) in a single step introduces less error (0.045309).
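This double-rounding effect can be reproduced with Python's decimal module; a minimal sketch, using ROUND_HALF_UP to match the grade-school rounding rule:

```python
from decimal import Decimal, ROUND_HALF_UP

x = Decimal("9.945309")

# Round twice: first to two decimal places, then to one.
twice = x.quantize(Decimal("0.01"), ROUND_HALF_UP).quantize(Decimal("0.1"), ROUND_HALF_UP)
print(twice)  # 10.0 -> total error 0.054691

# Round once, directly to one decimal place.
once = x.quantize(Decimal("0.1"), ROUND_HALF_UP)
print(once)   # 9.9  -> error 0.045309
```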
While the real numbers are infinite and continuous, a floating-point number system is finite and discrete, so most real numbers must be rounded to a nearby representable value.
The IEEE standard stores the sign, exponent, and significand in separate fields of a floating point word, each of which has a fixed width (number of bits).
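These fields can be inspected directly. A small Python sketch, assuming the IEEE 754 binary64 layout (1 sign bit, 11 exponent bits, 52 significand bits); the example value 9.4 anticipates the worked example below:

```python
import struct

# Reinterpret the 64 bits of a double as an unsigned integer.
bits = struct.unpack(">Q", struct.pack(">d", 9.4))[0]

sign = bits >> 63                         # 1 sign bit
exponent = ((bits >> 52) & 0x7FF) - 1023  # 11-bit exponent, bias 1023
significand = bits & ((1 << 52) - 1)      # 52-bit fraction field

print(sign, exponent, hex(significand))   # 0 3 0x2cccccccccccd
```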
Machine epsilon can be used to measure the level of roundoff error in the floating-point number system.
[3] There are two common rounding rules, round-by-chop (truncating the extra digits) and round-to-nearest; the IEEE standard uses round-to-nearest.
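A minimal sketch of the two rules using Python's decimal module (the sample value and three-digit target are illustrative choices):

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_EVEN

x = Decimal("1.2349")

# Round-by-chop: simply discard the digits beyond the kept precision.
print(x.quantize(Decimal("0.001"), ROUND_DOWN))       # 1.234

# Round-to-nearest (ties to even), as IEEE arithmetic does by default.
print(x.quantize(Decimal("0.001"), ROUND_HALF_EVEN))  # 1.235
```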
Suppose round-to-nearest and IEEE double precision are used. Consider the normalized binary representation of 9.4,

$$9.4 = (1.001\overline{0110})_2 \times 2^3,$$

whose fractional part repeats forever. Since the 53rd bit to the right of the binary point is a 1 followed by other nonzero bits, the round-to-nearest rule requires rounding up, that is, adding 1 to the 52nd bit. Thus, the normalized floating-point representation in the IEEE standard of 9.4 is

$$fl(9.4) = (1.0010110011001100110011001100110011001100110011001101)_2 \times 2^3.$$

This representation is derived by discarding the infinite tail $(0.\overline{1100})_2 \times 2^{-52} \times 2^3 = 0.8 \times 2^{-49}$ and then adding $2^{-52} \times 2^3 = 2^{-49}$ in the rounding step, so $fl(9.4) = 9.4 + 0.2 \times 2^{-49}$ and the associated roundoff error is $0.2 \times 2^{-49} \approx 3.55 \times 10^{-16}$.
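This can be verified in Python, where fractions.Fraction converts a float to the exact rational value it stores:

```python
from fractions import Fraction

# The hexadecimal form shows the rounded-up significand ending in ...ccd.
print((9.4).hex())                    # 0x1.2cccccccccccdp+3

# Exact roundoff error: fl(9.4) - 9.4 = 0.2 * 2**-49 = 1 / (5 * 2**49).
err = Fraction(9.4) - Fraction(94, 10)
print(err == Fraction(1, 5 * 2**49))  # True
print(float(err))                     # ~3.55e-16
```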
The machine epsilon $\varepsilon_\text{mach}$ can be used to measure the level of roundoff error when using the two rounding rules above: the relative error satisfies $|fl(x) - x|/|x| \le \varepsilon_\text{mach}$ under round-by-chop and $|fl(x) - x|/|x| \le \tfrac{1}{2}\varepsilon_\text{mach}$ under round-to-nearest.
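A quick check of the round-to-nearest bound for the 9.4 example above (sys.float_info.epsilon is 2⁻⁵² for double precision; note that some texts instead define machine epsilon as half this value):

```python
import sys
from fractions import Fraction

eps = Fraction(sys.float_info.epsilon)  # exactly 2**-52 for binary64

x = Fraction(94, 10)                    # 9.4 as an exact rational
fl_x = Fraction(9.4)                    # the exact value actually stored
rel_err = abs(fl_x - x) / x

print(rel_err <= eps / 2)               # True: the round-to-nearest bound holds
```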
Machine addition consists of lining up the decimal points of the two numbers to be added, adding them, and then storing the result again as a floating-point number.
The shifting of the decimal points in the significands to make the exponents match causes the loss of some of the less significant digits.
[11] Note that the addition of two floating-point numbers can produce roundoff error when their sum is an order of magnitude greater than that of the larger of the two.
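The loss of low-order digits during alignment, and the rounding of the sum itself, can both be observed in Python (illustrative values):

```python
# Aligning exponents discards low-order bits of the smaller addend:
# doubles near 1e16 are spaced 2 apart, so the added 1 is lost entirely.
print(1e16 + 1 - 1e16)   # 0.0

# 0.1 and 0.2 are already inexact, and their rounded sum differs from
# the double nearest to 0.3.
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004
```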
In general, the quotient of two p-digit significands may contain more than p digits, so roundoff error will be involved in the result.
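For example, the quotient 1/3 requires infinitely many digits in any binary (or decimal) significand, so the computed result must be rounded; a Python check using exact rationals:

```python
from fractions import Fraction

q = 1.0 / 3.0                                    # rounded binary quotient
print(Fraction(q) == Fraction(1, 3))             # False
print(float(abs(Fraction(q) - Fraction(1, 3))))  # ~1.85e-17
```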
Subtracting two nearly equal numbers cancels their leading digits and, even when each operand carries only a small error, the result is still significantly unreliable in typical cases. There is not much faith in the accuracy of the value because the most uncertainty in any floating-point number is in the digits on the far right.
This is closely related to the phenomenon of catastrophic cancellation, in which the two numbers are known to be approximations.
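A short Python illustration of cancellation (the operand 1 + 10⁻¹⁵ is an illustrative choice; its low-order rounding error dominates the difference):

```python
x = 1e-15
computed = (1.0 + x) - 1.0    # 1 + x is rounded; subtraction exposes that error
print(computed)               # 1.1102230246251565e-15
print(abs(computed - x) / x)  # ~0.11, i.e. an 11% relative error
```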
Errors can be magnified or accumulated when a sequence of calculations is applied on an initial input with roundoff error due to inexact representation.
An algorithm or numerical process is called stable if small changes in the input produce only small changes in the output, and unstable if small changes in the input produce large changes in the output.
For example, evaluating f(x) = √(1 + x) − 1 for x near 0 is unstable due to the large error introduced in subtracting two similar quantities, whereas the equivalent expression f(x) = x/(√(1 + x) + 1) is stable.
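A sketch comparing the two formulations (the input 10⁻¹² is an illustrative choice):

```python
import math

x = 1e-12
unstable = math.sqrt(1.0 + x) - 1.0      # cancellation near x = 0
stable = x / (math.sqrt(1.0 + x) + 1.0)  # algebraically identical, no cancellation

print(unstable)  # ~5.000444502911705e-13: only about 4 digits are correct
print(stable)    # ~4.99999999999875e-13: essentially full accuracy (true value ~x/2)
```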
[12] Even if a stable algorithm is used, the solution to a problem may still be inaccurate due to the accumulation of roundoff error when the problem itself is ill-conditioned.
[3] A problem is well-conditioned if small relative changes in input result in small relative changes in the solution.
[3] Otherwise, the problem is ill-conditioned. In other words, a problem is ill-conditioned if its condition number is "much larger" than 1.
The condition number is introduced as a measure of the roundoff errors that can result when solving ill-conditioned problems.
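As an illustration (the function and values here are illustrative choices, not from the source text), the relative condition number of a scalar function f at x is |x f′(x)/f(x)|; for f(x) = x − 1 it is |x/(x − 1)|, which blows up as x approaches 1:

```python
# f(x) = x - 1 near x = 1: an ill-conditioned problem.
x = 1.000001
cond = abs(x / (x - 1.0))  # ~1e6
print(cond)

# A relative input perturbation of 1e-10 is amplified by roughly cond.
dx = x * 1e-10
rel_change = abs(((x + dx) - 1.0) - (x - 1.0)) / abs(x - 1.0)
print(rel_change)          # ~1e-4, about cond * 1e-10
```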