Arbitrary-precision arithmetic

Several modern programming languages have built-in support for bignums,[1][2][3][4] and others have libraries available for arbitrary-precision integer and floating-point math.

Rather than storing values as a fixed number of bits related to the size of the processor register, these implementations typically use variable-length arrays of digits.
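
As a minimal sketch (not the layout of any particular library), a value might be kept as a little-endian list of decimal digits, with addition propagating carries between elements; the names to_digits and add_digits are illustrative only, and real implementations typically use machine-word "limbs" rather than single digits:

    # Hypothetical digit-array bignum: 1234 is stored little-endian as [4, 3, 2, 1].
    def to_digits(n):
        digits = []
        while n:
            digits.append(n % 10)
            n //= 10
        return digits or [0]

    def add_digits(a, b):
        # Schoolbook addition: add element-wise and propagate the carry.
        out, carry = [], 0
        for i in range(max(len(a), len(b))):
            s = carry + (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            out.append(s % 10)
            carry = s // 10
        if carry:
            out.append(carry)
        return out

    print(add_digits(to_digits(999), to_digits(27)))  # [6, 2, 0, 1] -> 1026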

Arbitrary-precision arithmetic is also useful for checking the results of fixed-precision calculations, and for determining optimal or near-optimal values for coefficients needed in formulae, for example the √(1/3) that appears in Gaussian integration.[7]

Arbitrary-precision arithmetic is also used to compute fundamental mathematical constants such as π to millions or more digits, to analyze the properties of the digit strings,[8] and, more generally, to investigate the precise behaviour of functions such as the Riemann zeta function, where certain questions are difficult to explore via analytical methods.

Similar to an automobile's odometer display, which may roll over from 99999 to 00000, a fixed-precision integer may exhibit wraparound if numbers grow too large to represent at the fixed level of precision.
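
Python's own integers never wrap, but the behaviour can be simulated by masking results to a fixed width; a sketch for unsigned 32-bit arithmetic:

    # Keep only the low 32 bits, as fixed-width hardware would.
    MASK32 = 0xFFFFFFFF
    print((4294967295 + 1) & MASK32)  # 0 -- the "odometer" rolls over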

Some processors can instead deal with overflow by saturation, which means that if a result would be unrepresentable, it is replaced with the nearest representable value.
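
A saturating operation clamps the result to the ends of the representable range instead; a minimal sketch of signed 32-bit saturating addition (sat_add32 is an illustrative name, not a hardware instruction):

    # Saturating signed 32-bit addition: clamp rather than wrap.
    INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

    def sat_add32(a, b):
        return min(max(a + b, INT32_MIN), INT32_MAX)

    print(sat_add32(2147483647, 1))   # 2147483647 (pinned at the maximum)
    print(sat_add32(-2147483648, -1)) # -2147483648 (pinned at the minimum)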

In many cases, the task or the programmer can guarantee that the integer values in a specific application will not grow large enough to cause an overflow.

Some programming languages such as Lisp, Python, Perl, Haskell, Ruby and Raku use, or have an option to use, arbitrary-precision numbers for all integer arithmetic.
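
In Python, for example, ordinary integer arithmetic is arbitrary-precision with no special syntax:

    # Python integers grow as needed; there is no overflow to handle.
    print(2**100)             # 1267650600228229401496703205376
    print(len(str(2**100)))   # 31 digits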

However, since division almost immediately introduces infinitely repeating sequences of digits (such as 4/7 in decimal, or 1/10 in binary), should this possibility arise then either the representation would be truncated at some satisfactory size or else rational numbers would be used: one arbitrary-precision integer for the numerator and another for the denominator.

But even with the greatest common divisor divided out, arithmetic with rational numbers can become unwieldy very quickly: 1/99 − 1/100 = 1/9900, and if 1/101 is then added, the result is 10001/999900.
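
Python's standard fractions module keeps such numerator/denominator pairs in lowest terms and reproduces the values above:

    from fractions import Fraction

    x = Fraction(1, 99) - Fraction(1, 100)
    print(x)                     # 1/9900
    print(x + Fraction(1, 101))  # 10001/999900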

Numerous algorithms have been developed to efficiently perform arithmetic operations on numbers stored with arbitrary precision.
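
One classic example is Karatsuba multiplication, which replaces four half-size products with three; a minimal sketch on nonnegative Python integers (a production bignum library would work on digit arrays and switch algorithms by operand size):

    # Karatsuba: split each operand at m bits, then use three recursive products.
    def karatsuba(x, y):
        if x < 10 or y < 10:                 # small operands: multiply directly
            return x * y
        m = max(x.bit_length(), y.bit_length()) // 2
        hi_x, lo_x = x >> m, x & ((1 << m) - 1)
        hi_y, lo_y = y >> m, y & ((1 << m) - 1)
        z0 = karatsuba(lo_x, lo_y)
        z2 = karatsuba(hi_x, hi_y)
        z1 = karatsuba(lo_x + hi_x, lo_y + hi_y) - z0 - z2
        return (z2 << (2 * m)) + (z1 << m) + z0

    print(karatsuba(123456789, 987654321) == 123456789 * 987654321)  # True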

Rapidly growing values such as factorials are not a problem in many formulas (such as Taylor series) because they appear along with other terms, so that, given careful attention to the order of evaluation, intermediate calculation values are not troublesome.
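
For instance, the terms x^n/n! of the exponential series can be generated incrementally, so that neither x^n nor n! is ever formed on its own; exp_series below is an illustrative name:

    def exp_series(x, terms=30):
        # The term for n is built from the term for n-1: multiply by x/n.
        total = term = 1.0
        for n in range(1, terms):
            term *= x / n
            total += term
        return total

    print(exp_series(5.0))  # ~148.4131591, matching math.exp(5.0)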

The largest representable value for a fixed-size integer variable may be exceeded even for relatively small arguments: 13! already exceeds the range of a 32-bit unsigned integer, and 21! that of a 64-bit one.

But if exact values for large factorials are desired, then special software is required, as in the sketch below, which implements the classic algorithm of calculating 1, 1×2, 1×2×3, 1×2×3×4, and so on.
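
A Python rendering of that scheme, assuming n! is held as a little-endian array of decimal digits multiplied in place by each successive integer, might read:

    # Classic bignum factorial: n! kept as a little-endian decimal digit array.
    def factorials(limit):
        digits = [1]                          # represents 0! = 1
        for n in range(1, limit + 1):
            carry = 0
            for i in range(len(digits)):
                carry, digits[i] = divmod(digits[i] * n + carry, 10)
            while carry:                      # append any remaining high digits
                carry, d = divmod(carry, 10)
                digits.append(d)
            yield n, ''.join(map(str, reversed(digits)))

    for n, f in factorials(10):
        print(n, f)                           # ends with 10 3628800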

The first few results are 1, 2, 6, 24, 120, 720, and so on. This implementation could make more effective use of the computer's built-in arithmetic.
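
An obvious refinement, sketched here on the assumption of base-10^9 limbs, is to pack several decimal digits into each array element so that every native multiplication and division handles nine digits at once:

    # Same loop, but each array element ("limb") holds nine decimal digits.
    BASE = 10**9

    def factorial_limbs(n):
        limbs = [1]                           # little-endian, base 10**9
        for k in range(2, n + 1):
            carry = 0
            for i in range(len(limbs)):
                carry, limbs[i] = divmod(limbs[i] * k + carry, BASE)
            while carry:
                carry, limb = divmod(carry, BASE)
                limbs.append(limb)
        # Top limb unpadded, the rest zero-padded to nine digits each.
        return str(limbs[-1]) + ''.join('%09d' % l for l in reversed(limbs[:-1]))

    print(factorial_limbs(25))  # 15511210043330985984000000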

This sort of detail is the grist of machine-code programmers, and a suitable assembly-language bignumber routine can run faster than code compiled from a high-level language, which does not provide direct access to such facilities but instead maps high-level statements onto its model of the target machine using an optimizing compiler.

Later, around 1980, the operating systems VAX/VMS and VM/CMS offered bignum facilities: as a collection of string functions in the former, and in the languages EXEC 2 and REXX in the latter.

An earlier machine, the IBM 1620 of 1959, was a decimal-digit machine that used discrete transistors, yet it had hardware (using lookup tables) to perform integer arithmetic on digit strings of any length from two up to whatever memory was available.

The largest memory supplied offered 60,000 digits; however, Fortran compilers for the 1620 settled on fixed sizes such as 10, though the size could be specified on a control card if the default was not satisfactory.