In computing, decimal128 is a decimal floating-point number format that occupies 128 bits in memory.
Formally introduced in IEEE 754-2008,[1] it is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial and tax computations.
Both encoding alternatives, binary integer significand and densely packed decimal significand, provide exactly the same set of representable numbers: 34 digits of significand and 3 × 2¹² = 12288 possible exponent values.
The encoding can represent binary significands up to 10 × 2¹¹⁰ − 1 = 12980742146337069071326240823050239, but values larger than 10³⁴ − 1 are illegal (and the standard requires implementations to treat them as 0, if encountered on input).
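As a minimal Python sketch of these limits (the constant and function names below are ours, for illustration, not from the standard):

```python
MAX_SIGNIFICAND = 10**34 - 1      # 34 decimal nines: largest legal significand
MAX_ENCODABLE = 10 * 2**110 - 1   # largest binary significand the bit field can hold

def canonical_significand(raw: int) -> int:
    """Per IEEE 754-2008, read an out-of-range binary significand as 0."""
    return raw if raw <= MAX_SIGNIFICAND else 0

assert MAX_ENCODABLE == 12980742146337069071326240823050239
assert canonical_significand(MAX_SIGNIFICAND) == MAX_SIGNIFICAND
assert canonical_significand(MAX_SIGNIFICAND + 1) == 0
```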
As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7 (0000₂ to 0111₂), or higher (1000₂ or 1001₂).
The leading digit is between 0 and 9 (3 or 4 binary bits), and the rest of the significand uses the densely packed decimal (DPD) encoding.
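As an illustration, the following Python sketch decodes a single 10-bit DPD declet into three decimal digits, following the published DPD decoding table; the function name is ours, and the sign, exponent, and leading-digit fields are outside its scope:

```python
def dpd_decode_declet(declet: int) -> int:
    """Decode one 10-bit densely packed decimal declet into an int 0-999."""
    b = [(declet >> i) & 1 for i in range(10)]  # b[0] is the least significant bit

    def small(hi: int, mid: int, lo: int) -> int:
        """Read a 3-bit small digit (0-7) from the given bit positions."""
        return 4 * b[hi] + 2 * b[mid] + b[lo]

    if b[3] == 0:                                # all three digits are 0-7
        return 100 * small(9, 8, 7) + 10 * small(6, 5, 4) + small(2, 1, 0)
    if (b[2], b[1]) == (0, 0):                   # only the units digit is 8-9
        return 100 * small(9, 8, 7) + 10 * small(6, 5, 4) + 8 + b[0]
    if (b[2], b[1]) == (0, 1):                   # only the tens digit is 8-9
        return 100 * small(9, 8, 7) + 10 * (8 + b[4]) + small(6, 5, 0)
    if (b[2], b[1]) == (1, 0):                   # only the hundreds digit is 8-9
        return 100 * (8 + b[7]) + 10 * small(6, 5, 4) + small(9, 8, 0)
    # (b[2], b[1]) == (1, 1): at least two digits are 8-9; b[6], b[5] select which
    if (b[6], b[5]) == (0, 0):
        return 100 * (8 + b[7]) + 10 * (8 + b[4]) + small(9, 8, 0)
    if (b[6], b[5]) == (0, 1):
        return 100 * (8 + b[7]) + 10 * small(9, 8, 4) + 8 + b[0]
    if (b[6], b[5]) == (1, 0):
        return 100 * small(9, 8, 7) + 10 * (8 + b[4]) + 8 + b[0]
    return 100 * (8 + b[7]) + 10 * (8 + b[4]) + 8 + b[0]   # all three digits 8-9

assert dpd_decode_declet(0b0010100011) == 123   # three small digits
assert dpd_decode_declet(0b0011111111) == 999   # all digits 8 or 9
```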
In the above cases, with the true significand as the sequence of decimal digits decoded, the value represented is

(−1)^signbit × 10^(exponentbits − 6176) × truesignificand

where 6176 is the exponent bias of decimal128.
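For illustration, a hypothetical Python helper (names ours) that assembles this value from already-decoded fields, using Python's decimal module for exact base-10 scaling:

```python
from decimal import Decimal, getcontext

getcontext().prec = 34  # decimal128 carries 34 significant digits

def decimal128_value(sign_bit: int, exponent_bits: int, significand: int) -> Decimal:
    """Combine decoded decimal128 fields per the formula above.

    exponent_bits is the raw biased exponent field; 6176 is the decimal128 bias.
    Assumes the fields have already been extracted and decoded.
    """
    return (-1) ** sign_bit * Decimal(significand).scaleb(exponent_bits - 6176)

# Sign 0, biased exponent 6176 (true exponent 0), significand 1 represents 1.
assert decimal128_value(0, 6176, 1) == Decimal(1)
```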