Decimal64 floating-point format

Decimal64 is a decimal floating-point format, formally introduced in the 2008 revision[1] of the IEEE 754 standard, also known as ISO/IEC/IEEE 60559:2011.

The binary format of the same size (binary64) supports a range from the denormal minimum ±5×10^−324, through the normal minimum with full 53-bit precision ±2.2250738585072014×10^−308, up to the maximum ±1.7976931348623157×10^+308.
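For comparison, these binary64 limits can be read straight from the C standard header <float.h>; the short sketch below simply prints them (DBL_TRUE_MIN assumes a C11 compiler).

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        /* binary64 (double) limits quoted above */
        printf("smallest subnormal: %.17g\n", DBL_TRUE_MIN);  /* about 5e-324 (C11) */
        printf("smallest normal:    %.17g\n", DBL_MIN);       /* 2.2250738585072014e-308 */
        printf("largest finite:     %.17g\n", DBL_MAX);       /* 1.7976931348623157e+308 */
        return 0;
    }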

Because Infinities and NaNs are marked entirely by the leading bits of the encoding, it is possible to initialize an array to Infinities or NaNs by filling it with a single byte value.
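As an illustrative sketch (the specific fill bytes are not given in the text above and are an assumption based on the IEEE 754-2008 decimal64 layout, in which the five combination-field bits after the sign are 11111 for NaN and 11110 for infinity), the C fragment below fills an 8-byte buffer with 0x7C and checks those bits; a fill byte of 0x78 would produce an infinity pattern instead.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        unsigned char buf[8];
        memset(buf, 0x7C, sizeof buf);        /* 0x7C = 0111 1100; 0x78 would give infinity */

        /* Assemble the 64-bit pattern, most significant byte first. */
        uint64_t bits = 0;
        for (int i = 0; i < 8; i++)
            bits = (bits << 8) | buf[i];

        /* Five combination-field bits after the sign: 11111 = NaN, 11110 = infinity. */
        unsigned g = (unsigned)((bits >> 58) & 0x1F);
        if (g == 0x1F)
            printf("0x%016llX is a NaN pattern\n", (unsigned long long)bits);
        else if (g == 0x1E)
            printf("0x%016llX is an infinity pattern\n", (unsigned long long)bits);
        else
            printf("0x%016llX is a finite pattern\n", (unsigned long long)bits);
        return 0;
    }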

This format uses a binary significand from 0 to 10^16 − 1 = 9999999999999999 = 2386F26FC0FFFF₁₆ = 100011100001101111001001101111110000001111111111111111₂. The encoding, completely stored in 64 bits, can represent binary significands up to 10 × 2^50 − 1 = 11258999068426239 = 27FFFFFFFFFFFF₁₆, but values larger than 10^16 − 1 are illegal (and the standard requires implementations to treat them as 0, if encountered on input).
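Both limits can be checked with ordinary integer arithmetic; the sketch below, plain C with no decimal library, prints them and applies the treat-as-zero rule to a hypothetical decoded field.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        const uint64_t max_significand = 9999999999999999ULL;   /* 10^16 - 1 */
        const uint64_t max_encodable   = (10ULL << 50) - 1;     /* 10 * 2^50 - 1 */

        printf("largest legal significand: %llu = %llX (hex)\n",
               (unsigned long long)max_significand, (unsigned long long)max_significand);
        printf("largest encodable field:   %llu = %llX (hex)\n",
               (unsigned long long)max_encodable, (unsigned long long)max_encodable);

        /* A decoded significand field above 10^16 - 1 must be read as 0. */
        uint64_t field = 10000000000000000ULL;                  /* hypothetical decoded input */
        uint64_t significand = (field > max_significand) ? 0 : field;
        printf("decoded significand:       %llu\n", (unsigned long long)significand);
        return 0;
    }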

As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7 (0000₂ to 0111₂), or higher (1000₂ or 1001₂).

The leading bits of the significand field do not encode the most significant decimal digit; they are simply part of a larger pure-binary number.
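A minimal decoding sketch for this binary-significand (BID) variant, assuming the layout described above: when the two bits after the sign are not 11, a 10-bit exponent field is followed by a 53-bit significand; when they are 11, the exponent field shifts down by two bits and the 51-bit significand field gains an implicit leading 100₂. Special values and the 10^16 − 1 limit are left out to keep it short.

    #include <stdint.h>
    #include <stdio.h>

    static void decode_bid64(uint64_t bits) {
        int sign = (int)(bits >> 63);
        int exponent_field;
        uint64_t significand;

        if (((bits >> 61) & 0x3) != 0x3) {
            /* Two bits after the sign are not 11: exponent in bits 62..53,
               significand is the low 53 bits (leading 4 bits 0..7). */
            exponent_field = (int)((bits >> 53) & 0x3FF);
            significand    = bits & ((1ULL << 53) - 1);
        } else {
            /* Two bits after the sign are 11: exponent in bits 60..51,
               significand is implicit 100 followed by the low 51 bits. */
            exponent_field = (int)((bits >> 51) & 0x3FF);
            significand    = (1ULL << 53) | (bits & ((1ULL << 51) - 1));
        }
        printf("sign=%d exponent_field=%d significand=%llu\n",
               sign, exponent_field, (unsigned long long)significand);
    }

    int main(void) {
        /* 0x31C000000000000F: exponent field 398 (i.e. 10^0), significand 15 -> value 15 */
        decode_bid64(0x31C000000000000FULL);
        return 0;
    }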

The leading digit is between 0 and 9 (3 or 4 binary bits), and the rest of the significand uses the densely packed decimal (DPD) encoding.
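As a structural sketch only (assuming the usual decimal64 layout with a 50-bit trailing significand field), the fragment below splits that field into its five 10-bit declets; each declet encodes three decimal digits under DPD, which together with the leading digit from the combination field gives the 16-digit significand. The declets are extracted here, not decoded.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint64_t bits = 0x2234000000000049ULL;          /* arbitrary example pattern */
        uint64_t trailing = bits & ((1ULL << 50) - 1);  /* 50-bit trailing significand field */

        /* Five declets, most significant first; each encodes three decimal digits. */
        for (int i = 4; i >= 0; i--) {
            unsigned declet = (unsigned)((trailing >> (10 * i)) & 0x3FF);
            printf("declet %d: 0x%03X\n", 4 - i, declet);
        }
        return 0;
    }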

In the above cases, with the true significand taken as the sequence of decimal digits decoded, the value represented is

(−1)^signbit × 10^(exponentbits − 398) × truesignificand
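A worked example of this formula, assuming the decimal64 exponent bias of 398 and using a binary double only to approximate the result for printing:

    #include <math.h>
    #include <stdio.h>

    /* (-1)^signbit * truesignificand * 10^(exponentbits - 398), approximated as a double. */
    static double decimal64_value(int signbit, int exponentbits, long long truesignificand) {
        return (signbit ? -1.0 : 1.0) * (double)truesignificand * pow(10.0, exponentbits - 398);
    }

    int main(void) {
        /* Significand 15 with exponent field 397 represents 15 * 10^-1 = 1.5. */
        printf("%g\n", decimal64_value(0, 397, 15));
        return 0;
    }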