decimal32 floating-point format

The binary format with the same bit size, binary32, has an approximate range from the subnormal minimum ±1×10^−45, through the normal minimum with full 24-bit precision ±1.1754944×10^−38, up to the maximum ±3.4028235×10^38.
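
For reference, these binary32 limits can be printed straight from <float.h>; a minimal sketch, assuming a C11 compiler on a platform where float is IEEE 754 binary32:

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        /* The three binary32 limits quoted above. */
        printf("subnormal minimum: %g\n", (double)FLT_TRUE_MIN); /* ~1.4e-45        */
        printf("normal minimum:    %g\n", (double)FLT_MIN);      /* ~1.1754944e-38  */
        printf("maximum:           %g\n", (double)FLT_MAX);      /* ~3.4028235e+38  */
        return 0;
    }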

Besides the special cases of infinities and NaNs, there are four points relevant to understanding the encoding of decimal32.

Densely Packed Decimal encoding for all except the first digit of the significand; hardware-centric and promoted by IBM®; for the differences, see below.

Both alternatives provide exactly the same range of representable numbers: up to 7 digits of significand and 3 × 2^6 = 192 possible exponent values.
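
The factor 3 comes from the two most significant exponent bits taking only the values 00, 01 or 10, and the factor 2^6 from the six exponent-continuation bits. With the exponent range −95 … +96 for a significand scaled as d.dddddd (equivalently −101 … +90 for an integral 7-digit significand), the finite values run from ±1×10^−101 up to ±9.999999×10^96.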

IEEE 754 allows these two different encodings without providing a way to denote which one is used, for instance in a situation where decimal32 values are communicated between systems.

Prefer data exchange in integral or ASCII 'triplets' for sign, exponent and significand.

That enables bigger precision and range, with the trade-off that some simple functions which are used very frequently in coding, such as sort and compare, do not work on the bit pattern but require computations to extract exponent and significand and then obtain an exponent-aligned representation.
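
To illustrate the extra work involved, here is a minimal sketch in C; the dec_triplet layout and the function name are inventions for this example, not part of IEEE 754 or of any library:

    #include <stdint.h>

    /* An interchange 'triplet' as suggested above, representing
       (-1)^sign * significand * 10^exponent. */
    struct dec_triplet {
        int     sign;        /* 0 = positive, 1 = negative */
        int     exponent;    /* decimal exponent           */
        int64_t significand; /* integral, 0 ... 9999999    */
    };

    /* Compare magnitudes by aligning exponents; a plain comparison of
       bit patterns or struct members would give wrong results. */
    static int dec_cmp_magnitude(struct dec_triplet a, struct dec_triplet b)
    {
        if (a.significand == 0 && b.significand == 0) return 0;
        /* If the exponents differ by more than the 7-digit precision,
           the nonzero value with the larger exponent dominates. */
        if (a.exponent - b.exponent > 7) return a.significand ? 1 : -1;
        if (b.exponent - a.exponent > 7) return b.significand ? -1 : 1;
        /* Otherwise align: scale the significand with the larger exponent
           up until both exponents match (no overflow possible here,
           since 9999999 * 10^7 fits easily in 64 bits). */
        while (a.exponent > b.exponent) { a.significand *= 10; a.exponent--; }
        while (b.exponent > a.exponent) { b.significand *= 10; b.exponent--; }
        return (a.significand > b.significand) - (a.significand < b.significand);
    }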

Be aware that the bit numbering used in the tables (e.g. m10 … m0) runs in the opposite direction to that used in the document for the IEEE 754 standard (G0 … G10).

The resulting significand could be a positive binary integer of 24 bits, up to 1001 1111111111 1111111111₂ = 10485759₁₀, but values above 10^7 − 1 = 9999999 = 98967F₁₆ = 1001 1000 1001 0110 0111 1111₂ are 'illegal' and have to be treated as zeroes.
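
As an illustration, a sketch in C of extracting that binary-integer significand from a 32-bit BID pattern; the names bid32_decode and dec32_parts are inventions for this example, and the special values (infinities, NaNs) are not handled:

    #include <stdint.h>

    struct dec32_parts {
        int      sign;        /* 0 or 1                            */
        int      exponent;    /* unbiased decimal exponent         */
        uint32_t significand; /* integral significand, 0 ... 9999999 */
    };

    static struct dec32_parts bid32_decode(uint32_t bits)
    {
        struct dec32_parts p;
        p.sign = bits >> 31;
        if (((bits >> 29) & 0x3) != 0x3) {
            /* Small significands: 8-bit exponent field after the sign,
               23 significand bits (the 24-bit integer starts with 0). */
            p.exponent    = (int)((bits >> 23) & 0xFF) - 101; /* bias 101 */
            p.significand = bits & 0x7FFFFF;
        } else {
            /* Large significands: the exponent field is shifted right by
               two bits and the significand gets the implicit prefix 100. */
            p.exponent    = (int)((bits >> 21) & 0xFF) - 101;
            p.significand = 0x800000u | (bits & 0x1FFFFF);
        }
        if (p.significand > 9999999) /* 'illegal', treated as zero */
            p.significand = 0;
        return p;
    }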

The significand's leading decimal digit is formed from the (0)cde or 100e bits, read as a binary integer.

Be aware that the bit numbering used here (e.g. b9 … b0) runs in the opposite direction to that used in the document for the IEEE 754 standard (b0 … b9).

The benefit of this encoding is access to individual digits by decoding or encoding only 10 bits; the disadvantage is that some simple functions which are used very frequently in coding, such as sort and compare, do not work on the bit pattern but require decoding to decimal digits (and possibly an exponent-aligned representation) first.
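
For illustration, a sketch in C of decoding one 10-bit declet into three decimal digits, following the published DPD decoding rules; the function name and the digit ordering are choices made for this example, and the leading digit taken from the combination field (see above) is not covered:

    #include <stdint.h>

    /* Decode one 10-bit DPD declet into three decimal digits,
       digit[0] = hundreds, digit[1] = tens, digit[2] = units.
       Non-canonical declets decode like their canonical counterparts. */
    static void dpd_decode(unsigned declet, unsigned digit[3])
    {
        unsigned p = (declet >> 9) & 1, q = (declet >> 8) & 1, r = (declet >> 7) & 1;
        unsigned s = (declet >> 6) & 1, t = (declet >> 5) & 1, u = (declet >> 4) & 1;
        unsigned v = (declet >> 3) & 1, w = (declet >> 2) & 1, x = (declet >> 1) & 1;
        unsigned y = declet & 1;

        if (!v) {                  /* three small digits (0-7) */
            digit[0] = 4*p + 2*q + r;
            digit[1] = 4*s + 2*t + u;
            digit[2] = 4*w + 2*x + y;
        } else if (!w && !x) {     /* only the units digit is large (8-9) */
            digit[0] = 4*p + 2*q + r;
            digit[1] = 4*s + 2*t + u;
            digit[2] = 8 + y;
        } else if (!w && x) {      /* only the tens digit is large */
            digit[0] = 4*p + 2*q + r;
            digit[1] = 8 + u;
            digit[2] = 4*s + 2*t + y;
        } else if (w && !x) {      /* only the hundreds digit is large */
            digit[0] = 8 + r;
            digit[1] = 4*s + 2*t + u;
            digit[2] = 4*p + 2*q + y;
        } else if (!s && !t) {     /* hundreds and tens large */
            digit[0] = 8 + r;
            digit[1] = 8 + u;
            digit[2] = 4*p + 2*q + y;
        } else if (!s && t) {      /* hundreds and units large */
            digit[0] = 8 + r;
            digit[1] = 4*p + 2*q + u;
            digit[2] = 8 + y;
        } else if (s && !t) {      /* tens and units large */
            digit[0] = 4*p + 2*q + r;
            digit[1] = 8 + u;
            digit[2] = 8 + y;
        } else {                   /* all three digits large */
            digit[0] = 8 + r;
            digit[1] = 8 + u;
            digit[2] = 8 + y;
        }
    }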

The decimal formats include denormal values, for a graceful degradation of precision near zero, but in contrast to the binary formats they are not specially marked and do not need a special exponent; in decimal32 they are just values too small to have full 7-digit precision even with the smallest exponent.
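
For example, the smallest positive decimal32 value is 1×10^−101, with only one significant digit left, while the smallest value with full 7-digit precision is 1.000000×10^−95.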

Thus, it is possible to initialize an array to infinities or NaNs by filling it with a single byte value.
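
A sketch of that trick in C, inspecting raw bit patterns rather than using a decimal floating-point type: filling with the byte 0x78 yields 0x78787878 in every element, which decodes as +infinity (the bits after the combination field are ignored for infinities), while 0x7C yields 0x7C7C7C7C, a quiet NaN.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        uint32_t a[4];

        memset(a, 0x78, sizeof a); /* every element: 0x78787878 = +infinity */
        printf("%08" PRIX32 " decodes as +Infinity\n", a[0]);

        memset(a, 0x7C, sizeof a); /* every element: 0x7C7C7C7C = quiet NaN */
        printf("%08" PRIX32 " decodes as a quiet NaN\n", a[0]);

        return 0;
    }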