Normalized number

In applied mathematics, a number is normalized when it is written in scientific notation with one non-zero decimal digit before the decimal point.[1]

Thus, a real number, when written out in normalized scientific notation, is as follows:

$$\pm d_0.d_1 d_2 d_3 \ldots \times 10^n$$

where $n$ is an integer and $d_0, d_1, d_2, d_3, \ldots$ are the digits of the number in base 10, with $d_0$ not zero.

That is, its leading digit (i.e., leftmost) is not zero and is followed by the decimal point.

Simply speaking, a number is normalized when it is written in the form $a \times 10^n$ where $1 \le |a| < 10$, without leading zeros in $a$.

This is the standard form of scientific notation.
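As an illustrative aside (not part of the original text), Python's "e" format specifier happens to print non-zero numbers in exactly this standard form: one non-zero digit before the decimal point, followed by the power of ten.

```python
# Python's "e" format prints non-zero numbers in normalized scientific notation:
# one non-zero digit before the decimal point, then the power of ten.
for x in (350.0, 0.0051, -42.0):
    print(f"{x:.4e}")
# Output:
# 3.5000e+02    (3.5 x 10^2)
# 5.1000e-03    (5.1 x 10^-3)
# -4.2000e+01   (-4.2 x 10^1)
```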

An alternative style is to have the first non-zero digit after the decimal point, so that, for example, $350$ is written as $0.35 \times 10^3$ rather than $3.5 \times 10^2$.
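As a small illustration of this alternative convention in base 2 (an aside added here, not from the source), Python's math.frexp returns a significand $m$ with $0.5 \le |m| < 1$, so the first non-zero binary digit sits just after the point.

```python
import math

# math.frexp(x) returns (m, e) with x == m * 2**e and 0.5 <= |m| < 1 for non-zero x,
# which is the base-2 version of the "first non-zero digit after the point" style.
print(math.frexp(0.375))  # (0.75, -1): 0.375 == 0.75 * 2**-1
print(math.frexp(96.0))   # (0.75, 7):  96.0  == 0.75 * 2**7
```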

The same definition holds if the number is represented in another radix (that is, base of enumeration), rather than base 10.

In base $b$ a normalized number will have the form

$$\pm d_0.d_1 d_2 d_3 \ldots \times b^n,$$

where again $d_0 \neq 0$ and the digits $d_0, d_1, d_2, d_3, \ldots$ are integers between $0$ and $b - 1$.
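As a minimal sketch of how such a form can be computed (the function name normalize_radix and the logarithm-based approach are my own choices, not taken from the source):

```python
import math

def normalize_radix(x: float, b: int) -> tuple[float, int]:
    """Return (a, n) with x == a * b**n and 1 <= |a| < b, for non-zero x and radix b >= 2."""
    if x == 0:
        raise ValueError("zero has no normalized form")
    if b < 2:
        raise ValueError("radix must be at least 2")
    n = math.floor(math.log(abs(x), b))  # first guess at the exponent
    a = x / b ** n
    # Guard against rounding error in the logarithm near exact powers of b.
    while abs(a) >= b:
        a, n = a / b, n + 1
    while abs(a) < 1:
        a, n = a * b, n - 1
    return a, n

print(normalize_radix(0.375, 2))   # (1.5, -2):  0.011 in binary normalizes to 1.1 * 2**-2
print(normalize_radix(350.0, 10))  # (3.5, 2):   350 normalizes to 3.5 * 10**2
```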

In many computer systems, binary floating-point numbers are represented internally in this normalized form; for details, see normal number (computing).

Although the point is described as floating, for a normalized floating-point number its position is fixed; the movement is reflected in the different values of the power.
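This can be seen with Python's built-in float.hex (an illustration added here), which prints an IEEE 754 double as a normalized hexadecimal significand times a power of two: the point stays immediately after the leading 1, and only the exponent changes.

```python
# float.hex() shows an IEEE 754 double as a normalized significand
# ("1." followed by the hexadecimal fraction) times a power of two;
# the point stays put and only the exponent moves.
for x in (0.375, 3.0, 96.0):
    print(x, "=", x.hex())
# 0.375 = 0x1.8000000000000p-2   (1.5 * 2**-2)
# 3.0   = 0x1.8000000000000p+1   (1.5 * 2**1)
# 96.0  = 0x1.8000000000000p+6   (1.5 * 2**6)
```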