Precision (computer science)

In computer science, the precision of a numerical quantity is a measure of the detail in which the quantity is expressed, usually measured in bits or decimal digits. It is related to precision in mathematics, which describes the number of digits used to express a value.

The single- and double-precision formats are the most widely used and are supported on nearly all platforms.
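
A minimal sketch of the difference, assuming NumPy is available for its explicit float32 and float64 scalar types (not mentioned in the text): the same decimal value stored in single precision keeps only about 7 significant decimal digits, while double precision keeps about 15–16.

```python
import numpy as np

value = 0.1  # not exactly representable in binary floating point

single = np.float32(value)  # 32 bits: ~7 significant decimal digits
double = np.float64(value)  # 64 bits: ~15-16 significant decimal digits

print(f"single precision: {single:.20f}")  # 0.10000000149011611938...
print(f"double precision: {double:.20f}")  # 0.10000000000000000555...
```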

The use of the half-precision format and other minifloat formats has been increasing, especially in the field of machine learning, since many machine learning algorithms are inherently error-tolerant.
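
A small illustration, again assuming NumPy, of how much accuracy is given up when values are rounded to half precision (float16) compared with single precision (float32); the sample array is a hypothetical stand-in for model weights, not taken from the text.

```python
import numpy as np

# Hypothetical values, as might appear as weights in a machine learning model.
weights = np.array([0.1234567, 1.9999999, 3.1415926], dtype=np.float64)

half = weights.astype(np.float16)    # ~3-4 significant decimal digits
single = weights.astype(np.float32)  # ~7 significant decimal digits

print("half:  ", half)
print("single:", single)
print("max error (half):  ", np.max(np.abs(weights - half)))
print("max error (single):", np.max(np.abs(weights - single)))
```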

An example of the rounding error introduced by limited precision is storing sin(0.1) in the IEEE single-precision floating-point format.

The error is then often magnified as subsequent computations are made using the data (although it can also be reduced).
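
A rough sketch of this example, using NumPy's float32 type to simulate single-precision storage: the value of sin(0.1) computed in double precision is rounded when stored, and a later subtraction of the nearly equal value 0.1 (an illustrative choice, not from the text) magnifies the relative error by cancelling the leading digits.

```python
import math
import numpy as np

exact = math.sin(0.1)              # computed and kept in double precision
stored = float(np.float32(exact))  # the same value after rounding to single precision

# Error introduced just by storing the result in single precision.
print(f"sin(0.1)         = {exact:.17f}")
print(f"stored (float32) = {stored:.17f}")
print(f"relative error   = {abs(stored - exact) / exact:.1e}")  # on the order of 1e-8

# A later computation can magnify the error: subtracting the nearly equal
# value 0.1 cancels the leading digits, so the same absolute error now
# dominates a much smaller result (catastrophic cancellation).
exact_diff = exact - 0.1
stored_diff = stored - 0.1
print(f"relative error after subtracting 0.1 = "
      f"{abs(stored_diff - exact_diff) / abs(exact_diff):.1e}")  # roughly 1e-5
```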