Methods of computing square roots

Most square root computation methods are iterative: after choosing a suitable initial estimate of the square root, an iterative refinement is performed until some termination criterion is met.

Heron's method, from first-century Egypt, was the first ascertainable algorithm for computing a square root.[2]

Modern analytic methods began to be developed after the introduction of the Arabic numeral system to western Europe in the early Renaissance.[3]

In estimating, the exponent and mantissa are usually treated separately, as the number would be expressed in scientific notation.
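For example, 125348 = 12.5348 × 10⁴, so √125348 = √12.5348 × 10² ≈ 3.54 × 10²; only the square root of the mantissa 12.5348 then remains to be estimated.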

A least-squares regression line minimizes the average squared difference between the estimate and the value of the function.

That is the best estimate on average that can be achieved with a single-piece linear approximation of the function y = x² in the interval [1, 100].

For this formulation, any additive constant of 1 plus a small increment will make a satisfactory estimate, so remembering the exact number is not a burden.

The accuracy of this estimate is less than one significant digit; the relative error is greater than 1/2², so fewer than 2 bits of information are provided.

The accuracy is severely limited because the range is two orders of magnitude, quite large for this kind of estimation.

In some cases, hyperbolic estimates may be efficacious, because a hyperbola is also a convex curve and may lie along an arc of y = x² better than a line.

For a more typical case like 75, the hyperbolic estimate of 8.00 is only 7.6% low, and 5 Newton-Raphson iterations starting at 75 would be required to obtain a more accurate result.

The procedure requires only a little arithmetic to find a boundary number midway between two products from the multiplication table.[Note 3]

Here is a reference table of those boundaries (each boundary is the midpoint k² + k + 0.5 of two consecutive squares):

a is between:      estimate k
1.0  and  2.5      1
2.5  and  6.5      2
6.5  and 12.5      3
12.5 and 20.5      4
20.5 and 30.5      5
30.5 and 42.5      6
42.5 and 56.5      7
56.5 and 72.5      8
72.5 and 90.5      9
90.5 and 100       10

The final operation is to multiply the estimate k by the power of ten divided by 2, so for S = a × 10^(2n), the estimate is √S ≈ k × 10^n.
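As an illustrative sketch (the function name and the range check are assumptions, not part of the method), the lookup can be written in C:

    #include <assert.h>

    /* One-digit estimate k of sqrt(a) for a in [1, 100), using the
       boundary numbers k*(k+1) + 0.5 midway between consecutive squares. */
    int scalar_estimate(double a)
    {
        assert(a >= 1.0 && a < 100.0);      /* adjusted mantissa range */
        int k = 1;
        while (k < 10 && a > k * (k + 1) + 0.5)
            k++;                            /* step past each boundary */
        return k;                           /* sqrt(a) is approximately k */
    }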

The method can be extended to 3 significant digits in most cases by interpolating between the nearest squares bounding the operand.

From the multiplication tables, the square root of the mantissa must be 8 point something because a = 75 is between 8×8 = 64 and 9×9 = 81, so k is 8; "something" is the decimal representation of R. The fraction R has the numerator 75 − k² = 11 and the denominator 81 − k² = 17, so R = 11/17 ≈ 0.65, giving the estimate 8.65 (the true value is √75 ≈ 8.66).
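A brief C sketch of this interpolation (the function name is illustrative, and a is again assumed to lie in [1, 100)):

    /* Three-digit estimate of sqrt(a) by linear interpolation between
       the bounding squares k^2 and (k+1)^2. */
    double interp_estimate(double a)
    {
        int k = 1;
        while ((k + 1) * (k + 1) <= a)      /* find k with k^2 <= a < (k+1)^2 */
            k++;
        double R = (a - k * k) / (2 * k + 1);   /* denominator is (k+1)^2 - k^2 */
        return k + R;                       /* a = 75 gives 8 + 11/17 = 8.65 */
    }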

If x is an overestimate of the square root of a nonnegative number S, then S/x will be an underestimate, and vice versa, so the average of these two numbers may reasonably be expected to provide a better approximation (though the formal proof of that assertion depends on the inequality of arithmetic and geometric means, which shows this average is always an overestimate of the square root, as noted in the article on square roots, thus assuring convergence).
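A minimal C sketch of the resulting iteration (the tolerance and iteration cap are illustrative choices):

    #include <math.h>

    /* Heron's (Babylonian) method: repeatedly replace x by the average
       of x and S/x, which straddle sqrt(S) from opposite sides.
       x must be a positive initial estimate. */
    double heron_sqrt(double S, double x)
    {
        for (int i = 0; i < 60; i++) {
            double next = 0.5 * (x + S / x);
            if (fabs(next - x) <= 1e-12 * next)   /* converged */
                return next;
            x = next;
        }
        return x;
    }

Called as heron_sqrt(125348.0, 350.0), for instance, this reaches the value below in a handful of iterations, since the method converges quadratically.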

(The true value is 354.0451948551....) Notice that early iterations needed to be computed to only 1, 2 or 4 decimal places to produce an accurate final answer: because the method converges quadratically, the number of correct digits roughly doubles with each iteration.

If the rough estimate above is used with the Babylonian method, then the least accurate cases, in ascending order, are as follows:

It is slower than the Babylonian method, but it has several advantages: it is easier to carry out by hand, every digit of the root found is known to be correct and need not be revised later, and it works in any number base. Its main disadvantage is that it produces only one digit of the result per iteration, so convergence is linear rather than quadratic. Napier's bones include an aid for the execution of this algorithm.

For instance, finding the digit-by-digit square root in the binary number system is quite efficient, since the value of each digit is either 0 or 1, and each step requires only a comparison and, at most, a subtraction.

This, however, is no real limitation for a computer-based calculation, as in base 2 floating-point and fixed-point representations, it is trivial to multiply by a power of two, since that is merely a bit shift.
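For illustration, here is the standard binary digit-by-digit routine for 32-bit unsigned integers, in which each step involves only shifts, a comparison and a subtraction (the function name is arbitrary):

    #include <stdint.h>

    /* Binary digit-by-digit square root: returns floor(sqrt(n)). */
    uint32_t isqrt32(uint32_t n)
    {
        uint32_t root = 0;
        uint32_t bit = 1u << 30;        /* highest power of 4 in range */
        while (bit > n)
            bit >>= 2;
        while (bit != 0) {
            if (n >= root + bit) {      /* try a 1 in this result bit */
                n -= root + bit;
                root = (root >> 1) + bit;
            } else {
                root >>= 1;             /* this result bit is 0 */
            }
            bit >>= 2;
        }
        return root;
    }

Each loop iteration fixes one bit of the result, mirroring the one-digit-per-iteration behaviour described above.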

Sometimes what is desired is finding not the numerical value of a square root, but rather its continued fraction expansion, and hence its rational approximation.

The numerator/denominator expansion for continued fractions is cumbersome to write as well as to embed in text formatting systems.

Finally (step 3), divide the numerator by the denominator of the rational fraction to obtain the approximate value of the root:
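As a sketch, the standard recurrence for the periodic continued fraction of √S, with S a positive non-square integer, can be coded as follows (the term count n is an arbitrary parameter):

    #include <stdio.h>
    #include <math.h>

    /* Print the first n terms of the continued fraction of sqrt(S).
       Assumes S is a positive non-square integer, so d never becomes 0. */
    void sqrt_cf(int S, int n)
    {
        int a0 = (int)sqrt((double)S);
        int m = 0, d = 1, a = a0;
        printf("[%d;", a0);
        for (int i = 1; i < n; i++) {
            m = d * a - m;              /* standard recurrence; */
            d = (S - m * m) / d;        /* m and d stay integral */
            a = (a0 + m) / d;
            printf(" %d", a);
        }
        printf(" ...]\n");
    }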

If the integer part of the adjusted mantissa is taken, it can only have the values 1 to 99, and that could be used as an index into a table of 99 pre-computed square roots to complete the estimate.

A computer using base sixteen would require a larger table, but one using base two would require only three entries: the possible bits of the integer part of the adjusted mantissa are 01 (when the power is even there is no shift, and a normalised floating-point number always has a non-zero high-order digit) or, if the power is odd, 10 or 11, these being the first two bits of the original mantissa.

Everything now depends on the exact details of the format of the representation, plus what operations are available to access and manipulate the parts of the number.

Many computers follow the IEEE (or sufficiently similar) representation, and a very rapid approximation to the square root can be obtained as a starting value for Newton's method.

So for a 32-bit single-precision floating-point number in IEEE format (where, notably, the exponent has a bias of 127 added for the represented form), you can get the approximate logarithm by interpreting its binary representation as a 32-bit integer, scaling it by 2⁻²³, and removing the bias of 127.
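A minimal C sketch of the resulting bit manipulation, assuming IEEE 754 single precision (halving the integer interpretation approximately halves the logarithm, which takes the square root):

    #include <stdint.h>

    /* Rough sqrt estimate by halving the bit pattern as an
       approximate, biased base-2 logarithm. */
    float sqrt_approx(float z)
    {
        union { float f; uint32_t i; } val = { z };  /* reinterpret bits */
        /* Subtract 2^23, halve, then add back (b + 1) / 2 * 2^23
           = 1 << 29, where b = 127 is the exponent bias. */
        val.i -= 1u << 23;
        val.i >>= 1;
        val.i += 1u << 29;
        return val.f;
    }

The rough estimate can then be refined with one or two Newton iterations.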

Some VLSI hardware implements inverse square root using a second-degree polynomial estimation followed by a Goldschmidt iteration.
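A software sketch of the Goldschmidt step (in hardware the initial estimate y of 1/sqrt(S) comes from the polynomial evaluation; here it is simply a parameter, and the iteration count is illustrative):

    /* Goldschmidt iteration: given a rough y ~ 1/sqrt(S), x converges
       to sqrt(S) and 2h to 1/sqrt(S), using only multiplies and adds. */
    void goldschmidt(double S, double y, double *sqrt_S, double *rsqrt_S)
    {
        double x = S * y;
        double h = 0.5 * y;
        for (int i = 0; i < 4; i++) {
            double r = 0.5 - x * h;
            x += x * r;
            h += h * r;
        }
        *sqrt_S  = x;
        *rsqrt_S = 2.0 * h;
    }

Because the update uses only multiplication and addition, both the square root and its reciprocal are produced together, which suits pipelined hardware.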

Semilog graphs comparing the speed of convergence of Heron's method to find the square root of 100 for different initial guesses: negative guesses converge to the negative root, positive guesses to the positive root; values closer to the root converge faster, and all approximations are overestimates.