Halley's method

In numerical analysis, Halley's method is a root-finding algorithm used for functions of one real variable with a continuous second derivative.

Edmond Halley was an English mathematician and astronomer who introduced the method now called by his name.

Like Newton's method, it iteratively produces a sequence of approximations to the root; its rate of convergence to the root is cubic.

Multidimensional versions of this method exist.

Halley's method exactly finds the roots of a linear-over-linear Padé approximation to the function, in contrast to Newton's method or the secant method, which approximate the function linearly, or Muller's method, which approximates the function quadratically.

Halley's method is a numerical algorithm for solving the nonlinear equation f(x) = 0. Starting from an initial guess x_0, the method produces the sequence of iterates

x_{n+1} = x_n - \frac{2 f(x_n)\, f'(x_n)}{2[f'(x_n)]^2 - f(x_n)\, f''(x_n)}.
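The iteration above can be sketched directly in code. The following is a minimal illustration (the function name `halley` and its signature are hypothetical, not from any particular library), which takes the function together with its first and second derivatives:

```python
# Minimal sketch of Halley's (rational) iteration; names are illustrative.
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Find a root of f via Halley's method, given f' (df) and f'' (d2f)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        # Halley update: x - 2 f f' / (2 f'^2 - f f'')
        x_new = x - 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the cube root of 2, i.e. the root of x**3 - 2.
root = halley(lambda x: x**3 - 2,
              lambda x: 3 * x**2,
              lambda x: 6 * x,
              x0=1.0)
print(root)  # ≈ 1.2599210498948732
```

Note that each step requires evaluating f, f', and f''; when the second derivative is expensive or unavailable, Newton's method may be preferable despite its lower order.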

If f is a three times continuously differentiable function and a is a zero of f but not of its derivative, then, in a neighborhood of a, the iterates x_n satisfy

|x_{n+1} - a| \le K\, |x_n - a|^3, \quad \text{for some } K > 0.

This means that the iterates converge to the zero if the initial guess is sufficiently close, and that the convergence is cubic.

The Halley iteration can be simplified to

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \left[1 - \frac{f(x_n)}{f'(x_n)} \cdot \frac{f''(x_n)}{2 f'(x_n)}\right]^{-1}.

When the second derivative is very close to zero, the bracketed factor is close to one, and the Halley's method iteration is almost the same as the Newton's method iteration x_{n+1} = x_n - f(x_n)/f'(x_n).

Halley's method can also be obtained by applying Newton's method to the function g(x) = f(x)/\sqrt{|f'(x)|}, which has the same zeros as f wherever f' is nonzero. Applying Newton's method to g gives

x_{n+1} = x_n - \frac{g(x_n)}{g'(x_n)}

with

g'(x) = \frac{2[f'(x)]^2 - f(x)\, f''(x)}{2 f'(x) \sqrt{|f'(x)|}},

and the result follows.
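This equivalence can be checked numerically. The sketch below (helper names are illustrative, not from any library) compares one Halley step with one Newton step on g(x) = f(x)/\sqrt{|f'(x)|} for the sample function f(x) = x³ − 2, using a central-difference approximation for g':

```python
import math

def halley_step(f, df, d2f, x):
    # One step of Halley's rational iteration.
    return x - 2 * f(x) * df(x) / (2 * df(x)**2 - f(x) * d2f(x))

def newton_step_on_g(f, df, x, h=1e-6):
    # One Newton step applied to g(x) = f(x) / sqrt(|f'(x)|),
    # with g' approximated by a central difference.
    g = lambda t: f(t) / math.sqrt(abs(df(t)))
    dg = (g(x + h) - g(x - h)) / (2 * h)
    return x - g(x) / dg

f   = lambda x: x**3 - 2
df  = lambda x: 3 * x**2
d2f = lambda x: 6 * x

x = 1.5
print(halley_step(f, df, d2f, x))
print(newton_step_on_g(f, df, x))  # agrees up to finite-difference error
```

This is a numerical spot check of the derivation, not a proof; the two steps coincide exactly when g' is computed analytically.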

Suppose a is a root of f but not of its derivative.

Then Taylor's theorem implies

0 = f(a) = f(x_n) + f'(x_n)(a - x_n) + \frac{f''(x_n)}{2}(a - x_n)^2 + \frac{f'''(\xi)}{6}(a - x_n)^3

and also

0 = f(a) = f(x_n) + f'(x_n)(a - x_n) + \frac{f''(\eta)}{2}(a - x_n)^2,

where ξ and η are numbers lying between a and x_n. Multiplying the first equation by 2f'(x_n), multiplying the second by f''(x_n)(a - x_n), subtracting (the quadratic terms cancel), and re-organizing terms yields

0 = 2 f(x_n) f'(x_n) + \left(2[f'(x_n)]^2 - f(x_n) f''(x_n)\right)(a - x_n) + \left(\frac{f'(x_n) f'''(\xi)}{3} - \frac{f''(x_n) f''(\eta)}{2}\right)(a - x_n)^3.

Put the second term on the left side and divide through by 2[f'(x_n)]^2 - f(x_n) f''(x_n) to get

a - x_n = -\frac{2 f(x_n) f'(x_n)}{2[f'(x_n)]^2 - f(x_n) f''(x_n)} - \frac{2 f'(x_n) f'''(\xi) - 3 f''(x_n) f''(\eta)}{6\left(2[f'(x_n)]^2 - f(x_n) f''(x_n)\right)}(a - x_n)^3.

Thus

a - x_{n+1} = -\frac{2 f'(x_n) f'''(\xi) - 3 f''(x_n) f''(\eta)}{6\left(2[f'(x_n)]^2 - f(x_n) f''(x_n)\right)}(a - x_n)^3.

The limit of the coefficient on the right side as x_n \to a is

-\frac{2 f'(a) f'''(a) - 3 [f''(a)]^2}{12 [f'(a)]^2}.

If we take K to be a little larger than the absolute value of this, we can take absolute values of both sides of the formula and replace the absolute value of the coefficient by its upper bound near a to get

|x_{n+1} - a| \le K |x_n - a|^3,

which is what was to be proved.
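The cubic convergence can be observed numerically. The following script (an illustration added here, not part of the proof) runs Halley's method on f(x) = x³ − 2 from x_0 = 1 and prints the observed order log(e_{n+1})/log(e_n), which should tend toward 3:

```python
import math

# Track the error e_n = |x_n - a| for Halley's method on x**3 - 2.
a = 2 ** (1 / 3)
f   = lambda x: x**3 - 2
df  = lambda x: 3 * x**2
d2f = lambda x: 6 * x

x = 1.0
errors = []
for _ in range(4):
    errors.append(abs(x - a))
    x = x - 2 * f(x) * df(x) / (2 * df(x)**2 - f(x) * d2f(x))

# Observed order of convergence between consecutive errors.
for e0, e1 in zip(errors, errors[1:]):
    if e1 > 0:  # skip once the error hits machine precision
        print(math.log(e1) / math.log(e0))  # approaches 3
```

After only a few iterations the error falls below machine precision, which is why the ratio is only meaningful for the first few steps.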

To summarize, Halley actually developed two third-order root-finding methods.

The above, using only a division, is referred to as Halley's rational method.

A second, "irrational" method uses a square root as well:[6][7][8]

x_{n+1} = x_n - \frac{2 f(x_n)}{f'(x_n) \pm \sqrt{[f'(x_n)]^2 - 2 f(x_n) f''(x_n)}},

where the sign of the square root is chosen to agree with the sign of f'(x_n). This iteration was "deservedly preferred" to the rational method by Halley[7] on the grounds that the denominator is smaller, making the division easier.

A second advantage is that it tends to have about half of the error of the rational method, a benefit which multiplies as it is iterated.

On a computer, it would appear to be slower, as it has two slow operations (division and square root) instead of one. On modern computers, however, the reciprocal of the denominator can be computed at the same time as the square root via instruction pipelining, so the latency of each iteration differs very little.