In mathematics, and more specifically in numerical analysis, Householder's methods are a class of root-finding algorithms that are used for functions of one real variable with continuous derivatives up to some order d + 1.
The algorithm is iterative and has a rate of convergence of d + 1.
These methods are named after the American mathematician Alston Scott Householder.
Householder's method is a numerical algorithm for solving the equation f(x) = 0.
The method consists of a sequence of iterations

x_{n+1} = x_n + d\,\frac{\left(1/f\right)^{(d-1)}(x_n)}{\left(1/f\right)^{(d)}(x_n)},

beginning with an initial guess x_0.[1]
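For the two lowest orders the family reduces to familiar iterations; a quick check using (1/f)' = -f'/f^2 and (1/f)'' = (2f'^2 - f f'')/f^3 shows that the cases d = 1 and d = 2 are Newton's method and Halley's method, respectively:

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \qquad (d = 1),

x_{n+1} = x_n - \frac{2\, f(x_n)\, f'(x_n)}{2\, f'(x_n)^2 - f(x_n)\, f''(x_n)} \qquad (d = 2).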
If f is a d + 1 times continuously differentiable function and a is a zero of f but not of its derivative, then, in a neighborhood of a, the iterates x_n satisfy

|x_{n+1} - a| \le K\,|x_n - a|^{d+1}, \qquad \text{for some } K > 0.
This means that the iterates converge to the zero if the initial guess is sufficiently close, and that the convergence has order d + 1 or better.
Despite their order of convergence, these methods are not widely used because the gain in precision is not commensurate with the rise in effort for large d. The Ostrowski index expresses the error reduction per function evaluation rather than per iteration step.
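As a rough comparison, counting each of f, f', …, f^{(d)} as one evaluation per step, a Householder method of convergence order p = d + 1 uses w = d + 1 evaluations, so its efficiency index is

p^{1/w} = (d + 1)^{1/(d+1)},

which is 2^{1/2} ≈ 1.414 for Newton's method (d = 1), peaks at 3^{1/3} ≈ 1.442 for Halley's method (d = 2), and decreases toward 1 as d grows; this is one way of quantifying why large d is rarely worthwhile.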
Suppose f is analytic in a neighborhood of a and f(a) = 0. Then f has a Taylor series at a and its constant term is zero.
Because this constant term is zero, the function f(x)/(x − a) will have a Taylor series at a and, when f'(a) ≠ 0, its constant term will not be zero.
Because that constant term is not zero, it follows that the reciprocal (x − a)/f(x) has a Taylor series at a, which we will write as

\frac{x - a}{f(x)} = \sum_{k=0}^{\infty} c_k\,(x - a)^k, \qquad c_0 = \frac{1}{f'(a)} \neq 0.

Dividing by x − a gives

\frac{1}{f(x)} = \frac{c_0}{x - a} + \sum_{k=1}^{\infty} c_k\,(x - a)^{k-1}.
When we compute its d-th derivative, we note that the terms for k = 1, ..., d conveniently vanish, since each is a polynomial of degree at most d − 1:

\left(\frac{1}{f}\right)^{(d)}(x) = \frac{(-1)^d\, d!\; c_0}{(x - a)^{d+1}} + \sum_{k=d+1}^{\infty} c_k\, \frac{(k-1)!}{(k-1-d)!}\,(x - a)^{k-1-d}.
We thus get that the correction term that we add to x = x_n to get a value of x_{n+1} that is closer to a is

x_{n+1} - x_n = d\,\frac{\left(1/f\right)^{(d-1)}(x_n)}{\left(1/f\right)^{(d)}(x_n)} = (a - x_n)\,\frac{1 + \varepsilon_{d-1}(x_n)}{1 + \varepsilon_{d}(x_n)},

where \varepsilon_{d-1}(x_n) and \varepsilon_{d}(x_n) collect the contributions of the higher coefficients c_k and are small when x_n is close to a.
If a is the zero of f that is closest to x_n, then the second factor goes to 1 as d goes to infinity, and x_{n+1} − x_n approaches a − x_n. The same limit can be obtained by expanding 1/f around a point b that is closer to a than it is to any other zero of f. By König's theorem, we have

a - b = \lim_{d\to\infty} \frac{(1/f)^{(d-1)}(b)\,/\,(d-1)!}{(1/f)^{(d)}(b)\,/\,d!} = \lim_{d\to\infty}\, d\,\frac{(1/f)^{(d-1)}(b)}{(1/f)^{(d)}(b)}.
The actual proof of the convergence is also based on these ideas.
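In practice the derivatives of 1/f can be generated symbolically, which gives a direct way to experiment with the iteration. The following sketch assumes the SymPy library; the helper names householder_step and householder_solve are illustrative, not part of any standard API.

```python
import sympy as sp

def householder_step(f_expr, x, d):
    """Householder correction d * (1/f)^(d-1) / (1/f)^(d) as a symbolic expression."""
    recip = 1 / f_expr
    numerator = sp.diff(recip, x, d - 1)   # (1/f)^(d-1)
    denominator = sp.diff(recip, x, d)     # (1/f)^(d)
    return sp.simplify(d * numerator / denominator)

def householder_solve(f_expr, x, x0, d, steps=5):
    """Iterate x_{n+1} = x_n + correction(x_n), starting from the initial guess x0."""
    correction = sp.lambdify(x, householder_step(f_expr, x, d), "math")
    xn = x0
    for _ in range(steps):
        xn += correction(xn)
    return xn

if __name__ == "__main__":
    x = sp.symbols("x")
    f = x**3 - 2*x - 5          # Newton's classical example (root near 2)
    for d in (1, 2, 3):         # d = 1: Newton, d = 2: Halley
        print(d, householder_solve(f, x, 2.0, d))
```

For d = 1 and d = 2 the simplified correction is exactly the Newton and Halley update, respectively.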
When the correction term d\,(1/f)^{(d-1)}(x_n)/(1/f)^{(d)}(x_n) is expanded in powers of f(x_n), its leading part −f(x_n)/f'(x_n) is the update of the Newton iteration at the point x_n; writing this part as a separate factor demonstrates where the difference to the simple Newton's method lies.
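For d = 2, for instance, the correction can be rearranged so that the Newton part is visible explicitly:

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \left[1 - \frac{f(x_n)\, f''(x_n)}{2\, f'(x_n)^2}\right]^{-1}.

The first factor is the Newton update, and the bracketed factor carries the higher-order correction; it tends to 1 as x_n approaches a simple zero of f.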
The first problem solved by Newton with the Newton–Raphson–Simpson method was the polynomial equation x^3 − 2x − 5 = 0. He observed that there should be a solution close to 2. Replacing x = 2 + y transforms the equation into

0 = f(y) = y^3 + 6y^2 + 10y - 1.

The Taylor series of the reciprocal function at y = 0 starts with

\frac{1}{f(y)} = -1 - 10\,y - 106\,y^2 - 1121\,y^3 - 11856\,y^4 - 125392\,y^5 - \cdots
The result of applying Householder's methods of various orders at y = 0 is also obtained by dividing neighboring coefficients of the latter power series: one step of order d from y_0 = 0 yields c_{d−1}/c_d, that is, 0.1, 0.0943396…, 0.0945584…, 0.0945513… for d = 1, 2, 3, 4. As one can see, there are a little bit more than d correct decimal places for each order d. The first one hundred digits of the correct solution are 0.0945514815423265914823865405793029638573061056282391803041285290453121899834836671462672817771577578.
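The coefficient ratios above can be reproduced with a short computation. The sketch below again assumes SymPy; the variable names are only for illustration.

```python
import sympy as sp

y = sp.symbols("y")
f = y**3 + 6*y**2 + 10*y - 1                    # Newton's cubic after the substitution x = 2 + y
reciprocal = sp.series(1/f, y, 0, 8).removeO()  # Taylor polynomial of 1/f(y) at y = 0
c = [reciprocal.coeff(y, k) for k in range(8)]  # coefficients c_0, c_1, ..., c_7
root = sp.nsolve(f, y, 0.1)                     # reference value 0.09455148...

for d in range(1, 7):
    # One Householder step of order d from y_0 = 0 equals the ratio c_{d-1}/c_d.
    approx = sp.Float(c[d - 1] / c[d], 30)
    print(d, approx, sp.Float(abs(approx - root), 5))
```

The printed errors shrink by more than a factor of ten with each additional order, matching the observation about the number of correct decimal places.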
An exact derivation of Householder's methods starts from the Padé approximation of order d + 1 of the function, where the approximant with linear numerator is chosen.
Once this has been achieved, the update for the next approximation results from computing the unique zero of the numerator.
Just as the Taylor polynomial of degree d has d + 1 coefficients that depend on the function f, the Padé approximation also has d + 1 coefficients dependent on f and its derivatives.
More precisely, in any Padé approximant, the degrees of the numerator and denominator polynomials have to add to the order of the approximant.
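As an illustration of this construction, take d = 2 and write the approximant at the current iterate x_n in terms of a step h (the coefficient names p_0, p_1, q_1 are only notation for this example). Matching f, f' and f'' at x_n fixes the three coefficients, and the zero of the numerator is Halley's step:

f(x_n + h) \approx \frac{p_0 + p_1 h}{1 + q_1 h}, \qquad p_0 = f(x_n), \quad q_1 = -\frac{f''(x_n)}{2 f'(x_n)}, \quad p_1 = f'(x_n) - \frac{f(x_n)\, f''(x_n)}{2 f'(x_n)},

h = -\frac{p_0}{p_1} = -\frac{2\, f(x_n)\, f'(x_n)}{2\, f'(x_n)^2 - f(x_n)\, f''(x_n)}.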
One could determine the Padé approximant starting from the Taylor polynomial of f using Euclid's algorithm.
However, starting from the Taylor polynomial of 1/f is shorter and leads directly to the given formula.
Since the Taylor polynomial of 1/f has to be equal to the inverse of the desired rational function, we get, after multiplying with the linear numerator, the condition that the coefficient of degree d of the product must vanish (the denominator only has degree d − 1); this single condition determines the zero of the numerator and hence the update.
Householder's method applied to the real-valued function f(x) is the same as applying Newton's method to a suitably transformed function g.
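One function with this property (easily checked by differentiating; the formula is given here only as an illustration) is

g(x) = \left| \left(\frac{1}{f}\right)^{(d-1)}(x) \right|^{-1/d},

since then g'/g = -\tfrac{1}{d}\,(1/f)^{(d)}/(1/f)^{(d-1)}, so the Newton step -g(x_n)/g'(x_n) is exactly the Householder correction d\,(1/f)^{(d-1)}(x_n)/(1/f)^{(d)}(x_n). For d = 1 this gives g = |f|, whose Newton update agrees with that of f itself, and for d = 2 it gives g = |f|/\sqrt{|f'|}, i.e., Halley's method written as Newton's method applied to f/\sqrt{f'}.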