In numerical analysis, Richardson extrapolation is a sequence acceleration method used to improve the rate of convergence of a sequence of estimates of some value $A^* = \lim_{h \to 0} A(h)$.
It is named after Lewis Fry Richardson, who introduced the technique in the early 20th century,[1][2] though the idea was already known to Christiaan Huygens in his calculation of $\pi$.[3] In the words of Birkhoff and Rota, "its usefulness for practical computations can hardly be overestimated."[4] Practical applications of Richardson extrapolation include Romberg integration, which applies Richardson extrapolation to the trapezoid rule, and the Bulirsch–Stoer algorithm for solving ordinary differential equations.
Let $A_0(h)$ be an approximation of $A^*$ (exact value) that depends on a step size $h$ (where $0 < h < 1$) with an error formula of the form

$A^* = A_0(h) + a_0 h^{k_0} + a_1 h^{k_1} + a_2 h^{k_2} + \cdots$

where the $a_i$ are unknown constants and the $k_i$ are known constants such that $h^{k_i} > h^{k_{i+1}}$ (that is, $k_0 < k_1 < k_2 < \cdots$). Note that by simplifying with big O notation, the following formulae are equivalent:

$A^* = A_0(h) + a_0 h^{k_0} + a_1 h^{k_1} + a_2 h^{k_2} + \cdots$
$A^* = A_0(h) + a_0 h^{k_0} + O(h^{k_1})$
$A^* = A_0(h) + O(h^{k_0})$
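For example, the composite trapezoidal rule applied to a smooth integrand has (by the Euler–Maclaurin formula) an error expansion containing only even powers of $h$, so that $k_0 = 2$, $k_1 = 4$, $k_2 = 6$, and so on; this even-power structure underlies the Romberg integration mentioned above and the pseudocode example below.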
Richardson extrapolation is a process that finds a better approximation of $A^*$ by changing the error formula from $A^* = A_0(h) + O(h^{k_0})$ to $A^* = A_1(h) + O(h^{k_1})$, that is, by subtracting the largest term in the error, which was $a_0 h^{k_0}$. This process can be repeated to remove more error terms to get even better approximations.

Using the two step sizes $h$ and $h/t$ for some constant $t > 1$, the two formulas for $A^*$ are:

$A^* = A_0(h) + a_0 h^{k_0} + a_1 h^{k_1} + a_2 h^{k_2} + \cdots$  (1)
$A^* = A_0\left(\frac{h}{t}\right) + a_0 \left(\frac{h}{t}\right)^{k_0} + a_1 \left(\frac{h}{t}\right)^{k_1} + a_2 \left(\frac{h}{t}\right)^{k_2} + \cdots$  (2)

To improve the approximation of $A^*$ by removing the first error term, we multiply equation 2 by $t^{k_0}$ and subtract equation 1 to give us

$(t^{k_0} - 1)\,A^* = \left[t^{k_0} A_0\left(\frac{h}{t}\right) - A_0(h)\right] + a_1 h^{k_1}\left(t^{k_0 - k_1} - 1\right) + a_2 h^{k_2}\left(t^{k_0 - k_2} - 1\right) + \cdots$

This subtraction cancels the leading error term $a_0 h^{k_0}$. Solving for $A^*$ gives

$A^* = \frac{t^{k_0} A_0\left(\frac{h}{t}\right) - A_0(h)}{t^{k_0} - 1} + O(h^{k_1}),$

so the new, more accurate approximation is

$A_1(h) = \frac{t^{k_0} A_0\left(\frac{h}{t}\right) - A_0(h)}{t^{k_0} - 1}.$
A general recurrence relation can be defined for the approximations by

$A_{i+1}(h) = \frac{t^{k_i} A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_i} - 1}$

where $k_{i+1}$ satisfies $A^* = A_{i+1}(h) + O(h^{k_{i+1}})$.
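As an illustrative aside (not part of the original presentation), the recurrence can be implemented directly by filling in a triangular table in which each new column removes one more error term. The sketch below is in Python; the function name richardson_table and the sample base approximation (a central-difference derivative estimate) are assumptions made for the example.

    import math

    def richardson_table(A, h, t, ks, levels):
        # A:      callable returning the base approximation A_0(h) for step size h
        # h:      initial (largest) step size
        # t:      factor by which the step size shrinks from one row to the next
        # ks:     known error exponents k_0, k_1, ... (at least `levels` of them)
        # levels: number of extrapolation levels to apply
        # Returns a triangular table T; T[m][j] approximates A* with error O(h^{k_j}).
        T = []
        for m in range(levels + 1):
            row = [A(h / t**m)]                 # A_0 at the current step size
            for j in range(m):                  # apply the recurrence along the row
                tk = t ** ks[j]
                row.append((tk * row[j] - T[m - 1][j]) / (tk - 1))
            T.append(row)
        return T

    # Example: extrapolating a central-difference estimate of (d/dx) sin x at x = 1.
    # Central differences have an even-power error expansion, so ks = [2, 4, 6, ...].
    A = lambda h: (math.sin(1.0 + h) - math.sin(1.0 - h)) / (2 * h)
    T = richardson_table(A, h=0.5, t=2.0, ks=[2, 4, 6, 8], levels=4)
    print(abs(T[-1][0] - math.cos(1.0)), abs(T[-1][-1] - math.cos(1.0)))

The last entry of the bottom row combines all of the computed step sizes and typically carries a much smaller truncation error than the raw approximation at the smallest step size, in line with the recurrence above.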
Richardson extrapolation can be considered a linear sequence transformation.
Additionally, the general formula can be used to estimate $k_0$ (the leading-order step size behavior of the truncation error) when neither its value nor $A^*$ is known a priori. Such a technique can be useful for quantifying an unknown rate of convergence. Given approximations of $A^*$ from three distinct step sizes $h$, $h/t$, and $h/s$, the exact relationship

$A^* = \frac{t^{k_0} A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_0} - 1} + O(h^{k_1}) = \frac{s^{k_0} A_i\left(\frac{h}{s}\right) - A_i(h)}{s^{k_0} - 1} + O(h^{k_1})$

yields an approximate relationship (note that the notation here may cause some confusion: the two $O$ terms above only indicate the leading-order step size behavior, but their explicit forms differ, so cancelling the two $O$ terms against each other is only approximately valid)

$A_i\left(\frac{h}{t}\right) + \frac{A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_0} - 1} \approx A_i\left(\frac{h}{s}\right) + \frac{A_i\left(\frac{h}{s}\right) - A_i(h)}{s^{k_0} - 1},$

which can be solved numerically to estimate $k_0$ for some arbitrary valid choices of $h$, $s$, and $t$. If $s$ is chosen to be $t^2$ (for example $t = 2$, $s = 4$), this approximate relation reduces to a quadratic equation in $t^{k_0}$, which is readily solved for $k_0$.
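As a sketch of this estimation step (an illustration added here; the helper name estimate_k0 is chosen for the example): with $t = 2$ and $s = 4$, the relevant root of the quadratic is $2^{k_0} \approx \frac{A_i(h) - A_i(h/2)}{A_i(h/2) - A_i(h/4)}$, the familiar empirical order-of-convergence estimate.

    import math

    def estimate_k0(a_h, a_h2, a_h4):
        # a_h, a_h2, a_h4: approximations computed with step sizes h, h/2 and h/4.
        # With t = 2 and s = 4 the approximate relation above reduces to a quadratic
        # in 2**k0 whose relevant root is the ratio of successive differences.
        ratio = (a_h - a_h2) / (a_h2 - a_h4)
        return math.log(ratio, 2)

    # Example with a forward-difference derivative of exp at 0 (true k_0 = 1):
    A = lambda h: (math.exp(h) - 1.0) / h
    print(estimate_k0(A(0.1), A(0.05), A(0.025)))   # close to 1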
The quantity

$R(h) := \frac{t^{k_0} A\left(\frac{h}{t}\right) - A(h)}{t^{k_0} - 1}$

is called the Richardson extrapolation of $A(h)$, and has a higher-order error estimate $O(h^{k_1})$ compared with the $O(h^{k_0})$ error of $A(h)$.
Very often, it is much easier to obtain a given precision by using $R(h)$ rather than $A(h')$ with a much smaller $h'$, where $A(h')$ can cause problems due to limited precision (rounding errors) and/or due to the increasing number of calculations needed (see examples below).
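The following minimal Python sketch makes this comparison concrete (an added illustration; the central-difference example with exact answer 1, $t = 2$, and $k_0 = 2$ are assumptions of the example): one extrapolation step from a moderate step size reaches roughly the accuracy that the raw approximation only attains with a step size 100 times smaller.

    import math

    # A(h): central-difference estimate of (d/dx) exp(x) at x = 0; the exact value is 1.
    A = lambda h: (math.exp(h) - math.exp(-h)) / (2 * h)

    def R(h, t=2.0, k0=2):
        # One Richardson extrapolation step: the error drops from O(h^2) to O(h^4).
        return (t**k0 * A(h / t) - A(h)) / (t**k0 - 1)

    print(abs(A(0.1) - 1.0))     # about 1.7e-3: raw approximation at h = 0.1
    print(abs(A(0.001) - 1.0))   # about 1.7e-7: the raw approximation needs h 100x smaller
    print(abs(R(0.1) - 1.0))     # about 2.1e-7: one extrapolation from h = 0.1 is comparable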
The following pseudocode in MATLAB style demonstrates Richardson extrapolation to help solve the ODE $y'(t) = -y^2$, $y(0) = 1$, with the Trapezoidal method. In this example we halve the step size $h$ each iteration, and so in the discussion above we would have $t = 2$. The error of the Trapezoidal method can be expressed in terms of odd powers so that the error over multiple steps can be expressed in even powers; this leads us to raise $t$ to the second power and to take powers of $4 = 2^2 = t^2$ in the pseudocode. We want to find the value of $y(5)$, which has the exact value of $\frac{1}{5 + 1} = \frac{1}{6} \approx 0.1666\ldots$, since the exact solution of the ODE is $y(t) = \frac{1}{1 + t}$.
This pseudocode assumes that a function called Trapezoidal(f, tStart, tEnd, h, y0) exists which attempts to compute y(tEnd) by performing the trapezoidal method on the function f, with initial condition y(tStart) = y0 and step size h. Note that starting with too small an initial step size can potentially introduce error into the final solution.
Although there are methods designed to help pick the best initial step size, one option is to start with a large step size and then to allow the Richardson extrapolation to reduce the step size each iteration until the error reaches the desired tolerance.
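As a complement to the MATLAB-style pseudocode, the following Python sketch follows the same plan under the stated assumptions (halve $h$ each row, extrapolate with powers of 4, stop when successive extrapolated values agree to a tolerance). The helper names trapezoidal_ode and richardson_ode, the fixed-point solver for the implicit step, and the tolerance are illustrative choices rather than part of the original description.

    def trapezoidal_ode(f, t_start, t_end, h, y0):
        # Integrate y' = f(t, y) from t_start to t_end with the (implicit) trapezoidal
        # rule, solving each implicit step by simple fixed-point iteration.
        n = int(round((t_end - t_start) / h))
        t, y = t_start, y0
        for _ in range(n):
            y_next = y                                   # initial guess for the implicit step
            for _ in range(50):                          # fixed-point iteration (ample here)
                y_next = y + 0.5 * h * (f(t, y) + f(t + h, y_next))
            t, y = t + h, y_next
        return y

    def richardson_ode(f, t_start, t_end, y0, h0=1.0, tol=1e-10, max_rows=12):
        # Halve the step size each row and extrapolate with powers of 4
        # (t = 2 with an even-power error expansion), as described above.
        rows = []
        h = h0
        for m in range(max_rows):
            row = [trapezoidal_ode(f, t_start, t_end, h, y0)]
            for j in range(m):
                factor = 4.0 ** (j + 1)
                row.append((factor * row[j] - rows[m - 1][j]) / (factor - 1.0))
            rows.append(row)
            if m > 0 and abs(row[-1] - rows[m - 1][-1]) < tol:
                return row[-1]                           # extrapolated values have converged
            h /= 2.0
        return rows[-1][-1]

    # y' = -y^2, y(0) = 1, solved to t = 5; the printed value should be close to 1/6.
    print(richardson_ode(lambda t, y: -y * y, 0.0, 5.0, 1.0))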