These are broadly divided into two categories: asymptotic rates and orders of convergence, which describe how quickly a sequence approaches its limit once it is already close to it, and non-asymptotic rates and orders of convergence, which describe how quickly sequences approach their limits from starting points that are not necessarily close to them.
Asymptotic behavior is particularly useful for deciding when to stop a sequence of numerical computations, for instance once a target precision has been reached with an iterative root-finding algorithm, but pre-asymptotic behavior is often crucial for determining whether to begin a sequence of computations at all, since it may be impossible or impractical to ever reach a target precision with a poorly chosen approach.
In formal mathematics, rates of convergence and orders of convergence are often described comparatively using asymptotic notation commonly called "big O notation," which can be used to encompass both of the prior conventions; this is an application of asymptotic analysis.
Where methodological precision is required, these rates and orders of convergence are known specifically as the rates and orders of Q-convergence, short for quotient-convergence, since the limit in question is a quotient of error terms.
The rate of convergence may also be called the asymptotic error constant, and some authors will use rate where this article uses order.
where the absolute value symbols stand for a metric for the space of solutions, such as the uniform norm.
Similar definitions also apply for non-grid discretization schemes such as the polygon meshes of a finite element method or the basis sets in computational chemistry: in general, the appropriate definition of the asymptotic rate involves the asymptotic limit of the ratio of an approximation error term to a power of the discretization scale parameter.
This "smaller rates converge more quickly" behavior among sequences of the same order is standard, but it can be counterintuitive.
as the rate; this is the "number of extra decimals of precision per iterate" for sequences that converge with order 1.
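The "extra decimals of precision per iterate" reading can be checked numerically. A minimal sketch (the rate μ = 0.1 is an illustrative choice, not from the text): for an order-1 sequence with errors e_n = μ^n, each iterate adds −log₁₀ μ decimal digits of precision.

```python
import math

# Q-linear errors e_n = mu**n; the number of correct decimal
# digits is -log10(e_n), so each iterate adds -log10(mu) digits.
mu = 0.1  # illustrative rate: exactly one extra digit per iterate
errors = [mu**n for n in range(1, 6)]
digits = [-math.log10(e) for e in errors]
gains = [digits[i + 1] - digits[i] for i in range(len(digits) - 1)]
print(gains)
```

With μ = 0.5 instead, each gain would be −log₁₀ 0.5 ≈ 0.301 digits per iterate.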
For example, the secant method, when converging to a regular, simple root, has an order of the golden ratio φ ≈ 1.618.
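The golden-ratio order can be observed empirically. A hedged sketch (the test function f(x) = x² − 2 and the starting points are my own choices): empirical order estimates computed from successive errors fluctuate around φ before rounding error dominates.

```python
import math

def f(x):
    return x * x - 2.0  # simple root at sqrt(2)

# Secant iteration: two starting points, no derivative needed.
xs = [1.0, 2.0]
for _ in range(12):
    f0, f1 = f(xs[-2]), f(xs[-1])
    if f1 == f0:
        break
    xs.append(xs[-1] - f1 * (xs[-1] - xs[-2]) / (f1 - f0))
    if abs(xs[-1] - xs[-2]) < 1e-15:
        break

root = xs[-1]
errs = [abs(x - math.sqrt(2.0)) for x in xs]
# Empirical order p_n = log(e_{n+1}/e_n) / log(e_n/e_{n-1});
# for a simple root these estimates hover near phi ~ 1.618,
# so only well-resolved (not rounding-dominated) errors are used.
orders = [
    math.log(errs[i + 1] / errs[i]) / math.log(errs[i] / errs[i - 1])
    for i in range(3, 6)
    if errs[i + 1] > 1e-12
]
```

The estimates oscillate around φ rather than settling exactly on it, since the order is only attained in the limit.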
More precisely, the limits imply the leading order error is exactly
In cases like these, a closely related but more technical definition of rate of convergence called R-convergence is more appropriate.
converge more quickly for a given order, so these greatest-rate-lower-bound error-upper-bound sequences are those that have the greatest possible
Recurrent sequences of the form x_{k+1} = f(x_k), called fixed point iterations, define discrete time autonomous dynamical systems and have important general applications in mathematics through various fixed-point theorems about their convergence behavior.
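As an illustrative sketch (the map cos and the starting point are my own choices, not from the text): the iteration x_{k+1} = cos(x_k) is a contraction near its unique fixed point x* ≈ 0.739, so Banach's fixed-point theorem guarantees convergence, and the convergence is Q-linear with rate |f′(x*)| = sin(x*) ≈ 0.674.

```python
import math

# Fixed-point iteration x_{k+1} = cos(x_k), converging Q-linearly
# to the unique fixed point of cos (~0.739085) with rate
# |cos'(x*)| = sin(x*) ~ 0.674.
x = 1.0
for _ in range(100):
    x_next = math.cos(x)
    if abs(x_next - x) < 1e-12:
        x = x_next
        break
    x = x_next
```

Because the rate is about 0.674, the iteration gains only −log₁₀ 0.674 ≈ 0.17 decimal digits per step, so many iterations are needed for high accuracy.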
Such transformations may reduce the computational costs of approximating the limits of the original sequences.
One example of series acceleration by sequence transformation is Aitken's delta-squared process.
Applied to a sequence that converges linearly, Aitken's process produces a transformed sequence that still converges linearly (except for pathologically designed special cases), but faster in the sense that the ratio of the transformed errors to the original errors tends to zero.
On the other hand, if the convergence is already of order ≥ 2, Aitken's method will bring no improvement.
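A hedged sketch of Aitken's delta-squared process (the test iteration x_{k+1} = exp(−x_k) and all variable names are my own choices): applied to a Q-linearly convergent sequence, the transform a_n = x_n − (x_{n+1} − x_n)² / (x_{n+2} − 2x_{n+1} + x_n) yields markedly smaller errors at the same index.

```python
import math

# Linearly convergent fixed-point iteration x_{k+1} = exp(-x_k),
# converging to the omega constant W(1) ~ 0.567143.
xs = [0.0]
for _ in range(20):
    xs.append(math.exp(-xs[-1]))

# Aitken's delta-squared transform of the same sequence.
aitken = [
    xs[n] - (xs[n + 1] - xs[n]) ** 2 / (xs[n + 2] - 2 * xs[n + 1] + xs[n])
    for n in range(len(xs) - 2)
]

# Reference limit from a long plain iteration.
L = 0.0
for _ in range(200):
    L = math.exp(-L)
```

Comparing |aitken[n] − L| against |xs[n] − L| for moderate n shows the acceleration; near machine precision the second differences in the denominator become unreliable, which is why the transform is applied only to well-resolved iterates here.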
Discretization scale parameters may be spacings of a regular grid in space or in time, the inverse of the number of points of a grid in one dimension, an average or maximum distance between points in a polygon mesh, the single-dimension spacings of an irregular sparse grid, or a characteristic quantum of energy or momentum in a quantum mechanical basis set.
applying the forward Euler method for numerical discretization of the linear test equation dy/dt = −κy using any regular grid spacing h generates a sequence of approximations as follows: (y_{n+1} − y_n)/h = −κ y_n, which implies the first-order linear recurrence with constant coefficients y_{n+1} = (1 − hκ) y_n. Given y(0) = y_0, this recurrence yields y_n = y_0 (1 − hκ)^n, which converges pointwise to the exact solution y_0 e^{−κt} at each fixed time t = nh as h → 0, but it does not converge uniformly on the unbounded set of all positive real values,
is asymptotically equivalent to the shifted, exponentiated, and rescaled sequence of iterate errors
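The first-order convergence of the forward Euler scheme can be checked numerically. A sketch, assuming the linear test problem dy/dt = −y with y(0) = 1 integrated to t = 1 (the test problem and spacings are my choices): halving the grid spacing h should roughly halve the error at the final time.

```python
import math

# Forward Euler for dy/dt = -k*y, y(0) = 1, integrated to t = 1.
# Halving h should roughly halve the final-time error: the
# signature of first-order convergence in the spacing h.
def euler_error(h, k=1.0, t_end=1.0):
    y = 1.0
    for _ in range(round(t_end / h)):
        y += h * (-k * y)
    return abs(y - math.exp(-k * t_end))

e_coarse = euler_error(0.1)
e_fine = euler_error(0.05)
ratio = e_coarse / e_fine  # ~2 for a first-order method
```

A second-order method would instead give a ratio near 4 under the same halving of h.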
Among formal techniques, Lyapunov theory is one of the most powerful and widely applied frameworks for characterizing and analyzing non-asymptotic convergence behavior.
For iterative methods, one common practical approach is to discuss these rates in terms of the number of iterates or the computer time required to reach close neighborhoods of a limit from starting points far from the limit.
The non-asymptotic rate is then an inverse of that number of iterates or computer time.
In practical applications, an iterative method that requires fewer steps or less computer time than another to reach a target accuracy is said to have converged faster than the other, even if its asymptotic convergence is slower.
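This kind of practical comparison can be sketched as follows (the methods, test function, interval, and tolerance are illustrative choices, not from the text): counting the iterations that bisection and Newton's method each need to reach a target accuracy from starting points far from the root.

```python
# Count iterations each method needs to reach |x^2 - 2| < 1e-10,
# starting far from the root sqrt(2): a practical, non-asymptotic
# comparison of convergence speed.
def f(x):
    return x * x - 2.0

# Bisection on [0, 10]: guaranteed but linear, one bit per step.
lo, hi = 0.0, 10.0
bisect_iters = 0
x = (lo + hi) / 2.0
while abs(f(x)) >= 1e-10:
    if f(lo) * f(x) < 0:
        hi = x
    else:
        lo = x
    x = (lo + hi) / 2.0
    bisect_iters += 1

# Newton's method from x0 = 10: quadratic once near the root.
x = 10.0
newton_iters = 0
while abs(f(x)) >= 1e-10:
    x -= f(x) / (2.0 * x)  # f'(x) = 2x
    newton_iters += 1
```

Here Newton's method reaches the tolerance in under a dozen steps while bisection needs several dozen, even though both eventually converge; per-iteration cost (one derivative evaluation versus none) would also enter a full comparison.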
For discretized approximation methods, similar approaches can be used, with a discretization scale parameter such as the inverse of the number of grid or mesh points or a Fourier series cutoff frequency playing the role of the inverse iterate number, though this approach is less common.
For any problem, there is a greatest discretization scale parameter compatible with a desired accuracy of approximation, and it may not be as small as required for the asymptotic rate and order of convergence to provide accurate estimates of the error.