In computational mathematics, an iterative method is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the i-th approximation (called an "iterate") is derived from the previous ones.
In contrast, direct methods attempt to solve the problem by a finite sequence of operations.
In the absence of rounding errors, direct methods would deliver an exact solution (for example, solving a linear system of equations Ax = b by Gaussian elimination).[1]
If an equation can be put into the form f(x) = x, and a solution x is an attractive fixed point of the function f, then one may begin with a point x1 in the basin of attraction of x and let xn+1 = f(xn) for n ≥ 1; the sequence {xn}n ≥ 1 will then converge to the solution x.
If the function f is continuously differentiable, a sufficient condition for convergence is that the spectral radius of the derivative is strictly bounded by one in a neighborhood of the fixed point.
If this condition holds at the fixed point, then a sufficiently small neighborhood (basin of attraction) must exist.
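As a concrete illustration of the fixed-point principle above, here is a minimal Python sketch; the helper name fixed_point_iterate, the test function cos, and the tolerance are illustrative choices, not taken from this article:

import math

def fixed_point_iterate(f, x0, tol=1e-12, max_iter=100):
    """Iterate x_{n+1} = f(x_n) until successive iterates differ by less than tol.

    This converges when x0 lies in the basin of attraction of a fixed point
    at which |f'(x)| < 1 (the scalar version of the spectral-radius condition
    mentioned above).
    """
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter iterations")

# Example: f(x) = cos(x) has an attractive fixed point near 0.739
# (|f'(x)| = |sin(x)| < 1 there), so iteration from x1 = 1.0 converges.
print(fixed_point_iterate(math.cos, 1.0))  # ≈ 0.7390851332151607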
Stationary iterative methods solve a linear system with an operator approximating the original one and, based on a measurement of the error in the result (the residual), form a "correction equation" for which this process is repeated.
While these methods are simple to derive, implement, and analyze, convergence is only guaranteed for a limited class of matrices.
For a linear system Ax = b with exact solution x*, define the error of the k-th iterate xk by ek = xk − x*. An iterative method is called linear if there exists a matrix C such that ek+1 = C ek for all k ≥ 0; C is called the iteration matrix. Such a method converges for every initial guess if and only if the spectral radius of C is smaller than unity, that is, ρ(C) < 1. The basic iterative methods work by splitting the matrix A into A = M − N, with M chosen to be easily invertible; each step then solves M xk+1 = N xk + b, and the corresponding iteration matrix is C = M⁻¹N.
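To make the splitting concrete, here is a minimal NumPy sketch of the Jacobi method, one standard instance of the splitting A = M − N in which M is the diagonal of A; the 3×3 test system and the stopping tolerance are illustrative assumptions, not taken from this article:

import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Stationary iteration M x_{k+1} = N x_k + b with M = diag(A) and N = M - A.

    Converges whenever the spectral radius of the iteration matrix
    C = M⁻¹N is smaller than one, e.g. for strictly diagonally dominant A.
    """
    d = np.diag(A)                    # M = diag(A), stored as a vector
    N = np.diag(d) - A                # so that A = M - N
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = (N @ x + b) / d       # solve M x_{k+1} = N x_k + b
        if np.linalg.norm(A @ x_new - b) < tol:   # residual-based stopping rule
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Strictly diagonally dominant test system (illustrative).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = jacobi(A, b)
print(x, iters, np.linalg.norm(A @ x - b))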
Krylov subspace methods such as the conjugate gradient method would, in exact arithmetic, deliver the exact solution after at most N iterations, where N is the size of the system. However, in the presence of rounding errors this statement does not hold; moreover, in practice N can be very large, and the iterative process reaches sufficient accuracy already far earlier. The analysis of these methods is hard, depending on a complicated function of the spectrum of the operator.
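As an illustration of this behaviour, the following NumPy sketch of an unpreconditioned conjugate gradient iteration is a minimal example under assumed conditions: the random symmetric positive definite test matrix, its size N = 200, and the tolerance are all illustrative choices, and a production code would normally call a library routine instead. Because the test matrix is well conditioned, the iteration reaches the tolerance in far fewer than N steps:

import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=None):
    """Plain conjugate gradient for a symmetric positive definite matrix A.

    In exact arithmetic the method terminates in at most N steps
    (N = system size); in floating point it is used as an iterative
    method, and its speed depends on the spectrum of A.
    """
    n = len(b)
    max_iter = n if max_iter is None else max_iter
    x = np.zeros(n)
    r = b - A @ x            # initial residual
    p = r.copy()             # initial search direction
    rs = r @ r
    for k in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k + 1
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

# Illustrative well-conditioned SPD system of size N = 200.
rng = np.random.default_rng(0)
Q = rng.standard_normal((200, 200))
A = Q @ Q.T + 200.0 * np.eye(200)   # the shift keeps the condition number modest
b = rng.standard_normal(200)
x, iters = conjugate_gradient(A, b)
print(iters, np.linalg.norm(A @ x - b))   # typically a few dozen iterations, not 200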
Many mathematical methods rely on successive approximation; Jamshīd al-Kāshī, for example, used iterative methods to calculate the sine of 1° and π to high precision in The Treatise of Chord and Sine.
An early iterative method for solving a linear system appeared in a letter of Gauss to a student of his.
The theory of stationary iterative methods was solidly established with the work of D. M. Young, starting in the 1950s.
The conjugate gradient method was also invented in the 1950s, with independent developments by Cornelius Lanczos, Magnus Hestenes and Eduard Stiefel, but its nature and applicability were misunderstood at the time.
Only in the 1970s was it realized that conjugacy-based methods work very well for partial differential equations, especially the elliptic type.