Preconditioner

In mathematics, preconditioning is the application of a transformation, called the preconditioner, that conditions a given problem into a form that is more suitable for numerical solving methods.

Preconditioning is typically related to reducing the condition number of the problem.

In linear algebra, a preconditioner $P$ of a matrix $A$ is a matrix such that $P^{-1}A$ has a smaller condition number than $A$. Preconditioners are useful in iterative methods for solving a linear system $Ax = b$, since the rate of convergence of most iterative linear solvers increases as the condition number of the matrix decreases as a result of preconditioning.

Iterative solvers can be used as matrix-free methods, i.e., they become the only choice if the coefficient matrix $A$ is not stored explicitly but is accessed only by evaluating matrix-vector products.
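The following sketch illustrates the effect of preconditioning on an iterative solver using SciPy's conjugate gradient method. The model matrix (a 2-D Poisson operator), the tolerances, the incomplete LU factorization used as the preconditioner (incomplete Cholesky would be the textbook choice for a symmetric positive-definite matrix, but SciPy ships ILU), and the helper name solve_and_count are all assumptions made purely for this example.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D Poisson (5-point Laplacian) test matrix on an m-by-m grid.
m = 60
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
I_m = sp.identity(m)
A = (sp.kron(I_m, T) + sp.kron(T, I_m)).tocsc()
b = np.ones(A.shape[0])

def solve_and_count(M=None):
    """Run CG with optional preconditioner M and return (iterations, exit flag)."""
    iters = [0]
    def cb(xk):
        iters[0] += 1
    x, info = spla.cg(A, b, M=M, maxiter=5000, callback=cb)
    return iters[0], info

# Incomplete LU factorization wrapped as a preconditioner, i.e. an operator
# applying an approximation of A^{-1}.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M_ilu = spla.LinearOperator(A.shape, matvec=ilu.solve)

print("CG without preconditioner :", solve_and_count())
print("CG with ILU preconditioner:", solve_and_count(M_ilu))
```

Typically the preconditioned run needs far fewer iterations, at the price of computing and applying the incomplete factorization.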

Two-sided preconditioning is common, for example, when the preconditioners are diagonal and scaling is applied both to the columns and the rows of the original matrix $A$, e.g., in order to decrease the dynamic range of the matrix entries.

Small condition numbers favor fast convergence of iterative solvers and improve the stability of the solution with respect to perturbations in the system matrix and the right-hand side, e.g., allowing for more aggressive quantization of the matrix entries using lower computer precision.
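A small numerical sketch of two-sided diagonal scaling follows; the matrix size, the random data, and the scaling factors are made up for illustration. It shows that symmetrically rescaling the rows and columns of a badly scaled symmetric positive-definite matrix can reduce its condition number by many orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)            # well-conditioned SPD "core"
d = 10.0 ** rng.uniform(-3, 3, n)      # row/column scales spanning six orders of magnitude
A = np.diag(d) @ S @ np.diag(d)        # badly scaled SPD matrix

# Two-sided (Jacobi) diagonal scaling with P = Q = diag(sqrt(diag(A))).
p = np.sqrt(np.diag(A))
A_scaled = A / np.outer(p, p)          # P^{-1} A Q^{-1}, unit diagonal

print("cond(A)        = %.3e" % np.linalg.cond(A))
print("cond(scaled A) = %.3e" % np.linalg.cond(A_scaled))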

The cheapest preconditioner is $P = I$, which leaves the original linear system unchanged, so the preconditioner does nothing. At the other extreme, the choice $P = A$ gives $P^{-1}A = I$, which has the optimal condition number of 1 and requires a single iteration for convergence; however, in this case $P^{-1} = A^{-1}$, and applying the preconditioner is as difficult as solving the original system.

One therefore chooses $P$ somewhere between these two extremes, in an attempt to achieve a minimal number of linear iterations while keeping the operator $P^{-1}$ as simple as possible.
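In symbols (using the left-preconditioned form $P^{-1}Ax = P^{-1}b$ assumed above), the two extremes are

$$P = I \;\Rightarrow\; P^{-1}A = A, \qquad P = A \;\Rightarrow\; P^{-1}A = I,\ \ \kappa(P^{-1}A) = 1,$$

with no improvement in the first case and, in the second, with $P^{-1} = A^{-1}$, so that applying the preconditioner is as hard as solving the original problem.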

Preconditioned iterative methods for $Ax - b = 0$ are, in most cases, mathematically equivalent to standard iterative methods applied to the preconditioned system $P^{-1}(Ax - b) = 0$.

For a symmetric positive-definite matrix $A$, the preconditioner $P$ is typically chosen to be symmetric positive definite as well. In this case, the desired effect of applying the preconditioner is to make the quadratic form of the preconditioned operator $P^{-1}A$, with respect to the $P$-based scalar product, nearly spherical.

Such preconditioners may be very efficient in practice; however, their behavior is hard to predict theoretically.

The most common use of preconditioning is for iterative solution of linear systems resulting from approximations of partial differential equations.

In such a case, the goal of optimal preconditioning is, on the one hand, to make the spectral condition number of $P^{-1}A$ bounded from above by a constant independent of the matrix size, which is called spectrally equivalent preconditioning by D'yakonov, and, on the other hand, to keep the cost of applying $P^{-1}$ comparable to the cost of multiplying $A$ by a vector.
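A small sketch of a classical model problem illustrates spectral equivalence; the discretization, coefficient, and grid sizes are assumptions for this example. A variable-coefficient 1-D diffusion operator $A_n$ is preconditioned by the constant-coefficient Laplacian $P_n$; the generalized (spectral) condition number of the pair stays bounded by $\max c / \min c$ independently of the mesh, while the condition number of $A_n$ alone grows like $O(n^2)$.

```python
import numpy as np
from scipy.linalg import eigh

def matrices(n):
    # D: (n+1) x n forward-difference matrix on a uniform grid with Dirichlet ends.
    D = np.zeros((n + 1, n))
    np.fill_diagonal(D, -1.0)            # D[i, i]     = -1
    np.fill_diagonal(D[1:, :], 1.0)      # D[i + 1, i] = +1
    x_mid = (np.arange(n + 1) + 0.5) / (n + 1)
    c = 1.0 + x_mid                      # diffusion coefficient in (1, 2)
    A = D.T @ np.diag(c) @ D             # variable-coefficient stiffness matrix
    P = D.T @ D                          # constant-coefficient Laplacian (preconditioner)
    return A, P

for n in (50, 100, 200):
    A, P = matrices(n)
    w = eigh(A, P, eigvals_only=True)    # generalized eigenvalues of (A, P)
    print(f"n={n:4d}  cond(A)={np.linalg.cond(A):12.1f}  "
          f"spectral cond(P^-1 A)={w.max() / w.min():6.3f}")
```

The last column stays below 2 for every $n$, which is exactly the behavior spectrally equivalent preconditioning asks for.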

A sparse approximate inverse preconditioner chooses $T \approx A^{-1}$ by minimizing $\|AT - I\|$. Under the Frobenius norm, this reduces to solving numerous independent least-squares problems (one for every column).

The entries in $T$ must be restricted to some sparsity pattern, or the problem remains as difficult and time-consuming as finding the exact inverse of $A$.

The method was introduced by M.J. Grote and T. Huckle together with an approach to selecting sparsity patterns.[3]
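The column-wise least-squares idea can be sketched as follows. This is a simplified, dense illustration with a fixed sparsity pattern; the helper spai_fixed_pattern and the test matrix are assumptions, and it is not the adaptive pattern-selection algorithm of Grote and Huckle.

```python
import numpy as np
import scipy.sparse as sp

def spai_fixed_pattern(A, pattern):
    """Columns of M solve min ||A[:, S_j] m - e_j||_2, with S_j the allowed nonzeros."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        rows = np.nonzero(pattern[:, j])[0]          # prescribed sparsity of column j
        e_j = np.zeros(n)
        e_j[j] = 1.0
        m_j, *_ = np.linalg.lstsq(A[:, rows], e_j, rcond=None)
        M[rows, j] = m_j
    return M

# Small demo: 1-D Laplacian, sparsity pattern of A itself (a common first choice).
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).toarray()
M = spai_fixed_pattern(A, A != 0)
print("||A M - I||_F :", np.linalg.norm(A @ M - np.eye(n), "fro"))
print("cond(A)       :", np.linalg.cond(A))
print("cond(A M)     :", np.linalg.cond(A @ M))
```

The resulting $M$ is then applied as an explicit approximate inverse, i.e., preconditioning costs only one sparse matrix-vector product per iteration.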

Eigenvalue problems can be framed in several alternative ways, each leading to its own preconditioning.

This gives the inverse iteration, which normally converges to the eigenvector corresponding to the eigenvalue closest to the shift.

The Rayleigh quotient iteration is a shift-and-invert method with a variable shift.
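A minimal dense sketch of Rayleigh quotient iteration follows; the function name, tolerances, and the random symmetric test matrix are assumptions for illustration. Each step is a shift-and-invert solve in which the shift is updated to the current Rayleigh quotient.

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, tol=1e-12, maxiter=50):
    x = x0 / np.linalg.norm(x0)
    mu = x @ A @ x                         # initial shift = Rayleigh quotient
    for _ in range(maxiter):
        try:
            # Shift-and-invert step with the current (variable) shift mu.
            y = np.linalg.solve(A - mu * np.eye(A.shape[0]), x)
        except np.linalg.LinAlgError:
            break                          # shift hit an eigenvalue exactly
        x = y / np.linalg.norm(y)
        mu_new = x @ A @ x
        if abs(mu_new - mu) < tol * abs(mu_new):
            mu = mu_new
            break
        mu = mu_new
    return mu, x

rng = np.random.default_rng(1)
B = rng.standard_normal((100, 100))
A = (B + B.T) / 2                          # symmetric test matrix
mu, x = rayleigh_quotient_iteration(A, rng.standard_normal(100))
print("converged eigenvalue   :", mu)
print("residual ||Ax - mu x|| :", np.linalg.norm(A @ x - mu * x))
```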

Spectral transformations are specific to eigenvalue problems and have no analogs for linear systems.

They require accurate numerical calculation of the transformation involved, which becomes the main bottleneck for large problems.

To make a close connection to linear systems, let us suppose that the targeted eigenvalue $\lambda_\star$ is known (approximately). Then the corresponding eigenvector can be computed from the homogeneous linear system $(A - \lambda_\star I)x = 0$, to which the preconditioning techniques for linear systems apply directly. This allows one to easily utilize for eigenvalue problems the vast variety of preconditioners developed for linear systems.

For such preconditioned eigenvalue solvers, a comprehensive theoretical convergence analysis is much more difficult, compared to the linear systems case, even for the simplest methods, such as the Richardson iteration.

In optimization, preconditioning can be viewed as changing the geometry of the vector space with the goal of making the level sets look like circles.[5]

In this case the preconditioned gradient points closer to the extremum, as in the figure, which speeds up convergence.

For example, the minimizer of the quadratic function $F(x) = \tfrac{1}{2}x^{T}Ax - x^{T}b$, where $A$ is a real symmetric positive-definite matrix, is exactly the solution of the linear equation $Ax = b$.
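A small sketch of preconditioned gradient descent on such a quadratic follows; the diagonal test problem, the Jacobi choice $T = \mathrm{diag}(A)^{-1}$, and the helper name gradient_descent are assumptions for illustration. Multiplying the gradient $Ax - b$ by an approximate inverse of $A$ reshapes the elongated level sets into nearly spherical ones, so far fewer steps are needed.

```python
import numpy as np

def gradient_descent(A, b, T=None, tol=1e-8, maxiter=10000):
    """Preconditioned gradient descent with exact line search for F(x) = 1/2 x^T A x - b^T x."""
    n = len(b)
    T = np.eye(n) if T is None else T
    x = np.zeros(n)
    for k in range(maxiter):
        g = A @ x - b                       # gradient of the quadratic
        if np.linalg.norm(g) <= tol * np.linalg.norm(b):
            return x, k                     # converged
        d = T @ g                           # preconditioned gradient direction
        alpha = (g @ d) / (d @ (A @ d))     # exact line search for a quadratic
        x = x - alpha * d
    return x, maxiter

# Ill-conditioned diagonal quadratic (condition number 100); here the Jacobi
# preconditioner T = diag(A)^{-1} makes the level sets exactly spherical.
diag_A = np.logspace(0.0, 2.0, 50)
A = np.diag(diag_A)
b = np.ones(50)
_, it_plain = gradient_descent(A, b)
_, it_prec = gradient_descent(A, b, T=np.diag(1.0 / diag_A))
print("steepest descent iterations      :", it_plain)
print("preconditioned descent iterations:", it_prec)
```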

Applied to minimizing the Rayleigh quotient $\rho(x) = \frac{x^{T}Ax}{x^{T}x}$, the preconditioned gradient method takes the form $x_{n+1} = x_n - \gamma_n T(Ax_n - \rho(x_n)x_n)$. This is an analog of preconditioned Richardson iteration for solving eigenvalue problems.
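The sketch below implements that update for a symmetric positive-definite $A$; the choice $T = (A + I)^{-1}$ is a stand-in used purely for illustration (in practice $T$ would be a cheap approximate inverse), and the test matrix and tolerances are assumptions.

```python
import numpy as np

def preconditioned_eig_iteration(A, T, x0, gamma=1.0, maxiter=500, tol=1e-8):
    """Preconditioned Richardson-type iteration toward the smallest eigenpair of A."""
    x = x0 / np.linalg.norm(x0)
    for k in range(maxiter):
        rho = x @ A @ x                    # Rayleigh quotient (||x|| = 1)
        r = A @ x - rho * x                # eigenvalue residual
        if np.linalg.norm(r) < tol:
            return rho, x, k
        x = x - gamma * (T @ r)            # x_{k+1} = x_k - gamma * T (A x_k - rho x_k)
        x = x / np.linalg.norm(x)
    return rho, x, maxiter

rng = np.random.default_rng(2)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0, 100.0, n)) @ Q.T     # SPD with smallest eigenvalue 1
T = np.linalg.inv(A + np.eye(n))                      # illustrative stand-in for T ~ A^{-1}
rho, x, k = preconditioned_eig_iteration(A, T, rng.standard_normal(n))
print(f"smallest eigenvalue ~ {rho:.6f}  (found in {k} iterations)")
```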

The increased cost of updating the preconditioner can easily override the positive effect of faster convergence.

Figure: Illustration of gradient descent.