Numerical linear algebra

Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics.

Numerical linear algebra aims to solve problems of continuous mathematics using finite precision computers, so its applications to the natural and social sciences are as vast as the applications of continuous mathematics.

It is often a fundamental part of engineering and computational science problems, such as image and signal processing, telecommunication, computational finance, materials science simulations, structural biology, data mining, bioinformatics, and fluid dynamics.

Noting the broad applications of numerical linear algebra, Lloyd N. Trefethen and David Bau, III argue that it is "as fundamental to the mathematical sciences as calculus and differential equations",[1]: x  even though it is a comparatively small field.[1]: ix

Common problems in numerical linear algebra include obtaining matrix decompositions like the singular value decomposition, the QR factorization, the LU factorization, or the eigendecomposition, which can then be used to answer common linear algebraic problems like solving linear systems of equations, locating eigenvalues, or least squares optimisation.
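Each of these decompositions can be computed with standard library routines and then reused to answer a downstream question. A brief sketch using NumPy and SciPy (the random matrix and right-hand side here are stand-ins for real data):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)

U, s, Vt = np.linalg.svd(A)        # singular value decomposition
Q, R = np.linalg.qr(A)             # QR factorization
P, L, U_lu = lu(A)                 # pivoted LU factorization: A = P @ L @ U_lu
w, V = np.linalg.eig(A)            # eigendecomposition

# A factorization then answers a downstream question, e.g. solving
# Ax = b via QR: solve R x = Q^T b by back substitution.
x = solve_triangular(R, Q.T @ b)
print(np.allclose(A @ x, b))       # True
```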

Numerical linear algebra's central concern is developing algorithms that do not introduce errors when applied to real data on a finite precision computer; this is often achieved by iterative methods rather than direct ones.

Numerical linear algebra was developed by computer pioneers like John von Neumann, Alan Turing, James H. Wilkinson, Alston Scott Householder, George Forsythe, and Heinz Rutishauser, in order to apply the earliest computers to problems in continuous mathematics, such as ballistics problems and the solutions to systems of partial differential equations.[2]

The first serious attempt to minimize computer error in the application of algorithms to real data was John von Neumann and Herman Goldstine's work in 1947.[3]

The field has grown as technology has increasingly enabled researchers to solve complex problems on extremely large high-precision matrices, and some numerical algorithms have grown in prominence as technologies like parallel computing have made them practical approaches to scientific problems.[2]

For many problems in applied linear algebra, it is useful to adopt the perspective of a matrix as being a concatenation of column vectors.

For example, when solving the linear system Ax = b, rather than understanding x as the product of A⁻¹ with b, it is helpful to think of x as the vector of coefficients in the linear expansion of b in the basis formed by the columns of A.[1]: 8
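A short NumPy sketch of this viewpoint, on an arbitrary invented 3×3 system:

```python
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 5.0, 2.0])

x = np.linalg.solve(A, b)

# b is recovered as a linear combination of the columns of A,
# with the entries of x as the coefficients.
recombined = sum(x[j] * A[:, j] for j in range(A.shape[1]))
print(np.allclose(recombined, b))  # True
```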

Thinking of matrices as a concatenation of columns is also a practical approach for the purposes of matrix algorithms.

For example, given a matrix A with columns a₁, …, aₙ and vectors x and y, we could use the column partitioning perspective to compute y := Ax + y as y := y + x₁a₁ + x₂a₂ + ⋯ + xₙaₙ.
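In code, this update touches one column of A per step. A sketch (the helper axpy_by_columns is just for illustration; in practice one would write y += A @ x directly):

```python
import numpy as np

def axpy_by_columns(A, x, y):
    """Compute y := A @ x + y one column at a time."""
    m, n = A.shape
    for q in range(n):
        y = y + x[q] * A[:, q]   # add x_q times the q-th column of A
    return y

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
x = rng.standard_normal(3)
y = rng.standard_normal(5)

print(np.allclose(axpy_by_columns(A, x, y), A @ x + y))  # True
```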

The singular value decomposition of a matrix A is the factorization A = UΣV*, in which the matrices U and V are unitary and the matrix Σ is diagonal; the diagonal entries of Σ are the singular values of A. An LU factorization of a matrix A consists of a lower triangular matrix L and an upper triangular matrix U so that A = LU. The matrix U is found by an upper triangularization procedure which involves left-multiplying A by a series of matrices M₁, …, Mₙ₋₁ to form the product Mₙ₋₁⋯M₁A = U.
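That triangularization can be sketched directly in NumPy: each M below is an elementary elimination matrix, and accumulating the multipliers yields L. This assumes nonzero pivots and is illustrative rather than production code:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
n = A.shape[0]

U = A.copy()
L = np.eye(n)
for k in range(n - 1):
    M = np.eye(n)
    M[k + 1:, k] = -U[k + 1:, k] / U[k, k]  # multipliers below the pivot
    U = M @ U                    # left-multiply: zeros column k below diagonal
    L[k + 1:, k] = -M[k + 1:, k] # accumulate L = M1^-1 ... M_{n-1}^-1

print(np.allclose(L @ U, A))          # True: A = L @ U
print(np.allclose(U, np.triu(U)))     # U is upper triangular (up to roundoff)
```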

Because it is not possible to write a program that finds the exact roots of an arbitrary polynomial in finite time, any general eigenvalue solver must necessarily be iterative.[1]: 148
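Power iteration is perhaps the simplest such iterative eigenvalue method; the sketch below assumes a dominant, well-separated eigenvalue, and the function name power_iteration is illustrative:

```python
import numpy as np

def power_iteration(A, iters=200):
    """Iteratively approximate the dominant eigenpair of A."""
    v = np.random.default_rng(3).standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)        # renormalize each step
    lam = v @ A @ v                   # Rayleigh quotient estimate
    return lam, v

A = np.array([[4.0, 1.0], [2.0, 3.0]])  # eigenvalues 5 and 2
lam, v = power_iteration(A)
print(round(lam, 6))                    # approximately 5.0
```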

Naive programs for Gaussian elimination are notoriously unstable, producing huge errors when applied to matrices with many significant digits.[2]

The simplest solution is to introduce pivoting, which produces a modified Gaussian elimination algorithm that is stable.[1]: 151
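The effect is easy to reproduce on the classic small example with a tiny pivot; the solver below is deliberately naive, while SciPy's solve pivots by default:

```python
import numpy as np
from scipy.linalg import solve

def solve_no_pivot(A, b):
    """Gaussian elimination without pivoting (deliberately naive)."""
    A, b = A.astype(float), b.astype(float)
    n = len(b)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]      # blows up when the pivot is tiny
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[1e-20, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])

print(solve_no_pivot(A, b))  # wildly wrong: first component comes out 0
print(solve(A, b))           # pivoted LU: close to the true [1, 1]
```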

Numerical linear algebra characteristically approaches matrices as a concatenation of column vectors.[1]: 8

Many different decompositions can be used to solve the linear problem, depending on the characteristics of the matrix A and the vectors x and b, which may make one factorization much easier to obtain than others.[1]: 147 [4]: 99
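For instance, if A is symmetric positive definite, a Cholesky factorization applies and is roughly half the cost of a general LU factorization. A sketch with SciPy, on a matrix made positive definite by construction:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve, lu_factor, lu_solve

rng = np.random.default_rng(4)
B = rng.standard_normal((200, 200))
A = B @ B.T + 200 * np.eye(200)   # symmetric positive definite by construction
b = rng.standard_normal(200)

# Exploiting structure: Cholesky factorization (A = L @ L.T)
c, low = cho_factor(A)
x_chol = cho_solve((c, low), b)

# Ignoring structure: general pivoted LU factorization
lu_piv = lu_factor(A)
x_lu = lu_solve(lu_piv, b)

print(np.allclose(x_chol, x_lu))  # True: same solution, different cost
```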

Matrix decompositions suggest a number of ways to solve the least-squares problem, in which we seek an x that minimizes the residual r = b − Ax, as in the regression problem.
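One standard route is the QR factorization: with A = QR, the minimizer satisfies Rx = Qᵀb. A sketch on an invented small regression problem:

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(5)
A = rng.standard_normal((50, 3))                  # 50 observations, 3 predictors
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(50)   # noisy right-hand side

# Minimize ||r|| = ||b - A x|| via the reduced QR factorization
Q, R = np.linalg.qr(A)            # A = Q @ R, Q has orthonormal columns
x = solve_triangular(R, Q.T @ b)

print(x)                          # close to x_true
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```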

Instability is the tendency of computer algorithms, which depend on floating-point arithmetic, to produce results that differ dramatically from the exact mathematical solution to a problem.

When a matrix contains real data with many significant digits, many algorithms for solving problems like linear systems of equations or least squares optimisation may produce highly inaccurate results.

Creating stable algorithms for ill-conditioned problems is a central concern in numerical linear algebra.[1]: 140
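The Hilbert matrix is a standard example of an ill-conditioned problem; in the sketch below, even SciPy's stable pivoted solver returns solutions whose error grows with the condition number:

```python
import numpy as np
from scipy.linalg import hilbert, solve

for n in (5, 10, 13):
    H = hilbert(n)           # notoriously ill-conditioned
    x_true = np.ones(n)
    b = H @ x_true
    x = solve(H, b)          # a stable, pivoted solver
    print(n, np.linalg.cond(H), np.linalg.norm(x - x_true))
```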

Another classical finding in numerical linear algebra is that Gaussian elimination is unstable but becomes stable with the introduction of pivoting.

There are two reasons that iterative algorithms are an important part of numerical linear algebra.

First, many important numerical problems have no direct solution; in order to find the eigenvalues and eigenvectors of an arbitrary matrix, we can only adopt an iterative approach. Second, noniterative algorithms for an arbitrary m × m matrix require O(m³) time, whereas iterative approaches can exploit features of a matrix, such as sparsity, to produce a good approximation far more cheaply.

The core of many iterative methods in numerical linear algebra is the projection of a matrix onto a lower dimensional Krylov subspace, which allows features of a high-dimensional matrix to be approximated by iteratively computing the equivalent features of similar matrices starting in a low dimension space and moving to successively higher dimensions.
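Arnoldi iteration is the prototypical construction of such a projection: it builds an orthonormal basis Q for the Krylov subspace span{b, Ab, A²b, …} and a small Hessenberg matrix H ≈ QᵀAQ whose eigenvalues (Ritz values) approximate those of A. A minimal sketch (the function arnoldi is illustrative and omits the breakdown checks a robust implementation would need):

```python
import numpy as np

def arnoldi(A, b, k):
    """Project A onto the k-dimensional Krylov subspace span{b, Ab, ..., A^(k-1) b}."""
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A @ Q[:, j]
        for i in range(j + 1):            # orthogonalize against the basis so far
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]
    return Q[:, :k], H[:k, :]

rng = np.random.default_rng(6)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)
Q, H = arnoldi(A, b, 20)

# Ritz values of the small 20x20 H approximate extremal eigenvalues of A
print(sorted(np.linalg.eigvals(H), key=abs)[-3:])
```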

When A is symmetric and we wish to solve the linear problem Ax = b, the classical iterative approach is the conjugate gradient method.
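In SciPy this is available as scipy.sparse.linalg.cg; the matrix below is made symmetric positive definite by construction, the setting CG requires:

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(7)
B = rng.standard_normal((500, 500))
A = B @ B.T + 500 * np.eye(500)    # symmetric positive definite by construction
b = rng.standard_normal(500)

x, info = cg(A, b)                 # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```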

If A is not symmetric, then examples of iterative solutions to the linear problem are the generalized minimal residual method (GMRES) and CGN (the conjugate gradient method applied to the normal equations).
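Both are easy to try with SciPy: gmres handles the nonsymmetric system directly, while CGN can be imitated by running cg on the normal equations AᵀAx = Aᵀb (formed explicitly here only for illustration, since doing so squares the condition number):

```python
import numpy as np
from scipy.sparse.linalg import gmres, cg

rng = np.random.default_rng(8)
A = rng.standard_normal((300, 300)) + 300 * np.eye(300)  # nonsymmetric, well conditioned
b = rng.standard_normal(300)

# GMRES on the nonsymmetric system directly
x_gmres, info = gmres(A, b)
print(info, np.linalg.norm(A @ x_gmres - b))

# CGN: conjugate gradients on the symmetric positive definite normal equations
x_cgn, info = cg(A.T @ A, A.T @ b)
print(info, np.linalg.norm(A @ x_cgn - b))
```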

If A is symmetric, then to solve the eigenvalue and eigenvector problem we can use the Lanczos algorithm, and if A is non-symmetric, then we can use Arnoldi iteration.
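In SciPy these correspond to eigsh (Lanczos-based, for symmetric matrices) and eigs (Arnoldi-based, for nonsymmetric ones) in scipy.sparse.linalg:

```python
import numpy as np
from scipy.sparse.linalg import eigsh, eigs

rng = np.random.default_rng(9)
S = rng.standard_normal((400, 400))
A_sym = S + S.T                          # symmetric by construction
A_gen = rng.standard_normal((400, 400))  # nonsymmetric

# Lanczos-based: a few extremal eigenvalues of the symmetric matrix
vals_sym = eigsh(A_sym, k=5, return_eigenvectors=False)

# Arnoldi-based: a few largest-magnitude eigenvalues of the general matrix
vals_gen = eigs(A_gen, k=5, return_eigenvectors=False)

print(np.sort(vals_sym))
print(vals_gen)
```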