Moore–Penrose inverse

It was independently described by E. H. Moore in 1920, Arne Bjerhammar in 1951, and Roger Penrose in 1955.[4] Earlier, Erik Ivar Fredholm had introduced the concept of a pseudoinverse of integral operators in 1903.

The pseudoinverse is defined for all rectangular matrices whose entries are real or complex numbers.

In particular, when A has linearly independent columns (and thus the matrix A*A is invertible), the pseudoinverse can be computed as A⁺ = (A*A)⁻¹A*; when A has linearly independent rows (so that AA* is invertible), it is A⁺ = A*(AA*)⁻¹. In the more general case, the pseudoinverse can be expressed leveraging the singular value decomposition: if A = UΣV*, then A⁺ = VΣ⁺U*, where the pseudoinverse Σ⁺ of the rectangular diagonal factor Σ can be obtained by transposing the matrix and replacing the nonzero values with their multiplicative inverses.
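As a quick check of these formulas, here is a minimal NumPy sketch (the matrix A is an arbitrary full-column-rank example):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))        # linearly independent columns (almost surely)

    # Full-column-rank formula: A+ = (A*A)^(-1) A*
    pinv_formula = np.linalg.inv(A.T @ A) @ A.T

    # SVD-based pseudoinverse for comparison
    print(np.allclose(pinv_formula, np.linalg.pinv(A)))   # True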

When a system of linear equations has more than one solution of minimal residual, these solutions form an affine subspace. The element of this subspace that has the smallest length (that is, is closest to the origin) is the answer x = A⁺b computed from the pseudoinverse.

This description is closely related to the minimum-norm solution to a linear system.
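A minimal NumPy sketch of this property (arbitrary example data; the system below is underdetermined, hence has infinitely many exact solutions):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((2, 4))    # more unknowns than equations
    b = rng.standard_normal(2)

    x = np.linalg.pinv(A) @ b          # candidate minimum-norm solution
    print(np.allclose(A @ x, b))       # True: x solves the system exactly

    # Shifting x by any null-space vector gives another solution, but a longer one:
    n = (np.eye(4) - np.linalg.pinv(A) @ A) @ rng.standard_normal(4)
    print(np.linalg.norm(x) <= np.linalg.norm(x + n))   # True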

In contrast to ordinary matrix inversion, the process of taking pseudoinverses is not continuous: if the sequence (Aₙ) converges to the matrix A (in the maximum norm or Frobenius norm, say), then (Aₙ)⁺ need not converge to A⁺.[5]: 263

Let t ↦ A(t) be a real-valued differentiable matrix function with constant rank at a point t₀. Then the derivative of t ↦ A(t)⁺ at t₀ can be expressed in terms of A, its pseudoinverse A⁺, and the derivative A′, all evaluated at t₀.
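One commonly stated form of this derivative for real matrices (given here as a reference formula in LaTeX notation; ′ denotes d/dt and all factors are evaluated at t₀) is:

    \frac{\mathrm{d}}{\mathrm{d}t} A^+ =
        -A^+ A' A^+
        + A^+ (A^+)^\top (A')^\top \left(I - A A^+\right)
        + \left(I - A^+ A\right) (A')^\top (A^+)^\top A^+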

However, computing the products A*A or AA* and their inverses explicitly is often a source of numerical rounding errors and computational cost in practice. The QR decomposition offers an alternative: if A = QR with A of full column rank, then A⁺ = (A*A)⁻¹A* = R⁻¹Q*.

The case of full row rank is treated similarly, by using the formula A⁺ = A*(AA*)⁻¹ and exchanging the roles of A and A*.
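A minimal NumPy sketch of the QR route for the full-column-rank case (an arbitrary example; production code would apply back substitution to the triangular factor rather than a general solver):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((6, 3))                # full column rank

    Q, R = np.linalg.qr(A)                         # reduced QR: A = QR, R is 3x3 upper triangular
    A_pinv = np.linalg.solve(R, Q.T)               # R^(-1) Q^T, without forming (A^T A)^(-1)

    print(np.allclose(A_pinv, np.linalg.pinv(A)))  # True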

Given the singular value decomposition A = UΣV*, the computation reduces to inverting the rectangular diagonal factor Σ: we get its pseudoinverse Σ⁺ by taking the reciprocal of each non-zero element on the diagonal, leaving the zeros in place, and transposing the result; then A⁺ = VΣ⁺U*.

In numerical computation, only elements larger than some small tolerance are taken to be nonzero, and the others are replaced by zeros.

For example, in the MATLAB or GNU Octave function pinv, the tolerance is taken to be t = ε⋅max(m, n)⋅max(Σ), where ε is the machine epsilon.
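The whole procedure fits in a few lines of NumPy. The following sketch (the function name pinv_svd is illustrative) uses the same tolerance rule as quoted above:

    import numpy as np

    def pinv_svd(A):
        # SVD-based pseudoinverse with a MATLAB/Octave-style tolerance.
        U, s, Vh = np.linalg.svd(A, full_matrices=False)
        tol = np.finfo(A.dtype).eps * max(A.shape) * s.max()
        s_inv = np.where(s > tol, 1.0 / s, 0.0)     # invert only the "nonzero" singular values
        return Vh.conj().T @ (s_inv[:, None] * U.conj().T)

    A = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])
    print(np.allclose(pinv_svd(A), np.linalg.pinv(A)))   # True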

The above procedure shows why taking the pseudoinverse is not a continuous operation: if the original matrix A has a singular value 0 (a zero entry on the diagonal of the matrix Σ above), then modifying A slightly may turn this zero into a tiny positive number, thereby affecting the pseudoinverse dramatically, as we now have to take the reciprocal of a tiny number.
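This is easy to observe numerically (a small sketch with an illustrative pair of matrices):

    import numpy as np

    A  = np.array([[1.0, 0.0], [0.0, 0.0]])      # has a zero singular value
    Ae = np.array([[1.0, 0.0], [0.0, 1e-12]])    # a tiny perturbation of A

    print(np.linalg.pinv(A))    # [[1, 0], [0, 0]]    -- the zero is left in place
    print(np.linalg.pinv(Ae))   # [[1, 0], [0, 1e12]] -- reciprocal of the tiny entry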

Optimized approaches exist for calculating the pseudoinverse of block-structured matrices.

Another approach is the iterative hyper-power recursion Aᵢ₊₁ = 2Aᵢ − AᵢAAᵢ, which converges quadratically to A⁺ when started from a suitable initial matrix such as A₀ = αA* (with 0 < α < 2/σ₁(A)², where σ₁(A) denotes the largest singular value of A).[18] However, this method has been argued not to be competitive to the method using the SVD mentioned above, because even for moderately ill-conditioned matrices it takes a long time before Aᵢ enters the region of quadratic convergence.
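For concreteness, a NumPy sketch of this recursion (the function name, the choice α = 1/σ₁(A)², and the fixed iteration count are illustrative):

    import numpy as np

    def pinv_hyperpower(A, iters=60):
        sigma1 = np.linalg.norm(A, 2)           # largest singular value
        X = (1.0 / sigma1**2) * A.conj().T      # A0 = alpha * A*, with alpha in (0, 2/sigma1^2)
        for _ in range(iters):
            X = 2 * X - X @ A @ X               # hyper-power step
        return X

    A = np.random.default_rng(3).standard_normal((4, 3))
    print(np.allclose(pinv_hyperpower(A), np.linalg.pinv(A)))   # True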

For the cases where A has full row or column rank, and the inverse of the correlation matrix (AA* for full row rank, or A*A for full column rank) is already known, the pseudoinverse of a related matrix can be computed by applying the Sherman–Morrison–Woodbury formula to update the inverse of the correlation matrix, which may need less work.
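As a sketch of the idea in the full-column-rank case (an illustrative example; only the rank-one Sherman–Morrison special case is shown): appending a row a to A changes the correlation matrix A*A to A*A + aa*, so its known inverse can be updated instead of recomputed:

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((6, 3))      # full column rank
    M_inv = np.linalg.inv(A.T @ A)       # inverse of the correlation matrix, assumed known

    a = rng.standard_normal(3)           # row to append
    # Sherman-Morrison: (M + a a^T)^(-1) = M^(-1) - (M^(-1) a)(a^T M^(-1)) / (1 + a^T M^(-1) a)
    Ma = M_inv @ a
    M_inv_new = M_inv - np.outer(Ma, Ma) / (1.0 + a @ Ma)

    A_new = np.vstack([A, a])
    print(np.allclose(M_inv_new @ A_new.T, np.linalg.pinv(A_new)))   # True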

In particular, if the related matrix differs from the original one by only a changed, added or deleted row or column, incremental algorithms exist that exploit the relationship.[20][21]

Similarly, it is possible to update the Cholesky factor when a row or column is added, without creating the inverse of the correlation matrix explicitly.[22][23]

High-quality implementations of SVD, QR, and back substitution are available in standard libraries, such as LAPACK.

Writing one's own implementation of SVD is a major programming project that requires significant numerical expertise.

The Python package NumPy provides a pseudoinverse calculation through its functions matrix.I and linalg.pinv; its pinv uses the SVD-based algorithm.

The MASS package for R provides a calculation of the Moore–Penrose inverse through the ginv function.

The Octave programming language provides a pseudoinverse through the standard package function pinv and the pseudo_inverse() method.[25]

The pseudoinverse provides a least squares solution to a system of linear equations.

If A does not have full column rank, then we have an indeterminate system, all of whose infinitude of solutions are given by x = A⁺b + (I − A⁺A)w, where w ranges over all vectors of appropriate dimension.
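A short NumPy sketch of both statements (arbitrary example data; A below is rank-deficient): x₀ = A⁺b is a least-squares solution, and adding (I − A⁺A)w leaves the residual unchanged:

    import numpy as np

    rng = np.random.default_rng(5)
    A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))   # rank 3, not full column rank
    b = rng.standard_normal(4)

    A_pinv = np.linalg.pinv(A)
    x0 = A_pinv @ b                                     # minimum-norm least-squares solution

    w = rng.standard_normal(5)
    x1 = x0 + (np.eye(5) - A_pinv @ A) @ w              # another least-squares solution
    print(np.allclose(np.linalg.norm(A @ x0 - b),
                      np.linalg.norm(A @ x1 - b)))      # True: same (minimal) residual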

A large condition number implies that the problem of finding least-squares solutions to the corresponding system of linear equations is ill-conditioned in the sense that small errors in the entries of A can lead to large errors in the entries of the solution.
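For instance (a small sketch using the notoriously ill-conditioned Hilbert matrix):

    import numpy as np

    n = 10
    A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)   # Hilbert matrix
    print(np.linalg.cond(A))                   # on the order of 1e13

    x_true = np.ones(n)
    b = A @ x_true

    dA = 1e-10 * np.random.default_rng(6).standard_normal((n, n))   # tiny perturbation of A
    x = np.linalg.pinv(A + dA) @ b
    print(np.linalg.norm(x - x_true))          # error far larger than the perturbation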

These weights are the identity for the standard Moore–Penrose inverse, which assumes an orthonormal basis in both spaces.

In order to solve more general least-squares problems, one can define Moore–Penrose inverses for all continuous linear operators A : H₁ → H₂ between two Hilbert spaces H₁ and H₂, using the same four defining conditions as for matrices.

The four defining equations also make sense for matrices over any ring equipped with an involution.[31] Example: Consider the field of complex numbers equipped with the identity involution (as opposed to the conjugation involution considered elsewhere in the article); do there exist matrices that fail to have pseudoinverses in this sense? They do. For instance, A = (1, i)ᵀ has none: for any candidate B = (b₁, b₂), the symmetry condition (AB)ᵀ = AB forces b₂ = ib₁, so BA = b₁ + i(ib₁) = 0, and hence ABA = 0 ≠ A.