A (nonzero) vector v of dimension N is an eigenvector of a square N × N matrix A if it satisfies a linear equation of the form Av = λv for some scalar λ. Then λ is called the eigenvalue corresponding to v.
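As a quick numerical check of this definition, the following sketch (assuming NumPy and an arbitrary 2 × 2 matrix chosen only for illustration) verifies Av = λv for each eigenpair returned by np.linalg.eig.

```python
import numpy as np

# Hypothetical example matrix, used only to illustrate the definition.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column of `eigenvectors` satisfies A v = lambda v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```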
The integer ni is termed the algebraic multiplicity of eigenvalue λi.
The nonzero linear combinations of the mi linearly independent solutions are the eigenvectors associated with the eigenvalue λi.
The total number of linearly independent eigenvectors, Nv, can be calculated by summing the geometric multiplicities mi over the distinct eigenvalues, Nv = ∑i mi.
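To make the distinction between the two multiplicities concrete, here is a minimal NumPy sketch using a hypothetical defective matrix (not taken from the article): the eigenvalue 2 has algebraic multiplicity 2 but geometric multiplicity 1, so the matrix has fewer than N linearly independent eigenvectors.

```python
import numpy as np

# Hypothetical defective matrix: characteristic polynomial (x - 2)^2.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

eigenvalues, _ = np.linalg.eig(A)
print(eigenvalues)                     # [2. 2.] -> algebraic multiplicity 2

# Geometric multiplicity = dimension of the nullspace of (lambda I - A).
lam = 2.0
geometric = A.shape[0] - np.linalg.matrix_rank(lam * np.eye(2) - A)
print(geometric)                       # 1 -> only one independent eigenvector
```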
Let A be a square n × n matrix with n linearly independent eigenvectors qi (where i = 1, ..., n). Then A can be factored as A = QΛQ−1, where Q is the square n × n matrix whose ith column is the eigenvector qi, and Λ is the diagonal matrix whose diagonal entries are the corresponding eigenvalues, Λii = λi. Only diagonalizable matrices can be factored in this way. The eigenvectors in Q are usually normalized, but they need not be; a non-normalized set of n eigenvectors can also be used as the columns of Q.
That can be understood by noting that the magnitude of the eigenvectors in Q gets canceled in the decomposition by the presence of Q−1.
One special case is that if A is a normal matrix (for example, a real symmetric matrix), then by the spectral theorem it is always possible to diagonalize A in an orthonormal basis {qi}.
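Both the factorization and the special orthonormal case can be checked numerically. The sketch below, assuming NumPy and a hypothetical real symmetric matrix, reconstructs A from a general eigendecomposition and then from the orthonormal one returned by np.linalg.eigh, where Q−1 is simply the transpose of Q.

```python
import numpy as np

# Hypothetical real symmetric (hence normal) matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# General eigendecomposition: A = Q diag(w) Q^{-1}.
w, Q = np.linalg.eig(A)
assert np.allclose(A, Q @ np.diag(w) @ np.linalg.inv(Q))

# For a normal matrix the eigenvectors can be chosen orthonormal,
# so Q is unitary (here orthogonal) and Q^{-1} = Q^T.
w_sym, Q_sym = np.linalg.eigh(A)
assert np.allclose(Q_sym.T @ Q_sym, np.eye(3))
assert np.allclose(A, Q_sym @ np.diag(w_sym) @ Q_sym.T)
```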
The linearly independent eigenvectors qi with nonzero eigenvalues form a basis (not necessarily orthonormal) for all possible products Ax, for x ∈ Cn, which is the same as the image (or range) of the corresponding matrix transformation, and also the column space of the matrix A.
The number of linearly independent eigenvectors qi with nonzero eigenvalues is equal to the rank of the matrix A, and also the dimension of the image (or range) of the corresponding matrix transformation, as well as its column space.
The linearly independent eigenvectors qi with an eigenvalue of zero form a basis (which can be chosen to be orthonormal) for the null space (also known as the kernel) of the matrix transformation A.
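A small sketch of these facts, using a hypothetical rank-one symmetric matrix (the names and tolerance are illustrative, not from the article):

```python
import numpy as np

# Hypothetical rank-1 symmetric matrix with eigenvalues {2, 0}.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

w, Q = np.linalg.eigh(A)
tol = 1e-12
nonzero = np.abs(w) > tol

# Rank equals the number of nonzero eigenvalues.
assert np.count_nonzero(nonzero) == np.linalg.matrix_rank(A)

# Eigenvectors with nonzero eigenvalues span the column space of A,
# while eigenvectors with zero eigenvalues span the null space: A q = 0.
null_basis = Q[:, ~nonzero]
assert np.allclose(A @ null_basis, 0.0)
```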
This can be represented by a single vector equation, Av = λv, in which λ takes each of the two solutions as eigenvalues.
When eigendecomposition is used on a matrix of measured, real data, the resulting inverse may be unreliable if all eigenvalues are used unmodified in the form above.
Eigenvalues near zero, or at the noise level of the measurement system, have undue influence on the inverse and could hamper solutions (detection) that use it.
See also Tikhonov regularization as a statistically motivated but biased method for rolling off eigenvalues as they become dominated by noise.
Two mitigations have been proposed: truncating small or zero eigenvalues, and extending the lowest reliable eigenvalue to those below it. The first mitigation method is similar to taking a sparse sample of the original matrix, removing components that are not considered valuable.
The second mitigation extends the lowest reliable eigenvalue to those below it, so that the smallest eigenvalues have much less influence over the inversion but still contribute; in this way, solutions near the noise level can still be found.
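Both mitigations can be sketched in a few lines. The helper below is a hypothetical illustration (the function name, threshold, and regularization parameter are not from the article): it inverts a real symmetric matrix through its eigendecomposition, either truncating near-zero eigenvalues or rolling them off in a Tikhonov-like fashion.

```python
import numpy as np

def regularized_inverse(A, tol=1e-6, alpha=None):
    """Approximate inverse of a symmetric matrix via its eigendecomposition.

    With alpha=None, eigenvalues with |w| <= tol are dropped entirely
    (truncation, the first mitigation).  With alpha set, each eigenvalue w
    is inverted as w / (w**2 + alpha), a Tikhonov-style roll-off in which
    small eigenvalues still contribute but with greatly reduced weight.
    """
    w, Q = np.linalg.eigh(A)
    inv_w = np.zeros_like(w)
    if alpha is None:
        keep = np.abs(w) > tol
        inv_w[keep] = 1.0 / w[keep]
    else:
        inv_w = w / (w ** 2 + alpha)
    return Q @ np.diag(inv_w) @ Q.T

# Example: a nearly singular symmetric matrix.
A = np.array([[1.0, 0.0],
              [0.0, 1e-9]])
print(regularized_inverse(A, tol=1e-6))        # truncated inverse
print(regularized_inverse(A, alpha=1e-6))      # Tikhonov-style roll-off
```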
A similar technique works more generally with the holomorphic functional calculus, using f(A) = Q f(Λ) Q−1, so that a function of A is obtained by applying the function to each eigenvalue in Λ.
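For instance, a matrix function such as the exponential can be computed by applying the scalar function to the eigenvalues. The sketch below assumes NumPy/SciPy and a hypothetical diagonalizable matrix, and compares the result against scipy.linalg.expm.

```python
import numpy as np
from scipy.linalg import expm   # reference matrix exponential for comparison

# Hypothetical diagonalizable matrix.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

w, Q = np.linalg.eig(A)

# f(A) = Q f(Lambda) Q^{-1}, with f = exp applied to the eigenvalues.
exp_A = Q @ np.diag(np.exp(w)) @ np.linalg.inv(Q)
assert np.allclose(exp_A, expm(A))
```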
This characteristic allows spectral matrices to be fully diagonalizable, meaning they can be decomposed into simpler forms using eigendecomposition.[9] Diagonalizable matrices can be decomposed using eigendecomposition, provided they have a full set of linearly independent eigenvectors.
In practice, eigenvalues of large matrices are not computed using the characteristic polynomial.
One reason is that small round-off errors in the coefficients of the characteristic polynomial can lead to large errors in the eigenvalues and eigenvectors: the roots are an extremely ill-conditioned function of the coefficients.
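The classic illustration of this ill-conditioning is Wilkinson's example. The sketch below (assuming NumPy, with the matrix chosen only for illustration) compares eigenvalues recovered from characteristic-polynomial coefficients with those computed directly.

```python
import numpy as np

# Diagonal matrix with eigenvalues 1..20 (Wilkinson's classic example).
true_eigenvalues = np.arange(1.0, 21.0)
A = np.diag(true_eigenvalues)

# Route 1: form the characteristic polynomial, then find its roots.
coeffs = np.poly(A)                          # coefficients of det(xI - A)
from_poly = np.sort(np.roots(coeffs).real)
print(np.max(np.abs(from_poly - true_eigenvalues)))   # noticeably large error

# Route 2: compute the eigenvalues directly (no polynomial involved).
direct = np.sort(np.linalg.eigvals(A).real)
print(np.max(np.abs(direct - true_eigenvalues)))      # ~machine precision
```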
This simple algorithm (power iteration, in which a starting vector is repeatedly multiplied by A and renormalized) is useful in some practical applications; for example, Google uses it to calculate the PageRank of documents in its search engine.[11]
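A minimal power-iteration sketch (NumPy, with illustrative names; this is not Google's production algorithm) that recovers the dominant eigenpair of a small symmetric matrix:

```python
import numpy as np

def power_iteration(A, num_iterations=1000, seed=0):
    """Estimate the dominant eigenvalue and eigenvector of A by repeatedly
    multiplying a vector by A and renormalizing it."""
    v = np.random.default_rng(seed).standard_normal(A.shape[0])
    for _ in range(num_iterations):
        v = A @ v
        v /= np.linalg.norm(v)
    eigenvalue = v @ A @ v            # Rayleigh quotient of the final vector
    return eigenvalue, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_iteration(A)
assert np.allclose(A @ v, lam * v, atol=1e-8)
```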
Alternatively, the important QR algorithm is also based on a subtle transformation of a power method.[11]
(For more general matrices, the QR algorithm yields the Schur decomposition first, from which the eigenvectors can be obtained by a back-substitution procedure.)[11]
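A bare-bones, unshifted QR iteration (an illustrative sketch only; practical implementations use shifts, deflation, and a preliminary Hessenberg reduction) shows the idea:

```python
import numpy as np

def qr_iteration_eigenvalues(A, iterations=500):
    """Unshifted QR iteration: factor A_k = Q R, then form A_{k+1} = R Q.
    For well-behaved matrices, A_k tends toward (block) triangular form,
    with the eigenvalues appearing on the diagonal."""
    Ak = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return np.sort(np.diag(Ak))

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
assert np.allclose(qr_iteration_eigenvalues(A), np.sort(np.linalg.eigvalsh(A)))
```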
Recall that the geometric multiplicity of an eigenvalue can be described as the dimension of the associated eigenspace, the nullspace of λI − A.
The coneigenvectors and coneigenvalues represent essentially the same information and meaning as the regular eigenvectors and eigenvalues, but arise when an alternative coordinate system is used.
For example, in coherent electromagnetic scattering theory, the linear transformation A represents the action performed by the scattering object, and the eigenvectors represent polarization states of the electromagnetic wave.
In optics, the coordinate system is defined from the wave's viewpoint, known as the Forward Scattering Alignment (FSA), and gives rise to a regular eigenvalue equation, whereas in radar, the coordinate system is defined from the radar's viewpoint, known as the Back Scattering Alignment (BSA), and gives rise to a coneigenvalue equation.
If n linearly independent vectors {v1, …, vn} can be found, such that for every i ∈ {1, …, n}, Avi = λiBvi, then we define the matrices P and D such that P is the square matrix whose ith column is the vector vi and D is the diagonal matrix whose diagonal entries are Dii = λi. Then the equality A = BPDP−1 holds.
However, in most situations it is preferable not to perform the inversion, but rather to solve the generalized eigenvalue problem as stated originally.
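In SciPy, for example, the generalized problem can be passed directly to scipy.linalg.eig rather than forming B−1A; the matrices below are hypothetical placeholders.

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical A and symmetric positive-definite B.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 0.2],
              [0.2, 1.0]])

# Solve A v = lambda B v directly, without ever forming inv(B) @ A.
eigenvalues, eigenvectors = eig(A, B)

for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * (B @ v))
```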