Eigenvalues and eigenvectors

In general, the eigenvalue λ may be any scalar. For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex.
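
A small numerical illustration of these cases (a sketch assuming NumPy; the three matrices are chosen for illustration only): a reflection has a negative eigenvalue, a projection onto an axis has a zero eigenvalue, and a plane rotation has complex eigenvalues.

```python
# Sketch assuming NumPy; illustrative matrices only.
import numpy as np

reflection = np.array([[-1.0, 0.0], [0.0, 1.0]])  # flips the x-axis: eigenvalues -1 and 1
projection = np.array([[1.0, 0.0], [0.0, 0.0]])   # projects onto the x-axis: eigenvalues 1 and 0
rotation   = np.array([[0.0, -1.0], [1.0, 0.0]])  # 90-degree rotation: eigenvalues ±i

for name, m in [("reflection", reflection), ("projection", projection), ("rotation", rotation)]:
    print(name, np.linalg.eig(m)[0])
```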

In the 18th century, Leonhard Euler studied the rotational motion of a rigid body, and discovered the importance of the principal axes.

In the early 19th century, Augustin-Louis Cauchy saw how this earlier work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions.

Finally, Karl Weierstrass clarified an important aspect in the stability theory started by Laplace by realizing that defective matrices can cause instability.

In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory.

David Hilbert was the first to use the German word eigen, which means "own",[6] to denote eigenvalues and eigenvectors in 1904,[c] though he may have been following a related usage by Hermann von Helmholtz.

The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Richard von Mises published the power method.
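
A minimal sketch of power iteration (assuming NumPy; not von Mises's original presentation): repeatedly applying the matrix to a vector and renormalizing converges, for a generic starting vector, to an eigenvector of the dominant eigenvalue.

```python
# Power method sketch, assuming NumPy; the 2x2 matrix is illustrative only.
import numpy as np

def power_method(A, num_iters=1000):
    """Approximate the dominant eigenpair of A by repeated multiplication."""
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(num_iters):
        v = A @ v
        v /= np.linalg.norm(v)      # renormalize to keep the vector at unit length
    eigenvalue = v @ A @ v          # Rayleigh quotient of the converged vector
    return eigenvalue, v

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(power_method(A))              # dominant eigenvalue is (5 + sqrt(5))/2 ≈ 3.618
```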

One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis[18] and Vera Kublanovskaya[19] in 1961.

Linear transformations over a finite-dimensional vector space can be represented using matrices,[3][4] which is especially common in numerical and computational applications.

The eigenvectors v of this transformation satisfy equation (1), Av = λv, and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues.
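
As a small worked example (a sketch assuming NumPy; the 2×2 matrix is illustrative), the eigenvalues can be read off as the roots of the characteristic polynomial det(A − λI) = 0 and cross-checked against a library eigenvalue routine.

```python
# Sketch assuming NumPy: eigenvalues as roots of det(A - λI) = 0 for a 2x2 matrix.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])

# For a 2x2 matrix the characteristic polynomial is λ^2 - trace(A)·λ + det(A).
coefficients = [1.0, -np.trace(A), np.linalg.det(A)]
print(np.roots(coefficients))    # [3. 1.]
print(np.linalg.eigvals(A))      # the same eigenvalues from the library routine
```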

The sum of the algebraic multiplicities of all distinct eigenvalues is μA = 4 = n, the order of the characteristic polynomial and the dimension of A.
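
The distinction between algebraic and geometric multiplicity can be seen in a small defective example (a sketch assuming NumPy; this 2×2 matrix is not the matrix of the example above).

```python
# Sketch assuming NumPy: eigenvalue 2 has algebraic multiplicity 2 but only one
# linearly independent eigenvector (geometric multiplicity 1), so J is defective.
import numpy as np

J = np.array([[2.0, 1.0], [0.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(J)
print(eigenvalues)                                    # [2. 2.]
rank = np.linalg.matrix_rank(J - 2.0 * np.eye(2))
print("geometric multiplicity:", 2 - rank)            # 1
```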

Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V, called an eigenbasis, can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable.
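
A short sketch of diagonalization (assuming NumPy; the 3×3 matrix is illustrative): when an eigenbasis exists, collecting the eigenvectors as the columns of P gives A = P D P^-1 with D diagonal.

```python
# Sketch assuming NumPy: reconstructing a diagonalizable matrix from its eigenbasis.
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])
eigenvalues, P = np.linalg.eig(A)                  # columns of P are the eigenvectors
D = np.diag(eigenvalues)
print(np.allclose(P @ D @ np.linalg.inv(P), A))    # True: A = P D P^-1
```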

Even for matrices whose elements are integers, the calculation of the characteristic polynomial becomes nontrivial, because the sums are very long; the constant term is the determinant, which for an n×n matrix is a sum of n! products.

Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot, in general, be obtained by an explicit algebraic formula and must instead be computed by approximate numerical methods.

Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961.
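
The idea can be sketched with a deliberately simplified, unshifted QR iteration (assuming NumPy); practical implementations, as in LAPACK, first reduce the matrix to Hessenberg form and use shifts and deflation for speed and robustness.

```python
# Unshifted QR iteration sketch, assuming NumPy; the symmetric 3x3 matrix is illustrative.
import numpy as np

def qr_iteration(A, num_iters=500):
    Ak = np.array(A, dtype=float)
    for _ in range(num_iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q                   # similar to A; tends toward (block) triangular form
    return np.sort(np.diag(Ak))      # the diagonal approximates the eigenvalues

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(qr_iteration(A))
print(np.sort(np.linalg.eigvals(A)))   # reference values from the library routine
```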

For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.
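
A brief sketch of this use case (assuming SciPy, whose eigsh routine is built on a Lanczos-type iteration; the sparse test matrix is a 1-D discrete Laplacian chosen for illustration):

```python
# Sketch assuming SciPy: a few extreme eigenpairs of a large sparse symmetric matrix.
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 2000
# Sparse symmetric tridiagonal matrix (1-D discrete Laplacian).
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

values, vectors = eigsh(A, k=5, which="LM")   # five largest-magnitude eigenvalues
print(values)
```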

The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue.
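
A short check of these two properties (a sketch assuming NumPy; the PSD matrix is built as a Gram matrix purely for illustration):

```python
# Sketch assuming NumPy: eigendecomposition of a symmetric positive semidefinite matrix.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
S = X.T @ X                                # Gram matrix: symmetric and PSD by construction

eigenvalues, Q = np.linalg.eigh(S)         # eigh exploits the symmetry of S
print(np.all(eigenvalues >= -1e-12))       # True: eigenvalues are nonnegative (up to round-off)
print(np.allclose(Q.T @ Q, np.eye(3)))     # True: eigenvectors form an orthonormal basis
```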

Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics.
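
A minimal sketch of PCA by eigendecomposition of the covariance matrix (assuming NumPy; the random data and the choice of two retained components are illustrative only):

```python
# PCA sketch, assuming NumPy; synthetic data stands in for a real data set.
import numpy as np

rng = np.random.default_rng(2)
data = rng.standard_normal((200, 10))            # 200 samples, 10 features
centered = data - data.mean(axis=0)
covariance = np.cov(centered, rowvar=False)

eigenvalues, eigenvectors = np.linalg.eigh(covariance)
order = np.argsort(eigenvalues)[::-1]            # rank components by explained variance
components = eigenvectors[:, order[:2]]          # keep the two principal components

reduced = centered @ components                  # project the data onto a 2-D subspace
print(reduced.shape)                             # (200, 2)
```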

The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components.
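
A toy sketch of this computation (assuming NumPy, with a made-up 4-page link structure and the conventional damping factor 0.85) finds the dominant eigenvector of the damped link matrix by power iteration.

```python
# PageRank sketch, assuming NumPy; the link structure is invented for illustration.
import numpy as np

links = np.array([[0, 1, 1, 0],      # row i marks the pages that page i links to
                  [1, 0, 0, 1],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

M = links.T / links.sum(axis=1)      # column-stochastic transition matrix
G = 0.85 * M + 0.15 / 4              # damped link ("Google") matrix

rank = np.full(4, 0.25)
for _ in range(100):
    rank = G @ rank                  # power iteration toward the dominant eigenvector
print(rank / rank.sum())             # page ranks, normalized to sum to 1
```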

The Perron–Frobenius theorem gives sufficient conditions for a Markov chain's transition matrix to have a unique dominant eigenvalue, which governs the convergence of the chain to a steady state.
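
A small illustration (a sketch assuming NumPy; the 3-state transition matrix is made up): the steady state is a left eigenvector of the transition matrix with eigenvalue 1, normalized to a probability distribution.

```python
# Markov chain steady-state sketch, assuming NumPy; illustrative 3-state chain.
import numpy as np

P = np.array([[0.9, 0.1, 0.0],       # row-stochastic transition matrix
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

eigenvalues, eigenvectors = np.linalg.eig(P.T)    # left eigenvectors of P
i = np.argmin(np.abs(eigenvalues - 1.0))          # pick the eigenvalue closest to 1
stationary = np.real(eigenvectors[:, i])
stationary /= stationary.sum()                    # normalize to a probability distribution
print(stationary)                                 # satisfies stationary @ P ≈ stationary
```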

Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom.
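
A two-degree-of-freedom sketch (assuming SciPy; the mass and stiffness values are illustrative): the natural frequencies and mode shapes solve the generalized eigenvalue problem K v = ω² M v.

```python
# Vibration sketch, assuming SciPy; illustrative mass and stiffness matrices.
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 1.0])                  # mass matrix
K = np.array([[ 6.0, -2.0],
              [-2.0,  4.0]])             # stiffness matrix

omega_squared, modes = eigh(K, M)        # generalized symmetric eigenproblem K v = ω² M v
print(np.sqrt(omega_squared))            # natural angular frequencies
print(modes)                             # columns are the mode shapes
```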

In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body.

The moment of inertia tensor is a key quantity required to determine the rotation of a rigid body around its center of mass.
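
A short sketch (assuming NumPy; the tensor entries are illustrative): diagonalizing a symmetric inertia tensor yields the principal moments as eigenvalues and the principal axes as eigenvectors.

```python
# Principal axes sketch, assuming NumPy; illustrative inertia tensor.
import numpy as np

I = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  3.0,  0.0],
              [ 0.0,  0.0,  5.0]])                  # symmetric moment of inertia tensor

principal_moments, principal_axes = np.linalg.eigh(I)
print(principal_moments)                            # [2. 4. 5.]
print(principal_axes)                               # columns are the principal axes
```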

Light, acoustic waves, and microwaves are randomly scattered numerous times when traversing a static disordered system.

In Hartree–Fock theory, the term eigenvector is used in a somewhat more general sense, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues.

In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information about a clast fabric can be summarized in 3-D space by six numbers.

In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can be compared graphically or as a stereographic projection.
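
One common way this is done is with an orientation tensor; the sketch below (assuming NumPy, with synthetic orientations standing in for field measurements) builds the 3×3 tensor from clast long-axis unit vectors and reads its eigenvalues and eigenvectors as a summary of the fabric.

```python
# Orientation tensor sketch, assuming NumPy; synthetic clast orientations, not field data.
import numpy as np

rng = np.random.default_rng(3)
axes = rng.standard_normal((500, 3))
axes /= np.linalg.norm(axes, axis=1, keepdims=True)   # unit vectors of clast long axes

T = (axes.T @ axes) / len(axes)                       # 3x3 orientation tensor
eigenvalues, eigenvectors = np.linalg.eigh(T)
print(eigenvalues)       # relative strengths of clustering versus girdle fabric
print(eigenvectors)      # columns give the principal directions of the fabric
```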

The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal component analysis.

In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes.
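
A compact sketch of the eigenfaces computation (assuming NumPy; random data stands in for a real face data set, and the SVD is used because it yields the covariance matrix's eigenvectors directly):

```python
# Eigenfaces sketch, assuming NumPy; synthetic "images" replace a real face data set.
import numpy as np

rng = np.random.default_rng(4)
faces = rng.standard_normal((100, 32 * 32))   # 100 images, each flattened to 1024 pixels

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Right singular vectors of the centered data are the eigenvectors of its covariance matrix.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:20]                          # keep the top 20 eigenfaces

weights = centered @ eigenfaces.T             # compressed representation of each face
print(eigenfaces.shape, weights.shape)        # (20, 1024) (100, 20)
```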

Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language.

In this shear mapping the red arrow changes direction, but the blue arrow does not. The blue arrow is an eigenvector of this shear mapping because it does not change direction, and since its length is unchanged, its eigenvalue is 1.

A 2×2 real and symmetric matrix representing a stretching and shearing of the plane. The eigenvectors of the matrix (red lines) are the two special directions such that every point on them slides along them under the transformation.

Matrix A acts by stretching the vector x without changing its direction, so x is an eigenvector of A.

The transformation matrix A preserves the direction of purple vectors parallel to v_{λ=1} = [1 −1]^T and blue vectors parallel to v_{λ=3} = [1 1]^T. The red vectors are not parallel to either eigenvector, so their directions are changed by the transformation. The lengths of the purple vectors are unchanged after the transformation (due to their eigenvalue of 1), while the blue vectors are three times the length of the originals (due to their eigenvalue of 3).

Horizontal shear mapping.

PCA of a multivariate Gaussian distribution, with a standard deviation of 3 roughly along one direction and of 1 in the orthogonal direction. The vectors shown are unit eigenvectors of the (symmetric, positive-semidefinite) covariance matrix, scaled by the square root of the corresponding eigenvalue. Just as in the one-dimensional case, the square root is taken because the standard deviation is more readily visualized than the variance.

Mode shape of a tuning fork at eigenfrequency 440.09 Hz.

The wavefunctions associated with the bound states of an electron in a hydrogen atom can be seen as the eigenvectors of the hydrogen atom Hamiltonian as well as of the angular momentum operator. They are associated with eigenvalues interpreted as their energies (increasing downward) and angular momentum (increasing across: s, p, d, ...). The illustration shows the square of the absolute value of the wavefunctions. Brighter areas correspond to higher probability density for a position measurement. The center of each figure is the atomic nucleus, a proton.

Eigenfaces as examples of eigenvectors.