A square matrix A is called diagonalizable or non-defective if it is similar to a diagonal matrix; that is, if there exists an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D.
The corresponding property exists for any linear map: for a finite-dimensional vector space V, a linear map T : V → V is called diagonalizable if there exists an ordered basis of V consisting of eigenvectors of T.
The geometric transformation represented by a diagonalizable matrix is an inhomogeneous dilation (or anisotropic scaling).
Many results for diagonalizable matrices hold only over an algebraically closed field (such as the complex numbers).
A square matrix A over a field F is diagonalizable if there exists an invertible matrix P (i.e. an element of the general linear group GLn(F)) such that P⁻¹AP is a diagonal matrix.
The fundamental fact about diagonalizable maps and matrices is expressed by the following: an n × n matrix A over a field F is diagonalizable if and only if the sum of the dimensions of its eigenspaces equals n, which is the case if and only if there exists a basis of Fⁿ consisting of eigenvectors of A. The following sufficient (but not necessary) condition is often useful: A is diagonalizable over F if it has n distinct eigenvalues in F, since eigenvectors belonging to distinct eigenvalues are automatically linearly independent.
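The sufficient condition can be checked numerically. The following sketch (the matrix is an illustrative choice, not from the text) verifies that a matrix with distinct eigenvalues is indeed diagonalized by the matrix of its eigenvectors:

```python
import numpy as np

# A 3x3 matrix chosen for illustration; being triangular, its eigenvalues
# are its diagonal entries 2, 3, 5, which are distinct.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

eigvals, P = np.linalg.eig(A)  # columns of P are eigenvectors

# Distinct eigenvalues guarantee n independent eigenvectors, so P is invertible.
assert len(set(np.round(eigvals, 8))) == 3

# P^-1 A P is diagonal (up to floating-point round-off).
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(np.diag(D)))
```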
One can also say that the diagonalizable matrices form a dense subset with respect to the Zariski topology: the non-diagonalizable matrices lie inside the vanishing set of the discriminant of the characteristic polynomial, which is a hypersurface.
Suppose that a linear transformation is represented by a matrix A written with respect to a basis E, and that its eigenvectors v1, …, vn, with eigenvalues λ1, …, λn, satisfy the eigen-equation A vi = λi vi for i = 1, …, n. The eigenvectors form a basis F, and A can be diagonalized: P⁻¹AP = D. The transition matrix P has the F-basis vectors (the eigenvectors) written in the basis E as its columns, so that we can represent P in block matrix form as P = [v1 v2 ⋯ vn]; as a result we can write the diagonalization as AP = PD.
The matrix D can be written in full form with all the diagonal elements as an n × n matrix:
D = diag(λ1, λ2, …, λn).
Taking each column of the block equation AP = PD individually on both sides, we end up with A vi = λi vi. So the column vectors of P are right eigenvectors of A, and the corresponding diagonal entry of D is the corresponding eigenvalue. The invertibility of P also implies that the eigenvectors are linearly independent and form a basis of the underlying space.
This is the necessary and sufficient condition for diagonalizability, and it is also the canonical approach to diagonalization.
If A is a real symmetric matrix, then its eigenvectors can be chosen to be an orthonormal basis of ℝⁿ, and P can be chosen to be an orthogonal matrix.
For most practical work matrices are diagonalized numerically using computer software.
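Such a numerical diagonalization can be sketched with numpy (the example matrix is an illustrative choice):

```python
import numpy as np

# Example matrix with distinct eigenvalues 5 and -1.
A = np.array([[1.0, 2.0],
              [4.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix P whose
# columns are the corresponding eigenvectors.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# Verify A = P D P^{-1} up to floating-point round-off.
assert np.allclose(P @ D @ np.linalg.inv(P), A)
```

For a real symmetric or complex Hermitian matrix, `np.linalg.eigh` is preferred: it exploits the symmetry and returns an orthonormal set of eigenvectors, so that P is orthogonal (or unitary).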
A set of matrices is said to be simultaneously diagonalizable if there exists a single invertible matrix P such that P⁻¹AP is a diagonal matrix for every A in the set.
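A minimal sketch of simultaneous diagonalization, using the (illustrative) fact that a matrix and any polynomial in it commute and share an eigenbasis:

```python
import numpy as np

# A has distinct eigenvalues; B is a polynomial in A, so A and B commute
# and are diagonalized by the same eigenvector matrix P.
A = np.array([[1.0, 2.0],
              [4.0, 3.0]])
B = A @ A + 2 * A

assert np.allclose(A @ B, B @ A)  # the matrices commute

_, P = np.linalg.eig(A)
Pinv = np.linalg.inv(P)

# The single matrix P diagonalizes both A and B.
for M in (A, B):
    DM = Pinv @ M @ P
    assert np.allclose(DM, np.diag(np.diag(DM)))
```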
Even if a matrix is not diagonalizable, it is always possible to "do the best one can", and find a matrix with the same properties consisting of eigenvalues on the leading diagonal, and either ones or zeroes on the superdiagonal – known as Jordan normal form.
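A small symbolic sketch with sympy (the defective matrix is an illustrative choice): the eigenvalue 2 has algebraic multiplicity 2 but a one-dimensional eigenspace, so the matrix is not diagonalizable, and the Jordan normal form is the best one can do.

```python
from sympy import Matrix

# Characteristic polynomial (x - 2)^2, but only one independent eigenvector.
A = Matrix([[3, 1],
            [-1, 1]])

assert not A.is_diagonalizable()

# jordan_form returns (P, J) with A = P J P^{-1}: eigenvalues on the
# diagonal, a 1 on the superdiagonal for the defective eigenvalue.
P, J = A.jordan_form()
assert J == Matrix([[2, 1], [0, 2]])
assert A == P * J * P.inv()
```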
This happens more generally if the algebraic and geometric multiplicities of an eigenvalue do not coincide.
For example, consider the following matrix. The roots of its characteristic polynomial are the eigenvalues of the matrix.
so that A^k = P D^k P⁻¹, and the latter is easy to calculate since it only involves the powers of a diagonal matrix.
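The power computation can be sketched as follows (the matrix is an illustrative choice); raising D to the k-th power just raises each diagonal entry to the k-th power:

```python
import numpy as np

# Computing A^k via diagonalization: A^k = P D^k P^{-1}.
A = np.array([[1.0, 2.0],
              [4.0, 3.0]])
eigvals, P = np.linalg.eig(A)

k = 10
# eigvals**k raises each eigenvalue to the k-th power elementwise.
Ak = P @ np.diag(eigvals**k) @ np.linalg.inv(P)

# Agrees with repeated matrix multiplication.
assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```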
This is particularly useful in finding closed-form expressions for the terms of linear recursive sequences, such as the Fibonacci numbers.
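The Fibonacci case can be sketched numerically: the recurrence is encoded by the matrix Q = [[1, 1], [1, 0]], whose eigenvalues are the golden ratio φ and 1 − φ, and diagonalizing Q recovers Binet's closed form.

```python
import numpy as np

# Q^n = [[F(n+1), F(n)], [F(n), F(n-1)]], so the (0, 1) entry of Q^n is F(n).
Q = np.array([[1.0, 1.0],
              [1.0, 0.0]])
eigvals, P = np.linalg.eig(Q)  # eigenvalues phi and 1 - phi
Pinv = np.linalg.inv(P)

def fib(n):
    # Q^n = P D^n P^{-1}; round away the floating-point error.
    Qn = P @ np.diag(eigvals**n) @ Pinv
    return round(Qn[0, 1])

assert [fib(n) for n in range(1, 10)] == [1, 1, 2, 3, 5, 8, 13, 21, 34]
```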
Computing successive powers of the matrix reveals a surprising pattern. This phenomenon can be explained by diagonalizing the matrix.
The reverse change of basis expresses the standard basis vectors in terms of u and v. Straightforward calculations then show that the matrix simply scales u by a and v by b; thus, a and b are the eigenvalues corresponding to u and v, respectively.
By linearity of matrix multiplication, the powers of the matrix act on u and v through the corresponding powers of a and b. Switching back to the standard basis and expressing the preceding relations in matrix form thereby explains the above phenomenon.
The basic reason is that the time-independent Schrödinger equation is an eigenvalue equation, albeit, in most physical situations, on an infinite-dimensional Hilbert space.
A very common approximation is to truncate (or project) the Hilbert space to finite dimension, after which the Schrödinger equation can be formulated as an eigenvalue problem of a real symmetric, or complex Hermitian matrix.
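A minimal sketch of this approach (the model and discretization are illustrative assumptions, not from the text): discretizing the 1-D infinite square well on a finite grid yields a real symmetric Hamiltonian, and diagonalizing it approximates the lowest energy levels. Units with ħ = m = 1 and well width 1 are assumed.

```python
import numpy as np

N = 200
dx = 1.0 / (N + 1)

# Kinetic term -1/2 d^2/dx^2 by central finite differences: a tridiagonal,
# real symmetric matrix (the potential is zero inside the well).
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# eigh exploits the symmetry and returns eigenvalues in ascending order.
energies, states = np.linalg.eigh(H)

# Exact levels are E_n = (n pi)^2 / 2; the discretization reproduces them closely.
exact = (np.pi * np.arange(1, 4)) ** 2 / 2
assert np.allclose(energies[:3], exact, rtol=1e-3)
```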
Formally this approximation is founded on the variational principle, valid for Hamiltonians that are bounded from below.
First-order perturbation theory also leads to a matrix eigenvalue problem for degenerate states.