Idempotent matrix

In linear algebra, an idempotent matrix is a matrix which, when multiplied by itself, yields itself. That is, the matrix $A$ is idempotent if and only if $A^2 = A$. For this product $A^2$ to be defined, $A$ must necessarily be a square matrix.
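As a quick numerical check of the definition, the following NumPy sketch verifies idempotency for an illustrative matrix (chosen for this example, not taken from the article):

```python
import numpy as np

# An illustrative idempotent matrix: note its trace is 1,
# consistent with the 2x2 condition discussed below.
A = np.array([[3.0, -6.0],
              [1.0, -2.0]])

# Idempotency means A @ A equals A.
assert np.allclose(A @ A, A)
```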

Real 2×2 case

If a matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is idempotent, then

$a = a^2 + bc,$
$b = ab + bd,$ implying $b(1 - a - d) = 0,$ so $b = 0$ or $d = 1 - a,$
$c = ca + cd,$ implying $c(1 - a - d) = 0,$ so $c = 0$ or $d = 1 - a,$
$d = bc + d^2.$

Thus, a necessary condition for a $2 \times 2$ matrix to be idempotent is that either it is diagonal or its trace equals 1.

For idempotent diagonal matrices, $a$ and $d$ must be either 1 or 0.

If $b = c$, the matrix $\begin{pmatrix} a & b \\ b & 1 - a \end{pmatrix}$ will be idempotent provided $a^2 + b^2 = a,$ so $a$ satisfies the quadratic equation $a^2 - a + b^2 = 0,$ or equivalently $\left(a - \tfrac{1}{2}\right)^2 + b^2 = \tfrac{1}{4},$ which is a circle with center (1/2, 0) and radius 1/2. In terms of an angle θ,

$A = \frac{1}{2}\begin{pmatrix} 1 - \cos\theta & \sin\theta \\ \sin\theta & 1 + \cos\theta \end{pmatrix}$

is idempotent.
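The angle parametrization can be checked numerically; this NumPy sketch sweeps θ and confirms that each resulting matrix is idempotent with trace 1:

```python
import numpy as np

# For any theta, this 2x2 matrix is idempotent, with trace 1.
for theta in np.linspace(0.0, 2.0 * np.pi, 9):
    A = 0.5 * np.array([[1.0 - np.cos(theta), np.sin(theta)],
                        [np.sin(theta), 1.0 + np.cos(theta)]])
    assert np.allclose(A @ A, A)
    assert np.isclose(np.trace(A), 1.0)
```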

However, $b = c$ is not a necessary condition: any matrix $\begin{pmatrix} a & b \\ c & 1 - a \end{pmatrix}$ with $a^2 + bc = a$ is idempotent.

Properties

The only non-singular idempotent matrix is the identity matrix; that is, if a non-identity matrix is idempotent, its number of independent rows (and columns) is less than its number of rows (and columns).

This can be seen from writing $A^2 = A$, assuming that $A$ has full rank (is non-singular), and pre-multiplying by $A^{-1}$ to obtain $A = IA = A^{-1}A^2 = A^{-1}A = I$.

When an idempotent matrix is subtracted from the identity matrix, the result is also idempotent.

This holds since $(I - A)(I - A) = I - A - A + A^2 = I - A - A + A = I - A.$

If a matrix $A$ is idempotent then for all positive integers $n$, $A^n = A$. This can be shown by induction: $A^1 = A$ trivially, and if $A^{n-1} = A$, then $A^n = A^{n-1}A = AA = A$.
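Both facts are easy to confirm numerically; a minimal NumPy sketch (the matrix A is an illustrative idempotent example, not taken from the text):

```python
import numpy as np

A = np.array([[3.0, -6.0],
              [1.0, -2.0]])          # idempotent: A @ A == A
I = np.eye(2)

# I - A is idempotent as well.
assert np.allclose((I - A) @ (I - A), I - A)

# A**n == A for every positive integer n.
for n in range(1, 6):
    assert np.allclose(np.linalg.matrix_power(A, n), A)
```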

An idempotent matrix is always diagonalizable.

Its eigenvalues are either 0 or 1: if $\mathbf{x}$ is a non-zero eigenvector of some idempotent matrix $A$ and $\lambda$ is its associated eigenvalue, then $\lambda\mathbf{x} = A\mathbf{x} = A^2\mathbf{x} = A\lambda\mathbf{x} = \lambda A\mathbf{x} = \lambda^2\mathbf{x},$ which implies $\lambda \in \{0, 1\}$.

This further implies that the determinant of an idempotent matrix is always 0 or 1.

The trace of an idempotent matrix — the sum of the elements on its main diagonal — equals the rank of the matrix and thus is always an integer.

This provides an easy way of computing the rank, or alternatively an easy way of determining the trace of a matrix whose elements are not specifically known (which is helpful in statistics, for example, in establishing the degree of bias in using a sample variance as an estimate of a population variance).
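A small NumPy illustration of the trace–rank equality, using an orthogonal projection onto a two-dimensional column space (the matrix X here is an arbitrary illustrative choice):

```python
import numpy as np

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
P = X @ np.linalg.inv(X.T @ X) @ X.T   # projects onto col(X)

assert np.allclose(P @ P, P)           # P is idempotent
# Trace equals rank; both are 2 here.
assert np.isclose(np.trace(P), np.linalg.matrix_rank(P))
```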

In regression analysis, the matrix $M = I - X(X^\mathsf{T}X)^{-1}X^\mathsf{T}$ is known to produce the residuals $e$ from the regression of the vector of dependent variables $y$ on the matrix of covariates $X$ (this is discussed further below). Now, let $X_1$ be a matrix formed from a subset of the columns of $X$, and let $M_1 = I - X_1(X_1^\mathsf{T}X_1)^{-1}X_1^\mathsf{T}$. The matrices $M$ and $M_1$ are both idempotent, and $MX_1 = 0$, or in other words, the residuals from the regression of the columns of $X_1$ on $X$ are all zero, since those columns are reproduced exactly. It follows that $MM_1 = M$ (by direct substitution it is also straightforward to show that $M_1M = M$, since both matrices are symmetric).

These results play a key role, for example, in the derivation of the F test.
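These identities can be verified numerically; a minimal NumPy sketch with randomly generated covariates (the dimensions and seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))   # full-rank covariate matrix
X1 = X[:, :2]                      # subset of the columns of X

n = X.shape[0]
M = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T
M1 = np.eye(n) - X1 @ np.linalg.inv(X1.T @ X1) @ X1.T

assert np.allclose(M @ M, M) and np.allclose(M1 @ M1, M1)
assert np.allclose(M @ X1, 0.0)    # X1 is fitted exactly by X
assert np.allclose(M @ M1, M)      # hence M @ M1 == M
assert np.allclose(M1 @ M, M)      # and M1 @ M == M by symmetry
```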

Idempotency is conserved under a change of basis.

This can be shown through multiplication of the transformed matrix $SAS^{-1}$ with itself, where $S$ is any invertible matrix: $(SAS^{-1})(SAS^{-1}) = SA(S^{-1}S)AS^{-1} = SA^2S^{-1} = SAS^{-1}.$
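A quick NumPy check, using an illustrative idempotent matrix A and an arbitrary invertible change-of-basis matrix S:

```python
import numpy as np

A = np.array([[3.0, -6.0],
              [1.0, -2.0]])          # idempotent
S = np.array([[2.0, 1.0],
              [1.0, 1.0]])           # invertible (det = 1)

B = S @ A @ np.linalg.inv(S)         # A in the new basis
assert np.allclose(B @ B, B)         # still idempotent
```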

Applications

Idempotent matrices arise frequently in regression analysis and econometrics.

For example, in ordinary least squares, the regression problem is to choose a vector $\beta$ of coefficient estimates so as to minimize the sum of squared residuals (mispredictions) $e_i$: in matrix form,

Minimize $(y - X\beta)^\mathsf{T}(y - X\beta),$

where $y$ is a vector of dependent variable observations, and $X$ is a matrix each of whose columns is a column of observations on one of the independent variables.

The resulting estimator is

$\hat\beta = (X^\mathsf{T}X)^{-1}X^\mathsf{T}y,$

where superscript T indicates a transpose, and the vector of residuals is[2]

$\hat{e} = y - X\hat\beta = y - X(X^\mathsf{T}X)^{-1}X^\mathsf{T}y = [I - X(X^\mathsf{T}X)^{-1}X^\mathsf{T}]y = My.$

Here both $M$ and $X(X^\mathsf{T}X)^{-1}X^\mathsf{T}$ (the latter being known as the hat matrix) are idempotent and symmetric matrices, a fact which allows simplification when the sum of squared residuals is computed:

$\hat{e}^\mathsf{T}\hat{e} = (My)^\mathsf{T}(My) = y^\mathsf{T}M^\mathsf{T}My = y^\mathsf{T}MMy = y^\mathsf{T}My.$

The idempotency of $M$ plays a role in other calculations as well, such as in determining the variance of the estimator $\hat\beta$.
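The simplification above can be demonstrated end-to-end on synthetic data; a minimal NumPy sketch (coefficients, sample size, and noise level are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(30), rng.standard_normal((30, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + 0.1 * rng.standard_normal(30)

H = X @ np.linalg.inv(X.T @ X) @ X.T     # hat matrix
M = np.eye(30) - H                       # residual-maker matrix

beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
e = y - X @ beta_hat                     # residuals, equal to M @ y

assert np.allclose(e, M @ y)
assert np.allclose(M @ M, M) and np.allclose(M, M.T)
assert np.allclose(e @ e, y @ M @ y)     # e'e simplifies to y'My
```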

An idempotent linear operator $P$ is a projection operator on the range space $R(P)$ along its null space $N(P)$. $P$ is an orthogonal projection operator if and only if it is idempotent and symmetric.
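The distinction can be illustrated numerically: a minimal NumPy sketch contrasting an oblique (non-symmetric) idempotent matrix with a symmetric one, for which the residual of the projection is orthogonal to the range:

```python
import numpy as np

# An oblique projection: idempotent but not symmetric.
P_oblique = np.array([[1.0, 1.0],
                      [0.0, 0.0]])
assert np.allclose(P_oblique @ P_oblique, P_oblique)
assert not np.allclose(P_oblique, P_oblique.T)

# A symmetric idempotent matrix is an orthogonal projection:
# v - P @ v is orthogonal to the range of P.
P = 0.5 * np.array([[1.0, 1.0],
                    [1.0, 1.0]])     # projects onto span{(1, 1)}
v = np.array([3.0, -1.0])
assert np.allclose(P @ P, P) and np.allclose(P, P.T)
assert np.isclose((v - P @ v) @ (P @ v), 0.0)
```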