In linear algebra, the trace of a square matrix A, denoted tr(A),[1] is the sum of the elements on its main diagonal, a11 + a22 + ⋯ + ann.
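As a concrete illustration, the definition can be sketched in a few lines of pure Python (the helper function `trace` is defined here for illustration, not taken from any library):

```python
# The trace: sum of the main-diagonal entries of a square matrix,
# represented here as a list of rows.
def trace(A):
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(trace(A))   # 1 + 5 + 9 = 15
```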
The trace of a matrix equals the trace of its transpose, tr(A) = tr(A^T). This follows immediately from the fact that transposing a square matrix does not affect the elements along the main diagonal.
The Frobenius inner product, ⟨A, B⟩ = tr(A^T B), is a natural inner product on the vector space of all real matrices of fixed dimensions.
The Frobenius inner product and norm arise frequently in matrix calculus and statistics.
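A quick numerical check (a minimal pure-Python sketch; the helpers `trace`, `matmul`, and `transpose` are ours) confirms that tr(A^T B) agrees with the sum of entrywise products:

```python
def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# The Frobenius inner product computed two ways:
lhs = trace(matmul(transpose(A), B))                              # tr(A^T B)
rhs = sum(A[i][j] * B[i][j] for i in range(2) for j in range(2))  # entrywise sum
print(lhs, rhs)   # both 70
```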
The trace of A equals the sum of its eigenvalues; this holds true even if A is a real matrix and some (or all) of the eigenvalues are complex numbers.
This may be regarded as a consequence of the existence of the Jordan canonical form, together with the similarity-invariance of the trace discussed above.
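The similarity-invariance tr(P⁻¹AP) = tr(A) can be checked numerically (a minimal pure-Python sketch with an explicit 2×2 change of basis; the helpers are ours):

```python
def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1],
     [0, 3]]
P = [[1, 1],
     [0, 1]]       # invertible change-of-basis matrix (det = 1)
Pinv = [[1, -1],
        [0, 1]]    # its inverse

sim = matmul(matmul(Pinv, A), P)   # P^-1 A P, similar to A
print(trace(sim), trace(A))        # both 5
```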
One can state this as "the trace is a map of Lie algebras gln → k from operators to scalars", as the commutator of scalars is trivial (it is an Abelian Lie algebra).
Conversely, any square matrix with zero trace is a linear combination of the commutators of pairs of matrices.
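That every commutator is traceless, tr(AB − BA) = tr(AB) − tr(BA) = 0, can be verified on a small example (a minimal pure-Python sketch; the helpers are ours):

```python
def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]

AB = matmul(A, B)
BA = matmul(B, A)
comm = [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]
print(trace(comm))   # 0: commutators are always traceless
```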
If A is nilpotent, then tr(A^k) = 0 for all k ≥ 1. When the characteristic of the base field is zero, the converse also holds: if tr(A^k) = 0 for all k ≥ 1, then A is nilpotent.
If A is a linear operator represented by a square matrix with real or complex entries and if λ1, ..., λn are the eigenvalues of A (listed according to their algebraic multiplicities), then tr(A) = λ1 + λ2 + ⋯ + λn.
This follows from the fact that A is always similar to its Jordan form, an upper triangular matrix having λ1, ..., λn on the main diagonal.
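For a 2×2 matrix the eigenvalues can be found by hand from the characteristic polynomial x² − tr(A)x + det(A), making the identity easy to check (a minimal pure-Python sketch; the matrix and variable names are illustrative):

```python
import math

A = [[4, 1],
     [2, 3]]

tr = A[0][0] + A[1][1]                     # 7
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]    # 10

# Eigenvalues from the characteristic polynomial x^2 - tr*x + det,
# via the quadratic formula (real roots here since the discriminant > 0).
disc = math.sqrt(tr*tr - 4*det)
l1, l2 = (tr + disc) / 2, (tr - disc) / 2
print(l1, l2, l1 + l2)   # 5.0 2.0 7.0 -- the eigenvalue sum equals tr(A)
```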
Everything in the present section applies as well to any square matrix with coefficients in an algebraically closed field.
Precisely, this means that the trace is the derivative of the determinant function at the identity matrix: det(I + εA) = 1 + ε tr(A) + O(ε²).
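This derivative characterization can be checked numerically with a finite difference (a minimal pure-Python sketch for a 2×2 matrix; the step size and matrix are illustrative):

```python
A = [[1, 2],
     [3, 4]]
tr = A[0][0] + A[1][1]   # 5

def det_I_plus(t):
    # det(I + t*A) for the 2x2 matrix A above
    a = 1 + t * A[0][0]
    b = t * A[0][1]
    c = t * A[1][0]
    d = 1 + t * A[1][1]
    return a * d - b * c

# Central finite difference of t |-> det(I + t*A) at t = 0.
t = 1e-6
deriv = (det_I_plus(t) - det_I_plus(-t)) / (2 * t)
print(deriv, tr)   # derivative of the determinant at I equals tr(A) = 5
```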
A related characterization of the trace applies to linear vector fields.
By the divergence theorem, one can interpret this in terms of flows: if F(x) = Ax represents the velocity of a fluid at location x and U is a region in Rn, the net flow of the fluid out of U is given by tr(A) · vol(U), where vol(U) is the volume of U.
In general, given some linear map f : V → V (where V is a finite-dimensional vector space), we can define the trace of this map by considering the trace of a matrix representation of f, that is, choosing a basis for V and describing f as a matrix relative to this basis, and taking the trace of this square matrix.
The result does not depend on the basis chosen, since different bases give rise to similar matrices; this allows a basis-independent definition of the trace of a linear map.
The matrices of trace zero form the Lie algebra sln, which is the Lie algebra of the special linear group of matrices with determinant 1. The special linear group consists of the matrices which do not change volume, while the special linear Lie algebra consists of the matrices which do not alter the volume of infinitesimal sets.
The projection map onto scalar operators can be expressed in terms of the trace, concretely as: A ↦ (1/n) tr(A) I. Formally, one can compose the trace with the unit map k → gln of "inclusion of scalars" to obtain a map gln → gln mapping onto scalars, and multiplying by n. Dividing by n makes this a projection, yielding the formula above.
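That the map A ↦ (1/n) tr(A) I really is a projection (i.e. applying it twice is the same as applying it once) can be verified directly (a minimal pure-Python sketch; the helper `proj` is ours):

```python
def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def proj(A):
    # A |-> (tr(A)/n) * I : projection of gl_n onto the scalar matrices
    n = len(A)
    c = trace(A) / n
    return [[c if i == j else 0 for j in range(n)] for i in range(n)]

A = [[1, 2],
     [3, 5]]
P1 = proj(A)        # [[3.0, 0], [0, 3.0]]
P2 = proj(P1)       # applying the map twice changes nothing
print(P1 == P2)     # True: the map is idempotent, i.e. a projection
```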
The trace splits the Lie algebra as gln = sln ⊕ k, but the analogous splitting of the determinant would be as the nth root times scalars, and this does not in general define a function, so the determinant does not split and the general linear group does not decompose: GLn ≠ SLn × k×.
The concept of trace of a matrix is generalized to the trace class of compact operators on Hilbert spaces, and the analog of the Frobenius norm is called the Hilbert–Schmidt norm.
Such a trace is not uniquely defined; it can always at least be modified by multiplication by a nonzero scalar.
The trace has also been generalized to an operator that operates on block matrices, which can be used to compute second-order perturbation solutions to dynamic economic models without the need for tensor notation.
The universal property of the tensor product V ⊗ V∗ automatically implies that this bilinear map is induced by a linear functional on V ⊗ V∗.
[9] This fundamental fact is a straightforward consequence of the existence of a (finite) basis of V, and can also be phrased as saying that any linear map V → V can be written as the sum of (finitely many) rank-one linear maps.
In the present perspective, one is considering linear maps S and T, and viewing them as sums of rank-one maps, so that there are linear functionals φi and ψj and nonzero vectors vi and wj such that S(u) = Σφi(u)vi and T(u) = Σψj(u)wj for any u in V. Then (S ∘ T)(u) = Σi,j ψj(u)φi(wj)vi for any u in V. The rank-one linear map u ↦ ψj(u)φi(wj)vi has trace ψj(vi)φi(wj), and so tr(S ∘ T) = Σi,j ψj(vi)φi(wj). Following the same procedure with S and T reversed, one finds exactly the same formula, proving that tr(S ∘ T) equals tr(T ∘ S).
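The identity tr(ST) = tr(TS) can be checked on matrices directly, and it holds even when the two factors are rectangular, provided both products are defined (a minimal pure-Python sketch; the helpers are ours):

```python
def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# tr(ST) = tr(TS) holds even when S and T are not square,
# as long as both products are defined.
S = [[1, 2, 3],
     [4, 5, 6]]    # 2x3
T = [[7, 8],
     [9, 10],
     [11, 12]]     # 3x2

st = trace(matmul(S, T))   # trace of a 2x2 product
ts = trace(matmul(T, S))   # trace of a 3x3 product
print(st, ts)              # both 212
```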
The above proof can be regarded as being based upon tensor products, given that the fundamental identity of End(V) with V ⊗ V∗ is equivalent to the expressibility of any linear map as the sum of rank-one linear maps.
[9] These structures can be axiomatized to define categorical traces in the abstract setting of category theory.