Here an associative algebra is a (not necessarily unital) ring.
If the algebra is not unital, it may be made so in a standard way (see the adjoint functors page); there is no essential difference between modules for the resulting unital ring, in which the identity acts by the identity mapping, and representations of the algebra.
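As a brief sketch of how this unitalization works in practice (the class below is invented purely for illustration, and the non-unital algebra is taken, for concreteness, to be the strictly upper triangular 2 × 2 real matrices):

```python
# A minimal sketch of the unitalization K ⊕ A, assuming the non-unital algebra A
# is realized as strictly upper triangular 2x2 real matrices (a non-unital
# algebra under matrix multiplication).
import numpy as np

class Unitalization:
    """Element (s, a) of K ⊕ A with product (s, a)(t, b) = (st, sb + ta + ab)."""
    def __init__(self, scalar, mat):
        self.scalar = scalar                      # component in K = R
        self.mat = np.asarray(mat, dtype=float)   # component in A

    def __mul__(self, other):
        return Unitalization(
            self.scalar * other.scalar,
            self.scalar * other.mat + other.scalar * self.mat + self.mat @ other.mat,
        )

one = Unitalization(1.0, np.zeros((2, 2)))          # the adjoined identity (1, 0)
a   = Unitalization(0.0, [[0.0, 2.0], [0.0, 0.0]])  # an element (0, a) of A

print((one * a).mat)  # the identity acts by the identity mapping: recovers a
print((a * a).mat)    # a^2 = 0 inside A (strictly upper triangular)
```

A module over this unital ring in which (1, 0) acts by the identity mapping is then the same thing as a representation of the original algebra.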
One of the simplest non-trivial examples is a linear complex structure, which is a representation of the complex numbers C, thought of as an associative algebra over the real numbers R. This algebra is realized concretely as C = R[x]/(x^2 + 1), which corresponds to i^2 = -1. Then a representation of C is a real vector space V, together with an action of C on V (a map C → End(V)).
Concretely, this is just an action of i, as this generates the algebra, and the operator representing i (the image of i in End(V)) is denoted J to avoid confusion with the identity matrix I.
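For instance, a minimal numerical sketch (with J taken to be rotation by 90° on R^2; any operator squaring to -I would serve):

```python
# Sketch: a linear complex structure on R^2, i.e. an operator J with J^2 = -I.
# Here J is the standard 90-degree rotation; any matrix squaring to -I works.
import numpy as np

I = np.eye(2)
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])    # the operator representing i in End(R^2)

assert np.allclose(J @ J, -I)  # the defining relation i^2 = -1

def act(a, b, v):
    """Action of the complex number a + bi on the real vector v."""
    return a * v + b * (J @ v)

print(act(1.0, 2.0, np.array([1.0, 0.0])))  # (1 + 2i) acting on (1, 0) gives (1, 2)
```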
Another important basic class of examples consists of representations of polynomial algebras, the free commutative algebras – these form a central object of study in commutative algebra and its geometric counterpart, algebraic geometry.
A representation of a polynomial algebra in k variables over the field K is concretely a K-vector space with k commuting operators, and is often denoted K[T_1, ..., T_k], meaning the representation of the abstract algebra K[x_1, ..., x_k] in which x_i acts as T_i.
A basic result about such representations is that, over an algebraically closed field, the representing matrices are simultaneously triangularisable.
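A minimal sketch of such a representation (the two commuting matrices below are chosen purely for illustration; being diagonal, they are in particular already simultaneously triangular):

```python
# Sketch: a representation of the polynomial algebra R[x, y] on R^3, given by
# two commuting operators T1 (representing x) and T2 (representing y).
import numpy as np

T1 = np.diag([1.0, 2.0, 3.0])
T2 = np.diag([2.0, 2.0, 5.0])

assert np.allclose(T1 @ T2, T2 @ T1)   # the relation xy = yx must be respected

# The polynomial p(x, y) = x^2 + 3xy - y then acts by p(T1, T2):
p_of_T = T1 @ T1 + 3.0 * (T1 @ T2) - T2
print(p_of_T)
```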
Even the case of representations of the polynomial algebra in a single variable is of interest – this is denoted by K[T] –
and is used in understanding the structure of a single linear operator on a finite-dimensional vector space.
Specifically, applying the structure theorem for finitely generated modules over a principal ideal domain to this algebra yields as corollaries the various canonical forms of matrices, such as Jordan canonical form.
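As a brief illustration (a sketch using SymPy's Matrix.jordan_form; the matrix is chosen so that it has a single non-trivial Jordan block):

```python
# Sketch: the Jordan canonical form of a single operator T, which reflects the
# K[T]-module structure on the underlying vector space.  Computed with SymPy.
from sympy import Matrix

T = Matrix([[3, 1],
            [-1, 1]])       # characteristic polynomial (t - 2)^2, not diagonalizable

P, J = T.jordan_form()      # T = P * J * P**(-1)
print(J)                    # Matrix([[2, 1], [0, 2]]): a single 2x2 Jordan block
assert T == P * J * P.inv()
```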
In some approaches to noncommutative geometry, the free noncommutative algebra (polynomials in non-commuting variables) plays a similar role, but the analysis is much more difficult.
Eigenvalues and eigenvectors can be generalized to algebra representations.
The generalization of an eigenvalue of an algebra representation is, rather than a single scalar, a one-dimensional representation λ: A → R (i.e., an algebra homomorphism from the algebra to its underlying ring: a linear functional that is also multiplicative). This is known as a weight, and the analogs of an eigenvector and an eigenspace are called a weight vector and a weight space.
The case of the eigenvalue of a single operator corresponds to the algebra R[T], and an algebra map R[T] → R is determined by which scalar it maps the generator T to.
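For example, a small numerical check (matrix, eigenvector, and polynomial chosen arbitrarily) that on an eigenvector every element of R[T] acts by the scalar given by this algebra map:

```python
# Sketch: for an eigenvector v with T v = lam * v, every element p(T) of R[T]
# acts on v by the scalar p(lam), i.e. by the algebra map R[T] -> R, T |-> lam.
import numpy as np

T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam = 3.0
v = np.array([1.0, 1.0])              # eigenvector: T v = 3 v

p_of_T = T @ T - 4.0 * T + np.eye(2)  # p(t) = t^2 - 4t + 1
p_of_lam = lam**2 - 4.0 * lam + 1.0   # p(3) = -2

assert np.allclose(T @ v, lam * v)
assert np.allclose(p_of_T @ v, p_of_lam * v)
```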
A weight vector for an algebra representation is a vector such that any element of the algebra maps this vector to a multiple of itself – a one-dimensional submodule (subrepresentation).
As the action of the algebra on the module is bilinear, "which multiple" is a linear functional on A (an algebra map A → R), namely the weight.
Since a weight is a map to a commutative ring, the map factors through the abelianization of the algebra – equivalently, it vanishes on the derived algebra – in terms of matrices, if v is a common eigenvector of the operators T and U, then TUv = UTv (because in both cases it is just multiplication by scalars), so common eigenvectors of an algebra must be in the set on which the algebra acts commutatively (which is annihilated by the derived algebra).
In this particularly simple and important case of the polynomial algebra K[T_1, ..., T_k] in a set of commuting matrices, a weight vector of this algebra is a simultaneous eigenvector of the matrices, while a weight of this algebra is simply a k-tuple of scalars λ = (λ_1, ..., λ_k) corresponding to the eigenvalue of each matrix, and hence geometrically to a point in k-space K^k.
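Concretely, a small sketch (with two commuting diagonal matrices chosen for illustration):

```python
# Sketch: for commuting matrices T1, T2 a weight vector is a simultaneous
# eigenvector, and the weight is the tuple (lam1, lam2) of eigenvalues,
# geometrically a point in K^2.
import numpy as np

T1 = np.diag([1.0, 2.0, 3.0])
T2 = np.diag([4.0, 5.0, 6.0])
assert np.allclose(T1 @ T2, T2 @ T1)

v = np.array([0.0, 1.0, 0.0])          # a common eigenvector
weight = (v @ T1 @ v, v @ T2 @ v)      # (lam1, lam2) = (2.0, 5.0)

assert np.allclose(T1 @ v, weight[0] * v)
assert np.allclose(T2 @ v, weight[1] * v)
print(weight)                          # the point (2.0, 5.0) in R^2
```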
These weights – in particular their geometry – are of central importance in understanding the representation theory of Lie algebras, specifically the finite-dimensional representations of semisimple Lie algebras.
As an application of this geometry, given an algebra that is a quotient of the polynomial algebra on k generators, it corresponds geometrically to an algebraic variety in k-dimensional space, and any weight must lie on the variety – that is, it satisfies the defining equations of the variety.
This generalizes the fact that eigenvalues satisfy the characteristic polynomial of a matrix in one variable.
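For instance, a hedged numerical sketch: if two commuting operators are forced to satisfy the relation y = x^2, so that they give a representation of the quotient algebra K[x, y]/(y - x^2), then every weight (λ_1, λ_2) lies on the parabola λ_2 = λ_1^2:

```python
# Sketch: operators T1, T2 with T2 = T1^2 represent the quotient algebra
# K[x, y]/(y - x^2); every weight (lam1, lam2) then lies on the variety y = x^2.
import numpy as np

T1 = np.diag([1.0, 2.0, 3.0])
T2 = T1 @ T1                           # enforce the defining relation y = x^2

for i in range(3):
    v = np.eye(3)[i]                   # each standard basis vector is a weight vector
    lam1, lam2 = v @ T1 @ v, v @ T2 @ v
    assert np.isclose(lam2, lam1**2)   # the weight satisfies the variety's equation
    print((lam1, lam2))
```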