Square matrix

A square matrix is a matrix with the same number of rows and columns; an n×n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied.
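
As a small illustration (the particular entries are arbitrary), the following NumPy sketch adds and multiplies two square matrices of order 2:

```python
import numpy as np

# Two arbitrary 2x2 (order-2) square matrices.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

print(A + B)   # entrywise sum, again a 2x2 matrix
print(A @ B)   # matrix product, again a 2x2 matrix
```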

Square matrices are often used to represent simple linear transformations, such as shearing or rotation.

For example, if R is a square matrix representing a rotation (a rotation matrix) and v is a column vector describing the position of a point in space, the product Rv yields another column vector describing the position of that point after that rotation.
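
For instance, a 90° rotation in the plane (the angle is chosen arbitrarily here) applied to a column vector can be computed as follows:

```python
import numpy as np

theta = np.pi / 2                      # rotate by 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation matrix
v = np.array([[1.0],
              [0.0]])                  # column vector: the point (1, 0)

print(R @ v)   # approximately [[0.], [1.]]: the point after rotation
```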

The entries aii (i = 1, ..., n) form the main diagonal of a square matrix.

They lie on the imaginary line which runs from the top left corner to the bottom right corner of the matrix.

For instance, the main diagonal of the 4×4 matrix above contains the elements a11 = 9, a22 = 11, a33 = 4, a44 = 10.

The diagonal of a square matrix from the top right to the bottom left corner is called the antidiagonal or counterdiagonal.
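
A quick NumPy sketch, using an arbitrary 4×4 matrix, extracts both diagonals:

```python
import numpy as np

# An arbitrary 4x4 matrix used only to illustrate the two diagonals.
A = np.arange(16).reshape(4, 4)

main_diagonal = np.diag(A)             # entries a11, a22, a33, a44
antidiagonal  = np.diag(np.fliplr(A))  # top right to bottom left

print(main_diagonal)   # [ 0  5 10 15]
print(antidiagonal)    # [ 3  6  9 12]
```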

A square matrix A is called invertible or non-singular if there exists a matrix B such that AB = BA = In, where In is the n×n identity matrix. If B exists, it is unique and is called the inverse matrix of A, denoted A−1.
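
As a sketch of the defining property AB = BA = In, NumPy's inverse routine can be checked against the identity matrix (the matrix chosen here is arbitrary but invertible):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])        # det(A) = 1, so A is invertible
B = np.linalg.inv(A)              # the (unique) inverse matrix

print(B)                                   # [[ 3. -1.], [-5.  2.]]
print(np.allclose(A @ B, np.eye(2)))       # True: AB = I2
print(np.allclose(B @ A, np.eye(2)))       # True: BA = I2
```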

By the spectral theorem, real symmetric (or complex Hermitian) matrices have an orthogonal (or unitary) eigenbasis; i.e., every vector is expressible as a linear combination of eigenvectors.[3]

A symmetric n×n-matrix is called positive-definite (respectively negative-definite; indefinite), if for all nonzero vectors x ∈ Rn the associated quadratic form given by Q(x) = xTAx takes only positive values (respectively only negative values; both some negative and some positive values).[4]

If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.

A symmetric matrix is positive-definite if and only if all its eigenvalues are positive.

Allowing as input two different vectors instead yields the bilinear form associated to A:[6] BA(x, y) = xTAy.
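
The following sketch evaluates the quadratic form xTAx and the bilinear form xTAy for an arbitrary symmetric matrix, and checks positive-definiteness via the sign of the eigenvalues:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # a symmetric matrix, chosen for illustration

def quadratic_form(A, x):
    return x @ A @ x              # Q(x) = x^T A x

def bilinear_form(A, x, y):
    return x @ A @ y              # B_A(x, y) = x^T A y

x = np.array([1.0, -1.0])
y = np.array([0.0,  2.0])
print(quadratic_form(A, x))               # 2.0 > 0
print(bilinear_form(A, x, y))             # -2.0

# A symmetric matrix is positive-definite iff all its eigenvalues are positive.
print(np.all(np.linalg.eigvalsh(A) > 0))  # True: the eigenvalues are 1 and 3
```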

An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (i.e., orthonormal vectors).

Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse: AT = A−1, which entails ATA = AAT = In.

An orthogonal matrix A is necessarily invertible (with inverse A−1 = AT), unitary (A−1 = A*), and normal (A*A = AA*).

If a real square matrix is symmetric, skew-symmetric, or orthogonal, then it is normal.

If a complex square matrix is Hermitian, skew-Hermitian, or unitary, then it is normal.[7]
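
For example, a rotation matrix is orthogonal; a short NumPy check (with an arbitrary angle) confirms that its transpose is its inverse and that it is normal:

```python
import numpy as np

theta = 0.7                        # an arbitrary angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])     # orthogonal (rotation) matrix

print(np.allclose(Q.T @ Q, np.eye(2)))              # True: columns are orthonormal
print(np.allclose(Q.T, np.linalg.inv(Q)))           # True: Q^T equals Q^-1
print(np.allclose(Q.conj().T @ Q, Q @ Q.conj().T))  # True: Q is normal (Q*Q = QQ*)
```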

The trace, tr(A), of a square matrix A is the sum of its diagonal entries.

While matrix multiplication is not commutative, the trace of the product of two matrices is independent of the order of the factors: tr(AB) = tr(BA).
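
A small numerical check of tr(AB) = tr(BA) with two arbitrary matrices:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

print(np.allclose(A @ B, B @ A))         # False: multiplication is not commutative
print(np.trace(A @ B), np.trace(B @ A))  # both 55: the trace does not depend on the order
```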

The determinant, det(A) or |A|, of a square matrix A is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in R2) or volume (in R3) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.

The determinant of 3×3 matrices involves 6 terms (rule of Sarrus).
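
As an illustration, the six-term rule of Sarrus can be written out directly and compared with NumPy's determinant routine (the matrix entries are arbitrary):

```python
import numpy as np

M = np.array([[2.0, 0.0, 1.0],
              [3.0, 5.0, 2.0],
              [1.0, 4.0, 6.0]])
a, b, c = M[0]
d, e, f = M[1]
g, h, i = M[2]

# Rule of Sarrus: three "downward" products minus three "upward" products.
sarrus = a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

print(sarrus)                                # 51.0
print(np.isclose(sarrus, np.linalg.det(M)))  # True
```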

Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant.

Interchanging two rows or two columns affects the determinant by multiplying it by −1.[10]

Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix.
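
A minimal sketch of this method in plain Python (a pivot search is included so the elimination also works when a diagonal entry happens to be zero): each row swap flips the sign, adding multiples of one row to another changes nothing, and the result is the product of the diagonal of the triangular matrix.

```python
def det_by_elimination(matrix):
    """Determinant via reduction to upper triangular form (Gaussian elimination)."""
    a = [row[:] for row in matrix]   # work on a copy
    n = len(a)
    det = 1.0
    for col in range(n):
        # Find a row with a nonzero pivot; swapping rows multiplies det by -1.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return 0.0               # no usable pivot: the determinant is 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det
        # Adding multiples of the pivot row to the rows below leaves det unchanged.
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
        det *= a[col][col]           # product of the diagonal entries
    return det

print(det_by_elimination([[2.0, 0.0, 1.0],
                          [3.0, 5.0, 2.0],
                          [1.0, 4.0, 6.0]]))   # 51.0 (up to rounding)
```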

Finally, the Laplace expansion expresses the determinant in terms of minors, that is, determinants of smaller matrices.[11] This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1×1 matrix, which is its unique entry, or even the determinant of a 0×0 matrix, which is 1), that can be seen to be equivalent to the Leibniz formula.
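
A sketch of that recursive definition in plain Python, expanding along the first row and using the 0×0 base case det = 1:

```python
def det_laplace(a):
    """Determinant by Laplace expansion along the first row (recursive definition)."""
    n = len(a)
    if n == 0:
        return 1          # determinant of the 0x0 matrix
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, then recurse on the smaller matrix.
        minor = [row[:j] + row[j+1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det_laplace(minor)
    return total

print(det_laplace([[2, 0, 1],
                   [3, 5, 2],
                   [1, 4, 6]]))   # 51
```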

Determinants can be used to solve linear systems using Cramer's rule, where the quotient of the determinants of two related square matrices equals the value of each of the system's variables.
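
A short NumPy sketch of Cramer's rule for an arbitrary 2×2 system: each variable equals the determinant of the matrix with the corresponding column replaced by the right-hand side, divided by det(A).

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
b = np.array([3.0, 8.0])          # solve A x = b

def cramer(A, b):
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b              # replace column i by the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

print(cramer(A, b))               # [1. 1.]
print(np.linalg.solve(A, b))      # same solution
```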

A number λ and a nonzero vector v satisfying Av = λv are called an eigenvalue and an eigenvector of A, respectively.[13][14] The number λ is an eigenvalue of an n×n-matrix A if and only if A − λIn is not invertible, which is equivalent to[15] det(A − λIn) = 0.
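
A brief numerical illustration (with an arbitrary symmetric matrix): λ is an eigenvalue exactly when det(A − λIn) vanishes, so the determinant is numerically zero at each eigenvalue returned by NumPy.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues = np.linalg.eigvals(A)
print(np.sort(eigenvalues))            # [1. 3.]

for lam in eigenvalues:
    # det(A - lambda * I) is (numerically) zero for every eigenvalue.
    print(np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0))   # True, True
```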

Figure: A square matrix of order 4. The entries a11 = 9, a22 = 11, a33 = 4, a44 = 10 form its main diagonal.
Figure: A linear transformation on R2 given by the indicated matrix. The determinant of this matrix is −1, as the area of the green parallelogram at the right is 1, but the map reverses the orientation, since it turns the counterclockwise orientation of the vectors to a clockwise one.