Pre-multiplying an n-row matrix M by a permutation matrix P, forming PM, permutes the rows of M, while post-multiplying an n-column matrix M, forming MP, permutes the columns of M. Every permutation matrix P is orthogonal, with its inverse equal to its transpose: P−1 = PT. Indeed, the permutation matrices are precisely the orthogonal matrices whose entries are all nonnegative.[2]
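As a concrete illustration, here is a minimal NumPy sketch (not taken from the cited sources; the permutation and the matrix are arbitrary examples) of the row and column actions just described:

```python
import numpy as np

perm = [2, 0, 1]            # an arbitrary permutation of {0, 1, 2}
P = np.eye(3)[perm]         # a 3x3 permutation matrix: the rows of I, reordered

M = np.arange(9.0).reshape(3, 3)

print(P @ M)                # PM: the rows of M appear in a new order
print(M @ P)                # MP: the columns of M appear in a new order
```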
There are two natural one-to-one correspondences between permutations and permutation matrices, one of which works along the rows of the matrix, the other along its columns.
Here is an example, starting with a permutation π of four elements in two-line form:

$$\pi = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 3 & 2 & 4 & 1 \end{pmatrix}.$$

The row-based correspondence takes the permutation π to the matrix Rπ whose row i has its single 1 in column π(i); that is, Rπ = (rij) with rij = 1 when j = π(i) and rij = 0 otherwise. For the example permutation,

$$R_\pi = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{pmatrix}.$$

The column-based correspondence takes π to the matrix Cπ whose column j has its single 1 in row π(j); that is, Cπ = (cij) with cij = 1 when i = π(j) and cij = 0 otherwise. Equivalently, Cπ is the transpose of Rπ:

$$C_\pi = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}.$$
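The following sketch (our own, using 0-based indices rather than the 1-based two-line form above) builds both matrices for the example permutation and confirms that each is the transpose of the other; in general Cπ = RπT, which is also the row-based matrix of π−1.

```python
import numpy as np

# pi in one-line form, 0-indexed: the example permutation 3 2 4 1 shifted down by one.
pi = [2, 1, 3, 0]
n = len(pi)

R = np.zeros((n, n), dtype=int)
C = np.zeros((n, n), dtype=int)
for i, j in enumerate(pi):          # j = pi(i)
    R[i, j] = 1                     # row-based: row i has its 1 in column pi(i)
    C[j, i] = 1                     # column-based: column i has its 1 in row pi(i)

print(R)
print(C)
print(np.array_equal(C, R.T))       # True: the two matrices are transposes of each other
```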
Multiplying a matrix M by either Rπ or Cπ on either the left or the right will permute either the rows or columns of M by either π or π−1, and the details depend on which matrix and which side are used. To see why, first consider permuting the entries of a vector (v1, …, vn) by π: the entry vi of the input vector moves into the π(i)-th slot of the output vector, so the entry that lands in slot i is the one whose original index is π−1(i). Arguing similarly about each of the slots, we find that the output vector is

$$\bigl(v_{\pi^{-1}(1)},\, v_{\pi^{-1}(2)},\, \dots,\, v_{\pi^{-1}(n)}\bigr),$$

even though we are permuting by π, not by π−1. Thus, to permute the entries by π, we must permute the indices by π−1. Permuting the entries by π is sometimes called taking the alibi viewpoint, while permuting the indices by π−1 takes the alias viewpoint.

Now pre-multiply an n-row column vector g = (g1, …, gn)T by Cπ. Row i of Cπ has its single 1 in column π−1(i), so the i-th entry of the product Cπg is the entry of g in position π−1(i); by the observation above, pre-multiplying by Cπ therefore permutes the entries of g by π.[1]: 25 A similar argument shows that post-multiplying an n-column matrix M by Rπ permutes its columns by π. The remaining two options, pre-multiplying by Rπ and post-multiplying by Cπ, permute the rows or columns by π−1 instead.
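The four cases can be checked numerically. In the sketch below (ours; the permutation is an arbitrary example and inv denotes its inverse), the comments record which permutation acts in each case:

```python
import numpy as np

pi = [2, 1, 3, 0]                      # pi(i), 0-indexed
inv = np.argsort(pi)                   # the inverse permutation pi^{-1}
n = len(pi)

C = np.zeros((n, n), dtype=int)
C[pi, np.arange(n)] = 1                # column-based matrix C_pi
R = C.T                                # row-based matrix R_pi

g = np.array([10.0, 20.0, 30.0, 40.0])

print(np.allclose(C @ g, g[inv]))      # True: pre-multiplying by C permutes the entries by pi
print(np.allclose(g @ R, g[inv]))      # True: post-multiplying a row vector by R permutes by pi
print(np.allclose(R @ g, g[pi]))       # True: pre-multiplying by R permutes by pi^{-1}
print(np.allclose(g @ C, g[pi]))       # True: post-multiplying by C permutes by pi^{-1}
```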
A related argument proves that, as we claimed above, the transpose of any permutation matrix P also acts as its inverse, which implies that P is invertible.
(Artin leaves that proof as an exercise,[1]: 26 which we here solve.)
Write P = (pij). The (i, j) entry of PTP is

$$\sum_{k=1}^{n} p_{ki}\,p_{kj}.$$

If i = j, this sum counts the 1s in column i of P, so it equals 1. If i ≠ j, then each term in this sum is the product of two different entries in the k-th row of P; since each row of P contains only a single 1, at least one of the two factors is 0, so every term vanishes and the sum is 0. Hence PTP is the identity matrix, and a symmetric computation shows that PPT is as well, so PT is a two-sided inverse of P.
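A quick numerical spot-check of this identity (a small sketch of ours, over a few randomly chosen permutation matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (3, 5, 8):
    P = np.eye(n)[rng.permutation(n)]           # a random n x n permutation matrix
    assert np.array_equal(P.T @ P, np.eye(n))   # the transpose is a left inverse ...
    assert np.array_equal(P @ P.T, np.eye(n))   # ... and a right inverse
print("P^T P = P P^T = I verified")
```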
Given two permutations of n elements 𝜎 and 𝜏, the product of the corresponding column-based permutation matrices Cσ and Cτ is given,[1]: 25 as you might expect, by

$$C_\sigma C_\tau = C_{\sigma\circ\tau},$$

where the composed permutation 𝜎∘𝜏 applies first 𝜏 and then 𝜎, working from right to left: (𝜎∘𝜏)(k) = 𝜎(𝜏(k)). For the row-based matrices, there is a twist: the product of Rσ and Rτ is given by

$$R_\sigma R_\tau = R_{\tau\circ\sigma},$$

with 𝜎 applied before 𝜏 in the composed permutation.
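Both composition rules can be confirmed numerically. In this sketch (ours; C and R are local helpers building the column- and row-based matrices as defined above, and the two permutations are arbitrary), note that composing permutations given as arrays amounts to indexing one by the other:

```python
import numpy as np

def C(pi):                          # column-based matrix: column j has its 1 in row pi(j)
    n = len(pi)
    M = np.zeros((n, n), dtype=int)
    M[np.asarray(pi), np.arange(n)] = 1
    return M

def R(pi):                          # row-based matrix: the transpose of the column-based one
    return C(pi).T

sigma = np.array([1, 3, 0, 2])
tau   = np.array([2, 0, 3, 1])

sigma_after_tau = sigma[tau]        # (sigma o tau)(k) = sigma(tau(k))
tau_after_sigma = tau[sigma]

print(np.array_equal(C(sigma) @ C(tau), C(sigma_after_tau)))   # True
print(np.array_equal(R(sigma) @ R(tau), R(tau_after_sigma)))   # True: the "twist"
```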
Some authors prefer to compose permutations from left to right, writing 𝜎;𝜏 for the permutation that applies 𝜎 first and then 𝜏, so that 𝜎;𝜏 = 𝜏∘𝜎. When doing linear algebra, they work with linear spaces of row vectors, and they apply a linear map to an argument by using the map's matrix to post-multiply the argument's row vector. That notation gives us a simpler rule for multiplying row-based permutation matrices: Rσ Rτ = Rσ;τ. When π is the identity permutation, which has π(i) = i for every i, both Cπ and Rπ are the identity matrix, and the multiplication rules then show that the inverse of Cπ is Cπ−1, the matrix of the inverse permutation π−1, and similarly for Rπ.
By the formulas above, those n × n permutation matrices form a group of order n! under matrix multiplication, with the identity matrix as its identity element. This group is a subgroup of the general linear group GLn(F) of invertible n × n matrices, where the matrix entries belong to a field F. (Every field contains 0 and 1 with 0 + 1 = 1, 0 · 1 = 0, and 1 · 1 = 1, so permutation matrices, whose entries are all 0s and 1s, make sense over any field, and the multiplication formulas above hold in any field.) Let $S_n$ denote the symmetric group of permutations of {1, …, n} under the right-to-left composition ∘, and let $S_n^{\mathrm{op}}$ denote the opposite group, which uses the left-to-right composition ";". Since Cσ Cτ = Cσ∘τ, the map from $S_n$ to GLn(F) that takes π to its column-based matrix Cπ is an injective group homomorphism, that is, a faithful representation; likewise, since Rσ Rτ = Rσ;τ, the map from $S_n^{\mathrm{op}}$ that takes π to its row-based matrix Rπ is a faithful representation of the opposite group.
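A brute-force check over the six elements of the symmetric group on three letters (a small sketch of ours; C is the same local helper as above) illustrates both the order of the group and the fact that π ↦ Cπ is one-to-one and respects composition:

```python
import numpy as np
from itertools import permutations

def C(pi):                                     # column-based permutation matrix
    n = len(pi)
    M = np.zeros((n, n), dtype=int)
    M[np.asarray(pi), np.arange(n)] = 1
    return M

perms = list(permutations(range(3)))           # all 3! = 6 permutations
mats = [C(p) for p in perms]
print(len({m.tobytes() for m in mats}))        # 6 distinct matrices: the map is one-to-one

# C_sigma C_tau = C_{sigma o tau} for every pair, so the matrices are closed under
# multiplication and form a group isomorphic to the symmetric group.
ok = all(
    np.array_equal(C(s) @ C(t), C(tuple(s[t[k]] for k in range(3))))
    for s in perms for t in perms
)
print(ok)                                      # True
```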
The set of all n × n doubly stochastic matrices is called the Birkhoff polytope, and the permutation matrices play a special role in that polytope.
The Birkhoff–von Neumann theorem says that every doubly stochastic real matrix is a convex combination of permutation matrices of the same order, with the permutation matrices being precisely the extreme points (the vertices) of the Birkhoff polytope.
The Birkhoff polytope is thus the convex hull of the permutation matrices.
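The theorem has a constructive side: one can peel permutation matrices off a doubly stochastic matrix one at a time. The sketch below is ours, not from the cited sources; the helper name birkhoff_decomposition is made up for illustration, and it relies on scipy.optimize.linear_sum_assignment to find, at each step, a permutation supported on the entries that are still positive.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_decomposition(D, tol=1e-12):
    """Greedily write a doubly stochastic matrix D as a convex combination sum_k w_k P_k."""
    D = D.astype(float)
    n = D.shape[0]
    terms = []
    while D.sum() > n * tol:
        # Prefer entries that are still positive: zero cost for them, unit cost otherwise.
        cost = np.where(D > tol, 0.0, 1.0)
        rows, cols = linear_sum_assignment(cost)
        P = np.zeros((n, n))
        P[rows, cols] = 1.0
        w = D[rows, cols].min()          # the largest weight we can subtract along this permutation
        if w <= tol:                     # numerical safeguard for this sketch
            break
        terms.append((w, P))
        D = D - w * P                    # the remainder is again a scaled doubly stochastic matrix
    return terms

D = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])
terms = birkhoff_decomposition(D)
print(sum(w for w, _ in terms))                          # weights sum to 1 (up to rounding)
print(np.allclose(sum(w * P for w, P in terms), D))      # the convex combination reproduces D
```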
Both Cπ and Rπ have a 1 in the diagonal position (k, k) precisely when k is a fixed point of π, that is, when π(k) = k, and the trace of P is the number of such shared fixed points.[1]: 322 If the integer k is one of them, then the standard basis vector ek is an eigenvector of P.[1]: 118 To calculate the complex eigenvalues of P, write the permutation π as a composition of disjoint cycles, say π = c1 c2 ⋯ ct, and let ℓi denote the length of the cycle ci.
(Permutations of disjoint subsets commute, so it doesn't matter here whether we are composing right-to-left or left-to-right.)
The eigenvalues of P are then the complex roots of unity determined by the cycle lengths: each cycle ci contributes all ℓi-th roots of unity, and the multiplicity of an eigenvalue v is the number of cycles ci for which the set of ℓi-th roots of unity contains v.[6] (Since any permutation matrix is normal and any normal matrix is diagonalizable over the complex numbers,[1]: 259 the algebraic and geometric multiplicities of an eigenvalue v are the same.)
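To see the cycle description in action, here is a small sketch (ours; the permutation is an arbitrary example with cycle lengths 3, 2, and 1) that collects the roots of unity cycle by cycle and compares them with numerically computed eigenvalues, and also checks the trace against the number of fixed points:

```python
import numpy as np

pi = np.array([2, 0, 1, 4, 3, 5])      # cycles (0 2 1), (3 4), (5): lengths 3, 2, 1
n = len(pi)
P = np.zeros((n, n))
P[pi, np.arange(n)] = 1                # column-based permutation matrix C_pi

# Extract the cycle lengths of pi.
seen, lengths = set(), []
for start in range(n):
    if start not in seen:
        k, ell = start, 0
        while k not in seen:
            seen.add(k)
            k = pi[k]
            ell += 1
        lengths.append(ell)

# Predicted spectrum: for each cycle of length ell, all ell-th roots of unity.
predicted = np.concatenate([np.exp(2j * np.pi * np.arange(ell) / ell) for ell in lengths])
computed = np.linalg.eigvals(P)

key = lambda z: (round(z.real, 6), round(z.imag, 6))          # compare as multisets
print(sorted(map(key, predicted)) == sorted(map(key, computed)))   # True
print(np.trace(P), sum(1 for k in range(n) if pi[k] == k))          # both count the fixed points
```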
From group theory we know that any permutation may be written as a composition of transpositions.
Therefore, any permutation matrix factors as a product of row-switching elementary matrices, each of which has determinant −1. Since the determinant is multiplicative, the determinant of a permutation matrix is −1 raised to the number of transpositions in such a factorization, which is exactly the sign of the corresponding permutation (the parity of that number does not depend on the factorization chosen).
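The sketch below (ours; the permutation is arbitrary) counts the transpositions needed to sort a permutation back to the identity, exhibits one row-switching elementary matrix with determinant −1, and checks that the determinant of the permutation matrix equals −1 raised to that count:

```python
import numpy as np

pi = [2, 0, 3, 1, 4]                    # an arbitrary permutation of {0, ..., 4}
n = len(pi)
P = np.eye(n)[pi]                       # its permutation matrix (the transpose has the same determinant)

# Count transpositions by sorting pi back to the identity, one swap at a time.
a, num_swaps = list(pi), 0
for i in range(n):
    while a[i] != i:
        j = a[i]
        a[i], a[j] = a[j], a[i]         # one transposition
        num_swaps += 1

T = np.eye(n)
T[[0, 1]] = T[[1, 0]]                   # a row-switching elementary matrix
print(round(np.linalg.det(T)))          # -1
print(round(np.linalg.det(P)), (-1) ** num_swaps)   # determinant of P equals the sign of pi
```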