An orthogonal matrix $Q$ is necessarily invertible (with inverse $Q^{-1} = Q^\mathsf{T}$), unitary ($Q^{-1} = Q^*$), where $Q^*$ is the Hermitian adjoint (conjugate transpose) of $Q$, and therefore normal ($Q^*Q = QQ^*$) over the real numbers.
As a linear transformation, an orthogonal matrix preserves the inner product of vectors, and therefore acts as an isometry of Euclidean space, such as a rotation, reflection or rotoreflection.
Orthogonal matrices preserve the dot product,[1] so, for vectors $u$ and $v$ in an $n$-dimensional real Euclidean space, $u \cdot v = (Qu) \cdot (Qv)$.
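As a quick numerical check (a NumPy sketch; the angle and vectors are arbitrary example values), a plane rotation satisfies $Q^\mathsf{T}Q = I$ and leaves the dot product unchanged:

```python
import numpy as np

theta = 0.7  # arbitrary rotation angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # plane rotation

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])

print(np.allclose(Q.T @ Q, np.eye(2)))       # True: Q^T is the inverse
print(np.isclose(u @ v, (Q @ u) @ (Q @ v)))  # True: dot product preserved
```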
Because floating point versions of orthogonal matrices have advantageous properties, they are key to many algorithms in numerical linear algebra, such as QR decomposition.
Rotations become more complicated in higher dimensions; they can no longer be completely characterized by one angle, and may affect more than one planar subspace.
It is common to describe a 3 × 3 rotation matrix in terms of an axis and angle, but this only works in three dimensions.
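For illustration, the following NumPy sketch builds a $3 \times 3$ rotation from an axis and angle using the standard Rodrigues formula (the function name, axis, and angle are arbitrary example choices):

```python
import numpy as np

def rotation_from_axis_angle(axis, theta):
    """3x3 rotation about a unit axis by angle theta (Rodrigues' formula)."""
    x, y, z = axis / np.linalg.norm(axis)
    K = np.array([[0, -z,  y],
                  [z,  0, -x],
                  [-y, x,  0]])  # cross-product (skew-symmetric) matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

R = rotation_from_axis_angle(np.array([0.0, 0.0, 1.0]), np.pi / 2)
print(np.allclose(R.T @ R, np.eye(3)))      # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))    # True: a proper rotation
```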
A Householder reflection is constructed from a non-null vector $v$ as
$$Q = I - 2\frac{vv^\mathsf{T}}{v^\mathsf{T}v}.$$
Here the numerator is a symmetric matrix while the denominator is a number, the squared magnitude of $v$. This is a reflection in the hyperplane perpendicular to $v$ (negating any vector component parallel to $v$).
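A direct transcription of the formula (a NumPy sketch; the vector $v$ is an arbitrary example):

```python
import numpy as np

def householder(v):
    """Reflection in the hyperplane perpendicular to v: Q = I - 2 vv^T / v^T v."""
    v = np.asarray(v, dtype=float)
    return np.eye(len(v)) - 2.0 * np.outer(v, v) / (v @ v)

v = np.array([1.0, 2.0, 2.0])
Q = householder(v)
print(np.allclose(Q @ v, -v))             # component parallel to v is negated
print(np.allclose(Q.T @ Q, np.eye(3)))    # Q is orthogonal (and symmetric)
```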
A real square matrix is orthogonal if and only if its columns form an orthonormal basis of the Euclidean space $\mathbb{R}^n$ with the ordinary Euclidean dot product, which is the case if and only if its rows form an orthonormal basis of $\mathbb{R}^n$.
The set of orthogonal matrices with determinant −1 does not include the identity, so these matrices do not form a subgroup but only a coset; this coset is also (separately) connected.
Thus each orthogonal group falls into two pieces; and because the projection map splits, O(n) is a semidirect product of SO(n) by O(1).
Similarly, SO(n) is a subgroup of SO(n + 1); and any special orthogonal matrix can be generated by Givens plane rotations using an analogous procedure.
For example, the three-dimensional object physics calls angular velocity is a differential rotation, thus a vector in the Lie algebra $\mathfrak{so}(3)$ tangent to $\mathrm{SO}(3)$. Given $\omega = (x\theta, y\theta, z\theta)$, with $v = (x, y, z)$ a unit vector, the skew-symmetric matrix form of $\omega$ is
$$\Omega = \begin{bmatrix} 0 & -z\theta & y\theta \\ z\theta & 0 & -x\theta \\ -y\theta & x\theta & 0 \end{bmatrix}.$$
The exponential of this is the orthogonal matrix for rotation around axis $v$ by angle $\theta$; setting $c = \cos\frac{\theta}{2}$ and $s = \sin\frac{\theta}{2}$,
$$\exp(\Omega) = \begin{bmatrix} 1 - 2s^2 + 2x^2s^2 & 2xys^2 - 2zsc & 2xzs^2 + 2ysc \\ 2xys^2 + 2zsc & 1 - 2s^2 + 2y^2s^2 & 2yzs^2 - 2xsc \\ 2xzs^2 - 2ysc & 2yzs^2 + 2xsc & 1 - 2s^2 + 2z^2s^2 \end{bmatrix}.$$
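To check the exponential relationship numerically, the sketch below (assuming SciPy for the matrix exponential; the axis and angle are arbitrary) confirms that $\exp(\Omega)$ is a proper rotation:

```python
import numpy as np
from scipy.linalg import expm

x, y, z = np.array([1.0, 2.0, 2.0]) / 3.0   # unit axis v
theta = 0.9                                  # arbitrary angle

Omega = theta * np.array([[0, -z,  y],
                          [z,  0, -x],
                          [-y, x,  0]])      # skew-symmetric form of omega

R = expm(Omega)                              # exponential of a skew-symmetric matrix
print(np.allclose(R.T @ R, np.eye(3)))       # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))     # True: a proper rotation
```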
However, permutation matrices rarely appear explicitly as matrices; their special form allows more efficient representation, such as a list of $n$ indices.
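For instance (a NumPy sketch with an arbitrary $3 \times 3$ example), multiplying by a permutation matrix can be replaced by indexing with the stored list:

```python
import numpy as np

perm = np.array([2, 0, 1])           # permutation stored as a list of indices
A = np.arange(9.0).reshape(3, 3)

P = np.eye(3)[perm]                  # the same permutation as an explicit matrix
print(np.allclose(P @ A, A[perm]))   # True: indexing replaces the matrix product
```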
Likewise, algorithms using Householder and Givens matrices typically use specialized methods of multiplication and storage.
For example, a Givens rotation affects only two rows of a matrix it multiplies, changing a full multiplication of order $n^3$ to a much more efficient order $n$. When uses of these reflections and rotations introduce zeros in a matrix, the space vacated is enough to store sufficient data to reproduce the transform, and to do so robustly.
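A sketch of this two-row update in NumPy (the helper name and the example matrix are illustrative): the rotation zeroes the entry $A_{ji}$ while touching only rows $i$ and $j$:

```python
import numpy as np

def apply_givens(A, i, j):
    """Rotate rows i and j of A in place so that A[j, i] becomes zero."""
    a, b = A[i, i], A[j, i]
    r = np.hypot(a, b)               # sqrt(a^2 + b^2), computed robustly
    c, s = a / r, b / r
    A[[i, j], :] = np.array([[c, s], [-s, c]]) @ A[[i, j], :]  # only two rows change

A = np.array([[3.0, 1.0],
              [4.0, 2.0]])
apply_givens(A, 0, 1)
print(A)   # A[1, 0] is now (numerically) zero; the first column has norm 5
```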
A number of important matrix decompositions (Golub & Van Loan 1996) involve orthogonal matrices, including especially:
- QR decomposition: $M = QR$, with $Q$ orthogonal and $R$ upper triangular
- Singular value decomposition: $M = U\Sigma V^\mathsf{T}$, with $U$ and $V$ orthogonal and $\Sigma$ a nonnegative diagonal matrix
- Eigendecomposition of a symmetric matrix (spectral theorem): $S = Q\Lambda Q^\mathsf{T}$, with $S$ symmetric, $Q$ orthogonal, and $\Lambda$ diagonal
- Polar decomposition: $M = QS$, with $Q$ orthogonal and $S$ symmetric positive-semidefinite
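As a concrete illustration (a sketch using NumPy and SciPy on a random matrix; the seed is arbitrary), each factorization below yields factors that the final loop checks for orthogonality:

```python
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))

Q, R = np.linalg.qr(M)            # QR: Q orthogonal, R upper triangular
U, sigma, Vt = np.linalg.svd(M)   # SVD: U, V orthogonal, sigma nonnegative
S = M + M.T                       # a symmetric matrix
lam, Qs = np.linalg.eigh(S)       # spectral theorem: S = Qs diag(lam) Qs^T
Qp, P = polar(M)                  # polar: Qp orthogonal, P symmetric PSD

for F in (Q, U, Vt, Qs, Qp):
    assert np.allclose(F.T @ F, np.eye(4))   # every factor is orthogonal
```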
Consider an overdetermined system of linear equations, as might occur with repeated measurements of a physical phenomenon to compensate for experimental errors. Write $Ax = b$, where $A$ is $m \times n$ with $m > n$. A QR decomposition reduces $A$ to upper triangular $R$. For example, if $A$ is $5 \times 3$ then $R$ has the form
$$R = \begin{bmatrix} \cdot & \cdot & \cdot \\ 0 & \cdot & \cdot \\ 0 & 0 & \cdot \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$
The linear least squares problem is to find the $x$ that minimizes $\|Ax - b\|$, which is equivalent to projecting $b$ to the subspace spanned by the columns of $A$. Assuming the columns of $A$ (and hence of $R$) are independent, the projection solution is found from $A^\mathsf{T}Ax = A^\mathsf{T}b$. Now $A^\mathsf{T}A$ is square ($n \times n$) and invertible, and also equal to $R^\mathsf{T}R$. But the lower rows of zeros in $R$ are superfluous in the product, which is thus already in lower-triangular upper-triangular factored form, as in Gaussian elimination (Cholesky decomposition). Here orthogonality is important not only for reducing $A^\mathsf{T}A = (R^\mathsf{T}Q^\mathsf{T})QR$ to $R^\mathsf{T}R$, but also for allowing solution without magnifying numerical problems.
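Numerically, the whole procedure is a few lines (a NumPy/SciPy sketch with random illustrative data): with the reduced factorization $A = QR$, minimizing $\|Ax - b\|$ reduces to the triangular solve $Rx = Q^\mathsf{T}b$:

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))      # overdetermined: m = 5 > n = 3
b = rng.standard_normal(5)

Q, R = np.linalg.qr(A)               # reduced QR: Q is 5x3, R is 3x3
x = solve_triangular(R, Q.T @ b)     # back-substitution on R x = Q^T b

print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```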
In the case of a linear system which is underdetermined, or an otherwise non-invertible matrix, singular value decomposition (SVD) is equally useful.
With $A$ factored as $U\Sigma V^\mathsf{T}$, a satisfactory solution uses the Moore–Penrose pseudoinverse, $V\Sigma^+ U^\mathsf{T}$, where $\Sigma^+$ merely replaces each non-zero diagonal entry with its reciprocal.
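A sketch of that construction (NumPy; the rank-deficient matrix and the $10^{-12}$ cutoff are illustrative choices):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])           # rank 1, hence not invertible

U, sigma, Vt = np.linalg.svd(A)
sigma_plus = np.array([1.0 / s if s > 1e-12 else 0.0 for s in sigma])
A_pinv = Vt.T @ np.diag(sigma_plus) @ U.T      # V Sigma^+ U^T

print(np.allclose(A_pinv, np.linalg.pinv(A)))  # matches NumPy's pseudoinverse
```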
Floating point arithmetic does not match the mathematical ideal of real numbers, so a matrix $A$ that should be orthogonal in exact arithmetic (for example, one accumulated as a long product of rotations) gradually loses its true orthogonality.
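One standard repair, sketched below with NumPy (the drift here is injected artificially), replaces $A$ with the nearest orthogonal matrix, obtained from the SVD $A = U\Sigma V^\mathsf{T}$ by snapping all singular values to 1, giving $Q = UV^\mathsf{T}$:

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # an orthogonal matrix
A = Q + 1e-6 * rng.standard_normal((3, 3))         # drifted by rounding-like noise

U, _, Vt = np.linalg.svd(A)
Q_fixed = U @ Vt                                   # nearest orthogonal matrix to A

print(np.linalg.norm(A.T @ A - np.eye(3)))              # small but nonzero drift
print(np.linalg.norm(Q_fixed.T @ Q_fixed - np.eye(3)))  # back to machine precision
```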
Some numerical applications, such as Monte Carlo methods and exploration of high-dimensional data spaces, require generation of uniformly distributed random orthogonal matrices.
In this context, "uniform" is defined in terms of Haar measure, which essentially requires that the distribution not change if multiplied by any freely chosen orthogonal matrix.
Stewart (1980) replaced this with a more efficient idea that Diaconis & Shahshahani (1987) later generalized as the "subgroup algorithm" (in which form it works just as well for permutations and rotations).
To generate an $(n + 1) \times (n + 1)$ orthogonal matrix, take an $n \times n$ one and a uniformly distributed unit vector of dimension $n + 1$. Construct a Householder reflection from the vector, then apply it to the smaller matrix (embedded in the larger size with a 1 at the bottom right corner).
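A sketch of this recursion in NumPy (the function name and random seed handling are illustrative; the $1 \times 1$ base case is a uniform sign, i.e. uniform on $O(1) = \{\pm 1\}$):

```python
import numpy as np

def random_orthogonal(n, rng=None):
    """Random orthogonal matrix grown one dimension at a time, applying a
    Householder reflection built from a uniform unit vector at each step."""
    rng = np.random.default_rng() if rng is None else rng
    Q = np.array([[rng.choice([-1.0, 1.0])]])      # base case: 1x1, uniform sign
    for k in range(2, n + 1):
        v = rng.standard_normal(k)                 # uniform direction on the sphere
        v /= np.linalg.norm(v)
        H = np.eye(k) - 2.0 * np.outer(v, v)       # Householder reflection
        # Embed the smaller matrix with a 1 at the bottom right, then reflect.
        Q = H @ np.block([[Q, np.zeros((k - 1, 1))],
                          [np.zeros((1, k - 1)), np.ones((1, 1))]])
    return Q

Q = random_orthogonal(5)
print(np.allclose(Q.T @ Q, np.eye(5)))   # True: the result is orthogonal
```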
The Pin and Spin groups are found within Clifford algebras, which themselves can be built from orthogonal matrices.
For the case $n \le m$, $m \times n$ matrices with orthonormal columns may be referred to as orthonormal $n$-frames, and they are elements of the Stiefel manifold $V_n(\mathbb{R}^m)$.
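For illustration (a NumPy sketch), the reduced QR factorization of a random $m \times n$ matrix with $n \le m$ produces such a frame:

```python
import numpy as np

m, n = 5, 3
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((m, n)))  # Q: 5x3, orthonormal columns

print(np.allclose(Q.T @ Q, np.eye(n)))   # True: the columns are orthonormal
print(np.allclose(Q @ Q.T, np.eye(m)))   # False: the rows cannot be orthonormal
```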