Matrix norm

In the field of mathematics, norms are defined for elements within a vector space; when the elements of the vector space in question are matrices, such norms are referred to as matrix norms.

Given a field K of either real or complex numbers (or any complete subset thereof), let $K^{m \times n}$ denote the vector space of matrices with m rows and n columns and entries in K. A matrix norm is a vector norm on $K^{m \times n}$.

Norms are often expressed with double vertical bars (like so: $\|A\|$). Thus, a matrix norm is a function $\|\cdot\| \colon K^{m \times n} \to \mathbb{R}$ satisfying the usual norm axioms: positive definiteness, absolute homogeneity, and the triangle inequality.

The only feature distinguishing matrices from rearranged vectors is matrix multiplication. Matrix norms are particularly useful if they interact well with it; a matrix norm is called sub-multiplicative if $\|AB\| \le \|A\| \, \|B\|$ for all matrices A and B of compatible sizes.

Suppose vector p- and q-norms are fixed on the domain $K^n$ and codomain $K^m$, and consider the operator norm they induce, $\|A\|_{p,q} = \sup \{ \|Ax\|_q : \|x\|_p \le 1 \}$. Geometrically speaking, one can imagine a p-norm unit ball $V = \{ x \in K^n : \|x\|_p \le 1 \}$, then apply the linear map A to the ball. It would end up becoming a distorted convex shape $A V \subset K^m$, and $\|A\|_{p,q}$ measures the longest "radius" of the distorted convex shape, where length is measured with the q-norm.

When p = q = 1, the induced norm is $\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{m} |a_{ij}|$, which is simply the maximum absolute column sum of the matrix.

When p = q = ∞, the induced norm is $\|A\|_\infty = \max_{1 \le i \le m} \sum_{j=1}^{n} |a_{ij}|$, which is simply the maximum absolute row sum of the matrix.
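As a quick numerical sketch of these two formulas (a minimal NumPy illustration; the helper names `induced_1_norm` and `induced_inf_norm` are ours, not a library API):

```python
import numpy as np

def induced_1_norm(A):
    """Maximum absolute column sum: ||A||_1 = max_j sum_i |a_ij|."""
    return np.abs(A).sum(axis=0).max()

def induced_inf_norm(A):
    """Maximum absolute row sum: ||A||_inf = max_i sum_j |a_ij|."""
    return np.abs(A).sum(axis=1).max()

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
# Column sums of |A| are (4, 6); row sums are (3, 7).
print(induced_1_norm(A))    # 6.0
print(induced_inf_norm(A))  # 7.0
```

Both values agree with NumPy's built-in `np.linalg.norm(A, 1)` and `np.linalg.norm(A, np.inf)`.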

When p = q = 2 (the Euclidean norm), the induced matrix norm is the spectral norm, equal to the largest singular value: $\|A\|_2 = \sigma_{\max}(A) = \sqrt{\lambda_{\max}(A^* A)}$. It is bounded below by the spectral radius $\rho(A)$, and the two values do not coincide in infinite dimensions; see Spectral radius for further discussion.
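The gap between spectral norm and spectral radius already appears for finite non-normal matrices; a small NumPy sketch (the matrix is an arbitrary example of ours):

```python
import numpy as np

# A nilpotent (hence non-normal) matrix: norm and spectral radius differ.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

spectral_norm = np.linalg.svd(A, compute_uv=False)[0]  # largest singular value
spectral_radius = np.abs(np.linalg.eigvals(A)).max()   # largest |eigenvalue|

print(spectral_norm)    # 1.0
print(spectral_radius)  # 0.0
```

Here $\|A\|_2 = 1$ while $\rho(A) = 0$, consistent with the bound $\|A\|_2 \ge \rho(A)$.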

The special cases (p, q) = (2, ∞) and (p, q) = (1, 2) give $\|A\|_{2,\infty} = \max_{1 \le i \le m} \|A_{i,:}\|_2$ and $\|A\|_{1,2} = \max_{1 \le j \le n} \|A_{:,j}\|_2$, which are the maximum row and column 2-norms of the matrix, respectively.

Any norm defined in this way, induced by a single vector norm on $K^n$, is an operator norm on the space of square matrices $K^{n \times n}$, and any such norm is sub-multiplicative.

Moreover, any such norm satisfies the inequality

$$\|A^r\|^{1/r} \ge \rho(A) \qquad (1)$$

for all positive integers r, where ρ(A) is the spectral radius of A.

For symmetric or Hermitian A, we have equality in (1) for the 2-norm, since in this case the 2-norm is precisely the spectral radius of A.

In any case, for any matrix norm, we have the spectral radius formula:

$$\lim_{r \to \infty} \|A^r\|^{1/r} = \rho(A).$$
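The spectral radius formula can be observed numerically. In this NumPy sketch (the matrix is an arbitrary upper-triangular example of ours, with eigenvalues 0.5 and 0.3), $\|A^r\|_2^{1/r}$ approaches $\rho(A) = 0.5$ from above as r grows:

```python
import numpy as np

A = np.array([[0.5, 1.0],
              [0.0, 0.3]])
rho = np.abs(np.linalg.eigvals(A)).max()  # spectral radius = 0.5

# ||A^r||^(1/r) stays >= rho(A) and converges to it (Gelfand's formula).
for r in (1, 5, 20, 80):
    val = np.linalg.norm(np.linalg.matrix_power(A, r), 2) ** (1.0 / r)
    print(r, val)
```

At r = 80 the value is already within about 0.01 of the spectral radius.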

If the vector norms on the domain and codomain are given in terms of energy norms based on symmetric positive definite matrices P and Q, i.e. $\|x\|_P = \sqrt{x^* P x}$ and $\|y\|_Q = \sqrt{y^* Q y}$, the resulting operator norm reduces to an ordinary spectral norm: $\|A\|_{P,Q} = \|Q^{1/2} A P^{-1/2}\|_2$.
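A numerical sanity check of this reduction, with hypothetical weight matrices P and Q chosen purely for illustration (the helper `spd_sqrt` is ours): no trial vector beats the computed norm, and the supremum is attained at $x^* = P^{-1/2} v$ with v the top right singular vector of $Q^{1/2} A P^{-1/2}$.

```python
import numpy as np

def spd_sqrt(M):
    """Symmetric square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
P = np.diag([2.0, 3.0, 4.0])  # hypothetical SPD weight on the domain
Q = 2.0 * np.eye(3)           # hypothetical SPD weight on the codomain

# sup_x ||Ax||_Q / ||x||_P  =  || Q^(1/2) A P^(-1/2) ||_2
B = spd_sqrt(Q) @ A @ np.linalg.inv(spd_sqrt(P))
induced = np.linalg.norm(B, 2)

# The supremum is attained at x* = P^(-1/2) v, v the top right singular vector of B.
v = np.linalg.svd(B)[2][0]
x_star = np.linalg.inv(spd_sqrt(P)) @ v
best = np.sqrt((A @ x_star) @ Q @ (A @ x_star)) / np.sqrt(x_star @ P @ x_star)

# Random trial vectors never exceed the induced norm.
ratios = [np.sqrt((A @ x) @ Q @ (A @ x)) / np.sqrt(x @ P @ x)
          for x in rng.standard_normal((1000, 3))]
print(induced, best, max(ratios))
```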

In data analysis, a matrix $A \in K^{m \times n}$ whose columns are data vectors presents n data points in an m-dimensional space. The $L_{2,1}$ norm sums the Euclidean norms of the columns: $\|A\|_{2,1} = \sum_{j=1}^{n} \|A_{:,j}\|_2$.

It is used in robust data analysis and sparse coding.
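A minimal sketch of the $L_{2,1}$ norm, reading each column as one data point (the name `l21_norm` is ours):

```python
import numpy as np

def l21_norm(A):
    """L_{2,1} norm: sum of the Euclidean norms of the columns of A."""
    return np.linalg.norm(A, axis=0).sum()

# Each column is one data point in 2-dimensional space.
A = np.array([[3.0, 0.0,  5.0],
              [4.0, 1.0, 12.0]])
print(l21_norm(A))  # 5 + 1 + 13 = 19.0
```

Unlike the Frobenius norm, the per-column errors enter linearly rather than squared, which is what makes this norm attractive when outlier data points are present.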

This norm can be defined in various ways:

$$\|A\|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2} = \sqrt{\operatorname{trace}(A^* A)} = \sqrt{\sum_{i=1}^{\min\{m, n\}} \sigma_i^2(A)},$$

where the trace is the sum of diagonal entries, and $\sigma_i(A)$ are the singular values of A. The equalities follow by direct computation, the singular value decomposition $A = U \Sigma V^*$, and the fact that the trace is invariant under circular shifts.

The Frobenius norm is sub-multiplicative and is very useful for numerical linear algebra.

The sub-multiplicativity of the Frobenius norm can be proved using the Cauchy–Schwarz inequality.
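The equivalent definitions and the sub-multiplicative property can both be checked numerically; a short NumPy sketch (random matrices chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

# Three equivalent definitions of the Frobenius norm:
entrywise = np.sqrt((np.abs(A) ** 2).sum())
via_trace = np.sqrt(np.trace(A.conj().T @ A))
via_svd   = np.sqrt((np.linalg.svd(A, compute_uv=False) ** 2).sum())
print(entrywise, via_trace, via_svd)  # all equal up to rounding

# Sub-multiplicativity: ||AB||_F <= ||A||_F * ||B||_F
# (np.linalg.norm defaults to the Frobenius norm for matrices).
assert np.linalg.norm(A @ B) <= np.linalg.norm(A) * np.linalg.norm(B)
```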

Here, $\langle A, B \rangle_F = \operatorname{trace}(A^* B)$ is the Frobenius inner product, and Re is the real part of a complex number (irrelevant for real matrices).

The max norm is the elementwise norm in the limit as p = q goes to infinity:

$$\|A\|_{\max} = \max_{i,j} |a_{ij}|.$$

This norm is not sub-multiplicative, but modifying the right-hand side to $\sqrt{mn} \max_{i,j} |a_{ij}|$ does yield a sub-multiplicative norm.
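A concrete counterexample for the plain max norm, together with the $\sqrt{mn}$-scaled variant (the helper `scaled_max_norm` is ours): with A the 2 × 2 all-ones matrix, $\max_{i,j} |(AA)_{ij}| = 2$ while the product of max norms is only 1.

```python
import numpy as np

A = np.ones((2, 2))

# The plain max norm fails sub-multiplicativity:
print(np.abs(A @ A).max())  # 2.0, exceeds max|A| * max|A| = 1.0

# Scaling by sqrt(m*n) restores it: ||A||' = sqrt(m*n) * max|a_ij|
def scaled_max_norm(M):
    m, n = M.shape
    return np.sqrt(m * n) * np.abs(M).max()

print(scaled_max_norm(A @ A))                   # 4.0
print(scaled_max_norm(A) * scaled_max_norm(A))  # 4.0, inequality holds with equality
```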

Note that in some literature (such as Communication complexity), an alternative definition of max-norm, also called the $\gamma_2$-norm, refers to the factorization norm

$$\gamma_2(A) = \min_{U, V : A = U V^*} \|U\|_{2,\infty} \, \|V\|_{2,\infty},$$

where $\|U\|_{2,\infty}$ denotes the maximum Euclidean norm of a row of U.

The Schatten p-norms arise when applying the p-norm to the vector of singular values of a matrix: $\|A\|_p = \left( \sum_{i} \sigma_i^p(A) \right)^{1/p}$. The case p = 2 yields the Frobenius norm, introduced before. The case p = ∞ yields the spectral norm, and the case p = 1 yields the nuclear norm (also known as the trace norm).

The nuclear norm can be written as $\|A\|_* = \operatorname{trace}(\sqrt{A^* A}) = \sum_{i} \sigma_i(A)$; since $A^* A$ is a positive semidefinite matrix, its square root is well defined.

The nuclear norm is a convex envelope of the rank function on the unit ball of the spectral norm, so it is often used in mathematical optimization to search for low-rank matrices.
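A sketch of the Schatten norms computed from the singular values (the helper `schatten_norm` is ours); a diagonal matrix makes the values easy to verify by hand:

```python
import numpy as np

def schatten_norm(A, p):
    """Schatten p-norm: the vector p-norm applied to the singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)

A = np.diag([3.0, 4.0])     # singular values are 4 and 3
print(schatten_norm(A, 1))  # nuclear norm: 4 + 3 = 7
print(schatten_norm(A, 2))  # Frobenius norm: sqrt(16 + 9) = 5
print(np.linalg.svd(A, compute_uv=False).max())  # spectral norm (p -> inf): 4
```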

[9] The so-called "cut norm" measures how close the associated graph is to being bipartite:

$$\|A\|_\square = \max_{S \subseteq [n], \, T \subseteq [m]} \left| \sum_{s \in S, \, t \in T} A_{t,s} \right|,$$

where $A \in K^{m \times n}$.

[9][10][11] Equivalent definitions (up to a constant factor) impose the conditions 2|S| > n and 2|T| > m; S = T; or S ∩ T = ∅.
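For small matrices the cut norm can be computed by brute force over all pairs of row and column subsets; a sketch (the helper `cut_norm` is ours, and exponential in the matrix dimensions):

```python
import numpy as np
from itertools import chain, combinations

def subsets(n):
    """All subsets of {0, ..., n-1}, as tuples of indices."""
    return chain.from_iterable(combinations(range(n), k) for k in range(n + 1))

def cut_norm(A):
    """Brute-force cut norm: max over row subsets S and column subsets T of
    |sum of the entries of the S x T submatrix|. Exponential cost; toy sizes only."""
    m, n = A.shape
    return max(abs(A[np.ix_(S, T)].sum()) if S and T else 0.0
               for S in subsets(m) for T in subsets(n))

A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])
print(cut_norm(A))  # 1.0: a single entry; every full row or column sums to 0
```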

[11] To define the Grothendieck norm, first note that a linear operator $K^1 \to K^1$ is just a scalar, and thus extends to a linear operator on any $K^k \to K^k$.

Moreover, given any choice of basis for $K^n$ and $K^m$, any linear operator $K^n \to K^m$ extends to a linear operator $(K^k)^n \to (K^k)^m$, by letting each matrix element act on elements of $K^k$ via scalar multiplication.