In mathematics, the Kronecker product, sometimes denoted by ⊗, is an operation on two matrices of arbitrary size resulting in a block matrix.
It is a specialization of the tensor product (which is denoted by the same symbol) from vectors to matrices and gives the matrix of the tensor product linear map with respect to a standard choice of basis.
The Kronecker product is to be distinguished from the usual matrix multiplication, which is an entirely different operation.
The operation was first described in 1858 by Johann Georg Zehfuss, but it is named after the German mathematician Leopold Kronecker.[2][3] The misattribution to Kronecker rather than Zehfuss was due to Kurt Hensel.
If A is an m × n matrix and B is a p × q matrix, then the Kronecker product A ⊗ B is the pm × qn block matrix

$$A \otimes B = \begin{pmatrix} a_{11} B & \cdots & a_{1n} B \\ \vdots & \ddots & \vdots \\ a_{m1} B & \cdots & a_{mn} B \end{pmatrix}.$$

Using $\lfloor \cdot \rfloor$ and $\bmod$ to denote truncating integer division and remainder, respectively, and numbering the matrix elements starting from 0, one obtains

$$(A \otimes B)_{ij} = a_{\lfloor i/p \rfloor,\, \lfloor j/q \rfloor} \, b_{i \bmod p,\; j \bmod q}.$$

For the usual numbering starting from 1, one obtains

$$(A \otimes B)_{ij} = a_{\lceil i/p \rceil,\, \lceil j/q \rceil} \, b_{(i-1) \bmod p + 1,\; (j-1) \bmod q + 1}.$$

If A and B represent linear transformations V1 → W1 and V2 → W2, respectively, then the tensor product of the two maps is a map V1 ⊗ V2 → W1 ⊗ W2 represented by A ⊗ B.
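As a quick sanity check of the 0-based index formula above, the following minimal NumPy sketch compares each entry of np.kron(A, B) against the formula:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))   # m x n
B = rng.standard_normal((4, 5))   # p x q
p, q = B.shape

K = np.kron(A, B)                 # the (pm) x (qn) block matrix

# 0-based index formula: K[i, j] == A[i // p, j // q] * B[i % p, j % q]
for i in range(K.shape[0]):
    for j in range(K.shape[1]):
        assert np.isclose(K[i, j], A[i // p, j // q] * B[i % p, j % q])
```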
In general, A ⊗ B ≠ B ⊗ A; however, A ⊗ B and B ⊗ A are permutation equivalent, meaning that there exist permutation matrices P and Q such that

$$A \otimes B = P \, (B \otimes A) \, Q.$$

If A and B are square matrices, then A ⊗ B and B ⊗ A are even permutation similar, meaning that we can take P = Q^T. The matrices P and Q are perfect shuffle matrices, built from slices of the identity matrix, for example

$$S_{p,q} = \begin{bmatrix} I_{pq}(1 : q : pq,\, :) \\ I_{pq}(2 : q : pq,\, :) \\ \vdots \\ I_{pq}(q : q : pq,\, :) \end{bmatrix}.$$

MATLAB colon notation is used here to indicate submatrices, and I_r is the r × r identity matrix.
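As an illustration, here is a minimal NumPy sketch of the permutation equivalence; the helper commutation below (building the shuffle permutation for column-stacking vectorization) is ours, not a standard library function:

```python
import numpy as np

def commutation(r, s):
    """Permutation matrix K with K @ vec(X) == vec(X.T) for X of shape
    (r, s), where vec stacks columns."""
    K = np.zeros((r * s, r * s))
    for i in range(r):
        for j in range(s):
            K[i * s + j, j * r + i] = 1.0
    return K

rng = np.random.default_rng(1)
m, n, p, q = 2, 3, 4, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))

P = commutation(m, p)   # permutes the rows
Q = commutation(q, n)   # permutes the columns
assert np.allclose(np.kron(A, B), P @ np.kron(B, A) @ Q)
```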
If A, B, C and D are matrices of such size that one can form the matrix products AC and BD, then the mixed-product property holds:

$$(A \otimes B)(C \otimes D) = (AC) \otimes (BD).$$

The mixed-product property also works for the element-wise (Hadamard) product: if A and C are matrices of the same size, and B and D are matrices of the same size, then[7]

$$(A \otimes B) \circ (C \otimes D) = (A \circ C) \otimes (B \circ D).$$

It follows that A ⊗ B is invertible if and only if both A and B are invertible, in which case the inverse is given by

$$(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}.$$

The invertible product property holds for the Moore–Penrose pseudoinverse as well,[7][8] that is

$$(A \otimes B)^{+} = A^{+} \otimes B^{+}.$$

In the language of category theory, the mixed-product property of the Kronecker product (and the more general tensor product) shows that the category Mat_F of matrices over a field F is in fact a monoidal category: the objects are natural numbers n, the morphisms n → m are n × m matrices with entries in F, composition is given by matrix multiplication, the identity arrows are simply the n × n identity matrices I_n, and the tensor product is given by the Kronecker product.
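Numerically, these algebraic identities can be checked directly; a minimal NumPy sketch (random matrices, so invertibility holds almost surely):

```python
import numpy as np

rng = np.random.default_rng(2)
A, C = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
B, D = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))

# Mixed-product property: (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# Hadamard variant: (A ⊗ B) ∘ (C ⊗ D) = (A ∘ C) ⊗ (B ∘ D)
assert np.allclose(np.kron(A, B) * np.kron(C, D), np.kron(A * C, B * D))

# Inverse: (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}
assert np.allclose(np.linalg.inv(np.kron(A, B)),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))
```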
If A is n × n, B is m × m and I_k denotes the k × k identity matrix, then we can define what is sometimes called the Kronecker sum, ⊕, by

$$A \oplus B = A \otimes I_m + I_n \otimes B.$$

This is different from the direct sum of two matrices. This operation is related to the tensor product on Lie algebras, as detailed below under "Relation to the abstract tensor product".
We have the following formula for the matrix exponential, which is useful in some numerical evaluations:[10]

$$e^{A \oplus B} = e^{A} \otimes e^{B}.$$
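A short numerical check of this identity, using scipy.linalg.expm:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
n, m = A.shape[0], B.shape[0]

# Kronecker sum: A ⊕ B = A ⊗ I_m + I_n ⊗ B
ksum = np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)

assert np.allclose(expm(ksum), np.kron(expm(A), expm(B)))
```

The identity holds because A ⊗ I_m and I_n ⊗ B commute, so the exponential of their sum factors into the product of their exponentials.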
Kronecker sums appear naturally in physics when considering ensembles of non-interacting systems.[citation needed] Let H_k be the Hamiltonian of the k-th such system. Then the total Hamiltonian of the ensemble is

$$H_{\mathrm{Tot}} = \bigoplus_k H_k.$$
When the order of the Kronecker product and vectorization is interchanged, the two operations can be linked linearly through a function that involves the commutation matrix K, as follows:

$$\operatorname{vec}(A \otimes B) = (I_n \otimes K_{q,m} \otimes I_p)(\operatorname{vec} A \otimes \operatorname{vec} B),$$

where A is an m × n matrix, B is a p × q matrix, and K_{q,m} is the qm × qm commutation matrix.

Suppose that A and B are square matrices of size n and m respectively.
Let λ1, ..., λn be the eigenvalues of A and μ1, ..., μm be those of B (listed according to multiplicity).
Then the eigenvalues of A ⊗ B are

$$\lambda_i \mu_j, \qquad i = 1, \ldots, n, \; j = 1, \ldots, m.$$

It follows that the trace and determinant of a Kronecker product are given by

$$\operatorname{tr}(A \otimes B) = \operatorname{tr} A \, \operatorname{tr} B \qquad \text{and} \qquad \det(A \otimes B) = (\det A)^m (\det B)^n.$$
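These spectral identities are easy to verify numerically; a minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 3, 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))

lam = np.linalg.eigvals(A)
mu = np.linalg.eigvals(B)

# The eigenvalues of A ⊗ B are all products λ_i μ_j.
prods = np.sort_complex(np.outer(lam, mu).ravel())
assert np.allclose(prods, np.sort_complex(np.linalg.eigvals(np.kron(A, B))))

# tr(A ⊗ B) = tr(A) tr(B);  det(A ⊗ B) = det(A)^m det(B)^n
assert np.isclose(np.trace(np.kron(A, B)), np.trace(A) * np.trace(B))
assert np.isclose(np.linalg.det(np.kron(A, B)),
                  np.linalg.det(A) ** m * np.linalg.det(B) ** n)
```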
If A and B are rectangular matrices, then one can consider their singular values. Suppose that A has rA nonzero singular values, namely

$$\sigma_{A,i}, \qquad i = 1, \ldots, r_A.$$

Similarly, denote the nonzero singular values of B by

$$\sigma_{B,j}, \qquad j = 1, \ldots, r_B.$$

Then the Kronecker product A ⊗ B has rArB nonzero singular values, namely

$$\sigma_{A,i} \, \sigma_{B,j}, \qquad i = 1, \ldots, r_A, \; j = 1, \ldots, r_B.$$

Since the rank of a matrix equals the number of nonzero singular values, we find that

$$\operatorname{rank}(A \otimes B) = \operatorname{rank} A \, \operatorname{rank} B.$$

The Kronecker product of matrices corresponds to the abstract tensor product of linear maps.
Specifically, if the vector spaces V, W, X, and Y have bases {v1, ..., vm}, {w1, ..., wn}, {x1, ..., xd}, and {y1, ..., ye}, respectively, and if the matrices A and B represent the linear transformations S : V → X and T : W → Y in the appropriate bases, then the matrix A ⊗ B represents the tensor product of the two maps, S ⊗ T : V ⊗ W → X ⊗ Y, with respect to the basis {v1 ⊗ w1, v1 ⊗ w2, ..., v2 ⊗ w1, ..., vm ⊗ wn} of V ⊗ W and the similarly defined basis of X ⊗ Y. It has the defining property that (A ⊗ B)(vi ⊗ wj) = (Avi) ⊗ (Bwj), where i and j are integers in the proper range.[11]
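In coordinates, this defining property reads kron(A, B) @ kron(v, w) == kron(A @ v, B @ w); a one-line NumPy check:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 2))
v = rng.standard_normal(3)
w = rng.standard_normal(2)

# (A ⊗ B)(v ⊗ w) = (Av) ⊗ (Bw)
assert np.allclose(np.kron(A, B) @ np.kron(v, w), np.kron(A @ v, B @ w))
```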
The Kronecker product can be used to get a convenient representation for some matrix equations.
Consider for instance the equation AXB = C, where A, B and C are given matrices and the matrix X is the unknown.
We can use the "vec trick" to rewrite this equation as

$$(B^{\mathsf T} \otimes A) \operatorname{vec}(X) = \operatorname{vec}(AXB) = \operatorname{vec}(C).$$

Here, vec(X) denotes the vectorization of the matrix X, formed by stacking the columns of X into a single column vector.
It now follows from the properties of the Kronecker product that the equation AXB = C has a unique solution if and only if A and B are invertible (Horn & Johnson 1991, Lemma 4.3.1).
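A minimal NumPy sketch of the vec trick, solving AXB = C for X:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
X_true = rng.standard_normal((3, 4))
C = A @ X_true @ B

vec = lambda M: M.reshape(-1, order="F")   # stack columns

# (B^T ⊗ A) vec(X) = vec(C)  =>  solve the linear system for vec(X)
x = np.linalg.solve(np.kron(B.T, A), vec(C))
X = x.reshape(3, 4, order="F")

assert np.allclose(X, X_true)
```

Note that forming the Kronecker matrix explicitly is expensive for large factors; in practice one exploits the factored structure instead (see the fast multiplication identity below).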
If X and C are row-ordered into the column vectors u and v, respectively, then (Jain 1989, 2.8 Block Matrices and Kronecker Products)

$$v = (A \otimes B^{\mathsf T}) \, u.$$

The reason is that

$$v = \operatorname{vec}\!\left((AXB)^{\mathsf T}\right) = \operatorname{vec}\!\left(B^{\mathsf T} X^{\mathsf T} A^{\mathsf T}\right) = (A \otimes B^{\mathsf T}) \operatorname{vec}(X^{\mathsf T}) = (A \otimes B^{\mathsf T}) \, u.$$

For an example of the application of this formula, see the article on the Lyapunov equation.
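The same check in NumPy, where the default row-major flatten plays the role of the row-ordered vectors u and v:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
X = rng.standard_normal((3, 4))

u = X.flatten()              # row-ordered X
v = (A @ X @ B).flatten()    # row-ordered C = AXB

# v = (A ⊗ B^T) u
assert np.allclose(v, np.kron(A, B.T) @ u)
```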
This row-ordered form is also useful for representing 2D image processing operations in matrix-vector form.
If a matrix can be factored as a Kronecker product, then multiplication by that matrix can be performed quickly using the formula above. This can be applied recursively, as done in the radix-2 FFT and the Fast Walsh–Hadamard transform.
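For instance, a product (A ⊗ B) vec(X) can be computed as vec(B X Aᵀ) without ever materializing the large Kronecker matrix; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((5, 6))
X = rng.standard_normal((6, 4))          # vec(X) has length 4 * 6 = 24

# Slow: materialize the (15 x 24) Kronecker matrix.
slow = np.kron(A, B) @ X.reshape(-1, order="F")

# Fast: (A ⊗ B) vec(X) = vec(B X A^T), never forming kron(A, B).
fast = (B @ X @ A.T).reshape(-1, order="F")

assert np.allclose(slow, fast)
```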
Splitting a known matrix into the Kronecker product of two smaller matrices is known as the "nearest Kronecker product" problem, and can be solved exactly[13] by using the SVD.
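A sketch of the rearrangement approach (often attributed to Van Loan and Pitsianis; the helper nearest_kron and its argument names are ours): permute the entries so that nearness in Kronecker-product form becomes nearness to a rank-1 matrix, then truncate the SVD:

```python
import numpy as np

def nearest_kron(C, shape_B, shape_D):
    """Minimize ||C - kron(B, D)||_F over B, D of the given shapes."""
    m1, n1 = shape_B
    m2, n2 = shape_D
    assert C.shape == (m1 * m2, n1 * n2)
    # Rearrange C so row (i, j) holds the flattened (i, j)-th m2 x n2 block;
    # then ||C - kron(B, D)||_F = ||R - vec(B) vec(D)^T||_F.
    R = C.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    # The best rank-1 approximation of R yields the optimal factors.
    B = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    D = np.sqrt(s[0]) * Vt[0, :].reshape(m2, n2)
    return B, D

# If C is exactly a Kronecker product, the factors are recovered
# (up to scaling: kron(B, D) is unchanged by B -> cB, D -> D/c).
rng = np.random.default_rng(9)
B0, D0 = rng.standard_normal((2, 3)), rng.standard_normal((4, 5))
B, D = nearest_kron(np.kron(B0, D0), (2, 3), (4, 5))
assert np.allclose(np.kron(B, D), np.kron(B0, D0))
```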
To split a matrix into the Kronecker product of more than two matrices, in an optimal fashion, is a difficult problem and the subject of ongoing research; some authors cast it as a tensor decomposition problem.[14][15]
In conjunction with the least squares method, the Kronecker product can be used to obtain an accurate solution to the hand–eye calibration problem.
Let the m × n matrix A be partitioned into the mi × nj blocks Aij and the p × q matrix B into the pk × qℓ blocks Bkℓ, with of course Σi mi = m, Σj nj = n, Σk pk = p and Σℓ qℓ = q. The Tracy–Singh product is then defined as

$$A \circ B = (A_{ij} \circ B)_{ij} = \left( (A_{ij} \otimes B_{k\ell})_{k\ell} \right)_{ij},$$

which means that the (ij)-th subblock of the mp × nq product A ∘ B is the mi p × nj q matrix Aij ∘ B, of which the (kℓ)-th subblock equals the mi pk × nj qℓ matrix Aij ⊗ Bkℓ.
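A minimal NumPy sketch of this block-wise definition (the helper tracy_singh and its partition arguments are ours); each partition is given as a list of block sizes:

```python
import numpy as np

def tracy_singh(A, B, row_parts_A, col_parts_A, row_parts_B, col_parts_B):
    """Tracy–Singh product for the given partitions (lists of block sizes)."""
    def split(M, rows, cols):
        ri = np.cumsum([0] + rows)
        ci = np.cumsum([0] + cols)
        return [[M[ri[i]:ri[i + 1], ci[j]:ci[j + 1]] for j in range(len(cols))]
                for i in range(len(rows))]

    Ab = split(A, row_parts_A, col_parts_A)
    Bb = split(B, row_parts_B, col_parts_B)
    # The (i, j)-th outer block collects A_ij ⊗ B_kl over all blocks of B.
    return np.block([
        [np.block([[np.kron(Aij, Bkl) for Bkl in Brow] for Brow in Bb])
         for Aij in Arow]
        for Arow in Ab
    ])

rng = np.random.default_rng(10)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((2, 5))
# With trivial partitions (a single block each) it reduces to A ⊗ B.
assert np.allclose(tracy_singh(A, B, [3], [4], [2], [5]), np.kron(A, B))
```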
The face-splitting product satisfies

$$\mathcal{F}\left((C^{(1)} x) \star (C^{(2)} y)\right) = \left(\mathcal{F} C^{(1)} \bullet \mathcal{F} C^{(2)}\right)(x \otimes y),$$

where • denotes the face-splitting product, ⋆ denotes vector convolution, and $\mathcal{F}$ is the Fourier transform matrix; this result follows from count sketch properties.[21][22][25]