Tensor rank decomposition

The tensor rank decomposition was introduced by Frank Lauren Hitchcock in 1927[2] and later rediscovered several times, notably in psychometrics.

A scalar variable is denoted by a lower case italic letter, $a$, and an upper bound scalar by an upper case italic letter, $A$.

Indices are denoted by a combination of lower case and upper case italic letters, $1 \le i \le I$.

Contrary to the case of matrices, computing the rank of a tensor is NP-hard.

The only notable well-understood case consists of tensors in $F^{I} \otimes F^{J} \otimes F^{2}$, whose rank can be obtained from the Kronecker–Weierstrass normal form of the linear matrix pencil that the tensor represents.[7] A simple polynomial-time algorithm exists for certifying that a tensor is of rank 1, namely the higher-order singular value decomposition.
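As an illustrative sketch (not taken from the cited literature), such a certificate can be computed by checking that every mode unfolding of the tensor is a rank-1 matrix, since a nonzero tensor has rank 1 exactly when its multilinear rank is $(1, \ldots, 1)$; the function name `is_rank_one` and the tolerance are our own choices:

```python
import numpy as np

def is_rank_one(T, tol=1e-10):
    """Certify that a nonzero tensor T has rank 1 by checking that every
    mode-k unfolding is a rank-1 matrix, i.e., its second singular value
    is negligible. These are the singular values the HOSVD computes
    mode by mode."""
    for k in range(T.ndim):
        # Move mode k to the front and flatten the rest: the mode-k unfolding.
        unfolding = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)
        s = np.linalg.svd(unfolding, compute_uv=False)
        if s.size > 1 and s[1] > tol * s[0]:
            return False
    return True

a, b, c = np.random.randn(3), np.random.randn(4), np.random.randn(5)
T = np.einsum('i,j,k->ijk', a, b, c)                     # rank 1 by construction
print(is_rank_one(T))                                    # True
print(is_rank_one(T + 0.1 * np.random.randn(3, 4, 5)))   # False (generically)
```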

In contrast, the rank of real matrices will never decrease under a field extension to $\mathbb{C}$: real matrix rank and complex matrix rank coincide for real matrices.

In the case of real tensor spaces, a rank is called typical if the set of tensors of that rank only forms an open set of positive measure in the Euclidean topology.

The generic rank of tensor spaces was initially studied in 1983 by Volker Strassen.

By counting parameters, the expected generic rank of the tensor space $F^{I_1} \otimes F^{I_2} \otimes \cdots \otimes F^{I_M}$ is

$$r_E(I_1, I_2, \ldots, I_M) = \left\lceil \frac{\prod_{m=1}^{M} I_m}{1 + \sum_{m=1}^{M} (I_m - 1)} \right\rceil.$$

The generic rank, i.e., the rank that occurs on all of the tensor space except for $Z$, where $Z$ is some indeterminate closed set in the Zariski topology, is expected to equal the above value. Equivalently, $r_E$ is the least rank that is expected to occur on a set of positive Euclidean measure.

It is known that the true generic rank always satisfies $r(I_1, \ldots, I_M) \ge r_E(I_1, \ldots, I_M)$. The Abo–Ottaviani–Peterson conjecture[11] states that equality is expected, i.e., $r(I_1, \ldots, I_M) = r_E(I_1, \ldots, I_M)$, modulo a short list of known exceptional spaces.
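As a quick illustration, the expected generic rank is straightforward to evaluate; the following sketch (the helper name `r_expected` is ours) computes it, and the second example exhibits a well-known space where the true generic rank exceeds $r_E$:

```python
from math import ceil, prod

def r_expected(dims):
    """Expected generic rank of F^{I_1} x ... x F^{I_M}:
    ceil( prod(I_m) / (1 + sum(I_m - 1)) )."""
    return ceil(prod(dims) / (1 + sum(d - 1 for d in dims)))

print(r_expected([2, 2, 2]))  # 2, which is also the true generic rank over C
print(r_expected([3, 3, 3]))  # 4, while the true generic rank is 5 (a defective case)
```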

The AOP conjecture has been proved completely in a number of special cases.

[12] In 2011, a major breakthrough was established by Catalisano, Geramita, and Gimigliano, who proved that the set of rank-$s$ tensors of format $2 \times 2 \times \cdots \times 2$ has the expected dimension, with the only exception being rank-3 tensors of format $2 \times 2 \times 2 \times 2$.

Presently, the best general upper bound states that the maximum rank $r_{\max}(I_1, \ldots, I_M)$ of the space $F^{I_1} \otimes \cdots \otimes F^{I_M}$ satisfies $r_{\max}(I_1, \ldots, I_M) \le 2\, r(I_1, \ldots, I_M)$, where $r(I_1, \ldots, I_M)$ is the generic rank.

Border tensors were first studied in the context of fast approximate matrix multiplication algorithms by Bini, Lotti, and Romani in 1980.

A tensor of rank $r$ is called identifiable if every one of its tensor rank decompositions is the sum of the same set of $r$ distinct rank-1 tensors.

Order-2 tensors, i.e., matrices, are not identifiable for rank greater than 1: a rank-$r$ matrix $A = U V^T$ also admits the decompositions $A = (U X)(V X^{-T})^T$ for every invertible matrix $X$. Since the set of invertible matrices $X$ that merely rescale and permute the rank-1 terms is a closed set in the Zariski topology, for a generic choice of $X$ the decomposition on the right-hand side is a sum of a different set of rank-1 tensors than the decomposition on the left-hand side, entailing that order-2 tensors of rank greater than 1 are never identifiable.

For simplicity in notation, assume without loss of generality that the factors are ordered such that $I_1 \ge I_2 \ge \cdots \ge I_M$.

Then, the following statement was proved to be correct using a computer-assisted proof for all spaces of dimension $I_1 I_2 \cdots I_M < 15000$, modulo a short list of exceptional cases: a generic tensor of strictly subgeneric rank in such a space is identifiable.

More generally, every generic tensor of subgeneric rank in a space that is not identifiability-unbalanced is expected to be identifiable (modulo the exceptional cases in small spaces).

It was shown in a 2008 paper by de Silva and Lim[8] that the above standard approximation problem may be ill-posed.

A solution to the aforementioned problem may sometimes not exist because the set over which one optimizes is not closed.

A minimizing sequence of rank-$r$ tensors may then have individual rank-1 terms whose norms grow without bound, even though the sequence itself converges to a tensor of rank strictly higher than $r$.

This phenomenon is often encountered when attempting to approximate a tensor using numerical optimization algorithms.
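The following self-contained sketch (our own construction, using the standard example of a rank-3 tensor with border rank 2) makes the phenomenon concrete: the rank-2 tensors $\mathcal{A}_n$ converge to the rank-3 tensor $\mathcal{W}$ while the norms of their two rank-1 terms diverge like $n$:

```python
import numpy as np

def outer3(x, y, z):
    """Rank-1 tensor x ⊗ y ⊗ z."""
    return np.einsum('i,j,k->ijk', x, y, z)

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

# Rank-3 tensor with border rank 2.
W = outer3(a, a, b) + outer3(a, b, a) + outer3(b, a, a)

for n in [1, 10, 100, 1000]:
    c = a + b / n
    # Rank-2 approximant: n * c⊗c⊗c - n * a⊗a⊗a.
    A_n = n * outer3(c, c, c) - n * outer3(a, a, a)
    print(f"n={n:5d}  ||A_n - W||_F = {np.linalg.norm(A_n - W):.2e}  "
          f"norm of each rank-1 term ≈ {n}")
```

The approximation error decays like $1/n$, so the infimum of the distance from $\mathcal{W}$ to the set of rank-2 tensors is 0, yet no rank-2 minimizer exists.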

It was, in addition, shown that a random low-rank tensor over the reals fails to admit a best rank-2 approximation with positive probability, leading to the understanding that the ill-posedness problem is an important consideration when employing the tensor rank decomposition.

A common partial solution to the ill-posedness problem consists of imposing an additional inequality constraint that bounds the norm of the individual rank-1 terms by some constant.
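A minimal sketch of one way such a bound can be enforced inside an iterative solver (the helper below is hypothetical, not a published algorithm): after each update, rescale any rank-1 term $a_r \otimes b_r \otimes c_r$ whose Frobenius norm $\|a_r\|\,\|b_r\|\,\|c_r\|$ exceeds the constant:

```python
import numpy as np

def clip_term_norms(A, B, C, bound=10.0):
    """Rescale every rank-1 term a_r ⊗ b_r ⊗ c_r of a third-order CP
    model so its Frobenius norm ||a_r|| * ||b_r|| * ||c_r|| stays <= bound."""
    for r in range(A.shape[1]):
        norm = (np.linalg.norm(A[:, r]) * np.linalg.norm(B[:, r])
                * np.linalg.norm(C[:, r]))
        if norm > bound:
            s = (bound / norm) ** (1.0 / 3.0)  # split the shrinkage evenly
            A[:, r] *= s
            B[:, r] *= s
            C[:, r] *= s
    return A, B, C

# Example: clip the terms of a random rank-2 factorization.
A, B, C = np.random.randn(4, 2), np.random.randn(5, 2), np.random.randn(6, 2)
A, B, C = clip_term_norms(A, B, C, bound=1.0)
```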

Other constraints that result in a closed set, and thus a well-posed optimization problem, include imposing positivity, or bounding the inner product between the rank-1 terms appearing in the sought decomposition to be strictly less than unity.

In applications such as topic modeling, the third-order empirical moment tensor of the data can be interpreted as recording the co-occurrence of words in a document.

Then the coefficients in the decomposition of this empirical moment tensor can be interpreted as the probabilities of choosing a specific topic, and each column of the factor matrix can be interpreted as the probabilities of the words in the vocabulary within that topic.
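A toy sketch of the construction (vocabulary, documents, and all names are invented for illustration): build the empirical third-order moment tensor of word-triple co-occurrences, which a CP solver would then decompose into topic weights and per-topic word distributions:

```python
import numpy as np
from itertools import permutations

V = 5                                            # vocabulary size (toy)
docs = [[0, 1, 2], [0, 2, 3], [1, 3, 4], [0, 1, 4]]

# Empirical moment tensor M3 ≈ E[x1 ⊗ x2 ⊗ x3], where x1, x2, x3 are
# one-hot encodings of three distinct word positions in a document.
M3 = np.zeros((V, V, V))
triples = 0
for doc in docs:
    for i, j, k in permutations(doc, 3):         # ordered triples of distinct positions
        M3[i, j, k] += 1.0
        triples += 1
M3 /= triples

# A rank-r CP decomposition M3 ≈ sum_t w_t * a_t ⊗ a_t ⊗ a_t would then
# recover w_t (topic probabilities) and a_t (word distributions per topic),
# e.g., via an off-the-shelf CP solver.
print(M3.shape, round(M3.sum(), 6))              # (5, 5, 5) 1.0
```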