Jacobi eigenvalue algorithm

In numerical linear algebra, the Jacobi eigenvalue algorithm is an iterative method for the calculation of the eigenvalues and eigenvectors of a real symmetric matrix (a process known as diagonalization).

It is named after Carl Gustav Jacob Jacobi, who first proposed the method in 1846,[1] but it only became widely used in the 1950s with the advent of computers.[2]

This algorithm is inherently a dense matrix algorithm: it draws little or no advantage from being applied to a sparse matrix, and it will destroy sparseness by creating fill-in.

When the rotation angle θ is chosen so that the pivot element Sij is annihilated, the rotated matrix S′ = GᵀSG has a larger sum of squares on the diagonal. The off-diagonal entry of the rotated matrix is S′ij = cos(2θ) Sij + 1/2 sin(2θ) (Sii − Sjj). Set this equal to 0, and rearrange: tan(2θ) = 2Sij / (Sjj − Sii); if Sii = Sjj, then θ = π/4.
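The angle choice can be sketched in code as follows (Python with NumPy; the function names and 0-based indices are ours, not part of the algorithm's standard presentation):

```python
import numpy as np

def rotation_angle(S, i, j):
    """Angle theta with tan(2*theta) = 2*S_ij / (S_jj - S_ii).
    arctan2 also covers the S_ii == S_jj case, giving theta = pi/4."""
    return 0.5 * np.arctan2(2 * S[i, j], S[j, j] - S[i, i])

def apply_rotation(S, i, j, theta):
    """S' = G^T S G for the Givens rotation G in the (i, j) plane."""
    G = np.eye(S.shape[0])
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = s, -s
    return G.T @ S @ G
```

With this choice of θ, the (i, j) entry of the rotated matrix is zero to machine precision.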

In order to optimize this effect, Sij should be the off-diagonal element with the largest absolute value, called the pivot.

The Jacobi eigenvalue method repeatedly performs rotations until the matrix becomes almost diagonal.
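A minimal sketch of this loop, assuming NumPy and a tolerance-based stopping test (both are our choices for illustration, not prescribed by the method):

```python
import numpy as np

def jacobi_eigen(S, tol=1e-12, max_rotations=10000):
    """Classical Jacobi: repeatedly annihilate the largest off-diagonal
    element until the matrix is numerically diagonal (illustrative sketch)."""
    S = S.astype(float).copy()
    n = S.shape[0]
    V = np.eye(n)                              # accumulates eigenvectors
    for _ in range(max_rotations):
        # pivot: off-diagonal element with the largest absolute value
        A = np.abs(np.triu(S, 1))
        i, j = np.unravel_index(np.argmax(A), A.shape)
        if A[i, j] < tol:
            break
        theta = 0.5 * np.arctan2(2 * S[i, j], S[j, j] - S[i, i])
        G = np.eye(n)
        G[[i, j], [i, j]] = np.cos(theta)
        G[i, j], G[j, i] = np.sin(theta), -np.sin(theta)
        S = G.T @ S @ G
        V = V @ G
    return np.diag(S), V
```

On exit, the diagonal of S holds the eigenvalues (unsorted) and the columns of V the corresponding eigenvectors.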

Writing Γ(S)² for the sum of squares of the off-diagonal entries, the pivot choice gives Γ(S)² ≤ 2N Sij², so each rotation satisfies Γ(S′)² ≤ (1 − 1/N) Γ(S)²; that is, the sequence of Jacobi rotations converges at least linearly by a factor √(1 − 1/N) to a diagonal matrix.
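This per-rotation reduction can be checked numerically. Writing Γ(S)² for the sum of squares of the off-diagonal entries (a common notation for this argument), a single rotation removes exactly 2 Sij² from it — a sketch with NumPy:

```python
import numpy as np

def off_sq(S):
    # Gamma(S)^2: sum of squares of the off-diagonal entries
    return float(np.sum(S**2) - np.sum(np.diag(S)**2))

def rotate(S, i, j):
    # Givens rotation chosen to annihilate the (i, j) entry
    theta = 0.5 * np.arctan2(2 * S[i, j], S[j, j] - S[i, i])
    G = np.eye(S.shape[0])
    G[[i, j], [i, j]] = np.cos(theta)
    G[i, j], G[j, i] = np.sin(theta), -np.sin(theta)
    return G.T @ S @ G
```

Since the pivot is the largest off-diagonal element, the removed quantity 2 Sij² is at least Γ(S)²/N, which is where the linear convergence factor comes from.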

A set of N consecutive rotations is called a sweep; the previous estimate yields Γ(S^σ) ≤ (1 − 1/N)^(N/2) Γ(S) for the result S^σ of a sweep, that is, the sequence of sweeps converges at least linearly with a factor ≈ e^(−1/2).

However, the following result of Schönhage[3] yields locally quadratic convergence.

However, the search for the pivot requires inspection of all N ≈ 1/2 n² off-diagonal elements. This search dominates the overall cost and pushes the computational complexity of a sweep in the classical Jacobi algorithm to O(n⁴).

We can reduce the complexity of finding the pivot element from O(N) to O(n) if we introduce an additional index array m1, ..., mn−1, where mi is the index of the largest off-diagonal element in row i (i = 1, ..., n − 1) of the current S. Then the indices of the pivot (k, l) must be one of the pairs (i, mi).

The index array can itself be updated in O(n) average-case complexity: after a rotation with pivot (k, l), the maxima of the updated rows k and l are recomputed in O(n) steps, and in every other row i only the entries in columns k and l change. Only if mi should be equal to k or l and the corresponding entry decreased during the update, the maximum over row i has to be found from scratch in O(n) complexity.
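The O(n) pivot search with a row-maximum index array can be sketched as follows (Python with NumPy; 0-based indices and names are ours):

```python
import numpy as np

def row_maxima(S):
    """m[i] = column index of the largest off-diagonal element of row i
    (entries to the right of the diagonal), for i = 0 .. n-2."""
    n = S.shape[0]
    return [i + 1 + int(np.argmax(np.abs(S[i, i + 1:]))) for i in range(n - 1)]

def find_pivot(S, m):
    """The pivot (k, l) must be one of the pairs (i, m[i]): an O(n) scan."""
    k = max(range(len(m)), key=lambda i: abs(S[i, m[i]]))
    return k, m[k]
```

After each rotation only rows k and l, and the entries in columns k and l of the other rows, change, which is what makes an O(n) average-case update of m possible.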

Thus, each rotation has O(n) and one sweep O(n³) average-case complexity, which is comparable to the cost of one matrix multiplication.

Typically the Jacobi method converges within numerical precision after a small number of sweeps.

Note that multiple eigenvalues reduce the number of iterations required.

An alternative approach is to forego the search entirely, and simply have each sweep pivot every off-diagonal element once, in some predetermined order.
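Such a predetermined ("cyclic", here row-wise) ordering can be generated as, for instance:

```python
def cyclic_pairs(n):
    """Row-cyclic ordering: every off-diagonal pair (i, j) with i < j,
    visited once per sweep in a fixed predetermined order."""
    return [(i, j) for i in range(n - 1) for j in range(i + 1, n)]
```

A cyclic sweep then simply applies one Jacobi rotation per pair in this list, with no pivot search at all.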

The opportunity for parallelisation that is particular to Jacobi is based on combining cyclic Jacobi with the observation that Givens rotations for disjoint sets of indices commute, so that several can be applied in parallel.

For example, with the disjoint pivot pairs (1, 2) and (3, 4), two processors can perform both rotations in parallel, because no matrix element is accessed by both rotation computations.

Partitioning the set of index pairs of a sweep into classes that are pairwise disjoint is equivalent to partitioning the edge set of a complete graph into matchings, which is the same thing as edge colouring it; each colour class then becomes a round within the sweep.

The minimal number of rounds is the chromatic index of the complete graph, and equals n − 1 for even n and n for odd n.
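One standard way to realise such a partition for even n is the round-robin "circle method" from tournament scheduling; a sketch (function name ours):

```python
def jacobi_rounds(n):
    """Partition all pairs {i, j} of 0..n-1 (n even) into n-1 rounds of
    n/2 pairwise disjoint pairs: an edge colouring of the complete
    graph K_n with n-1 colours (round-robin 'circle method')."""
    assert n % 2 == 0
    players = list(range(n))
    rounds = []
    for _ in range(n - 1):
        rounds.append([tuple(sorted((players[k], players[n - 1 - k])))
                       for k in range(n // 2)])
        # keep the first element fixed, rotate the rest by one position
        players = [players[0]] + [players[-1]] + players[1:-1]
    return rounds
```

Each round then supplies n/2 rotations that touch disjoint index pairs and can be executed concurrently.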

Further parallelisation is possible by dividing the work for a single rotation between several processors, but that might be getting too fine-grained to be practical.

The integer state counts the number of components of changed which have the value true.

When implementing the algorithm, the assignments specified using matrix notation must be performed simultaneously, e.g. by computing all new values into temporaries before overwriting the old entries.

This implementation does not correctly account for the case in which one dimension is an independent subspace.

For example, if given a diagonal matrix, the above implementation will never terminate, as none of the eigenvalues will change.

Hence, in real implementations, extra logic must be added to account for this case.

When the eigenvalues (and eigenvectors) of a symmetric matrix are known, the following values are easily calculated.
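As an illustration (the particular quantities shown here are our selection): once the decomposition S = V diag(w) Vᵀ is known, such values follow directly from the eigenvalues w and eigenvectors V.

```python
import numpy as np

def derived_values(w, V):
    """Quantities that follow directly from S = V diag(w) V^T (sketch)."""
    return {
        "trace": float(np.sum(w)),                 # sum of eigenvalues
        "det": float(np.prod(w)),                  # product of eigenvalues
        "inverse": V @ np.diag(1.0 / w) @ V.T,     # requires all w != 0
        "sqrt": V @ np.diag(np.sqrt(w)) @ V.T,     # requires w >= 0
    }
```

Powers, the matrix exponential, and similar functions of S are obtained the same way, by applying the scalar function to the eigenvalues.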

The following code is a straightforward implementation of the mathematical description of the Jacobi eigenvalue algorithm in the Julia programming language.

For this case, the method is modified in such a way that S need not be explicitly calculated, which reduces the danger of round-off errors.
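If the case in question is, for instance, computing the eigenvalues of S = AᵀA (equivalently, the singular values of A), a well-known realisation of this idea is the one-sided Jacobi method, which rotates the columns of A directly so that S is never formed; a sketch (our implementation, shown here for illustration only):

```python
import numpy as np

def one_sided_jacobi_singular_values(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi sketch: orthogonalise the columns of A by plane
    rotations; S = A^T A is never formed explicitly.  The singular
    values are the final column norms, returned in descending order."""
    A = A.astype(float).copy()
    n = A.shape[1]
    for _ in range(max_sweeps):
        converged = True
        for i in range(n - 1):
            for j in range(i + 1, n):
                # the needed entries of A^T A, computed on the fly
                a = A[:, i] @ A[:, i]
                b = A[:, j] @ A[:, j]
                c = A[:, i] @ A[:, j]
                if abs(c) > tol * np.sqrt(a * b):
                    converged = False
                    theta = 0.5 * np.arctan2(2 * c, b - a)
                    co, si = np.cos(theta), np.sin(theta)
                    Ai = co * A[:, i] - si * A[:, j]
                    Aj = si * A[:, i] + co * A[:, j]
                    A[:, i], A[:, j] = Ai, Aj
        if converged:
            break
    return np.sort(np.linalg.norm(A, axis=0))[::-1]
```

Because only inner products of the current columns are ever formed, the squaring of the condition number that comes with building AᵀA explicitly is avoided.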