Gershgorin circle theorem

In mathematics, the Gershgorin circle theorem may be used to bound the spectrum of a square matrix.

It was first published by the Soviet mathematician Semyon Aronovich Gershgorin in 1931.

Let $A$ be a complex $n \times n$ matrix with entries $a_{ij}$. For $i \in \{1, \dots, n\}$, let $R_i = \sum_{j \neq i} |a_{ij}|$ be the sum of the absolute values of the non-diagonal entries in the $i$-th row, and let $D(a_{ii}, R_i) \subseteq \mathbb{C}$ be the closed disc centered at $a_{ii}$ with radius $R_i$. Such a disc is called a Gershgorin disc.

Theorem: Every eigenvalue of $A$ lies within at least one of the Gershgorin discs $D(a_{ii}, R_i)$.

Proof: Let $\lambda$ be an eigenvalue of $A$ with eigenvector $x = (x_j)$. Find $i$ such that the element of $x$ with the largest absolute value is $x_i$. Since $Ax = \lambda x$, the $i$-th component gives $\sum_j a_{ij} x_j = \lambda x_i$. Splitting off the diagonal term and moving it to the other side: $\sum_{j \neq i} a_{ij} x_j = (\lambda - a_{ii}) x_i$. Therefore, dividing by $x_i$ (which is nonzero, since it is the largest entry of the nonzero vector $x$), applying the triangle inequality, and recalling that $|x_j|/|x_i| \leq 1$ for all $j$:

$$|\lambda - a_{ii}| \leq \sum_{j \neq i} |a_{ij}| \frac{|x_j|}{|x_i|} \leq \sum_{j \neq i} |a_{ij}| = R_i.$$

Corollary: The eigenvalues of $A$ must also lie within the Gershgorin discs corresponding to the columns of $A$, with radii $C_j = \sum_{i \neq j} |a_{ij}|$. Proof: Apply the theorem to $A^{\mathsf{T}}$ while recognizing that the eigenvalues of the transpose are the same as those of the original matrix.
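The theorem is easy to check numerically. A minimal sketch in Python with NumPy (the helper names and the test matrix are mine, chosen for illustration):

```python
import numpy as np

def gershgorin_discs(A):
    """Return (center, radius) pairs for the row discs of a square matrix."""
    A = np.asarray(A, dtype=complex)
    centers = np.diag(A)
    radii = np.abs(A).sum(axis=1) - np.abs(centers)
    return list(zip(centers, radii))

def in_some_disc(z, discs):
    """True if the complex number z lies in at least one closed disc."""
    return any(abs(z - c) <= r + 1e-12 for c, r in discs)

# Every eigenvalue must lie in at least one Gershgorin disc.
A = np.array([[ 3.0,  1.0, 0.5],
              [ 0.2, -2.0, 0.3],
              [ 0.1,  0.1, 1.0]])
assert all(in_some_disc(lam, gershgorin_discs(A))
           for lam in np.linalg.eigvals(A))
```

The small tolerance in the comparison guards against floating-point round-off when an eigenvalue lands exactly on a disc boundary.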

For a diagonal matrix, the Gershgorin discs coincide with the spectrum.

Conversely, if the Gershgorin discs coincide with the spectrum, the matrix is diagonal.

One way to interpret this theorem is that if the off-diagonal entries of a square matrix over the complex numbers have small norms, the eigenvalues of the matrix cannot be "far from" the diagonal entries of the matrix.

Therefore, by reducing the norms of off-diagonal entries one can attempt to approximate the eigenvalues of the matrix.

From a small example one might be tempted to conclude that each disc contains exactly one eigenvalue, and that each expresses a bound on precisely those eigenvalues whose eigenspaces are closest to one particular axis.

This is, however, just a happy coincidence: working through the steps of the proof, one finds that in each eigenvector the first element is the largest (every eigenspace is closer to the first axis than to any other axis), so the theorem only promises that the disc for row 1 (whose radius can be twice the sum of the other two radii) covers all three eigenvalues.

In the general case the theorem can be strengthened as follows: Theorem: If the union of k discs is disjoint from the union of the other n − k discs, then the former union contains exactly k and the latter n − k eigenvalues of A, when the eigenvalues are counted with their algebraic multiplicities.

Proof: Let $D$ be the diagonal matrix with entries equal to the diagonal entries of $A$, and let $B(t) = (1 - t)D + tA$ for $t \in [0, 1]$. We will use the fact that the eigenvalues are continuous in $t$, and show that if any eigenvalue moves from one of the unions to the other, then it must be outside all the discs for some $t$, which is a contradiction.

The diagonal entries of $B(t)$ are equal to those of $A$; thus the centers of the Gershgorin circles are the same, but their radii are $t$ times those of $A$. Hence every Gershgorin disc of $B(t)$ is contained in the corresponding disc of $A$, and the two unions of discs of $A$ remain disjoint regions that together contain all eigenvalues of $B(t)$ for every $t$. For $t = 0$ the eigenvalues of $B(0) = D$ are simply the diagonal entries of $A$, exactly $k$ of which lie in the union of the $k$ discs. Since the eigenvalues depend continuously on $t$ and cannot jump across the gap between the two disjoint unions, for $t = 1$ exactly $k$ eigenvalues of $A = B(1)$ lie in the union of the $k$ discs, and the theorem is proven.
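The continuity argument can be illustrated numerically. In this sketch (the test matrix is my own), the first Gershgorin disc is disjoint from the other two, and the eigenvalue count inside it stays constant along the whole family $B(t)$:

```python
import numpy as np

# A matrix whose first Gershgorin disc D(10, 1) is disjoint from the
# other two discs D(1, 0.7) and D(0, 0.6).
A = np.array([[10.0, 0.5, 0.5],
              [ 0.2, 1.0, 0.5],
              [ 0.1, 0.5, 0.0]])
D = np.diag(np.diag(A))
centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)

def count_in_first_disc(M):
    """Number of eigenvalues of M inside the first Gershgorin disc of A."""
    return sum(abs(lam - centers[0]) <= radii[0] + 1e-12
               for lam in np.linalg.eigvals(M))

# B(t) = (1 - t) D + t A interpolates from D to A; its discs share A's
# centers and have radii t times A's, so they stay inside A's discs and
# the eigenvalue count inside the isolated first disc never changes.
for t in np.linspace(0.0, 1.0, 11):
    B = (1 - t) * D + t * A
    assert count_in_first_disc(B) == 1
```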

Remarks: It is necessary to count the eigenvalues with respect to their algebraic multiplicities.

Consider, for instance, a matrix consisting of two Jordan blocks, such as

$$A = \begin{pmatrix} 5 & 1 & 0 & 0 & 0 \\ 0 & 5 & 1 & 0 & 0 \\ 0 & 0 & 5 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$

The union of the first 3 discs does not intersect the last 2, but the matrix has only 2 eigenvectors, $e_1$ and $e_4$, and therefore only 2 distinct eigenvalues, 5 and 0. A formulation of the theorem counting distinct eigenvalues (or eigenvectors) would thus be false, since the first union would then contain only one of them rather than three. Counted with algebraic multiplicity, however, the eigenvalue 5 occurs 3 times and the eigenvalue 0 occurs 2 times, exactly as the theorem asserts.

Application: The Gershgorin circle theorem is useful in solving matrix equations of the form $Ax = b$ for $x$, where $b$ is a vector and $A$ is a matrix with a large condition number.

In this kind of problem, the error in the final result is usually of the same order of magnitude as the error in the initial data multiplied by the condition number of A.

For very high condition numbers, even very small errors due to rounding can be magnified to such an extent that the result is meaningless.

It would therefore be useful to reduce the condition number of $A$ before solving the system. This can be done by preconditioning: a matrix $P$ such that $P \approx A^{-1}$ is constructed, and then the equation $PAx = Pb$ is solved for $x$. Using the exact inverse of $A$ would be ideal, but computing a matrix inverse is generally avoided because of the computational expense, so only an approximation is used.

By the Gershgorin circle theorem, every eigenvalue of PA lies within a known area and so we can form a rough estimate of how good our choice of P was.
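This check can be sketched in Python. The matrix and the artificial perturbation below are my own, chosen only to mimic an inexact preconditioner; a practical $P$ would come from something like an incomplete factorization:

```python
import numpy as np

A = np.array([[100.0,   1.0,   2.0],
              [  3.0, 200.0,   4.0],
              [  5.0,   6.0, 300.0]])

# Perturb the exact inverse to mimic an inexact preconditioner P ~ A^-1.
P = np.linalg.inv(A) + 1e-5

M = P @ A
centers = np.diag(M)
radii = np.abs(M).sum(axis=1) - np.abs(centers)

# Since P A is approximately the identity, every Gershgorin disc of P A
# is a tiny disc near 1, so all its eigenvalues are close to 1 and the
# preconditioned system is well conditioned.
assert np.all(np.abs(centers - 1.0) < 0.01)
assert np.all(radii < 0.01)
```

The smaller and closer to 1 the discs of $PA$ are, the better the choice of $P$ was.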

Use the Gershgorin circle theorem to estimate the eigenvalues of

$$A = \begin{pmatrix} 10 & -1 & 0 & 1 \\ 0.2 & 8 & 0.2 & 0.2 \\ 1 & 1 & 2 & 1 \\ -1 & -1 & -1 & -11 \end{pmatrix}.$$

Starting with row one, we take the element on the diagonal, $a_{ii}$, as the center for the disc.

We then take the remaining elements in the row and apply the formula $R_i = \sum_{j \neq i} |a_{ij}|$ to obtain the following four discs: $D(10, 2)$, $D(8, 0.6)$, $D(2, 3)$, and $D(-11, 3)$. Note that we can improve the accuracy of the last two discs by applying the formula to the corresponding columns of the matrix, obtaining $D(2, 1.2)$ and $D(-11, 2.2)$.
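Assuming the 4×4 matrix used in this example (restated in the code below), the row and column discs can be computed directly:

```python
import numpy as np

# The 4x4 example matrix (assumed from the worked example above):
A = np.array([[10.0, -1.0,  0.0,   1.0],
              [ 0.2,  8.0,  0.2,   0.2],
              [ 1.0,  1.0,  2.0,   1.0],
              [-1.0, -1.0, -1.0, -11.0]])

centers = np.diag(A)
row_radii = np.abs(A).sum(axis=1) - np.abs(centers)  # discs from rows
col_radii = np.abs(A).sum(axis=0) - np.abs(centers)  # discs from columns

# Row discs:    D(10, 2),   D(8, 0.6), D(2, 3),   D(-11, 3)
# Column discs: D(10, 2.2), D(8, 3),   D(2, 1.2), D(-11, 2.2)
# The theorem guarantees every eigenvalue lies in some row disc AND in
# some column disc:
for lam in np.linalg.eigvals(A):
    assert any(abs(lam - c) <= r + 1e-9 for c, r in zip(centers, row_radii))
    assert any(abs(lam - c) <= r + 1e-9 for c, r in zip(centers, col_radii))
```

For the last two rows the column radii (1.2 and 2.2) are smaller than the row radii (both 3), which is exactly the improvement obtained from the columns.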

Note that this is a (column) diagonally dominant matrix: in every column, the absolute value of the diagonal entry exceeds the sum of the absolute values of the other entries, $|a_{jj}| > \sum_{i \neq j} |a_{ij}|$.

This means that most of the weight of the matrix is concentrated on the diagonal, which explains why the eigenvalues are so close to the centers of the circles and why the estimates are very good.

For a random matrix, we would expect the eigenvalues to be substantially further from the centers of the circles.

This diagram shows the Gershgorin discs (in yellow) derived for the eigenvalues. The first two discs overlap and their union contains two eigenvalues. The third and fourth discs are disjoint from the others and contain one eigenvalue each.