Divide-and-conquer eigenvalue algorithm

The basic concept behind these algorithms is the divide-and-conquer approach from computer science.

This article covers the basic idea of the algorithm as originally proposed by Cuppen in 1981, which is not numerically stable without additional refinements.

As with most eigenvalue algorithms for Hermitian matrices, divide-and-conquer begins with a reduction to tridiagonal form.

For an $m \times m$ matrix, the standard method for this, via Householder reflections, takes $\tfrac{4}{3}m^3$ flops, or $\tfrac{8}{3}m^3$ if eigenvectors are needed as well.
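As a concrete illustration (not part of the algorithm itself), the tridiagonal form of a real symmetric matrix can be computed in SciPy via Householder-based Hessenberg reduction, which yields a tridiagonal matrix when the input is symmetric:

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A = (A + A.T) / 2  # make A symmetric

# For symmetric A, Householder-based Hessenberg reduction produces a
# tridiagonal matrix T with A = Q @ T @ Q.T (Q orthogonal).
T, Q = hessenberg(A, calc_q=True)

assert np.allclose(Q @ T @ Q.T, A)     # similarity is preserved
assert np.allclose(np.triu(T, 2), 0)   # zero above the first superdiagonal
assert np.allclose(np.tril(T, -2), 0)  # zero below the first subdiagonal
```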

There are other algorithms, such as the Arnoldi iteration, which may do better for certain classes of matrices; we will not consider them further here.

Consider a block diagonal matrix $T = \begin{bmatrix} T_1 & 0 \\ 0 & T_2 \end{bmatrix}.$ The eigenvalues and eigenvectors of $T$ are simply those of $T_1$ and $T_2$, and it will almost always be faster to solve these two smaller problems than to solve the original problem all at once.
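A quick numerical sketch of this observation, using arbitrary illustrative blocks:

```python
import numpy as np

# The eigenvalues of a block diagonal matrix are the union of the
# eigenvalues of its blocks (illustrative 2x2 blocks).
T1 = np.array([[2.0, 1.0], [1.0, 2.0]])
T2 = np.array([[5.0, 0.5], [0.5, 3.0]])

T = np.block([[T1, np.zeros((2, 2))],
              [np.zeros((2, 2)), T2]])

eig_T = np.sort(np.linalg.eigvalsh(T))
eig_blocks = np.sort(np.concatenate([np.linalg.eigvalsh(T1),
                                     np.linalg.eigvalsh(T2)]))

assert np.allclose(eig_T, eig_blocks)
```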

This technique can be used to improve the efficiency of many eigenvalue algorithms, but it has special significance to divide-and-conquer.

For the rest of this article, we will assume the input to the divide-and-conquer algorithm is an $m \times m$ real symmetric tridiagonal matrix $T$.

The divide part of the divide-and-conquer algorithm comes from the realization that a tridiagonal matrix is "almost" block diagonal.

We can write $T$ as a block diagonal matrix, plus a rank-1 correction:

$$T = \begin{bmatrix} \hat{T}_1 & 0 \\ 0 & \hat{T}_2 \end{bmatrix} + \beta v v^\mathsf{T}, \qquad v = (0, \ldots, 0, 1, 1, 0, \ldots, 0)^\mathsf{T},$$

where $\beta$ is the off-diagonal entry of $T$ at the chosen split point. The only difference between the corresponding block $T_1$ of $T$ and $\hat{T}_1$ is that the lower-right entry of $\hat{T}_1$ has been reduced by $\beta$; similarly, the upper-left entry of $\hat{T}_2$ has been reduced by $\beta$.
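The divide step can be sketched as follows; the split index and matrix entries are illustrative, and `divide` is a hypothetical helper, not a library function:

```python
import numpy as np

# Hypothetical helper: split a symmetric tridiagonal matrix T at index n
# into block diagonal form plus a rank-one correction beta * v v^T.
def divide(T, n):
    beta = T[n, n - 1]       # off-diagonal entry at the split point
    T1_hat = T[:n, :n].copy()
    T2_hat = T[n:, n:].copy()
    T1_hat[-1, -1] -= beta   # adjust lower-right entry of the first block
    T2_hat[0, 0] -= beta     # adjust upper-left entry of the second block
    v = np.zeros(T.shape[0])
    v[n - 1] = v[n] = 1.0
    return T1_hat, T2_hat, beta, v

# Illustrative 4x4 symmetric tridiagonal matrix.
T = (np.diag([4.0, 3.0, 2.0, 1.0])
     + np.diag([1.0, 0.5, 0.2], 1)
     + np.diag([1.0, 0.5, 0.2], -1))

T1_hat, T2_hat, beta, v = divide(T, 2)
block = np.block([[T1_hat, np.zeros((2, 2))],
                  [np.zeros((2, 2)), T2_hat]])
assert np.allclose(block + beta * np.outer(v, v), T)
```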

The remainder of the divide step is to solve for the eigenvalues (and if desired the eigenvectors) of $\hat{T}_1$ and $\hat{T}_2$, that is, to find the diagonalizations $\hat{T}_1 = Q_1 D_1 Q_1^\mathsf{T}$ and $\hat{T}_2 = Q_2 D_2 Q_2^\mathsf{T}$.

This can be accomplished with recursive calls to the divide-and-conquer algorithm, although practical implementations often switch to the QR algorithm for small enough submatrices.

It is now elementary to show that

$$T = \begin{bmatrix} Q_1 & 0 \\ 0 & Q_2 \end{bmatrix} \left( \begin{bmatrix} D_1 & 0 \\ 0 & D_2 \end{bmatrix} + \beta z z^\mathsf{T} \right) \begin{bmatrix} Q_1^\mathsf{T} & 0 \\ 0 & Q_2^\mathsf{T} \end{bmatrix},$$

where $z$ consists of the last row of $Q_1$ followed by the first row of $Q_2$. The remaining task has been reduced to finding the eigenvalues of a diagonal matrix plus a rank-one correction. For notational simplicity, we seek the eigenvalues of $D + w w^\mathsf{T}$, where $D$ is a diagonal matrix with distinct entries $d_i$ and $w$ is a vector.
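This reduction can be checked numerically: with $Q = \operatorname{diag}(Q_1, Q_2)$ and $z$ built from the last row of $Q_1$ and the first row of $Q_2$, we recover $T = Q\,(D + \beta z z^\mathsf{T})\,Q^\mathsf{T}$. The matrix below is illustrative:

```python
import numpy as np

# Illustrative 4x4 symmetric tridiagonal matrix, split at index 2.
T = (np.diag([4.0, 3.0, 2.0, 1.0])
     + np.diag([1.0, 0.5, 0.2], 1)
     + np.diag([1.0, 0.5, 0.2], -1))
n, beta = 2, T[2, 1]

# Adjusted blocks of the divide step.
T1_hat = T[:n, :n].copy()
T1_hat[-1, -1] -= beta
T2_hat = T[n:, n:].copy()
T2_hat[0, 0] -= beta

# Diagonalize the subproblems (stand-in for the recursive calls).
D1, Q1 = np.linalg.eigh(T1_hat)
D2, Q2 = np.linalg.eigh(T2_hat)

# z is the last row of Q1 stacked on the first row of Q2.
z = np.concatenate([Q1[-1, :], Q2[0, :]])
D = np.diag(np.concatenate([D1, D2]))
Q = np.block([[Q1, np.zeros((n, n))],
              [np.zeros((n, n)), Q2]])

# T equals Q (D + beta z z^T) Q^T.
assert np.allclose(Q @ (D + beta * np.outer(z, z)) @ Q.T, T)
```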

The case of a zero entry is simple, since if $w_i$ is zero, $(d_i, e_i)$ is an eigenpair of $D + w w^\mathsf{T}$ (where $e_i$ is the $i$-th standard basis vector), because $(D + w w^\mathsf{T}) e_i = D e_i = d_i e_i$.

Assume now that every $w_i$ is nonzero. If $\lambda$ is an eigenvalue of $D + w w^\mathsf{T}$ with eigenvector $q$, then $(D + w w^\mathsf{T}) q = \lambda q$, so $q = -(D - \lambda I)^{-1} w \, (w^\mathsf{T} q)$. Multiplying on the left by $w^\mathsf{T}$ and dividing by the scalar $w^\mathsf{T} q$ (which is nonzero under our assumptions) yields the secular equation

$$1 + \sum_{j=1}^{m} \frac{w_j^2}{d_j - \lambda} = 0.$$

The problem has therefore been reduced to finding the roots of the rational function defined by the left-hand side of this equation.
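The relationship between the eigenvalues of $D + w w^\mathsf{T}$ and the secular function $f(\lambda) = 1 + \sum_j w_j^2 / (d_j - \lambda)$ can be verified numerically on illustrative data:

```python
import numpy as np

# Illustrative diagonal entries and update vector.
d = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([0.3, 0.4, 0.5, 0.6])

def secular(lam):
    """Left-hand side of the secular equation."""
    return 1.0 + np.sum(w**2 / (d - lam))

# Every eigenvalue of D + w w^T is a root of the secular function.
eigvals = np.linalg.eigvalsh(np.diag(d) + np.outer(w, w))
assert all(abs(secular(lam)) < 1e-8 for lam in eigvals)
```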

Solving the nonlinear secular equation requires an iterative technique, such as the Newton–Raphson method.
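A sketch of this root-by-root solution on illustrative data: each root of the secular function lies strictly between consecutive diagonal entries $d_j$, with one final root above the largest entry (bounded by $d_m + w^\mathsf{T} w$). For simplicity this sketch brackets each root and applies Brent's method via `scipy.optimize.brentq` rather than Newton–Raphson:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative diagonal entries and update vector.
d = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([0.3, 0.4, 0.5, 0.6])

def secular(lam):
    return 1.0 + np.sum(w**2 / (d - lam))

# One root in each interval (d_j, d_{j+1}), plus one above d_m.
eps = 1e-10
brackets = [(d[j] + eps, d[j + 1] - eps) for j in range(len(d) - 1)]
brackets.append((d[-1] + eps, d[-1] + w @ w))
roots = np.array([brentq(secular, a, b) for a, b in brackets])

# The roots are exactly the eigenvalues of D + w w^T.
assert np.allclose(np.sort(roots),
                   np.linalg.eigvalsh(np.diag(d) + np.outer(w, w)))
```

In production implementations, a carefully safeguarded Newton-type iteration on a reformulated secular function is used instead, since naive bracketing can lose accuracy when roots crowd near the poles.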

However, each root can be found in $O(1)$ iterations, each of which requires $\Theta(m)$ flops (to evaluate an $m$-degree rational function), making the cost of the iterative part of this algorithm $\Theta(m^2)$.

We will use the master theorem for divide-and-conquer recurrences to analyze the running time.

We can write the recurrence relation

$$T(m) = 2\,T\!\left(\frac{m}{2}\right) + \Theta(m^2).$$

In the notation of the master theorem, $a = 2$, $b = 2$, and $f(m) = \Theta(m^2)$. Since $\log_b a = 1 < 2$, the third case of the theorem applies, so we have $T(m) = \Theta(m^2)$.

Above, we pointed out that reducing a Hermitian matrix to tridiagonal form takes $\tfrac{4}{3}m^3$ flops.

This dwarfs the running time of the divide-and-conquer part, and at this point it is not clear what advantage the divide-and-conquer algorithm offers over the QR algorithm (which also takes $\Theta(m^2)$ flops for tridiagonal matrices).

The advantage of divide-and-conquer appears when eigenvectors are needed as well. If this is the case, reduction to tridiagonal form takes $\tfrac{8}{3}m^3$ flops, and the iterative part must also accumulate the eigenvector transformations. For the QR algorithm with a reasonable target precision, this is $\approx 6m^3$ flops, whereas for divide-and-conquer it is $\approx \tfrac{4}{3}m^3$ flops: the $\Theta(m^3)$ work (the matrix multiplications assembling the eigenvectors) is separated from the iteration, rather than being repeated in every iterative step. Adding the $\tfrac{8}{3}m^3$ flops for the reduction, the total improvement is from $\approx 9m^3$ to $\approx 4m^3$ flops.

In practice, the rank-one update vectors that arise tend to be numerically sparse, meaning that they have many entries with values smaller than the floating-point precision, allowing for numerical deflation, i.e. breaking the problem into uncoupled subproblems.

There exist specialized root-finding techniques for rational functions that may do better than the Newton–Raphson method in terms of both performance and stability.

These can be used to improve the iterative part of the divide-and-conquer algorithm.

The divide-and-conquer algorithm is readily parallelized, and linear algebra computing packages such as LAPACK contain high-quality parallel implementations.
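For example, SciPy exposes LAPACK's divide-and-conquer driver (`?syevd`) through the `driver="evd"` argument of `scipy.linalg.eigh`:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100))
A = (A + A.T) / 2  # symmetric test matrix

# driver="evd" selects LAPACK's divide-and-conquer routine (?syevd).
vals, vecs = eigh(A, driver="evd")

# The returned decomposition satisfies A = V diag(w) V^T.
assert np.allclose(vecs @ np.diag(vals) @ vecs.T, A)
```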