The QR algorithm was developed in the late 1950s by John G. F. Francis and by Vera N. Kublanovskaya, working independently.
At the k-th step (starting with k = 0), we compute the QR decomposition Ak = Qk Rk, where Qk is an orthogonal matrix (i.e., Qkᵀ = Qk⁻¹) and Rk is an upper triangular matrix. We then form Ak+1 = Rk Qk. Note that Ak+1 = Qkᵀ Ak Qk, so all the Ak are similar and hence have the same eigenvalues.
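As a sketch (not production code), the basic iteration can be written in a few lines of plain Python; the classical Gram–Schmidt QR factorization and the 2×2 test matrix below are illustrative choices for exposition only:

```python
import math

def qr_decompose(A):
    # Classical Gram-Schmidt QR; adequate for a small illustration,
    # though real implementations use Householder reflections.
    n = len(A)
    Q = [[0.0] * n for _ in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(n)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(n))
            for i in range(n):
                v[i] -= R[k][j] * Q[i][k]
        R[j][j] = math.sqrt(sum(x * x for x in v))
        for i in range(n):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def qr_iterate(A, steps=100):
    # Ak = Qk Rk, then A(k+1) = Rk Qk = Qk^T Ak Qk: a similarity
    # transform, so the eigenvalues are preserved at every step.
    for _ in range(steps):
        Q, R = qr_decompose(A)
        A = matmul(R, Q)
    return A

Ak = qr_iterate([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 3 and 1
```

For this symmetric input the iterates approach a diagonal matrix with the eigenvalues, in decreasing order of magnitude, on the diagonal.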
Under certain conditions,[4] the matrices Ak converge to a triangular matrix, the Schur form of A.
In testing for convergence it is impractical to require exact zeros,[citation needed] but the Gershgorin circle theorem provides a bound on the error.
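In the symmetric case, where the Ak approach a diagonal matrix, the Gershgorin bound is easy to compute; the following plain-Python sketch (with illustrative matrix values) shows the disks for a nearly converged iterate:

```python
def gershgorin_disks(A):
    # Every eigenvalue of A lies in at least one disk centered at a
    # diagonal entry A[i][i], with radius equal to the sum of the
    # absolute values of the other entries in row i.
    n = len(A)
    return [(A[i][i], sum(abs(A[i][j]) for j in range(n) if j != i))
            for i in range(n)]

# A nearly diagonal iterate (illustrative values): the tiny
# off-diagonal entries give tiny radii, so each diagonal entry is
# provably within that radius of an eigenvalue.
disks = gershgorin_disks([[4.0, 1e-6, 0.0],
                          [1e-6, 2.0, 1e-8],
                          [0.0, 1e-8, 1.0]])
```

Convergence can thus be declared once every disk radius falls below a chosen tolerance, without requiring the off-diagonal entries to be exactly zero.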
In practice, the matrix is first brought to upper Hessenberg form (which costs O(n³) arithmetic operations using a technique based on Householder reduction), with a finite sequence of orthogonal similarity transforms, somewhat like a two-sided QR decomposition. A single QR step on a Hessenberg matrix then costs only O(n²) operations, rather than O(n³) for a full matrix. Moreover, because the Hessenberg form is already nearly upper triangular (it has at most one nonzero entry below the diagonal in each column), using it as a starting point reduces the number of steps required for convergence of the QR algorithm.
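A Householder reduction to Hessenberg form can be sketched as follows in plain Python (for exposition only; the 4×4 symmetric matrix is an illustrative choice, and a real implementation would call a library such as LAPACK):

```python
import math

def hessenberg(A):
    # Householder reduction to upper Hessenberg form via a finite
    # sequence of orthogonal similarity transforms P A P^T.
    n = len(A)
    H = [row[:] for row in A]
    for k in range(n - 2):
        # Householder vector that zeroes column k below the first
        # subdiagonal.
        x = [H[i][k] for i in range(k + 1, n)]
        alpha = -math.copysign(math.sqrt(sum(t * t for t in x)), x[0])
        v = x[:]
        v[0] -= alpha
        vnorm = math.sqrt(sum(t * t for t in v))
        if vnorm == 0.0:
            continue
        v = [t / vnorm for t in v]
        # Apply P = I - 2 v v^T from the left (rows k+1..n-1) ...
        for j in range(n):
            s = 2.0 * sum(v[i - k - 1] * H[i][j] for i in range(k + 1, n))
            for i in range(k + 1, n):
                H[i][j] -= s * v[i - k - 1]
        # ... and from the right (columns k+1..n-1): a similarity.
        for i in range(n):
            s = 2.0 * sum(H[i][j] * v[j - k - 1] for j in range(k + 1, n))
            for j in range(k + 1, n):
                H[i][j] -= s * v[j - k - 1]
    return H

# Illustrative symmetric matrix; for symmetric input the Hessenberg
# form is in fact tridiagonal.
A = [[4.0, 1.0, 2.0, 3.0],
     [1.0, 3.0, 1.0, 2.0],
     [2.0, 1.0, 5.0, 1.0],
     [3.0, 2.0, 1.0, 4.0]]
H = hessenberg(A)
```

Because only similarity transforms are applied, H has the same eigenvalues (and in particular the same trace) as A.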
In practice, the iteration drives certain subdiagonal elements toward zero; once they are negligible they can be set to zero, so that effectively a small block along the diagonal is split off from the bulk of the matrix.
A typical symmetric QR algorithm isolates each eigenvalue (then reduces the size of the matrix) with only one or two iterations, making it efficient as well as robust.
The steps of a QR iteration with explicit shift on a real Hessenberg matrix are, in outline: subtract the shift from the diagonal, triangularize the shifted matrix with a sequence of Givens rotations applied from the left, then apply the transposed rotations from the right and add the shift back. In this last stage the rotations act on columns rather than rows, rotating data between columns.
The expected pattern is that each rotation moves some nonzero value from the diagonal out to the subdiagonal, returning the matrix to Hessenberg form.
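One explicit-shift QR step on a small Hessenberg matrix might be sketched like this (the matrix and the shift choice, the last diagonal entry, are illustrative): the left rotations triangularize the shifted matrix, and each transposed rotation applied from the right fills in exactly one subdiagonal entry, returning the result to Hessenberg form.

```python
import math

def givens(a, b):
    # Rotation coefficients (c, s) chosen so that
    # [c s; -s c] * [a; b] = [r; 0].
    r = math.hypot(a, b)
    return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

def qr_step_hessenberg(H, mu):
    # One explicit-shift QR step: H - mu*I = Q R, return R Q + mu*I.
    n = len(H)
    A = [[H[i][j] - mu * (i == j) for j in range(n)] for i in range(n)]
    rots = []
    # Left: zero each subdiagonal entry with a Givens rotation.
    for k in range(n - 1):
        c, s = givens(A[k][k], A[k + 1][k])
        rots.append((c, s))
        for j in range(n):
            t1, t2 = A[k][j], A[k + 1][j]
            A[k][j] = c * t1 + s * t2
            A[k + 1][j] = -s * t1 + c * t2
    # Right: apply the transposed rotations, restoring Hessenberg form.
    for k, (c, s) in enumerate(rots):
        for i in range(n):
            t1, t2 = A[i][k], A[i][k + 1]
            A[i][k] = c * t1 + s * t2
            A[i][k + 1] = -s * t1 + c * t2
    for i in range(n):
        A[i][i] += mu
    return A

# Illustrative 3x3 Hessenberg matrix, shifted by its last diagonal entry.
H2 = qr_step_hessenberg([[4.0, 1.0, 2.0],
                         [3.0, 2.0, 1.0],
                         [0.0, 1.0, 3.0]], mu=3.0)
```

The output is again Hessenberg (the (3,1) entry stays zero), and the trace is unchanged because the step is an orthogonal similarity transform.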
The basic QR algorithm can be visualized in the case where A is a positive-definite symmetric matrix.
The relationship between the input to the algorithm and a single iteration can then be depicted as in Figure 1.
If the large semi-axis of the ellipse is parallel to the x-axis, one iteration of QR does nothing.
In that event, the ellipse can be thought of as balancing precariously without being able to fall in either direction.
Eventually though, the algorithm would converge to a different fixed point, but it would take a long time.
When the input ellipse is very close to a circle, however, the two eigenvalues are nearly equal, and any value between them approximates both well; the problem of approximately finding the eigenvalues is therefore easy in that case.
An iteration of QR (or LR) tilts the semi-axes less and less as the input ellipse gets closer to being a circle.
The number of iterations needed to achieve near-parallelism increases without bound as the input ellipse becomes more circular.
In higher dimensions, shifting like this makes the length of the smallest semi-axis of an ellipsoid small relative to the other semi-axes, which speeds up convergence to the smallest eigenvalue, but does not speed up convergence to the other eigenvalues.
This becomes useless when the smallest eigenvalue is fully determined, so the matrix must then be deflated, which simply means removing its last row and column.
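The shifted iteration with deflation can be sketched as follows in plain Python. The choices here are illustrative simplifications: classical Gram–Schmidt QR and a Rayleigh-quotient shift (the last diagonal entry) rather than the Wilkinson shift and the Householder/Givens factorizations used by production codes, and the 3×3 symmetric tridiagonal test matrix is made up for the example.

```python
import math

def qr_decompose(A):
    # Gram-Schmidt QR (assumes full rank); fine for a small sketch.
    n = len(A)
    Q = [[0.0] * n for _ in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(n)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(n))
            for i in range(n):
                v[i] -= R[k][j] * Q[i][k]
        R[j][j] = math.sqrt(sum(x * x for x in v))
        for i in range(n):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def eigenvalues(A, tol=1e-10, max_iter=500):
    # Shifted QR with deflation: when the last subdiagonal entry is
    # negligible, the last diagonal entry is an eigenvalue, and the
    # matrix is deflated by removing its last row and column.
    A = [row[:] for row in A]
    eigs = []
    for _ in range(max_iter):
        m = len(A)
        if m == 1:
            break
        mu = A[m - 1][m - 1]            # Rayleigh-quotient shift
        shifted = [[A[i][j] - mu * (i == j) for j in range(m)]
                   for i in range(m)]
        Q, R = qr_decompose(shifted)
        A = matmul(R, Q)
        for i in range(m):
            A[i][i] += mu
        if abs(A[m - 1][m - 2]) < tol:  # deflate
            eigs.append(A[m - 1][m - 1])
            A = [row[:m - 1] for row in A[:m - 1]]
    eigs.append(A[0][0])
    return eigs

# Illustrative symmetric tridiagonal matrix; its trace (8) and
# determinant (7) constrain the computed eigenvalues.
eigs = eigenvalues([[4.0, 1.0, 0.0],
                    [1.0, 3.0, 1.0],
                    [0.0, 1.0, 1.0]])
```

Since similarity transforms preserve eigenvalues, the returned values must sum to the trace and multiply to the determinant of the input.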
In modern computational practice, the QR algorithm is performed in an implicit version, which makes it easier to introduce multiple shifts.
This operation is known as bulge chasing, due to the peculiar shape of the non-zero entries of the matrix along the steps of the algorithm.
Since in the modern implicit version of the procedure no QR decompositions are explicitly performed, some authors, for instance Watkins,[9] have suggested changing its name to the Francis algorithm.
Recall that the power algorithm repeatedly multiplies A times a single vector, normalizing after each iteration.
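As a reminder, a minimal power iteration might look like the following plain-Python sketch (the matrix and starting vector are illustrative):

```python
import math

def power_iteration(A, steps=200):
    # Repeatedly multiply A by a single vector, normalizing after
    # each iteration; the vector aligns with the dominant eigenvector.
    n = len(A)
    v = [1.0] + [0.0] * (n - 1)
    for _ in range(steps):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # The Rayleigh quotient v^T A v estimates the dominant eigenvalue.
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(v[i] * Av[i] for i in range(n)), v

lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 3 and 1
```

For this matrix the iteration converges to the dominant eigenvalue 3, with eigenvector proportional to (1, 1), at a rate governed by the ratio of the two eigenvalues.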
The LR algorithm was developed in the early 1950s by Heinz Rutishauser, who worked at that time as a research assistant of Eduard Stiefel at ETH Zurich.
After arranging the computation in a suitable shape, he discovered that the qd algorithm is in fact the iteration Ak = Lk Uk (LU decomposition), Ak+1 = Uk Lk, applied to a tridiagonal matrix, from which the LR algorithm follows.
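The LR iteration can be sketched in plain Python as follows (Doolittle LU without pivoting, which can break down on general matrices; the symmetric positive-definite 2×2 test matrix is an illustrative choice on which the factorization is safe):

```python
def lu_decompose(A):
    # Doolittle LU without pivoting (assumes nonzero pivots).
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i]
                                     for k in range(i))) / U[i][i]
    return L, U

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def lr_iterate(A, steps=100):
    # Ak = Lk Uk, then A(k+1) = Uk Lk = Lk^{-1} Ak Lk: a (non-orthogonal)
    # similarity transform, so the eigenvalues are preserved.
    for _ in range(steps):
        L, U = lu_decompose(A)
        A = matmul(U, L)
    return A

Ak = lr_iterate([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 3 and 1
```

As with QR, the iterates tend to upper triangular form with the eigenvalues on the diagonal; the QR algorithm's orthogonal transforms give it better numerical stability than LR's triangular ones.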
[11] This variant of the QR algorithm for the computation of singular values was first described by Golub & Kahan (1965).
The LAPACK subroutine DBDSQR implements this iterative method, with some modifications to cover the case where the singular values are very small (Demmel & Kahan 1990).
The QR algorithm can also be implemented in infinite dimensions with corresponding convergence results.