QR algorithm


In numerical linear algebra, the QR algorithm is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR algorithm was developed in the late 1950s by John G. F. Francis and by Vera N. Kublanovskaya, working independently. The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate.

The practical QR algorithm

Formally, let A be a real matrix of which we want to compute the eigenvalues, and let A0 := A. At the k-th step, we compute the QR decomposition Ak = QkRk, where Qk is an orthogonal matrix and Rk is an upper triangular matrix. We then form Ak+1 = RkQk. Note that

Ak+1 = Rk Qk = Qk−1 (Qk Rk) Qk = Qk−1 Ak Qk = QkT Ak Qk,

since Qk is orthogonal, so all the Ak are similar and hence they have the same eigenvalues. The algorithm is numerically stable because it proceeds by orthogonal similarity transforms.
Under certain conditions, the matrices Ak converge to a triangular matrix, the Schur form of A. The eigenvalues of a triangular matrix are listed on the diagonal, and the eigenvalue problem is solved. In testing for convergence it is impractical to require exact zeros, but the Gershgorin circle theorem provides a bound on the error.
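As a minimal illustration, this crude iteration can be sketched in a few lines of Python with NumPy; the tolerance, iteration cap, and convergence test below are arbitrary choices made for the sketch, not part of the algorithm proper (and for a real matrix with complex eigenvalues the iterates converge only to a block-triangular real Schur form, so this particular test would never pass):

    import numpy as np

    def qr_algorithm(A, tol=1e-10, max_iter=1000):
        """Crude (unshifted) QR iteration: Ak = Qk Rk, then Ak+1 = Rk Qk."""
        Ak = np.array(A, dtype=float)
        for _ in range(max_iter):
            Q, R = np.linalg.qr(Ak)    # QR decomposition of the current iterate
            Ak = R @ Q                 # reverse-order product, similar to Ak
            if np.linalg.norm(np.tril(Ak, -1)) < tol:
                break                  # subdiagonal negligible: (near) Schur form
        return np.diag(Ak)             # eigenvalue estimates on the diagonal

    print(qr_algorithm([[2.0, 1.0], [1.0, 3.0]]))   # approx. [3.618, 1.382]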
In this crude form the iterations are relatively expensive. This can be mitigated by first bringing the matrix A to upper Hessenberg form, with a finite sequence of orthogonal similarity transforms, somewhat like a two-sided QR decomposition. Determining the QR decomposition of an upper Hessenberg matrix costs only O(n²) arithmetic operations, rather than the O(n³) required for a general matrix. Moreover, because the Hessenberg form is already nearly upper-triangular, using it as a starting point reduces the number of steps required for convergence of the QR algorithm.
If the original matrix is symmetric, then the upper Hessenberg matrix is also symmetric and thus tridiagonal, and so are all the Ak. This reduction costs O(n³) arithmetic operations using a technique based on Householder reduction. Determining the QR decomposition of a symmetric tridiagonal matrix costs only O(n) operations.
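A brief sketch of this preliminary reduction, here using SciPy's hessenberg routine (the choice of library is incidental; any Householder-based reduction serves):

    import numpy as np
    from scipy.linalg import hessenberg

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    H = hessenberg(A)                        # orthogonally similar to A
    print(np.allclose(np.tril(H, -2), 0))    # True: zero below the subdiagonal

    S = A + A.T                              # a symmetric matrix
    T = hessenberg(S)                        # symmetric Hessenberg = tridiagonal
    print(np.allclose(np.triu(T, 2), 0))     # True: zero above the superdiagonal too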
The rate of convergence depends on the separation between eigenvalues, so a practical algorithm will use shifts, either explicit or implicit, to increase separation and accelerate convergence. A typical symmetric QR algorithm isolates each eigenvalue with only one or two iterations, making it efficient as well as robust.
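A sketch of such a shifted iteration for the symmetric tridiagonal case, using the Wilkinson shift (the eigenvalue of the trailing 2 × 2 block closest to the last diagonal entry) and deflating one eigenvalue at a time; the shift choice and the deflation tolerance are standard but particular choices made for this sketch:

    import numpy as np

    def wilkinson_shift(T):
        """Eigenvalue of the trailing 2x2 block closest to T[-1, -1]."""
        a, b, c = T[-2, -2], T[-2, -1], T[-1, -1]
        d = (a - c) / 2.0
        sign = 1.0 if d >= 0 else -1.0
        return c - sign * b * b / (abs(d) + np.hypot(d, b))

    def shifted_qr_eigenvalues(T, tol=1e-12):
        """Eigenvalues of a symmetric tridiagonal matrix by shifted QR with deflation."""
        T = np.array(T, dtype=float)
        eigs = []
        while T.shape[0] > 1:
            while abs(T[-1, -2]) > tol:               # not yet deflatable
                mu = wilkinson_shift(T)
                Q, R = np.linalg.qr(T - mu * np.eye(T.shape[0]))
                T = R @ Q + mu * np.eye(T.shape[0])   # shifted QR step, still similar to T
            eigs.append(T[-1, -1])                    # deflate the converged eigenvalue
            T = T[:-1, :-1]
        eigs.append(T[0, 0])
        return np.array(eigs)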

The implicit QR algorithm

In modern computational practice, the QR algorithm is performed in an implicit version which makes the use of multiple shifts easier to introduce. The matrix is first brought to upper Hessenberg form as in the explicit version; then, at each step, the first column of the working matrix A is transformed via a small-size Householder similarity transformation to the first column of p(A), where p, of degree r, is the polynomial that defines the shifting strategy (often p(x) = (x − λ1)(x − λ2), where λ1 and λ2 are the two eigenvalues of the trailing 2 × 2 principal submatrix of A, the so-called implicit double-shift). Then successive Householder transformations of size r + 1 are performed in order to return the working matrix A to upper Hessenberg form. This operation is known as bulge chasing, due to the peculiar shape of the non-zero entries of the matrix along the steps of the algorithm. As in the first version, deflation is performed as soon as one of the sub-diagonal entries of A is sufficiently small.
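The following sketch shows bulge chasing in the simplest setting of a single shift (so p has degree r = 1 and the transformations act on two rows at a time, here realized as Givens rotations rather than Householder reflectors); the multishift case is analogous with transformations of size r + 1. The function names and conventions are this sketch's own:

    import numpy as np

    def givens(a, b):
        """c, s with [[c, s], [-s, c]] applied to (a, b) giving (r, 0)."""
        r = np.hypot(a, b)
        return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

    def implicit_qr_step(H, mu):
        """One implicit single-shift QR step on an upper Hessenberg matrix H,
        equivalent to the explicit step H - mu*I = QR, H <- RQ + mu*I."""
        H = H.copy()
        n = H.shape[0]
        # The first rotation is taken from the first column of H - mu*I
        # (only its first two entries are nonzero); H - mu*I is never formed.
        c, s = givens(H[0, 0] - mu, H[1, 0])
        for k in range(n - 1):
            if k > 0:
                # Later rotations zero the bulge entry H[k+1, k-1], pushing it
                # one position down and right until it falls off the matrix.
                c, s = givens(H[k, k - 1], H[k + 1, k - 1])
            G = np.array([[c, s], [-s, c]])
            lo, hi = max(k - 1, 0), min(k + 3, n)
            H[k:k + 2, lo:] = G @ H[k:k + 2, lo:]    # similarity transform, left factor
            H[:hi, k:k + 2] = H[:hi, k:k + 2] @ G.T  # right factor creates the next bulge
        return H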

Renaming proposal

Since in the modern implicit version of the procedure no QR decompositions are explicitly performed, some authors, for instance Watkins, have suggested changing its name to the Francis algorithm. Golub and Van Loan use the term Francis QR step.

Interpretation and convergence

The QR algorithm can be seen as a more sophisticated variation of the basic "power" eigenvalue algorithm. Recall that the power algorithm repeatedly multiplies A times a single vector, normalizing after each iteration. The vector converges to an eigenvector of the eigenvalue of largest magnitude. Instead, the QR algorithm works with a complete basis of vectors, using QR decomposition to renormalize. For a symmetric matrix A, upon convergence, AQ = QΛ, where Λ is the diagonal matrix of eigenvalues to which A converged, and where Q is a composite of all the orthogonal similarity transforms required to get there. Thus the columns of Q are the eigenvectors.
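For a symmetric matrix this can be made concrete by accumulating the orthogonal factors; a small sketch (unshifted, hence slow, but it shows the principle):

    import numpy as np

    def qr_eig_symmetric(A, iters=500):
        """Eigenvalues and eigenvectors of a symmetric matrix by accumulating
        the Q factors of the (unshifted) QR iteration."""
        Ak = np.array(A, dtype=float)
        V = np.eye(Ak.shape[0])       # running product of all the Qk
        for _ in range(iters):
            Q, R = np.linalg.qr(Ak)
            Ak = R @ Q
            V = V @ Q                 # converges to the eigenvector matrix
        return np.diag(Ak), V

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    lam, V = qr_eig_symmetric(A)
    print(np.allclose(A @ V, V * lam))   # True: the columns of V are eigenvectors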

History

The QR algorithm was preceded by the LR algorithm, which uses the LU decomposition instead of the QR decomposition. The QR algorithm is more stable, so the LR algorithm is rarely used nowadays. However, it represents an important step in the development of the QR algorithm.
The LR algorithm was developed in the early 1950s by Heinz Rutishauser, who worked at that time as a research assistant of Eduard Stiefel at ETH Zurich. Stiefel suggested that Rutishauser use the sequence of moments y0T Ak x0, k = 0, 1, …, where Ak here denotes the k-th power of A and x0 and y0 are arbitrary fixed vectors, to find the eigenvalues of A. Rutishauser took an algorithm of Alexander Aitken for this task and developed it into the quotient-difference algorithm or qd algorithm. After arranging the computation in a suitable shape, he discovered that the qd algorithm is in fact the iteration Ak = LkUk, Ak+1 = UkLk, applied on a tridiagonal matrix, from which the LR algorithm follows.

Other variants

One variant of the QR algorithm, the Golub-Kahan-Reinsch algorithm, starts with reducing a general matrix to a bidiagonal one. This variant of the QR algorithm for the computation of singular values was first described by Golub and Kahan in 1965. The LAPACK subroutine DBDSQR implements this iterative method, with some modifications to cover the case where the singular values are very small. Together with a first step using Householder reflections and, if appropriate, QR decomposition, this forms the DGESVD routine for the computation of the singular value decomposition. The QR algorithm can also be implemented in infinite dimensions, with corresponding convergence results.
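A sketch of the first stage of this variant, Householder bidiagonalization, assuming a tall input (m ≥ n); the function names are this sketch's own, not LAPACK's:

    import numpy as np

    def householder(x):
        """Unit vector v such that (I - 2 v v^T) x is a multiple of e1."""
        v = np.array(x, dtype=float)
        v[0] += np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        norm = np.linalg.norm(v)
        return v / norm if norm > 0 else v

    def bidiagonalize(A):
        """Reduce A (m >= n) to upper bidiagonal form by alternating
        left and right Householder reflections (the Golub-Kahan first stage)."""
        B = np.array(A, dtype=float)
        m, n = B.shape
        for k in range(n):
            # Left reflection: zero the entries below the diagonal in column k.
            v = householder(B[k:, k])
            B[k:, k:] -= 2.0 * np.outer(v, v @ B[k:, k:])
            if k < n - 2:
                # Right reflection: zero the entries right of the superdiagonal in row k.
                v = householder(B[k, k + 1:])
                B[k:, k + 1:] -= 2.0 * np.outer(B[k:, k + 1:] @ v, v)
        return B

    A = np.random.default_rng(1).standard_normal((6, 4))
    B = bidiagonalize(A)
    print(np.allclose(np.tril(B, -1), 0) and np.allclose(np.triu(B, 2), 0))  # True
    print(np.allclose(np.linalg.svd(B, compute_uv=False),
                      np.linalg.svd(A, compute_uv=False)))  # True: singular values preserved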