Inverse iteration
In numerical analysis, inverse iteration is an iterative eigenvalue algorithm. It is based on the power method and improves on it: whereas the power method always converges to the eigenvector of the dominant (largest-magnitude) eigenvalue, inverse iteration allows one to choose which eigenvalue to converge to.
The basic idea of power iteration is to choose an initial vector b (either an eigenvector approximation or a random vector) and iteratively compute Ab, A^2 b, A^3 b, .... For almost every initial vector (all but a set of measure zero), the result converges to an eigenvector corresponding to the dominant eigenvalue. In practice, the vector should be normalized after every iteration.
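As a concrete illustration (a minimal NumPy sketch, not part of the original article; the function name, iteration count, and random starting vector are arbitrary choices), power iteration can be written as:

```python
import numpy as np

def power_iteration(A, num_iters=100, rng=None):
    """Return an approximate eigenvector for the dominant eigenvalue of A."""
    rng = np.random.default_rng() if rng is None else rng
    b = rng.standard_normal(A.shape[0])   # random starting vector
    for _ in range(num_iters):
        b = A @ b                          # multiply by A
        b /= np.linalg.norm(b)             # normalize after every iteration
    return b
```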
Unfortunately, power iteration can only find the eigenvector corresponding to the dominant eigenvalue. We could modify the algorithm so that instead we calculate A^{-1} b, A^{-2} b, A^{-3} b, .... Since the eigenvalues of A^{-1} are the reciprocals of the eigenvalues of A, this finds the eigenvector corresponding to the greatest reciprocal eigenvalue, max{λ^{-1}}, i.e. the eigenvalue of A of smallest magnitude. This is still not enough flexibility to find all the eigenvalues/eigenvectors.
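In practice one would not form A^{-1} explicitly; each step can instead solve the linear system A x = b. The following sketch (an illustrative assumption, not a prescribed implementation) factors A once and reuses the factorization:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power_iteration(A, num_iters=50, rng=None):
    """Approximate eigenvector for the smallest-magnitude eigenvalue of A."""
    rng = np.random.default_rng() if rng is None else rng
    b = rng.standard_normal(A.shape[0])
    lu, piv = lu_factor(A)                 # factor once, reuse every iteration
    for _ in range(num_iters):
        b = lu_solve((lu, piv), b)         # x = A^{-1} b without an explicit inverse
        b /= np.linalg.norm(b)
    return b
```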
However, imagine we have an approximation μ to the eigenvalue λk we are interested in. If μ is closer to λk than to any other eigenvalue of A, then λk − μ is the smallest-magnitude eigenvalue of the matrix A − μI. By calculating (A − μI)^{-1} b, (A − μI)^{-2} b, (A − μI)^{-3} b, ..., we can find the eigenvector corresponding to this eigenvalue, which is also an eigenvector of A for the eigenvalue λk. Once we have a suitable eigenvector approximation, we can use the Rayleigh quotient to find the eigenvalue.
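A minimal sketch of this shifted inverse iteration, assuming a dense real matrix and a shift μ that is not exactly an eigenvalue (otherwise A − μI would be singular); the names and iteration count are illustrative:

```python
import numpy as np

def inverse_iteration(A, mu, num_iters=20, rng=None):
    """Converge to the eigenpair whose eigenvalue is closest to the shift mu."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    b = rng.standard_normal(n)
    M = A - mu * np.eye(n)                 # shifted matrix
    for _ in range(num_iters):
        b = np.linalg.solve(M, b)          # apply (A - mu I)^{-1}
        b /= np.linalg.norm(b)
    lam = b @ A @ b / (b @ b)              # Rayleigh quotient recovers the eigenvalue
    return lam, b
```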
The inverse iteration algorithm converges fairly quickly. We can, however, achieve even faster convergence by using a better eigenvalue approximation each time. This is the idea behind Rayleigh quotient iteration.
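A sketch of Rayleigh quotient iteration under the same assumptions as above (the current Rayleigh quotient replaces the fixed shift at every step; no safeguard is included for the case where the shift becomes an exact eigenvalue):

```python
import numpy as np

def rayleigh_quotient_iteration(A, mu, num_iters=10, rng=None):
    """Inverse iteration with the shift updated to the current Rayleigh quotient."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    b = rng.standard_normal(n)
    b /= np.linalg.norm(b)
    for _ in range(num_iters):
        b = np.linalg.solve(A - mu * np.eye(n), b)
        b /= np.linalg.norm(b)
        mu = b @ A @ b                     # updated eigenvalue estimate (b is unit-norm)
    return mu, b
```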
The inverse iteration algorithm requires solving a linear system at each step, which would appear to cost O(n^3) operations per step. However, the per-step cost can be reduced to O(n^2) if A is first reduced to Hessenberg form (a one-time O(n^3) computation), since a shifted Hessenberg system can be solved in O(n^2) operations. This is usually done in practice.
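A sketch of this arrangement, assuming SciPy is available: since A = Q H Q^T is a similarity transform, H has the same eigenvalues as A and eigenvectors map back as v = Q u. For brevity the shifted Hessenberg system is solved here with a generic dense solver; a solver that exploits the Hessenberg structure would bring the per-step cost down to O(n^2).

```python
import numpy as np
from scipy.linalg import hessenberg

def inverse_iteration_hessenberg(A, mu, num_iters=20, rng=None):
    """Inverse iteration after a one-time reduction of A to Hessenberg form."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    H, Q = hessenberg(A, calc_q=True)      # one-time O(n^3) reduction: A = Q H Q^T
    b = rng.standard_normal(n)
    for _ in range(num_iters):
        b = np.linalg.solve(H - mu * np.eye(n), b)
        b /= np.linalg.norm(b)
    return Q @ b                           # map back to an eigenvector of A
```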