Krylov subspace

In linear algebra, the order-r Krylov subspace generated by an n-by-n matrix A and a vector b of dimension n is the linear subspace spanned by the images of b under the first r powers of A (starting from A^0 = I), that is,

    \mathcal{K}_r(A, b) = \operatorname{span}\{\, b, Ab, A^2 b, \ldots, A^{r-1} b \,\}.
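For illustration, the columns of the matrix built below span such a subspace. This is a minimal NumPy sketch; the function name krylov_matrix and the small test matrix are illustrative choices, not part of any standard library.

    import numpy as np

    def krylov_matrix(A, b, r):
        """Return the n-by-r matrix whose columns are b, Ab, A^2 b, ..., A^(r-1) b."""
        K = np.empty((b.shape[0], r))
        v = np.array(b, dtype=float)
        for j in range(r):
            K[:, j] = v
            v = A @ v          # apply A once more to get the next column
        return K

    # Small example: K_2(A, b) = span{b, Ab}
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    b = np.array([1.0, 1.0])
    print(krylov_matrix(A, b, 2))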

Background

The concept is named after Russian applied mathematician and naval engineer Alexei Krylov, who published a paper about it in 1931.

Use

Modern iterative methods for finding one (or a few) eigenvalues of large sparse matrices, or for solving large systems of linear equations, avoid matrix-matrix operations; instead, they multiply vectors by the matrix and work with the resulting vectors. Starting with a vector b, one computes Ab, then multiplies that vector by A to find A^2 b, and so on. All algorithms that work this way are referred to as Krylov subspace methods; they are among the most successful methods currently available in numerical linear algebra. A matrix-free sketch of this style appears below.
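As a concrete sketch of this matrix-vector-only style, the example below solves a linear system with SciPy's conjugate gradient routine while supplying the matrix only through a matvec callback (assuming NumPy and SciPy are available; the second-difference operator is just an illustrative choice).

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    n = 1000

    def matvec(v):
        # Apply a symmetric positive-definite second-difference operator
        # without ever forming the matrix explicitly.
        w = 2.0 * v
        w[:-1] -= v[1:]
        w[1:] -= v[:-1]
        return w

    A = LinearOperator((n, n), matvec=matvec, dtype=float)
    b = np.ones(n)
    x, info = cg(A, b)      # conjugate gradient, a Krylov subspace method
    print(info, np.linalg.norm(matvec(x) - b))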

Issues

Because the vectors quickly become almost linearly dependent due to the properties of power iteration, methods relying on Krylov subspaces frequently involve some orthogonalization scheme, such as Lanczos iteration for Hermitian matrices or Arnoldi iteration for more general matrices.
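A compact NumPy sketch of the Arnoldi iteration, which orthogonalizes the Krylov vectors as they are generated (illustrative only, without the safeguards of a production implementation):

    import numpy as np

    def arnoldi(A, b, r):
        """Build an orthonormal basis Q of the order-r Krylov subspace K_r(A, b)
        and the (r+1)-by-r upper Hessenberg matrix H with A Q[:, :r] = Q H."""
        n = b.shape[0]
        Q = np.zeros((n, r + 1))
        H = np.zeros((r + 1, r))
        Q[:, 0] = b / np.linalg.norm(b)
        for j in range(r):
            v = A @ Q[:, j]
            for i in range(j + 1):          # orthogonalize against previous basis vectors
                H[i, j] = Q[:, i] @ v
                v = v - H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(v)
            if H[j + 1, j] < 1e-12:         # invariant subspace found; stop early
                return Q[:, :j + 1], H[:j + 1, :j + 1]
            Q[:, j + 1] = v / H[j + 1, j]
        return Q, H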

Existing methods

The best known Krylov subspace methods are the Arnoldi, Lanczos, conjugate gradient, IDR(s) (induced dimension reduction), GMRES (generalized minimal residual), BiCGSTAB (biconjugate gradient stabilized), QMR (quasi-minimal residual), TFQMR (transpose-free QMR), and MINRES (minimal residual) methods.
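Several of these methods are available in SciPy's scipy.sparse.linalg module; for example, a minimal GMRES call might look like the following (the tridiagonal test matrix is just an illustrative choice).

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import gmres

    # Sparse, nonsymmetric, diagonally dominant test system
    n = 500
    A = diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    x, info = gmres(A, b)   # info == 0 indicates convergence
    print(info, np.linalg.norm(A @ x - b))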

