Preconditioned conjugate gradient method

From Wikipedia, the free encyclopedia

The conjugate gradient method is an iterative numerical algorithm for solving a system of linear equations

Ax = b,

where A is symmetric positive definite. If the matrix A is ill-conditioned, i.e. it has a large condition number κ(A), it is often useful to introduce a preconditioner P, chosen so that P^{-1} \approx A^{-1} and P^{-1} is cheap to apply, and solve the equivalent system

 P^{-1}Ax = P^{-1}b

instead, since the preconditioned matrix P^{-1}A ideally has a much smaller condition number than A, so the iteration converges faster.
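The preconditioned iteration can be sketched as follows. This is an illustrative NumPy implementation of the standard preconditioned CG recurrence, not code from the article; the function name `pcg` and the convention of passing the preconditioner as a callable `M_inv` that applies P^{-1} to a vector are our own choices.

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A.

    M_inv(r) applies the preconditioner inverse P^{-1} to a vector r.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x          # initial residual
    z = M_inv(r)           # preconditioned residual
    p = z.copy()           # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        beta = rz_new / rz         # makes successive directions A-conjugate
        p = z + beta * p
        rz = rz_new
    return x
```

With `M_inv = lambda r: r` (no preconditioning, P = I) this reduces to the ordinary conjugate gradient method.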

To save computational time, it is important to choose a symmetric preconditioner. Common choices include the Jacobi, symmetric Gauss–Seidel, and symmetric successive over-relaxation (SSOR) preconditioners.

The simplest preconditioner is a diagonal matrix containing just the diagonal elements of A. This is known as Jacobi preconditioning or diagonal scaling. Since diagonal matrices are trivial to invert and cheap to store, a diagonal preconditioner is a good starting point. More sophisticated choices must trade off the reduction in κ(A), and hence faster convergence, against the time spent applying P^{-1}.
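A minimal sketch of Jacobi preconditioning, assuming a small hand-made symmetric positive definite matrix for illustration: applying P^{-1} amounts to dividing each entry of a vector by the corresponding diagonal element of A.

```python
import numpy as np

# A small symmetric positive definite example matrix (illustrative).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

d = np.diag(A)                 # P = diag(A), the Jacobi preconditioner
M_inv = lambda r: r / d        # applying P^{-1} is elementwise division

# The diagonally scaled matrix P^{-1} A is what the iteration
# effectively works with; its condition number governs convergence.
cond_before = np.linalg.cond(A)
cond_after = np.linalg.cond(np.diag(1.0 / d) @ A)
```

For matrices whose rows differ widely in scale, `cond_after` can be far smaller than `cond_before`; for this well-scaled toy matrix the difference is modest.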
