Preconditioner

From Wikipedia, the free encyclopedia

In linear algebra and numerical analysis, a preconditioner P of a matrix A is a matrix such that P^{-1}A has a smaller condition number than A. Preconditioners are useful when using an iterative method to solve a large, sparse linear system

Ax = b

for x since the rate of convergence for most iterative linear solvers degrades as the condition number of a matrix increases. Instead of solving the original linear system above, one may solve either the left preconditioned system

P^{-1}Ax = P^{-1}b,

via the two solves

c=P^{-1}b, \qquad (P^{-1}A)x=c,

or the right preconditioned system

AP^{-1}Px = b,

via the two solves

(AP^{-1})y=b, \qquad x=P^{-1}y,

which are both equivalent to solving the original system so long as the preconditioner matrix P is nonsingular.
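The two-solve formulations above can be illustrated with a small dense sketch in NumPy (the test matrix, right-hand side, and diagonal choice of P here are arbitrary illustrations, not prescribed by the discussion):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # nonsingular test matrix
b = rng.standard_normal(n)
P = np.diag(np.diag(A))                           # any nonsingular P will do
Pinv = np.diag(1.0 / np.diag(A))

x_direct = np.linalg.solve(A, b)

# Left preconditioning: c = P^{-1} b, then (P^{-1} A) x = c.
c = Pinv @ b
x_left = np.linalg.solve(Pinv @ A, c)

# Right preconditioning: (A P^{-1}) y = b, then x = P^{-1} y.
y = np.linalg.solve(A @ Pinv, b)
x_right = Pinv @ y

print(np.allclose(x_left, x_direct), np.allclose(x_right, x_direct))
```

Both sequences of solves recover the solution of the original system, as claimed.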

The goal of this preconditioned system is to reduce the condition number of the left or right preconditioned system matrix

P^{-1}A,

or

AP^{-1},

respectively.

Typically there is a trade-off in the choice of P. Since the operator P^{-1} must be applied at each step of the iterative linear solver, it should be cheap to apply in order to keep the cost (computing time) of each iteration low. The cheapest preconditioner would therefore be

P=I, \quad\text{since then}\quad P^{-1}=I;

unfortunately this results in the original linear system, and the preconditioner does nothing. At the other extreme, the choice

P=A \quad\text{gives}\quad P^{-1}A = AP^{-1} = I,

which has the optimal condition number of 1 and requires a single iteration for convergence; however, in this case

P^{-1}=A^{-1},

and applying the preconditioner is as difficult as solving the original system. One therefore chooses P somewhere between these two extremes, in an attempt to achieve a minimal number of iterations while keeping the operator P^{-1} as simple as possible. Some examples of typical preconditioning approaches are detailed below.
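The two extremes can be seen numerically. In this sketch (NumPy, with an arbitrary symmetric positive definite test matrix), P = I leaves the condition number unchanged, while P = A drives it to 1:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
B = rng.standard_normal((n, n))
A = B @ B.T + 0.1 * np.eye(n)     # symmetric positive definite test matrix

cond_A = np.linalg.cond(A)                        # P = I: condition number unchanged
cond_PA = np.linalg.cond(np.linalg.solve(A, A))   # P = A: P^{-1}A = I, condition number 1

print(cond_A > 100.0, abs(cond_PA - 1.0) < 1e-6)
```

Of course, forming `np.linalg.solve(A, A)` is exactly the expensive solve one is trying to avoid; the point of the demonstration is only the condition numbers at the two extremes.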

We note that in the above discussion the preconditioned matrix

P^{-1}A, \quad\text{resp.}\quad AP^{-1},

is almost never formed explicitly. In an iterative method, only the action of applying the preconditioner solve operation P^{-1} to a given vector need be computed.
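This can be made concrete with a small preconditioned conjugate-gradient sketch (the routine and test matrix below are illustrative, not from the article): the preconditioner enters only through a function applying P^{-1} to a vector, and the matrix P^{-1}A never appears.

```python
import numpy as np

def pcg(A, b, apply_Pinv, tol=1e-10, maxiter=200):
    """Conjugate gradients with preconditioning: the preconditioner enters
    only through apply_Pinv, the action v -> P^{-1} v on a single vector;
    the matrix P^{-1} A is never formed."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Pinv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_Pinv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(2)
n = 30
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                 # symmetric positive definite
b = rng.standard_normal(n)

# Diagonal (Jacobi-style) action: "solving" with P = diag(A) is
# just elementwise division.
x = pcg(A, b, lambda v: v / np.diag(A))
print(np.allclose(A @ x, b, atol=1e-8))
```

Production codes (e.g. `scipy.sparse.linalg.cg` with its `M` argument) follow the same pattern: the user supplies only the action of the preconditioner.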

It may also be noted that for symmetric matrices A, the desired effect of applying a preconditioner is to make the quadratic form of the preconditioned operator P^{-1}A nearly spherical.[1]


Examples

Two popular preconditioning strategies are based on splitting A into the three components

A = L + D + U,

where L is the strictly lower triangular part of A, D is the diagonal part of A, and U is the strictly upper triangular part of A.

Jacobi preconditioner

The Jacobi preconditioner is one of the simplest forms of preconditioning, in which the preconditioner is chosen to be the diagonal of the matrix only,

P = D.

That is,

P_{ij} = A_{ii}\delta_{ij} = \begin{cases}A_{ii} & i=j \\ 0 &\mbox{otherwise}\end{cases}

and so

P^{-1}_{ij} = \frac{\delta_{ij}}{A_{ii}}.
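For badly scaled matrices, even this simple diagonal choice can help substantially. In the following NumPy sketch (the test matrix is an illustrative construction), a nearly diagonal matrix whose diagonal spans six orders of magnitude has a huge condition number, which Jacobi scaling essentially removes:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
d = np.logspace(-3, 3, n)                            # diagonal spanning 1e-3 .. 1e3
A = np.diag(d) + 1e-6 * rng.standard_normal((n, n))  # nearly diagonal test matrix

Pinv = np.diag(1.0 / np.diag(A))                     # Jacobi: (P^{-1})_ii = 1 / A_ii
print(np.linalg.cond(A) > 1e5, np.linalg.cond(Pinv @ A) < 2.0)
```

Here P^{-1}A is close to the identity, so an iterative solver applied to the preconditioned system converges in very few steps.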

SOR

The Successive Over Relaxation (SOR) preconditioner takes P to be

P=\left(\frac{D}{\omega}+L\right)\frac{\omega}{2-\omega}D^{-1}\left(\frac{D}{\omega}+U\right)

where ω is a relaxation parameter with 0 < ω < 2.

The version for symmetric matrices A, in which

U=L^T,

is referred to as Symmetric Successive Over Relaxation (SSOR), in which

P=\left(\frac{D}{\omega}+L\right)\frac{\omega}{2-\omega}D^{-1}\left(\frac{D}{\omega}+L^T\right).
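As a sketch (NumPy, with an arbitrary symmetric positive definite test matrix and an arbitrarily chosen ω), the SSOR preconditioner can be assembled directly from the splitting, and the resulting P is itself symmetric positive definite:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # symmetric positive definite

D = np.diag(np.diag(A))
L = np.tril(A, k=-1)                 # strictly lower triangular part of A
omega = 1.2                          # relaxation parameter, 0 < omega < 2

# P = (D/w + L) * w/(2-w) * D^{-1} * (D/w + L^T)
Dinv = np.diag(1.0 / np.diag(A))
P = (D / omega + L) @ (omega / (2 - omega) * Dinv) @ (D / omega + L.T)

print(np.allclose(P, P.T), np.all(np.linalg.eigvalsh(P) > 0))
```

Symmetry of P matters in practice: it allows SSOR to be used with methods such as conjugate gradients that require a symmetric positive definite preconditioner.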
