Generalized minimal residual method

In mathematics, the generalized minimal residual method (usually abbreviated GMRES) is an iterative method for the numerical solution of a system of linear equations. The method approximates the solution by the vector in a Krylov subspace with minimal residual. The Arnoldi iteration is used to find this vector.

The GMRES method was developed by Yousef Saad and Martin H. Schultz in 1986.[1]

The method

Denote the system of linear equations which needs to be solved by

Ax = b. \,

The matrix A is assumed to be invertible and of size m-by-m. Furthermore, it is assumed that b is normalized, i.e., that ||b|| = 1 (in this article, ||·|| denotes the Euclidean norm).

The nth Krylov subspace for this problem is

K_n = \operatorname{span} \, \{ b, Ab, A^2b, \ldots, A^{n-1}b \}. \,

GMRES approximates the exact solution of Ax = b by the vector xn ∈ Kn that minimizes the norm of the residual Axn − b.

The vectors b, Ab, …, An−1b are almost linearly dependent, so instead of this basis, the Arnoldi iteration is used to find orthonormal vectors

q_1, q_2, \ldots, q_n \,

which form a basis for Kn. Hence, the vector xn ∈ Kn can be written as xn = Qnyn with yn ∈ Rn, where Qn is the m-by-n matrix formed by q1, …, qn.

The Arnoldi process also produces an (n+1)-by-n upper Hessenberg matrix \tilde{H}_n with

AQ_n = Q_{n+1} \tilde{H}_n. \,
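
This relation can be checked numerically. The following Python sketch of the Arnoldi process is illustrative only (the function name arnoldi and the random test problem are our own; modified Gram-Schmidt is used for the orthogonalization):

    import numpy as np

    def arnoldi(A, b, n):
        """Return Q with n+1 orthonormal columns and the (n+1)-by-n
        Hessenberg matrix H such that A @ Q[:, :n] == Q @ H."""
        m = A.shape[0]
        Q = np.zeros((m, n + 1))
        H = np.zeros((n + 1, n))
        Q[:, 0] = b / np.linalg.norm(b)       # q_1 = b / ||b||
        for j in range(n):
            w = A @ Q[:, j]
            for i in range(j + 1):            # orthogonalize against q_1, ..., q_j
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)   # zero here would mean breakdown
            Q[:, j + 1] = w / H[j + 1, j]
        return Q, H

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50))
    b = rng.standard_normal(50)
    Q, H = arnoldi(A, b, 10)
    print(np.allclose(A @ Q[:, :10], Q @ H))  # True: A Q_n = Q_{n+1} H~_n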

Because the columns of Qn+1 are orthonormal and b = Qn+1e1 (recall that ||b|| = 1, so q1 = b), we have

\| Ax_n - b \| = \| \tilde{H}_ny_n - e_1 \|, \,

where

e_1 = (1,0,0,\ldots,0) \,

is the first vector in the standard basis of Rn+1. Hence, xn can be found by minimizing the norm of the residual

r_n = \tilde{H}_n y_n - e_1.

This is a linear least squares problem of size n.
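
Continuing the sketch above, this small problem can be solved directly with a dense least-squares solver; the following is only an illustration (practical implementations use the updated QR decomposition described under Implementation below, and x0 = 0 is assumed):

    # minimize || H~_n y - beta e_1 ||  and form  x_n = Q_n y_n
    beta = np.linalg.norm(b)                  # equals 1 if b is normalized
    rhs = np.zeros(H.shape[0]); rhs[0] = beta
    y, *_ = np.linalg.lstsq(H, rhs, rcond=None)
    x = Q[:, :-1] @ y
    print(np.linalg.norm(A @ x - b))          # residual norm after n = 10 steps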

This yields the GMRES method. At every step of the iteration:

  1. do one step of the Arnoldi method;
  2. find the yn which minimizes ||rn||;
  3. compute xn = Qnyn;
  4. repeat if the residual is not yet small enough.

At every iteration, a matrix-vector product Aqn must be computed. This costs about 2m² floating-point operations for general dense matrices of size m, but the cost can decrease to O(m) for sparse matrices. In addition to the matrix-vector product, O(nm) floating-point operations must be performed at the nth iteration.
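
In practice an existing implementation is typically used rather than a hand-written loop. A minimal sketch using SciPy's built-in solver (the 1-D Poisson test matrix is our own choice; parameter names and defaults are those of recent SciPy versions):

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import gmres

    m = 1000
    A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m), format="csr")
    b = np.ones(m)

    x, info = gmres(A, b)                     # info == 0 signals convergence
    print(info, np.linalg.norm(A @ x - b))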

Convergence

The nth iterate minimizes the residual in the Krylov subspace Kn. Since every subspace is contained in the next subspace, the residual norm decreases monotonically. After m iterations, where m is the size of the matrix A, the Krylov space Km is the whole of Rm and hence the GMRES method arrives at the exact solution (in exact arithmetic). However, the idea is that after a small number of iterations (relative to m), the vector xn is already a good approximation to the exact solution.

This does not happen in general. Indeed, a theorem of Greenbaum, Pták and Strakoš states that for every monotonically decreasing sequence a1, …, am−1, am = 0, one can find a matrix A such that ||rn|| = an for all n, where rn is the residual defined above. In particular, it is possible to find a matrix for which the residual stays constant for m − 1 iterations and drops to zero only at the last iteration.

In practice, though, GMRES often performs well. This can be proven in specific situations. If A is positive definite, then

\|r_n\| \leq \left( 1-\frac{\lambda_{\mathrm{min}}(A^T + A)}{2 \lambda_{\mathrm{max}}(A^T + A)} \right)^{n/2} \|r_0\|,

where λmin and λmax denote the smallest and largest eigenvalue, respectively.

If A is symmetric and positive definite, then we even have

\|r_n\| \leq \left( \frac{\kappa_2^2(A)-1}{\kappa_2^2(A)} \right)^{n/2} \|r_0\|,

where κ2(A) denotes the condition number of A in the Euclidean norm.
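
As an illustration of this bound: for κ2(A) = 10 it gives ||rn|| ≤ (99/100)^{n/2} ||r0|| ≈ 0.995^n ||r0||, so roughly 460 iterations are needed before the bound guarantees a residual reduction by a factor of ten; actual convergence is often considerably faster.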

In the general case, where A is not positive definite, we have

\|r_n\| \le \inf_{p \in P_n} \|p(A)\| \le \kappa_2(V) \inf_{p \in P_n} \max_{\lambda \in \sigma(A)} |p(\lambda)|,

where Pn denotes the set of polynomials of degree at most n with p(0) = 1, V is the matrix of eigenvectors in the spectral decomposition A = V Λ V^{-1} (A is assumed to be diagonalizable), and σ(A) is the spectrum of A. Roughly speaking, this says that fast convergence occurs when the eigenvalues of A are clustered away from the origin and A is not too far from normality.[2]

All these inequalities bound only the residuals, not the actual error, that is, the distance between the current iterate xn and the exact solution.

Extensions of the method

Like other iterative methods, GMRES is usually combined with a preconditioning method in order to speed up convergence.

The cost of the iterations grows like O(n²), where n is the iteration number. Therefore, the method is sometimes restarted after a number, say k, of iterations, with xk as the initial guess. The resulting method is called GMRES(k).
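
Both ideas can be sketched with SciPy; this is an illustration rather than a recipe (an incomplete LU factorization is wrapped as a preconditioner, and the matrix A and right-hand side b from the earlier sketch are reused):

    from scipy.sparse.linalg import LinearOperator, gmres, spilu

    ilu = spilu(A.tocsc())                    # incomplete LU factorization of A
    M = LinearOperator(A.shape, matvec=ilu.solve)

    x, info = gmres(A, b, M=M, restart=20)    # preconditioned GMRES(20)
    print(info)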

Comparison with other solvers

The Arnoldi iteration reduces to the Lanczos iteration for symmetric matrices. The corresponding Krylov subspace method is the minimal residual method (MinRes) of Paige and Saunders. Unlike in the unsymmetric case, the MinRes method is given by a three-term recurrence relation. It can be shown (the Faber–Manteuffel theorem) that there is no Krylov subspace method for general matrices that is given by a short recurrence relation and yet minimizes the norms of the residuals, as GMRES does.

Another class of methods builds on the unsymmetric Lanczos iteration, in particular the BiCG method. These use a three-term recurrence relation, but they do not attain the minimum residual, and hence the residual does not decrease monotonically for these methods. Convergence is not even guaranteed.

The third class is formed by methods like CGS and BiCGSTAB. These also work with a three-term recurrence relation (hence, without optimality) and they can even terminate prematurely without achieving convergence. The idea behind these methods is to choose the generating polynomials of the iteration sequence suitably.

None of these three classes is the best for all matrices; there are always examples in which one class outperforms the others. Therefore, multiple solvers are tried in practice to see which one works best for a given problem.

Implementation

Solving the least squares problem

One part of the GMRES method is to find the vector yn which minimizes

\| \tilde{H}_n y_n - e_1 \|. \,

This can be done by computing a QR decomposition: find an (n+1)-by-(n+1) orthogonal matrix Ωn and an (n+1)-by-n upper triangular matrix \tilde{R}_n such that

\Omega_n \tilde{H}_n = \tilde{R}_n.

The triangular matrix has one more row than it has columns, so its bottom row consists of zeros. Hence, it can be decomposed as

\tilde{R}_n = \begin{bmatrix} R_n \\ 0 \end{bmatrix},

where Rn is an n-by-n (thus square) triangular matrix.

The QR decomposition can be updated cheaply from one iteration to the next, because the Hessenberg matrices differ only by a row of zeros and a column:

\tilde{H}_{n+1} = \begin{bmatrix} \tilde{H}_n & h_{n+1} \\ 0 & h_{n+2,n+1} \end{bmatrix},

where h_{n+1} = (h_{1,n+1}, \ldots, h_{n+1,n+1})^T. This implies that premultiplying the Hessenberg matrix with Ωn, augmented by an extra row and column of the identity, yields almost a triangular matrix:

\begin{bmatrix} \Omega_n & 0 \\ 0 & 1 \end{bmatrix} \tilde{H}_{n+1} = \begin{bmatrix} R_n & r_{n+1} \\ 0 & \rho \\ 0 & \sigma \end{bmatrix}

This would be triangular if σ were zero. To remedy this, one needs the Givens rotation

G_n = \begin{bmatrix} I_n & 0 & 0 \\ 0 & c_n & s_n \\ 0 & -s_n & c_n \end{bmatrix}

where

c_n = \frac{\rho}{\sqrt{\rho^2+\sigma^2}} \quad\mbox{and}\quad s_n = \frac{\sigma}{\sqrt{\rho^2+\sigma^2}}.

With this Givens rotation, we form

\Omega_{n+1} = G_n \begin{bmatrix} \Omega_n & 0 \\ 0 & 1 \end{bmatrix}.

Indeed,

\Omega_{n+1} \tilde{H}_{n+1} = \begin{bmatrix} R_n & r_{n+1} \\ 0 & r_{n+1,n+1} \\ 0 & 0 \end{bmatrix} \quad\text{with}\quad r_{n+1,n+1} = \sqrt{\rho^2+\sigma^2}

is a triangular matrix.

Given the QR decomposition, the minimization problem is easily solved by noting that

\| \tilde{H}_n y_n - e_1 \| = \| \Omega_n (\tilde{H}_n y_n - e_1) \| = \| \tilde{R}_n y_n - \Omega_n e_1 \|.

Denoting the vector Ωne1 by

\tilde{g}_n = \begin{bmatrix} g_n \\ \gamma_n \end{bmatrix}

with gn ∈ Rn and γn ∈ R, this is

\| \tilde{H}_n y_n - e_1 \| = \| \tilde{R}_n y_n - \Omega_n e_1 \| = \left\| \begin{bmatrix} R_n \\ 0 \end{bmatrix} y_n - \begin{bmatrix} g_n \\ \gamma_n \end{bmatrix} \right\|.

The vector yn that minimizes this expression is given by

y_n = R_n^{-1} g_n.

The minimum of this norm is |γn|, so the residual norm ||rn|| is available at every step without forming xn. Again, the vectors gn are easy to update.[3]
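
A NumPy sketch of one step of this update (the name qr_update is our own, and Ωn is kept as a dense matrix for clarity; practical codes store only the rotation coefficients ci, si):

    import numpy as np

    def qr_update(Omega, h_col):
        """Omega: (n+1)-by-(n+1) product of the previous rotations;
        h_col: new Hessenberg column of length n+2.
        Returns Omega_{n+1} and the triangularized new column."""
        k = Omega.shape[0]
        Omega1 = np.eye(k + 1)                # [[Omega, 0], [0, 1]]
        Omega1[:k, :k] = Omega
        col = Omega1 @ h_col                  # ends in (rho, sigma)
        rho, sigma = col[-2], col[-1]
        r = np.hypot(rho, sigma)              # r_{n+1,n+1} = sqrt(rho^2 + sigma^2)
        c, s = rho / r, sigma / r
        G = np.eye(k + 1)                     # the Givens rotation G_n
        G[k-1:, k-1:] = [[c, s], [-s, c]]
        return G @ Omega1, G @ col            # new column now ends in (r, 0)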

Pseudocode

Inputs: A, b, x0, and the maximum subspace dimension n.

r_0 ← b − A x_0
if r_0 = 0 then
    return x_0
end if
v_1 ← r_0 / ||r_0||_2
γ_1 ← ||r_0||_2
for j ← 1, …, n do
    for i ← 1, …, j do
        h_{ij} ← v_i^T A v_j
    end for
    w_j ← A v_j − Σ_{i=1}^{j} h_{ij} v_i
    h_{j+1,j} ← ||w_j||_2
    for i ← 1, …, j − 1 do                  (apply the rotations from earlier steps to the new column)
        (h_{ij}, h_{i+1,j}) ← (c_{i+1} h_{ij} + s_{i+1} h_{i+1,j}, s_{i+1} h_{ij} − c_{i+1} h_{i+1,j})
    end for
    β ← sqrt(h_{jj}^2 + h_{j+1,j}^2)        (compute the new rotation)
    s_{j+1} ← h_{j+1,j} / β
    c_{j+1} ← h_{jj} / β
    h_{jj} ← β
    γ_{j+1} ← s_{j+1} γ_j                   (|γ_{j+1}| is the current residual norm)
    γ_j ← c_{j+1} γ_j
    if γ_{j+1} ≠ 0 then
        v_{j+1} ← w_j / h_{j+1,j}
    else                                    (the residual is zero: solve and return)
        for i ← j, …, 1 do
            α_i ← (γ_i − Σ_{k=i+1}^{j} h_{ik} α_k) / h_{ii}
        end for
        x ← x_0 + Σ_{i=1}^{j} α_i v_i
        return x
    end if
end for
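
For reference, the pseudocode translates to Python as follows. This is a hedged transcription, not a production solver: the function name gmres_givens is our own, a tolerance replaces the exact test γ_{j+1} ≠ 0, and modified Gram-Schmidt replaces the classical variant for numerical stability.

    import numpy as np

    def gmres_givens(A, b, x0, n, tol=1e-12):
        m = len(b)
        r0 = b - A @ x0
        beta = np.linalg.norm(r0)
        if beta == 0.0:
            return x0                         # x0 already solves the system
        V = np.zeros((m, n + 1)); V[:, 0] = r0 / beta
        H = np.zeros((n + 1, n))
        c, s = np.zeros(n), np.zeros(n)
        gamma = np.zeros(n + 1); gamma[0] = beta
        for j in range(n):
            w = A @ V[:, j]                   # Arnoldi step
            for i in range(j + 1):
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            for i in range(j):                # earlier rotations on column j
                H[i, j], H[i + 1, j] = (c[i] * H[i, j] + s[i] * H[i + 1, j],
                                        s[i] * H[i, j] - c[i] * H[i + 1, j])
            beta_j = np.hypot(H[j, j], H[j + 1, j])
            c[j], s[j] = H[j, j] / beta_j, H[j + 1, j] / beta_j
            H[j, j] = beta_j                  # new rotation zeroes H[j+1, j]
            gamma[j + 1] = s[j] * gamma[j]
            gamma[j] = c[j] * gamma[j]
            if abs(gamma[j + 1]) > tol:       # |gamma_{j+1}| = residual norm
                V[:, j + 1] = w / H[j + 1, j]
            else:
                break
        y = np.zeros(j + 1)                   # back substitution: R y = g
        for i in range(j, -1, -1):
            y[i] = (gamma[i] - H[i, i + 1:j + 1] @ y[i + 1:j + 1]) / H[i, i]
        return x0 + V[:, :j + 1] @ y

Only matrix-vector products with A are required, so A may be a dense array, a sparse matrix, or any object supporting the @ operator.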

Notes

  1. ^ Saad & Schultz (1986)
  2. ^ Trefethen & Bau (1997), Theorem 35.2
  3. ^ Stoer & Bulirsch (2002), §8.7.2

References

  * Saad, Y.; Schultz, M. H. (1986). "GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems". SIAM Journal on Scientific and Statistical Computing 7 (3): 856–869.
  * Trefethen, L. N.; Bau, D. (1997). Numerical Linear Algebra. SIAM.
  * Stoer, J.; Bulirsch, R. (2002). Introduction to Numerical Analysis (3rd ed.). Springer.
