Biconjugate gradient method

In mathematics, more specifically in numerical analysis, the biconjugate gradient method is an algorithm to solve systems of linear equations

A x = b.

Unlike the conjugate gradient method, this algorithm does not require the matrix A to be self-adjoint, but instead one needs to perform multiplications by the conjugate transpose A^*.

The algorithm

  1. Choose x_0, y_0, a regular preconditioner M (frequently M^{-1} = 1 is used) and c;
  2. r_0 \leftarrow b-A x_0, s_0 \leftarrow c-A^* y_0\,;
  3. d_0 \leftarrow M^{-1} r_0, f_0 \leftarrow M^{-*} s_0\,;
  4. for k=0, 1, \dots\, do
  5. \alpha_k \leftarrow {s^*_k M^{-1} r_k \over f_k^* A d_k}\,;
  6. x_{k+1} \leftarrow x_k+ \alpha_k d_k\, \left( y_{k+1} \leftarrow y_k + \overline{\alpha_k} f_k \right)\,;
  7. r_{k+1} \leftarrow r_k- \alpha_k A d_k \,, s_{k+1} \leftarrow s_k- \overline{\alpha_k} A^* f_k \, (r_k = b - A x_k and s_k = c - A^* y_k are the residuals);
  8. \beta_k \leftarrow {s_{k+1}^* M^{-1} r_{k+1} \over s^*_k M^{-1} r_k}\,;
  9. d_{k+1} \leftarrow M^{-1} r_{k+1} + \beta_k d_k\,, f_{k+1} \leftarrow M^{-*} s_{k+1} + \overline{\beta_k} f_k\,.
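The following is a minimal NumPy sketch of steps 1-9 above, not a reference implementation: the function name bicg, the defaults x_0 = y_0 = 0 and c = b, the dense-matrix interface for M^{-1}, and the stopping test on the residual norm are illustrative assumptions.

  import numpy as np

  def bicg(A, b, c=None, M_inv=None, tol=1e-10, max_iter=None):
      """Solve A x = b (and the shadow system A^* y = c) by preconditioned BiCG."""
      n = A.shape[0]
      max_iter = n if max_iter is None else max_iter
      M_inv = np.eye(n, dtype=A.dtype) if M_inv is None else M_inv
      c = b if c is None else c                     # shadow right-hand side (assumed c = b)
      x = np.zeros(n, dtype=A.dtype)                # step 1: x_0 = y_0 = 0 (assumed)
      y = np.zeros(n, dtype=A.dtype)
      r = b - A @ x                                 # step 2: residuals
      s = c - A.conj().T @ y
      d = M_inv @ r                                 # step 3: initial search directions
      f = M_inv.conj().T @ s
      for _ in range(max_iter):                     # step 4
          rho = np.vdot(s, M_inv @ r)               # s_k^* M^{-1} r_k
          alpha = rho / np.vdot(f, A @ d)           # step 5
          x = x + alpha * d                         # step 6
          y = y + np.conj(alpha) * f
          r = r - alpha * (A @ d)                   # step 7: update the residuals
          s = s - np.conj(alpha) * (A.conj().T @ f)
          if np.linalg.norm(r) < tol:               # stopping test (assumed)
              break
          beta = np.vdot(s, M_inv @ r) / rho        # step 8
          d = M_inv @ r + beta * d                  # step 9: new directions
          f = M_inv.conj().T @ s + np.conj(beta) * f
      return x, y

For example, on a small random nonsymmetric system:

  rng = np.random.default_rng(0)
  A = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # nonsymmetric, well conditioned
  b = rng.standard_normal(5)
  x, _ = bicg(A, b)
  print(np.allclose(A @ x, b))                      # True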


Discussion

The BiCG method is numerically unstable, but very important from a theoretical point of view: define the iteration steps by x_k:=x_j+ P_k A^{-1}\left(b - A x_j \right) and y_k:=y_j+A^{-*}P_k^*\left(c-A^* y_j \right) (j < k) using the related projection

P_k:= \mathbf{u_k} \left(\mathbf{v_k}^* A \mathbf{u_k} \right)^{-1} \mathbf{v_k}^* A,

where \mathbf{u_k}=\left(u_0, u_1, \dots, u_{k-1} \right) and \mathbf{v_k}=\left(v_0, v_1, \dots, v_{k-1} \right). These related projections may be iterated themselves, as

P_{k+1}= P_k+ \left( 1-P_k\right) u_k \otimes {v_k^* A\left(1-P_k \right) \over v_k^* A\left(1-P_k \right) u_k}.
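As a concrete check of this recurrence, the following sketch (illustrative throughout: small random dense matrices, and P_k formed explicitly, which the actual method avoids) builds P_k from its definition and confirms that the update above reproduces P_{k+1}:

  import numpy as np

  rng = np.random.default_rng(1)
  n, k = 6, 3
  A = rng.standard_normal((n, n))
  U = rng.standard_normal((n, k + 1))           # columns u_0, ..., u_k
  V = rng.standard_normal((n, k + 1))           # columns v_0, ..., v_k

  def P(U, V):                                  # P = u (v^* A u)^{-1} v^* A
      return U @ np.linalg.solve(V.conj().T @ A @ U, V.conj().T @ A)

  Pk = P(U[:, :k], V[:, :k])
  u, v = U[:, k], V[:, k]
  w = v.conj() @ A @ (np.eye(n) - Pk)           # row vector v_k^* A (1 - P_k)
  Pk1 = Pk + np.outer((np.eye(n) - Pk) @ u, w) / (w @ u)

  print(np.allclose(Pk1, P(U, V)))              # update matches the definition: True
  print(np.allclose(Pk1 @ Pk1, Pk1))            # P_{k+1} is again a projection: True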

The new directions d_k:= \left(1-P_k \right) u_k and f_k:= \left( A \left(1- P_k \right) A^{-1} \right)^* v_k are then orthogonal to the residuals: v_i^* r_k= f_i^* r_k=0 and s_k^* u_j = s_k^* d_j= 0, which themselves satisfy r_k= A \left( 1- P_k \right) A^{-1} r_j and s_k= \left( 1- P_k \right)^* s_j (i, j < k).

The biconjugate gradient method now makes a special choice and uses the setting

u_k := M^{-1} r_k and v_k := M^{-*} s_k.

This special choice makes it possible to avoid direct evaluations of P_k and A^{-1}, and subsequently leads to the algorithm as stated above.

Properties

  • In finite dimensions x_n = A^{-1} b, at the latest when P_n = 1: The biconjugate gradient method returns the exact solution after iterating the full space and thus is a direct method.
  • The sequences produced by the algorithm are biorthogonal: f_i^* A d_j = 0 and s_i^* M^{-1} r_j=0 for i \ne j (verified numerically in the sketch after this list).
  • if p_{j'} is a polynomial with \deg\left( p_{j'} \right) + j < k, then s_k^* p_{j'}\left(M^{-1} A \right) u_j=0. The algorithm thus produces projections onto the Krylov subspace;
  • if p_{i'} is a polynomial with i + \deg\left( p_{i'} \right) < k, then v_i^* p_{i'}\left(A M^{-1} \right) r_k=0.
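The following self-contained sketch (unpreconditioned, i.e. M^{-1} = 1, with random test data; all names are illustrative) runs the recurrences of steps 5-9 and checks the biorthogonality relations from the list above: f_i^* A d_j and s_i^* r_j vanish for i \ne j up to rounding.

  import numpy as np

  rng = np.random.default_rng(2)
  n = 6
  A = rng.standard_normal((n, n)) + 4 * np.eye(n)   # generic nonsymmetric matrix
  b, c = rng.standard_normal(n), rng.standard_normal(n)

  r, s = b.copy(), c.copy()                 # x_0 = y_0 = 0, so r_0 = b, s_0 = c
  d, f = r.copy(), s.copy()                 # M^{-1} = 1
  D, F, R, S = [d], [f], [r], [s]
  for _ in range(n - 1):
      rho = np.vdot(s, r)
      alpha = rho / np.vdot(f, A @ d)
      r = r - alpha * (A @ d)
      s = s - np.conj(alpha) * (A.conj().T @ f)
      beta = np.vdot(s, r) / rho
      d = r + beta * d
      f = s + np.conj(beta) * f
      D.append(d); F.append(f); R.append(r); S.append(s)

  D, F, R, S = map(np.column_stack, (D, F, R, S))
  off = lambda G: G - np.diag(np.diag(G))   # zero out the diagonal
  print(np.allclose(off(F.conj().T @ A @ D), 0))    # f_i^* A d_j = 0 for i != j
  print(np.allclose(off(S.conj().T @ R), 0))        # s_i^* r_j = 0 for i != j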