Biconjugate gradient method
From Wikipedia, the free encyclopedia
In mathematics, more specifically in numerical analysis, the biconjugate gradient method (BiCG) is an algorithm to solve systems of linear equations A x = b.
Unlike the conjugate gradient method, this algorithm does not require the matrix A to be self-adjoint, but instead one needs to perform multiplications by the conjugate transpose A^*.
The algorithm
- Choose x_0, y_0, a regular preconditioner M (frequently M^{-1} = 1 is used) and c;
- r_0 := b − A x_0, s_0 := c − A^* y_0;
- d_0 := M^{-1} r_0, f_0 := M^{-*} s_0;
- for k = 0, 1, 2, … do
- α_k := (s_k^* M^{-1} r_k) / (f_k^* A d_k);
- x_{k+1} := x_k + α_k d_k, y_{k+1} := y_k + ᾱ_k f_k;
- r_{k+1} := r_k − α_k A d_k, s_{k+1} := s_k − ᾱ_k A^* f_k (r_k = b − A x_k and s_k = c − A^* y_k are the residuals);
- β_k := (s_{k+1}^* M^{-1} r_{k+1}) / (s_k^* M^{-1} r_k);
- d_{k+1} := M^{-1} r_{k+1} + β_k d_k, f_{k+1} := M^{-*} s_{k+1} + β̄_k f_k.
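The steps above can be sketched in NumPy. This is a minimal sketch for the unpreconditioned case (M = 1), not a production implementation; the function name `bicg` and the `tol`/`max_iter` parameters are illustrative choices, and no safeguard against breakdown (a vanishing denominator) is included:

```python
import numpy as np

def bicg(A, b, c, x0, y0, tol=1e-10, max_iter=100):
    """Unpreconditioned BiCG (M = 1) for A x = b and A^* y = c."""
    x, y = x0.astype(complex), y0.astype(complex)
    r = b - A @ x                  # r_0 = b - A x_0
    s = c - A.conj().T @ y         # s_0 = c - A^* y_0
    d, f = r.copy(), s.copy()      # d_0 = r_0, f_0 = s_0 (since M = 1)
    for _ in range(max_iter):
        Ad = A @ d
        alpha = (s.conj() @ r) / (f.conj() @ Ad)        # alpha_k
        x = x + alpha * d                               # x_{k+1}
        y = y + np.conj(alpha) * f                      # y_{k+1}
        r_new = r - alpha * Ad                          # r_{k+1}
        s_new = s - np.conj(alpha) * (A.conj().T @ f)   # s_{k+1}
        if np.linalg.norm(r_new) < tol:
            return x, y
        beta = (s_new.conj() @ r_new) / (s.conj() @ r)  # beta_k
        d = r_new + beta * d                            # d_{k+1}
        f = s_new + np.conj(beta) * f                   # f_{k+1}
        r, s = r_new, s_new
    return x, y

# Example on a small, well-conditioned nonsymmetric test system
rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n)) + n * np.eye(n)
b, c = rng.standard_normal(n), rng.standard_normal(n)
x, y = bicg(A, b, c, np.zeros(n), np.zeros(n))
print(np.linalg.norm(A @ x - b))
```

The example matrix is made strongly diagonal so that the iteration is well behaved; for ill-conditioned problems the numerical instability discussed below becomes visible.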
Discussion
The BiCG method is numerically unstable, but very important from a theoretical point of view: Define the iteration steps by x_k := x_j + P_k A^{-1} (b − A x_j) and y_k := y_j + A^{-*} P_k^* (c − A^* y_j) (j < k) using the related projection
- P_k := u_k (v_k^* A u_k)^{-1} v_k^* A,
where u_k = [u_0, u_1, …, u_{k−1}] and v_k = [v_0, v_1, …, v_{k−1}]. These related projections may be iterated themselves, as
- P_{k+1} = P_k + (1 − P_k) u_k (v_k^* A (1 − P_k) u_k)^{-1} v_k^* A (1 − P_k).
The new directions d_k := (1 − P_k) u_k and f_k := (A (1 − P_k) A^{-1})^* v_k then are orthogonal to the residuals: v_i^* r_k = f_i^* r_k = 0 and s_k^* u_j = s_k^* d_j = 0, which themselves satisfy r_k = A (1 − P_k) A^{-1} r_j and s_k = (1 − P_k)^* s_j (i, j < k).
The biconjugate gradient method now makes a special choice and uses the setting
- u_k := M^{-1} r_k and v_k := M^{-*} s_k.
This special choice makes it possible to avoid direct evaluations of P_k and A^{-1}, and subsequently leads to the algorithm as stated above.
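As an illustrative check of the projection P_k = u_k (v_k^* A u_k)^{-1} v_k^* A, one can build it from random blocks (the sizes and matrices here are arbitrary choices for demonstration) and verify that it is an oblique projection fixing the columns of u_k:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)   # nonsymmetric, well conditioned

U = rng.standard_normal((n, k))   # columns u_0, ..., u_{k-1}
V = rng.standard_normal((n, k))   # columns v_0, ..., v_{k-1}

# P_k = U (V^* A U)^{-1} V^* A  (real data here, so * is just transpose)
P = U @ np.linalg.solve(V.T @ A @ U, V.T @ A)

print(np.allclose(P @ P, P))   # idempotent: P_k^2 = P_k
print(np.allclose(P @ U, U))   # P_k leaves the columns of U fixed
```

Idempotence is what makes P_k a projection; it projects onto the span of u_k along a direction determined by v_k and A.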
Properties
- If A = A^* is self-adjoint, y_0 = x_0 and c = b, then r_k = s_k, d_k = f_k, and the conjugate gradient method produces the same sequence x_k = y_k.
- In finite dimensions x_n = A^{-1} b, at the latest when P_n = 1: the biconjugate gradient method returns the exact solution after iterating the full space and thus is a direct method.
- The sequences produced by the algorithm are biorthogonal: f_i^* A d_j = 0 and s_i^* M^{-1} r_j = 0 for i ≠ j.
- if p_{j'} is a polynomial with deg(p_{j'}) + j < k, then s_k^* p_{j'}(M^{-1} A) u_j = 0. The algorithm thus produces projections onto the Krylov subspace;
- if p_{i'} is a polynomial with i + deg(p_{i'}) < k, then v_i^* p_{i'}(A M^{-1}) r_k = 0.
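The biorthogonality stated above can be observed numerically. The following sketch runs a few unpreconditioned BiCG steps (M = 1, real arithmetic, so conjugation is trivial) on an arbitrary well-conditioned test matrix and checks that the Gram matrices with entries f_i^* A d_j and s_i^* r_j are diagonal:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 7, 4
A = rng.standard_normal((n, n)) + n * np.eye(n)
b, c = rng.standard_normal(n), rng.standard_normal(n)

# Run m unpreconditioned BiCG steps, storing all residuals and directions
x, y = np.zeros(n), np.zeros(n)
r, s = b - A @ x, c - A.T @ y
d, f = r.copy(), s.copy()
R, S, D, F = [r], [s], [d], [f]
for _ in range(m - 1):
    alpha = (s @ r) / (f @ (A @ d))
    r, s = r - alpha * (A @ d), s - alpha * (A.T @ f)
    beta = (s @ r) / (S[-1] @ R[-1])
    d, f = r + beta * d, s + beta * f
    R.append(r); S.append(s); D.append(d); F.append(f)

G1 = np.array(F) @ A @ np.array(D).T   # entries f_i^* A d_j
G2 = np.array(S) @ np.array(R).T       # entries s_i^* r_j
off1 = G1 - np.diag(np.diag(G1))       # off-diagonal parts should vanish
off2 = G2 - np.diag(np.diag(G2))
print(np.allclose(off1, 0), np.allclose(off2, 0))
```

In floating-point arithmetic this biorthogonality degrades as the iteration proceeds, which is one face of the instability mentioned in the discussion.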