Gauss–Seidel method

From Wikipedia, the free encyclopedia

The Gauss–Seidel method is an iterative technique for solving a linear system of equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is a refinement of the Jacobi method. It can be applied to any matrix with non-zero diagonal entries, but convergence is guaranteed only if the matrix is either diagonally dominant or symmetric and positive definite.

We seek the solution to a set of linear equations, expressed in matrix terms as

 A x = b.

The Gauss–Seidel iteration is

 x^{(k+1)} = \left( D + L \right)^{-1} \left( b - U x^{(k)} \right),

where A = D + L + U; the matrices D, L, and U represent the diagonal, strictly lower triangular, and strictly upper triangular parts of the coefficient matrix A; and k is the iteration count. This matrix expression is mainly used to analyze the method. When implementing Gauss–Seidel, an explicit entry-by-entry approach is used:

 x^{(k+1)}_i = \frac{1}{a_{ii}} \left( b_i - \sum_{j<i} a_{ij} x^{(k+1)}_j - \sum_{j>i} a_{ij} x^{(k)}_j \right), \qquad i = 1, 2, \ldots, n.

Note that the computation of x^{(k+1)}_i uses only those elements of x^{(k+1)} that have already been computed and only those elements of x^{(k)} that have not yet been advanced to iteration k + 1. This means the computation can be done in place, with x^{(k+1)} progressively overwriting x^{(k)}, so no additional storage vector is required. While this might seem a minor point, for large systems storing a second copy of every iterate can be costly; thus, unlike the Jacobi method, Gauss–Seidel requires no vector copying when a single storage vector is used. The iteration is generally continued until the changes made by an iteration fall below some tolerance.
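The entry-wise formula can be obtained from the matrix form by writing out row i of the lower-triangular system; this short derivation uses only the definitions given above.

```latex
% Start from the splitting A = D + L + U in A x = b and lag the
% upper-triangular term by one iteration:
(D + L)\,x^{(k+1)} = b - U x^{(k)} .
% Row i of this lower-triangular system reads
a_{ii} x^{(k+1)}_i + \sum_{j<i} a_{ij} x^{(k+1)}_j
    = b_i - \sum_{j>i} a_{ij} x^{(k)}_j ,
% and solving for x^{(k+1)}_i by forward substitution recovers
x^{(k+1)}_i = \frac{1}{a_{ii}}
    \Bigl( b_i - \sum_{j<i} a_{ij} x^{(k+1)}_j
         - \sum_{j>i} a_{ij} x^{(k)}_j \Bigr).
```

Because D + L is lower triangular, each sweep is exactly one pass of forward substitution.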

Algorithm

Choose an initial approximation x^{(0)}
for k := 1 step 1 until convergence do
  for i := 1 step 1 until n do
    σ := 0
    for j := 1 step 1 until i − 1 do
      σ := σ + a_{ij} x_j^{(k)}
    end (j-loop)
    for j := i + 1 step 1 until n do
      σ := σ + a_{ij} x_j^{(k−1)}
    end (j-loop)
    x_i^{(k)} := (b_i − σ) / a_{ii}
  end (i-loop)
  check if convergence is reached
end (k-loop)
x ≈ x^{(k)}

A particular implementation in C, assuming the matrix a, the vectors b, x, and temp, and the dimension n are already declared, and that converge compares the current iterate x with the previous iterate saved in temp:

 while (!converge(x, temp, n)) {
   for (int i = 0; i < n; i++) {
     double sigma = 0.0;
     for (int j = 0; j < n; j++) {
       if (j != i) {
         sigma += a[i][j] * x[j];   /* new values are used for j < i */
       }
     }
     temp[i] = x[i];                /* save old value for the convergence test */
     x[i] = (b[i] - sigma) / a[i][i];
   }
 }

Convergence

The method will always converge if the matrix A is strictly or irreducibly diagonally dominant. Strict row diagonal dominance means that for each row, the absolute value of the diagonal term is greater than the sum of absolute values of other terms:

\left| a_{ii} \right| > \sum_{j \ne i} \left| a_{ij} \right|.

The Gauss–Seidel method sometimes converges even if this condition is not satisfied; in practice, however, convergence typically requires the diagonal terms of the matrix to be large in magnitude relative to the off-diagonal terms.
