Jacobi method

The Jacobi method is an algorithm in numerical linear algebra for determining the solutions of a strictly diagonally dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The method is named after the German mathematician Carl Gustav Jacob Jacobi.

We seek the solution to a set of linear equations, written in matrix terms as

A\phi = b.

The Jacobi method is then defined by the iteration

\phi^{(k+1)} = D^{-1}\left(b - (L+U)\phi^{(k)}\right)

where D, L, and U represent the diagonal, strictly lower triangular, and strictly upper triangular parts of the coefficient matrix A, and k is the iteration count. This matrix expression is mainly of academic interest and is not used to program the method. Rather, an element-based approach is used:

\phi^{(k+1)}_i = \frac{1}{a_{ii}} \left(b_i - \sum_{j\ne i} a_{ij}\phi^{(k)}_j\right), \quad i=1,2,\ldots,n.

Note that the computation of \phi^{(k+1)}_i requires every element of \phi^{(k)} except \phi^{(k)}_i itself. Therefore, unlike in the Gauss–Seidel method, we cannot overwrite \phi^{(k)}_i with \phi^{(k+1)}_i, as that value is still needed by the rest of the computation. This is the most significant difference between the Jacobi and Gauss–Seidel methods. The minimum amount of storage is two vectors of size n, and explicit copying must take place.
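
In an array language, the matrix form above also maps naturally onto code. The following NumPy sketch (the function name and fixed sweep count are assumptions made for this illustration) performs the update \phi^{(k+1)} = D^{-1}(b - (L+U)\phi^{(k)}) and makes the two-vector storage visible: each sweep reads only the old iterate and produces a new one.

import numpy as np

def jacobi(A, b, num_iters=25):
    # Vectorized Jacobi sweeps: phi <- D^{-1} (b - (L+U) phi).
    D = np.diag(A)                        # diagonal of A as a 1-D vector
    R = A - np.diagflat(D)                # off-diagonal part L + U
    phi = np.zeros_like(b, dtype=float)   # initial guess phi^(0) = 0
    for _ in range(num_iters):
        # The right-hand side uses only the old iterate; a new array
        # is produced, so phi^(k) is never overwritten mid-sweep.
        phi = (b - R @ phi) / D
    return phi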

Algorithm

Choose an initial guess φ^(0) to the solution

for k := 1 step 1 until convergence do
    for i := 1 step 1 until n do
        σ = 0
        for j := 1 step 1 until n do
            if j != i then
                σ = σ + a_ij φ_j^(k-1)
            end if
        end (j-loop)
        φ_i^(k) = (b_i - σ) / a_ii
    end (i-loop)
    check if convergence is reached
end (k-loop)
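
A direct translation of this pseudocode into Python might look as follows; the tolerance-based stopping test, the zero initial guess, and the variable names are assumptions, since the pseudocode leaves the convergence check unspecified.

import numpy as np

def jacobi_elementwise(A, b, tol=1e-10, max_iters=1000):
    n = len(b)
    phi = np.zeros(n)                  # initial guess phi^(0)
    for _ in range(max_iters):
        phi_new = np.empty(n)          # the second vector of size n
        for i in range(n):
            sigma = 0.0
            for j in range(n):
                if j != i:
                    sigma += A[i, j] * phi[j]   # reads the old iterate only
            phi_new[i] = (b[i] - sigma) / A[i, i]
        # One possible convergence check: the change between iterates.
        if np.max(np.abs(phi_new - phi)) < tol:
            return phi_new
        phi = phi_new                  # the new iterate becomes the old one
    return phi

For example, on the strictly diagonally dominant system with A = [[4, 1], [2, 5]] and b = [1, 2], the iterates approach the exact solution (1/6, 1/3).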

Convergence

The method is guaranteed to converge if the matrix A is strictly diagonally dominant, meaning that for each row, the absolute value of the diagonal term is greater than the sum of the absolute values of the other terms:

\left| a_{ii} \right| > \sum_{j \ne i} \left| a_{ij} \right|.

The method often, but not always, converges even if this condition is not satisfied; informally, the diagonal terms still need to be greater in magnitude than the other terms. The precise necessary and sufficient condition for convergence from an arbitrary starting vector is that the spectral radius of the iteration matrix, \rho\left(D^{-1}(L+U)\right), be strictly less than 1.
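
The spectral-radius criterion is straightforward to check numerically. The sketch below is an illustrative example added here (the function name and test matrix are assumptions): it computes \rho(D^{-1}(L+U)), which equals the spectral radius of the iteration matrix -D^{-1}(L+U) since negating a matrix does not change the magnitudes of its eigenvalues.

import numpy as np

def jacobi_spectral_radius(A):
    # Spectral radius of D^{-1}(L+U); the sign of the iteration
    # matrix does not affect the radius.
    D = np.diagflat(np.diag(A))
    R = A - D                        # off-diagonal part L + U
    T = np.linalg.solve(D, R)        # D^{-1} (L + U)
    return max(abs(np.linalg.eigvals(T)))

A = np.array([[4.0, 1.0], [2.0, 5.0]])
print(jacobi_spectral_radius(A) < 1)   # True: Jacobi converges for this A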
