Successive over-relaxation

From Wikipedia, the free encyclopedia

Successive over-relaxation (SOR) is a numerical method used to speed up convergence of the Gauss–Seidel method for solving a linear system of equations. A similar method can be used for any slowly converging iterative process.


Formulation

We seek the solution to a set of linear equations

A \phi = b.

Write the matrix A as A = D + L + U, where D, L and U denote the diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively.

The successive over-relaxation (SOR) iteration is defined by the recurrence relation

(D+\omega L) \phi^{(k+1)} = (-\omega U + (1-\omega)D) \phi^{(k)} + \omega b. \qquad (*)

Here, φ^(k) denotes the kth iterate and ω is a relaxation factor. This iteration reduces to the Gauss–Seidel iteration for ω = 1. As with the Gauss–Seidel method, the computation may be done in place, and the iteration is continued until the changes made by an iteration are below some tolerance.

The choice of relaxation factor is not necessarily easy, and depends upon the properties of the coefficient matrix. For symmetric, positive-definite matrices it can be proven that 0 < ω < 2 will lead to convergence, but we are generally interested in faster convergence rather than just convergence.

As in the Gauss–Seidel method, it is not necessary to solve a full linear system in order to implement the iteration (∗); indeed, given that the goal is to solve the linear system Aφ = b in the first place, it would be self-defeating to use an iterative method in which a linear system must be solved in each step. Because the matrix D + ωL on the left-hand side of (∗) is lower triangular, φ^(k+1) can be computed by forward substitution. Component-wise, the new iterate is given by

\phi^{(k+1)}_i  = (1-\omega)\phi^{(k)}_i+\frac{\omega}{a_{ii}} \left(b_i - \sum_{j=1}^{i-1} a_{ij}\phi^{(k+1)}_j - \sum_{j=i+1}^n a_{ij}\phi^{(k)}_j\right),\, i=1,2,\ldots,n.
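The equivalence between the matrix recurrence (∗) and the component-wise formula can be checked numerically. The sketch below performs one sweep both ways on a small test system; the matrix A, the right-hand side b, and the value ω = 1.25 are illustrative choices, not taken from the text above.

```python
import numpy as np

# Illustrative small symmetric positive-definite system (values are arbitrary).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
omega = 1.25
phi = np.zeros(3)

# One sweep using the component-wise formula: entries already updated in this
# sweep (j < i) use the new values, the rest (j > i) use the old values.
phi_new = phi.copy()
n = len(b)
for i in range(n):
    sigma = A[i, :i] @ phi_new[:i] + A[i, i + 1:] @ phi[i + 1:]
    phi_new[i] = (1 - omega) * phi[i] + omega / A[i, i] * (b[i] - sigma)

# The same sweep via the matrix form:
# (D + omega L) phi_new = (-omega U + (1 - omega) D) phi + omega b.
D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)
phi_matrix = np.linalg.solve(D + omega * L,
                             (-omega * U + (1 - omega) * D) @ phi + omega * b)

print(np.allclose(phi_new, phi_matrix))  # the two forms agree
```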

Algorithm

Inputs: A, b, ω
Output: φ

Choose an initial guess φ to the solution
repeat until convergence
    for i from 1 until n do
        σ ← 0
        for j from 1 until i − 1 do
            σ ← σ + a_ij φ_j
        end (j-loop)
        for j from i + 1 until n do
            σ ← σ + a_ij φ_j
        end (j-loop)
        φ_i ← (1 − ω)φ_i + (ω / a_ii)(b_i − σ)
    end (i-loop)
    check if convergence is reached
end (repeat)
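The pseudocode above translates directly into a short NumPy routine. The sketch below is a minimal implementation: the function name `sor`, the zero initial guess, the max-change stopping test, and the test system are all illustrative choices.

```python
import numpy as np

def sor(A, b, omega, tol=1e-10, max_iter=10_000):
    """Solve A phi = b by successive over-relaxation with in-place sweeps."""
    n = len(b)
    phi = np.zeros(n)  # initial guess
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            # sigma accumulates the off-diagonal terms; entries j < i are
            # already updated this sweep, entries j > i are from the last sweep.
            sigma = A[i, :i] @ phi[:i] + A[i, i + 1:] @ phi[i + 1:]
            new_val = (1 - omega) * phi[i] + omega / A[i, i] * (b[i] - sigma)
            max_change = max(max_change, abs(new_val - phi[i]))
            phi[i] = new_val
        if max_change < tol:  # changes below tolerance: converged
            break
    return phi

# Illustrative symmetric positive-definite test system.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = sor(A, b, omega=1.1)
print(np.allclose(A @ x, b))
```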

Other applications of the method

A similar technique can be used for any iterative method. Values of ω > 1 are used to speed up convergence of a slowly converging process, while values of ω < 1 often help to establish convergence of a diverging iterative process.
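The under-relaxation case (ω < 1) can be illustrated on a scalar fixed-point iteration. In the sketch below, g(x) = 3 − x² has a fixed point near x ≈ 1.3028 with |g′| ≈ 2.6 > 1 there, so the plain iteration x ← g(x) diverges; the relaxed iteration x ← (1 − ω)x + ω g(x) with ω = 0.3 converges. The function `relaxed_fixed_point`, the example map g, and the choice ω = 0.3 are illustrative, not from the text.

```python
def relaxed_fixed_point(g, x0, omega, tol=1e-12, max_iter=1000):
    """Iterate x <- (1 - omega) x + omega g(x) until the update is tiny.

    At convergence x = (1 - omega) x + omega g(x), which implies x = g(x),
    so the limit is a fixed point of g itself.
    """
    x = x0
    for _ in range(max_iter):
        x_next = (1 - omega) * x + omega * g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# g has a fixed point x* = (-1 + sqrt(13)) / 2 ~ 1.3028, where |g'(x*)| ~ 2.6,
# so plain iteration diverges; under-relaxation with omega = 0.3 converges.
g = lambda x: 3.0 - x * x
x_star = relaxed_fixed_point(g, x0=1.0, omega=0.3)
print(abs(x_star - g(x_star)) < 1e-9)  # x_star is a fixed point of g
```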

There are various methods that adaptively set the relaxation parameter ω based on the observed behavior of the converging process. Usually they help to reach super-linear convergence for some problems but fail for others.
