Successive over-relaxation

Successive over-relaxation (SOR) is a variant of the Gauss–Seidel method for solving a linear system of equations, which can yield faster convergence. A similar method can be used for any slowly converging iterative process. It was devised simultaneously by David M. Young Jr. and by Stanley P. Frankel in 1950 for the purpose of automatically solving linear systems on digital computers. Over-relaxation methods had been used before the work of Young and Frankel, for example the method of Lewis Fry Richardson and the methods developed by R. V. Southwell. However, these methods were designed for computation by human calculators, and they required some expertise to ensure convergence to the solution, which made them inapplicable for programming on digital computers. These aspects are discussed in the thesis of David M. Young.

Formulation

We seek the solution to a set of linear equations

A \phi = b.

Write the matrix A as A = D + L + U, where D, L and U denote the diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively.
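This splitting is easy to form explicitly. The following is a minimal NumPy sketch (the example matrix is arbitrary, and NumPy is used only for illustration):

import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
D = np.diag(np.diag(A))   # diagonal part of A
L = np.tril(A, k=-1)      # strictly lower triangular part
U = np.triu(A, k=1)       # strictly upper triangular part
assert np.allclose(A, D + L + U)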

The successive over-relaxation (SOR) iteration is defined by the recurrence relation

(D + \omega L) \phi^{(k+1)} = \bigl((1-\omega)D - \omega U\bigr) \phi^{(k)} + \omega b. \qquad (*)

Here, φ^(k) denotes the k-th iterate and ω is a relaxation factor. This iteration reduces to the Gauss–Seidel iteration for ω = 1. As with the Gauss–Seidel method, the computation may be done in place, and the iteration is continued until the changes made by an iteration are below some tolerance.

The choice of relaxation factor is not necessarily easy, and depends upon the properties of the coefficient matrix. For symmetric, positive-definite matrices it can be proven that 0 < ω < 2 will lead to convergence, but we are generally interested in faster convergence rather than just convergence.
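As one classical illustration of this dependence (a standard result from Young's theory, stated here without derivation): if A is consistently ordered and the Jacobi iteration matrix D^{-1}(L+U) has spectral radius μ < 1, then the asymptotically optimal relaxation factor is

\omega_{\mathrm{opt}} = \frac{2}{1 + \sqrt{1 - \mu^2}},

which lies in (1, 2) whenever μ > 0.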

As in the Gauss–Seidel method, implementing the iteration (∗) only requires solving a lower triangular system by forward substitution, which is much simpler than solving the original system directly. Written elementwise, the iteration is:

\phi^{(k+1)}_i = (1-\omega)\phi^{(k)}_i + \frac{\omega}{a_{ii}} \left(b_i - \sum_{j=1}^{i-1} a_{ij}\phi^{(k+1)}_j - \sum_{j=i+1}^{n} a_{ij}\phi^{(k)}_j\right), \qquad i=1,2,\ldots,n.

Algorithm

Inputs: A, b, ω
Output: φ

Choose an initial guess φ to the solution
repeat until convergence
    for i from 1 until n do
        σ ← 0
        for j from 1 until i − 1 do
            σ ← σ + a_{ij} φ_j
        end (j-loop)
        for j from i + 1 until n do
            σ ← σ + a_{ij} φ_j
        end (j-loop)
        φ_i ← (1 − ω) φ_i + (ω / a_{ii}) (b_i − σ)
    end (i-loop)
    check if convergence is reached
end (repeat)
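The pseudocode translates directly into a short program. Below is a minimal NumPy sketch assuming a dense matrix; the function name sor, the tolerance, and the convergence test on the largest componentwise change are choices made here for illustration:

import numpy as np

def sor(A, b, omega, phi=None, tol=1e-10, max_iter=10000):
    # Solve A phi = b by successive over-relaxation, updating phi in place.
    n = len(b)
    phi = np.zeros(n) if phi is None else phi.astype(float).copy()
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            # sigma accumulates the off-diagonal terms; entries with j < i
            # already hold values from the current sweep (in-place update).
            sigma = A[i, :i] @ phi[:i] + A[i, i+1:] @ phi[i+1:]
            new_value = (1 - omega) * phi[i] + omega / A[i, i] * (b[i] - sigma)
            max_change = max(max_change, abs(new_value - phi[i]))
            phi[i] = new_value
        if max_change < tol:   # changes below tolerance: stop
            break
    return phi

For example, with the symmetric positive-definite matrix from the splitting example above and b = np.array([2.0, 4.0, 10.0]), sor(A, b, omega=1.1) agrees with np.linalg.solve(A, b) to the chosen tolerance.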

Other applications of the method

A similar technique can be used for any iterative method. Values of ω > 1 are used to speed up convergence of a slowly converging process, while values of ω < 1 are often used to help establish convergence of a diverging iterative process.
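As a minimal sketch of this idea (the fixed-point map in the example is made up for illustration): to relax a general fixed-point iteration φ ← g(φ), blend the new value with the old one.

import numpy as np

def relaxed_fixed_point(g, x, omega, tol=1e-12, max_iter=1000):
    # Iterate x <- (1 - omega) * x + omega * g(x) until the change is small.
    for _ in range(max_iter):
        x_new = (1 - omega) * x + omega * g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# The plain iteration x <- cos(2x) diverges (|g'| > 1 at the fixed point),
# but under-relaxation with omega = 0.4 makes it converge to about 0.5149.
print(relaxed_fixed_point(lambda x: np.cos(2 * x), 1.0, omega=0.4))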

There are various methods that adaptively set the relaxation parameter ω based on the observed behavior of the iteration. Such schemes can achieve super-linear convergence on some problems but fail on others.
