Talk:Successive over-relaxation


The algorithm section has a k loop and also a superscript k. I don't think these k's are related, but I'm not sure. If they are not related, I think we should choose a different variable for the loop. Epachamo 00:26, 3 March 2006 (UTC)

They are related: φ^(k) denotes the guess for the solution in the kth iteration. Of course, in a real implementation, one wouldn't keep all the previous φ in memory, but only the current guess. -- Jitse Niesen (talk) 00:37, 3 March 2006 (UTC)
That makes sense. Thanks! Epachamo 05:39, 3 March 2006 (UTC)

for future reference

in case anyone gets up the gumption to add a section on convergence analysis, rewriting the iteration:

(D + ωL)φ^(k+1) = (−ωU + (1 − ω)D)φ^(k) + ωb

in the corrector form:

φ^(k+1) = φ^(k) + ω(D + ωL)^(−1) r^(k)

is useful. (the residual is r^(k) = b − Aφ^(k).) since the error and residual are related by Ae = r, we get:

e^(k+1) = (I − ω(D + ωL)^(−1)A)e^(k).
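(for the curious, here's a minimal pure-Python sketch of the corrector form above; the function name and the in-place sweep structure are my own illustration, not from the article. the componentwise forward sweep is exactly the (D + ωL)^(−1) solve, since each update only uses already-updated entries:)

```python
def sor_step(A, b, phi, omega):
    """One forward SOR sweep, corrector form:
    phi^(k+1) = phi^(k) + omega * (D + omega*L)^(-1) * r^(k).
    Componentwise: phi[i] += (omega / A[i][i]) * (b[i] - sum_j A[i][j]*phi[j]),
    where entries phi[0..i-1] have already been updated this sweep
    (that triangular ordering is the (D + omega*L) solve).
    Assumes A is square with nonzero diagonal; updates phi in place."""
    n = len(A)
    for i in range(n):
        # residual of row i against the current (partially updated) guess
        residual_i = b[i] - sum(A[i][j] * phi[j] for j in range(n))
        phi[i] += omega * residual_i / A[i][i]
    return phi
```

(as Jitse notes above, only the current guess is kept in memory; the superscript k is just the sweep count. e.g. for A = [[4, 1], [1, 3]], b = [1, 2], repeated sweeps with ω = 1.25 converge to the solution [1/11, 7/11].)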

also, perhaps SSOR should be made into a redirect that points here, and a section should be added. that is, SSOR is just two applications of different flavors of SOR, one right after the other. one, in the "forward" direction, is as above; the other, in the "backward" direction, is as above but with U in place of L. the two iterations of SOR can be combined in a multiplicative (the usual) or additive fashion. even though SSOR is usually slower than SOR (each with its own optimal damping ω; see, for instance, Young's "Iterative solution of large linear systems," p462, 1971), SSOR has the advantage of being a symmetric procedure, which allows it to be used as a preconditioner in methods that respect symmetry. Lunch 05:52, 15 September 2006 (UTC)
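(a minimal sketch of the multiplicative combination described above: a forward SOR sweep followed by a backward sweep with U in place of L. names and the test system are my own illustration, not from the article:)

```python
def ssor_step(A, b, phi, omega):
    """One SSOR iteration: a forward SOR sweep (solve with D + omega*L)
    followed by a backward SOR sweep (solve with D + omega*U).
    Assumes A is square with nonzero diagonal; updates phi in place."""
    n = len(A)
    # forward sweep: update phi[0], phi[1], ..., phi[n-1] in order
    for i in range(n):
        residual_i = b[i] - sum(A[i][j] * phi[j] for j in range(n))
        phi[i] += omega * residual_i / A[i][i]
    # backward sweep: same update, but in reverse order (U plays the role of L)
    for i in reversed(range(n)):
        residual_i = b[i] - sum(A[i][j] * phi[j] for j in range(n))
        phi[i] += omega * residual_i / A[i][i]
    return phi
```

(composing the two sweeps gives a symmetric iteration operator when A is symmetric, which is what makes SSOR usable as a preconditioner for, e.g., conjugate gradients.)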