Kaczmarz method

The Kaczmarz method or Kaczmarz's algorithm is an iterative algorithm for solving linear systems of equations Ax=b. It was first discovered by the Polish mathematician Stefan Kaczmarz,[1] and was rediscovered in the field of image reconstruction from projections by Richard Gordon, Robert Bender, and Gabor Herman in 1970, where it is called the Algebraic Reconstruction Technique (ART).[2]

It is applicable to any linear system of equations, but its computational advantage relative to other methods depends on the system being sparse. It has been demonstrated to be superior, in some biomedical imaging applications, to other methods such as the filtered backprojection method.[3]

It has many applications ranging from computed tomography (CT) to signal processing. It can be obtained also by applying to the hyperplanes, described by the linear system, the method of successive projections onto convex sets (POCS).[4][5]

Algorithm 1: Randomized Kaczmarz algorithm

Let Ax=b be a linear system and let x_{0} be an arbitrary initial approximation to the solution of Ax=b. For k=0,1,\dots compute:

x_{k+1}=x_{k}+\frac{b_{i}-\langle a_{i},x_{k}\rangle}{\lVert a_{i}\rVert^{2}}\,a_{i}

where i is chosen from the set \{1,2,\dots,m\} at random, with probability proportional to \lVert a_{i}\rVert^{2}; here a_{i}^{T} denotes the i-th row of the matrix A and b_{i} the i-th component of the vector b.

Under such circumstances x_{k} converges exponentially fast to the solution of Ax=b, and the rate of convergence depends only on the scaled condition number \kappa(A)=\lVert A\rVert\,\lVert A^{-1}\rVert, where \lVert A\rVert denotes the Frobenius norm of A and \lVert A^{-1}\rVert^{-1}=\inf_{\lVert z\rVert=1}\lVert Az\rVert.
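
A minimal NumPy sketch of this iteration is given below; the function name, parameters, and defaults are illustrative choices, not part of any standard library.

    import numpy as np

    def randomized_kaczmarz(A, b, num_iters=1000, x0=None, seed=0):
        """Randomized Kaczmarz iteration (Algorithm 1) for Ax = b.

        Rows are sampled with probability proportional to ||a_i||^2.
        Assumes A has no zero rows.
        """
        m, n = A.shape
        rng = np.random.default_rng(seed)
        x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()

        row_norms_sq = np.sum(A * A, axis=1)       # ||a_i||^2 for every row
        probs = row_norms_sq / row_norms_sq.sum()  # sampling distribution

        for _ in range(num_iters):
            i = rng.choice(m, p=probs)             # pick row i with prob ~ ||a_i||^2
            a_i = A[i]
            # Project the current iterate onto the hyperplane <a_i, x> = b_i.
            x = x + (b[i] - a_i @ x) / row_norms_sq[i] * a_i
        return x

For a consistent system, the returned iterate approaches the solution of Ax=b at the rate given by the theorem below.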

Theorem

Let x be the solution of Ax=b. Then Algorithm 1 converges to x in expectation, with the average error:

\mathbb{E}\,\lVert x_{k}-x\rVert^{2}\leq\left(1-\kappa(A)^{-2}\right)^{k}\cdot\lVert x_{0}-x\rVert^{2}.

Proof

There holds

\sum_{j=1}^{m}|\langle z,a_{j}\rangle|^{2}\geq\frac{\lVert z\rVert^{2}}{\lVert A^{-1}\rVert^{2}}\qquad\qquad\qquad\qquad(1)

for all z\in\mathbb{C}^{n}, since the left-hand side equals \lVert Az\rVert^{2} and \lVert Az\rVert\geq\lVert z\rVert/\lVert A^{-1}\rVert by the definition of \lVert A^{-1}\rVert.

Using the fact that \lVert A\rVert^{2}=\sum_{j=1}^{m}\lVert a_{j}\rVert^{2}, we can write (1) as

\sum_{j=1}^{m}\frac{\lVert a_{j}\rVert^{2}}{\lVert A\rVert^{2}}\left|\left\langle z,\frac{a_{j}}{\lVert a_{j}\rVert}\right\rangle\right|^{2}\geq\kappa(A)^{-2}\,\lVert z\rVert^{2}\qquad\qquad\qquad\qquad(2)

for all z\in\mathbb{C}^{n}.


The main point of the proof is to view the left-hand side in (2) as an expectation of some random variable. Namely, recall that the solution space of the j-th equation of Ax=b is the hyperplane \{y:\langle y,a_{j}\rangle=b_{j}\}, whose unit normal is \frac{a_{j}}{\lVert a_{j}\rVert}. Define a random vector Z whose values are the unit normals to all the equations of Ax=b, with probabilities as in our algorithm:

Z=\frac{a_{j}}{\lVert a_{j}\rVert}\quad\text{with probability}\quad\frac{\lVert a_{j}\rVert^{2}}{\lVert A\rVert^{2}},\qquad j=1,\dots,m.


Then (2) says that

\mathbb{E}\,|\langle z,Z\rangle|^{2}\geq\kappa(A)^{-2}\,\lVert z\rVert^{2}\qquad\qquad(3)

for all z\in\mathbb{C}^{n}.
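
Indeed, by the definition of Z,

\mathbb{E}\,|\langle z,Z\rangle|^{2}=\sum_{j=1}^{m}\frac{\lVert a_{j}\rVert^{2}}{\lVert A\rVert^{2}}\left|\left\langle z,\frac{a_{j}}{\lVert a_{j}\rVert}\right\rangle\right|^{2},

which is exactly the left-hand side of (2).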


The orthogonal projection P onto the solution space of a random equation of Ax=b is given by Pz=z-\langle z-x,Z\rangle Z.
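
To see this, note that using \langle x,a_{j}\rangle=b_{j} (x solves every equation) and Z=a_{j}/\lVert a_{j}\rVert, the standard formula for orthogonal projection onto the hyperplane \{y:\langle y,a_{j}\rangle=b_{j}\} can be rewritten as

Pz=z-\frac{\langle z,a_{j}\rangle-b_{j}}{\lVert a_{j}\rVert^{2}}\,a_{j}=z-\frac{\langle z-x,a_{j}\rangle}{\lVert a_{j}\rVert^{2}}\,a_{j}=z-\langle z-x,Z\rangle Z.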

Now we are ready to analyze our algorithm. We want to show that the error \lVert x_{k}-x\rVert^{2} decreases at each step, on average (conditioned on the previous steps), by at least the factor (1-\kappa(A)^{-2}). The next approximation x_{k} is computed from x_{k-1} as x_{k}=P_{k}x_{k-1}, where P_{1},P_{2},\dots are independent realizations of the random projection P. The vector x_{k-1}-x_{k} is in the kernel of P_{k}: it is orthogonal to the solution space of the equation onto which P_{k} projects, which contains the vector x_{k}-x (recall that x is the solution to all equations). The orthogonality of these two vectors then yields \lVert x_{k}-x\rVert^{2}=\lVert x_{k-1}-x\rVert^{2}-\lVert x_{k-1}-x_{k}\rVert^{2}. To complete the proof, we have to bound \lVert x_{k-1}-x_{k}\rVert^{2} from below. By the definition of x_{k}, we have \lVert x_{k-1}-x_{k}\rVert=|\langle x_{k-1}-x,Z_{k}\rangle|,

where Z_{1},Z_{2},\dots are independent realizations of the random vector Z.

Thus

\lVert x_{k}-x\rVert^{2}\leq\left(1-\left|\left\langle\frac{x_{k-1}-x}{\lVert x_{k-1}-x\rVert},Z_{k}\right\rangle\right|^{2}\right)\lVert x_{k-1}-x\rVert^{2}.

Now we take the expectation of both sides conditional upon the choice of the random vectors Z_{1},\dots,Z_{k-1} (hence we fix the choice of the random projections P_{1},\dots,P_{k-1}, and thus the random vectors x_{1},\dots,x_{k-1}, and we average over the random vector Z_{k}). Then

\mathbb{E}_{Z_{1},\dots,Z_{k-1}}\lVert x_{k}-x\rVert^{2}\leq\left(1-\mathbb{E}_{Z_{1},\dots,Z_{k-1}}\left|\left\langle\frac{x_{k-1}-x}{\lVert x_{k-1}-x\rVert},Z_{k}\right\rangle\right|^{2}\right)\lVert x_{k-1}-x\rVert^{2}.

By (3) and the independence,

\mathbb{E}_{Z_{1},\dots,Z_{k-1}}\lVert x_{k}-x\rVert^{2}\leq\left(1-\kappa(A)^{-2}\right)\lVert x_{k-1}-x\rVert^{2}.


Taking the full expectation of both sides, we conclude that

\mathbb{E}\,\lVert x_{k}-x\rVert^{2}\leq\left(1-\kappa(A)^{-2}\right)\mathbb{E}\,\lVert x_{k-1}-x\rVert^{2}.

Iterating this inequality k times yields the bound stated in the theorem.


\blacksquare
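
The bound can be checked numerically. The sketch below is illustrative only: it draws a random Gaussian test problem, computes \kappa(A) as \lVert A\rVert_{F}/\sigma_{\min}(A) (i.e. \lVert A\rVert\,\lVert A^{-1}\rVert in the notation of the proof), runs Algorithm 1 several times, and compares the average squared error with the right-hand side of the theorem.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 300, 50
    A = rng.standard_normal((m, n))
    x_true = rng.standard_normal(n)
    b = A @ x_true                                    # consistent overdetermined system

    # Scaled condition number: kappa(A) = ||A|| * ||A^{-1}||, with ||A|| the
    # Frobenius norm and ||A^{-1}|| = 1 / sigma_min(A) for full-rank A.
    kappa = np.linalg.norm(A, 'fro') / np.linalg.svd(A, compute_uv=False).min()

    row_norms_sq = np.sum(A * A, axis=1)
    probs = row_norms_sq / row_norms_sq.sum()
    k, trials = 2000, 20

    sq_errors = []
    for _ in range(trials):
        x = np.zeros(n)                               # start from x_0 = 0
        for _ in range(k):
            i = rng.choice(m, p=probs)
            x = x + (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
        sq_errors.append(np.linalg.norm(x - x_true) ** 2)

    bound = (1 - kappa ** -2) ** k * np.linalg.norm(x_true) ** 2
    print(f"mean squared error {np.mean(sq_errors):.3e} vs. bound {bound:.3e}")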

Algorithm 2: Randomized Kaczmarz algorithm with relaxation

Given a real or complex m\times n matrix A and a real or complex vector b, respectively, Kaczmarz's algorithm iteratively computes an approximation of the solution of the linear system of equations Ax=b. It does so by converging to the vector x^{*}=A^{T}(AA^{T})^{-1}b without the need to invert the matrix AA^{T}, which is the algorithm's main advantage, especially when the matrix A has a large number of rows.[6] In its most general form, the algorithm is defined as follows:

x_{k+1}=x_{k}+\lambda_{k}\,\frac{b_{i}-\langle a_{i},x_{k}\rangle}{\lVert a_{i}\rVert^{2}}\,a_{i}

where i=k\bmod m+1, a_{i}^{T} is the i-th row of the matrix A, b_{i} is the i-th component of the vector b, and \lambda_{k} is a relaxation parameter. The above formula gives a simple iteration routine. There are various ways of choosing the i-th equation \langle a_{i},x_{k}\rangle=b_{i} and the relaxation parameter \lambda_{k} at the k-th iteration.[3]
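
A minimal NumPy sketch of this cyclic, relaxed iteration follows; the function name and the constant choice of the relaxation parameter are illustrative.

    import numpy as np

    def kaczmarz_relaxed(A, b, num_iters=1000, lam=1.0, x0=None):
        """Cyclic Kaczmarz iteration with a (here constant) relaxation parameter lam.

        Step k uses row i = k mod m, matching the 1-indexed sweep
        i = k mod m + 1 in the formula above. Assumes A has no zero rows.
        """
        m, n = A.shape
        x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
        row_norms_sq = np.sum(A * A, axis=1)

        for k in range(num_iters):
            i = k % m                                 # cycle through the rows in order
            a_i = A[i]
            x = x + lam * (b[i] - a_i @ x) / row_norms_sq[i] * a_i
        return x

With \lambda_{k}=1 each step is the exact orthogonal projection onto the i-th hyperplane; other constant choices under- or over-relax that projection.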

If the linear system is consistent, the ART converges to the minimum-norm solution, provided that the iterations start with the zero vector. There are versions of the ART that, when applied to a system of inconsistent equations, converge to a regularized weighted least squares solution and, at least as far as initial behavior is concerned, do so at a lesser cost than other iterative methods such as the conjugate gradient method.[7]

Advances

Recently, a randomized version of the Kaczmarz method for overdetermined linear systems was introduced by Strohmer and Vershynin[8] in which the i-th equation is selected with probability proportional to \lVert a_{i}\rVert^{2}. The superiority of this selection was illustrated with the reconstruction of a bandlimited function from its nonuniformly spaced sampling values. However, it has been pointed out[9] that the reported success by Strohmer and Vershynin depends on the specific choices that were made there in translating the underlying problem, whose geometrical nature is to find a common point of a set of hyperplanes, into a system of algebraic equations. There will always be legitimate algebraic representations of the underlying problem for which the selection method in [8] will perform in an inferior manner.[8][9][10]

Notes

  1. Kaczmarz (1937)
  2. Gordon, Bender & Herman (1970)
  3. Herman (2009)
  4. Censor & Zenios (1997)
  5. Aster, Borchers & Thurber (2004)
  6. Chong & Zak (2008:226)
  7. See Herman (2009) and references therein.
  8. Strohmer & Vershynin (2009)
  9. Censor, Herman & Jiang (2009)
  10. Strohmer & Vershynin (2009b)

References

  • Kaczmarz, Stefan (1937), "Angenäherte Auflösung von Systemen linearer Gleichungen" (PDF), Bulletin International de l'Académie Polonaise des Sciences et des Lettres. Classe des Sciences Mathématiques et Naturelles. Série A, Sciences Mathématiques, 35: 355–357 
  • Chong, Edwin K. P.; Zak, Stanislaw H. (2008), An Introduction to Optimization (3rd ed.), John Wiley & Sons, pp. 226–230 
  • Gordon, Richard; Bender, Robert; Herman, Gabor (1970), "Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography", Journal of Theoretical Biology 29: 471–481
  • Herman, Gabor (2009), Fundamentals of computerized tomography: Image reconstruction from projections (2nd ed.), Springer
  • Censor, Yair; Zenios, S.A. (1997), Parallel optimization: theory, algorithms, and applications, New York: Oxford University Press 
  • Aster, Richard; Borchers, Brian; Thurber, Clifford (2004), Parameter Estimation and Inverse Problems, Elsevier 
  • Strohmer, Thomas; Vershynin, Roman (2009), "A randomized Kaczmarz algorithm for linear systems with exponential convergence" (PDF), Journal of Fourier Analysis and Applications 15: 262–278 
  • Censor, Yair; Herman, Gabor; Jiang, M. (2009), "A note on the behavior of the randomized Kaczmarz algorithm of Strohmer and Vershynin", Journal of Fourier Analysis and Applications 15: 431–436 
  • Strohmer, Thomas; Vershynin, Roman (2009b), "Comments on the randomized Kaczmarz method", Journal of Fourier Analysis and Applications 15: 437–440 
  • Vinh Nguyen, Quang; Lumban Gaol, Ford (2011), "Proceedings of the 2011 2nd International Congress on Computer Applications and Computational Science", Springer 2: 465–469 

External links

  • A randomized Kaczmarz algorithm with exponential convergence
  • Comments on the randomized Kaczmarz method