Kaczmarz method

The Kaczmarz method or Kaczmarz's algorithm is an iterative algorithm for solving linear equation systems  A x = b . It was first discovered by the Polish mathematician Stefan Kaczmarz,[1] and was rediscovered in the field of image reconstruction from projections by Richard Gordon, Robert Bender, and Gabor Herman in 1970, where it is called the Algebraic Reconstruction Technique (ART).[2] ART includes the positivity constraint, making it nonlinear.[3]

The Kaczmarz method is applicable to any linear system of equations, but its computational advantage relative to other methods depends on the system being sparse. It has been demonstrated to be superior, in some biomedical imaging applications, to other methods such as the filtered backprojection method.[4]

It has many applications, ranging from computed tomography (CT) to signal processing. It can also be obtained by applying the method of successive projections onto convex sets (POCS) to the hyperplanes described by the linear system.[5][6]

Algorithm 1: Randomized Kaczmarz algorithm

Let  A x = b  be a linear system, let  a_{i}  be the  i -th row of the complex-valued matrix  A , and let  x_{0}  be an arbitrary complex-valued initial approximation to the solution of  Ax=b . For  k=0,1,...  compute:


x_{k+1} = x_{k} + \frac{b_{i} - \langle a_{i}, x_{k} \rangle}{\lVert a_{i} \rVert^2} \overline{a_{i}}

where  \overline{a_i}  denotes the (entrywise) complex conjugate of  a_i , and  i  is chosen from the set  {1,2,...,m}  at random, with probability proportional to  \lVert a_{i} \rVert^2 .

Under such circumstances  x_{k}  converges exponentially fast to the solution of  Ax=b , and the rate of convergence depends only on the scaled condition number  \kappa(A)=\lVert A \rVert\,\lVert A^{-1} \rVert , where  \lVert A \rVert  denotes the Frobenius norm of  A .
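A minimal sketch of this iteration in Python with NumPy is given below; the function name, the fixed iteration count, and the optional random-generator argument are illustrative choices, not part of the method as stated above.

import numpy as np

def randomized_kaczmarz(A, b, x0=None, num_iters=1000, rng=None):
    """Sketch of Algorithm 1: rows sampled with probability proportional to ||a_i||^2."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    x = np.zeros(n, dtype=A.dtype) if x0 is None else np.array(x0, dtype=A.dtype)
    row_norms_sq = np.sum(np.abs(A) ** 2, axis=1)   # ||a_i||^2 for every row
    probs = row_norms_sq / row_norms_sq.sum()       # selection probabilities
    for _ in range(num_iters):
        i = rng.choice(m, p=probs)                  # pick row i with probability ~ ||a_i||^2
        a_i = A[i]
        # x_{k+1} = x_k + (b_i - <a_i, x_k>) / ||a_i||^2 * conj(a_i),
        # where <a_i, x_k> denotes the i-th entry of A x_k
        x = x + (b[i] - a_i @ x) / row_norms_sq[i] * np.conj(a_i)
    return x

For a real matrix the conjugation is a no-op and the update reduces to the familiar real-valued Kaczmarz step.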

Theorem

Let  x be the solution of  Ax=b . Then Algorithm 1 converges to  x in expectation, with the average error:

 E{\lVert x_{k}-x \rVert^2} \leq (1-\kappa(A)^{-2})^{k} \cdot {\lVert x_{0}-x \rVert^2}.

Proof

We have


\sum_{j=1}^{m}|\langle z,a_j \rangle|^2 \geq \frac{\lVert z \rVert^2}{\lVert A^{-1} \rVert^2} \qquad\qquad\qquad\qquad (1)
for all  z \in \mathbb C^n , where  \lVert A^{-1} \rVert  denotes the smallest constant  M  satisfying  M\lVert Az \rVert \geq \lVert z \rVert  for all  z  (in particular,  A  is assumed to have full column rank).

Using the fact that  {\lVert A \rVert^2}=\sum_{j=1}^{m}{\lVert a_j \rVert^2}  (the squared Frobenius norm of  A ) and dividing both sides by  \lVert A \rVert^2 , we can write (1) as


\begin{align}
\sum_{j=1}^{m} \frac{{\lVert a_j \rVert^2}}{\lVert A \rVert^2}\left|\left\langle z,\frac {a_j}{\lVert a_j \rVert}\right\rangle \right|^2 \geq \kappa(A)^{-2}{\lVert z \rVert^2} \qquad\qquad\qquad\qquad (2)
\end{align}
for all  z \in \mathbb C^n .

The main point of the proof is to view the left-hand side of (2) as an expectation of some random variable. Namely, recall that the solution space of the j-th equation of  Ax=b  is the hyperplane  {y : \langle y,a_j \rangle = b_j} , whose unit normal is  \frac{a_j}{\lVert a_j \rVert} . Define a random vector Z whose values are the unit normals of the equations of  Ax=b , with probabilities as in our algorithm:

 Z=\frac {a_j}{\lVert a_j \rVert} with probability  \frac{\lVert a_j \rVert^2}{\lVert A \rVert^2} \qquad\qquad\qquad j=1,\cdots,m
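Spelling this expectation out term by term recovers exactly the left-hand side of (2):

 \mathbb E|\langle z,Z\rangle|^2 = \sum_{j=1}^{m} \frac{\lVert a_j \rVert^2}{\lVert A \rVert^2}\left|\left\langle z,\frac {a_j}{\lVert a_j \rVert}\right\rangle \right|^2 .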

Then (2) says that


\begin{align}
\mathbb E|\langle z,Z\rangle|^2 \geq\kappa(A)^{-2}{\lVert z \rVert^2} \qquad\qquad (3)
\end{align}
for all  z \in \mathbb C^n .

The orthogonal projection P onto the solution space of a random equation of  Ax=b is given by  Pz= z-\langle z-x, Z\rangle Z.
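For a fixed equation j this is the standard projection onto a hyperplane: since  x  lies on the hyperplane,  b_j=\langle x,a_j \rangle , and with  Z=\frac{a_j}{\lVert a_j \rVert}  one has

 Pz= z-\frac{\langle z,a_j\rangle - b_j}{\lVert a_j \rVert^2}\,a_j = z-\left\langle z-x,\frac{a_j}{\lVert a_j \rVert}\right\rangle \frac{a_j}{\lVert a_j \rVert} = z-\langle z-x, Z\rangle Z.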

Now we are ready to analyze our algorithm. We want to show that the error  {\lVert x_k-x \rVert^2}  decreases on average (conditioned on the previous steps) by at least a factor of  (1-\kappa(A)^{-2})  at each step. The next approximation  x_k  is computed from  x_{k-1}  as  x_k= P_kx_{k-1} , where  P_1,P_2,\cdots  are independent realizations of the random projection  P . The vector  x_{k-1}-x_k  is in the kernel of  P_k . It is orthogonal to the solution space of the equation onto which  P_k  projects, and that solution space contains the vector  x_k-x  (recall that  x  is the solution to all equations). The orthogonality of these two vectors then yields  {\lVert x_k-x \rVert^2}={\lVert x_{k-1}-x \rVert^2}-{\lVert x_{k-1}-x_k \rVert^2} . To complete the proof, we have to bound  {\lVert x_{k-1}-x_k \rVert^2}  from below. By the definition of  x_k , we have  {\lVert x_{k-1}-x_k \rVert}=|\langle x_{k-1}-x,Z_k\rangle|

where  Z_1,Z_2,\cdots are independent realizations of the random vector  Z.

Thus  {\lVert x_k-x \rVert^2} = \left(1-\left|\left\langle\frac{x_{k-1}-x}{\lVert x_{k-1}-x \rVert},Z_k\right\rangle\right|^2\right){\lVert x_{k-1}-x \rVert^2}.

Now we take the expectation of both sides conditional upon the choice of the random vectors  Z_1,\cdots,Z_{k-1} (hence we fix the choice of the random projections  P_1,\cdots,P_{k-1} and thus the random vectors  x_1,\cdots,x_{k-1} and we average over the random vector  Z_k ). Then

 \mathbb E_{{Z_1,\cdots,Z_{k-1}}}{\lVert x_k-x \rVert^2} = \left(1-\mathbb E_{{Z_1,\cdots,Z_{k-1}}}\left|\left\langle\frac{x_{k-1}-x}{\lVert x_{k-1}-x \rVert},Z_k\right\rangle\right|^2\right){\lVert x_{k-1}-x \rVert^2}.

By (3) and the independence,

 \mathbb E_{{Z_1,\cdots,Z_{k-1}}}{\lVert x_k-x \rVert^2} \leq (1-\kappa(A)^{-2}){\lVert x_{k-1}-x \rVert^2}.

Taking the full expectation of both sides, we conclude that

 \mathbb E{\lVert x_k-x \rVert^2} \leq (1-\kappa(A)^{-2})\mathbb E{\lVert x_{k-1}-x \rVert^2}.

Iterating this recursion  k  times yields the bound  \mathbb E{\lVert x_{k}-x \rVert^2} \leq (1-\kappa(A)^{-2})^{k}{\lVert x_{0}-x \rVert^2}  stated in the theorem.

 \blacksquare

Algorithm 2: Randomized Kaczmarz algorithm with relaxation

Given a real or complex  m \times n  matrix  A  and a real or complex vector  b , respectively, the Kaczmarz algorithm iteratively computes an approximation of the solution of the linear system of equations  A x = b . It does so by converging to the vector  x^*=A^H (AA^H )^{-1} b  (where  A^H  is the conjugate transpose of  A ) without the need to invert the matrix  AA^H ; indeed, whenever  AA^H  is invertible,  Ax^*=AA^H (AA^H )^{-1} b=b . Avoiding this inversion is the algorithm's main advantage, especially when the matrix  A  has a large number of rows.[7] Most generally, the algorithm is defined as follows:


x^{k+1} = x^{k} + \lambda_k \frac{b_{i} - \langle a_{i}, x^{k} \rangle}{\lVert a_{i} \rVert^2} \overline{a_{i}}

where  i = k \, \bmod \, m + 1 ,  a_i  is the i-th row of the matrix  A ,  b_i  is the i-th component of the vector  b , and  \lambda_k  is a relaxation parameter. The above formula gives a simple iteration routine. There are various ways of choosing the i-th equation  \langle a_{i}, x \rangle = b_i  and the relaxation parameter  \lambda_k  at the k-th iteration.[4]
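A minimal sketch of one such realization of the cyclic sweep in Python with NumPy is given below; the constant relaxation parameter and the fixed number of sweeps are illustrative assumptions, since  \lambda_k  may in general vary from iteration to iteration.

import numpy as np

def kaczmarz_relaxed(A, b, x0=None, lam=1.0, num_sweeps=100):
    """Sketch of Algorithm 2: rows visited cyclically, with relaxation parameter lam."""
    m, n = A.shape
    x = np.zeros(n, dtype=A.dtype) if x0 is None else np.array(x0, dtype=A.dtype)
    row_norms_sq = np.sum(np.abs(A) ** 2, axis=1)   # ||a_i||^2 for every row
    for _ in range(num_sweeps):
        for i in range(m):                          # 0-based counterpart of i = k mod m + 1
            a_i = A[i]
            # x^{k+1} = x^k + lam * (b_i - <a_i, x^k>) / ||a_i||^2 * conj(a_i)
            x = x + lam * (b[i] - a_i @ x) / row_norms_sq[i] * np.conj(a_i)
    return x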

If the linear system is consistent, the ART converges to the minimum-norm solution, provided that the iterations start with the zero vector. There are versions of the ART that converge to a regularized weighted least squares solution when applied to a system of inconsistent equations and, at least as far as initial behavior is concerned, at a lesser cost than other iterative methods, such as the conjugate gradient method.[8]
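As a quick illustration of the minimum-norm behaviour just described, one might compare the sweep sketched above, started from the zero vector (its default), with the pseudoinverse solution of a small consistent underdetermined system; kaczmarz_relaxed is the illustrative helper defined above, and the system below is made up for the example.

import numpy as np

# Consistent underdetermined system: 2 equations, 3 unknowns.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 2.0])

x_art = kaczmarz_relaxed(A, b, lam=1.0, num_sweeps=500)   # starts from x0 = 0
x_min_norm = np.linalg.pinv(A) @ b                        # minimum-norm solution A^+ b

print(np.allclose(x_art, x_min_norm, atol=1e-6))          # expected: True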

Advances

Recently, a randomized version of the Kaczmarz method for overdetermined linear systems was introduced by Strohmer and Vershynin[9] in which the i-th equation is selected with probability proportional to  \lVert a_{i} \rVert ^2 . The superiority of this selection was illustrated with the reconstruction of a bandlimited function from its nonuniformly spaced sampling values. However, it has been pointed out[10] that the reported success by Strohmer and Vershynin depends on the specific choices that were made there in translating the underlying problem, whose geometrical nature is to find a common point of a set of hyperplanes, into a system of algebraic equations. There will always be legitimate algebraic representations of the underlying problem for which the selection method in [9] will perform in an inferior manner.[9][10][11]
