Sherman–Morrison formula

In mathematics, in particular linear algebra, the Sherman–Morrison formula,[1][2][3] named after Jack Sherman and Winifred J. Morrison, computes the inverse of the sum of an invertible matrix A and the dyadic product, uv^T, of a column vector u and a row vector v^T. The Sherman–Morrison formula is a special case of the Woodbury formula.

Statement

Suppose A is an invertible square matrix and u, v are vectors. Suppose furthermore that 1 + v^T A^{-1}u \neq 0. Then the Sherman–Morrison formula states that

(A+uv^T)^{-1} = A^{-1} - {A^{-1}uv^T A^{-1} \over 1 + v^T A^{-1}u}.

Here, uv^T is the dyadic product of the two vectors u and v. The general form shown here is the one published by Bartlett.[4]
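
As an illustration, the identity can be checked numerically. The following sketch (using NumPy, with an arbitrarily chosen well-conditioned matrix and random vectors that are not taken from the sources above) compares the right-hand side of the formula with a directly computed inverse:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n)) + n * np.eye(n)  # shifted toward the identity so it is (almost surely) invertible
    u = rng.standard_normal(n)
    v = rng.standard_normal(n)

    A_inv = np.linalg.inv(A)
    denom = 1.0 + v @ A_inv @ u                      # must be nonzero for the formula to apply
    rhs = A_inv - np.outer(A_inv @ u, v @ A_inv) / denom
    lhs = np.linalg.inv(A + np.outer(u, v))

    print(np.allclose(lhs, rhs))                     # True, up to rounding error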

Application

If the inverse of A is already known, the formula provides a numerically cheap way to compute the inverse of A corrected by the matrix uv^T (depending on the point of view, the correction may be seen as a perturbation or as a rank-1 update). The computation is relatively cheap because the inverse of A + uv^T does not have to be computed from scratch (which in general is expensive), but can be obtained by correcting (or perturbing) A^{-1}. Using unit vectors for u or v, individual columns or rows of A may be manipulated and a correspondingly updated inverse computed relatively cheaply in this way.
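
A minimal sketch of such an update routine (NumPy; the function name sherman_morrison_update is illustrative, not taken from the cited sources):

    import numpy as np

    def sherman_morrison_update(A_inv, u, v):
        """Return (A + u v^T)^{-1}, given A^{-1}, without inverting from scratch."""
        Au = A_inv @ u          # A^{-1} u
        vA = v @ A_inv          # v^T A^{-1}
        denom = 1.0 + v @ Au    # 1 + v^T A^{-1} u, assumed nonzero
        return A_inv - np.outer(Au, vA) / denom

Each call performs a few matrix-vector products and one outer product, i.e. O(n^2) work for an n × n matrix, compared with the O(n^3) work of inverting A + uv^T directly.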

In the general case, where A^{-1} is an n × m matrix and u and v are arbitrary vectors (in which case the whole matrix is updated),[4] the computation takes 3mn scalar multiplications.[5]

If u is a unit vector, only one row of A is changed[1] and the computation takes only 2mn scalar multiplications. The same holds if v is a unit vector, in which case only one column of A is changed.
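
For example, adding a row vector w^T to row i of A corresponds to u = e_i and v = w, and A^{-1}e_i is simply column i of A^{-1}, so one matrix-vector product disappears (a hypothetical helper in the same NumPy style as above):

    import numpy as np

    def update_after_row_change(A_inv, i, w):
        """Return (A + e_i w^T)^{-1}, i.e. the inverse after adding w^T to row i of A."""
        col_i = A_inv[:, i]      # A^{-1} e_i is just a column of A^{-1}; no multiplication needed
        wA = w @ A_inv           # w^T A^{-1}
        denom = 1.0 + wA[i]      # 1 + w^T A^{-1} e_i
        return A_inv - np.outer(col_i, wA) / denom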

If both u and v are unit vectors, only a single element of A is changed[2] and the computation takes only mn scalar multiplications.
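
Similarly, adding a scalar delta to the single entry A_{ij} corresponds to u = e_i and v = delta e_j, so both A^{-1}u and v^T A^{-1} are just a scaled column and row of A^{-1} (again a hypothetical helper):

    import numpy as np

    def update_after_element_change(A_inv, i, j, delta):
        """Return (A + delta * e_i e_j^T)^{-1}, i.e. the inverse after A[i, j] += delta."""
        col_i = A_inv[:, i]                # A^{-1} e_i
        row_j = A_inv[j, :]                # e_j^T A^{-1}
        denom = 1.0 + delta * A_inv[j, i]  # 1 + delta * e_j^T A^{-1} e_i
        return A_inv - delta * np.outer(col_i, row_j) / denom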

Verification

We verify the defining property of the inverse: a matrix Y (in this case the right-hand side of the Sherman–Morrison formula) is the inverse of a matrix X (in this case A + uv^T) if and only if XY = YX = I.

We first verify that the right-hand side (Y) satisfies XY = I. Note that v^T A^{-1}u is a scalar and can be factored out.

XY = (A + uv^T)\left( A^{-1} - {A^{-1} uv^T A^{-1} \over 1 + v^T A^{-1}u}\right)
= AA^{-1} +  uv^T A^{-1} - {AA^{-1}uv^T A^{-1} + uv^T A^{-1}uv^T A^{-1} \over 1 + v^TA^{-1}u}
= I +  uv^T A^{-1} - {uv^T A^{-1} + uv^T A^{-1}uv^T A^{-1} \over 1 + v^T A^{-1}u}
= I + uv^T A^{-1} - {(1 + v^T A^{-1}u) uv^T A^{-1} \over 1 + v^T A^{-1}u}
= I + uv^T A^{-1} - uv^T A^{-1}\,
= I.

In the same way, we can verify that

YX = \left(A^{-1} - {A^{-1}uv^T A^{-1} \over 1 + v^T A^{-1}u}\right)(A + uv^T) = I.
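
Written out, the steps mirror those for XY, again factoring out the scalar v^T A^{-1}u:

YX = A^{-1}A + A^{-1}uv^T - {A^{-1}uv^T A^{-1}A + A^{-1}uv^T A^{-1}uv^T \over 1 + v^T A^{-1}u}
= I + A^{-1}uv^T - {A^{-1}uv^T + A^{-1}u\,(v^T A^{-1}u)\,v^T \over 1 + v^T A^{-1}u}
= I + A^{-1}uv^T - {(1 + v^T A^{-1}u)\, A^{-1}uv^T \over 1 + v^T A^{-1}u}
= I + A^{-1}uv^T - A^{-1}uv^T
= I.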

References

  1. Jack Sherman and Winifred J. Morrison, "Adjustment of an Inverse Matrix Corresponding to Changes in the Elements of a Given Column or a Given Row of the Original Matrix" (abstract), Annals of Mathematical Statistics, Vol. 20, p. 621 (1949).
  2. Jack Sherman and Winifred J. Morrison, "Adjustment of an Inverse Matrix Corresponding to a Change in One Element of a Given Matrix", Annals of Mathematical Statistics, Vol. 21, No. 1, pp. 124–127 (1950). MR35118
  3. William H. Press, Brian P. Flannery, Saul A. Teukolsky, William T. Vetterling, Numerical Recipes in C: The Art of Scientific Computing, pp. 73+, Cambridge University Press (1992). ISBN 0-521-43108-5.
  4. M. S. Bartlett, "An Inverse Matrix Adjustment Arising in Discriminant Analysis", Annals of Mathematical Statistics, Vol. 22, No. 1, pp. 107–111 (1951). MR40068
  5. Update of the inverse matrix by the Sherman-Morrison formula
