Matrix determinant lemma
From Wikipedia, the free encyclopedia
In mathematics, in particular linear algebra, the matrix determinant lemma[1][2] computes the determinant of the sum of an invertible matrix A and the dyadic product, uv^T, of a column vector u and a row vector v^T.
Statement
Suppose A is an invertible square matrix and u, v are column vectors. Then the matrix determinant lemma states that

    det(A + uv^T) = (1 + v^T A^{-1} u) det(A).
Here, uv^T is the dyadic product of the two vectors u and v.
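As a quick numerical check (a minimal sketch using NumPy; the matrix and vector values are illustrative), the identity can be verified by computing both sides directly:

```python
import numpy as np

# An invertible 2x2 matrix A and two column vectors u, v (illustrative values).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
u = np.array([1.0, 2.0])
v = np.array([3.0, 1.0])

# Left-hand side: determinant of A + uv^T computed from scratch.
lhs = np.linalg.det(A + np.outer(u, v))

# Right-hand side: the matrix determinant lemma.
rhs = (1.0 + v @ np.linalg.inv(A) @ u) * np.linalg.det(A)

print(np.isclose(lhs, rhs))  # prints True
```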
Application
If the determinant and inverse of A are already known, the formula provides a numerically cheap way to compute the determinant of A corrected by the matrix uv^T. The computation is relatively cheap because the determinant of A + uv^T does not have to be computed from scratch (which in general costs O(n^3) operations for an n-by-n matrix); given A^{-1}, the lemma requires only O(n^2) operations. Using unit vectors for u and/or v, individual columns, rows or elements[3] of A may be manipulated and a correspondingly updated determinant computed relatively cheaply in this way.
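For instance, adding a vector d to a single column of A is the rank-one update A + d e_j^T with v = e_j a unit vector, so the new determinant follows from one matrix-vector product. A minimal sketch (random test data; det(A) and A^{-1} are assumed to be precomputed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

# Assume det(A) and A^{-1} have already been computed.
detA = np.linalg.det(A)
Ainv = np.linalg.inv(A)

# Modify column j of A by adding d to it: A_new = A + d e_j^T,
# i.e. u = d and v = e_j (a unit vector).
j = 2
d = rng.standard_normal(n)
e_j = np.zeros(n)
e_j[j] = 1.0

# Lemma: det(A + d e_j^T) = (1 + e_j^T A^{-1} d) det(A),
# an O(n^2) update instead of an O(n^3) recomputation.
det_updated = (1.0 + e_j @ Ainv @ d) * detA

assert np.isclose(det_updated, np.linalg.det(A + np.outer(d, e_j)))
```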
When the matrix determinant lemma is used in conjunction with the Sherman-Morrison formula, both the inverse and determinant may be conveniently updated together.
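The joint update can be sketched as follows (an illustrative helper, not a standard library routine); both formulas share the scalar factor 1 + v^T A^{-1} u, so it is computed once:

```python
import numpy as np

def rank_one_update(Ainv, detA, u, v):
    """Jointly update A^{-1} and det(A) for the matrix A + uv^T.

    Uses the Sherman-Morrison formula for the inverse and the matrix
    determinant lemma for the determinant.
    """
    w = Ainv @ u
    lam = 1.0 + v @ w                              # shared scalar factor
    new_inv = Ainv - np.outer(w, v @ Ainv) / lam   # Sherman-Morrison
    new_det = lam * detA                           # determinant lemma
    return new_inv, new_det

# Random test data (illustrative).
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
u = rng.standard_normal(4)
v = rng.standard_normal(4)

new_inv, new_det = rank_one_update(np.linalg.inv(A), np.linalg.det(A), u, v)
assert np.allclose(new_inv, np.linalg.inv(A + np.outer(u, v)))
assert np.isclose(new_det, np.linalg.det(A + np.outer(u, v)))
```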
Generalization
Suppose A is an invertible n-by-n matrix and U, V are n-by-m matrices. Then

    det(A + UV^T) = det(I_m + V^T A^{-1} U) det(A).
In the special case A = I_n this is Sylvester's theorem for determinants:

    det(I_n + UV^T) = det(I_m + V^T U).
Given additionally an invertible m-by-m matrix W, the relationship can also be expressed as

    det(A + UWV^T) = det(W^{-1} + V^T A^{-1} U) det(W) det(A).
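Both generalized identities can be checked numerically in the same way as the rank-one case (a minimal sketch with random matrices; a random W is invertible with probability one). Note that the m-by-m determinant on the right replaces the n-by-n determinant on the left, which is the source of the savings when m << n:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 2
A = rng.standard_normal((n, n))
U = rng.standard_normal((n, m))
V = rng.standard_normal((n, m))
W = rng.standard_normal((m, m))

Ainv = np.linalg.inv(A)

# det(A + U V^T) = det(I_m + V^T A^{-1} U) det(A):
# an m x m determinant replaces an n x n one.
lhs = np.linalg.det(A + U @ V.T)
rhs = np.linalg.det(np.eye(m) + V.T @ Ainv @ U) * np.linalg.det(A)
assert np.isclose(lhs, rhs)

# With an invertible m x m matrix W:
# det(A + U W V^T) = det(W^{-1} + V^T A^{-1} U) det(W) det(A).
lhs_w = np.linalg.det(A + U @ W @ V.T)
rhs_w = (np.linalg.det(np.linalg.inv(W) + V.T @ Ainv @ U)
         * np.linalg.det(W) * np.linalg.det(A))
assert np.isclose(lhs_w, rhs_w)
```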
See also
- The Sherman-Morrison formula, which shows how to update the inverse, A^{-1}, to obtain (A + uv^T)^{-1}.
- The Woodbury formula, which shows how to update the inverse, A^{-1}, to obtain (A + UV^T)^{-1}.
References
- ^ Harville, D. A. (1997). Matrix Algebra From a Statistician’s Perspective. Springer-Verlag.
- ^ Brookes, M. (2005). The Matrix Reference Manual [online].
- ^ Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (1992). Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press. p. 73. ISBN 0-521-43108-5.