Binomial inverse theorem

In mathematics, the binomial inverse theorem expresses the inverse of a matrix sum of the form A + UBV in terms of A^{-1}, which is useful when A^{-1} is already known or easy to compute.

If A, U, B, V are matrices of sizes p×p, p×q, q×q, q×p, respectively, then


\left(\mathbf{A}+\mathbf{UBV}\right)^{-1}=
\mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{UB}\left(\mathbf{B}+\mathbf{BVA}^{-1}\mathbf{UB}\right)^{-1}\mathbf{BVA}^{-1}

provided A and B + BVA^{-1}UB are nonsingular. Note that if B is invertible, the two B terms flanking the inverted quantity on the right-hand side can be written as (B^{-1})^{-1} and absorbed into the inverse, since B(B + BVA^{-1}UB)^{-1}B = (B^{-1} + VA^{-1}U)^{-1}. This results in


\left(\mathbf{A}+\mathbf{UBV}\right)^{-1}=
\mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{U}\left(\mathbf{B}^{-1}+\mathbf{VA}^{-1}\mathbf{U}\right)^{-1}\mathbf{VA}^{-1}.

This is the matrix inversion lemma, also known as the Woodbury matrix identity, which can also be derived using blockwise matrix inversion.
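
As an illustration (not part of the original statement), the following sketch checks the matrix inversion lemma numerically with random matrices; NumPy and the sizes p = 5, q = 2 are arbitrary choices made for this example.

import numpy as np

rng = np.random.default_rng(0)
p, q = 5, 2
A = rng.standard_normal((p, p)) + 5 * np.eye(p)   # diagonal shift keeps A well conditioned
U = rng.standard_normal((p, q))
B = rng.standard_normal((q, q)) + 5 * np.eye(q)   # B invertible, as the lemma requires
V = rng.standard_normal((q, p))
Ainv = np.linalg.inv(A)

# Left-hand side: direct inverse of A + U B V
direct = np.linalg.inv(A + U @ B @ V)

# Right-hand side: A^{-1} - A^{-1} U (B^{-1} + V A^{-1} U)^{-1} V A^{-1}
lemma = Ainv - Ainv @ U @ np.linalg.inv(np.linalg.inv(B) + V @ Ainv @ U) @ V @ Ainv

print(np.allclose(direct, lemma))   # True, up to floating-point error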

Verification

First notice that

\left(\mathbf{A} + \mathbf{UBV}\right) \mathbf{A}^{-1}\mathbf{UB} = \mathbf{UB} + \mathbf{UBVA}^{-1}\mathbf{UB} = \mathbf{U} \left(\mathbf{B} + \mathbf{BVA}^{-1}\mathbf{UB}\right).

Now multiply the matrix we wish to invert by its alleged inverse

\left(\mathbf{A} + \mathbf{UBV}\right) \left( \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{UB}\left(\mathbf{B} + \mathbf{BVA}^{-1}\mathbf{UB}\right)^{-1}\mathbf{BVA}^{-1} \right)
= \mathbf{I}_p + \mathbf{UBVA}^{-1} - \mathbf{U} \left(\mathbf{B} + \mathbf{BVA}^{-1}\mathbf{UB}\right) \left(\mathbf{B} + \mathbf{BVA}^{-1}\mathbf{UB}\right)^{-1}\mathbf{BVA}^{-1}
= \mathbf{I}_p + \mathbf{UBVA}^{-1} - \mathbf{UBVA}^{-1} = \mathbf{I}_p,

which verifies that it is the inverse.

Thus, if A^{-1} and \left(\mathbf{B} + \mathbf{BVA}^{-1}\mathbf{UB}\right)^{-1} exist, then \left(\mathbf{A} + \mathbf{UBV}\right)^{-1} exists and is given by the formula of the theorem above.[1]
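
The same check can be carried out numerically for the general form of the theorem. The sketch below, again assuming NumPy and arbitrary sizes chosen only for the example, verifies both the intermediate identity above and that the claimed expression is a right inverse of A + UBV.

import numpy as np

rng = np.random.default_rng(1)
p, q = 4, 3
A = rng.standard_normal((p, p)) + 4 * np.eye(p)
U = rng.standard_normal((p, q))
B = rng.standard_normal((q, q)) + 4 * np.eye(q)
V = rng.standard_normal((q, p))
Ainv = np.linalg.inv(A)

# Intermediate identity: (A + UBV) A^{-1} U B = U (B + B V A^{-1} U B)
left = (A + U @ B @ V) @ Ainv @ U @ B
right = U @ (B + B @ V @ Ainv @ U @ B)
print(np.allclose(left, right))   # True

# The claimed inverse from the theorem multiplies (A + UBV) to the identity
middle = np.linalg.inv(B + B @ V @ Ainv @ U @ B)
claimed = Ainv - Ainv @ U @ B @ middle @ B @ V @ Ainv
print(np.allclose((A + U @ B @ V) @ claimed, np.eye(p)))   # True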

Special cases

If p = q and U = V = I_p is the identity matrix, then


\left(\mathbf{A}+\mathbf{B}\right)^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{B}\left(\mathbf{B}+\mathbf{BA}^{-1}\mathbf{B}\right)^{-1}\mathbf{BA}^{-1}.

Recalling the identity


\left(\mathbf{A} \mathbf{B}\right)^{-1} = \mathbf{B}^{-1} \mathbf{A}^{-1},

and noting that \mathbf{B}+\mathbf{BA}^{-1}\mathbf{B} = \left(\mathbf{I}+\mathbf{BA}^{-1}\right)\mathbf{B}, so that \left(\mathbf{B}+\mathbf{BA}^{-1}\mathbf{B}\right)^{-1} = \mathbf{B}^{-1}\left(\mathbf{I}+\mathbf{BA}^{-1}\right)^{-1}, we can also express the previous equation in the simpler form


\left(\mathbf{A}+\mathbf{B}\right)^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\left(\mathbf{I}+\mathbf{B}\mathbf{A}^{-1}\right)^{-1}\mathbf{B}\mathbf{A}^{-1}.
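
A quick numerical comparison of the two expressions for (A + B)^{-1} above, assuming NumPy and an arbitrary size n chosen for the example, might look as follows.

import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)) + 4 * np.eye(n)
B = rng.standard_normal((n, n)) + 4 * np.eye(n)
Ainv = np.linalg.inv(A)

direct = np.linalg.inv(A + B)
first_form = Ainv - Ainv @ B @ np.linalg.inv(B + B @ Ainv @ B) @ B @ Ainv
simpler_form = Ainv - Ainv @ np.linalg.inv(np.eye(n) + B @ Ainv) @ B @ Ainv

print(np.allclose(direct, first_form), np.allclose(direct, simpler_form))   # True True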

If B = I_q is the identity matrix and q = 1, then U is a column vector, written u, and V is a row vector, written v^T. Then the theorem implies


\left(\mathbf{A}+\mathbf{uv}^\mathrm{T}\right)^{-1} = \mathbf{A}^{-1}- \frac{\mathbf{A}^{-1}\mathbf{uv}^\mathrm{T}\mathbf{A}^{-1}}{1+\mathbf{v}^\mathrm{T}\mathbf{A}^{-1}\mathbf{u}}.

This is the Sherman–Morrison formula. It is useful if one has a matrix A with a known inverse A^{-1} and needs to invert matrices of the form A + uv^T quickly.
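
In particular, once A^{-1} is known, the update costs O(p^2) operations rather than the O(p^3) of a fresh inversion. Below is a minimal sketch assuming NumPy; the helper name rank_one_update_inverse is chosen here purely for illustration.

import numpy as np

def rank_one_update_inverse(Ainv, u, v):
    # Given Ainv = A^{-1}, return (A + u v^T)^{-1} using the formula above.
    Au = Ainv @ u                 # A^{-1} u
    vA = v @ Ainv                 # v^T A^{-1}
    denom = 1.0 + v @ Au          # 1 + v^T A^{-1} u (must be nonzero)
    return Ainv - np.outer(Au, vA) / denom

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n)) + 4 * np.eye(n)
u = rng.standard_normal(n)
v = rng.standard_normal(n)
Ainv = np.linalg.inv(A)

updated = rank_one_update_inverse(Ainv, u, v)
print(np.allclose(updated, np.linalg.inv(A + np.outer(u, v))))   # True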

If we set A = I_p and B = I_q, we get

\left(\mathbf{I}_p + \mathbf{UV}\right)^{-1} = \mathbf{I}_p - \mathbf{U}\left(\mathbf{I}_q + \mathbf{VU}\right)^{-1}\mathbf{V}.
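
This form is attractive when q is much smaller than p, because only a q×q matrix has to be inverted. A minimal sketch of that shortcut, assuming NumPy and the illustrative sizes p = 200, q = 5:

import numpy as np

rng = np.random.default_rng(4)
p, q = 200, 5
U = rng.standard_normal((p, q))
V = rng.standard_normal((q, p))

# Invert only the small q x q matrix I_q + V U ...
small = np.linalg.inv(np.eye(q) + V @ U)
via_identity = np.eye(p) - U @ small @ V

# ... and compare against inverting the full p x p matrix I_p + U V directly
print(np.allclose(via_identity, np.linalg.inv(np.eye(p) + U @ V)))   # True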

In particular, if q = 1, then

\left(\mathbf{I}+\mathbf{uv}^\mathrm{T}\right)^{-1} = \mathbf{I} - \frac{\mathbf{uv}^\mathrm{T}}{1+\mathbf{v}^\mathrm{T}\mathbf{u}}.
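
As a small worked check of this rank-one formula (an example added here, not part of the original text), take u = v = (1, 1)^T, so that v^T u = 2 and uv^T is the 2×2 all-ones matrix:

\left(\mathbf{I}+\mathbf{uv}^\mathrm{T}\right)^{-1}
= \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}^{-1}
= \mathbf{I} - \frac{1}{3}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}
= \begin{pmatrix} \tfrac{2}{3} & -\tfrac{1}{3} \\ -\tfrac{1}{3} & \tfrac{2}{3} \end{pmatrix},

and multiplying this result by I + uv^T indeed gives the identity matrix.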

References

  1. Gilbert Strang (2003). Introduction to Linear Algebra (3rd ed.). Wellesley, MA: Wellesley-Cambridge Press. ISBN 0-9614088-9-8.