User:Ashigabou/Matrix calculus

From Wikipedia, the free encyclopedia

In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices, where it defines the matrix derivative. This notation is well-suited to describing systems of differential equations, and taking derivatives of matrix-valued functions with respect to matrix variables. This notation is commonly used in statistics and engineering, while the tensor index notation is preferred in physics.

Notation

Let M(n,m) denote the space of real n×m matrices with n rows and m columns, whose elements will be denoted F, X, Y, etc. An element of M(n,1), that is, a column vector, is denoted with a boldface lowercase variable x, while xT denotes its transpose, a row vector. An element of M(1,1) is a scalar, denoted a, b, c, f, t, etc. All functions are assumed to be of differentiability class C1 unless otherwise noted.

Vector calculus

Because the space M(n,1) is identified with the Euclidean space Rn and M(1,1) is identified with R, the notations developed here can accommodate the usual operations of vector calculus.

Matrix calculus

For the purposes of defining derivatives of simple functions, not much changes with matrix spaces; the space of n×m matrices is after all isomorphic as a vector space to Rnm. The three derivatives familiar from vector calculus have close analogues here, though beware the complications that arise in the identities below.

  • The tangent vector of a curve F : R → M(n,m)
    \frac{\partial \mathbf{F}}{\partial t} = \begin{bmatrix} \frac{\partial F_{1,1}}{\partial t} & \cdots & \frac{\partial F_{1,m}}{\partial t}\\ \vdots & \ddots & \vdots\\ \frac{\partial F_{n,1}}{\partial t} & \cdots & \frac{\partial F_{n,m}}{\partial t}\\ \end{bmatrix}.
  • The gradient of a scalar function f : M(n,m) → R
    \frac{\partial f}{\partial \mathbf{X}} = \begin{bmatrix} \frac{\partial f}{\partial X_{1,1}} & \cdots & \frac{\partial f}{\partial X_{n,1}}\\ \vdots & \ddots & \vdots\\ \frac{\partial f}{\partial X_{1,m}} & \cdots & \frac{\partial f}{\partial X_{n,m}}\\ \end{bmatrix}.
    Notice that the indexing of the gradient with respect to X is transposed as compared with the indexing of X. The directional derivative of f in the direction of matrix Y is given by
    \nabla_\mathbf{Y} f = \operatorname{tr} \left(\frac{\partial f}{\partial \mathbf{X}} \mathbf{Y}\right),
    where tr denotes the trace.
  • The differential or the matrix derivative of a function F : M(n,m) → M(p,q) is an element of M(p,q) ⊗ M(m,n), a fourth rank tensor (the reversal of m and n here indicates the dual space of M(n,m)). In short it is an m×n matrix each of whose entries is a p×q matrix.
    \frac{\partial\mathbf{F}} {\partial\mathbf{X}}= \begin{bmatrix} \frac{\partial\mathbf{F}}{\partial X_{1,1}} & \cdots & \frac{\partial \mathbf{F}}{\partial X_{n,1}}\\ \vdots & \ddots & \vdots\\ \frac{\partial\mathbf{F}}{\partial X_{1,m}} & \cdots & \frac{\partial \mathbf{F}}{\partial X_{n,m}}\\ \end{bmatrix},
    and note that each ∂F/∂Xi,j is a p×q matrix defined as above. Note also that this matrix has its indexing transposed; m rows and n columns. The derivative of F for an n×m matrix Y in M(n,m) is then
    d\mathbf{F}(\mathbf{Y}) = \operatorname{tr}\left(\frac{\partial\mathbf{F}} {\partial\mathbf{X}}\mathbf{Y}\right).
    Note that this definition encompasses all of the preceding definitions as special cases.
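Assuming the transposed indexing above, the trace formula for the directional derivative can be verified numerically with finite differences; the scalar function f below is an arbitrary smooth test function, not one from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4
X = rng.standard_normal((n, m))
Y = rng.standard_normal((n, m))

# An arbitrary smooth scalar function of a matrix.
def f(X):
    return np.sum(np.sin(X)) + np.sum(X ** 2)

# Article convention: the gradient is m×n, with entry (j, i) equal to df/dX[i, j].
eps = 1e-6
G = np.zeros((m, n))
for i in range(n):
    for j in range(m):
        E = np.zeros((n, m))
        E[i, j] = eps
        G[j, i] = (f(X + E) - f(X - E)) / (2 * eps)

# Directional derivative in the direction Y, computed two ways.
t = 1e-6
direct = (f(X + t * Y) - f(X - t * Y)) / (2 * t)
via_trace = np.trace(G @ Y)
assert abs(direct - via_trace) < 1e-4
```

The trace pairing works precisely because the gradient's indexing is transposed relative to X, so G @ Y is square.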

Identities

Note that matrix multiplication is not commutative, so in these identities, the order must not be changed.

Examples

Derivative of linear functions

This section lists some commonly used vector derivative formulas for linear equations evaluating to a vector.

\frac{\partial \; \textbf{a}^T\textbf{x}}{\partial \; \textbf{x}} = \frac{\partial \; \textbf{x}^T\textbf{a}}{\partial \; \textbf{x}} = \textbf{a}
\frac{\partial \; \textbf{A}\textbf{x}}{\partial \; \textbf{x}} = \textbf{A}^T
\frac{\partial \; \textbf{x}^T \textbf{A}}{\partial \; \textbf{x}} = \textbf{A}
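A finite-difference sketch of the first two identities, under the layout used here (the derivative of a vector function with respect to x puts the x-index on the rows):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3
a = rng.standard_normal(n)
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
eps = 1e-6

# Gradient of a^T x via central differences: should equal a.
g = np.zeros(n)
for i in range(n):
    e = np.zeros(n)
    e[i] = eps
    g[i] = (a @ (x + e) - a @ (x - e)) / (2 * eps)
assert np.allclose(g, a, atol=1e-6)

# Derivative of Ax with entry (i, j) = d(Ax)_j / dx_i: should equal A^T.
D = np.zeros((n, m))
for i in range(n):
    e = np.zeros(n)
    e[i] = eps
    D[i, :] = (A @ (x + e) - A @ (x - e)) / (2 * eps)
assert np.allclose(D, A.T, atol=1e-6)
```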

Derivative of quadratic functions

This section lists some commonly used vector derivative formulas for quadratic matrix equations evaluating to a scalar.

\frac{\partial \; \textbf{x}^T \textbf{A}\textbf{x}}{\partial \; \textbf{x}} =  (\textbf{A}^T + \textbf{A}) \textbf{x}
\frac{\partial \; (\textbf{A}\textbf{x} + \textbf{b})^T \textbf{C} (\textbf{D}\textbf{x} + \textbf{e})     }{\partial \; \textbf{x}} =  \textbf{A}^T \textbf{C}(\textbf{D}\textbf{x} + \textbf{e}) + \textbf{D}^T \textbf{C}^T (\textbf{A}\textbf{x} + \textbf{b})
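Both quadratic identities can likewise be checked against central differences; the shapes chosen for A, b, C, D, e below are arbitrary conforming choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)
eps = 1e-6

def grad(f, x):
    # Central-difference gradient of a scalar function of a vector.
    g = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        g[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return g

# d(x^T A x)/dx = (A^T + A) x
g1 = grad(lambda v: v @ A @ v, x)
assert np.allclose(g1, (A.T + A) @ x, atol=1e-5)

# d((Ax + b)^T C (Dx + e))/dx = A^T C (Dx + e) + D^T C^T (Ax + b)
m, p = 3, 5
A2 = rng.standard_normal((m, n))
b = rng.standard_normal(m)
C = rng.standard_normal((m, p))
D = rng.standard_normal((p, n))
e = rng.standard_normal(p)
f2 = lambda v: (A2 @ v + b) @ C @ (D @ v + e)
g2 = grad(f2, x)
expected = A2.T @ C @ (D @ x + e) + D.T @ C.T @ (A2 @ x + b)
assert np.allclose(g2, expected, atol=1e-5)
```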

Derivative of matrix traces

This section shows examples of matrix differentiation of common trace expressions. Here ∂f/∂X denotes the matrix with the same shape as X whose (i,j) entry is ∂f/∂Xi,j.

\frac{\partial \; \operatorname{tr}( \textbf{A}^T \textbf{X} \textbf{B})}{\partial \; \textbf{X}} = \frac{\partial \; \operatorname{tr}( \textbf{B}^T \textbf{X}^T \textbf{A})}{\partial \; \textbf{X}} = \textbf{A} \textbf{B}^T
\frac{\partial \; \operatorname{tr}( \textbf{A} \textbf{X} \textbf{B} \textbf{X}^T \textbf{C}) }{\partial \; \textbf{X}} = \textbf{A}^T \textbf{C}^T \textbf{X} \textbf{B}^T + \textbf{C} \textbf{A} \textbf{X} \textbf{B}
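Under the convention that ∂f/∂X has the same shape as X with (i,j) entry ∂f/∂Xi,j, these trace gradients can be checked by finite differences; note that the first gradient then evaluates to A Bᵀ (by expanding tr(AᵀXB) in components):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, s = 3, 4, 5  # shapes chosen so all products conform
X = rng.standard_normal((n, m))
eps = 1e-6

def matgrad(f, X):
    # Same-shape gradient: G[i, j] = df/dX[i, j], by central differences.
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            E = np.zeros_like(X)
            E[i, j] = eps
            G[i, j] = (f(X + E) - f(X - E)) / (2 * eps)
    return G

# tr(A^T X B) with A n×s, B m×s: the gradient is A B^T.
A = rng.standard_normal((n, s))
B = rng.standard_normal((m, s))
G = matgrad(lambda X: np.trace(A.T @ X @ B), X)
assert np.allclose(G, A @ B.T, atol=1e-5)

# tr(A X B X^T C) with A p×n, B m×m, C n×p: gradient A^T C^T X B^T + C A X B.
p = 2
A2 = rng.standard_normal((p, n))
B2 = rng.standard_normal((m, m))
C2 = rng.standard_normal((n, p))
G2 = matgrad(lambda X: np.trace(A2 @ X @ B2 @ X.T @ C2), X)
assert np.allclose(G2, A2.T @ C2.T @ X @ B2.T + C2 @ A2 @ X @ B2, atol=1e-5)
```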

Origin of the formulas

Definition of the Fréchet derivative

Most of these formulas are best understood in the context of the Fréchet derivative. For E and F Banach spaces, the function

\begin{matrix}     f: & E & \rightarrow & F \\        & x & \mapsto & f(x)     \end{matrix}

is said to be differentiable at a point a of an open set U \subseteq E if there exists a bounded linear map Da from E to F such that:

\lim_{h \to 0} \frac{ \| f(a + h) - f(a) - D_a(h) \| }{ \|h\| } = 0.

Intuitively, f is differentiable at the point a if it can be approximated there by a linear map (to be precise, the Fréchet derivative requires the linear map to be bounded, which is automatic in finite dimension for any norm). When f is differentiable at a, this linear map is usually denoted df(a); if it exists, it is unique. If f is differentiable at every point of U and the map a ↦ df(a) is continuous, then f is said to be of class C1.

Example

Take E = F = R. If f has a derivative at the point a, then, by the definition of the usual derivative for real functions:

\lim_{h \to 0} \frac{ | f(a + h) - f(a) - h\,f'(a)| }{ |h| } = 0.

Thus, the linear map

\begin{matrix}D_a: & R & \rightarrow & R \\ & t & \mapsto & t\,f'(a) \end{matrix}

fulfills the requirements of the Fréchet derivative definition, and we see that Fréchet differentiability and the usual differentiability of scalar functions are equivalent.
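For a concrete scalar illustration (with the arbitrary choice f(x) = x3 at a = 1.5), the Fréchet ratio can be watched shrinking numerically:

```python
# For f(x) = x**3 at a = 1.5, the candidate derivative map is h -> 3a^2 h.
f = lambda x: x ** 3
a = 1.5
fprime = 3 * a ** 2

# The Fréchet ratio |f(a+h) - f(a) - f'(a)h| / |h| shrinks linearly in |h|.
ratios = [abs(f(a + h) - f(a) - fprime * h) / abs(h)
          for h in (1e-1, 1e-2, 1e-3, 1e-4)]
assert all(r2 < r1 for r1, r2 in zip(ratios, ratios[1:]))
assert ratios[-1] < 1e-3
```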

Partial derivatives

When E = Rn, there is a link between the Fréchet derivative and the partial derivatives. If f is differentiable at a, then the partial derivatives exist, and the linear map defined as follows:

\begin{matrix}     l_a: & \Bbb{R}^n & \rightarrow & F \\        & h & \mapsto & \sum_{i=1}^{n}{\frac{\partial f}{\partial x_i}(a)h_i}     \end{matrix}

is equal to df(a), where h = (h1, ..., hn) is a vector of E written in the canonical basis of Rn. Take care, though: the existence of the partial derivatives does not imply the existence of the Fréchet derivative. Rather, f is of class C1 if and only if all of its partial derivatives exist and are continuous; the continuity is essential.

This correspondence shows that in most cases the formulas in this article can be understood as matrix representations of the underlying linear maps.

Functions from Rn to R

Let E = Rn and F = R. If f is Fréchet differentiable at the point a, then the function

\begin{matrix}     D_a: & \Bbb{R}^n & \rightarrow & \Bbb{R} \\        & x & \mapsto & \nabla f(a) \cdot x     \end{matrix}

is the Frechet derivative of f at the point a.

The gradient thus gives the Fréchet derivative whenever the partial derivatives are continuous.
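A numerical sketch of this fact, using an arbitrary smooth test function (so that its partial derivatives are continuous):

```python
import numpy as np

rng = np.random.default_rng(4)
a = rng.standard_normal(3)

# Arbitrary smooth test function and its analytic gradient.
f = lambda x: np.sum(np.exp(x) * np.sin(x))
grad = np.exp(a) * (np.sin(a) + np.cos(a))

# The Fréchet ratio |f(a+h) - f(a) - grad·h| / ||h|| shrinks with ||h||.
ratios = []
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    h = t * np.ones(3)
    ratios.append(abs(f(a + h) - f(a) - grad @ h) / np.linalg.norm(h))
assert ratios[-1] < 1e-2 and ratios[-1] < ratios[0]
```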

Application of the Fréchet derivative to vector and matrix derivative formulas

The Fréchet derivative makes it easy to find many derivatives. The following examples show how the basic formulas from above can be recovered, after which the chain rule and product rule can be used to deduce the others.

Linear functions between vector spaces

For the function f(x) = Ax, where A is a fixed m×n matrix and x a vector of Rn:

\begin{matrix}     f(x+h) - f(x) & = & A(x+h) - Ax \\                   & = & Ah \\     \end{matrix}

Thus if we define the linear map:

\begin{matrix}     D_x : & R^n & \rightarrow & R^m \\           & h   & \mapsto     & Ah     \end{matrix}

Then it is easy to see that Dx fulfills the Fréchet derivative definition, and thus, by uniqueness of the differential, df(x)(h) = Ah.

Quadratic functions on vector spaces

Let us find the derivative formula for the function f(x) = xtAx, for A a fixed matrix. Writing h for a small vector (small meaning that h lies in a neighborhood of 0), we want to find a linear map lx from Rn to R such that f(x+h) − f(x) − lx(h) is negligible compared to \|h\|:

\begin{matrix}     f(x+h) - f(x) & = & x^t(A + A^t)h + h^tAh     \end{matrix}

Defining lx by lx(h) = xt(A + At)h for all h in Rn:

\begin{matrix}          f(x+h) - f(x) - l_x(h) & = & h^tAh     \end{matrix}

Then lx fulfills the Fréchet derivative definition: since |h^tAh| \le \|A\| \|h\|^2 for the operator norm \|A\|, the ratio \frac{|h^tAh|}{\|h\|} \le \|A\| \|h\| tends toward 0 as h tends to 0.
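The bound |htAh| ≤ ‖A‖‖h‖2, with ‖A‖ the spectral norm, is what makes the residual vanish faster than ‖h‖; a numerical sketch with arbitrary random data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)
f = lambda v: v @ A @ v
l_x = lambda h: x @ (A + A.T) @ h   # candidate derivative at x

specnorm = np.linalg.norm(A, 2)     # spectral (operator) norm of A
for t in [1e-1, 1e-2, 1e-3]:
    h = t * rng.standard_normal(n)
    resid = abs(f(x + h) - f(x) - l_x(h))   # equals |h^T A h|
    # |h^T A h| <= ||A|| ||h||^2, so resid / ||h|| <= ||A|| ||h|| -> 0.
    assert resid <= specnorm * np.linalg.norm(h) ** 2 + 1e-12
```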

Matrix function of a matrix

A being an m×n matrix, and f being

\begin{matrix}     f: & M_{\Bbb{R}}(m, n) & \rightarrow &  M_{\Bbb{R}}(n, n) \\        & A & \mapsto & A^t A     \end{matrix}

It is easy to find that the function

\begin{matrix}     D_A: & M_{\Bbb{R}}(m, n) & \rightarrow &  M_{\Bbb{R}}(n, n) \\        & H & \mapsto & A^t H + H^t A     \end{matrix}

is the differential: indeed f(A+H) − f(A) − D_A(H) = H^tH, and \frac{\|H^tH\|}{\|H\|} \le \|H\| tends toward 0 as \|H\| tends to 0; conclude by the uniqueness of the differential.
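A quick numerical confirmation, with arbitrary random matrices, that the residual f(A+H) − f(A) − DA(H) is exactly HtH and hence second order in H:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 5, 3
A = rng.standard_normal((m, n))
H = 1e-6 * rng.standard_normal((m, n))

f = lambda M: M.T @ M
candidate = A.T @ H + H.T @ A          # the claimed differential D_A(H)

# The residual equals H^T H, which is second order in ||H||.
resid = f(A + H) - f(A) - candidate
assert np.allclose(resid, H.T @ H, atol=1e-13)
assert np.linalg.norm(resid) < 2 * np.linalg.norm(H) ** 2
```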

Inverse of a matrix

Here, using the definition directly is awkward; instead, differentiating the identity AA^{-1} = I with the product rule gives the answer quickly:

\begin{matrix}     AA^{-1} & = & I \\     d(AA^{-1}) & = & dI = 0 \\     dA\,A^{-1} + A\,d(A^{-1}) & = & 0 \\     A\,d(A^{-1}) & = & - dA\,A^{-1} \\     d(A^{-1}) & = & - A^{-1}\,dA\,A^{-1} \\     \end{matrix}

Since matrix factors do not commute, the result is -A^{-1}\,dA\,A^{-1} and not the scalar-style -A^{-2}\,dA. (This method is not fully rigorous: the differentiability of A ↦ A^{-1} is assumed rather than proved before computing.)
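Numerically, the first-order change of the inverse matches −A⁻¹ dA A⁻¹ and differs from the scalar-style −A⁻² dA (the two agree only when the factors commute, as in the scalar case); a sketch with an arbitrary well-conditioned matrix:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)  # shift keeps A well-conditioned
dA = 1e-6 * rng.standard_normal((n, n))

inv = np.linalg.inv
# First-order change of the inverse vs. the formula -A^{-1} dA A^{-1}.
actual = inv(A + dA) - inv(A)
predicted = -inv(A) @ dA @ inv(A)
assert np.allclose(actual, predicted, atol=1e-10)

# The naive scalar-style answer -A^{-2} dA does NOT match in general.
wrong = -inv(A) @ inv(A) @ dA
assert not np.allclose(wrong, actual, atol=1e-10)
```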

Relation to other derivatives

There are other commonly used definitions for derivatives in multivariable spaces. For topological vector spaces, the most familiar is the Fréchet derivative, which makes use of a norm. In the case of matrix spaces, there are several matrix norms available, all of which are equivalent since the space is finite-dimensional. However, the matrix derivative defined in this article makes no use of any topology on M(n,m). It is defined solely in terms of partial derivatives, which are sensitive only to variations in a single dimension at a time, and thus are not bound by the full differentiable structure of the space. For example, it is possible for a map to have all partial derivatives exist at a point, and yet not be continuous in the topology of the space. See for example Hartogs' theorem. The matrix derivative is not a special case of the Fréchet derivative for matrix spaces, but rather a convenient notation for keeping track of many partial derivatives for doing calculations; in the case that a function is Fréchet differentiable, the two derivatives will agree.

Since the matrix derivative does not make use of a topology, it makes sense for any real vector space. Thus it can be defined for spaces where the Fréchet derivative or the more general Gâteaux derivative cannot be, although since it is not equivalent, it cannot properly be referred to as a generalization.

Usages

Matrix calculus is used, among other places, for deriving optimal stochastic estimators, often involving the use of Lagrange multipliers.

Alternatives

The tensor index notation with its Einstein summation convention is very similar to the matrix calculus, except one writes only a single component at a time. It has the advantage that one can easily manipulate arbitrarily high rank tensors, whereas tensors of rank higher than two are quite unwieldy with matrix notation. Note that a matrix can be considered simply a tensor of rank two.

External references

  • Matrix calculus appendix from the Introduction to Finite Element Methods book at the University of Colorado at Boulder.
  • Matrix Calculus, Matrix Reference Manual, Imperial College London.