Matrix exponential

In mathematics, the matrix exponential is a function on square matrices analogous to the ordinary exponential function. Abstractly, the matrix exponential gives the connection between a matrix Lie algebra and the corresponding Lie group.

Let X be an n×n real or complex matrix. The exponential of X, denoted by e^X or exp(X), is the n×n matrix given by the power series

e^X = \sum_{k=0}^\infty{X^k \over k!}.

The above series always converges, so the exponential of X is well-defined. Note that if X is a 1×1 matrix the matrix exponential of X corresponds with the ordinary exponential of X thought of as a number.
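
For illustration, here is a minimal numerical sketch (in Python, assuming NumPy and SciPy are available; the matrix X is an arbitrary choice) that sums a truncated version of this series and compares it with SciPy's expm, which uses a more robust algorithm:

import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # an arbitrary square matrix

def expm_series(X, terms=30):
    """Approximate exp(X) by the partial sum of X^k / k! for k < terms."""
    result = np.eye(X.shape[0])
    term = np.eye(X.shape[0])        # holds X^k / k!
    for k in range(1, terms):
        term = term @ X / k
        result = result + term
    return result

print(np.allclose(expm_series(X), expm(X)))   # True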

Properties

Let X and Y be n×n complex matrices and let a and b be arbitrary complex numbers. We denote the n×n identity matrix by I and the zero matrix by 0. The matrix exponential satisfies the following properties:

  • e^0 = I;
  • e^{aX}e^{bX} = e^{(a+b)X};
  • e^Xe^{-X} = I, so e^X is always invertible;
  • if XY = YX, then e^Xe^Y = e^Ye^X = e^{X+Y};
  • if Y is invertible, then e^{YXY^{-1}} = Ye^XY^{-1};
  • \exp(X^T) = (\exp X)^T and \exp(X^*) = (\exp X)^*;
  • \det(e^X) = e^{\operatorname{tr}(X)}.

Linear differential equations

One of the reasons for the importance of the matrix exponential is that it can be used to solve systems of linear ordinary differential equations. Indeed, it follows from equation (1) below that the solution of

\frac{d}{dt} y(t) = Ay(t), \quad y(0) = y_0,

where A is a matrix, is given by

y(t) = e^{At} y_0. \,

The matrix exponential can also be used to solve the inhomogeneous equation

\frac{d}{dt} y(t) = Ay(t) + z(t), \quad y(0) = y_0.

See the section on applications below for examples.
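
As a quick numerical sketch of the homogeneous case (Python with NumPy/SciPy; the matrix A and the initial value below are arbitrary choices), the solution e^{At}y_0 can be checked against a general-purpose ODE integrator:

import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                  # arbitrary example matrix
y0 = np.array([1.0, 0.0])
t_final = 1.5

y_exact = expm(A * t_final) @ y0              # y(t) = e^{At} y0
sol = solve_ivp(lambda t, y: A @ y, (0.0, t_final), y0,
                rtol=1e-10, atol=1e-12)
print(np.allclose(y_exact, sol.y[:, -1], atol=1e-6))   # True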

There is no closed-form solution for differential equations of the form

\frac{d}{dt} y(t) = A(t) \, y(t), \quad y(0) = y_0,

where A(t) is not constant, but the Magnus series gives the solution as an infinite sum.

The exponential of sums

We know that the exponential function satisfies e^{x+y} = e^xe^y for any numbers x and y. The same goes for commuting matrices: If the matrices X and Y commute (meaning that XY = YX), then

e^{X+Y} = e^Xe^Y. \,

However, if they do not commute, then the above equality does not necessarily hold. In that case, we can use the Baker-Campbell-Hausdorff formula to compute e^{X+Y}.
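
A quick numerical illustration (the example matrices below are arbitrary choices): the identity holds for the commuting pair, but fails for the non-commuting pair.

import numpy as np
from scipy.linalg import expm

X = np.array([[1.0, 2.0],
              [0.0, 1.0]])
Y = 3.0 * X                                   # a multiple of X, so XY = YX
print(np.allclose(expm(X + Y), expm(X) @ expm(Y)))   # True

U = np.array([[0.0, 1.0],
              [0.0, 0.0]])
V = np.array([[1.0, 0.0],
              [0.0, 2.0]])                    # UV != VU
print(np.allclose(expm(U + V), expm(U) @ expm(V)))   # False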

The exponential map

Note that the exponential of a matrix is always a non-singular matrix. The inverse of e^X is given by e^{-X}. This is analogous to the fact that the exponential of a complex number is always nonzero. The matrix exponential then gives us a map

\exp \colon M_n(\mathbb C) \to \mbox{GL}(n,\mathbb C)

from the space of all n×n matrices to the general linear group, i.e. the group of all non-singular matrices. In fact, this map is surjective, which means that every non-singular matrix can be written as the exponential of some other matrix (for this, it is essential to consider the field C of complex numbers and not R). The matrix logarithm gives an inverse to this map.
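
As a sketch of this inverse relationship (using SciPy's logm, which computes a matrix logarithm; the matrix below is an arbitrary non-singular example):

import numpy as np
from scipy.linalg import expm, logm

M = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # non-singular, so a logarithm exists
L = logm(M)
print(np.allclose(expm(L), M))      # True: exp(log M) recovers M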

For any two matrices X and Y, we have

\| e^{X+Y} - e^X \| \le \|Y\| e^{\|X\|} e^{\|Y\|},

where || · || denotes an arbitrary matrix norm. It follows that the exponential map is continuous and Lipschitz continuous on compact subsets of M_n(C).
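
A spot check of this bound in the spectral norm, which is submultiplicative (the matrices are random; the seed is an arbitrary choice):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
Y = rng.standard_normal((4, 4))

lhs = np.linalg.norm(expm(X + Y) - expm(X), 2)
rhs = (np.linalg.norm(Y, 2)
       * np.exp(np.linalg.norm(X, 2))
       * np.exp(np.linalg.norm(Y, 2)))
print(lhs <= rhs)   # True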

The map

t \mapsto e^{tX}, \qquad t \in \mathbb R

defines a smooth curve in the general linear group which passes through the identity element at t = 0. In fact, this gives a one-parameter subgroup of the general linear group since

e^{tX}e^{sX} = e^{(t+s)X}.\,

The derivative of this curve (or tangent vector) at a point t is given by

\frac{d}{dt}e^{tX} = Xe^{tX}. \qquad (1)

The derivative at t = 0 is just the matrix X, which is to say that X generates this one-parameter subgroup.
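
Equation (1) can be checked numerically with a central finite difference (the matrix X and the point t below are arbitrary choices):

import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
t, h = 0.7, 1e-6

numeric = (expm((t + h) * X) - expm((t - h) * X)) / (2 * h)
exact = X @ expm(t * X)               # right-hand side of equation (1)
print(np.allclose(numeric, exact, atol=1e-8))   # True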

More generally,

\frac{d}{dt}e^{X(t)} = \int_0^1 e^{(1-\alpha) X(t)} \frac{dX(t)}{dt} e^{\alpha X(t)}\,d\alpha.

Computing the matrix exponential

Diagonalizable case

If a matrix is diagonal:

A=\begin{bmatrix} a_1 & 0 & \ldots & 0 \\ 0 & a_2 & \ldots & 0  \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & a_n \end{bmatrix},

then its exponential can be obtained by just exponentiating every entry on the main diagonal:

e^A=\begin{bmatrix} e^{a_1} & 0 & \ldots & 0 \\ 0 & e^{a_2} & \ldots & 0  \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & e^{a_n} \end{bmatrix}.

This also allows one to exponentiate diagonalizable matrices: if A = UDU^{-1} and D is diagonal, then e^A = Ue^DU^{-1}. Application of Sylvester's matrix theorem yields the same result.
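
A sketch of the diagonalizable case (the matrix A below is an arbitrary example with distinct eigenvalues, hence diagonalizable):

import numpy as np
from scipy.linalg import expm

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                    # eigenvalues 2 and 5
w, U = np.linalg.eig(A)                       # A = U diag(w) U^{-1}
eA = U @ np.diag(np.exp(w)) @ np.linalg.inv(U)
print(np.allclose(eA, expm(A)))               # True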

Nilpotent case

A matrix N is nilpotent if N^q = 0 for some integer q. In this case, the matrix exponential e^N can be computed directly from the series expansion, as the series terminates after a finite number of terms:

e^N = I + N + \frac{1}{2}N^2 + \frac{1}{6}N^3 + \cdots + \frac{1}{(q-1)!}N^{q-1}.
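
For example (a strictly upper triangular matrix is nilpotent; for the arbitrary N below, N^3 = 0, so the sum has three terms):

import numpy as np
from scipy.linalg import expm

N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])       # strictly upper triangular: N^3 = 0
eN = np.eye(3) + N + N @ N / 2        # I + N + N^2/2!, an exact finite sum
print(np.allclose(eN, expm(N)))       # True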

General case

An arbitrary matrix X (over an algebraically closed field) can be expressed uniquely as a sum

X = A + N \,

where

  • A is diagonalizable
  • N is nilpotent
  • A commutes with N (i.e. AN = NA)

This means we can compute the exponential of X by reducing to the previous two cases:

e^X = e^{A+N} = e^A e^N. \,

Note that we need the commutativity of A and N for the last step to work.

Another (closely related) method is to work with the Jordan form of X. Suppose J is the Jordan form of X, with P the transition matrix. Then

e^{X}=Pe^{J}P^{-1}.\,

Also, since

J=J_{a_1}(\lambda_1)\oplus J_{a_2}(\lambda_2)\oplus\cdots\oplus J_{a_k}(\lambda_k),
e^{J}\, = \exp \big( J_{a_1}(\lambda_1)\oplus J_{a_2}(\lambda_2)\oplus\cdots\oplus J_{a_k}(\lambda_k) \big)
= \exp \big( J_{a_1}(\lambda_1) \big) \oplus \exp \big( J_{a_2}(\lambda_2) \big) \oplus\cdots\oplus \exp \big( J_{a_k}(\lambda_k) \big).

Therefore, we need only know how to compute the matrix exponential of a Jordan block. But each Jordan block is of the form

J_{a}(\lambda) = \lambda I + N \,

where N is the nilpotent matrix with ones on the superdiagonal and zeros elsewhere. The matrix exponential of this block is given by

e^{\lambda I + N} = e^{\lambda}e^N. \,
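
The whole Jordan-form route can also be carried out symbolically, for instance with SymPy (the matrix X below is an arbitrary example whose Jordan form is a single 2×2 block):

import sympy as sp

X = sp.Matrix([[5, 4],
               [-1, 1]])              # repeated eigenvalue 3, one Jordan block
P, J = X.jordan_form()                # X = P*J*P**(-1)
eX = P * J.exp() * P.inv()            # e^X = P e^J P^{-1}
print(sp.simplify(eX - X.exp()))      # the zero matrix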

Calculations

Consider the matrix

B=\begin{bmatrix} 21 & 17 & 6 \\ -5 & -1 & -6 \\ 4 & 4 & 16 \end{bmatrix}

which has Jordan form

J=\begin{bmatrix} 16 & 1 & 0 \\ 0 & 16 & 0 \\ 0 & 0 & 4 \end{bmatrix}

and transition matrix (so that B = PJP^{-1})

P=\begin{bmatrix} 1 & {5 \over 8} & -1 \\ -1 & -{1\over 8} & 1 \\ 2 & 0 & 0 \end{bmatrix}.

Now,

J=J_2(16)\oplus J_1(4)

and

e^B = P e^{J} P^{-1} = P (e^{J_2(16)} \oplus e^{J_1(4)} ) P^{-1}.

So,

\exp \left( 16I+\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \right) = e^{16}\left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + {1 \over 2!}\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}+\cdots\right)=\begin{bmatrix} e^{16} & e^{16} \\ 0 & e^{16} \end{bmatrix}

The exponential of a 1×1 matrix is just the ordinary exponential of its single entry, so e^{J_1(4)} = e^4. Therefore,

e^B = P\begin{bmatrix} e^{16} & e^{16} & 0 \\ 0 & e^{16} & 0 \\ 0 & 0 & e^4 \end{bmatrix}P^{-1} = {1\over 4}\begin{bmatrix} 13e^{16} - e^4 & 13e^{16} - 5e^4 & 2e^{16} - 2e^4 \\ -9e^{16} + e^4 & -9e^{16} + 5e^4 & -2e^{16} + 2e^4 \\ 16e^{16} & 16e^{16} & 4e^{16} \end{bmatrix}

Clearly, calculating the Jordan form and evaluating the exponential this way is very tedious. In applications it often suffices to calculate the action of the exponential matrix upon some vector, and there are other methods available to achieve this.
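
The closed form above can be confirmed numerically, here as a quick check against SciPy's expm:

import numpy as np
from scipy.linalg import expm

B = np.array([[21.0, 17.0, 6.0],
              [-5.0, -1.0, -6.0],
              [4.0, 4.0, 16.0]])
e4, e16 = np.exp(4.0), np.exp(16.0)
closed_form = np.array([
    [13*e16 - e4,   13*e16 - 5*e4,  2*e16 - 2*e4],
    [-9*e16 + e4,   -9*e16 + 5*e4,  -2*e16 + 2*e4],
    [16*e16,        16*e16,         4*e16],
]) / 4.0
print(np.allclose(closed_form, expm(B)))   # True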

Applications

Linear differential equations

The matrix exponential has applications to systems of linear differential equations. Recall from earlier in this article that a differential equation of the form

y′ = Cy

has solution e^{Cx}y(0). If we consider the vector

\mathbf{y}(x) = \begin{pmatrix} y_1(x) \\ \vdots \\y_n(x) \end{pmatrix}

we can express a system of coupled linear differential equations as

\mathbf{y}'(x) = A\mathbf{y}(x)+\mathbf{b}

If we use the integrating factor e^{-Ax} and multiply throughout, we obtain

e^{-Ax}\mathbf{y}'(x)-e^{-Ax}A\mathbf{y} = e^{-Ax}\mathbf{b}
\frac{d}{dx} \big(e^{-Ax}\mathbf{y}\big) = e^{-Ax}\mathbf{b}

If we can calculate e^{Ax}, then we can obtain the solution to the system by integrating both sides of this equation.

Example (homogeneous)

Say we have the system

\begin{matrix} x' &=& 2x&-y&+z \\ y' &=&   &3y&-z \\ z' &=& 2x&+y&+3z \end{matrix}

We have the associated matrix

M=\begin{bmatrix} 2 & -1 &  1 \\ 0 &  3 & -1 \\ 2 &  1 &  3 \end{bmatrix}

The matrix M has eigenvalues 2, 2 and 4, with a single 2×2 Jordan block for the eigenvalue 2. Computing the matrix exponential by the Jordan form method described above gives

e^{tM}={1 \over 2}\begin{bmatrix} (1-2t)e^{2t}+e^{4t} & -2te^{2t} & e^{4t}-e^{2t} \\ (1+2t)e^{2t}-e^{4t} & 2(t+1)e^{2t} & e^{2t}-e^{4t} \\ (2t-1)e^{2t}+e^{4t} & 2te^{2t} & e^{2t}+e^{4t} \end{bmatrix}

so the general solution of the system is (absorbing the constant factor 1/2 into the constants C_i)

\begin{bmatrix}x \\y \\ z\end{bmatrix}= C_1\begin{bmatrix}(1-2t)e^{2t}+e^{4t} \\ (1+2t)e^{2t}-e^{4t} \\ (2t-1)e^{2t}+e^{4t}\end{bmatrix} +C_2\begin{bmatrix}-2te^{2t}\\2(t+1)e^{2t}\\2te^{2t}\end{bmatrix} +C_3\begin{bmatrix}e^{4t}-e^{2t}\\ e^{2t}-e^{4t}\\ e^{2t}+e^{4t}\end{bmatrix}

that is,

\begin{matrix} x &=& C_1((1-2t)e^{2t}+e^{4t}) + C_2(-2te^{2t}) + C_3(e^{4t}-e^{2t})\\ y &=& C_1((1+2t)e^{2t}-e^{4t}) + C_2(2(t+1)e^{2t}) + C_3(e^{2t}-e^{4t})\\ z &=& C_1((2t-1)e^{2t}+e^{4t}) + C_2(2te^{2t}) + C_3(e^{2t}+e^{4t})\end{matrix}
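
The expression for e^{tM} can be spot-checked numerically at any particular value of t (the sample point below is an arbitrary choice):

import numpy as np
from scipy.linalg import expm

M = np.array([[2.0, -1.0, 1.0],
              [0.0, 3.0, -1.0],
              [2.0, 1.0, 3.0]])
t = 0.5                                         # arbitrary sample point
e2, e4 = np.exp(2*t), np.exp(4*t)
etM = 0.5 * np.array([
    [(1 - 2*t)*e2 + e4,  -2*t*e2,       e4 - e2],
    [(1 + 2*t)*e2 - e4,  2*(t + 1)*e2,  e2 - e4],
    [(2*t - 1)*e2 + e4,  2*t*e2,        e2 + e4],
])
print(np.allclose(etM, expm(t * M)))            # True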

Inhomogeneous case - variation of parameters

For the inhomogeneous case, we can use a method akin to variation of parameters. We seek a particular solution of the form \mathbf{y}_p(t) = e^{tA}\mathbf{z}(t):

\mathbf{y}_p' = (e^{tA})'\mathbf{z}(t)+e^{tA}\mathbf{z}'(t)
= Ae^{tA}\mathbf{z}(t)+e^{tA}\mathbf{z}'(t)
= A\mathbf{y}_p(t)+e^{tA}\mathbf{z}'(t)

For \mathbf{y}_p to be a solution:

e^{tA}\mathbf{z}'(t) = \mathbf{b}(t)
\mathbf{z}'(t) = (e^{tA})^{-1}\mathbf{b}(t) = e^{-tA}\mathbf{b}(t)
\mathbf{z}(t) = \int_0^t e^{-uA}\mathbf{b}(u)\,du+\mathbf{c}

So,

\mathbf{y}_p = e^{tA}\int_0^t e^{-uA}\mathbf{b}(u)\,du+e^{tA}\mathbf{c}
= \int_0^t e^{(t-u)A}\mathbf{b}(u)\,du+e^{tA}\mathbf{c}

where c is determined by the initial conditions of the problem.

Example (inhomogeneous)

Say we have the system

\begin{matrix} x' &=& 2x&-y&+z&+e^{2t} \\ y' &=&   &3y&-z& \\ z' &=& 2x&+y&+3z&+e^{2t} \end{matrix}

So we then have

M=\begin{bmatrix} 2 & -1 &  1 \\ 0 &  3 & -1 \\ 2 &  1 &  3 \end{bmatrix}

and

\mathbf{b}=e^{2t}\begin{bmatrix}1 \\0\\1\end{bmatrix}

From before, we already have the general solution to the homogeneous equation. Since the sum of the homogeneous and particular solutions gives the general solution to the inhomogeneous problem, we now only need to find the particular solution (via variation of parameters).

From the formula above (carrying out the matrix-vector multiplication in the integrand using the expression for e^{tA} with t replaced by -u), we have

\mathbf{y}_p = e^{tA}\int_0^t e^{-uA}\begin{bmatrix}e^{2u} \\0\\e^{2u}\end{bmatrix}\,du+e^{tA}\mathbf{c}
= e^{tA}\int_0^t \begin{bmatrix} u+e^{-2u} \\ 1-u-e^{-2u} \\ e^{-2u}-u \end{bmatrix}\,du+e^{tA}\mathbf{c}
= e^{tA}\begin{bmatrix} {t^2 \over 2}+{1-e^{-2t} \over 2} \\ t-{t^2 \over 2}-{1-e^{-2t} \over 2} \\ {1-e^{-2t} \over 2}-{t^2 \over 2} \end{bmatrix}+e^{tA}\mathbf{c}
= {1 \over 2}\begin{bmatrix} e^{4t}-(t^2+1)e^{2t} \\ (t+1)^2 e^{2t}-e^{4t} \\ (t^2-1)e^{2t}+e^{4t} \end{bmatrix}+e^{tA}\mathbf{c}

Adding this particular solution to the general solution of the homogeneous equation found above yields the general solution of the inhomogeneous problem, with c determined by the initial conditions.
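
As a sanity check, the particular solution (taking c = 0, so that y_p(0) = 0) can be compared against direct numerical integration of the system:

import numpy as np
from scipy.integrate import solve_ivp

M = np.array([[2.0, -1.0, 1.0],
              [0.0, 3.0, -1.0],
              [2.0, 1.0, 3.0]])
b = lambda t: np.exp(2*t) * np.array([1.0, 0.0, 1.0])

def y_particular(t):
    e2, e4 = np.exp(2*t), np.exp(4*t)
    return 0.5 * np.array([e4 - (t**2 + 1)*e2,
                           (t + 1)**2 * e2 - e4,
                           (t**2 - 1)*e2 + e4])

sol = solve_ivp(lambda t, y: M @ y + b(t), (0.0, 1.0), np.zeros(3),
                rtol=1e-10, atol=1e-12)
print(np.allclose(sol.y[:, -1], y_particular(1.0), atol=1e-6))   # True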
