Logarithm of a matrix

From Wikipedia, the free encyclopedia

In mathematics, the logarithm of a matrix is a generalization of the scalar logarithm to matrices. It is in some sense an inverse function of the matrix exponential.

Definition

A matrix B is a logarithm of a given matrix A if the matrix exponential of B is A:

e^B = A.
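This definition can be checked numerically. The sketch below uses SciPy's `logm` and `expm` to compute a logarithm B of a sample matrix A (the particular matrix is an illustrative choice, not from the article) and verifies that exponentiating B recovers A:

```python
import numpy as np
from scipy.linalg import expm, logm

# An invertible sample matrix (eigenvalues 2 and 3).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

B = logm(A)                     # one matrix logarithm of A
assert np.allclose(expm(B), A)  # e^B = A, as the definition requires
```

Note that `logm` returns one particular logarithm (the principal one); as discussed below, the matrix logarithm is not unique.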

Properties

A matrix has a logarithm if and only if it is invertible. However, this logarithm may be complex even if all the entries in the matrix are real numbers. In any case, the logarithm is not unique.

Calculating the logarithm of a diagonalizable matrix

A method for finding ln A for a diagonalizable matrix A is the following:

1. Find the matrix V of eigenvectors of A (each column of V is an eigenvector of A).
2. Find the inverse V^{-1} of V.
3. Let

   A' = V^{-1} A V.

   Then A' will be a diagonal matrix whose diagonal elements are eigenvalues of A.
4. Replace each diagonal element of A' by its (natural) logarithm to obtain ln A'.
5. Then

   \ln A = V (\ln A') V^{-1}.
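The steps above can be sketched in a few lines of NumPy. The function name `log_diagonalizable` is illustrative, and the sketch assumes A is diagonalizable; the complex logarithm is used on the eigenvalues so that real matrices with complex or negative eigenvalues are handled:

```python
import numpy as np

def log_diagonalizable(A):
    """Matrix logarithm via eigendecomposition (sketch; assumes A is diagonalizable)."""
    eigvals, V = np.linalg.eig(A)        # columns of V are eigenvectors of A
    # A' = V^{-1} A V is diagonal with the eigenvalues on its diagonal,
    # so ln A' is diagonal with entries ln(lambda_i); use the complex log
    # since eigenvalues of a real matrix may be complex or negative.
    log_Aprime = np.diag(np.log(eigvals.astype(complex)))
    return V @ log_Aprime @ np.linalg.inv(V)

A = np.diag([2.0, 3.0])
L = log_diagonalizable(A)   # diagonal matrix with entries ln 2 and ln 3
```

Exponentiating the result (e.g. with `scipy.linalg.expm`) recovers A, matching the definition above.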

The logarithm of A may be a complex matrix even when A is real; this follows from the fact that a matrix with real entries may nevertheless have complex eigenvalues (as is the case, for example, for rotation matrices). The non-uniqueness of the logarithm of a matrix follows from the non-uniqueness of the logarithm of a complex number.

The logarithm of a non-diagonalizable matrix

The algorithm illustrated above does not work for non-diagonalizable matrices, such as

\begin{bmatrix}1 & 1\\ 0 & 1\end{bmatrix}.

For such a matrix, one needs to find its Jordan decomposition and, rather than computing the logarithm of the diagonal entries as above, calculate the logarithm of each Jordan block.

The latter is accomplished by noticing that one can write a Jordan block as

B=\begin{pmatrix} \lambda & 1       & 0       & 0      & \cdots  & 0 \\ 0       & \lambda & 1       & 0      & \cdots  & 0 \\ 0       & 0       & \lambda & 1      & \cdots  & 0 \\ \vdots  & \vdots  & \vdots  & \ddots & \ddots  & \vdots \\ 0       & 0       & 0       & 0      & \lambda & 1       \\ 0       & 0       & 0       & 0      & 0       & \lambda \\\end{pmatrix} = \lambda \begin{pmatrix} 1 & \lambda^{-1}       & 0       & 0      & \cdots  & 0 \\ 0       & 1 & \lambda^{-1}       & 0      & \cdots  & 0 \\ 0       & 0       & 1 & \lambda^{-1}      & \cdots  & 0 \\ \vdots  & \vdots  & \vdots  & \ddots & \ddots  & \vdots \\ 0       & 0       & 0       & 0      & 1 & \lambda^{-1}       \\ 0       & 0       & 0       & 0      & 0       & 1 \\\end{pmatrix}=\lambda(I+K)

where K is a matrix with zeros on and under the main diagonal. (The number λ is nonzero by the assumption that the matrix whose logarithm one attempts to take is invertible.)

Then, by the formula

\ln (1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots

one gets

\ln B=\ln \big(\lambda(I+K)\big)=\ln (\lambda I) +\ln (I+K)= (\ln \lambda) I + K-\frac{K^2}{2}+\frac{K^3}{3}-\frac{K^4}{4}+\cdots

This series does not converge for a general matrix K, just as the scalar series does not converge for every real number. However, this particular K is a nilpotent matrix, so the series actually has only a finite number of nonzero terms (K^m = 0 when m is the dimension of K).

Using this approach one finds

\ln \begin{bmatrix}1 & 1\\ 0 & 1\end{bmatrix} =\begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}.
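This computation can be sketched in NumPy. The function name `log_jordan_block` is illustrative; the sketch assumes its input is a single Jordan block with a positive real eigenvalue, and evaluates the finite series for ln(I + K) directly:

```python
import numpy as np

def log_jordan_block(B):
    """ln of a Jordan block B = lambda*(I + K) via the finite series (sketch).

    Assumes B is a Jordan block with positive real eigenvalue lambda.
    """
    n = B.shape[0]
    lam = B[0, 0]                 # the eigenvalue sits on the diagonal
    K = B / lam - np.eye(n)       # strictly upper triangular, hence nilpotent
    log_IK = np.zeros_like(K)
    term = np.eye(n)
    for m in range(1, n):         # K^n = 0, so the series terminates
        term = term @ K
        log_IK += ((-1) ** (m + 1)) * term / m
    return np.log(lam) * np.eye(n) + log_IK

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
L = log_jordan_block(B)           # ln B = [[0, 1], [0, 0]]
```

Here lambda = 1 and K has K^2 = 0, so the series reduces to its first term and reproduces the result displayed above.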

A functional analysis perspective

A square matrix represents a linear operator on the Euclidean space Rn where n is the dimension of the matrix. Since such a space is finite-dimensional, this operator is actually bounded.

Using the tools of holomorphic functional calculus, given a holomorphic function f(z) defined on an open set in the complex plane and a bounded linear operator T, one can calculate f(T) as long as f(z) is defined on the spectrum of T.

The function f(z) = ln z can be defined on any simply connected open set in the complex plane not containing the origin, and it is holomorphic on such a domain. This implies that one can define ln T as long as the spectrum of T does not contain the origin and there is a path from the origin to infinity that does not cross the spectrum of T. (In particular, if the spectrum of T is a circle with the origin in its interior, it is impossible to define ln T.)

Returning to the particular case of a Euclidean space, the spectrum of a linear operator on this space is the set of eigenvalues of its matrix, and hence a finite set. As long as the origin is not in the spectrum (that is, the matrix is invertible), the path condition from the previous paragraph is clearly satisfied, and so the theory implies that ln T is well-defined. The non-uniqueness of the matrix logarithm then follows from the fact that more than one branch of the logarithm may be defined on the set of eigenvalues of a matrix.
