Spectral theorem

In mathematics, particularly linear algebra and functional analysis, the spectral theorem is any of a number of results about linear operators or about matrices. In broad terms the spectral theorem provides conditions under which an operator or a matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This concept of diagonalization is relatively straightforward for operators on finite-dimensional spaces, but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modelled by multiplication operators, which are as simple as one can hope to find. See also spectral theory for a historical perspective.

Examples of operators to which the spectral theorem applies are self-adjoint operators or more generally normal operators on Hilbert spaces.

The spectral theorem also provides a canonical decomposition, called the spectral decomposition, or eigendecomposition, of the underlying vector space on which the operator acts.

In this article we consider mainly the simplest kind of spectral theorem, that for a self-adjoint operator on a Hilbert space. However, as noted above, the spectral theorem also holds for normal operators on a Hilbert space.

Finite-dimensional case

Hermitian matrices

We begin by considering a symmetric operator A on a finite-dimensional real or complex inner product space V with the standard Hermitian inner product; in Dirac's bra-ket notation, the symmetry condition means

\langle A x \mid y \rangle =  \langle x \mid A y \rangle

for all x, y in V. Recall that an eigenvector of a linear operator A is a (non-zero) vector x such that Ax = rx for some scalar r; the value r is the corresponding eigenvalue.

Theorem. There is an orthonormal basis of V consisting of eigenvectors of A. Each eigenvalue is real.

This result is of such importance in many parts of mathematics that we provide a sketch of a proof for the case in which the underlying field of scalars is the complex numbers. First we show that all the eigenvalues are real. Suppose that λ is an eigenvalue of A with corresponding eigenvector x. Then, using the symmetry of A together with the conjugate-linearity of the inner product in its first argument,

\overline{\lambda} \langle x \mid x \rangle= \langle A x \mid x \rangle = \langle  x \mid A x \rangle = \lambda \langle  x \mid x \rangle .

Since x is non-zero, it follows that λ equals its own conjugate and is therefore real.
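
This can be checked numerically. The following sketch (using NumPy; the library and the particular random matrix are illustrative assumptions, not part of the theorem) builds a Hermitian matrix and confirms that its eigenvalues are real and its eigenvectors form an orthonormal basis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random Hermitian matrix: (B + B*) / 2 is Hermitian for any B.
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2

# eigh is NumPy's routine for Hermitian matrices; it returns real
# eigenvalues and an orthonormal basis of eigenvectors (columns of U).
eigenvalues, U = np.linalg.eigh(A)

print(eigenvalues)                              # all real
print(np.allclose(U.conj().T @ U, np.eye(4)))   # True: columns orthonormal
print(np.allclose(A @ U, U * eigenvalues))      # True: A u_k = λ_k u_k
```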

To prove the existence of an eigenvector basis, we use induction on the dimension of V. In fact it suffices to show that A has at least one eigenvector e. For then we can consider the space K of vectors v orthogonal to e. This is finite-dimensional, being a subspace of a finite-dimensional space, and A maps every vector w in K back into K. This is shown as follows: if w ∈ K, then using the symmetry property of A,

\langle A w \mid e \rangle = \langle  w \mid A e \rangle =  \langle  w \mid \lambda e \rangle = \lambda \langle  w \mid e \rangle = 0.

Moreover, A considered as a linear operator on K is also symmetric, so by the induction hypothesis there is an orthonormal basis of K consisting of eigenvectors of A; together with e (normalized to unit length), these form an orthonormal eigenvector basis of V.

It remains, however, to show that A has at least one eigenvector. Since the ground field is algebraically closed, the polynomial function (called the characteristic polynomial of A)

p(\lambda) = \det(\lambda I - A)

has a complex root r. This implies that the linear operator A − rI is not invertible and hence maps some non-zero vector e to 0. This vector e is then an eigenvector of A with eigenvalue r, and by the first part of the proof r is real. This completes the proof.

Notice that the second part of the proof works for any square matrix over the complex numbers: every such matrix has at least one eigenvector. The step where Hermiticity is crucial is the following consequence: if A is Hermitian and e is an eigenvector of A, then not only is the linear span of e an invariant subspace of A, but so is its orthogonal complement.

The argument is also valid for symmetric operators on finite-dimensional real inner product spaces. Since a real symmetric matrix has real eigenvalues, its eigenvectors can be taken to have real entries.

The spectral decomposition of an operator A which has an orthonormal basis of eigenvectors is obtained by grouping together all eigenvectors corresponding to the same eigenvalue. For each eigenvalue λ, the corresponding eigenspace is

V_\lambda = \{\,v \in V: A v = \lambda v\,\}.

Note that these spaces are invariantly defined, in that the definition does not depend on any choice of specific eigenvectors.

As an immediate consequence of the spectral theorem for symmetric operators we get the spectral decomposition theorem: V is the orthogonal direct sum of the spaces Vλ, where the index ranges over the eigenvalues of A. Another equivalent formulation, letting Pλ be the orthogonal projection onto Vλ (so that P_\lambda P_\mu = 0 \mbox{ if } \lambda \neq \mu) and λ1, ..., λm the distinct eigenvalues of A, is

A =\lambda_1 P_{\lambda_1} +\cdots+\lambda_m P_{\lambda_m}.
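
The decomposition can be illustrated numerically. The sketch below (assuming NumPy; the example matrix is chosen to have a repeated eigenvalue, so one eigenspace is genuinely two-dimensional) groups eigenvectors by eigenvalue, forms the orthogonal projections P_λ, and verifies both the mutual orthogonality of the projections and the reconstruction of A:

```python
import numpy as np

# A real symmetric matrix with eigenvalues 2, 2, 4: the eigenspace for
# the eigenvalue 2 is two-dimensional.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 1.0, 3.0]])

w, U = np.linalg.eigh(A)

# Group eigenvectors by (numerically rounded) eigenvalue and form the
# orthogonal projection P_λ = Σ u u^T onto each eigenspace V_λ.
projections = {}
for lam, u in zip(np.round(w, 10), U.T):
    projections.setdefault(lam, np.zeros_like(A))
    projections[lam] += np.outer(u, u)

# Verify A = Σ λ P_λ and P_λ P_μ = 0 for λ ≠ μ.
recon = sum(lam * P for lam, P in projections.items())
print(np.allclose(recon, A))   # True
lams = list(projections)
print(np.allclose(projections[lams[0]] @ projections[lams[1]], 0))  # True
```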

The spectral decomposition is a special case of the Schur decomposition. It is also a special case of the singular value decomposition.

If A is a real symmetric matrix, it follows by the real version of the spectral theorem for symmetric operators that there is an orthogonal matrix U such that UAU^T is diagonal and all the eigenvalues of A are real.

Normal matrices

The spectral theorem extends to a more general class of matrices. Let A be an operator on a finite-dimensional inner product space. A is said to be normal if A* A = A A*. One can show that A is normal if and only if it is unitarily diagonalizable: by the Schur decomposition, we have A = U T U*, where U is unitary and T upper-triangular. Since A is normal, so is T, that is, T T* = T* T. But an upper-triangular normal matrix must be diagonal: comparing the (1,1) entries of T T* and T* T shows that the off-diagonal entries of the first row of T vanish, and the remaining rows follow by induction. The converse is obvious: a matrix of the form U Λ U* with Λ diagonal satisfies A* A = A A*.

In other words, A is normal if and only if there exists a unitary matrix U such that

A=U \Lambda U^* \;

where Λ is the diagonal matrix whose entries are the eigenvalues of A. The column vectors of U are the eigenvectors of A and they are orthonormal. Unlike in the Hermitian case, the entries of Λ need not be real.
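
A minimal numerical sketch (assuming NumPy; the rotation matrix is an illustrative choice) exhibits a normal, non-symmetric matrix whose eigenvalues are complex but which is nevertheless unitarily diagonalizable:

```python
import numpy as np

# A rotation by 90 degrees: real, normal (A A^T = A^T A = I), but not
# symmetric, so its eigenvalues are genuinely complex.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

w, U = np.linalg.eig(A)
print(w)   # eigenvalues ±i, not real

# For a normal matrix with distinct eigenvalues, the unit-norm
# eigenvectors returned by eig are automatically orthogonal, so U is
# unitary and A = U Λ U*.
print(np.allclose(U.conj().T @ U, np.eye(2)))        # True
print(np.allclose(U @ np.diag(w) @ U.conj().T, A))   # True
```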

The spectral theorem for compact self-adjoint operators

In Hilbert spaces in general, the statement of the spectral theorem for compact self-adjoint operators is virtually the same as in the finite-dimensional case.

Theorem. Suppose A is a compact self-adjoint operator on a Hilbert space V. There is an orthonormal basis of V consisting of eigenvectors of A. Each eigenvalue is real.

As for Hermitian matrices, the key point is to prove the existence of at least one nonzero eigenvector. Here one cannot rely on determinants to show the existence of eigenvalues; instead, one uses a maximization argument analogous to the variational characterization of eigenvalues. The above spectral theorem holds for real or complex Hilbert spaces.
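
In finite dimensions the maximization argument can be imitated by power iteration, which drives the Rayleigh quotient ⟨Ax, x⟩ / ⟨x, x⟩ toward its maximum, the largest eigenvalue. A sketch, assuming NumPy and a positive semi-definite test matrix (so that the eigenvalue of largest modulus is also the maximum of the Rayleigh quotient):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random symmetric positive semi-definite matrix.
B = rng.standard_normal((50, 50))
A = B.T @ B

# Power iteration: repeatedly apply A and renormalize.  The Rayleigh
# quotient increases toward the top eigenvalue, echoing the variational
# characterization used in the compact-operator proof.
x = rng.standard_normal(50)
for _ in range(500):
    x = A @ x
    x /= np.linalg.norm(x)

rayleigh = x @ A @ x
print(np.isclose(rayleigh, np.linalg.eigvalsh(A).max()))  # True
```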

If the compactness assumption is removed, it is no longer true that every self-adjoint operator has eigenvectors (see the multiplication operator in the Functional analysis section below).

Generalization to non-symmetric matrices

For a non-symmetric but square N \times N matrix \mathbf{A}, the right eigenvectors \mathbf{r}_{k} are defined by

\mathbf{A} \cdot \mathbf{r}_{k} = \lambda_{k} \mathbf{r}_{k}

whereas the left eigenvectors \mathbf{l}_{k} are defined by

\mathbf{l}_{k} \cdot \mathbf{A} = \lambda_{k} \mathbf{l}_{k}

or, equivalently,

\mathbf{A}^{T} \cdot \mathbf{l}_{k} = \lambda_{k} \mathbf{l}_{k}

where \mathbf{A}^{T} represents the transpose of \mathbf{A}. In these equations, the eigenvalues λk are the same, being the roots of the same characteristic polynomial

\det \left( \mathbf{A} - \lambda I \right) = \det \left( \mathbf{A}^{T} - \lambda I \right) = 0.

If \mathbf{A} is a symmetric matrix, the right and left eigenvectors are also the same, i.e., \mathbf{r}_{k} = \mathbf{l}_{k}.


If the eigenvalues are distinct, the left and right eigenvectors each form a complete basis and can be scaled to satisfy the biorthogonality condition

\mathbf{l}_{m} \cdot \mathbf{r}_{n} = \delta_{mn}

where δmn is the Kronecker delta. An arbitrary N-dimensional vector \mathbf{x} can then be represented by the expansion

\mathbf{x} = \sum_{k=1}^{N} \left( \mathbf{x} \cdot \mathbf{l}_{k} \right) \ \mathbf{r}_{k}

This expansion is always possible when the eigenvalues are distinct, and usually possible even when they are not, by using Gram–Schmidt orthogonalization to define right and left eigenvectors that satisfy the biorthogonality condition. However, if this condition cannot be satisfied (i.e., if the expansion is impossible), then \mathbf{A} is said to be a defective matrix.
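
A short numerical sketch (assuming NumPy; the 2 × 2 matrix is an arbitrary non-symmetric example) constructs biorthonormal left and right eigenvectors, here simply as the rows of the inverse of the right-eigenvector matrix, and verifies the expansion:

```python
import numpy as np

# A non-symmetric matrix with distinct eigenvalues 1 and 3.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

w, R = np.linalg.eig(A)          # columns of R are right eigenvectors

# For distinct eigenvalues, the rows of R^{-1} are left eigenvectors,
# already scaled so that l_m · r_n = δ_mn.
L = np.linalg.inv(R)
print(np.allclose(L @ A, np.diag(w) @ L))   # True: l_k A = λ_k l_k
print(np.allclose(L @ R, np.eye(2)))        # True: biorthonormality

# Expand an arbitrary vector x as x = Σ_k (l_k · x) r_k.
x = np.array([1.0, -2.0])
coeffs = L @ x                     # coefficients l_k · x
print(np.allclose(R @ coeffs, x))  # True: the expansion reconstructs x
```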

Functional analysis

The next generalization we consider is that of bounded self-adjoint operators A on a Hilbert space V. Such operators may have no eigenvalues: for instance, let A be the operator of multiplication by t on L^2[0, 1], that is

[A \varphi](t) = t \varphi(t). \;

An eigenfunction would have to satisfy (t - \lambda) \varphi(t) = 0 for almost every t, which forces \varphi = 0 almost everywhere; hence A has no eigenvalues.

Theorem. Let A be a bounded self-adjoint operator on a Hilbert space H. Then there is a measure space (X, Σ, μ), a real-valued essentially bounded measurable function f on X, and a unitary operator U : H → L^2_μ(X) such that

U^* T U = A \;

where T is the multiplication operator:

[T \varphi](x) = f(x) \varphi(x). \;
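
A finite-dimensional caricature (an illustrative sketch assuming NumPy, not the operator itself) replaces L^2[0, 1] by functions sampled on a grid, on which multiplication by t becomes a diagonal, hence manifestly self-adjoint, matrix:

```python
import numpy as np

# Sample [0, 1] at n points, so that multiplication by t becomes a
# diagonal matrix acting on sampled functions.
n = 1000
t = np.linspace(0.0, 1.0, n)
A = np.diag(t)

# [A φ](t) = t φ(t) acts pointwise on a sampled function.
phi = np.sin(np.pi * t)
print(np.allclose(A @ phi, t * phi))   # True

# A is symmetric with eigenvalues t_1, ..., t_n filling out [0, 1]; its
# eigenvectors are single-gridpoint spikes, which have no sensible limit
# in L^2 as n grows -- mirroring the fact that the continuum operator
# has no eigenfunctions at all.
print(np.allclose(A, A.T))             # True: self-adjoint
```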

This is the beginning of the vast research area of functional analysis called operator theory.

There is also an analogous spectral theorem for normal operators on Hilbert spaces. In this case it is more common to express the spectral theorem as an integral of the coordinate function over the spectrum against a projection-valued measure.

When the normal operator in question is compact, this spectral theorem reduces to the finite-dimensional spectral theorem above, except that the operator is expressed as a linear combination of possibly infinitely many projections.

The spectral theorem for general self-adjoint operators

Many important linear operators which occur in analysis, such as differential operators, are unbounded. There is, however, a spectral theorem for self-adjoint operators that applies in many of these cases. To give an example, any constant-coefficient differential operator is unitarily equivalent to a multiplication operator; the unitary operator that implements this equivalence is the Fourier transform.
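
A numerical sketch (assuming NumPy; the grid size and test function are arbitrary) applies the operator −d²/dx² on a periodic grid by conjugating multiplication by k² with the discrete Fourier transform, which here plays the role of the unitary U:

```python
import numpy as np

# Periodic grid on [0, 2π); the discrete Fourier transform turns
# differentiation into multiplication by ik in frequency space.
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=1.0 / n)   # integer frequencies ..., -1, 0, 1, ...

u = np.sin(3.0 * x)

# Apply -d^2/dx^2 as U^* (multiplication by k^2) U.
u_hat = np.fft.fft(u)
Lu = np.fft.ifft(k**2 * u_hat).real

# Analytically, -d^2/dx^2 sin(3x) = 9 sin(3x).
print(np.allclose(Lu, 9.0 * np.sin(3.0 * x)))   # True
```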

References

  • Sheldon Axler, Linear Algebra Done Right, Springer-Verlag, 1997.
  • Paul Halmos, "What Does the Spectral Theorem Say?", American Mathematical Monthly, volume 70, number 3 (1963), pages 241–247.