Eigenvalue perturbation

Main article: Perturbation theory

In mathematics, an eigenvalue perturbation problem is that of finding the eigenvectors and eigenvalues of a system that is perturbed from one with known eigenvectors and eigenvalues. This is useful for studying how sensitive the original system's eigenvectors and eigenvalues are to changes in the system. This type of analysis was popularized by Lord Rayleigh in his investigation of harmonic vibrations of a string perturbed by small inhomogeneities.[1]

The derivations in this article are essentially self-contained and can be found in many texts on numerical linear algebra[2] or numerical functional analysis.

Example

Suppose we have solutions to the generalized eigenvalue problem,

\mathbf{K}_0 \mathbf{x}_{0i} = \lambda_{0i} \mathbf{M}_0 \mathbf{x}_{0i}. \qquad (0)

where \mathbf{K}_0 and \mathbf{M}_0 are matrices. That is, we know the eigenvalues λ0i and eigenvectors x0i for i = 1, ..., N. Now suppose we want to change the matrices by a small amount. That is, we want to find the eigenvalues and eigenvectors of

\mathbf{K} \mathbf{x}_i = \lambda_i \mathbf{M} \mathbf{x}_i \qquad (1)

where

\begin{align}
\mathbf{K} &= \mathbf{K}_0 + \delta \mathbf{K}\\
\mathbf{M} &= \mathbf{M}_0 + \delta \mathbf{M}
\end{align}

with the perturbations \delta\mathbf{K} and \delta\mathbf{M} much smaller than \mathbf{K} and \mathbf{M} respectively. Then we expect the new eigenvalues and eigenvectors to be similar to the original, plus small perturbations:

\begin{align}
\lambda_i &= \lambda_{0i}+\delta\lambda_{i} \\
\mathbf{x}_i &= \mathbf{x}_{0i} + \delta\mathbf{x}_{i} 
\end{align}
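
As a concrete illustration (not part of the original derivation), the setup can be sketched numerically. The matrices below are randomly generated, and scipy.linalg.eigh serves as the solver for the unperturbed problem; all names (K0, M0, dK, dM, lam0, X0) are introduced here for illustration only.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
N = 4

# Symmetric positive-definite unperturbed matrices (randomly generated for illustration).
A = rng.standard_normal((N, N))
K0 = A @ A.T + N * np.eye(N)
B = rng.standard_normal((N, N))
M0 = B @ B.T + N * np.eye(N)

# Small symmetric perturbations delta K and delta M.
eps = 1e-4
C = rng.standard_normal((N, N))
dK = eps * (C + C.T)
D = rng.standard_normal((N, N))
dM = eps * (D + D.T)

# Known solution of the unperturbed problem (0): K0 x = lambda M0 x.
# eigh scales the eigenvectors so that X0.T @ M0 @ X0 = I, i.e. Eq. (2) below.
lam0, X0 = eigh(K0, M0)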

Steps

We assume that the matrices are symmetric and positive definite, and assume we have scaled the eigenvectors such that

\mathbf{x}_{0j}^\top \mathbf{M}_0\mathbf{x}_{0i} = \delta_{ij} \qquad(2)

where δij is the Kronecker delta. Now we want to solve the equation

\mathbf{K}\mathbf{x}_i = \lambda_i \mathbf{M} \mathbf{x}_i.
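
In the numerical sketch above, the eigenvectors returned by scipy.linalg.eigh already satisfy the normalization (2), which can be verified directly:

# Columns of X0 are the x_{0i}; check Eq. (2): X0.T @ M0 @ X0 is the identity.
assert np.allclose(X0.T @ M0 @ X0, np.eye(N))
# Check the unperturbed problem (0) for all eigenpairs at once.
assert np.allclose(K0 @ X0, M0 @ X0 @ np.diag(lam0))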

Substituting the perturbation expansions into (1), we get

(\mathbf{K}_0+\delta \mathbf{K})(\mathbf{x}_{0i} + \delta \mathbf{x}_{i}) = \left (\lambda_{0i}+\delta\lambda_{i} \right ) \left (\mathbf{M}_0+ \delta \mathbf{M} \right ) \left (\mathbf{x}_{0i}+\delta\mathbf{x}_{i} \right ),

which expands to

\begin{align}
\mathbf{K}_0\mathbf{x}_{0i} + \delta \mathbf{K}\mathbf{x}_{0i} + \mathbf{K}_0\delta \mathbf{x}_i + \delta \mathbf{K}\delta \mathbf{x}_i &= \lambda_{0i}\mathbf{M}_0\mathbf{x}_{0i}+\lambda_{0i}\mathbf{M}_0\delta\mathbf{x}_i + \lambda_{0i} \delta \mathbf{M} \mathbf{x}_{0i} +\delta\lambda_i\mathbf{M}_0\mathbf{x}_{0i} \\[6pt]
&\quad + \lambda_{0i} \delta \mathbf{M} \delta\mathbf{x}_i + \delta\lambda_i \delta \mathbf{M}\mathbf{x}_{0i} + \delta\lambda_i\mathbf{M}_0\delta\mathbf{x}_i + \delta\lambda_i \delta \mathbf{M} \delta\mathbf{x}_i.
\end{align}

Canceling the terms \mathbf{K}_0\mathbf{x}_{0i} and \lambda_{0i}\mathbf{M}_0\mathbf{x}_{0i}, which are equal by (0), leaves

\begin{align}
\delta \mathbf{K} \mathbf{x}_{0i} + \mathbf{K}_0\delta \mathbf{x}_i + \delta \mathbf{K}\delta \mathbf{x}_i = \lambda_{0i}\mathbf{M}_0\delta\mathbf{x}_i + \lambda_{0i} \delta \mathbf{M} \mathbf{x}_{0i} + \delta\lambda_i\mathbf{M}_0\mathbf{x}_{0i} + \lambda_{0i} \delta \mathbf{M} \delta\mathbf{x}_i + \delta\lambda_i \delta \mathbf{M} \mathbf{x}_{0i} + \delta\lambda_i\mathbf{M}_0\delta\mathbf{x}_i +  \delta\lambda_i \delta \mathbf{M} \delta\mathbf{x}_i.
\end{align}

Removing the higher-order terms, this simplifies to

\mathbf{K}_0 \delta\mathbf{x}_i+ \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i}\mathbf{M}_0 \delta \mathbf{x}_i + \lambda_{0i}\delta \mathbf{M} \mathbf{x}_{0i} + \delta \lambda_i \mathbf{M}_0\mathbf{x}_{0i}. \qquad(3)

Because the matrices are symmetric, the unperturbed eigenvectors are \mathbf{M}_0-orthogonal (see (2)), and so we can use them as a basis for the perturbed eigenvectors. That is, we want to construct

\delta \mathbf{x}_i = \sum_{j=1}^N \varepsilon_{ij} \mathbf{x}_{0j} \qquad (4)

where the εij are small constants that are to be determined. Substituting (4) into (3) and rearranging gives

\begin{align}
\mathbf{K}_0 \sum_{j=1}^N \varepsilon_{ij} \mathbf{x}_{0j} + \delta \mathbf{K} \mathbf{x}_{0i} &= \lambda_{0i} \mathbf{M}_0 \sum_{j=1}^N \varepsilon_{ij} \mathbf{x}_{0j} + \lambda_{0i} \delta \mathbf{M} \mathbf{x}_{0i} + \delta\lambda_i \mathbf{M}_0\mathbf{x}_{0i} && (5) \\
\sum_{j=1}^N \varepsilon_{ij} \mathbf{K}_0 \mathbf{x}_{0j} + \delta \mathbf{K} \mathbf{x}_{0i} &= \lambda_{0i} \mathbf{M}_0 \sum_{j=1}^N \varepsilon_{ij} \mathbf{x}_{0j} + \lambda_{0i} \delta \mathbf{M} \mathbf{x}_{0i} + \delta\lambda_i \mathbf{M}_0 \mathbf{x}_{0i} && \text{Applying } \mathbf{K}_0 \text{ to the sum} \\
\sum_{j=1}^N \varepsilon_{ij} \lambda_{0j} \mathbf{M}_0 \mathbf{x}_{0j} + \delta \mathbf{K} \mathbf{x}_{0i} &= \lambda_{0i} \mathbf{M}_0 \sum_{j=1}^N \varepsilon_{ij} \mathbf{x}_{0j} + \lambda_{0i} \delta \mathbf{M} \mathbf{x}_{0i} + \delta\lambda_i \mathbf{M}_0 \mathbf{x}_{0i} && \text{Using Eq. } (0) 
\end{align}

Because the eigenvectors are M0-orthonormal by (2), left multiplying by \mathbf{x}_{0i}^\top removes all but the j = i terms of the summations:

\mathbf{x}_{0i}^\top \varepsilon_{ii} \lambda_{0i} \mathbf{M}_0 \mathbf{x}_{0i} + \mathbf{x}_{0i}^\top \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0i}^\top \mathbf{M}_0 \varepsilon_{ii} \mathbf{x}_{0i} + \lambda_{0i}\mathbf{x}_{0i}^\top \delta \mathbf{M} \mathbf{x}_{0i} + \delta\lambda_i\mathbf{x}_{0i}^\top \mathbf{M}_0 \mathbf{x}_{0i}.

By use of equation (0) again:

\mathbf{x}_{0i}^\top \mathbf{K}_0 \varepsilon_{ii} \mathbf{x}_{0i} + \mathbf{x}_{0i}^\top \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0i}^\top \mathbf{M}_0\varepsilon_{ii} \mathbf{x}_{0i} + \lambda_{0i}\mathbf{x}_{0i}^\top \delta \mathbf{M}\mathbf{x}_{0i} + \delta\lambda_i\mathbf{x}_{0i}^\top \mathbf{M}_0 \mathbf{x}_{0i}. \qquad (6)

The two terms containing εii are equal because left-multiplying (0) by \mathbf{x}_{0i}^\top gives

\mathbf{x}_{0i}^\top\mathbf{K}_0\mathbf{x}_{0i} = \lambda_{0i}\mathbf{x}_{0i}^\top \mathbf{M}_0 \mathbf{x}_{0i}.

Canceling those terms in (6) leaves

\mathbf{x}_{0i}^\top \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0i}^\top \delta \mathbf{M} \mathbf{x}_{0i} + \delta\lambda_i \mathbf{x}_{0i}^\top \mathbf{M}_0\mathbf{x}_{0i}.

Rearranging gives

\delta\lambda_i  = \frac{\mathbf{x}^\top_{0i} \left (\delta \mathbf{K}- \lambda_{0i} \delta \mathbf{M} \right )\mathbf{x}_{0i}}{\mathbf{x}_{0i}^\top\mathbf{M}_0 \mathbf{x}_{0i}}

But by (2), this denominator is equal to 1. Thus

\delta\lambda_i  = \mathbf{x}^\top_{0i} \left (\delta \mathbf{K} - \lambda_{0i} \delta \mathbf{M} \right )\mathbf{x}_{0i}.
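
Continuing the numerical sketch from the Example section (lam0, X0, dK and dM are the illustrative quantities defined there), this first-order formula can be compared against an exact re-solve of the perturbed problem:

# First-order eigenvalue shifts from the formula above.
dlam = np.array([X0[:, i] @ (dK - lam0[i] * dM) @ X0[:, i] for i in range(N)])

# Exact eigenvalues of the perturbed problem for comparison.
lam_exact, _ = eigh(K0 + dK, M0 + dM)

# The error of the first-order prediction is of second order in the perturbation.
print(np.max(np.abs(lam0 + dlam - lam_exact)))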

Then, by left-multiplying equation (5) by \mathbf{x}_{0k}^\top with k ≠ i:

\varepsilon_{ik} = \frac{\mathbf{x}^\top_{0k} \left (\delta \mathbf{K} - \lambda_{0i}\delta \mathbf{M} \right )\mathbf{x}_{0i}}{\lambda_{0i}-\lambda_{0k}}, \qquad i\neq k.

Or, renaming the index k as j:

\varepsilon_{ij} = \frac{\mathbf{x}^\top_{0j} \left (\delta \mathbf{K} - \lambda_{0i} \delta \mathbf{M} \right )\mathbf{x}_{0i}}{\lambda_{0i}-\lambda_{0j}}, \qquad i\neq j.

To find εii, require that the perturbed eigenvectors remain normalized, so that

\mathbf{x}^\top_i \mathbf{M} \mathbf{x}_i = 1

implies:

\varepsilon_{ii}=-\tfrac{1}{2}\mathbf{x}^\top_{0i} \delta \mathbf{M} \mathbf{x}_{0i}.
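
To see this, expand the normalization condition to first order: the leading term is 1 by (2), and \mathbf{x}_{0i}^\top \mathbf{M}_0 \delta\mathbf{x}_i = \varepsilon_{ii} by (2) and (4), so

\begin{align}
1 = \mathbf{x}_i^\top \mathbf{M} \mathbf{x}_i &= \left(\mathbf{x}_{0i} + \delta\mathbf{x}_i\right)^\top \left(\mathbf{M}_0 + \delta\mathbf{M}\right) \left(\mathbf{x}_{0i} + \delta\mathbf{x}_i\right) \\
&\approx \mathbf{x}_{0i}^\top \mathbf{M}_0 \mathbf{x}_{0i} + \mathbf{x}_{0i}^\top \delta\mathbf{M} \mathbf{x}_{0i} + 2\, \mathbf{x}_{0i}^\top \mathbf{M}_0 \delta\mathbf{x}_i = 1 + \mathbf{x}_{0i}^\top \delta\mathbf{M} \mathbf{x}_{0i} + 2\varepsilon_{ii},
\end{align}

which yields the stated value of εii.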

Summary

\begin{align}
\lambda_i &= \lambda_{0i} + \mathbf{x}^\top_{0i} \left (\delta \mathbf{K} - \lambda_{0i}\delta \mathbf{M} \right ) \mathbf{x}_{0i} \\ 
\mathbf{x}_i &= \mathbf{x}_{0i} \left (1 - \tfrac{1}{2} \mathbf{x}^\top_{0i} \delta \mathbf{M} \mathbf{x}_{0i} \right ) + \sum_{j=1\atop j\neq i}^N \frac{\mathbf{x}^\top_{0j}\left (\delta \mathbf{K} - \lambda_{0i}\delta \mathbf{M} \right ) \mathbf{x}_{0i}}{\lambda_{0i}-\lambda_{0j}} \mathbf{x}_{0j}
\end{align}

for infinitesimal δK and δM (the higher-order terms in (3) being negligible).
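
Continuing the numerical sketch, the eigenvector formula can be checked in the same way (eigenvectors are only determined up to sign, so the exact ones are sign-aligned with the unperturbed ones before comparison):

# First-order perturbed eigenvectors from the summary formula.
X1 = np.empty_like(X0)
for i in range(N):
    xi = X0[:, i] * (1.0 - 0.5 * X0[:, i] @ dM @ X0[:, i])
    for j in range(N):
        if j != i:
            eps_ij = (X0[:, j] @ (dK - lam0[i] * dM) @ X0[:, i]) / (lam0[i] - lam0[j])
            xi = xi + eps_ij * X0[:, j]
    X1[:, i] = xi

# Exact eigenvectors of the perturbed problem, with signs matched to X0.
_, X_exact = eigh(K0 + dK, M0 + dM)
X_exact = X_exact * np.sign(np.sum(X_exact * X0, axis=0))

# The error is again of second order in the perturbation.
print(np.max(np.abs(X1 - X_exact)))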

Results

This means it is possible to perform a sensitivity analysis of λi efficiently as a function of changes in the entries of the matrices. (Recall that the matrices are symmetric, so changing K(kℓ) will also change K(ℓk); hence the (2 − δkℓ) factor.)

\begin{align}
\frac{\partial \lambda_i}{\partial \mathbf{K}_{(k\ell)}} &= \frac{\partial}{\partial \mathbf{K}_{(k\ell)}}\left(\lambda_{0i} + \mathbf{x}^\top_{0i} \left (\delta \mathbf{K} - \lambda_{0i} \delta \mathbf{M} \right ) \mathbf{x}_{0i} \right) = x_{0i(k)} x_{0i(\ell)} \left (2 - \delta_{k\ell} \right ) \\
\frac{\partial \lambda_i}{\partial \mathbf{M}_{(k\ell)}} &= \frac{\partial}{\partial \mathbf{M}_{(k\ell)}}\left(\lambda_{0i} + \mathbf{x}^\top_{0i} \left (\delta \mathbf{K} - \lambda_{0i} \delta \mathbf{M} \right ) \mathbf{x}_{0i}\right) = -\lambda_{0i} x_{0i(k)} x_{0i(\ell)} \left (2- \delta_{k\ell} \right ).
\end{align}

Similarly,

\begin{align}
\frac{\partial\mathbf{x}_i}{\partial \mathbf{K}_{(k\ell)}} &= \sum_{j=1\atop j\neq i}^N \frac{x_{0j(k)} x_{0i(\ell)} \left (2-\delta_{k\ell} \right )}{\lambda_{0i}-\lambda_{0j}}\mathbf{x}_{0j} \\
\frac{\partial \mathbf{x}_i}{\partial \mathbf{M}_{(k\ell)}} &= -\mathbf{x}_{0i}\frac{x_{0i(k)}x_{0i(\ell)}}{2}(2-\delta_{k\ell}) - \sum_{j=1\atop j\neq i}^N \frac{\lambda_{0i}x_{0j(k)} x_{0i(\ell)}}{\lambda_{0i}-\lambda_{0j}}\mathbf{x}_{0j} \left (2-\delta_{k\ell} \right ).
\end{align}
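
Under the same illustrative setup, the eigenvalue sensitivity with respect to a single entry of K can be checked by a finite difference in which K(kℓ) and K(ℓk) change together so that the matrix stays symmetric (the indices i, k, ℓ below are an arbitrary choice):

# Finite-difference check of d(lambda_i)/dK_(k,ell) under the symmetric convention.
i, k, ell = 0, 1, 2
h = 1e-6
E = np.zeros((N, N))
E[k, ell] = E[ell, k] = h
lam_plus, _ = eigh(K0 + E, M0)

fd = (lam_plus[i] - lam0[i]) / h
analytic = X0[k, i] * X0[ell, i] * (2 if k != ell else 1)
print(fd, analytic)   # agree up to terms of order h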

Existence of eigenvectors

Note that in the above example we assumed that both the unperturbed and the perturbed systems involved symmetric matrices, which guaranteed the existence of N linearly independent eigenvectors. An eigenvalue problem involving non-symmetric matrices is not guaranteed to have N linearly independent eigenvectors, though a sufficient condition is that \mathbf{K} and \mathbf{M} be simultaneously diagonalisable.
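
A minimal illustration of this failure mode, independent of the sketch above: a 2 × 2 Jordan block is non-symmetric and has only one linearly independent eigenvector, so a numerical eigensolver returns two (numerically) parallel eigenvectors.

import numpy as np

# Defective matrix: eigenvalue 1 with algebraic multiplicity 2
# but only one linearly independent eigenvector.
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])
w, v = np.linalg.eig(J)
print(w)                  # both eigenvalues equal 1
print(np.linalg.det(v))   # essentially zero: the returned eigenvectors are linearly dependent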

References

  1. Rayleigh, J. W. S. (1894). Theory of Sound I (2nd ed.). London: Macmillan. pp. 115–118. ISBN 1-152-06023-6.
  2. Trefethen, Lloyd N. (1997). Numerical Linear Algebra. SIAM (Philadelphia, PA). p. 258. ISBN 0-89871-361-7.
