Talk:Eigenvalue, eigenvector and eigenspace

From Wikipedia, the free encyclopedia

Former featured article Eigenvalue, eigenvector and eigenspace is a former featured article. Please see the links under Article Milestones below for its original nomination page (for older articles, check the nomination archive) and why it was removed.

This article appeared on Wikipedia's Main Page as Today's featured article on November 1, 2005.

WikiProject Mathematics
This article is within the scope of WikiProject Mathematics.
Mathematics grading
A Class High Importance
 Field: Algebra
Other languages WikiProject Echo has identified Eigenvalue, eigenvector and eigenspace as a foreign language featured article. You may be able to improve this article with information from the Chinese, Italian or Spanish language Wikipedias.
Peer review This page has been selected for the release version of Wikipedia and rated B-Class on the assessment scale. It is in the category Math.

Old discussion can be found in the archive.

[edit] Vibrational modes - erroneous treatment in the article

The current version of the article claims that the eigenvalue for a standing-wave problem is the amplitude. This is an absurd and totally nonstandard way to formulate the problem, if it even can be made to make sense at all. For vibrational modes, one writes down the equation for the acceleration in the form:

\frac{\partial^2 \mathbf{x}}{\partial t^2} = - \hat{A} \mathbf{x}

where \mathbf{x} is the amplitude and -\hat{A} is the operator giving the acceleration = force/mass (i.e. from the coupled force equations). In a linear, lossless system, this is a real-symmetric (Hermitian) linear operator. (More generally, e.g. if the density is not constant, one writes it as a generalized Hermitian eigenproblem.) Ordinarily, \hat{A} is positive semi-definite too.

To get the normal modes, one writes \mathbf{x} in the form of a harmonic mode: \mathbf{x} = \mathbf{v}\exp(-i\omega t) for a frequency ω. (Of course, the physical solution is the real part of this.) Then one obtains the eigenequation:

\hat{A} \mathbf{v} = \omega^2 \mathbf{v}

and so one obtains the frequency from the eigenvalue. Since \hat{A} is Hermitian and positive semi-definite, the frequencies are real, corresponding to oscillating modes (as opposed to decaying/growing modes for complex ω). Furthermore, because \hat{A} is Hermitian the eigenvectors are orthogonal (hence, the normal modes), and form a complete basis (at least, for a reasonable physical \hat{A}). Other nice properties follow, too (e.g. the eigenvalues are discrete in an infinite-dimensional problem with compact support.)
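(A minimal numerical sketch of this eigenproblem, assuming a hypothetical system of two unit masses coupled by unit springs, so that \hat{A} is the real-symmetric, positive-definite matrix below:)

```python
import numpy as np

# Hypothetical example: two unit masses coupled by unit springs to the
# walls and to each other. The acceleration operator A is then this
# real-symmetric, positive-definite matrix.
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# For a Hermitian A, eigh returns real eigenvalues w = omega^2 and an
# orthonormal set of eigenvectors (the normal modes).
w, V = np.linalg.eigh(A)
omega = np.sqrt(w)   # real frequencies, since A is positive definite

print(omega)         # frequencies of the two normal modes
print(V.T @ V)       # ~ identity: the modes are orthogonal
```

Because eigh exploits the Hermitian structure, the returned eigenvectors are already orthonormal; this is the numerical counterpart of the normal modes being orthogonal.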

If losses (damping) are included, then \hat{A} becomes non-Hermitian, leading to complex ω that give exponential decay.

Notice that the amplitude per se is totally arbitrary, as usual for eigenvalue problems. One can scale \mathbf{v} by any constant and still have the same normal mode.

User:Jitse Niesen claimed at Wikipedia:Featured article review/Eigenvalue, eigenvector and eigenspace that the article was talking about the eigenvalues of the "time-evolution operator" \exp(-\hat{A}t). The first problem with this is that \exp(-\hat{A}t) is not the time-evolution operator: this is a second-order problem, so the solution is not determined by the initial value \mathbf{x}(0) alone. You could convert it to a set of first-order equations of twice the size, but then your eigenvector is not the mode shape any more; it is the mode shape along with the velocity profile. Even then, the eigenvalue is only a factor multiplying the amplitude, since the absolute amplitude is still arbitrary. Anyway, it's a perverse approach; the whole point of working with harmonic modes is to eliminate t in favor of ω.
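(The first-order reformulation can be written out explicitly; a sketch with the same kind of hypothetical two-mass system, showing that the state, and hence each eigenvector, doubles to mode shape plus velocity profile:)

```python
import numpy as np

# Hypothetical two-mass system: acceleration operator A, as before.
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
n = A.shape[0]

# First-order form d/dt [x, xdot] = M [x, xdot]: the state (and hence
# each eigenvector of M) is the mode shape *plus* the velocity profile.
M = np.block([[np.zeros((n, n)), np.eye(n)],
              [-A,               np.zeros((n, n))]])

vals = np.linalg.eigvals(M)
# Lossless system: eigenvalues are purely imaginary, lambda = +/- i*omega.
print(np.sort(np.abs(vals.imag)))   # the frequencies, each appearing twice
```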

Note that the discussion above is totally general and applies to all sorts of oscillation problems, from discrete oscillators (e.g. coupled pendulums) to vibrating strings, to acoustic waves in solids, to optical resonators.

As normal modes of oscillating systems, from jumpropes to drumheads, are probably the most familiar example of eigenproblems to most people, and in particular illustrate the important case of Hermitian eigenproblems, this subject deserves to be treated properly. (I'm not saying that the initial example needs the level of detail above; it can just be a brief summary, with more detail at the end, perhaps for a specific case. But it should summarize the right approach in any case.)

—Steven G. Johnson 15:29, 24 August 2006 (UTC)

I readily agree that my comment at FAR was sloppy, and I'm glad you worked out what I had in mind. Originally, I thought that the "perverse" approach was not a bad idea to explain the concept of eigenvalues/functions, but I now think that it's too confusing for those that have already seen the standard approach. -- Jitse Niesen (talk) 12:14, 25 August 2006 (UTC)
Thanks for your thoughtful response. Let me point out another problem with saying that the "amplitude" is the eigenvalue. Knowing the amplitude at any given time is not enough to know the behavior or frequency. You need to know the amplitude for at least two times. By simply calling the eigenvalue "the" amplitude, you've underspecified the result. —Steven G. Johnson 15:23, 25 August 2006 (UTC)


[edit] Request for Clarification of the Standing wave example for eigenvalues

The Standing wave example of eigenvalues isn't very clear. It is just stated that the standing waves are the eigenvalues. Why is this the case? How do they fit the definition / satisfy the criterion for being an eigenvalue? —The preceding unsigned comment was added by 67.80.149.169 (talk • contribs) .

Actually, the wave is the eigenfunction, not the eigenvalue. Did you not see this part?:

The standing waves correspond to particular oscillations of the rope such that the shape of the rope is scaled by a factor (the eigenvalue) as time passes. Each component of the vector associated with the rope is multiplied by this time-dependent factor. This factor, the eigenvalue, oscillates as time goes by.

I don't see any way to improve that. —Keenan Pepper 03:10, 7 September 2006 (UTC)
Except that calling the amplitude the "time-dependent eigenvalue" is horribly misleading and bears little relation to how this problem is actually studied, as I explained above. Sigh. —Steven G. Johnson 20:54, 7 September 2006 (UTC)

I think the new version of the rope example is exactly what I was trying to avoid! I think the time-evolution operator is something anybody can understand. This is also the only operator which is interesting from an experimental point of view. It doesn't require any math knowledge. It doesn't even require the rope to be a Hamiltonian system. The shape of the rope is an eigenfunction of this operator if it remains proportional to itself as time passes. That it is an eigenvector of the frequency operator (the Hamiltonian) is irrelevant, and also valid only if the system has a Hamiltonian! Vb

As I explained above, the shape of the rope isn't the eigenvector of this operator. Because it is second order in time, you would need the shape of the rope plus the velocity profile. And there are other problems as well, as I explained above. —Steven G. Johnson 12:48, 8 September 2006 (UTC)
Sorry for the delay. Perhaps then the statement "the standing wave is the eigenfunction" could be elaborated on a little more. I'm having trouble visualising what that means, and how the notion of a vector whose direction remains unchanged by the transformation applies to this example. My apologies for the confusion. --165.230.132.126 23:35, 11 October 2006 (UTC)

[edit] Orthogonality

When are eigenvectors orthogonal? Symmetric matrix says "Another way of stating the spectral theorem is that the eigenvectors of a symmetric matrix are orthogonal." So A is symmetric => A has orthogonal eigenvectors, but does that relation go both ways? If not, is there some non-trivial property of A such that A has orthogonal eigenvectors iff ___? —Ben FrantzDale 20:36, 7 September 2006 (UTC)

eigenvectors corresponding to distinct eigenvalues are orthogonal iff the matrix is normal. in general, eigenvectors corresponding to distinct eigenvalues are linearly independent. Mct mht 22:31, 7 September 2006 (UTC)
btw, "symmetric matrix" in the above quote should mean symmetric matrix with real entries. Mct mht 22:34, 7 September 2006 (UTC)
Thanks. Normal matrix does say this; I'm updating other places which should refer to it such as symmetric matrix, spectral theorem, and eigenvector. —Ben FrantzDale 13:30, 8 September 2006 (UTC)
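(A small numerical illustration of the normal/non-normal distinction, with hypothetical matrices chosen just for the example:)

```python
import numpy as np

# A rotation matrix is normal (it commutes with its transpose) but not
# symmetric; the eigenvectors for its distinct eigenvalues +/- i are
# still orthogonal under the complex inner product.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
_, VR = np.linalg.eig(R)
print(abs(np.vdot(VR[:, 0], VR[:, 1])))   # ~ 0: orthogonal

# A non-normal matrix with distinct eigenvalues: its eigenvectors are
# linearly independent but not orthogonal.
N = np.array([[1.0, 1.0],
              [0.0, 2.0]])
_, VN = np.linalg.eig(N)
print(abs(np.vdot(VN[:, 0], VN[:, 1])))   # ~ 0.707: not orthogonal
```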

[edit] An arrow from the center of the Earth to the Geographic South Pole would be an eigenvector of this transformation

i think the geographic pole is different from the magnetic pole...-- Boggie 09:57, 21 November 2006 (UTC)

You are correct that the geographic south pole is different from the magnetic south pole. However, it is the geographic pole that we need here, isn't it? -- Jitse Niesen (talk) 11:33, 21 November 2006 (UTC)
yes, u r right, i just mixed them up;-) -- Boggie 16:51, 22 November 2006 (UTC)

[edit] Existence of Eigenvectors, eigenvalues and eigenspaces

Some confusion about the existence of eigenvectors, eigenvalues and eigenspaces for transformations can possibly arise when reading this article (as it did when I asked a question at the mathematics reference desk, after reading the article half-asleep - my bad!).

My question here would be: what could be done to correct this? The existence of eigenvectors is a property of the transformation, not a property of the eigenvector itself, making it unclear which of the pages needs revision. Does anybody have any ideas?

Also this article seems a bit long, I suggest placing the applications in a separate article (as are some of the theorems in the earlier sections).

    • Putting the application elsewhere would be very damaging. Math without applications is useless, i.e. not interesting!

[edit] Continuous spectrum

Figure 3 in the article has a nice picture of the absorption spectrum of chlorine. The caption says that the sharp lines correspond to the discrete spectrum and that the rest is due to the continuous spectrum. Could someone explain some things?:

  • What operator it is that this is the spectrum of?
  • If the discrete spectrum corresponds to eigenvalues, are those eigenvalues shown on the x or y axis of the graph?
    • The x-values, i.e. the energies of the atomic eigenstates.
  • What would the corresponding eigenfunctions be for elements of the spectrum?

Thanks. —Ben FrantzDale 15:35, 20 December 2006 (UTC)

[edit] Going back to featured status!

I am so sad this article is not featured anymore. I did the job to get it featured. I have no time to improve it now. I wish someone could do the job. The article is not paedagogic anymore. Such a pain! Vb 09:38, 28 December 2006 (UTC)

[edit] minor typo?

Excerpt from the text: "Having found an eigenvalue, we can solve for the space of eigenvectors by finding the nullspace of A − (1)I = 0."

Shouldn't this be ...finding the nullspace of A - λI = 0?

216.64.121.34 04:06, 8 February 2007 (UTC)
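(For what it's worth, the corrected formula can be checked numerically; a minimal sketch, assuming an illustrative 2×2 matrix rather than the one from the article:)

```python
import numpy as np

# Illustrative example: for this A, lambda = 3 is an eigenvalue, and the
# eigenvectors span the nullspace of A - lambda*I.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam = 3.0

M = A - lam * np.eye(2)
# The nullspace is spanned by the right-singular vectors of M whose
# singular values are (numerically) zero.
_, s, Vt = np.linalg.svd(M)
null_vecs = Vt[s < 1e-10]
v = null_vecs[0]

print(np.allclose(A @ v, lam * v))   # True: v is an eigenvector
```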

[edit] Left and right eigenvectors

Where to put this in the article? Something to the effect: A right eigenvector v corresponding to eigenvalue λ satisfies the equation Av = λv for matrix A. Contrast this with the concept of left eigenvector u which satisfies the equation u^T A = λu^T. ?? --HappyCamper 04:39, 15 March 2007 (UTC)

It's briefly mentioned in the section "Entries from a ring". Are you saying that you want to have it somewhere else? -- Jitse Niesen (talk) 06:16, 15 March 2007 (UTC)
No no...just that I didn't see the note on the first reading. I think the little note is plenty for the article already. --HappyCamper 23:36, 16 March 2007 (UTC)
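(A small numerical sketch of the right/left distinction, assuming a hypothetical non-symmetric matrix: the left and right eigenvectors differ, but share the same eigenvalues.)

```python
import numpy as np

# Hypothetical non-symmetric matrix with eigenvalues 2 and 3.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

w_right, V = np.linalg.eig(A)    # right: A v = lambda v
w_left, U = np.linalg.eig(A.T)   # left: u^T A = lambda u^T, i.e. A^T u = lambda u

# Pick the right and left eigenvectors for lambda = 3; they differ.
v = V[:, np.argmin(abs(w_right - 3.0))]
u = U[:, np.argmin(abs(w_left - 3.0))]
print(np.allclose(A @ v, 3.0 * v))   # True
print(np.allclose(u @ A, 3.0 * u))   # True
```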

[edit] Just a Note

I think the Mona Lisa picture and its description help to clarify these concepts very much! Thanks to the contributor(s)!