Talk:Euler's formula


Can you show a proof of Euler's equation?

There is another way of demonstrating the formula

which I find to be more beautiful:

Let z = cos t + i sin t

then dz = (-sin t + i cos t) dt = i (cos t + i sin t) dt = i z dt.

Integrating:

int dz/z = int i dt

or

ln z = i t.

Exponentiating:

z = exp i t.

Let \ln z = it + C_1. Then
z = e^{it + C_1} = e^{C_1} \cdot e^{it}.
With C = e^{C_1}, this gives z = C e^{it}.
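As a numeric sanity check of the relation dz = i z dt, one can integrate it with crude forward Euler steps in Python (the step count and end time below are arbitrary choices, not anything from the thread) and compare against cos t + i sin t:

```python
import cmath

# Integrate dz/dt = i*z with z(0) = 1 by forward Euler steps,
# then compare with cos(t) + i*sin(t). Step count is an arbitrary choice;
# the error shrinks as n grows.
t_final = 1.0
n = 100_000
h = t_final / n
z = 1 + 0j
for _ in range(n):
    z += 1j * z * h          # dz = i*z dt

expected = cmath.cos(t_final) + 1j * cmath.sin(t_final)
error = abs(z - expected)
print(error)
```

This is only an illustration of the motivating argument, of course, not a substitute for the analytic proof.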


The proof using Taylor series is silly! If one is allowed to assume the Taylor expansions of exp(x), sin(x) and cos(x), then just add the series for cos x + i sin x and note that it is the same as the series for exp(i x). --zero 09:38, 12 Oct 2003 (UTC)

You have an error anyway in your proof: i(-sin t + i cos t) = - (cos t + i sin t) = -z. I don't think you can differentiate like you're doing in any case since z is a complex variable (I could be wrong, I haven't done any complex analysis stuff for a while). Dysprosia 10:03, 12 Oct 2003 (UTC)

No, that part of the proof is fine. The only problematic step is the integration, since it really gives ln z = i t + C for a constant C. One then has to find an argument that C=0. --zero 12:46, 12 Oct 2003 (UTC)

The argument that C=0 can be easily found by substituting t=0 and evaluating. --Komp, 10th Sept 2004.


Taylor Series for e^x

I'm a little confused about one thing in the e^ix = cos x + i sin x derivation. It looks like the Taylor series of e^ix is expanded around the point a = 0. Wouldn't that mean the proof is only valid near x = 0?

The series is valid for all x.

Charles Matthews 09:42, 18 Dec 2003 (UTC)

Radius of convergence of exp x is infinite, btw. Dysprosia 09:48, 18 Dec 2003 (UTC)

That explains it, thanks a lot!

You could expand it about any point, and as long as you took all (an infinite number of) the elements, it would still work. If you're only going to use a few terms you should expand it about whatever local operating point you're using. moink 05:12, 13 Jan 2004 (UTC)
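To illustrate moink's point numerically, here is a rough Python sketch (the helper name exp_taylor and all the particular numbers are my own choices): with enough terms, the expansion about any point a recovers exp(x) everywhere, while a short truncation is only accurate near a.

```python
import math

def exp_taylor(x, a=0.0, terms=40):
    """Partial Taylor sum of exp about the point a: sum of e^a (x-a)^k / k!."""
    return sum(math.exp(a) * (x - a) ** k / math.factorial(k) for k in range(terms))

far_err_full = abs(exp_taylor(5.0, a=0.0, terms=40) - math.exp(5.0))    # tiny
far_err_about2 = abs(exp_taylor(5.0, a=2.0, terms=40) - math.exp(5.0))  # tiny
far_err_short = abs(exp_taylor(5.0, a=0.0, terms=3) - math.exp(5.0))    # large
near_err_short = abs(exp_taylor(0.1, a=0.0, terms=3) - math.exp(0.1))   # small
print(far_err_full, far_err_about2)
print(far_err_short, near_err_short)
```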

I would like to suggest moving the complex analysis to the top, above the other one. In my experience it's much more common. moink 05:12, 13 Jan 2004 (UTC) How about more -- split this into 2 articles; the two results have almost nothing to do with each other. They don't belong together.


Definition of proof

I think it should be more emphasized in the article that before you can prove the theorem you need a definition of what e^(ix) is. The first proof does give such a definition in passing, but that is all.

I propose replacing the e^ix = cos x + i sin x derivation by the following simplified version.

It is known that exp(x), sin(x), and cos(x) have Taylor series which converge for all complex x:

\exp(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots
\sin(x) =  x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots
\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots

Adding the series for cos(x) to i times the series for sin(x) gives the series for exp(ix).
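The term-by-term claim is easy to spot-check numerically; here is a minimal Python sketch (the value of x and the number of terms are arbitrary choices):

```python
import cmath
import math

# Compare partial Taylor sums: exp(ix) vs. cos(x) + i*sin(x).
x, N = 0.7, 30
exp_ix = sum((1j * x) ** k / math.factorial(k) for k in range(N))
cos_x = sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k) for k in range(N // 2))
sin_x = sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1) for k in range(N // 2))
series_gap = abs(exp_ix - (cos_x + 1j * sin_x))  # same terms, regrouped
lib_gap = abs(exp_ix - cmath.exp(1j * x))        # against the library exp
print(series_gap, lib_gap)  # both ~0
```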


It misses some parts of the proof though; the periodicity of the powers of i, where i^2 = -1, i^3 = -i, i^4 = 1. ✏ Sverdrup 14:52, 6 May 2004 (UTC)
Sverdrup is right, but I think the notation in the current proof is more a hindrance than a help. It's much easier to see visually what's going on by writing "dot-dot-dot"s and collecting terms than by using a jillion sigma notations.
I suggest we use the proof on top of this talk page to motivate the formula, and keep the current taylor series proof as the proof. We need to be accurate, and we are also elegant if the math is done right with summation etc. ✏ Sverdrup 22:27, 6 May 2004 (UTC)
I'm not sure what you mean by "the" proof -- most results have multiple proofs, and this is no exception. The proof using "dz/z = i dt" is good motivation, yes, but it's also a completely rigorous proof, so by including it as a "real" proof we would not lose any accuracy. I still maintain that the Taylor series proof is much easier to understand without sigma notation, without losing any rigor -- "dot-dot-dot"s are fully rigorous, as long as it's obvious what is intended, which is the case here if enough terms are spelled out. Having four different summations with "4n", "4n+1", "4n+2", and "4n+3" is only going to confuse people who aren't used to the notation for partitioning integers into congruence classes -- they will have to spell out what the sums say for themselves, so why not do it for them?

(BTW, in case you wonder why the "dz/z = i dt" proof is rigorous, it comes down to this. We are basically dealing with the analytic continuation of the real exponential to the entire complex plane -- this is known to exist because the Taylor series at z = 0, say, has infinite radius of convergence. So, we can define the exponential as exp(z) = Taylor series. It's pretty trivial to show that d/dz(exp(z)) = exp(z) for all z, since everything is absolutely and uniformly convergent. By the chain rule, d/dz(exp(iz)) = i*exp(iz), i.e. exp(iz) satisfies the diff eq w' = iw. Now, note that if w = cos(z) + i sin(z), then this w also satisfies the equation; this means w = C*exp(iz) for some constant C; z = 0 gives 1 = w = C, so w = exp(iz) = cos(z) + i sin(z). Now, just take z = x to be real. This is basically what is going on with the shorthand notation "dz/z = i dt". The shorthand proof glosses over a couple of these details, but then again, a lot of proofs at Wikipedia are really just "sketches of a proof".)

Let me put a copy of what I would have as my Taylor series proof here, so people can see it and compare.

Here is my proposal to replace the current Taylor series proof:

Derivation

Here is a derivation of Euler's formula using Taylor series expansions as well as basic facts about the powers of i:

i^0 = 1, \ i^1 = i, \ i^2 = -1, \ i^3 = -i, \ i^4 = 1, \ i^5 = i, \ i^6 = -1, \ i^7 = -i, \ i^8 = 1, \dots

The functions e^x, cos(x) and sin(x) (assuming x is real) can be written as:

e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots
\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots
\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots

and for complex z we define each of these functions by its series. This is possible because the radius of convergence of each series is infinite.

Now, take z = ix, where x is real, and note that

e^{ix} = 1 + ix + \frac{(ix)^2}{2!} + \frac{(ix)^3}{3!} + \frac{(ix)^4}{4!} + \frac{(ix)^5}{5!} + \frac{(ix)^6}{6!} + \frac{(ix)^7}{7!} + \frac{(ix)^8}{8!} + \cdots
= 1 + ix - \frac{x^2}{2!} - \frac{ix^3}{3!} + \frac{x^4}{4!} + \frac{ix^5}{5!} - \frac{x^6}{6!} - \frac{ix^7}{7!} + \frac{x^8}{8!} + \cdots
= \left( 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \frac{x^8}{8!} + \cdots \right) + i\left( x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \right)
= \cos(x) + i\sin(x)

The rearrangement of terms is justified because each series is absolutely convergent.

QED

I think people will find it much easier to follow this proof.

One problem is that you wrote "z = ix" but z is not defined and it does not appear elsewhere. Also, the proof works for all complex x but you limited it to real x. --Zero 09:27, 7 May 2004 (UTC)
I see how it might not be entirely clear -- actually, I did say what z is, when I said, "for complex z, we define each of these functions by its series". Also, yes, the proof works for all complex z, but "Euler's formula" is usually taken to mean the case when z is purely imaginary, primarily for historical reasons (Euler "derived" it for ix, not z); also, when most people say "Euler's formula", they're usually intimating at the periodic nature of exp around the unit circle. But it's certainly true for any z, and I can add this.
If z = a + bi, then e^z = e^a \cdot e^{bi}, so there is no problem with complex x. ✏ Sverdrup 13:12, 7 May 2004 (UTC)
I'm convinced, this looks very good. ✏ Sverdrup 13:12, 7 May 2004 (UTC)
The "dz/z = i dt" argument can be made even more legit for most folks by taking the 2nd order linear diff eq, w'' = -w, gotten by iterating the 1st order one, then everything is real and you don't have to think about analytic continuation, etc.
It would be interesting to note how Euler actually "discovered" this. The way he "proved" it is completely backwards from how it is usually presented in modern form -- he assumed DeMoivre's identity, and did some clever fooling around with i's (infinitely small numbers) and w's (infinitely large numbers), treating them as ordinary numbers. Of course, not rigorous at all, but historically very interesting, providing insight into Euler's brain circuitry.
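Since De Moivre's identity is mentioned, a quick numerical spot-check may be worth having alongside the history (a Python sketch; the particular x and n are arbitrary choices):

```python
import cmath

# De Moivre's identity: (cos x + i sin x)^n = cos(nx) + i sin(nx).
x, n = 0.4, 7
lhs = (cmath.cos(x) + 1j * cmath.sin(x)) ** n
rhs = cmath.cos(n * x) + 1j * cmath.sin(n * x)
gap = abs(lhs - rhs)
print(gap)  # ~0
```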

Moved Euler characteristic material

I moved the material about the euler characteristic to the Euler_characteristic page. My reasons were

  • elimination of duplicated material
  • article should only focus on one topic and not on two totally unrelated topics.

For people who are looking for the euler characteristic on this page I have put a short note at the top.

I have tried to fix all broken links but I probably forgot some.MathMartin 22:18, 2 Aug 2004 (UTC)


Original proof

Does anyone know how Euler's original proof went? Also, are we certain that Euler's "proof" really was a proof? For one thing, there would not have been any commonly accepted definition of e^(ix), I imagine.


The "by calculus" proof

...is wrong because it ignores the constant of integration. Please fix it! --Zero 03:23, 5 Dec 2004 (UTC)

Independently discovered by Ramanujan?

What is the source for the claim: The formula was independently discovered by the Indian mathematician Srinivasa Ramanujan at the age of 11 (circa 1898-99).? Paul August 21:42, 28 December 2005 (UTC)

Good point. I would remove that from the text anyway. It is known that half or so of Ramanujan's results were not new; while he was a mathematical genius, he did most of his work in isolation (at least until getting to Britain). So, are we now to go visit all the articles for which Ramanujan rediscovered a given concept and mention that? If the article has a history section, and one can fit this observation alongside the original discoverer and other info, I am fine with it. Otherwise I would be against it. Oleg Alexandrov (talk) 21:52, 28 December 2005 (UTC)
Actually, this article does have a history section. So back to Paul's original question. :) Oleg Alexandrov (talk) 21:53, 28 December 2005 (UTC)
Just to set the record straight, and as a CYA, please note that I did not add this comment to the article, and I have no knowledge of its authenticity or lack thereof. But I did make some edits after the claim was added for the reasons noted in the history. I also suspected that this discussion would result. I don't know who added the sentence or what his source is. But I felt it was important to make the changes that I did in the meantime. -- Metacomet 22:01, 28 December 2005 (UTC)

I don't think it is worth mentioning. Probably it has been "rediscovered" many times. --Zero 22:18, 28 December 2005 (UTC)

I am not an expert on this topic, but I agree with Zero and Oleg. Even if it is true, I don't think it is important enough to merit a mention in this article. If nobody objects, I will remove it from the article within the next few days or so. -- Metacomet 18:31, 29 December 2005 (UTC)

Well unlike Zero, I doubt that it has been "rediscovered" many times. So if true I think it would be reasonably significant, so I'm not opposed to it being in the article — but of course it needs a source. Without one it should definitely be removed. — Paul August 19:32, 29 December 2005 (UTC)

I myself rediscovered Euler's formula at the age of 17, right after my high school calculus teacher wrote it on the blackboard.  ;-) Sorry, I couldn't resist a little humor (okay, very little). -- Metacomet 19:39, 29 December 2005 (UTC)
As you may have noticed, I just removed the sentence from the article for the reasons mentioned in the revision history. If someone does eventually find a verifiable source for this claim, I would recommend adding the sentence to the article on Srinivasa Ramanujan with a link to this article, but not including the sentence here. -- Metacomet 00:04, 2 January 2006 (UTC)

"Using Calculus" proof

Hi, I would really like to understand this proof. I understand everything up to the part where it says that:

integral of (dz/z) = integral of (i).

This is OK, but then the continuation is that:

ln z = ix + c.

The right side of the equation is understood, but why does the integral of (dz/z) = ln z?

I know that the integral of (1/z) = ln z, but this is not the case; the case is (dz/z), and dz is not equal to 1. So how come you can say that the integral of (z'/z) is like the integral of (1/z)?

I'd really be grateful for an explanation.

Whenever integrating, you need to specify which variable you are working with. The integral of (dz/z) is just a simplified way of writing the integral of (1/z) with respect to z (the 'dz'). I'm not sure how to use the formulas on Wikipedia, so I made an image of it and put it on my talk page. Hope this helps. timrem 03:22, 21 March 2006 (UTC)
I think I figured this out...
\int\frac{1}{z}\,dz=\int\frac{dz}{z} timrem 21:00, 30 March 2006 (UTC)

Calculus method oversight

For some real-valued variable x, \int \frac{dx}{x} = \ln{|x|}  + C. I'm not well-informed on how complex numbers affect integration rules, but is there any justification for dropping the absolute value when the variable is complex, as the calculus method does? -- anon

Things are much more complex for complex variables. |x| is no longer ±x, and the log, at least its principal branch, is no longer defined for z real and negative. I could offer a longer explanation, but the short answer is that the log in the complex plane is a very different function from the log on the real line (for example, log(ab) = log(a) + log(b) may not hold). Oleg Alexandrov (talk) 18:57, 18 May 2006 (UTC)
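The failure of log(ab) = log(a) + log(b) is easy to demonstrate with Python's cmath, whose log returns the principal branch (the particular numbers are arbitrary, chosen so the arguments wrap past π):

```python
import cmath

# Principal-branch log: Im(log z) lies in (-pi, pi].
# For a, b both in the second quadrant, arg(a) + arg(b) exceeds pi,
# so log(a*b) != log(a) + log(b): they differ by 2*pi*i.
a = complex(-1.0, 0.1)
b = complex(-1.0, 0.1)
diff = cmath.log(a * b) - (cmath.log(a) + cmath.log(b))
print(diff)  # approximately -2*pi*i
```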
This calculus method is strange anyway. Why not just show that \frac{\cos x+i\sin x}{e^{ix}} has vanishing derivative?--gwaihir 13:06, 18 May 2006 (UTC)

Minor Move

Ok, it's probably not as important as the discussions on the proofs, but I have moved the "see also" section so that it is ahead of the references and external links. Purely cosmetic, as I think it looks better to have the internal links ahead of the external ones. I hope no one minds. Help plz 16:19, 25 June 2006 (UTC)


Feynman

"Richard Feynman called Euler's formula "our jewel" and "the most remarkable formula in mathematics" (Feynman, p. 22-10)."

If I'm correct, Feynman was referring to Eulers identity in particular, not the formula. Should this be changed? -- He Who Is[ Talk ] 12:47, 10 July 2006 (UTC)

In addition, that citation needs to refer to which work of Feynman's that is from. Then maybe we can look it up to see which Euler's he was talking about. I say it goes. Lizz612 02:22, 1 August 2006 (UTC)

Me, I am wondering why we should listen to the opinion of a mere physicist :-)

About absolute values

Let x = \ln(y)

So y = e^{x}

\frac{dy}{dx} = e^{x}

\frac{dx}{dy} = \frac{1}{e^{x}}

\frac{dx}{dy} = \frac{1}{y}

dx = \frac{1}{y}\, dy

Integrating both sides:

\int 1\, dx = \int \frac{1}{y}\, dy

x = \int \frac{1}{y}\, dy

\ln(y) = \int \frac{1}{y}\, dy

Although the function may not be defined for some values, I don’t think an absolute value is necessary in this case. --Sav chris13 12:43, 25 July 2006 (UTC)
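On the absolute-value question itself, a finite-difference spot-check in Python (the values of y and h are arbitrary choices) shows that ln|y| differentiates to 1/y even for negative real y, which is where the |·| in the real-variable antiderivative comes from:

```python
import math

# Central-difference check that d/dy ln|y| = 1/y holds for negative real y.
y, h = -2.0, 1e-6
deriv = (math.log(abs(y + h)) - math.log(abs(y - h))) / (2 * h)
print(deriv, 1 / y)  # both ~ -0.5
```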

I removed this section (again). There is no complex-differentiable function "ln" on all of C×, so it would be necessary to explain what is meant by "ln" and why it does not matter which branch is chosen and so on. Much too complicated IMHO.--gwaihir 08:04, 27 July 2006 (UTC)

I removed the proof. We have enough proofs, and this proof is not very correct. You are using the integrating factor, but it does not work for complex variables. It can be fixed, but things are subtle in complex analysis, see antiderivative (complex analysis). Oleg Alexandrov (talk) 02:46, 28 July 2006 (UTC)

Another proof using calculus (under construction)

I hope this makes things clearer.

I intended to show full working for this proof, should I remove some intermediate steps?

What do you mean "There is no complex-differentiable function "ln"...."? The natural logarithm is defined for complex arguments and its derivative is 1/x. Or do you mean something else?

Anyhow the point is moot. This method is verifiable, see the following sources:

http://mathworld.wolfram.com/EulerFormula.html

http://www.answers.com/topic/euler-s-formula

http://everything2.com/index.pl?node_id=138398

http://mathforum.org/dr.math/faq/faq.euler.equation.html

http://www-structmed.cimr.cam.ac.uk/Course/Adv_diff1/Euler.html --Sav chris13 13:41, 27 July 2006 (UTC)

Let Z be a complex number

Z = \cos(\phi) + i \cdot \sin(\phi)

where \phi is the angle Z makes with the real axis (see the above diagram). So \phi = \arg(Z).

Differentiate with respect to \phi:
\frac{dZ}{d\phi} = -\sin(\phi) + i \cdot \cos(\phi)
\frac{dZ}{d\phi} = i \cdot i \cdot \sin(\phi) + i \cdot \cos(\phi)
\frac{dZ}{d\phi} = i [i \cdot \sin(\phi) + \cos(\phi)]
\frac{dZ}{d\phi} = i [\cos(\phi) + i \cdot \sin(\phi)]

Now remember Z = \cos(\phi) + i \cdot \sin(\phi)

So

\frac{dZ}{d\phi} = iZ
dZ = i \cdot Z d\phi
\frac{1}{Z}dZ = i d\phi

Integrating both sides

\int \frac{1}{Z}dZ = \int i d\phi
\ln(Z) = i \phi + C
Z = e^{i \phi + C}

To find the C value, consider that when Z=1, \phi = \arg(1) = 0

1 = e^{i \cdot 0 + C}
1 = e^{C}
C = 0

Therefore

Z = e^{i \phi}

Recall that

Z = \cos(\phi) + i \cdot \sin(\phi)
e^{i \phi} = \cos(\phi) + i \cdot \sin(\phi)

Q.E.D.
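The key step dZ/dφ = iZ in the proof above can be spot-checked with a central finite difference (a Python sketch; the values of φ and h are arbitrary choices):

```python
import cmath

# Z(phi) = cos(phi) + i*sin(phi); check numerically that dZ/dphi = i*Z.
def Z(phi):
    return cmath.cos(phi) + 1j * cmath.sin(phi)

phi, h = 0.9, 1e-6
deriv = (Z(phi + h) - Z(phi - h)) / (2 * h)  # central difference, O(h^2) error
resid = abs(deriv - 1j * Z(phi))
print(resid)  # ~0
```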

Good method on how to get the C value. But who can fix the posted proof? If
f'(x) = \frac{(-\sin x + i\cos x)\cdot e^{ix} - (\cos x + i\sin x)\cdot i\cdot e^{ix}}{(e^{ix})^2}
= \frac{-\sin x - i^2\sin x}{e^{ix}}
= 0


Therefore, f must be a constant function. Thus,
f(x)=f(0)=\frac{\cos 0 + i \sin 0}{e^0}=1
But where did this come from? If I had
f(x)=f(2 \pi / 3)=\frac{\cos 2 \pi / 3 + i \sin 2 \pi / 3}{e^{2 \pi / 3}} \ne 1
or
f(x)=f(\pi)=\frac{\cos \pi + i \sin \pi}{e^{\pi}} \ne 1
instead of
f(x)=f(0)=\frac{\cos 0 + i \sin 0}{e^0}
 ?
Please explain... I think the correct expression should be
f(x) = \frac{\cos x + i\sin x}{e^{ix}} = C
and then you will just find an argument that C = 1. Please help... --Kevin philippines 12:00, 9 September 2006 (UTC)

I disagree with your two inequalities. Why do you say that the result is not equal to 1? Oh, I see -- you dropped two i factors:

f(x) = f(2 \pi / 3)=\frac{\cos 2 \pi / 3 + i \sin 2 \pi / 3}{e^{i 2 \pi / 3}} = 1
f(x) = f(\pi) = \frac{\cos \pi + i \sin \pi}{e^{i \pi}} = 1


r b-j 01:34, 12 November 2006 (UTC)
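For what it's worth, once the missing i factors are restored, f really is identically 1; here is a quick Python check at the points discussed above (plus one arbitrary extra value):

```python
import cmath
import math

# f(x) = (cos x + i sin x) / e^{ix} should equal 1 for every real x.
def f(x):
    return (cmath.cos(x) + 1j * cmath.sin(x)) / cmath.exp(1j * x)

errs = [abs(f(x) - 1) for x in (0.0, 2 * math.pi / 3, math.pi, 5.123)]
print(max(errs))  # ~0
```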

Try this version.--gwaihir 15:25, 11 September 2006 (UTC)