Talk:Taylor's theorem

From Wikipedia, the free encyclopedia


Proof

Three expressions for R are available. Two are shown below.

It is a bit odd to say nothing about the 3rd. - Patrick 00:37 Mar 24, 2003 (UTC)

There ought to be a proof of this theorem.

Note that f(x) = e^(-1/x^2) (with f(0) = 0) is m times continuously differentiable everywhere for any m > 0, since its derivatives are made up of terms like x^(-n) e^(-1/x^2), all of which have limit 0 at 0. So all the derivatives vanish at 0. Taylor's theorem holds, but an infinite expansion similar to, say, that of e^x does not exist around 0. This could be worth mentioning. (A classic example of non-convergence of the Taylor series: the sequence of functions made up of the Taylor expansions without the remainder term does not always converge to the function.)
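The example above can be checked numerically; a minimal sketch (since every derivative of f vanishes at 0, every Taylor polynomial about 0 is identically zero, so the remainder at any x ≠ 0 is f(x) itself):

```python
import math

def f(x):
    # The classic flat function: f(0) = 0 and every derivative of f vanishes at 0.
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# Every Taylor polynomial of f about 0 is identically zero, so the Taylor
# "approximations" converge to 0 everywhere -- not to f.
x = 0.5
taylor_poly_value = 0.0                   # same for every degree n
error = abs(f(x) - taylor_poly_value)
print(error)                              # equals f(0.5) = e^(-4), clearly nonzero
```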

70.113.52.79 18:04, 14 November 2006 (UTC)

Actually, the entry 'Taylor series' has a complete explanation. A link to that entry should suffice.

Sustik 20:47, 14 November 2006 (UTC)

Removed text

I removed the following text from the article, which was added on Feb 5, 2003 by an anonymous contributor.

Proof:
Assume that f(x) is a function that can be expressed in terms of a polynomial (it does not have to appear to be one). The n-th derivative of that function will have a constant term as well as other terms. The "zeroth derivative" of the function (plain old f(x)) has what we will call the "zeroth term" (the term with the zeroth power of x) as its constant term. The first derivative will have as its constant term the coefficient of the first term times the power of the first term, namely 1. The second derivative will have as its constant term the coefficient of the second term times the power of the second term times the power of the first term: coefficient * 2 * 1. The next will be: coefficient * 3 * 2 * 1. The general pattern is that the n-th derivative's constant term is equal to the n-th term's coefficient times n factorial. Since a polynomial, and by extension one of its derivatives, equals its constant term at x = 0, we can say:
f^{(n)}(0) = a_n n!
\frac{f^{(n)}(0)}{n!} = a_n
So we now have a formula for determining the coefficient of any term of the polynomial version of f(x). If you put these together, you get a polynomial approximation for the function.
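For what it's worth, the coefficient formula in the removed text does hold for genuine polynomials; a quick sketch, using an arbitrary example polynomial represented as a list of coefficients:

```python
import math

def derivative(coeffs):
    # coeffs[k] is the coefficient of x^k; return the derivative's coefficients.
    return [k * c for k, c in enumerate(coeffs)][1:]

coeffs = [2.0, 3.0, 5.0, 7.0]            # the polynomial 2 + 3x + 5x^2 + 7x^3
p = list(coeffs)
recovered = []
for n in range(len(coeffs)):
    constant_term = p[0] if p else 0.0   # value of the n-th derivative at x = 0
    recovered.append(constant_term / math.factorial(n))
    p = derivative(p)
print(recovered)                         # [2.0, 3.0, 5.0, 7.0]
```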

I am not sure what this is supposed to prove, but it appears to be meant as a proof of Taylor's theorem. In that case, it does not seem quite right to me; in particular, the assumption in the first sentence ("f is a function that can be expressed in terms of a polynomial") is rather vague and appears to be just what needs to be proven. Hence, I took the liberty of replacing the above text with a new proof. -- Jitse Niesen 12:50, 20 Feb 2004 (UTC)

Jitse, your proof gives no basis to the fact that the remainder term gets smaller as n increases, nor any idea of the interval of convergence of the series. Therefore I will provide a different proof, taken from "Complex Variables and Applications" by Ruel V. Churchill of the University of Michigan. It involves complex analysis, however, and if you can provide a basis for the convergence of the series in your proof, I will put it back. Scythe33 22:03, 19 September 2005 (UTC)
The theorem, as formulated in the article, does not claim that the remainder term gets smaller as n increases; in fact, this only happens if the function is analytic. The article on power series talks about the interval of convergence, and the article holomorphic functions are analytic proves that the series converges.
In my opinion, the convergence of a Taylor series is an important point, which could be explained more fully in the article (as the overall organization should be improved), and you're very welcome to do so. Proving that the series converges does not seem necessary for the reasons I gave in the previous paragraph. -- Jitse Niesen (talk) 13:33, 27 September 2005 (UTC)

I appreciate the proof of Taylor's theorem in one variable; it is very good. The explanation is short and clear. Could we cite the original source on the main page? ("Complex Variables and Applications" by Ruel V. Churchill) Yoderj 20:17, 8 February 2006 (UTC)

The proof was not taken from that book. It is a completely standard proof which will be in many text books. -- Jitse Niesen (talk) 20:48, 8 February 2006 (UTC)

What is ξ?

In the Lagrange form of the remainder term, is ξ meant to be any number between a and x or is the theorem supposed to state that there exists such a ξ? I would guess the latter (because the proof uses the Mean Value Theorem), but the article doesn't make it totally clear. Eric119 15:51, 23 Sep 2004 (UTC)

Thanks for picking this up. I rewrote that part to clarify (hopefully). -- Jitse Niesen 16:00, 24 Sep 2004 (UTC)
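For a concrete illustration of the "there exists" reading, one can solve for such a ξ explicitly in a simple case; a sketch with f = exp, a = 0, x = 1, and n = 2:

```python
import math

# Lagrange form: R_2 = f'''(xi)/3! * (x - a)^3 for SOME xi between a and x.
# With f = exp, a = 0, x = 1: R_2 = e - (1 + 1 + 1/2) and f'''(xi) = e^xi.
remainder = math.e - (1.0 + 1.0 + 0.5)
xi = math.log(6.0 * remainder)     # solve e^xi / 6 = remainder for xi
print(xi)                          # about 0.27, indeed strictly between 0 and 1
```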

A few suggestions

I have a few, more stylistic, concerns about the article. I think it should be noted that, to state the Cauchy form of the remainder term, the (n+1)-th derivative of f (the function in the hypothesis of the theorem) must be integrable. I have a similar concern for the proof of the theorem in one variable: if proving the integral version of the theorem, in the inductive step we must assume that the theorem holds for a function whose first n derivatives are continuous and whose (n+1)-th derivative is integrable (this is the "n" case). Then, to prove that the theorem holds for n+1, we assume that a new function f has n+1 derivatives, all of which are continuous, and that its (n+2)-th derivative is integrable. Since f's (n+1)-th derivative is continuous, it is integrable, and we may then apply the inductive hypothesis to write an expression for f, with the remainder term written as an integral of the (n+1)-th derivative. Using integration by parts, we can then make a substitution to complete the induction. T.Tyrrell 05:13, 3 May 2006 (UTC)
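The integration-by-parts step of that induction can be written out explicitly; a sketch, using the antiderivative -(x-t)^{n+1}/(n+1)! of (x-t)^n/n! with respect to t (this is where the integrability of f^{(n+2)} is used):

```latex
\int_a^x \frac{(x-t)^n}{n!}\, f^{(n+1)}(t)\,dt
  = \left[-\frac{(x-t)^{n+1}}{(n+1)!}\, f^{(n+1)}(t)\right]_{t=a}^{t=x}
    + \int_a^x \frac{(x-t)^{n+1}}{(n+1)!}\, f^{(n+2)}(t)\,dt
  = \frac{(x-a)^{n+1}}{(n+1)!}\, f^{(n+1)}(a)
    + \int_a^x \frac{(x-t)^{n+1}}{(n+1)!}\, f^{(n+2)}(t)\,dt.
```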

Vector Exponents

What exactly is meant by (x − a)^α in the multi-variable case? I didn't think you could take powers of R^n vectors. Maybe I just don't understand the notation. At the very least it seems a little ambiguous.

This is multi-index notation. I moved the link to this article closer to the formula; hopefully it's clearer now. -- Jitse Niesen (talk) 13:26, 5 October 2006 (UTC)
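To spell out the convention, here is a sketch of how the multi-index power and factorial are evaluated (the function names are just illustrative):

```python
import math

def multiindex_power(x, a, alpha):
    # (x - a)^alpha = prod_i (x_i - a_i)^(alpha_i) for a multi-index alpha.
    result = 1.0
    for xi, ai, k in zip(x, a, alpha):
        result *= (xi - ai) ** k
    return result

def multiindex_factorial(alpha):
    # alpha! = prod_i (alpha_i)!
    p = 1
    for k in alpha:
        p *= math.factorial(k)
    return p

print(multiindex_power((2.0, 3.0), (1.0, 1.0), (2, 1)))  # (2-1)^2 * (3-1)^1 = 2.0
print(multiindex_factorial((2, 1)))                      # 2! * 1! = 2
```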

Lagrange error bound

I noticed that "Lagrange error bound" redirects here but is not specifically mentioned. I suggest that someone make the connection somewhere. —The preceding unsigned comment was added by 132.170.52.13 (talk • contribs).

minor point

Small point (and my experience is limited), but in the proof, when xf'(x) is expanded, I think it is really expanding x(f'(x)), so I think x should stay outside the integral sign. So instead of what's given (see the second term of the second line):

\begin{align}  f(x) &= f(a)+xf'(x)-af'(a)-\int_a^x \, tf''(t) \, dt \\ &= f(a)+\int_a^x \, xf''(t) \,dt+xf'(a)-af'(a)-\int_a^x \, tf''(t) \, dt \\ &= f(a)+(x-a)f'(a)+\int_a^x \, (x-t)f''(t) \, dt.  \end{align}


Maybe this would be clearer:

\begin{align}  f(x) &= f(a)+xf'(x)-af'(a)-\int_a^x \, tf''(t) \, dt \\ &= f(a)+x\int_a^x \, f''(t) \,dt+xf'(a)-af'(a)-\int_a^x \, tf''(t) \, dt \\ &= f(a)+(x-a)f'(a)+\int_a^x \, (x-t)f''(t) \, dt.  \end{align}

Phillipshowardhamilton 18:49, 4 February 2007 (UTC)
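Either way the two forms are equal, since x is a constant with respect to t; the identity the derivation ends with can also be sanity-checked numerically. A minimal sketch with f = sin (so f'' = -sin), using the trapezoidal rule:

```python
import math

# Check f(x) = f(a) + (x - a) f'(a) + integral_a^x (x - t) f''(t) dt for f = sin.
a, x = 0.3, 1.1
n = 100000
h = (x - a) / n
integral = 0.0
for i in range(n + 1):
    t = a + i * h
    w = 0.5 if i in (0, n) else 1.0          # trapezoidal endpoint weights
    integral += w * (x - t) * (-math.sin(t)) * h
lhs = math.sin(x)
rhs = math.sin(a) + (x - a) * math.cos(a) + integral
print(abs(lhs - rhs))                        # tiny: only quadrature error remains
```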