Talk:Taylor series
From Wikipedia, the free encyclopedia
Multivariate Taylor Series
Why was the section on `multivariate Taylor series' removed by 203.200.95.130? (Compare the version of 17:53, 2006-09-20 vs that of 17:55, 2006-09-20). I am going to add it again, unless someone provides a good reason not to. -- Pouya Tafti 14:32, 5 October 2006 (UTC)
- I agree with Pouya as well! There's no separate article on multivariate Taylor series on Wikipedia, so it should be mentioned here. Lavaka 22:22, 17 January 2007 (UTC)
- I have recovered the section titled `Taylor series for several variables' from the edition of 2006-09-20, 17:53. Please check for possible inaccuracies. —Pouya D. Tafti 10:37, 14 March 2007 (UTC)
The notation used in the multivariate series, e.g. f_xy, is not defined. Ma-Ma-Max Headroom (talk) 08:46, 9 February 2008 (UTC)
Unheadered junk
The Taylor series expansion for arccos is notably missing. I tried simplifying it myself, but I guess I'm not the sharpest knife in the drawer. Maybe someone else can figure it out and add it? --Jlenthe 01:08, 9 October 2006 (UTC)
How did Maclaurin publish his special case of the Taylor theorem in the 17th century (i.e. 1600's) if he was born in 1698? I suspect this is a mistake.
For the "List of Taylor series" I would like to have the first few terms of each series written out for quick reference. I could do it myself, but I don't want to mess anything up.
- Here's a start: I added the first few terms of tan x. 24.118.99.41 06:58, 24 April 2006 (UTC)
"Note that there are examples of infinitely often differentiable functions f(x) whose Taylor series converge but are not equal to f(x). For instance, all the derivatives of f(x) = exp(-1/x²) are zero at x = 0, so the Taylor series of f(x) is zero, and its radius of convergence is infinite, even though the function most definitely is not zero."
f(x) has no Taylor series for a=0, since f(0) is not defined. You have to state explicitly that you've defined f(x)=exp(-1/x²) for x not equal to 0 and f(0)=0 . This is merely lim[x->0] f(x), but it is a requirement for rigor.
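A quick numerical sketch of the point above (Python; purely illustrative): with f(0) defined to be 0 as required, every Maclaurin partial sum of f is identically zero, yet f is nonzero at every x ≠ 0.

```python
import math

def f(x):
    # f(x) = exp(-1/x^2) for x != 0, patched with f(0) = 0 as the
    # comment above requires; all derivatives at 0 are then 0.
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# Every Maclaurin partial sum of f is identically zero, so the "Taylor
# approximation" at any x is 0 -- yet f is clearly nonzero away from 0.
taylor_value = 0.0
print(f(0.5))                  # about 0.0183, nowhere near the Taylor value
print(f(0.5) - taylor_value)
```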
- Don't complain, fix! Wikipedia:Be bold in editing pages. -- Tim Starling 02:03 16 Jun 2003 (UTC)
By the way, would people call a Taylor series? Or does it have a name at all? If someone said something about a Taylor series of a 2D (or n D) function, I'd guess they meant something like that... Also, can the term analytic function refer to a 2D function? Κσυπ Cyp 19:01, 17 Oct 2003 (UTC)
- 1st question: sure, see e.g. http://www.csit.fsu.edu/~erlebach/course/numPDE1_f2001/norms.pdf - Patrick 19:51, 17 Oct 2003 (UTC)
- I just shoved it quickly into the article, at the bottom. Κσυπ Cyp 21:49, 17 Oct 2003 (UTC)
- Does the double sum in 2D form (in that PDF file) mean that I first have to go through the whole range of "r" and then increase "s" by one and yet again go for "r"s? I'm slightly confused (probably of my fault). 83.8.149.147 18:42, 17 January 2007 (UTC)
Shouldn't the article include something about the "Taylor" for whom the series are named? If I knew, I'd do it myself Dukeofomnium 16:41, 5 Mar 2004 (UTC)
- Good idea. Often a good way to start investigating such things is to click the "What links here" link on the article page. In this case, that reveals that the Brook Taylor page links to the article. -- Dominus 19:00, 5 Mar 2004 (UTC)
What is a "formulat"? it's on the last line. A typo or a word I'm unfamiliar with? Goodralph 16:28, 2 Apr 2004 (UTC)
Edited the geometric series to include cases where n might not start from zero. Stealth 17:22, Feb 19, 2005 (UTC)
In the Taylor series formula, what happens if x = a? When n = 0 we get 0 raised to the 0th power, which is undefined. The formula is correct if we define 0^0 = 1.
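For what it's worth, the 0^0 = 1 convention is exactly the one programming languages adopt, which is what makes a naive term-by-term evaluation of the series reproduce f(a) at x = a. A small Python illustration:

```python
import math

# The n = 0 term of the Taylor series is f(a) * (x - a)^0 / 0!.
# At x = a this is f(a) * 0^0, so the series only reproduces f(a)
# under the convention 0^0 = 1 -- which is also what Python uses:
print(0 ** 0)      # 1
print(0.0 ** 0)    # 1.0

# Taylor series of exp about a = 0, evaluated at x = a = 0:
total = sum(0.0 ** n / math.factorial(n) for n in range(10))
print(total)       # 1.0 == exp(0), thanks to 0.0 ** 0 == 1.0
```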
The Taylor series is also alternately defined as follows (I'm using LaTeX notation here): f(x + h) = f(x) + h f^{\prime}(x) + (h^2/2!) f^{\prime \prime}(x + \theta h) for some 0 < \theta < 1. I'm new to this field, so I'm reading up on this a bit before I can add this to the article with suitable comments, but I didn't find this form mentioned on Mathworld or most pages in the top 10 Google hits for Taylor's series.
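The quoted form (Taylor's formula with a mean-value-style remainder) can be checked numerically. A Python sketch, using f = exp with x = 0 and h = 1 purely as an illustrative choice; solving the identity for theta shows it indeed lands in (0, 1):

```python
import math

# Check of the form quoted above, with f = exp, x = 0, h = 1:
#   exp(1) = exp(0) + 1*exp(0) + (1/2)*exp(theta)  for some 0 < theta < 1.
# Solving directly: exp(theta) = 2*(e - 2).
theta = math.log(2 * (math.e - 2))
print(theta)                        # about 0.36, indeed inside (0, 1)

lhs = math.e
rhs = 1 + 1 + 0.5 * math.exp(theta)
print(abs(lhs - rhs))               # essentially 0
```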
How can you use Taylor series for integration? Also, could someone actually put the Maclaurin series in it so I can see how much it differs from the Taylor series?
Madhava
Madhava didn't invent the Taylor series, but he may have discovered the equivalent series expansion for a few limited cases, which is very different (but still impressive):
Madhava discovered the series equivalent to the Maclaurin expansions of sin x, cos x, and arctan x around 1400, which is over two hundred years before they were rediscovered in Europe. Details appear in a number of works written by his followers such as Mahajyanayana prakara which means Method of computing the great sines. In fact this work had been claimed by some historians such as Sarma (see for example [2]) to be by Madhava himself but this seems highly unlikely and it is now accepted by most historians to be a 16th century work by a follower of Madhava. This is discussed in detail in [4].
That quote is taken from [1], which is also the first external link listed on the Madhava article. I left in the credit to Madhava despite the fact that the above source calls into question whether he or one of his followers (two centuries later) discovered the aforementioned examples, because I'm in no position to weigh the validity of such claims. I think it's still significant enough to merit mention, since these examples from Indian mathematicians do seem to be the earliest known examples.
- Actually, on re-reading I think the above quote is only calling into question the authorship of Mahajyanayana prakara and not the discoverer of the series expansions that work contains. In any event, it's still clearly a limited result and needs to be described as such. --Wclark 21:25, 30 September 2005 (UTC)
Sounds good to me. --Pranathi 22:32, 30 September 2005 (UTC)
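As a side note, the arctan series attributed to Madhava above is easy to experiment with numerically. A Python sketch (using x = 1 to estimate π is just an illustration; convergence there is famously slow):

```python
import math

def arctan_series(x, n_terms):
    # Partial sum of the Maclaurin series arctan x = x - x^3/3 + x^5/5 - ...
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(n_terms))

# Madhava-style estimate of pi via 4 * arctan(1); the alternating-series
# error after n terms is below the first omitted term, 4/(2n + 1).
approx = 4 * arctan_series(1.0, 100_000)
print(abs(approx - math.pi))   # on the order of 1e-5
```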
Error Estimates
I think something on the error estimates for a truncated series would be useful. That's exactly what I'm looking for right now. User:NeilenMarais
I'm looking for the exact same thing. 69.140.90.164 01:15, 10 January 2006 (UTC)
I'm studying for a test, and looking for the same thing too. For now I'll stop being lazy and read my textbook. Maybe I can add something on the subject later when I have time. Eumedemito 03:26, 21 October 2007 (UTC)
- Oh, it's at Taylor's theorem (see "15 Where are the error bounds?"). Maybe there could be a mention to that section, though. —Preceding unsigned comment added by Eumedemito (talk • contribs) 03:38, 21 October 2007 (UTC)
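For readers landing here before clicking through to Taylor's theorem, a minimal Python sketch of the Lagrange error bound, using exp about 0 as an illustrative example:

```python
import math

def taylor_exp(x, n):
    # degree-n Maclaurin polynomial of exp
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x, n = 1.0, 5
actual_error = abs(math.exp(x) - taylor_exp(x, n))
# Lagrange bound: |R_n| <= M * |x|^(n+1) / (n+1)!, where M bounds the
# (n+1)-th derivative of exp on [0, x]; here M = e^x works.
bound = math.exp(x) * abs(x) ** (n + 1) / math.factorial(n + 1)
print(actual_error, bound)   # the true error sits below the bound
```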
Madhava of Sangamagrama
Actually, I think Archimedes should be credited with the first use of the Taylor series, since he used the same method as Madhava: using an infinite summation to achieve a finite trigonometric result. Liu Hui independently employed a similar method 400 years later, but still about 800 years prior to Madhava's work, although the Wikipedia article on Liu Hui does not reflect this.
In fact, it would have been quite easy for them to perform the same task as Madhava. It isn't difficult to square an arc (albeit in an infinite number of steps) using simple Euclidean geometry. I believe that Archimedes and later Liu Hui were aware of this. Last time I heard about it was at a History and Philosophy of Mathematics conference in 1998 at the Center for Philosophy of Science, University of Pittsburgh. Anyone care to dredge up a reference? 151.204.6.171
Feb. 08 2006: Possible error in proof of multivariable form of Taylor's Theorem
Hi, I'm not positive I've found an error so I'm just referring it to you for checking. In the proof of the multivariable form of Taylor's theorem, I believe that a parameterizing variable 't' has been assigned a false value. In the article it is assigned a value of zero, while I think that it should be of value 'one'. I'm only a lowly undergrad, so I'm most likely wrong here, but all the same, I'd appreciate it if you would check it out and let me know if it does indeed need correcting.
I'm also a little leery of the explanation for the coefficient of 'a' in the same proof: i!*C(i,alpha) does not equal 1/alpha! by my intuition, but instead equals 1/[alpha!*(i-alpha)!]. Perhaps my confusion stems from a sloppy transition from n=1 to n=N. This seems probable, but then would need considerable re-writing.
http://en.wikipedia.org/wiki/Taylor%27s_theorem P.S. I'd really appreciate feedback, thanks! --student4life 04:06, 9 February 2006 (UTC) Also, I'm going to edit a few errors in the explicit Taylor series expansions of both ln(1+x) and e^x/sin x. student4life 22:03, 9 February 2006 (UTC)
Hey, I think you are right about this. I am also an undergraduate, and don't have that much experience in this area, but to my knowledge there is always only one solution to , but if one wanted the sum over vectors that have an absolute value of 1, then there would be many solutions and it would actually need to be put in summation notation. --RETROFUTURE
Taylor series of f
Is it just me or should the Taylor series of f be written "T(f)" rather than "T(x)"....? Fresheneesz 02:26, 6 March 2006 (UTC)
- As a formal object, the Taylor series depends on the function f and the center a, so the notation T(f) would be better, or T(f,a) would be better still. However, on a more concrete level, the Taylor series should be viewed as a function, which I suppose the notation T(x) is meant to indicate. Notation is of course arbitrary. I am actually not aware of any standard notation for Taylor series, so I don't know whether there is a good precedent for using this slightly inaccurate notation. -lethe talk + 17:47, 31 March 2006 (UTC)
- How about T(f,a;x) or T(f,a)(x)? 130.234.198.85 23:20, 25 January 2007 (UTC)
- I think the answer to that is no. But! I have another question. Is a Taylor series a special case of power series? Because if so that should be noted in the definition, not as a passing comment. Fresheneesz 11:02, 29 March 2006 (UTC)
By 'special case' of a power series, are you asking if more than one unique power series converges to the function just like the Taylor series? Because if I remember right, which I don't always do, the Taylor series is not the only type of power series that can converge to an arbitrary function. 19:34, 17 July 2006 (UTC)
- The Taylor series can be based on the derivatives of the function at any value of x, not just 0 as in the Maclaurin series. Except for the trivial case of a constant function, all the series will be different and still represent the same function. -- Petri Krohn 22:48, 31 October 2006 (UTC)
Infinitely differentiable
I have a feeling that the Taylor series doesn't have to have an infinitely differentiable function. Such a series would simply end long before infinity, which isn't a crime as far as I'm concerned. I'll strike it from the definition if I get a confirmation.
Second question: what does a do? When it says that the function is "around the point x = " something, what does that mean? How does one choose a? These variables need to be better defined; I'll start a bit. Fresheneesz 10:46, 29 March 2006 (UTC)
- The Taylor series of a function depends only on the function's value and derivatives at a single point (this point is a), whereas a better approximation to a function might look at the values of the function at many points. For some smooth functions, knowing the derivatives to all orders can tell you very little about the function. For these functions, the Taylor series may be a lousy approximation. For other functions (known as analytic), a Taylor series tells you everything there is to know about the function. As for how you choose a, well, it's up to you. For an analytic function, it doesn't matter; choose it anywhere you want (close to the range where you want to approximate the function is best). For a meromorphic function, the Taylor series only tells you about the function up to the nearest pole, so choose a between the poles where you want to know about the function. -lethe talk + 13:53, 29 March 2006 (UTC)
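A small Python illustration of the "choose a between the poles" remark, expanding f(x) = 1/x about a = 1 (the example is mine, for illustration only): the pole at x = 0 forces the radius of convergence to be 1, so the series 1/x = sum_n (-1)^n (x - 1)^n works only for |x - 1| < 1.

```python
def partial_sum(x, n_terms):
    # Partial sums of the Taylor series of 1/x about a = 1:
    #   1/x = 1/(1 + (x - 1)) = sum_n (-1)^n (x - 1)^n
    return sum((-1) ** n * (x - 1) ** n for n in range(n_terms))

print(partial_sum(1.5, 50))   # converges to 1/1.5 = 0.666...
print(partial_sum(2.5, 50))   # |x - 1| = 1.5 > 1: partial sums blow up
```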
Fresheneesz, I disagree with your replacing real functions in Taylor's series with complex functions. If you want to be fully general, the function can take values in any Banach space, but that is beside the point. Let us stick to the most widespread case, that being functions of a real variable. I told you about this many times before; please do not try to be most concise, most general, etc. It harms the understanding of the article by people who don't know this stuff. Oleg Alexandrov (talk) 17:27, 29 March 2006 (UTC)
- There is every reason to discuss Taylor series in the context of analytic functions of a complex variable, after first mentioning the case of real variables. This is a math article. Not only are analytic functions in mathematics far more often thought of in the complex domain (contrary to what Oleg Alexandrov says), but one's understanding of Taylor series is greatly enhanced by this way of thinking. Example: The Taylor series of f(x) = 1/(x^2 + 1) is f(x) = 1 - x^2 + x^4 - x^6 + ..., which mysteriously converges only for |x| < 1 (and some boundary points). But if one considers the complex version f(z) = 1/(z^2 + 1) it is clear that this function has (pole) singularities precisely at z = ±i. Since each Taylor series' region of convergence is the inside of a circle of radius R in the complex plane (and possibly part of its boundary) for some 0 <= R <= oo, it is exactly these poles that explain why the radius of convergence R is equal to 1: because | ±i - 0 | = 1.
- It would be appropriate to limit the discussion to real variables *only* if the article were meant to be understood at the lowest possible level and no higher. But that is not how Wikipedia math articles are designed. How about beginning with real variables, then stating that the natural milieu of Taylor series is the complex plane, and using complex variables from then on? Daqu 16:25, 13 April 2006 (UTC)
- Hmmm... I guess it's true that analytic functions are usually thought of in the complex plane. After all, they can be uniquely extended to the entire complex plane. On the other hand, meromorphic functions (like your example) naturally live in different Riemann surfaces (not C). So instead of assuming that the variables are all complex, let's assume that they're valued in some Riemann surface. But actually, the article deals with functions of more than one variable. So we should really develop the theory of complex manifolds in the intro, and then throughout the article make all our variables understood in those terms. And we'll just close our eyes altogether to functions that are not even meromorphic. -lethe talk + 17:04, 13 April 2006 (UTC)
- Certainly a worthwhile question. (But no, an analytic function need not have any extension to the entire plane; as long as it has a definition (locally by power series) in a connected open subset U of C then it is analytic in U.) No, there is no necessity to get into analytic continuation here, though it might be referenced. With no reference to analytic continuation, it is simply true and begging to be mentioned that every Taylor series about c in C converges in the interior of a circle about c of some radius R (with 0 <= R <= oo) in the complex plane, and for no z with |z - c| > R. I'm not sure about Taylor series in > 1 variable, but for one variable there is even an explicit formula for R in terms of the coefficients b_n of the series: R = 1/(lim sup_{n → oo} |b_n|^(1/n)). Daqu 05:43, 16 April 2006 (UTC)
- Let me add an example. Without complex variables, it is difficult to understand the simplest phenomena with Taylor series:
Let f(x) = 1/(x^2 + 1). The Taylor series about x = 0 converges only in a disk of radius 1 about the center 0. But why? f(x) is perfectly well-behaved on all of the real numbers. The answer is that f(z) = 1/(z^2 + 1) becomes infinite as z -> i (or -i) in the complex numbers. Since Taylor series always converge in a circular disk in the complex plane, that disk's radius cannot exceed 1 (or it would include ±i, where the function is undefined!).Daqu 22:57, 13 July 2006 (UTC)
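Daqu's example is easy to see numerically; a Python sketch (illustrative only): inside the radius R = 1 the partial sums of 1 - x^2 + x^4 - ... converge to f(x), while just outside they blow up even though f itself is perfectly finite there.

```python
def geom_partial(x, n_terms):
    # Partial sums of the Maclaurin series of 1/(1 + x^2):
    #   1 - x^2 + x^4 - x^6 + ...
    return sum((-1) ** n * x ** (2 * n) for n in range(n_terms))

f = lambda x: 1 / (1 + x ** 2)
print(abs(geom_partial(0.5, 40) - f(0.5)))   # tiny: inside the radius R = 1
print(abs(geom_partial(2.0, 40) - f(2.0)))   # huge: outside R = 1, though f(2) = 0.2
```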
Mistake in the final example?
In the final example given, exp(x)/sin(x), the Maclaurin series for each of the functions are used and we are told to compare powers of x to evaluate the unknown coefficients; however, the coefficient of x^0 on the RHS is 1, while the coefficient of x^0 on the LHS is 0. This expansion is not a Taylor series. —Preceding unsigned comment added by 137.219.45.123 (talk • contribs)
- You're right about that. e^x/sin x doesn't even have a Taylor series about zero, there's a pole there! No wonder the example was left unfinished. I've changed the sin to cos, so the Taylor series should be defined. Removed a few steps in the calculation (it was painfully explicit). Should be OK now, I hope? Thanks for pointing out this embarrassing error. -lethe talk + 13:52, 31 March 2006 (UTC)
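The "compare powers of x" computation for the corrected example e^x/cos x can be done with exact rational arithmetic; a Python sketch of the coefficient matching (formal power-series division, up to x^4):

```python
from fractions import Fraction as F

# Maclaurin coefficients of exp and cos up to x^4, exact:
exp_c = [F(1), F(1), F(1, 2), F(1, 6), F(1, 24)]
cos_c = [F(1), F(0), F(-1, 2), F(0), F(1, 24)]

# "Comparing powers of x" amounts to power-series division: first invert
# cos (find d with cos * d = 1, coefficient by coefficient) ...
d = [F(1)]
for n in range(1, 5):
    d.append(-sum(cos_c[k] * d[n - k] for k in range(1, n + 1)))

# ... then multiply by exp (a Cauchy-product convolution):
quotient = [sum(exp_c[k] * d[n - k] for k in range(n + 1)) for n in range(5)]
print(quotient)   # e^x/cos x = 1 + x + x^2 + (2/3)x^3 + (1/2)x^4 + ...
```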
Wrong redirect
Series expansion redirects to this article. My opinion is that there are several kinds of series expansions, with the Taylor series being one of them. Therefore wouldn't it be better to have a separate article about series expansion? --Abdull 13:46, 5 June 2006 (UTC)
- You mean like the Fourier series expansion? Right now, I'm not inclined to make a separate disambiguation page, because I don't think there are enough different uses, but I might feel differently if you stated a case. For now, I will add a link to the top of the article. If you think more is needed, please say so. -lethe talk + 19:37, 21 June 2006 (UTC)
A Casual Proof needs revision
I am just starting here at Wikipedia, so I still need experience in my word choice, flow, voice, et cetera. I created the 'A Casual Proof' part. I thought of the proof myself, but it isn't very formal, so someone could make it at least a little more formal. If you think this is unnecessary, then I suppose we could remove it. Otherwise, polish it if you will. And getting that darn derivative to not intersect the f would be a nice help too, if someone knows how to do this.
Can someone tell me why this was taken out? Was it unnecessary because there was a proof on the Taylor Theorem page? RETROFUTURE 01:46, 18 July 2006 (UTC)
- Proofs are not that important in an encyclopedia, and they can be distracting. The Taylor series article is already big enough. I guess we are better off without it. Oleg Alexandrov (talk) 03:31, 18 July 2006 (UTC)
Okay then. RETROFUTURE 16:10, 18 July 2006 (UTC)
The name of the series
The most common name for the series is Taylor series, although it's often called MacLaurin series when used at 0. The article states all this correctly, but my question is: why did this way of naming arise? Sure, the MacLaurin series is a "special case" of the "more general" Taylor series, but it is so only in a very superficial sense. If you want the Taylor series expansion for e.g. sine at a, you can get it with the MacLaurin expansion by simple translation - that is, get the MacLaurin series expansion of sin(x-a). Given that, as articles says, MacLaurin's result was published earlier than Taylor's, why is the most common name the Taylor series? For the uninitiated it would seem as if Taylor unwittingly has taken the credit of MacLaurin's discovery (if you believe that the first person to discover something is in some way special) simply by stating the theorem in a more popular way. 82.103.195.147 11:09, 12 August 2006 (UTC)
- It seems to me that it's the same kind of thing as Rolle's Theorem and Mean Value Theorem. It's pretty much the same thing, but Rolle's theorem is more specific. RageGarden 03:55, 21 April 2007 (UTC)
- I agree that "Maclaurin series" would be the proper name for historical reasons (I have even heard that he considered also the general case, but I am not sure), but "Taylor series" is the term in use. I think that most mathematicians say "Taylor series" also in the case around 0, but most introductory books call this case "Maclaurin series". I suggest we leave the article as it is. Jesper Carlstrom 08:40, 21 April 2007 (UTC)
First example
The first example ends by saying Expanding by using multinomial coefficients gives the requisite Taylor series. Actually, using multinomial coefficients is not enough: to really get the coefficients we need to add infinitely many terms, a thing which should at least be mentioned. I think it would be better to substitute the cosine with the sine in the example, in order to get finitely many terms to add. 62.94.48.91 09:56, 28 August 2006 (UTC)
Rewrite of introduction on 31 October 2006
- Moved from User talk:Petri Krohn
I did a partial revert of your changes to the Taylor series article. Your changes introduced some mistakes. A Taylor series is not a sum of derivatives; it is a sum of terms, with each term being a derivative times a power over a factorial. Also, not all trigonometric functions are globally analytic, like the tangent function. Also, you introduced a subtle mistake by implying that partial sums are always a good approximation to an infinitely differentiable function. That is true only for analytic functions, and only then just in a range. You can reply here if you have any comments. Thanks. Oleg Alexandrov (talk) 04:24, 1 November 2006 (UTC)
- I think the old intro sucks. It gives the impression that the series is only an approximation of the function, and a tool in the "easy" calculation of values. I especially dislike this sentence: Functions that involve rational operations such as addition, subtraction, multiplication and division are relatively easy to evaluate. Many other functions aren't so easy to evaluate, like those that involve... This may be true, but I think it is weasel text with no place in the intro.
- The intro should point out three things:
- The series is constructed from the derivatives of the function. Knowing the values of the derivatives at one value of x allows one to calculate the value of the function everywhere.
- The series is not an approximation of the function, but exactly the same function, and can be substituted for it in mathematical proofs.
- The two things above only apply to a limited set of well behaved functions. The intro must be able to name this set of functions and direct to the relevant article.
- -- Petri Krohn 04:56, 1 November 2006 (UTC)
- P.S. Mathematicians like to see the formulas at the beginning of the article. This makes the article inaccessible to most readers. I believe 90% of readers will not read past the first formula. If anything important can be expressed verbally, it should be placed in the beginning before the first formula. -- Petri Krohn 05:05, 1 November 2006 (UTC)
- The statement Taylor series can be used to produce all the values of an analytic function, if the value of the function, and of all of its derivatives, is known at a single point.
- is accurate only locally. The new intro has other subtle mistakes. Oleg Alexandrov (talk) 05:07, 1 November 2006 (UTC)
- Another mistake: For trigonometric functions, the derivatives at 0 are usually trivial to produce.
- That is not true for tangent.
- Peter, please fix those. I will look again at the intro tomorrow. Oleg Alexandrov (talk) 05:09, 1 November 2006 (UTC)
- I tried some fine tuning. -- Petri Krohn 05:24, 1 November 2006 (UTC)
Where are the error bounds?
Most text books give error bounds -- either in terms of an integral or the value of one of the derivatives at a point in the interval between a and x. Why do we not have them here? JRSpriggs 12:55, 10 December 2006 (UTC)
- It is at Taylor's theorem. JRSpriggs 06:02, 2 January 2007 (UTC)
Derivations of Some Series
Hello, I'm a student learning about Taylor Series and I was wondering if there can be some additional pages that derive the Taylor Series for say cos(x). It would be interesting to see it done. —Preceding unsigned comment added by 69.255.197.49 (talk • contribs)
Taylor series with Lagrange and Peano remainders
Why is there nothing about those two remainders in the article?
Difference between Taylor series and Taylor Polynomials
I think it is necessary to include information regarding the difference between a Taylor series and a Taylor polynomial. They are not the same.
A Taylor series is an INFINITE series of terms which, in the limit, will be EQUIVALENT to the stated function, whether it be sin x, cos x, e^x... etc.
A Taylor polynomial is a defined number of terms as specified by the notation Pn(x), where n is the given number of terms. Because n is defined as a finite number, Pn(x) will be EQUAL to that expanded series to that degree, and therefore will not be equal to the Taylor series. It will be an approximation of it.
Please verify this information. EDIT... I messed up what was in bold... fixed now. —The preceding unsigned comment was added by 24.229.193.72 (talk) 16:10, 25 February 2007 (UTC).
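A Python sketch of the distinction (illustrative, using sin about 0): each Taylor polynomial P_n(x) is only an approximation, with the error shrinking as the degree n grows.

```python
import math

def taylor_poly_sin(x, n):
    # P_n(x): Maclaurin polynomial of sin, truncated at degree n
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n // 2 + 1) if 2 * k + 1 <= n)

x = 1.0
for n in (1, 3, 5, 7):
    # each P_n differs from sin(x); the gap shrinks as n grows
    print(n, abs(math.sin(x) - taylor_poly_sin(x, n)))
```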
Gradient present
Do we really need to transpose the gradient vector, i.e. write ∇f(a)^T (x - a) rather than ∇f(a) (x - a)?
Which one is the convention? Jackzhp 21:36, 11 April 2007 (UTC)
- The point is that both ∇f(a) and (x - a) are column vectors. So to form the inner product one must convert the first one into a row vector before matrix multiplication. JRSpriggs 07:47, 12 April 2007 (UTC)
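A small numpy sketch of the point (the two-variable function f here is an arbitrary illustrative choice): the first-order Taylor term is the inner product of the gradient with (x - a).

```python
import numpy as np

def f(v):
    # an arbitrary smooth function of two variables (illustrative choice)
    x, y = v
    return x ** 2 + 3 * x * y

def grad_f(v):
    # its gradient, conventionally a column vector
    x, y = v
    return np.array([2 * x + 3 * y, 3 * x])

a = np.array([1.0, 2.0])
x = np.array([1.1, 2.05])

# First-order Taylor term: grad f(a)^T (x - a).  The transpose turns the
# column into a row so the matrix product is a scalar; for 1-D numpy
# arrays, `@` computes exactly that inner product.
approx = f(a) + grad_f(a) @ (x - a)
print(approx, f(x))   # about 7.95 vs about 7.975 -- close for x near a
```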
Possible Uses
As it is right now, it states that Taylor series can be used as partial sums to approximate the function, but wouldn't it also be useful to say that as an infinite sum it can be used to show convergence, and that as an infinite sum the Taylor series exactly is the function? RageGarden 18:35, 19 April 2007 (UTC)
- In the paragraph above, it does say "Functions that are equal to their Taylor series around any point a in their domain are called analytic functions." I'm not sure what you mean by "it can be used to show convergence". We can sometimes interpret a constant series as a Taylor series at a point (i.e., a Taylor series with x replaced by some constant) and use knowledge of the convergence of the Taylor series to conclude convergence of the constant series. Is that what you mean? That might be worth mentioning. Doctormatt 21:01, 19 April 2007 (UTC)
- Yeah, sorry about the vagueness. I wasn't exactly sure how to word it but you got the general idea of what I was going for.RageGarden 04:08, 20 April 2007 (UTC)
- I did some editing. Is the result better? Jesper Carlstrom 09:01, 20 April 2007 (UTC)
Explaining my revert
I reverted some edits (link). Here is why:
- The partial sums of a Taylor series are called Taylor polynomials. I don't see why this should not be mentioned.
- You do need sufficiently many terms for a good approximation. For example: approximating e^x by 1+x is good only in some cases; you must take care to include sufficiently many terms for the problem considered. I don't see why that was removed.
- Finally, it is indeed necessary that the series converges. For instance, approximating (the real function) arctan by the Maclaurin series works only between -1 and 1; it does not help that it is analytic. Of course it helps if the function is analytic for all complex numbers (entire), simply because then the series converges! But this is on the other hand way too strict: arctan is a good example; it is not entire, but the Taylor series is useful anyway.
Jesper Carlstrom 07:19, 14 May 2007 (UTC)
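A quick numerical check of the second point above (illustrative values): 1 + x approximates e^x well only near 0, and more terms are needed as x grows.

```python
import math

# Error of the degree-1 Maclaurin approximation e^x ~ 1 + x at a few points:
errors = {x: abs(math.exp(x) - (1 + x)) for x in (0.01, 0.5, 2.0)}
for x, err in errors.items():
    print(x, err)   # tiny near 0, large at x = 2
```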
- It is quite possible for the series to converge to the WRONG value. So convergence is NOT ENOUGH. JRSpriggs 07:35, 14 May 2007 (UTC)
- You are right. On the other hand, analytic is not enough either (arctan). Entire seems a bit too much to assume. What conditions should we use? Jesper Carlstrom 08:02, 14 May 2007 (UTC)
- I now have a new proposal. By the way, it seems to me that the "right" criterion for the Taylor series to converge to the function (provided it converges at all) is: f is differentiable in an open complex neighborhood of a path from a to x. This is a bit too advanced, so maybe the best thing is to state the property for entire functions only. Jesper Carlstrom 08:30, 14 May 2007 (UTC)
- If the function has a complex derivative at every point in a disk centered on a, then the Taylor series converges uniformly to the function in any smaller disk centered on a. JRSpriggs 09:02, 14 May 2007 (UTC)
- Do you think that your suggestion would be better than the stuff I put there? The information you suggest to put there is essentially already to be found below in the article. I have the feeling that stating these things early would be to require too much from the readers. Jesper Carlstrom 15:08, 14 May 2007 (UTC)
- Thanks for your edit. By the way, notice that neighborhood redirects to neighbourhood. Jesper Carlstrom 09:30, 15 May 2007 (UTC)
My spelling checker (the one built into Firefox), does not recognize British spellings. JRSpriggs 07:25, 16 May 2007 (UTC)
History
I'm trying to understand the history section (from today). What on earth does "the second-order Taylor series approximations of the sine and cosine functions" mean? The second-order term is 0 - is this the discovery? Could that be stated in the language of the time? Moreover, what does this mean: "the power series of the radius, diameter, circumference, angle θ, π and π/4, along with rational approximations of π, and infinite continued fractions." What is a power series of the radius? What is the power series of θ? What do infinite continued fractions have to do with this? I seriously begin to wonder if someone is making fun of us. Jesper Carlstrom 11:57, 21 May 2007 (UTC)
Taylor series formula
I noticed the following comment associated with the Taylor series formula: As stated below, the Taylor series need not equal the function. So please don't write f(x)=... here Current formula
However, should the amendment be made which satisfies the statement above
--Zven 22:45, 13 July 2007 (UTC)
- You are referring to Taylor polynomials for which there is a link in the intro paragraph, so I don't think this needs inclusion in this article. However, I don't see this explicit form at Taylor polynomial either: perhaps you could find a good way to incorporate it there? Cheers, Doctormatt 23:44, 13 July 2007 (UTC)
- Yeah I think you are right, will have a look at the other article and see if it can be included --Zven 00:12, 14 July 2007 (UTC)
- This is still being discussed at Talk:Taylor's theorem#Taylor's theorem approximation. As I said there, I think this article is the best place to mention the Taylor polynomials. -- Jitse Niesen (talk) 12:27, 24 July 2007 (UTC)
[edit] figure text
"as the degree of the taylor series rises" is not nice because a power series has no degree.
The editors mainly consider real analysis rather than complex analysis, but they are not explicit about it.
In complex analysis a convergent Taylor series always converges to the function value f(x).
In real analysis a convergent Taylor series may converge to a value different from the function value f(x).
Bo Jacoby 23:23, 22 July 2007 (UTC).
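The distinction above matters because the partial sums (the Taylor polynomials) do have a degree, and it is their degree that rises in the figure. A minimal Python sketch of this, my own and not taken from the article, using the Maclaurin series of sin x:

```python
import math

def sin_taylor(x, n_terms):
    """Partial sum of the Maclaurin series of sin(x) with n_terms terms,
    i.e. a Taylor *polynomial* of degree 2*n_terms - 1."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n_terms))

x = 2.0
for n in (1, 2, 4, 8):
    approx = sin_taylor(x, n)
    print(f"degree {2*n - 1:2d}: {approx:+.10f}  error {abs(approx - math.sin(x)):.2e}")
```

The error shrinks as the degree of the polynomial rises, which is presumably what the figure caption was trying to say.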
[edit] Log Base What?
This page uses a logarithm function, but does not give the base of the log. —Preceding unsigned comment added by 72.196.234.57 (talk) 22:37, 18 September 2007 (UTC)
- You're right, that should be mentioned. Thanks, now fixed. -- Jitse Niesen (talk) 01:01, 19 September 2007 (UTC)
[edit] Complex Taylor series
In the introduction to the Taylor expansion it is stated, that the formulation is also valid for functions of complex variables. Does this mean, in practice, that one does not have to separate the complex variable z in its real and imaginary content for the Taylor expansion? One can just write the expansion in z itself and in the end everything goes well right? In the example:
where a and b are complex variables and c and d are complex constants, one could therefore write the first-order Taylor expansion as
It might be helpful to include some remarks on the use of complex functions and maybe possible restrictions in their application? —Preceding unsigned comment added by Ddeklerk (talk • contribs) 07:34, 29 October 2007 (UTC)
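In answer to the practical part of the question above: yes, one can expand in z directly. A small Python sketch (my own, with e^z as an arbitrary example) showing that the Taylor series applied to a complex argument, without separating real and imaginary parts, converges to the built-in complex exponential:

```python
import cmath

def exp_taylor(z, n_terms=30):
    """Partial sum of the Taylor series of e^z about 0, applied directly
    to a complex argument z -- no splitting into real/imaginary parts."""
    term, total = 1.0 + 0j, 0.0 + 0j
    for k in range(n_terms):
        total += term
        term *= z / (k + 1)   # next term: z^(k+1) / (k+1)!
    return total

z = 1.0 + 2.0j
print(exp_taylor(z))
print(cmath.exp(z))   # the two values agree to machine precision
```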
[edit] Why convergent?
Can anyone support this claim:
"The Taylor series need not in general be a convergent series, but often it is." Randomblue 20:57, 15 November 2007 (UTC)
- In the article, several examples are given of Taylor series that converge for every x. An example of a Taylor series that does not converge for any x is given in section Properties. Jesper Carlstrom 10:10, 16 November 2007 (UTC)
- It also points out that the series converges everywhere for all analytic functions, which takes care of the "often it is" part. -- Dominus 15:21, 16 November 2007 (UTC)
- Sorry, I was mistaken. The example in section Properties is one that converges everywhere, but not to the value of the function. There is no example of a Taylor series that diverges everywhere. But the warning that Taylor series need not converge can be read as saying that they need not converge everywhere. That is supported in the article. -- Jesper Carlstrom (talk) 21:32, 16 November 2007 (UTC)
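The convergent-but-wrong example mentioned above can be checked numerically. A small Python sketch (mine, not from the article) of f(x) = exp(-1/x^2) with f(0) defined as 0; all its derivatives at 0 vanish, so its Taylor series at 0 is identically zero and converges everywhere, yet f is nonzero for x != 0:

```python
import math

def f(x):
    """f(x) = exp(-1/x^2) for x != 0, with f(0) defined as 0."""
    return 0.0 if x == 0 else math.exp(-1.0 / x**2)

taylor_value = 0.0   # the Taylor series of f at 0 sums to 0 for every x
for x in (0.5, 0.25, 0.1):
    # the gap between f and its (convergent) Taylor series:
    print(x, f(x), abs(f(x) - taylor_value))
```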
[edit] Clarification requested
In the Convergence section, I think these sentences are unclear:
- "If f(x) is equal to its Taylor series in a neighborhood of a, it is said to be analytic in this neighborhood. If f(x) is equal to its Taylor series everywhere it is called entire. The exponential function ex and the trigonometric functions sine and cosine are examples of such functions."
Examples of which functions? Functions that are analytic? Entire? Both? I think it's unclear as currently written. --Kweeket Talk 00:30, 30 November 2007 (UTC)
- Agreed. I fixed that. Jesper Carlstrom (talk) 11:39, 30 November 2007 (UTC)
[edit] Integral of e^(x^2)
There was a huge ruckus at my school when I asked my maths teacher what this would be; he claimed this is inevaluable. I looked up some sites and it's widely stated that the Gaussian function (integral of e^(-x^2)) is evaluated using the Taylor series expansion of e^(-x^2). My simple question is that if the Taylor expansions accept complex arguments, would it be possible to substitute x by xi (i being the square root of -1) and reduce the Gaussian function expansion to e^(x^2), thereby evaluating the above integral term by term? Leif edling (talk) 18:00, 23 April 2008 (UTC)
- Well, the antiderivative of e^(x^2) can't be expressed in elementary functions either, although you can obviously write down a convergent power series for this. In fact, you can do this for either integral; it isn't hard. Probably your professor meant that there is no closed-form expression in elementary functions. I often tell students that this integral can't be evaluated without more advanced techniques. silly rabbit (talk) 21:24, 23 April 2008 (UTC)
- If you are allowed to use the (non-elementary) error function, then you can get a closed-form expression for the antiderivative of e^(x^2), which can be obtained from the (also non-elementary) antiderivative of e^(-x^2) by substituting xi for x. In general, if the function whose antiderivative is being sought is continuous and can be numerically evaluated (meaning it is possible to compute the numerical value assumed by the function for any numerically specified value of its argument), then basically any method for numerical integration will allow you to also numerically evaluate its antiderivative. This can also be used here, but in this case (just as for the antiderivative of e^(-x^2)) using the Taylor series expansion is faster and more accurate. --Lambiam 22:09, 26 April 2008 (UTC)
Rightly pointed out there, Lambiam. But unfortunately the error function is way beyond our syllabus at high school level (in fact it was probably beyond our math teacher's scope because he obviously knew nothing about it :P). Using the convergent power series expansion seems logical enough, as does using the error function. But is the error function valid for an indefinite integral? Leif edling (talk) 00:53, 29 April 2008 (UTC)
- Sure, all you need to do is add in a constant of integration:
- ∫ e^(x^2) dx = (√π/2) erfi(x) + C = −(i√π/2) erf(ix) + C
- --Lambiam 14:45, 30 April 2008 (UTC)
A new problem's arisen; a few mathematics textbooks here in India say that this integral "cannot be evaluated", along with a few other standard forms, e.g. x tan x (which can also, apparently, be evaluated as an infinite power series utilizing the Taylor series expansion). Isn't the statement "cannot be evaluated" wrong on the part of the authors? Leif edling (talk) 07:42, 15 May 2008 (UTC)
- At the very least it is an unfortunate and misleading statement. The usual meaning of "to evaluate" in mathematics is: "to ascertain the numerical value of". In that sense the integral can be evaluated just as well as the integral of e^x. For example,
- ∫₀¹ e^(x^2) dx = 1.46265 17459 07181 60880 40485 86856 98815 51208 70096 21673 91856 60114 58021 87633 14290 97917 ...
- --Lambiam 11:38, 19 May 2008 (UTC)
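The digits above can be reproduced by integrating the Taylor series of e^(x^2) term by term, which gives ∫₀¹ e^(x^2) dx = Σ 1/(n! (2n+1)). A quick Python sketch (my own, not part of the discussion):

```python
import math

def integral_exp_x2(upper, n_terms=30):
    """Integrate e^(x^2) from 0 to `upper` by integrating its Taylor
    series term by term: sum of upper^(2n+1) / (n! * (2n+1))."""
    return sum(upper**(2*n + 1) / (math.factorial(n) * (2*n + 1))
               for n in range(n_terms))

print(f"{integral_exp_x2(1.0):.10f}")  # 1.4626517459, matching the digits above
```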
[edit] Vector notation for multivariable Taylor series
Perhaps the following is worthy of addition into the article?
An alternative, more compact notation for the multivariable Taylor series.
Let f : R^n → R be a function of n real variables. Define the vectors x = (x_1, ..., x_n) and a = (a_1, ..., a_n). If f is infinitely differentiable at the point a, then the Taylor series expansion for f about the point a is:
f(x) = Σ_{k=0}^∞ (1/k!) ((x − a)·∇)^k f(a)
where ∇f is the gradient vector of f.
Note therefore that (x − a)·∇ is a differential operator. Saran T. (talk) 11:46, 8 May 2008 (UTC)
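As a numerical illustration of this compact notation (my own sketch, not part of the proposal): truncating the series at k = 2 gives f(a) + (x − a)·∇f(a) + ½ (x − a)ᵀ H(a) (x − a), where H is the Hessian. The function below is an arbitrary example with a hand-computed gradient and Hessian:

```python
import math

def taylor2(f, grad, hess, a, x):
    """Second-order multivariable Taylor approximation of f about a,
    evaluated at x: the k = 0, 1, 2 terms of sum_k ((x-a).nabla)^k f(a) / k!."""
    d = [xi - ai for xi, ai in zip(x, a)]
    first = sum(di * gi for di, gi in zip(d, grad(a)))
    H = hess(a)
    second = 0.5 * sum(d[i] * H[i][j] * d[j]
                       for i in range(len(d)) for j in range(len(d)))
    return f(a) + first + second

# Example function (my choice): f(x, y) = e^x * sin(y)
f    = lambda p: math.exp(p[0]) * math.sin(p[1])
grad = lambda p: [math.exp(p[0]) * math.sin(p[1]),
                  math.exp(p[0]) * math.cos(p[1])]
hess = lambda p: [[math.exp(p[0]) * math.sin(p[1]),  math.exp(p[0]) * math.cos(p[1])],
                  [math.exp(p[0]) * math.cos(p[1]), -math.exp(p[0]) * math.sin(p[1])]]

a, x = (0.0, 0.0), (0.1, 0.1)
print(taylor2(f, grad, hess, a, x), f(x))  # close: the gap is O(|x - a|^3)
```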