Talk:Chebyshev polynomials


WikiProject Mathematics
This article is within the scope of WikiProject Mathematics, which collaborates on articles related to mathematics.
Mathematics rating: B-Class, Mid-priority. Field: Analysis

Michael, it's not actually your browser. You can set that in your Wikipedia preferences - by default it will render simple equations in normal text. And you apparently just complicated the equation by adding some white space ;-) snoyes 23:37 Mar 1, 2003 (UTC)

You're right; it works. Thanks. Why is the default the option that renders things as normal text, given that that is never what is intended when things are set with TeX? Michael Hardy 23:45 Mar 1, 2003 (UTC)

I don't know why that is the default option - best to ask one of the developers, maybe on the village pump. --snoyes 23:53 Mar 1, 2003 (UTC)

And BTW, there don't seem to be any other articles about special polynomial sequences on Wikipedia. Michael Hardy 23:45 Mar 1, 2003 (UTC)


//Here's the polynomial sequence on the page. Κσυπ Cyp   19:47, 25 Jan 2004 (UTC)

#include <stdio.h>

int asdf(int, int);
int choo(int, int);

/* Print T_0 through T_16 as wiki markup, one polynomial per line. */
int main() {
  int n, k;
  for(n=0;n<=16;++n) {
    printf("T<sub>%d</sub>(x)=", n);
    for(k=0;k<=n/2;++k) {
      /* coefficient of x^(n-2k), with '+' before non-negative ones after the first */
      printf("%s%d", k&&asdf(n, k)>=0?"+":"", asdf(n, k));
      if(n-2*k) printf("x");                      /* power of x, if any */
      if(n-2*k>1) printf("<sup>%d</sup>", n-2*k); /* exponent, if above 1 */
      if(k==n/2) printf("<br>\n");
    }
  }
  return(0);
}

int tmp[1000];

/* asdf(n, k) is the coefficient of x^(n-2k) in T_n(x); it implements the
   recursion for xi(n, k) given below.  The right shift by 2(k-a) divides by
   4^(k-a); for the values that occur here the division is exact (and the
   shift of a negative operand assumes the usual arithmetic-shift behaviour). */
int asdf(int n, int k) {
  int r, a;
  if(!n) return(1);          /* T_0(x) = 1 */
  if(!k) return(1<<(n-1));   /* leading coefficient of T_n is 2^(n-1) */
  for(a=r=0;a<k;++a) r-=(asdf(n, a)*choo(n-2*a, k-a))>>(2*(k-a));
  return(r);
}

/* choo(n, k) is the binomial coefficient C(n, k), built up row by row as in
   Pascal's triangle (tmp holds the current row). */
int choo(int n, int k) {
  int x, y;
  for(tmp[0]=y=1;y<=n;++y) for(tmp[x=y]=0;x;--x) tmp[x]+=tmp[x-1];
  return(tmp[k]);
}

If we recursively define a function (the letter ξ is an arbitrary choice)
\xi(n,0)=\lceil 2^{n-1}\rceil
\xi(n,k)=-\sum^{k-1}_{a=0}\frac{\xi(n,a)}{4^{k-a}}{n-2a\choose k-a}
then
T_n(x)=\sum^{\lfloor n/2\rfloor}_{k=0}\xi(n,k)x^{n-2k}
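
For example, with n = 4 the recursion gives
\xi(4,0)=2^{3}=8,\qquad
\xi(4,1)=-\frac{\xi(4,0)}{4}{4\choose 1}=-8,\qquad
\xi(4,2)=-\frac{\xi(4,0)}{16}{4\choose 2}-\frac{\xi(4,1)}{4}{2\choose 1}=-3+4=1,
so that
T_4(x)=8x^4-8x^2+1,
which matches the program's output.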


Maxima and minima

It looks to me like the local maxima of the Chebyshev polynomials of the first kind all have value 1, and the local minima all have value -1. If so (I don't know enough to say whether it is actually true), then this is an interesting property that should be mentioned somewhere on the page. I'm guessing it probably pops right out of one of the definitions, but that's not transparent to a muggle like me. -dmmaus 04:01, 15 August 2005 (UTC)

I think that follows from the definition:

T_n(x) =
\begin{cases}
\cos(n \arccos x), & x \in [-1,1] \\
\cosh(n \, \mathrm{arcosh}\, x), & x \ge 1 \\
(-1)^n \cosh(n \, \mathrm{arcosh}(-x)), & x \le -1
\end{cases}

Obviously, for x in [-1, 1], since the behaviour is given by the cosine, the local maxima and minima are 1 and -1. If one can prove that there are no maxima/minima outside this interval, then your assertion would follow. Oleg Alexandrov 04:07, 15 August 2005 (UTC)

Ah, good point. I'm afraid my eyes glaze over when I see hyperbolic trig functions in the vicinity. :-) Still, I think it'd be nice to mention this property explicitly in the article, since it's the standout physical feature of the graphs when you look at them. -dmmaus 08:54, 15 August 2005 (UTC)

I don't know that much about these polynomials to attempt to write in the article. Let us hope somebody else will mention this fact. Oleg Alexandrov 17:04, 16 August 2005 (UTC)
I've mentioned it.
There's also an interesting parallel here, which is a bit OT. Notice that for a degree n polynomial (with fixed leading coeff) to have a minimal maximum absolute value on an interval, it has to reach this maximal absolute value the highest number of times possible: n+1 times. (Clearly a degree n polynomial can have at most n-1 places where its derivative is 0, so it's impossible that it has more than n+1 extremal points in an interval.) This parallels nicely with a general theorem in numerical analysis: for the polynomial p of degree n (with no restriction on the leading coeff) that approximates a continuous function f so that max|p-f| on an interval is minimal, |p-f| has to take this maximum n+2 times. I'm not sure how this general theorem is stated exactly, and I also don't know if it has any connection to this. – b_jonas 21:45, 29 December 2005 (UTC)
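
A quick numerical illustration of the maxima/minima property discussed above (my own sketch, in the same spirit as the program higher up; it evaluates T_n with the standard three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)):

#include <stdio.h>
#include <math.h>

/* Evaluate T_n(x) with the recurrence T_{k+1} = 2x T_k - T_{k-1}. */
double T(int n, double x) {
  double a = 1.0, b = x, t;
  int k;
  if(n == 0) return a;
  for(k = 1; k < n; ++k) { t = 2.0*x*b - a; a = b; b = t; }
  return b;
}

int main() {
  int n = 7, k;
  double pi = acos(-1.0);
  /* The extrema of T_n on [-1,1] sit at x_k = cos(k*pi/n), k = 0..n;
     the printed values alternate between +1 and -1. */
  for(k = 0; k <= n; ++k) {
    double x = cos(k*pi/n);
    printf("T_%d(%+f) = %+f\n", n, x, T(n, x));
  }
  return(0);
}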
Let me say thanks to User:172.128.197.224 for his correction of the statement in the second paragraph of the article. – b_jonas 18:30, 18 February 2006 (UTC)

Optimal approximations to arbitrary functions

The theorem says that, if an nth degree polynomial P approximates a function F on a closed interval in such a way that the error function P(x)-F(x) has n+2 local extrema, and those extrema have alternating signs and the same absolute value (that is, P-F attains the values +ε, -ε, +ε, ...), then that polynomial is locally optimal. That is, there is no other polynomial close to P that has maximum error less than ε. This locality situation is the same as in differential calculus: elementary calculus problems involve finding the maximum value of a function by setting the derivative to zero. That gives a local maximum (or minimum), but there might be other solutions far away.

I will outline the proof below. I think this theorem, and its proof, are interesting, but don't seem to be part of the standard maths curriculum. (Orthogonal polynomials in general seem to be a mathematical orphan child.) I would suggest that this proof ought to be on a separate "proof" page. WP is trying to minimize the number of such pages, because WP isn't a textbook, but this would probably be a good candidate.

The proof really requires a graph to motivate it. I'm new to WP; it should be made by someone who is comfortable making and uploading graphs, such as the graphs on the Chebyshev page.

Here's the proof: Suppose P(x)-F(x) has n+2 extrema, alternating in sign, all with absolute value ε. Suppose Q is another nth degree polynomial very close to P that is strictly better. Superimpose the graph of Q-F on that of P-F. Since Q is close to P, they will be about the same, but at the n+2 points where P-F is extremal, Q-F must lie strictly inside P-F. This is because the maximal excursions of Q-F are δ, which is strictly less than ε. So (Q-F)-(P-F) is strictly nonzero at those n+2 points, with alternating signs. But (Q-F)-(P-F) = Q-P, an nth degree polynomial. Since it takes n+2 nonzero values with alternating signs, it must have n+1 roots, which a nonzero nth degree polynomial can't have.
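
In symbols, the key step (my notation, with x_1 < \dots < x_{n+2} the alternation points) is
(Q-P)(x_i) = (Q-F)(x_i) - (P-F)(x_i), \qquad |(Q-F)(x_i)| \le \delta < \varepsilon = |(P-F)(x_i)|,
so (Q-P)(x_i) has the opposite sign to (P-F)(x_i) and therefore alternates in sign through i = 1, \dots, n+2.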

Now here's an outline of why Chebyshev interpolation gets very close to this optimum polynomial: If the Taylor series for a function converges very rapidly (the way, for example, sin, cos, and exponential do), the error from chopping off the series after some finite number of terms will be close to the first term after the cutoff. That term will dominate all the later terms, and that term plus all the later ones are of course the error function. The same is true if we use a Chebyshev series instead of a Taylor series.

Now if we expand F(x) in its Chebyshev series and chop it off after the T_n term, we have an nth degree polynomial. The error is dominated by the T_{n+1} term, so it looks like T_{n+1}. And T_{n+1} is level, with n+2 extrema of alternating sign. This means the error function is approximately level, and hence, by the theorem above, this polynomial is approximately optimal.
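
Here is a small numerical sketch of this (my own illustration, not from the article): it computes the Chebyshev coefficients of exp by Gauss-Chebyshev quadrature, truncates the series after T_5, and compares the maximum error on [-1,1] with |c_6|. The two printed numbers should come out close, since exp's coefficients decay quickly.

#include <stdio.h>
#include <math.h>

#define NQ 64  /* quadrature points; plenty for exp */

/* k-th Chebyshev coefficient of f: c_k = (2/pi) * integral over [0,pi] of
   f(cos t) cos(kt) dt, computed with the midpoint rule (which is exactly
   Gauss-Chebyshev quadrature here). */
double coeff(double (*f)(double), int k) {
  double pi = acos(-1.0), s = 0.0;
  int j;
  for(j = 0; j < NQ; ++j) {
    double t = pi*(j + 0.5)/NQ;
    s += f(cos(t))*cos(k*t);
  }
  return 2.0*s/NQ;
}

/* Truncated Chebyshev series c_0/2 + sum_{k=1}^{n} c_k T_k(x). */
double series(double *c, int n, double x) {
  double s = c[0]/2.0;
  int k;
  for(k = 1; k <= n; ++k) s += c[k]*cos(k*acos(x));
  return s;
}

int main() {
  int n = 5, i, k;
  double c[8], maxerr = 0.0;
  for(k = 0; k <= n+1; ++k) c[k] = coeff(exp, k);
  for(i = 0; i <= 1000; ++i) {
    double x = -1.0 + 2.0*i/1000.0;
    double e = fabs(series(c, n, x) - exp(x));
    if(e > maxerr) maxerr = e;
  }
  printf("max error = %g, |c_6| = %g\n", maxerr, fabs(c[n+1]));
  return(0);
}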

If one wants a truly optimal polynomial, one "tweaks" the polynomial P so that its error excursions are truly level. The Remez algorithm does this.

Should there be a separate WP page for this stuff, linked from Chebyshev? What should it be called? William Ackerman 18:46, 21 February 2006 (UTC)

Thanks for stating the theorem. – b_jonas 21:13, 21 February 2006 (UTC)

The relationship

I removed the following:

Starting with T_0(cos u) = 1 and T_1(cos u) = cos u, one quickly calculates that cos 2u = 2 cos^2 u - 1, cos 3u = 4 cos^3 u - 3 cos u, cos 4u = 8 cos^4 u - 8 cos^2 u + 1, etc. By setting u = 0, whence cos u = 1, one can immediately detect that both sides of each equation evaluate to unity, which is a good clue that the relationship may be correct - which, of course, it is.

What does "the relationship may be correct" mean? which relationship? Not Tn(cosθ) = cos(nθ) because thats a definition. It may show that the trignometric identities are true, but thats not the point of the article. PAR 13:27, 11 August 2006 (UTC)

Reasoning not correct

I removed the following part because it is nonsense:

From reasoning similar to that above, one can develop a closed-form generating formula for Chebyshev polynomials of the first kind:

\cos(n \theta)=\frac{e^{i n \theta}+e^{-i n \theta}}{2}=\frac{(e^{i \theta})^n+(e^{i \theta})^{-n}}{2}

which, combined with de Moivre's formula (THIS IS FALSE):


e^{i \theta}=\cos\theta+i \sin\theta=\cos\theta+i \sqrt{1-\cos^2\theta}=\cos\theta+\sqrt{\cos^2\theta-1}

yields:


\cos(n \theta)=\frac{\left(\cos\theta+ \sqrt{\cos^2\theta-1}\right)^n+\left(\cos\theta+ \sqrt{\cos^2\theta-1}\,\right)^{-n}}{2}

which, of course, is far more expedient for determining the cosine of N times a given angle than is cranking through almost N rounds of the recursive generator calculation. Finally, if we replace cos(θ) with x, we can alternatively write:


 T_n(x)=\frac{\left(x+ \sqrt{x^2-1}\right)^n+\left(x+ \sqrt{x^2-1}\right)^{-n}}{2}.
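
For what it's worth, this last formula does check out numerically for x >= 1 (where the square root is real), even though the derivation step flagged above is false. A quick sketch comparing it against the cosh form from the maxima/minima section:

#include <stdio.h>
#include <math.h>

int main() {
  int n;
  double x = 1.5;
  for(n = 0; n <= 8; ++n) {
    double u = x + sqrt(x*x - 1.0);
    double closed = (pow(u, n) + pow(u, -n))/2.0;  /* the formula above */
    double viadef = cosh(n*acosh(x));              /* cosh form of T_n  */
    printf("n=%d  closed=%.12g  cosh form=%.12g\n", n, closed, viadef);
  }
  return(0);
}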

Linear operators

Should someone write a section about these as the eigenfunctions of a second-order linear differential operator (which is symmetric with respect to the inner product (f,g)= \int_{-1}^{1} f(x)g(x)\frac{1}{\sqrt{1-x^2}}\,dx )? I would, but I'm not really comfortable with the theory yet... Marbini (talk) 17:54, 2 April 2008 (UTC)
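
For reference, the operator in question is the one from Chebyshev's differential equation (my summary, not vetted article text). In Sturm-Liouville form,
(1-x^2)\,y'' - x\,y' + n^2 y = 0
or, equivalently,
\frac{d}{dx}\left[\sqrt{1-x^2}\,\frac{dy}{dx}\right] + \frac{n^2}{\sqrt{1-x^2}}\,y = 0,
with T_n the eigenfunction for eigenvalue n^2. The weight 1/\sqrt{1-x^2} is exactly the one in the inner product above, which is why the operator is symmetric with respect to it.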