Wikipedia:Reference desk archive/Mathematics/2006 August 28

From Wikipedia, the free encyclopedia




handling curved space

I made the stick figures in Convex uniform honeycomb, and wish to do the same for their counterparts in spherical and hyperbolic 3-space. I know I'd not be the first to attempt ray-tracing in curved space. I have some books on appropriate subjects but, not surprisingly, they're more theoretical than practical. Where can I look for advice on appropriate systems of coordinates, useful formulae, and so on? —Tamfang 00:56, 28 August 2006 (UTC)

The Geometry Center did some nice work visualizing hyperbolic space, and they have a page of software, including Geomview, available for download. Even if you want to write your own code, this would be a good place to start. --KSmrqT 03:07, 28 August 2006 (UTC)

Taylor series

Does anyone recognize what function \sum_{n=1}^\infty \frac{x^n}{(2n+1)!} is? --HappyCamper 02:41, 28 August 2006 (UTC)

Yes, -i\sin(i\sqrt{x})/\sqrt{x}-x = ((\exp(\sqrt{x})-\exp(-\sqrt{x}))/2-\sqrt{x})/\sqrt{x} if I didn't make any mistakes (which I probably did). And since sin y /y is the zeroth spherical Bessel function, the above would involve the sph bess function along the imaginary axis, K_0. Perhaps that suggestive form helps. linas 03:01, 28 August 2006 (UTC)
I believe a correct answer is slightly different,
\frac{\sinh \sqrt{x}}{\sqrt{x}}-1 ,
noting that the hyperbolic sine, sinh z, can be rewritten −i sin iz. Compare the limits at zero, where the fraction goes to 1. --KSmrqT 03:28, 28 August 2006 (UTC)
Isn't that the same as linas's answer formulated differently? --LambiamTalk 23:26, 28 August 2006 (UTC)
That depends. When I posted my reply, linas had proposed the left-hand side above,
\frac{-i\sin(i\sqrt{x})}{\sqrt{x}}-x .\,\!
Later that was amended to include the right-hand side,
\frac{\frac{1}{2}(e^{\sqrt{x}}-e^{-\sqrt{x}})-\sqrt{x}}{\sqrt{x}} .\,\!
It is not hard to verify that linas' first formula is not equal to mine. But, is linas' equal sign correct? Ah! But we were warned to expect mistakes. --KSmrqT 00:59, 29 August 2006 (UTC)
I agree with KSmrq. (That was not my idea of fun.) - LambiamTalk 01:04, 29 August 2006 (UTC)
I agree with KSmrq. (That was fun.) - Rainwarrior 06:41, 28 August 2006 (UTC)
I should have seen that identity with i. Actually, my original function was 1 plus the function, so the answer is rather nice. Thanks everyone! --HappyCamper 22:03, 28 August 2006 (UTC)
Note that, in that case, your original function was actually \sum_{n=0}^\infty \frac{x^n}{(2n+1)!}. This explains why the final result is nice - artificially removing the n = 0 term can only complicate the function. -- Meni Rosenfeld (talk) 08:28, 29 August 2006 (UTC)
I bet it was presented that way because of the persistent superstition that 0^0 is an indeterminate form, rather than 1 by definition. --LambiamTalk 09:08, 29 August 2006 (UTC)
Actually, I simply didn't notice that n = 0 would have worked. I was more concerned about putting this beast into normal order. The variable here was a function of noncommuting operators. --HappyCamper 01:51, 30 August 2006 (UTC)
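For anyone who wants to double-check the closed form discussed above, a truncated partial sum of the series can be compared against sinh(√x)/√x − 1 numerically. A quick sketch (the function names and the 30-term cutoff are illustrative; valid for x > 0 so the square root is real):

```python
import math

def series(x, terms=30):
    """Partial sum of sum_{n=1}^inf x^n / (2n+1)!."""
    return sum(x**n / math.factorial(2 * n + 1) for n in range(1, terms))

def closed_form(x):
    """The closed form sinh(sqrt(x))/sqrt(x) - 1; assumes x > 0."""
    return math.sinh(math.sqrt(x)) / math.sqrt(x) - 1

print(series(2.0), closed_form(2.0))  # the two values agree closely
```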

Differentiating polynomials

Are there any fast ways to differentiate polynomials like 4x² + 5x³ − 8x² − 44x + 42? I'm not looking for the solution (this is not homework either), but for the method to solve it. I have basic knowledge of derivatives and Newton's fundamental law of differentiation. Thanks!

You just differentiate each term in the polynomial individually. So you take the first term, differentiate it to get 8x, then you take the second term, differentiate it and add the result to the 8x. Keep going until all the terms are done. Theresa Knott | Taste the Korn 13:31, 28 August 2006 (UTC)
To differentiate a polynomial, you invoke three rules of differentiation. The sum rule allows you to differentiate the polynomial term by term and then add the derivatives of the terms to get the final result. The constant multiplier rule, which is a special case of the product rule (note: the derivative of a constant is zero), lets you pull the constant out of a term: (axⁿ)' = a(xⁿ)'. And, finally, the power rule lets you differentiate (xⁿ)' = nxⁿ⁻¹. For example,
(4x^2-5x^3+2)' \,\!
{}=(4x^2)'+(-5x^3)'+2' \,\!
{}=4(x^2)'-5(x^3)'+0 \,\!
{}=4\cdot (2x)-5\cdot(3x^2) \,\!
{}=8x-15x^2 \,\!
(Igny 15:31, 28 August 2006 (UTC))
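The term-by-term recipe above is mechanical enough to write as code. A minimal sketch in Python, representing a polynomial by its list of coefficients (the function name is illustrative, not from the thread):

```python
def differentiate(coeffs):
    """Power rule applied termwise: input [a_0, a_1, ..., a_n]
    representing a_0 + a_1 x + ... + a_n x^n; output has degree one less."""
    return [n * a for n, a in enumerate(coeffs)][1:]

# Igny's example 4x^2 - 5x^3 + 2, lowest degree first:
print(differentiate([2, 0, 4, -5]))  # -> [0, 8, -15], i.e. 8x - 15x^2
```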
When practiced in calculus, simple polynomials like this are already fast to solve. The power rule should come very readily to mind, and using it the solution writes itself. Do some reading about the power rule, and maybe find a proof (or prove it yourself using the limit definition of a derivative). - Rainwarrior 17:21, 28 August 2006 (UTC)
At a slightly higher level, we can say that differentiation is a linear operator, so that a weighted sum as input produces a weighted sum as output. Since a polynomial in x is a weighted sum of powers of x, the only remaining question is what happens to xn for all non-negative integers n.
We do, however, have some advanced tricks available. One possibility is to use dual numbers for automatic differentiation. (See the article.) Another is to adapt synthetic division in a Horner scheme to evaluate a polynomial and its derivative simultaneously. The Horner method of evaluation nests the multiplications.
a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n = a_0 + x (a_1 + x(a_2 + \cdots x (a_n)\cdots)) \,\!
As an algorithm we would write
k := n
p := a_k
while (k > 0)
    k := k-1
    p := a_k + x*p
To produce the derivative as well, modify this to
k := n
p := a_k
dp := 0
while (k > 0)
    k := k-1
    dp := p + x*dp
    p := a_k + x*p
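The algorithm translates directly into code. A sketch in Python, with coefficients given lowest degree first (variable names follow the pseudocode):

```python
def horner_with_derivative(coeffs, x):
    """Evaluate p(x) and p'(x) together by nested multiplication,
    as in the pseudocode above. coeffs = [a_0, a_1, ..., a_n]."""
    p = coeffs[-1]  # p := a_n
    dp = 0          # dp := 0
    for a in reversed(coeffs[:-1]):
        dp = p + x * dp
        p = a + x * p
    return p, dp

# The cubic from the question, with like terms combined: 5x^3 - 4x^2 - 44x + 42
print(horner_with_derivative([42, -44, -4, 5], 2))  # -> (-22, 0)
```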
In numerical analysis it is often preferable to write a polynomial, not as a weighted sum of powers of x (the "power basis"), but as a weighted sum of Bernstein polynomials (the "Bernstein basis"). This makes differentiation really easy. Denote the k-th Bernstein polynomial of degree n by Bnk. Then the derivative of
b_0 B_0^n(x) + b_1 B_1^n(x) + b_2 B_2^n(x) + \cdots + b_n B_n^n(x) \,\!
is found by subtraction.
n\left((b_1 - b_0) B_0^{n-1}(x) + (b_2 - b_1) B_1^{n-1}(x) + \cdots + (b_n - b_{n-1}) B_{n-1}^{n-1}(x)\right) \,\!
For example, the polynomial in the post is a cubic, so we would use the cubic Bernstein polynomials.
B_0^3(x) = (1-x)^3 \,\!
B_1^3(x) = 3(1-x)^2 x \,\!
B_2^3(x) = 3(1-x) x^2 \,\!
B_3^3(x) = x^3 \,\!
Rewritten in this basis, the given polynomial would be
42 B_0^3(x) + \frac{82}{3} B_1^3(x) + \frac{34}{3} B_2^3(x) - B_3^3(x) , \,\!
and its derivative would be
-44 B_0^2(x) -48 B_1^2(x) -37 B_2^2(x) , \,\!
written in terms of the quadratic Bernstein polynomials.
B_0^2(x) = (1-x)^2 \,\!
B_1^2(x) = 2(1-x) x \,\!
B_2^2(x) = x^2 \,\!
Here, for example, the coefficient −37 is (−1 − 34/3) times 3.
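The subtraction rule is one line of code. A quick sketch in Python, using exact fractions since the coefficients above are rational:

```python
from fractions import Fraction as F

def bernstein_derivative(b):
    """Derivative coefficients in the Bernstein basis: degree-n
    coefficients b_0..b_n map to n*(b_{k+1} - b_k), degree n-1."""
    n = len(b) - 1
    return [n * (b[k + 1] - b[k]) for k in range(n)]

# The cubic from this thread: 42, 82/3, 34/3, -1
d = bernstein_derivative([F(42), F(82, 3), F(34, 3), F(-1)])
print(d == [-44, -48, -37])  # True
```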
For pure mathematics and introductory differential calculus, linearity and the power basis will be the tools of choice; but in applied mathematics, these additional methods are good to know. As just one example, it is well known that Bézier curves are equivalent to use of the Bernstein basis, so computer graphics and font algorithms regularly use this last method to find tangents. --KSmrqT 19:24, 28 August 2006 (UTC)

series

1,1,2,3,5,8,13,21,........10946 —The preceding unsigned comment was added by 203.101.164.244 (talk • contribs) 19:09, 28 August 2006 (UTC)

Hike! Melchoir 19:10, 28 August 2006 (UTC)
See Fibonacci number. For future reference, consult the On-Line Encyclopedia of Integer Sequences. --KSmrqT 19:27, 28 August 2006 (UTC)
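To see where 10946 falls in the sequence, a quick sketch (assumes the limit is itself a Fibonacci number, otherwise the last element overshoots):

```python
def fib_upto(limit):
    """Fibonacci numbers starting 1, 1, up to and including limit."""
    seq = [1, 1]
    while seq[-1] < limit:
        seq.append(seq[-1] + seq[-2])
    return seq

print(fib_upto(10946))  # ends ..., 4181, 6765, 10946
```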

Tanh^2

Hi, I know how to calculate tanh(x), but what's tanh^2(x)? Is it just tanh(tanh(x))? Thanks! --Mary

No, it's usually just an abbreviation for (tanh(x))^2 – b_jonas 20:15, 28 August 2006 (UTC)
Ah, thanks. --Mary
Be careful: while sin²(x) means [sin(x)]², sin⁻¹(x) often means arcsin(x). 68.100.203.44 18:25, 31 August 2006 (UTC)
Trig functions with exponents are an ambiguous notation; you should ask for clarification. sin²(x) usually means [sin(x)]², but could also mean sin(sin(x)). Better to ask/check. — QuantumEleven 08:20, 1 September 2006 (UTC)
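The two readings give different numbers, which is easy to see directly (an illustrative sketch; the choice x = 0.5 is arbitrary):

```python
import math

x = 0.5
squared = math.tanh(x) ** 2           # the usual reading of tanh^2(x)
iterated = math.tanh(math.tanh(x))    # the (rare) composition reading
print(squared, iterated)              # two different values
```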