Wikipedia:Reference desk/Archives/Mathematics/2008 May 8


Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.




May 8

Integration Problem

I've tried evaluating \int^1_0 \int^1_0\int^1_0\cdots \int^1_0\int^1_0\int^1_0 \frac{1}{1-x_1x_2\cdots x_{n-1}x_n}\,dx_1\,dx_2\cdots \,dx_{n-1}\,dx_n for values of n=1, 2, and 3, and they came out nicely to \infty (divergent), ζ(2), and ζ(3) respectively. But I know from the article on Multiple integral that you can't literally take infinitely many nested integrals. Does this integral actually converge to 1 as n approaches infinity? This is NOT my homework; this problem came up while my friend and I were giving ourselves some calculus problems to do. Thanks. —Preceding unsigned comment added by 70.111.161.98 (talk) 02:28, 8 May 2008 (UTC)

The integral is indeed equal to ζ(n) for every n ≥ 2 (for n = 1 both the integral and ζ(1) diverge). Since ζ(n) converges to 1 as n goes to \infty, so does the integral. -- Meni Rosenfeld (talk) 09:42, 8 May 2008 (UTC)
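A sketch of why the identity holds, using nothing beyond the geometric series: since 0 \le x_1 x_2 \cdots x_n < 1 almost everywhere on the unit cube, one can expand the integrand as a geometric series and integrate term by term (the interchange is justified by monotone convergence, as all terms are non-negative):

\int_0^1 \cdots \int_0^1 \frac{dx_1 \cdots dx_n}{1 - x_1 x_2 \cdots x_n} = \sum_{k=0}^\infty \int_0^1 \cdots \int_0^1 (x_1 x_2 \cdots x_n)^k \,dx_1 \cdots dx_n = \sum_{k=0}^\infty \left( \int_0^1 x^k \,dx \right)^{\!n} = \sum_{k=0}^\infty \frac{1}{(k+1)^n} = \zeta(n).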

JC-1 H2 Maths: Partial Fractions headache

I am revising for a test. I can do most of the partial fractions questions, until this one:

\frac{8x^2+4x+1}{(x^2+1)(2x-1)}

I have to split the big fraction into two small fractions.

The normal method is to substitute values of x so that one of the factors in the denominator equals 0, but I cannot make x^2+1=0, because then it would be x^2 = -1, and the square of a real number cannot be negative. (We have not learned imaginary numbers.)

What to do? —Preceding unsigned comment added by 166.121.36.232 (talk) 05:18, 8 May 2008 (UTC)

When you have an irreducible polynomial of degree 2 or greater in the denominator (i.e. when one of the factors on the bottom is more than linear and can't be broken down), you have to do a couple of things:
  • First, you have to note that in the partial fraction decomposition, the fraction with the non-linear denominator will have a numerator of degree one less than that of the denominator. That is, if you have a quadratic factor, as in this case, its part in the decomposition will not just be C/(x^2+1), but (Bx+C)/(x^2+1), which you need to take into account when finding the decomposition. (If you have a repeated factor, e.g. (x+3)^2, you need a decomposition that includes both A/(x+3) and B/(x+3)^2, with distinct constants.)
  • Secondly, to actually find the coefficients in the decomposition, you can either do it the hard way or the usually-not-quite-so-hard way. In both cases, you start with an equation such as \frac{1}{(x^2+1)(2x-1)} = \frac{A}{2x-1}+\frac{Bx+C}{x^2+1}. Then, you multiply both sides by the denominator, giving 1 = A(x^2 + 1) + (Bx + C)(2x − 1). In the hard way, you then expand it all out and equate coefficients (giving you n simultaneous equations to solve, where n is the degree of the original denominator). In the usually-not-quite-so-hard way, you notice that the equality holds for all values of x, and choose smart ones to simplify your calculations. The simplest values are those that make one of the terms zero - in this case, x = 1/2 works nicely to give you A. After that, it's up to you to choose nice ones. Some suggestions include x = 0, 1 and -1, since they normally result in things you can work with easily. Also remember that once you've found one of the coefficients, you can use its value in further calculations rather than trying to eliminate it. (A worked sketch for the original fraction follows below.) Confusing Manifestation(Say hi!) 07:02, 8 May 2008 (UTC)
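Applying the above to the original fraction, a worked sketch: write

\frac{8x^2+4x+1}{(x^2+1)(2x-1)} = \frac{A}{2x-1} + \frac{Bx+C}{x^2+1},

so that 8x^2+4x+1 = A(x^2+1) + (Bx+C)(2x-1). Substituting x = \tfrac12 gives 5 = \tfrac54 A, so A = 4; substituting x = 0 gives 1 = A - C, so C = 3; and comparing coefficients of x^2 gives 8 = A + 2B, so B = 2. Hence

\frac{8x^2+4x+1}{(x^2+1)(2x-1)} = \frac{4}{2x-1} + \frac{2x+3}{x^2+1}.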
That’s something I’ve never quite understood about the “usually-not-quite-so-hard way”: in the original equality you couldn’t plug in x = 1/2, because it would make a denominator zero. Why does the equation, after multiplying by the denominators, hold true for 1/2? Further, why does using that point give you any information about A, B, C? The best I’ve been able to come up with is that it is because the new equation is continuous, but I can’t quite string out the logic as to exactly how that implies that the A, B, C found are the same as in the original. GromXXVII (talk) 11:07, 8 May 2008 (UTC)
You've answered your own question. The equality (before multiplying, and consequently afterward) holds for any x \neq \tfrac12. After multiplying, both sides of the equation are continuous, so from the equation being true for any x \neq \tfrac12 we can deduce that it is also true at the limit point \tfrac12. It gives you information about A, B and C because the equality does not hold for all values of those constants; the fact that it does hold therefore tells you something about them. -- Meni Rosenfeld (talk) 11:19, 8 May 2008 (UTC)
If you move onto the complex plane, you can talk about analyticity and the fact that you create a removable singularity (i.e. a discontinuity that you can replace by a limit value and turn it into a point of continuity). Confusing Manifestation(Say hi!) 04:55, 9 May 2008 (UTC)
And, building on that, while the original rational functions can't be said to be equal at 1/2, their residues at that point certainly are. -- Meni Rosenfeld (talk) 15:03, 9 May 2008 (UTC)

Another integration problem

Is there a solution for:

S = \int_a^b x^x\,dx.

Or is there any antiderivative for that matter? /81.233.41.161 (talk) 15:57, 8 May 2008 (UTC)

No, the function does not have an elementary antiderivative. For some particular values of a and b you can say something intelligent about the definite integral, e.g. sophomore's dream. -- Meni Rosenfeld (talk) 16:38, 8 May 2008 (UTC)
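For a concrete check of the a = 0, b = 1 case, here is a minimal numerical sketch (assuming Python with SciPy is available) comparing quadrature of the integral against the sophomore's dream series \int_0^1 x^x\,dx = \sum_{k=1}^\infty (-1)^{k+1} k^{-k}:

    # Compare a numerical quadrature of x^x on [0, 1] with the
    # sophomore's dream series; both come out to about 0.7834305107.
    from scipy.integrate import quad

    integral, _ = quad(lambda x: x**x, 0, 1)
    series = sum((-1)**(k + 1) / k**k for k in range(1, 20))
    print(integral, series)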

No antiderivative

Hi. I've been playing with some integrals, and I was working on one which, after a change of variable, turned into \int \cos(x^2)\,dx, which I don't think has any elementary anti-derivative. I say that because my copy of James Stewart's Calculus identifies \sin(x^2) as a function for which we have no anti-derivative, so the same must be true for \cos(x^2), since \cos(x)=\sin(x+\frac{\pi}{2}). I have two questions: First, what's the best way to tell when you've found one of these functions without any elementary anti-derivative? Secondly, I know that some functions with no elementary anti-derivative, such as e^{-x^2}, have special functions (the Error function) defined to be their anti-derivative. Are such functions listed somewhere? Is there one for my \cos(x^2) function? How can one tell?

Thanks in advance for any insights anyone can offer. -GTBacchus(talk) 17:52, 8 May 2008 (UTC)

I doubt there is any general procedure to determine if a function has an elementary antiderivative. For practical purposes, the best course of action will be to feed the function to a CAS and see if it is able to express it. In your case, you should take a look at Fresnel integral. -- Meni Rosenfeld (talk) 18:19, 8 May 2008 (UTC)
I think Meni's advice to feed the function to a CAS is usually the best method of finding out whether some function has an elementary primitive.
There are so many non-elementary antiderivatives that it would be hard to build a useful list. In any case, the article List of special functions and eponyms contains references to many of these non-elementary functions (as well as to many other ones). The article on special functions may also be of interest. Here is also an external site. Pallida  Mors 18:41, 8 May 2008 (UTC)
Thanks very much! -GTBacchus(talk) 23:48, 8 May 2008 (UTC)
There is in fact such a general procedure: the Risch algorithm – although it may be disputed whether this procedure is truly an algorithm in the formal sense, as it needs an oracle that tells whether a given expression equals 0. In any case, the procedure is too complicated for manual use.  --Lambiam 09:39, 9 May 2008 (UTC)
Oh, for some reason I was under the impression that it wouldn't terminate for functions with no elementary antiderivative. -- Meni Rosenfeld (talk) 11:03, 9 May 2008 (UTC)
One more thing - you don't even need to have a CAS available. You can simply go to the Wolfram Integrator. -- Meni Rosenfeld (talk) 19:28, 10 May 2008 (UTC)
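As an illustration of the CAS approach, here is a minimal sketch (assuming Python with SymPy is available); the antiderivative it prints involves the non-elementary Fresnel C function discussed above rather than any elementary expression:

    # Ask a CAS for an antiderivative of cos(x^2); SymPy expresses it
    # through the Fresnel integral fresnelc, not elementary functions.
    from sympy import symbols, cos, integrate

    x = symbols('x')
    print(integrate(cos(x**2), x))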

Proof Generator?

Is there a program that can generate mathematical proofs if given an equation? If so, where is it? 75.170.42.250 (talk) 22:26, 8 May 2008 (UTC)

See Automated theorem proving. Algebraist 22:43, 8 May 2008 (UTC)
What do you mean by "given an equation"? You can't prove an equation, you can just prove that the equation is implied by some premise. --Tango (talk) 22:51, 8 May 2008 (UTC)
The same can be said about most statements that we prove. E.g. when we prove that the sum of the angles in a triangle is 180 degrees, we are "just" proving that this is implied by the axioms of Euclidean geometry. Taemyr (talk) 23:17, 8 May 2008 (UTC)
You can word it as "If a, b and c are the angles in a triangle, then a+b+c=180", you can then prove that equation given the stated premise (obviously, you need definitions of the terms, and that's where the axioms of Euclidean geometry come in). Just "x^2+5x-4=0" is not something which can be proven or disproven, it's just an equation. It's true for certain values of x and not true for others. For "prove" to be meaningful, the statement has to be always true. (The closest you can get to proving equations is proving identities, which are a subset of equations.) --Tango (talk) 15:28, 9 May 2008 (UTC)
I can’t seem to find the wikipage for it, but it sounds to me like the difference between a statement and an open statement in formal logic. Most equations are open statements. Some are trivially statements though, such as x+(1-x)=1, or 1 = 2. GromXXVII (talk) 20:10, 9 May 2008 (UTC)
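To make the statement/open-statement distinction concrete, here is a small sketch using the Z3 solver's Python bindings (one choice among many automated provers): the universally quantified identity mentioned above can be proved outright, while the open equation only admits particular solutions:

    # x + (1 - x) = 1 holds for every real x, so it can be proved;
    # x^2 + 5x - 4 = 0 is an open statement, so the solver can only
    # exhibit particular values of x that satisfy it.
    from z3 import Real, ForAll, prove, solve

    x = Real('x')
    prove(ForAll([x], x + (1 - x) == 1))  # prints "proved"
    solve(x**2 + 5*x - 4 == 0)            # prints a satisfying root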
I've heard some good things about Isabelle (theorem prover). I don't think any currently available automatic prover is very easy to use - you would probably need to know quite a bit about formal logic, as well as the syntax of the program. Even then, they might not be able to solve anything you give them. -- Meni Rosenfeld (talk) 23:15, 8 May 2008 (UTC)
You might enjoy reading about the Robbins problem ("are Robbins algebras equivalent to Boolean algebras?"). This problem, proposed in 1933, was first solved by the automated theorem prover EQP in 1996. [1] JohnAspinall (talk) 14:42, 9 May 2008 (UTC)