Wikipedia:Reference desk/Archives/Mathematics/2007 December 26


Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


December 26

Merry Christmas

3 mathematicians ordered a pizza for Christmas, with radius r. They decided to cut the pizza into three equal pieces, one for each person. But being mathematicians, they decided to cut the pizza with just 2 straight cuts, parallel to each other.

  • Find:
    • a. The length of the cuts
    • b. The distance from the center of the pizza to the intersection of a cut and the circumference.
    • c. Set a Cartesian coordinate system with origin (0,0) at the center of the pizza. Find the area enclosed if a sector of the pizza, with angle θ, is rotated about the x-axis.

Merry Christmas, this is not homework, but my dad says I can't open my presents until I solve this. HELP.--Goon Noot (talk) 04:07, 26 December 2007 (UTC)

Try impressing your dad by saying that if all the mathematicians want to do is split the pizza evenly, they can cut it horizontally. Then the answers seem to become trivial. –Pomte 05:11, 26 December 2007 (UTC)
Someone's going to get screwed on cheese and toppings with that solution. :p Also, your dad's awesome, Goon. Here's something that might help:
  • You have two parallel chords (the cuts), each of length 2X
  • When you connect the midpoint of one chord to the center point of the circle (length Y) and draw a radius from the intersection of the circle and the chord to the center point, you get a right triangle.
  • Drawing another radius from the center point to the other intersection gives you a sector, with a big triangle (or two smaller right triangles) inside it.
  • The area of the sector ((θ/360)*pi*r^2 ), minus the area of the big triangle ( X*Y ), equals 1/3 of the total area. EvilCouch (talk) 05:45, 26 December 2007 (UTC)
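Putting those pieces together (a sketch: Y is the distance from the centre to a cut, the half-length of a cut is X = \sqrt{r^2 - Y^2}, and angles are in radians), the condition that each cut slices off one third of the pizza reads

r^2 \arccos\left(\frac{Y}{r}\right) - Y\sqrt{r^2 - Y^2} = \frac{\pi r^2}{3}.

Solving this numerically for Y and taking 2\sqrt{r^2 - Y^2} gives the length of each cut; this is the transcendental equation mentioned below.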
Did he say, "until you solve it", or "until someone else solves it for you"? -- Meni Rosenfeld (talk) 12:38, 26 December 2007 (UTC)
The answer to (a) is the solution to a transcendental equation and does not have a closed form; is a numerical solution needed? Otherwise you can't do better than give the equation. Unless I don't understand something, like the pizza has the shape of West Virginia, (b) is trivial, but I wouldn't use the term "circumference" (a distance) for the "edge"; if you want a multi-syllabic word you can use "perimeter". Problem (c) I don't understand. Is the rotation taking place in a third dimension, moving the slice outside the Cartesian plane? Then what is the meaning of "area enclosed"? We have a rotational solid swept out that has a volume – which you might perhaps call "enclosed" – and an area, which presumably is the area of the surface (what else?), which is on the outside and thus not enclosed. Furthermore, the result depends on how the sector is oriented in the plane with respect to the x-axis.  --Lambiam 14:16, 26 December 2007 (UTC)
I've got about 1.929*r each for question (a), but I could be wrong.
As for question (b), could it perhaps mean rotating the pizza segment in its plane around where the center of the pizza was? – b_jonas 18:38, 26 December 2007 (UTC)
There is a treatment here that gets essentially the same result for (a).  --Lambiam 00:09, 27 December 2007 (UTC)
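For anyone who wants the numbers, here is a minimal numerical sketch (assuming Python with NumPy and SciPy are available) that solves the segment-area equation above and reproduces the ≈1.929r chord length given by b_jonas:

  import numpy as np
  from scipy.optimize import brentq

  r = 1.0  # unit radius; every length below scales with r

  # Area of the circular segment cut off by a chord at distance d from the centre.
  def segment_area(d):
      return r**2 * np.arccos(d / r) - d * np.sqrt(r**2 - d**2)

  # Each cut must slice off one third of the pizza: segment_area(d) = pi*r^2/3.
  d = brentq(lambda s: segment_area(s) - np.pi * r**2 / 3, 0.0, r)

  print(d)                          # distance from centre to each cut, ~0.2649*r
  print(2 * np.sqrt(r**2 - d**2))   # length of each cut (a), ~1.9286*r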
If you think your presents are worth it, offer your part of the pizza to your parents, then you can just cut it to halves on a diameter. The pizza is bound to be cold by now anyway. – b_jonas 07:49, 27 December 2007 (UTC)
Ah, but cold pizza - yummm. Even better than hot. -- JackofOz (talk) 04:20, 30 December 2007 (UTC)

It's that time of the year again...

Ok, I'm bored, on vacation and my pending image projects don't look very exciting at the moment. Do you guys have any suggestions for cool math images or graphs, especially animations, for me to create? Drop them here and I'll see what I can do. :) By the way, merry belated xmas! — Kieff | Talk 06:02, 26 December 2007 (UTC)

Maybe a Möbius strip, hope that link works, it's a one-sided/one-edged object, see why???? A math-wiki (talk) 18:57, 26 December 2007 (UTC)
"hope that link works"? You know, there is a "Show preview" button... The Mobius strip needs no introduction, but I don't think it will be much of a graphical challenge. -- Meni Rosenfeld (talk) 11:56, 27 December 2007 (UTC)
http://en.wikipedia.org/wiki/Talk:Impossible_cube , see "TODO" 195.35.160.133 (talk) 16:21, 27 December 2007 (UTC) Martin
Which reminds me, our article on "the father of the impossible figure", Oscar Reutersvärd, has a Penrose triangle as the sole illustration. As far as I know no image of the art of Reutersvärd himself is free, so we can't use these, but a better illustration would be a newly created "abstract" rendering of the nine impossibly floating cubes as shown on the engraving of the 25 öre stamp. I'm thinking of a line drawing with the cube faces uniformly filled with three colours of about the same hue and saturation, but differing in lightness. The ground six-pointed star should come out the same. It would then be nice to juxtapose this with a Penrose triangle of the same dimensions and orientation (the least Penrose triangle exactly covering the cubes) and the same drawing style and colours.  --Lambiam 00:35, 28 December 2007 (UTC)
See convex uniform honeycomb, particularly the "Frames" column of the table. (I made those pix in Povray.) I wanna see the analogous figures in S^3 and H^3 (curved 3-space). —Tamfang (talk) 21:11, 29 December 2007 (UTC)

Fundamental theorem of calculus

At the article in the title it says: "Suppose a particle travels in a straight line with its position given by x(t) where t is time." does this mean x.t (as in x times t)? When I see 2(3), I calculate 6. If this is not the intended interpretation, can someone please make it clearer? Thanks. --Seans Potato Business 12:12, 26 December 2007 (UTC)

No. It means that x is a function of t.
It is true that, in theory, it could have been interpreted as x \cdot t, but you need to keep the context in mind when interpreting notation. -- Meni Rosenfeld (talk) 12:21, 26 December 2007 (UTC)

It also says "Let us define this change in distance per change in time as the speed v of the particle. In Leibniz's notation:

\frac{\mathrm dx}{\mathrm dt} = v(t). "

I agree that speed is a change in distance per change in time so:

\frac{\mathrm dx}{\mathrm dt} = v.

but it says:

v(t).

Speed is not a function of time unless the particle is accelerating, right? --Seans Potato Business 15:03, 26 December 2007 (UTC)

Velocity is always a function of time; if the particle is not accelerating, then it happens to be a constant function. Note that the discussion in that article is meant to be general, and that the generic case is the velocity being nonconstant - a constant velocity is an isolated special case. -- Meni Rosenfeld (talk) 15:09, 26 December 2007 (UTC)
If velocity is always a function of time, then is it always a function of distance? --Seans Potato Business 15:53, 26 December 2007 (UTC)
No, at least not in the sense I mean it here. In particular, you have no guarantee that for a given distance you will have a unique velocity, which is a requirement for being a function. In Newtonian physics, the state of the universe is a function of time. This means that the locations, velocities, etc. of everything in the universe are functions of time. -- Meni Rosenfeld (talk) 16:15, 26 December 2007 (UTC)

Proposed change to mathematics style guidelines - use of period/full-stop

Current guideline:

"=== Punctuation ===

Just as in mathematics publications, a sentence which ends with a formula must have a period at the end of the formula. If the formula is written in LaTeX, that is, surrounded by the <math> and </math> tags, then the period needs to also be inside the tags, because otherwise it can be displayed on a new line if the formula is at the edge of the browser window."

Examples of the period in action:

\ln (x) \equiv \int_{1}^{x} \frac{dt}{t}.
\frac{d}{dx} \ln(x) = \frac{1}{x}.

I suggest that while in many cases the presence of a period is clear, in others it looks similar to an apostrophe or could be construed as some other mathematical modifier. This could make the correct interpretation of a formula difficult for a reader who is not familiar with the guideline or with the formula that they are trying to interpret. I propose that the guidelines be changed to prohibit the use of the period, despite the common practices of mathematics publications. I would like to develop consensus for or against this proposal and/or to fine-tune it to the liking of the community. --Seans Potato Business 15:32, 26 December 2007 (UTC)

The guidelines reflect the current practice as followed in professional mathematics journals and mathematics books. If you leave out punctuation you are never sure where a phrase stops this is in my opinion a worse alternative. In any case, if you seriously want to propose this, the place is Wikipedia talk:WikiProject Mathematics. Bring your helmet.  --Lambiam 18:00, 26 December 2007 (UTC)


Derivation/integration

Are t and x being used synonymously? Why would you need to derivate the integral or something? Look, right there; they do the derivative of the integral of t^3 - it makes no sense. It's like me wondering what happens if I integrate something and then derivate it again. I get the same something. What's the point?

From Fundamental_theorem_of_calculus#Examples "...suppose you need to calculate

{\mathrm d \over \mathrm dx} \int_0^x t^3\, \mathrm dt.

Here, f(t) = t^3 and we can use F(t) = {t^4 \over 4} as the antiderivative. Therefore:

{\mathrm d \over \mathrm dx} \int_0^x t^3\, \mathrm dt = {\mathrm d \over \mathrm dx} F(x) - {\mathrm d \over \mathrm dx} F(0) = {\mathrm d \over \mathrm dx} {x^4 \over 4} = x^3.

Look at that, we start with a function of t, and then after integration, we derivate it with respect to x or something. It makes no sense. You know, I got an A in A level (Edexcel 2003) maths so I knew what I was doing and I don't remember it being like this.

But this result could have been found much more easily as

{\mathrm d \over \mathrm dx} \int_0^x t^3\, \mathrm dt = f(x) {\mathrm dx \over \mathrm dx} - f(0) {\mathrm d0 \over \mathrm dx} = x^3."

--Seans Potato Business 17:21, 26 December 2007 (UTC)

I'm really having trouble making heads or tails of what it is you are asking. I'll just comment that with all due respect to A level maths, high-school mathematics is not always the same as real mathematics. I don't think your last formula makes any sense whatsoever. -- Meni Rosenfeld (talk) 17:32, 26 December 2007 (UTC)
t and x are not being used synonymously. It's perfectly reasonable to want to integrate something with respect to one variable and then differentiate it wrt another. You might, for instance, want to find the area under a line of some sort (i.e. integrate wrt x), but then find out how that changes with time (so differentiate wrt t).
I think a source of your confusion here is having x as a bound of the definite integral. Consider \int_0^x t^3dt = \left[ \frac{t^4}{4} \right]^x_0 = \frac{x^4}{4} - 0 = F(x). This is clearly a function of x, so you cannot move the derivative inside the integral. If you did, you'd get \int_0^x \left(\dfrac{d}{dx} t^3\right) dt = \int^x_0 0\, dt = 0. mattbuck (talk) 17:35, 26 December 2007 (UTC)
Actually, that's not what happens in the mentioned section - they are dealing with single-variable functions. t is just a dummy variable used to write the integral.
As for why would we want to integrate and then differentiate - to demonstrate that these are inverse operations, which is exactly what the fundumental theorem says. -- Meni Rosenfeld (talk) 17:41, 26 December 2007 (UTC)
Questions like this and previous ones suggest that you are not completely comfortable with the notations for functions. I'll emphasize that a variable is not "built into" the function. For example, let f be the function f: \mathbb{R} \to \mathbb{R}, x \mapsto x^2. This is the function that, for any real number, returns its square. There is nothing special about x - it is just a dummy variable used to describe the operation. You have f(x) = x^2, but you also have f(t) = t^2, f(α) = α^2 and f(2 + 3z^3) = (2 + 3z^3)^2.
So let's say I have this function f as I defined above. I want to define a new function F based on a definite integral of f. For any real number x, I want F to return the definite integral of f from 0 to x. To write down the integral I need to use some letter to denote the parameter of f, but x is already taken as the parameter for F! So I pick another letter, say t, as a dummy variable. I can then write F(x) = \int_0^xf(t)\ dt=\int_0^xt^2dt = \frac{x^3}{3}. But again, there is nothing special about x - I can also write F(t)=\frac{t^3}{3} and F(\alpha)=\frac{\alpha^3}{3}. The fundamental theorem then says that the derivative of F is f. -- Meni Rosenfeld (talk) 17:50, 26 December 2007 (UTC)
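To make the dummy-variable point concrete, here is a small check (a sketch, assuming Python with SymPy) of the article's example: the letter used inside the integral doesn't matter, and differentiating the result returns the integrand, as the fundamental theorem says.

  import sympy as sp

  x, t, s = sp.symbols('x t s')

  # F(x) = integral from 0 to x of t^3 dt; t is only a dummy variable here.
  F = sp.integrate(t**3, (t, 0, x))
  print(F)               # x**4/4
  print(sp.diff(F, x))   # x**3 -- the derivative of the integral is the integrand

  # A different dummy letter gives exactly the same function of x.
  print(sp.integrate(s**3, (s, 0, x)))   # x**4/4 again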

First-order linear differential equations

I was reading through this (Linear differential equation#First order equation) article. When I got to the part that said

"Now, the most important step: Since the differential equation is linear we can split this into two independent equations and write
u^\prime v + puv = 0
u v^\prime = r,"

I got really confused. Why are we able to do this? I understand that the equation is linear, but beyond that I have absolutely no idea. Foxjwill (talk) 17:48, 26 December 2007 (UTC)

I'm not sure why the linearity is mentioned, but it should be clear that any solution of the above pair of differential equations will give us a solution of the original equation. The next few steps do give a solution. It is more work to show directly that any solution of the original equation can be thus expressed, but given some boundary condition like y(0) = y_0 there is only one solution to the original equation, and the generic solution can be made to agree with any boundary condition, so no solutions can have been swept under the carpet.  --Lambiam 18:18, 26 December 2007 (UTC)
Although it should be clear why they will give a solution, it isn't. That's my question. Foxjwill (talk) 18:22, 26 December 2007 (UTC)
The \Leftarrow part is indeed clear - if the functions u and v are solutions to u^\prime v + puv = 0 and u v^\prime = r, then obviously they are solutions to u^\prime v + puv + u v^\prime = r (if a = b and c = d then a + c = b + d). It's the \Rightarrow part that is not completely necessary and does not easily follow (unless the writer knows something we don't). -- Meni Rosenfeld (talk) 18:30, 26 December 2007 (UTC)
I think why \Rightarrow follows is because we have two functions, u and v, and only one condition: uv = y. Hence we are free to impose another condition: uv' = r. This condition then forces u'v + puv = 0. SmaleDuffin (talk) 19:09, 26 December 2007 (UTC)
But we are only allowed to impose this other condition when we introduce u and v, not after the fact. Also, it is not obvious that we can indeed find u and v satisfying both requirements; we would need to construct them, and doing so is ultimately as complicated as the solution itself. -- Meni Rosenfeld (talk) 20:13, 26 December 2007 (UTC)
Yeah, the restrictions on u and v should probably be introduced earlier. It seems to me that the `proof' offered there is not particularly well written. I've usually seen this formula proved by first showing a lemma, that the solution to μ(x)p(x) = μ'(x) is  \mu(x) = \exp(\int p(x) dx) . If y'(x) + p(x)y(x) = r(x) is multiplied by a function μ(x) satisfying μ(x)p(x) = μ'(x), then the equation becomes  \frac{d}{dx}(\mu(x) y(x)) = \mu(x) r(x) , and can be solved by integration. It also seems odd to me that the article doesn't mention integrating factors in this context. SmaleDuffin (talk) 22:30, 26 December 2007 (UTC)
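For what it's worth, the integrating-factor route is easy to sanity-check symbolically (a sketch, assuming Python with SymPy; the particular p and r below are arbitrary examples, not anything from the article):

  import sympy as sp

  x = sp.symbols('x')
  y = sp.Function('y')
  p = sp.exp(x)    # arbitrary example coefficient
  r = sp.sin(x)    # arbitrary example right-hand side

  # Integrating factor: mu' = p*mu, i.e. mu = exp(integral of p).
  mu = sp.exp(sp.integrate(p, x))

  # Multiplying y' + p*y by mu turns it into the derivative of mu*y.
  diff_of_product = sp.diff(mu * y(x), x)
  mu_times_lhs = mu * (sp.diff(y(x), x) + p * y(x))
  print(sp.simplify(diff_of_product - mu_times_lhs))   # 0

  # SymPy's own solver for y' + p*y = r, for comparison.
  print(sp.dsolve(sp.Eq(sp.diff(y(x), x) + p * y(x), r), y(x)))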
Oh! I get it! >_< I was focusing so much on thinking that that wouldn't give a full solution that I forgot that we're looking at two linearly independent solutions! Oy, ve. Thanks. Foxjwill (talk) 18:41, 26 December 2007 (UTC)

Average number of runs in a bitstream

Given a bitstream generated by a random source, is there a way to determine how many runs to expect in it, statistically? What I mean is, for example, 1101000101 has 7 different runs of bits. If I use a string of 1024 random bits, and I examine enough of them I can expect to see an average Hamming weight of 512, but how many runs should I get, on average?

I'm asking this because I'm analysing some data from "HotBits", which is generated based on radioactive decay, and I've got 571,486,208 bits here from that generator, divided into 558,092 strings of 1024 bits each. Now I've verified that the average number of 0's and 1's in each string is okay (511.979 bits set out of 1024 - close enough to the theoretical average), but I've analysed the number of unique runs in it, and it's come out as 512.9331. This to me seems startling. I was expecting 512, not something close to 513. Can anyone explain? Given the number of samples it doesn't seem likely to be a statistical fluke. So is it an issue of bias in the HotBits generator or can you always expect one more run than half the number of bits? • Anakin (contribscomplaints) 19:19, 26 December 2007 (UTC)

Check this article. I guess there could be more non-parametric tests. Pallida  Mors 19:37, 26 December 2007 (UTC)
Thanks for that, that looks to be very helpful but I don't quite understand how to apply the formula, \mu=\frac{2\ N_+\ N_-}{N}+1. What are N_+ and N_-? (The +1 at the end of the formula though, looks to confirm that the average number of runs would be 1 more than half the number of bits, which would explain what I'm observing.) • Anakin (contribscomplaints) 20:22, 26 December 2007 (UTC)
As far as I understand, N_+ and N_- represent the expected number of instances of the s+ and s- types out of N occurrences. If you expect bits 0 and 1 to appear with the same probability (which is the case here, I guess), then
N_\pm=\frac{1}{2}N,
and \mu=\frac{N}{2}+1,
which amounts to 513 for this case. Pallida  Mors 20:52, 26 December 2007 (UTC)
Ah I see, thank you very much! I understand now!! I had just assumed the average runs would be 512 originally. I'm still intrigued as to why it's not, but the 513 result does agree with the analysis of the random data. Thank you for your help :D • Anakin (contribscomplaints) 21:11, 26 December 2007 (UTC)

I've had another think about this and there's still something I don't get. If you take, for example, all the possible combinations of a 4-bit number and group them by number of runs you get this:

  • 1 run: 0000,1111
  • 2 runs: 0001,0011,0111,1000,1100,1110
  • 3 runs: 0010,0100,1101,1011,0110,1001
  • 4 runs: 0101,1010

If each of the 16 combinations of bits are equally likely then the average number of runs would be: \frac{2*1+6*2+6*3+2*4}{16}= \frac{40}{16}=2.5, but the Wald-Wolfowitz formula gives 3. I wrote out all the combinations for 5-bit numbers as well and got 72/32 = 2.25. What am I doing wrong? • Anakin (contribscomplaints) 21:37, 26 December 2007 (UTC)

You are welcome, Ani! You are doing nothing wrong. The same happens for N=2 or 3. It seems that the formula yields an excess of one half for such low numbers. Can anyone comment on this? Pallida  Mors 21:59, 26 December 2007 (UTC)
(edit conflict, seems I'm not the only one who noticed this) I'd have thought the expected mean should be \mu=\frac{N+1}{2} rather than \frac{N}{2}+1 (with the extra half run on average coming from the fact that runs always break at the beginning and end of the string). That would make the expected mean in your case 512.5 rather than 513. To derive this, try counting run starts instead of runs (each run, of course, having exactly one start and one end). There's always a run starting at the first bit, and there's a 50% chance of a new run starting at each of the N-1 subsequent bits. Thus, the average number of run starts should be \frac{N-1}{2}+1 = \frac{N+1}{2}. —Ilmari Karonen (talk) 22:09, 26 December 2007 (UTC)
I've been testing this the long way, with a script to try every number with a given number of bits, count the runs in it, add all the runs up, and average them at the end, which has generated the following results:
bits    average runs
1       1
2       1.5
3       2
4       2.5
5       3
6       3.5
7       4
8       4.5
16      8.5
When I originally said it was 2.25 for 5 bits, that was my mistake in adding the numbers up. So this table follows a very nice and simple pattern which agrees with the formula \mu=\frac{N+1}{2}, but doesn't seem to agree with the Wald-Wolfowitz runs test article. Is the formula in that article mistyped, I wonder? • Anakin (contribscomplaints) 22:42, 26 December 2007 (UTC)
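For completeness, a brute-force check of this kind takes only a few lines (a sketch in Python; the actual script used above isn't shown, so this is just one way to reproduce the table):

  from itertools import product

  def count_runs(bits):
      # A new run starts at the first bit and at every place two neighbours differ.
      return 1 + sum(a != b for a, b in zip(bits, bits[1:]))

  for n in (1, 2, 3, 4, 5, 6, 7, 8, 16):
      total = sum(count_runs(bits) for bits in product((0, 1), repeat=n))
      print(n, total / 2**n)   # 1, 1.5, 2, 2.5, ..., i.e. (n+1)/2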
(after-thought) Or perhaps the caveat is that the Wald-Wolfowitz formula is meant for determining the expected number of runs in a single sample, where you can't possibly have 0.5 of a run, so the 0.5 would most likely manifest itself as a whole extra run, in contrast to what I'm testing which is the average number of runs in thousands and thousands of samples. • Anakin (contribscomplaints) 22:52, 26 December 2007 (UTC)
No, a formula of a mean is just a formula for the expected value; thus, it represents what to expect on average as a mean value, and does not need to take integer values. I cannot follow Ilmari's analysis (in the end, the number of runs equals the number of run starts or ends!), but the formula given in his/her answer seems to be right (for the equiprobable case). So far, I do not know what to say for the general case, I'm afraid. By the way, Ilmari thinks (and my own simulations seem to agree with him/her) that the expected number of runs should be 512.5 (again not 513) for N=1024. Pallida  Mors 05:22, 27 December 2007 (UTC)
Gosh, I am just inducing a formula, which seems to be
\mu=\frac{2 N_+ N_- (N-1)}{N^2}+1,
which coincides with the simple one for the equiprobable case, and matches my simulations for more general cases, but take care: comment is wanting from someone more familiar with this topic. Pallida  Mors 05:55, 27 December 2007 (UTC)
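One way to check the conjectured formula against simulation (a sketch, assuming Python with NumPy; the bias p = 0.3 and the number of trials are arbitrary choices, and N_+ and N_- are taken as the expected counts pN and (1-p)N):

  import numpy as np

  rng = np.random.default_rng(0)
  N, p, trials = 1024, 0.3, 20000

  bits = rng.random((trials, N)) < p                    # biased random bitstrings
  runs = 1 + (bits[:, 1:] != bits[:, :-1]).sum(axis=1)  # runs per string
  print(runs.mean())                                    # simulated mean

  Nplus, Nminus = p * N, (1 - p) * N
  print(2 * Nplus * Nminus * (N - 1) / N**2 + 1)        # conjecture, ~430.7 here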
I have edited the Wald-Wolfowitz article to clarify the situation. This concerns a conditional distribution, in which N_+ and N_- are given (have already been observed).  --Lambiam 09:37, 27 December 2007 (UTC)
I can't follow how you arrived at the formula, Pallida, but it does appear to generate the correct results. And unlike the simple (N+1)/2 formula it handles the case of a zero-bit number better, since there the result comes out undefined.
Gosh it's a fascinating debate we're having here. ^_^ • Anakin (contribscomplaints) 14:11, 27 December 2007 (UTC)
Lambiam: Yes, that was exactly what I had in mind this morning. Ours was a parametric approach. We use several givens about the way zeros and ones appear. Non-parametric tests work in a different way: you don't assume much about the data-generating process. You just work out asymptotic distributions for the data to follow. Eventually, you are able to test some hypothesis using the observed data and the derived distribution. By the way, my basic Econometrics handbook calls the W-W procedure "the Geary test".
Anakin: I worked out a formula that fits the data, I haven't deduced it ;-). Hence, I just suppose it is right but I haven't formally proved it. Yes, it's a good chat you initiated :-). Pallida  Mors 14:22, 27 December 2007 (UTC)

Multi-dimensional discrete-time Fourier transform

Sooo, I was reading xkcd and found this resistor problem. I figured it could be solved with a two-dimensional, discrete-time Fourier transform, but realised I had never seen that transform defined! Would you check my attempt here, please? In one dimension, the discrete-time Fourier transform (as per the article) is

X(\omega) = \sum_{n=-\infty}^{\infty} x[n] \, \mathrm{e}^{-\mathrm{i} \omega n}

and its inverse is

x[n] = \frac{1}{2 \pi}\int\limits_{\omega=-\pi}^{\pi} X(\omega) \, \mathrm{e}^{\mathrm{i} \omega n} \, \mathrm{d}\omega.

Is it correct to generalise this to d dimensions as

X(\boldsymbol{\omega}) = \sum_{\mathbf{n}\in\mathbb{N}^d} x[\mathbf{n}] \, \mathrm{e}^{-\mathrm{i} \boldsymbol{\omega}\cdot \mathbf{n}}
x[\mathbf{n}] = \frac{1}{(2 \pi)^d}\int\limits_{\boldsymbol{\omega}\in\boldsymbol{\Omega}} X(\boldsymbol{\omega}) \, \mathrm{e}^{\mathrm{i}\boldsymbol{\omega}\cdot \mathbf{n}} \, \mathrm{d}\boldsymbol{\omega},

where \boldsymbol{\Omega}=\left\{\mathbf{x}\in\mathbb{R}^d : ||\mathbf{x}||_\infty < \pi\right\}=\left\{(x_1,x_2,\ldots,x_d)\in\mathbb{R}^d : \forall j \in \{1,2,\ldots,d\}\quad |x_j| < \pi\right\}? The product \boldsymbol{\omega}\cdot\mathbf{n} feels awkward. Is it? —Bromskloss (talk) 22:07, 26 December 2007 (UTC)

If you just want to know the answer to the resistor problem, see cond-mat/9909120. If you want to figure it out yourself, then you probably shouldn't be asking us for help. If you want me to drop the wiseguy act and answer your question, then I can't because I don't know. But your generalization looks fine to me, and it bears a strong superficial resemblance to the answer in the paper I just referenced (equation 18). I don't see what you think is awkward about \boldsymbol{\omega}\cdot\mathbf{n}; that's how Fourier transforms are always generalized to higher dimensions. The name "discrete-time" seems inappropriate for the multidimensional version, though. Given that it seems natural to generalize this transform to multiple dimensions, I have to wonder how it got that name. -- BenRG (talk) 06:56, 27 December 2007 (UTC)
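A numerical round-trip check of the d = 2 case (a sketch, assuming Python with NumPy; the signal x below is an arbitrary finitely supported example, and the integral over Ω is approximated by a Riemann sum on a grid):

  import numpy as np

  rng = np.random.default_rng(1)
  x = rng.standard_normal((4, 4))   # arbitrary signal supported on n in {0,...,3}^2

  M = 256                           # frequency grid points per axis over (-pi, pi)
  w = np.linspace(-np.pi, np.pi, M, endpoint=False)
  w1, w2 = np.meshgrid(w, w, indexing='ij')

  # Forward transform: X(w) = sum_n x[n] exp(-i w.n), summed over the support of x.
  X = np.zeros((M, M), dtype=complex)
  for k1 in range(4):
      for k2 in range(4):
          X += x[k1, k2] * np.exp(-1j * (w1 * k1 + w2 * k2))

  # Inverse at one sample point: 1/(2 pi)^2 times the integral over (-pi, pi)^2.
  n1, n2 = 2, 3
  integrand = X * np.exp(1j * (w1 * n1 + w2 * n2))
  val = integrand.sum() * (2 * np.pi / M) ** 2 / (2 * np.pi) ** 2
  print(val.real, x[n1, n2])        # the two numbers should agree closely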
What I found awkward with \boldsymbol{\omega}\cdot\mathbf{n} was that \mathbf{n} takes on only discrete values, whereas \boldsymbol{\omega} lives in an interval. Somehow, that felt more OK in one dimension. —Bromskloss (talk) 10:29, 27 December 2007 (UTC)
Oops, typo. I wrote "\mathbb{N}", but I meant "\mathbb{Z}". Not much activity here. I assume you all got consumed by the problem. ;-) Or maybe even sniped! —Bromskloss (talk) 11:04, 28 December 2007 (UTC)