Wikipedia:Reference desk/Archives/Mathematics/2008 May 21

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


May 21

Complements of finite-dimensional subspaces

Let E be a Banach space and F a finite-dimensional subspace. The claim is that there exists a closed subspace G such that

E = F \oplus G.

Let \{e_1,\ldots,e_n\} be a basis for F and extend this to a basis B for E. Let \{\varepsilon_1,\ldots,\varepsilon_n\} be the basis dual to \{e_1,\ldots,e_n\}. By the Hahn-Banach theorem, each \varepsilon_i extends to a continuous linear functional on E. The intersection of the kernels of the \varepsilon_i is then a closed subspace G of E, and clearly F \cap G = \{0\}. And if \beta \in B - \{e_1,\ldots,e_n\} then

\beta - \sum_i \varepsilon_i(\beta)e_i \in G,

so that \beta \in F + G. It follows that F + G = E.

Is this the right approach?  — merge 08:54, 21 May 2008 (UTC)

Well, it works, though the use of the basis B is completely unnecessary. You could just end by saying 'if v \in E then v - \sum_i \varepsilon_i(v)e_i \in G.' Algebraist 10:30, 21 May 2008 (UTC)
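Spelling out that simplified argument as a worked equation (a sketch; the projection P is my notation, not used in the posts above): define

P(v) = \sum_{i=1}^n \varepsilon_i(v)\, e_i \qquad (v \in E).

Each extended \varepsilon_i is continuous, so P is a continuous linear projection of E onto F, and G = \ker P = \bigcap_i \ker \varepsilon_i is closed. Writing v = P(v) + (v - P(v)) exhibits E = F + G, and F \cap G = \{0\} since P fixes F, so E = F \oplus G.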
Good point. Thanks!  — merge 11:38, 21 May 2008 (UTC)
Oh, and the assumption that E is complete was unused. Dispose of it. Algebraist 12:22, 21 May 2008 (UTC)
Oh, that's interesting. The name of the theorem confuses me at times. I should know by now that Lang is cavalier with his hypotheses.  ;)  — merge 12:38, 21 May 2008 (UTC)
Unfortunately, doing functional analysis requires distinguishing a lot of things named after Stefan Banach. Algebraist 13:01, 21 May 2008 (UTC)
Ahaha. Well, that I can live with (although I do often rail against the mathematical tradition of naming things after people instead of descriptively). But Lang is always doing things like assuming "normed instead of metric" so that "we can write the distance in terms of the absolute value sign", or adding hypotheses because "this is the only case anyone cares about anyway." I suspect this is just another combination of his perverse sense of humour and love of dropping stealth exercises for the reader.  — merge 13:22, 21 May 2008 (UTC)
The problem with descriptive naming is that disambiguation can make things pretty unwieldy. My own preference is a combination: thus Hahn-Banach extension theorem, Tietze-Urysohn extension theorem, Carathéodory's extension theorem, etc. On the subject of weak hypotheses in exercises, this often serves the purpose of making the reader think more. In some cases (the five lemma springs to mind), the minimal hypotheses are the proof. Algebraist 13:50, 21 May 2008 (UTC)
Right. Stealth exercises.  ;)  — merge 14:16, 21 May 2008 (UTC)
To the best of my memory, it is also true for locally convex spaces. twma 11:07, 22 May 2008 (UTC)
Yeah. For LCTVSs you can prove the continuous-extension version of HB from the most basic version via messing around with Minkowski functionals. Algebraist 21:34, 22 May 2008 (UTC)

proof for a.b=lcm x gcd

Could someone please give me the proof of the following equality:

The product of two natural numbers is equal to the product of their lcm and gcd

(where lcm stands for least common multiple and gcd for greatest common divisor). Kasiraoj (talk) 14:07, 21 May 2008 (UTC)

I would start by expressing a and b as products of primes (see Fundamental theorem of arithmetic), and then work out what the lcm and gcd are in terms of those primes, and it should follow from that. --Tango (talk) 14:32, 21 May 2008 (UTC)
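To sketch that suggestion (the exponent notation \alpha_p, \beta_p is mine): write a = \prod_p p^{\alpha_p} and b = \prod_p p^{\beta_p} over the primes p. Then

\gcd(a,b) = \prod_p p^{\min(\alpha_p,\beta_p)}, \qquad \mathrm{lcm}(a,b) = \prod_p p^{\max(\alpha_p,\beta_p)},

and since \min(\alpha_p,\beta_p) + \max(\alpha_p,\beta_p) = \alpha_p + \beta_p for every p,

\gcd(a,b) \cdot \mathrm{lcm}(a,b) = \prod_p p^{\alpha_p+\beta_p} = ab.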
This answer got me thinking about whether the Fundamental Theorem is actually necessary here. More precisely: firstly, does there exist an integral domain in which any pair of elements has a GCD and an LCM, but which is not a UFD? (edit:yes) Secondly, if there are such rings, does this result hold in them, i.e. is ab always an associate of [a,b](a,b)? Algebraist 15:13, 21 May 2008 (UTC)
In case you didn't already find it, see GCD domain.
Seems to me that once you have a GCD w of x and y (not necessarily unique), then z = xy/w is a multiple of x and of y, so it is a common multiple. And z must be an LCM, because if we had a u that was a common multiple of x and y and also a divisor of z, then v = w(z/u) would be a common divisor of x and y (because x/v = u/y and y/v = u/x) that is also a multiple of w, which contradicts our assumption that w is a GCD. So for each GCD w there is an LCM xy/w (and vice versa, by reversing the above), even if it is not unique. Gandalf61 (talk) 16:04, 21 May 2008 (UTC)
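One way to check the parenthetical identities in the previous post is to substitute z = xy/w, giving v = w(z/u) = w(xy/w)/u = xy/u, so

\frac{x}{v} = \frac{xu}{xy} = \frac{u}{y}, \qquad \frac{y}{v} = \frac{yu}{xy} = \frac{u}{x}.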
I suspect you might be right, but your proof doesn't work. You've shown that z is a minimal common multiple, but not that it is a least common multiple. Algebraist 21:50, 21 May 2008 (UTC)
And of course we're assuming the existence of LCMs, so any minimal CM is an LCM. Thanks. Algebraist 21:52, 21 May 2008 (UTC)
That we're in a GCD domain is only assuming the existence of GCDs, not LCMs, as far as I can tell, so you need a slightly stronger assumption than just being in a GCD domain. (It may turn out to be equivalent, of course.) --Tango (talk) 12:35, 22 May 2008 (UTC)
Sorry, by we I mean me, in my initial question above. Our article doesn't state whether a GCD domain automatically has LCMs, and it should. Algebraist 12:39, 22 May 2008 (UTC)
Aren't we going in circles here? If x and y are in a general commutative ring and w is a maximal common divisor of x and y (i.e. the only multiples of w that are also c.d.s are associates of w), then z = xy/w is a minimal common multiple of x and y (i.e. the only divisors of z that are also c.m.s are associates of z), and vice versa - as per my argument above (with a little tightening up to allow for associates). And if, further, x and y are in a GCD domain, so that w is not just a maximal c.d. but a GCD of x and y, then z = xy/w is an LCM of x and y, and vice versa.
I am sure this must be in a standard text somewhere - I will add it to the GCD domain article when I have found a reference. Gandalf61 (talk) 13:06, 22 May 2008 (UTC)
My algebra's been slow lately, but finding a piece of paper has finally allowed me to prove that any GCD domain has (binary) LCMs. Unfortunately, GCD domain is completely unreferenced and my algebra textbook doesn't mention the things explicitly; time to go looking for a reference that does. Algebraist 13:26, 22 May 2008 (UTC)
I'm off to the Uni library in a bit anyway - I'll see if I can find anything. --Tango (talk) 13:29, 22 May 2008 (UTC)
Google books gave me a ref, which I've added. Curiously, in any ID, ({x,y} has an LCM) → ({x,y} has a GCD), but the converse fails. Algebraist 14:17, 22 May 2008 (UTC)
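A sketch of the forward implication, along the lines of Gandalf61's argument above: if m is an LCM of x and y, then xy/m lies in the domain (m divides the common multiple xy) and is a common divisor of x and y; and for any common divisor c, xy/c is a common multiple, so m divides xy/c and hence c divides xy/m. Thus

\gcd(x,y) \sim \frac{xy}{\mathrm{lcm}(x,y)}

up to associates. The gcd-to-lcm direction is exactly where the GCD domain hypothesis is needed.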
I was unable to find any books that mentioned GCD domains; I did, however, find one that briefly mentioned a "ring [or it might have been ID, I don't recall] with a greatest common divisor" - it seems the name is far from standard. The only relevant thing it mentioned about them was a theorem: an integral domain with a greatest common divisor is one with a least common multiple, and conversely. I think that's what you'd already worked out. It is, indeed, curious that every pair having a GCD implies every pair has an LCM, but a given pair having a GCD doesn't imply that that same pair has an LCM. (The book I found was very large, very old and very much falling apart, so I left it in the library, so don't ask any more questions!) --Tango (talk) 14:58, 22 May 2008 (UTC)
According to my source, 'GCD domain' was popularised by Irving Kaplansky's textbook in 1974, so it may now be standard. Other terms mentioned are 'pseudo-Bezout', 'HCF-ring', 'complete' and 'property BA'. Algebraist 15:06, 22 May 2008 (UTC)
The book I was reading could easily have pre-dated 1974. Odd that none of the other books I looked at mentioned that name, though - maybe Durham Uni library is very out-of-date! --Tango (talk) 15:26, 22 May 2008 (UTC)

set theories

Hey, I've got fundamental doubts... kindly someone give me a link or the required answers clearly... I would be highly grateful.

My doubt is this: how can we represent two independent events on a Venn diagram? If we do it by two intersecting circles, then what can we say about the condition of independence, p(A \cap B) = p(A)p(B)? How is this derived?

Can we say mutually exclusive events are a special case of independent events? Reveal.mystery (talk) —Preceding comment was added at 15:52, 21 May 2008 (UTC)

Hi. First off, no, you can't say mutually exclusive events are a special case of independent events: by definition the two are not independent (at least when both events have positive probability) - if one occurs, the other cannot. I admit I can't remember the proof of independence right now, but it is fairly logical when you think about it. If you have two events, neither of which influences the other, the probability of both happening is the probability of one happening multiplied by the probability of the other happening. -mattbuck (Talk) 17:35, 21 May 2008 (UTC)
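A quick concrete instance of that multiplication rule (my example, not from the question): toss two fair coins, and let A = "the first coin shows heads" and B = "the second coin shows heads". Then p(A) = p(B) = 1/2, and of the four equally likely outcomes HH, HT, TH, TT, exactly one lies in A \cap B, so

p(A \cap B) = \tfrac{1}{4} = p(A)p(B).

By contrast, with C = "the first coin shows tails" we get p(A \cap C) = 0 \ne p(A)p(C), so A and C are mutually exclusive but not independent.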
The definition of independence is p(A \cap B)=p(A)p(B), so there is nothing to prove. You may ask, why did we choose to call "independent" two events satisfying this. I think this boils down to the empirical observation that real-world events which seem independent to us tend to satisfy this rule. You can rephrase this in terms of Conditional probability, but that's just moving the problem elsewhere. -- Meni Rosenfeld (talk) 17:45, 21 May 2008 (UTC)
The definition of statistical independence can be written p(A|B) = p(A), meaning that the conditional probability of A given B is the same as the unconditional probability of A. So the probability of A is independent of whether B occurred or not. This definition seems intuitively natural. As the conditional probability satisfies p(A | B)\cdot p(B)=p(A\cap B), the condition p(A\cap B)=p(A)\cdot p(B) is derived. In a unit square diagram, A may be drawn as a horizontal bar and B as a vertical bar crossing A, illustrating that the area of the intersection between A and B is the product of the areas of A and B. Bo Jacoby (talk) 19:44, 21 May 2008 (UTC).
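In symbols (just restating the previous post), if p(A|B) = p(A), then

p(A \cap B) = p(A|B)\,p(B) = p(A)\,p(B),

and in the unit-square picture, A is a horizontal bar of height p(A), B is a vertical bar of width p(B), and their intersection is a rectangle of area p(A) \times p(B).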
That derivation, as Meni said, just moves the problem to the definition of conditional probability. Sooner or later you just have to accept that those definitions seem to work - you can't prove everything; you have to start somewhere. --Tango (talk) 20:27, 21 May 2008 (UTC)
It is ok to use p(A|B) = p(A), rather than p(A intersection B)=p(A)*p(B) as the definition of independence. You don't just have to accept anything. Bo Jacoby (talk) 08:07, 22 May 2008 (UTC).
Of course you can define "A and B are independent if p(A|B) = p(A)" (I could quibble about what happens when the probability of A or B is zero, but never mind that). But how do you know that p(A|B)\cdot p(B)=p(A\cap B), unless you observe that empirically or define p(A|B)=\frac{p(A\cap B)}{p(B)}? And why would you define the latter without empirically observing it? -- Meni Rosenfeld (talk) 11:07, 22 May 2008 (UTC)
No empirical observations are needed (nor are they possible, because the probability is a limit which is not accessible observationally), but a thought experiment: Consider an event A (say, that a white die shows four or five) and an event B (say, that a blue die shows six).

The probability p(A) is the limit of the ratio between (the number of times you throw the white die and it shows four or five) and (the number of times you throw the white die altogether). The conditional probability p(A|B) is the limit of the ratio between (the number of times that you throw both dice and the white die shows four or five and the blue die shows six) and (the number of times that you throw both dice and the blue die shows six). Now, if the two events are independent in the non-technical sense of the word, that the results of the two dice do not depend on one another, then the two limits must be equal. So p(A) = p(A|B) if A and B are independent. Now define that A and B are statistically independent if p(A) = p(A|B). So independent events are also statistically independent.

Consider the equation p(A|B)·p(B) = p(A and B). The left hand side is the limit of the ratio between (the number of times you throw both dice and the white one shows four or five and the blue one shows six) and (the number of times you throw both dice and the blue one shows six), multiplied by the limit of the ratio between (the number of times the blue die shows six) and (the number of times you throw the blue die altogether). Using that the product of limits is the limit of the product, you get the limit of the ratio between (the number of times you throw both dice and the white one shows four or five and the blue one shows six) and (the number of times you throw the blue die altogether), which is equal to the right hand side p(A and B). Bo Jacoby (talk) 14:16, 22 May 2008 (UTC).
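The same argument in limit notation (writing n_X for the number of the first n trials in which X occurs; this shorthand is mine, not from the post above):

p(A|B)\,p(B) = \lim_{n\to\infty}\frac{n_{A\cap B}}{n_B} \cdot \lim_{n\to\infty}\frac{n_B}{n} = \lim_{n\to\infty}\frac{n_{A\cap B}}{n} = p(A \cap B).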
But again, you just move the problem a little further along. In order to justify that two dice will be independent of each other, you first assume that successive tosses of a single die will be independent, with that assumption and its definition swept under the carpet. Black Carrot (talk) 15:55, 22 May 2008 (UTC)
The very concept of the probability of an outcome of an experiment assumes that the experiment can be repeated indefinitely and that the outcome of the repetitions are mutually independent. Bo Jacoby (talk) 18:08, 22 May 2008 (UTC).
As I understand it, that's frequency probability, Bayesian probability is a little different. I don't think that's an issue for this discussion, where such frequencies are meaningful, but it's always good to be precise. --Tango (talk) 19:02, 22 May 2008 (UTC)
Yes, the 'probability of an outcome of an experiment', based on some hypothesis, is a frequentist probability. The Bayesian point of view is the opposite one, to estimate the credibility of a hypothesis, based on given outcomes of experiments. While the two interpretations of probability differ, the algebra is the same. Computing the probability of the outcome based on the hypothesis is deductive reasoning. Computing the credibility of the hypothesis based on the observations is called inductive reasoning. Bo Jacoby (talk) 06:18, 23 May 2008 (UTC).

Odd trig/geometry question

On a certain bike there are spokes that are 14 inches long. Each spoke forms an angle of 30° with each of the two spokes beside it. What is the distance between the places where two spokes that are beside each other attach to the wheel? I think this has something to do with geometric mean, but I'm not sure and have no idea how to do it. *This is in fact not homework. It was a bonus question on a test we had today.* Zrs 12 (talk) 23:35, 21 May 2008 (UTC)

Assuming they come from the same point, you can treat them as radii. You then use the formula for the circumference (c = 2\pi r), but you only want 30 degrees rather than 360, so multiply by 30/360. -mattbuck (Talk) 23:49, 21 May 2008 (UTC)
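Plugging the question's numbers into that formula, the distance along the rim is

2\pi(14)\cdot\frac{30}{360} = \frac{7\pi}{3} \approx 7.33 \text{ inches},

though, as noted below, this is the arc length along the wheel rather than the straight-line distance.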
Hmm, I wonder why that tripped me up so badly. Is there any way to do this with triangles and trig? This was on a trig test so I have no idea why there would be something about circles on it. I like Mattbuck's method though. That's a lot simpler than what I was trying. Thanks, Zrs 12 (talk) 23:58, 21 May 2008 (UTC)
Mattbuck's answer gives the distance along the wheel, i.e. along an arc of a circle. If you want the straight-line distance, then trigonometry is the way to go, specifically the law of cosines. Algebraist 00:02, 22 May 2008 (UTC)
Hmm, how would you pull that off? I tried that on the test, but I couldn't figure out how to make it work. Please explain. Wait, crap. I feel stupid now. I misunderstood the problem and took it that the spokes didn't touch at any point. Well, that was a stupid mistake. Zrs 12 (talk) 01:42, 22 May 2008 (UTC)
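For the record, the law-of-cosines computation (using the values from the question): the two spokes and the chord joining their rim ends form a triangle with sides 14, 14 and included angle 30°, so

d^2 = 14^2 + 14^2 - 2(14)(14)\cos 30^\circ = 196(2 - \sqrt{3}), \qquad d = 14\sqrt{2-\sqrt{3}} \approx 7.25 \text{ inches}.

Equivalently, the chord-length formula 2r\sin(\theta/2) from the article linked below gives 28\sin 15^\circ \approx 7.25.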
Also check out chord (geometry). --Prestidigitator (talk) 03:54, 22 May 2008 (UTC)
Oh, I understand what's going on now. I just completely misunderstood the problem. Otherwise, I wouldn't have had any trouble solving it. Thank you all for your help, Zrs 12 (talk) 13:24, 22 May 2008 (UTC)