Wikipedia:Reference desk/Archives/Mathematics/2008 January 5
From Wikipedia, the free encyclopedia
Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
January 5

Show that two field extensions are equivalent
Gallian, Contemporary Abstract Algebra, section 20, problem 2 (not homework; I'm working through all the exercises in the book, but I've been off a while, so I'm probably making some really stupid error). Show that Q(√2, √3) = Q(√2 + √3). I'm thinking that I should be writing elements of the first field in the form a + b√2 + c√3 + d√6 and elements of the second in the form a + b(√2 + √3), but it seems that this is very unlikely as a means of getting to the solution. Then I tried using Q(√2 + √3) ≅ Q[x]/⟨(x² − 2)(x² − 3)⟩, but shouldn't that require that √2 + √3 be a zero of (x² − 2)(x² − 3)? Any hints on getting to the solution (for the love of God, please don't just give me an answer, I want to find it myself). 68.183.18.54 (talk) 00:37, 5 January 2008 (UTC)
- Q(√2 + √3) ≅ Q[x]/⟨(x² − 2)(x² − 3)⟩ is not true (the RHS is not even an integral domain). A true statement is Q(√2 + √3) ≅ Q[x]/⟨x⁴ − 10x² + 1⟩.
- It is clear that Q(√2 + √3) is contained in Q(√2, √3), so all you have to show is the reverse inclusion. Thus you need to show that √2 and √3 are in Q(√2 + √3). Note that fields have to be closed under multiplication, so a + b(√2 + √3) is not a generic element of that field. Is that hint enough? Algebraist 00:51, 5 January 2008 (UTC)
- Yeah, I see it now, I just let myself get off-track. By taking (√2 + √3)² = 5 + 2√6, we can see that √6 ∈ Q(√2 + √3). Then taking √6(√2 + √3) = 3√2 + 2√3, and from there we can get √2 and √3 individually, so we get two-way subsets and the fields are identical. 68.183.18.54 (talk) 01:19, 5 January 2008 (UTC)
- Correct. If you like generalisations, this is a special case of the primitive element theorem. Algebraist 15:06, 5 January 2008 (UTC)
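Spelled out, the algebra implicit in the exchange above is (this is just the thread's own computation written in display form):

\[
(\sqrt{2}+\sqrt{3})^2 = 5 + 2\sqrt{6} \;\Longrightarrow\; \sqrt{6} = \tfrac{1}{2}\bigl((\sqrt{2}+\sqrt{3})^2 - 5\bigr) \in \mathbb{Q}(\sqrt{2}+\sqrt{3}),
\]
\[
\sqrt{6}(\sqrt{2}+\sqrt{3}) = 3\sqrt{2} + 2\sqrt{3} \;\Longrightarrow\; \sqrt{2} = \sqrt{6}(\sqrt{2}+\sqrt{3}) - 2(\sqrt{2}+\sqrt{3}), \qquad \sqrt{3} = (\sqrt{2}+\sqrt{3}) - \sqrt{2},
\]

so both √2 and √3 lie in Q(√2 + √3), giving the reverse inclusion.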
Average Length of Line Segment
Suppose you have 4 random variables (a, b, c, d) used to generate a line segment with two endpoints ((a,b) and (c,d)). All four variables must be between 0 and 1. What is the average length of said line segment? I think it would have to be the mean of √((a−c)² + (b−d)²), but I have no clue as to how to evaluate that.
By the way this isn't homework, it's just a problem I've been thinking about for a while. Thanks in advance for any help. 65.31.80.94 (talk) 02:35, 5 January 2008 (UTC)
- I have no idea why, but I figured this is a perfect use of a little computer simulation. I threw together a little script that generated 100000 or so random line-segments and averaged them (you can test it here if you're curious. It's running on a server that was old when Alan Turing lived, so please don't use numbers that are too big ;) ), and the answer is about 0.521. No idea why, but that's the answer. 83.250.203.75 (talk) 09:58, 5 January 2008 (UTC)
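The script itself isn't reproduced here, but a minimal Python sketch of the same Monte Carlo idea looks like this (the sample count and use of the standard random module are choices made for illustration, not details of the original script):

    import math
    import random

    def average_segment_length(samples=100_000):
        """Estimate the mean length of a segment whose endpoints are uniform in the unit square."""
        total = 0.0
        for _ in range(samples):
            a, b, c, d = (random.random() for _ in range(4))
            total += math.hypot(a - c, b - d)
        return total / samples

    print(average_segment_length())  # typically prints something close to 0.521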
- Come on people, have we forgotten our integrals? It's
- ∫₀¹∫₀¹∫₀¹∫₀¹ √((a−c)² + (b−d)²) da db dc dd = (2 + √2 + 5 arcsinh(1))/15 ≈ 0.5214.
- Doing this without a CAS is the real challenge, one that I am not currently up to. -- Meni Rosenfeld (talk) 13:51, 5 January 2008 (UTC)
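As a middle ground between simulation and a full symbolic evaluation, the integral can also be checked numerically; a minimal sketch using SciPy (the choice of scipy.integrate.nquad is mine, not something mentioned in the thread):

    import math
    from scipy import integrate

    # Quadruple integral of sqrt((a-c)^2 + (b-d)^2) over the unit 4-cube [0,1]^4.
    # (A 4-dimensional adaptive quadrature, so it may take a little while to run.)
    value, error = integrate.nquad(
        lambda a, b, c, d: math.hypot(a - c, b - d),
        [[0, 1], [0, 1], [0, 1], [0, 1]],
    )
    print(value)  # about 0.5214, matching (2 + sqrt(2) + 5*asinh(1)) / 15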
- You haven't actually stated the probability distribution of the random variables - everyone here is assuming a uniform distribution, but there are a multitude of other distributions that give you values between 0 and 1 (not to mention the question of whether the variables are independent of each other). In a more general sense, you can calculate the expected value of the segment length as E_a[E_b[E_c[E_d[√((a−c)² + (b−d)²)]]]],
- where in each expectation you hold constant the value of all variables of more outer expectations (e.g. in the expectation over b, you assume a has a constant value). This is then a huge, ugly integral, which is only analytically solvable in special cases, such as independent uniform variables as solved above. Confusing Manifestation(Say hi!) 15:56, 5 January 2008 (UTC)
- For those more comfortable with integrals than with expectations, denoting by f the joint probability density function of the four variables, the expected length in this general setting is ∫₀¹∫₀¹∫₀¹∫₀¹ √((a−c)² + (b−d)²) f(a,b,c,d) da db dc dd.
- In my previous calculation, I have indeed assumed that the distribution is uniform, or in other words, that f(a,b,c,d) = 1 for all 0 ≤ a, b, c, d ≤ 1.
- -- Meni Rosenfeld (talk) 17:01, 5 January 2008 (UTC)
- Bertrand's paradox is a perfect example of how important the probability distribution we choose can be. Tesseran (talk) 11:46, 6 January 2008 (UTC)
- If all we know is that the endpoints lie within the unit square, then the uniform distribution is indeed the distribution with maximum entropy — and thus least additional "assumed" information — among those satisfying the constraint. Thus, if one had to come up with a guess based only on that constraint, the uniform distribution would indeed be the reasonable one to default to. (Of course, what we really should be doing is minimizing, subject to the constraints given, the Kullback-Leibler divergence between our distribution and some reasonable "prior measure" on the space of all line segments — but if our prior is the standard Lebesgue measure on the set of pairs of endpoints, the result will still be uniform.) But yes, it would be nice to know the actual distribution of endpoints explicitly. —Ilmari Karonen (talk) 19:19, 6 January 2008 (UTC)
10 to the Power of -50 and Impossibility
I've heard that in mathematics and according to mathematicians, any chance or probability below one out of 10⁵⁰ is counted and dismissed as "impossible" or "never happening". Why? Are there any articles or sections of articles in Wikipedia about this? Bowei Huang (talk) 03:06, 5 January 2008 (UTC)
- Even events with probability 0 are possible to a mathematician. The mathematical probability that a random real number between 0 and 1 is a rational number is 0, even though rational numbers do exist. Bo Jacoby (talk) 03:30, 5 January 2008 (UTC).
- I haven't heard of a given limit, and mathematicians are usually more careful with their formulations than this. However, 10⁵⁰ is such a huge number that in practice you can fairly assume such an event will not occur. But only if the probability is correctly computed, and it wasn't computed with the event as "input" after the event had actually occurred, and you aren't considering a tremendous number of such possibilities while assuming none of them will occur, and ... There are many reservations. PrimeHunter (talk) 03:41, 5 January 2008 (UTC)
- It's impossible to be absolutely sure of anything. Even in mathematics you can't be sure you haven't missed a bug in your proof. You can machine-verify it, but maybe the verifier has a bug or a cosmic ray flipped a memory bit causing the wrong answer to come out. In order to make any progress in math or science you have to discard very unlikely hypotheses, not merely impossible ones. There's no particular value, like 10⁻⁵⁰, that's used as a general threshold of impossibility, but maybe the person you heard was talking about this issue and simply used 10⁻⁵⁰ as an example. -- BenRG (talk) 09:52, 5 January 2008 (UTC)
- Things with astronomically small probabilities happen all the time. You can try this in your own home: toss a coin 1000 times, and (given a fair coin) the combination of heads and tails you come up with will have a probability of about 10⁻³⁰⁰. You can repeat this experiment as many times as you like, and keep on coming up with 10⁻³⁰⁰-probability events on demand. To get lower probabilities, just use more coins.
- The statistical mechanics formulation of entropy regularly deals with probabilities vastly smaller than 10⁻⁵⁰. For a really low probability event, label the air molecules in your room by which half of the room they are in, wait a few seconds, and examine the new configuration. Since air molecules wander around the room more-or-less at random, this is the equivalent of throwing many billions of coins at once, and any particular configuration will have a probability of less than 10^(−billions).
- However, there's no paradox here: although the probability of each individual event is vanishingly small, one of them must happen; it's a bit like winning the lottery -- although it's almost certain any given player won't win, if the prize is awarded, it must be certain that one player will win -- we just don't know which one.
- Now consider an event like all the air molecules in your room moving to one side of the room at once. In principle, this is about as likely as any other specific partition of those molecules to the two sides of the room. Unlike almost all of the other configurations, you'd really notice if it happened: yet it is not much more unlikely [*] than the partition which is happening in your room right now. If this sounds peculiar and interesting, you might want to read more about entropy to find out why you can safely bet it won't ever happen to you.
- [*] yes, I know I'm oversimplifying here, but a few squillion orders of magnitude are neither here nor there when considering numbers so small. -- The Anome (talk) 11:51, 5 January 2008 (UTC)
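To put a concrete number on the coin-toss example above, here is the arithmetic as a couple of lines of Python (purely illustrative):

    import math

    # Probability of one specific sequence of 1000 fair coin tosses.
    p = 0.5 ** 1000
    print(p)                        # about 9.33e-302
    print(1000 * math.log10(0.5))   # about -301.03, i.e. the probability is roughly 10^-301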
- The moral of the story: If someone offers you a lottery with a probability of 10⁻⁵⁰ of winning, don't buy a ticket. -- Meni Rosenfeld (talk) 13:55, 5 January 2008 (UTC)
I would think that the OP got that from some other discipline; I can't imagine any mathematician making that statement. In Project Management, the 6Σ method is considered to be pretty neat, i.e. near certainty (and yes, I know it should be a lower case sigma, but QA people always write it that way). But 6σ is only 0.9999999980268, which equates to a probability of approximately 1 in 5×10⁸. And even 7σ is still only 0.9999999999974 (1 in 3.8×10¹¹). In my own discipline, when I was still a design engineer we used slide rules (showing my age), which are good to three significant figures, i.e. any variation finer than 1 in 10³ was not considered significant. Can't think who would be interested out to 1 in 10⁵⁰ though. SpinningSpark 15:12, 5 January 2008 (UTC)
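The sigma figures above can be reproduced from the normal distribution's error function; a minimal Python sketch (math.erf is my choice of tool here, not something from the post):

    import math

    def prob_within_k_sigma(k):
        """Probability that a normally distributed variable lies within k standard deviations of its mean."""
        return math.erf(k / math.sqrt(2))

    for k in (6, 7):
        p = prob_within_k_sigma(k)
        print(k, p, 1 / (1 - p))  # k=6: ~0.999999998, about 1 in 5.1e8; k=7: about 1 in 3.9e11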
I have done a little poking around over this 10⁵⁰ number and think I have the answer. It comes up a lot on Creationist web sites as the largest number that can occur in nature, the implication being that anything that has a larger number of possibilities is not going to happen. See for instance this one[1] under the MATHEMATICAL POSSIBILITIES OF DNA section. SpinningSpark 17:52, 5 January 2008 (UTC)
- That page actually says "It is said that any number larger than 2 × 10³⁰ cannot occur in nature". You know, just in case anyone was looking for more evidence that creationism is nonsense. -- Meni Rosenfeld (talk) 18:00, 5 January 2008 (UTC)
- Especially as the line immediately above it claims that there are 10⁸⁰ electrons in the universe!! SpinningSpark 18:21, 5 January 2008 (UTC)
- Creationism isn't nonsense, but those arguments for it are!! A math-wiki (talk) 09:12, 6 January 2008 (UTC)
I've heard that there's something called Borel's Law of Mathematics. What is that? Bowei Huang (talk) 05:51, 8 January 2008 (UTC)
- A Google search [2] shows "Borel's Law" is a term used mainly by creationists about a rule using the number 10⁻⁵⁰ as a limit to "impossible" things. It's named after Émile Borel but apparently he didn't formulate it like that. PrimeHunter (talk) 06:07, 8 January 2008 (UTC)
- What Borel said was this:
- Imagine we have trained a million monkeys to hit the keys of a typewriter at random and that ... these typist monkeys work arduously ten hours a day ... And that at the end of a year [their collected work] appears to contain the exact copy of all kinds of works in all languages kept in the richest libraries of the world. Such is the probability of a notable deviation occurring, during a very short time, in an area of some extent, from what statistical mechanics considers the most likely phenomenon. (Émile Borel, "Mécanique Statistique et Irréversibilité," J. Phys. 5e série, vol. 3, 1913, pp.189-196.)
- Apart from the fact that he did not say this was impossible, the probability of the unlikely feat we are asked to imagine these diligent monkeys having performed is much, much less than 10⁻⁵⁰. In a year they should produce more than a million letters, and – the total size of all books is rather certainly much more – perhaps a hundred million letters, so we are considering events here whose probability is much less than 10⁻¹⁰⁰⁰⁰⁰⁰. --Lambiam 12:04, 8 January 2008 (UTC)
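As a rough sanity check on that last bound (assuming, purely for illustration, a typewriter of about 50 keys and a text of one million characters):

\[
\log_{10}\bigl(50^{-1{,}000{,}000}\bigr) = -1{,}000{,}000 \cdot \log_{10} 50 \approx -1.7 \times 10^{6},
\]

so even a single specified million-character text has probability on the order of 10⁻¹⁷⁰⁰⁰⁰⁰, already far below 10⁻¹⁰⁰⁰⁰⁰⁰.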
Where have you heard or thought of that? And what about the probability of a tornado assembling a Boeing 747 out of pieces from a junkyard? Bowei Huang (talk) 01:23, 9 January 2008 (UTC)
- Considering that the typical junkyard doesn't have enough titanium to make one jet engine, much less the four monsters that power a 747, and that junkyards tend to have far less aluminum than a 747, I'd say that it falls under "can't happen" (which is not the same thing as a probability of 0). --Carnildo (talk) 01:46, 9 January 2008 (UTC)
But suppose the junkyard did have all the materials and chemicals to build an airplane piled together? Bowei Huang (talk) 02:19, 9 January 2008 (UTC)
- You don't even need to suppose that. Nuclear reactions can transform one element into another. Sure, the probability of it happening spontaneously is... very small. I think it's roughly on the order of . But it still can happen. -- Meni Rosenfeld (talk) 08:36, 9 January 2008 (UTC)
Another probability question.
Thanks for the last answer.
I'm doing another question now which never works out; it's on conditional probability. One example is: There are 4 firms bidding for a contract: A, B, C, and D. The probabilities of each winning are as follows: P(A)=0.15, P(B)=0.35, P(C)=0.3, P(D)=0.2. However, firm A drops out. Find the new probability of B winning.
I did it like this
P(B|Ā) = P(B ∩ Ā) / P(B)
= (0.35 + 0.85) / 0.85 = 1.412
That's clearly wrong, so what do I do? 136.206.1.17 (talk) 15:29, 5 January 2008 (UTC)
- Your use of the formula P(B & Ā) = P(B) + P(Ā) is to blame (it gives 1.2 as a probability). Think about it: what is the chance of B winning the contract and A not winning it? You have also got the formula for conditional probability wrong. Algebraist 15:53, 5 January 2008 (UTC)
Should it be P(B & Ā) = P(B) × P(Ā) on the top line? If that's the case, the answer works out at 0.35, also wrong as it should be going up.
If I do the sum like this I get a realistic answer; is it right?
P(A) = 0.35 / (0.35 + 0.3 + 0.2) = 0.538
unsigned contribution by 136.206.1.17 (talk)
You said thanks for the last answer, but I looked at your previous post and noticed that no-one has actually given you the correct answer yet, only hints (meanies!). Have you got it now or do you want me to put you out of your misery? SpinningSpark 16:32, 5 January 2008 (UTC)
Can you put me out of my misery please? 136.206.1.17 (talk) 16:35, 5 January 2008 (UTC)
- I'll answer in your original thread so this one doesn't get cluttered SpinningSpark 17:28, 5 January 2008 (UTC)
- (edit conflict) Of course no-one gave the answer - the big grey box says we shouldn't! However, I will point out that the reason why, in this case, P(X & Y) = P(X)P(Y) doesn't hold is because that assumes that X and Y are independent. In this case, if B wins the contract then obviously A can't, so they're not independent. Instead, the probability of B getting the contract and A not getting it is ... the probability of B getting it, which seems to be what you've figured out for yourself. At a glance, I'd say you have the right answer there, but I'm pretty tired right now so don't quote me on that. Confusing Manifestation(Say hi!) 16:41, 5 January 2008 (UTC)
- [ec] First things first. There is no way to answer this question, since we are not told how firm A dropping out affects the probabilities of the others. Assuming that conditional probability must be the correct thing to calculate is naive.
- If the question was phrased "what is the probability that B won, given that you know that A didn't win", we would be in business. The definition of conditional probability is P(X|Y) = P(X ∩ Y) / P(Y), or in our case, P(B|Ā) = P(B ∩ Ā) / P(Ā). Now, your idea that P(B ∩ Ā) = P(B)P(Ā) is only correct if the events are independent, but that is not the case here. A correct way to calculate it is to use again the definition of conditional probability to find P(B ∩ Ā) = P(Ā|B)P(B). Now, if B won then surely A didn't win, thus P(Ā|B) = 1, so P(B ∩ Ā) = P(B). This gives P(B|Ā) = P(B) / P(Ā) = 0.35 / 0.85 ≈ 0.412.
- Note that this calculation is equivalent to Bayes' theorem, P(B|Ā) = P(Ā|B)P(B) / P(Ā). -- Meni Rosenfeld (talk) 16:44, 5 January 2008 (UTC)
Thanks for that answer. I know it's a conditional probability question because that's the section of my book that it comes from. —Preceding unsigned comment added by 78.16.6.199 (talk) 12:20, 6 January 2008 (UTC)
- Huh? This attitude is wrong on so many levels. Just because whoever wrote the book wanted you to use conditional probability here doesn't mean this is the correct thing to do. There's no inherent problem in answering a question, say in an exam, the way you are expected to and in the way that will maximize your grade - but it's crucial to keep the truth in mind. In this case, you should write down "0.412" in the exam, but you should acknowledge that the true answer is that the question is unanswerable. -- Meni Rosenfeld (talk) 12:59, 6 January 2008 (UTC)
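For completeness, the renormalisation described above is easy to check numerically; a minimal Python sketch (the numbers are just those from the question, and it of course inherits the caveat that conditional probability may not model what really happens when a firm drops out):

    # Stated probabilities of each firm winning the contract.
    p = {"A": 0.15, "B": 0.35, "C": 0.3, "D": 0.2}

    # Condition on "A did not win": drop A and renormalise by P(not A) = 0.85.
    p_not_a = 1 - p["A"]
    conditional = {firm: prob / p_not_a for firm, prob in p.items() if firm != "A"}

    print(conditional["B"])           # 0.35 / 0.85 = 0.411...
    print(sum(conditional.values()))  # 1.0 (up to floating-point rounding)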
Formal logic
Hello. I need some help with formal logic principles, as they relate to language. In formal logic, there is a strong connection between math and language. In particular, using the language "and" versus the language "or" logically carries two very different meanings.
So, let's say that we consider classifying everyone in the world into the following two characteristics: Athlete (professional) and Black.
Using these two characteristics, we effectively have four distinct categories of people:
- 1. Athlete who is Black - example Mike Tyson
- 2. Athlete who is not Black - example Jason Giambi
- 3. Non athlete who is Black - example Denzel Washington
- 4. Non athlete who is not Black - example Tom Cruise
Using formal logic, these would be described as
- 1. A AND B
- 2. A AND NOT B
- 3. NOT A AND B
- 4. NOT A AND NOT B
Using set theory notation, these would be described as
- 1. Set A intersect Set B
- 2. Set A intersect Set B'
- 3. Set A' intersect Set B
- 4. Set A' intersect Set B'
Now ... to distinguish between the language of AND versus the language of OR:
A AND B
If we discuss Set A intersect Set B ... in logic, A AND B ... we are talking about people who are Athletes AND are also Black ... that is, a person must display BOTH characteristics
So, let us consider the four famous people named above
- Mike Tyson - is included because he is both Athlete AND he is Black
- Jason Giambi - is excluded because, even though he is Athlete, he is not Black
- Denzel Washington - is excluded because, even though he is Black, he is not Athlete
- Tom Cruise - is excluded because he is not Athlete and also because he is not Black
A OR B
If we discuss Set A union Set B ... in logic, A OR B ... we are talking about people who are either Athletes OR are Black OR both ... that is, a person must display either ONE characteristic OR the other OR perhaps both
So, let us consider the four famous people named above
- Mike Tyson - is included because he is Athlete and also because he is Black (he happens to have both characteristics)
- Jason Giambi - is included because, even though he is not Black, he is Athlete (he has one of the characteristics, but not the other)
- Denzel Washington - is included because, even though he is not Athlete, he is Black (he has one of the characteristics, but not the other)
- Tom Cruise - is excluded because he is not Athlete and he is not Black (he has neither of the characteristics)
Conclusion:
Using "and" ... A AND B ... this includes only 1 person from our group (Tyson) and excludes the other 3 people (Giambi, Washington, Cruise)
Using "or" ... A OR B ... this excludes only 1 person from our group (Cruise) and includes the other 3 people (Giambi, Washington, Tyson)
Thus, the words "and" and "or" have very different meanings.
So, before I continue ... is all of the above correct? Or am I missing anything?
Premises: We can assume for this discussion that the Universe of consideration (U) is just these four famous individuals for now ... and that the word "or" is the inclusive or (OR) as opposed to the exclusive or (XOR) ... and that both Sets (Athletes / Black) are well-defined sets.
Thank you! (Joseph A. Spadaro (talk) 19:19, 5 January 2008 (UTC))
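One way to sanity-check the membership claims above is to model the two characteristics as finite sets; a minimal Python sketch (the set assignments are simply the four examples from the question):

    athletes = {"Mike Tyson", "Jason Giambi"}
    black = {"Mike Tyson", "Denzel Washington"}
    universe = {"Mike Tyson", "Jason Giambi", "Denzel Washington", "Tom Cruise"}

    print(athletes & black)               # A AND B          -> {'Mike Tyson'}
    print(athletes - black)               # A AND NOT B      -> {'Jason Giambi'}
    print(black - athletes)               # NOT A AND B      -> {'Denzel Washington'}
    print(universe - (athletes | black))  # NOT A AND NOT B  -> {'Tom Cruise'}
    print(athletes | black)               # A OR B (inclusive) -> everyone except Tom Cruise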
- This looks okay, though of course one might find something to quibble about in your choice of words. I'm not sure which is the part you had doubts about. -- Meni Rosenfeld (talk) 19:45, 5 January 2008 (UTC)
- The relation between the set operations and the operations on truth values is as follows.
- If S and T are sets, which are both subsets of some universe U, and x is an element (individual) of U, then:
- "x is a member of S-intersect-T" means the same as: "x is a member of S AND x is a member of T";
- "x is a member of S' " means the same as: "x is NOT a member of S ".
- Since a set is defined by its members, this can be seen as a definition of set intersection and set complement. See also Venn diagram. --Lambiam 23:55, 5 January 2008 (UTC)
Thanks. So, it seems that we are all more or less on the same page. That being said ... I posted a question on the Language Help Desk. And I wanted to get some input/feedback from some mathematicians who understand logic ... and the interplay between formal logic and precise language. If anyone well versed in math / formal logic would take a look at this question on the Language Desk (Wikipedia:Reference desk/Language#Correct wording), and offer some input there, it would be appreciated. Thanks. (Joseph A. Spadaro (talk) 23:23, 6 January 2008 (UTC))
- Note that whilst what has been written above seems correct at a quick scan, it doesn't mean that folk use AND and OR in that manner. Colloquially they get all messed up and common usage won't let you reliably interpret anything. -- SGBailey (talk) 23:26, 6 January 2008 (UTC)
- If you want to look at it formalistically, you can think of it as a question of order of precedence. "List of foos that are bar and zotz" could be taken with the "and" binding more tightly, in which case you get that the criterion for inclusion of a foo is that it be bar and at the same time also zotz, which is the one that most closely parallels the usual forms in mathematical logic. But if you want "and" to bind last, rather than first, then perhaps you distribute "foos that are" over the "and", getting "list of foos that are bar, and of foos that are zotz". There's nothing illogical, or even "colloquial", about the latter rendering, and in many contexts it's the way natural language semantics works. --Trovatore (talk) 23:46, 6 January 2008 (UTC)
Thanks for the input --- much appreciated ... (Joseph A. Spadaro (talk) 06:03, 10 January 2008 (UTC))