Wikipedia:Reference desk/Archives/Mathematics/2007 July 13


Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.



July 13

Formula for number of possible combinations?

This is my first ever post to the maths desk - I usually hang out on the humanities/misc/language cluster - so please make your answers to this comprehensible to a non-mathematician :)

I would like to know if there is a formula for working out the number of possible combinations of a given number of characters. For example, if you take A and B, I think there are 4 (or 2 squared) possibilities - AA, AB, BA and BB. If you take A, B and C, I make it 27, which is 3 cubed. Does this mean that the answer is always going to be n to the power of n? Is the number of permutations of A, B, C and D going to be 4 to the power of 4, or 256?

Secondly, is there a formula for working out the number of possible three-letter combinations of all 26 letters of the alphabet?

Many thanks, --Richardrj talk email 08:10, 13 July 2007 (UTC)

Hi! If we have n characters in our alphabet (2, for example, if we use only "A" and "B") and want to use them to create a string of length m, there are n^m combinations. For example, if we have 26 characters available (n = 26) and form three-letter combinations (m = 3), there are 26^3 = 17576 ways to do it. —Bromskloss 08:54, 13 July 2007 (UTC)
Cool, thanks very much! --Richardrj talk email 09:02, 13 July 2007 (UTC)
You're welcome! It's apparently called permutation with repetition, btw. —Bromskloss 09:04, 13 July 2007 (UTC)
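A quick way to convince yourself of the n^m formula is to enumerate the strings by brute force. Below is a minimal Python sketch of that check; the helper name is just for illustration, and the alphabets and lengths are the examples from this thread.
 from itertools import product
 import string

 def count_strings(alphabet, length):
     # Enumerate every string of the given length over the alphabet and count them.
     return sum(1 for _ in product(alphabet, repeat=length))

 print(count_strings("AB", 2))                    # 4     == 2^2
 print(count_strings("ABC", 3))                   # 27    == 3^3
 print(count_strings("ABCD", 4))                  # 256   == 4^4
 print(count_strings("ABC", 6))                   # 729   == 3^6 (the word may be longer than the alphabet)
 print(count_strings(string.ascii_uppercase, 3))  # 17576 == 26^3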
You asked for the answer to be comprehensible to a non-mathematician ... which, in this case, is quite limiting. First, there are articles on permutations and combinations, both of which might be over your head -- but also might have some helpful introductory concepts in them. Second, the problem as you define it is not very clear. In other words, I don't know what you want -- because I think that you yourself are not sure of what you want.

Let's take the first example you give: the A and B. It seems like you are saying that you want all 2-letter arrangements, in which the only available letters are A and B. Thus, you have AA, AB, BA, and BB. Correct. In actuality, the total is arrived at by 2 times 2 = 4. (This is called the fundamental counting principle.) In the second example, you have now limited your "alphabet" to the three letters A, B, C. But -- it seems -- you are only interested in 3-letter arrangements. Which, as you say, would indeed be 3 times 3 times 3 = 27. So, to answer your question about 4: yes, the "arrangements" (the proper mathematical word) would be 4 times 4 times 4 times 4 = 256 arrangements -- assuming that you are interested in creating only 4-letter arrangements (as opposed to, say, 3-letter or even 2-letter arrangements). That is to say, you are interested in things like "AABD" but not things like "ACC" or "BD". If you are only interested in creating 4-letter "words" (arrangements) using the 4 letters A, B, C, D -- then, yes, there are 256 arrangements possible. If you want to move on to 5 letters: A, B, C, D, E -- there would be five raised to the fifth power (that is, 5x5x5x5x5) arrangements possible, but only if you are interested in creating 5-letter words ("AABDE") and not 4- or 3- or 2-letter words (such as "BCCE" or "ACD" or "BE"). I hope this is making sense.

As for your second question: "Is there a formula for working out the number of possible three-letter combinations of all 26 letters of the alphabet?" ... The answer is 26 times 26 times 26 = 17,576 arrangements of 3-letter "words" using 26 possible letters to be arranged. All of this falls under the "fundamental counting principle" (FCP).

I can explain the FCP in basic terms. I will use the final example for illustration (using all 26 letters of the alphabet, determine the number of 3-letter arrangements). Think of it this way. You have 3 slots or, say, letters you are "branding" to create a license plate. The license plate will be of the format: (first character / second character / third character). Hence, something like " Q / V / X " would be one possible license plate. Another would be " J / G / W ". And so on. So, for the first character in the license plate, you have 26 different ways available to fill that slot (A, B, C, ... X, Y, Z). For the second character in the license plate, you have 26 different ways available to fill that slot (again, A, B, C, ... X, Y, Z). And the same for the third character of the license plate. So, the FCP says that you multiply the number of ways of filling each slot: hence 26 x 26 x 26. Another example: say that you want to create 5-character license plates and you only want to use the letters Q, W, E, R, T, and Y. There are 6 ways (Q, W, E, R, T, Y) to fill the first character of the license plate, 6 ways to fill the second character, 6 ways to fill the third character, 6 ways to fill the fourth character, and 6 ways to fill the fifth character of the license plate. Hence, the total number of arrangements will be 6x6x6x6x6.
I hope this makes sense and helps. (JosephASpadaro 09:17, 13 July 2007 (UTC))
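The fundamental counting principle described above amounts to multiplying the number of choices available for each slot, and it also covers plates whose slots draw on different character sets. Here is a small sketch using the two license-plate examples from that explanation; the function name is only illustrative.
 # Fundamental counting principle: multiply the number of choices for each slot.
 def count_arrangements(choices_per_slot):
     total = 1
     for choices in choices_per_slot:
         total *= choices
     return total

 print(count_arrangements([26, 26, 26]))     # 17576: three slots, 26 letters each
 print(count_arrangements([6, 6, 6, 6, 6]))  # 7776:  five slots, only Q, W, E, R, T, Y allowed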
Stop being a joyless nerd, you know exactly what he was asking. Jesus
Thanks very much for this extensive reply, Joseph. In fact Bromskloss had already answered it fine - maybe you and he had an edit conflict, you don't say. In my defence, I thought I had made myself fairly clear, and he didn't seem to have a problem interpreting my question either. I said that I wanted the formula for "the number of possible combinations of a given number of characters", which to me is a pretty clear indication that, yes, I'm only interested in the number of letters in the arrangement being the same as the number of characters available. I don't wish to sound ungrateful, so thanks again for your reply. And have a look at my post this morning on the domain name thing, if you care to continue the debate over there. --Richardrj talk email 09:30, 13 July 2007 (UTC)
I "intuitively" understood what you were getting at, as I am sure Bromskloss did. I am simply saying that you did not word correctly what you intended. Yes, you stated "the number of possible combinations of a given number of characters". First, the word "combinations" is a mathematical term and, in this sentence, is used incorrectly. Being a non-mathematician, I suspect that you did not know that ... and that you were using the "everyday" meaning of the word "combinations". Second, yes -- you specified a given number of characters (A, B, C, D). True. However, you did not specify the length of the "word" you were attempting to create. By reviewing your examples, I figured it out. But, not by your wording of the question. That is why my explanation pointed out that, while using the 5 letters of A B C D E, one can still seek to create 4-letter arrangements, or 3, or 2, or even 1. It does not necessarily have to be 5. In other words, in a mathematical problem of this nature -- one needs to know TWO things in order to solve the problem: (1) how many letters are available for use; and (2) how long is the string / "word" being created. One needs to know / specify both in order to correctly solve the problem. And it is not always assumed that both numbers are always the same (i.e., 3 letter words using 3 letters; 5 letter words using 5 letters). That's why I pointed out the "other" scenarios (i.e., 5 letters available to create a 3-letter word like "CBD"). That's all. Thanks. (JosephASpadaro 10:09, 13 July 2007 (UTC))
And, for some other purposes, we should add that — since you allow repetitions (as in 'AA') — it is possible to create arrangements longer than the alphabet (say, 'abacab' is an example of a 6-letter word over the 3-letter alphabet {a, b, c}). Of course the number of such words is computed by the same formula, given by Bromskloss: (No of possible words) = (alphabet length)^(word length).
CiaPan 11:07, 13 July 2007 (UTC)
I thought Bromskloss answered the question quite well. I also thought the question itself was well-written. You can't expect a newbie to know jargon, but he gave enough examples to make it clear what he was looking for. Black Carrot 16:05, 13 July 2007 (UTC)
BlackCarrot, I am not in disagreement -- Bromskloss answered the question just fine. Nonetheless, the question itself was not well-written (from a mathematical standpoint). That is why I offered the (mathematical) corrections. And I fully understood that the poster was not a mathematician and I specifically addressed that issue in my responses. Yes, from an "intuitive" standpoint, I understood the question. However, from a mathematical standpoint, the question was incomplete and used improper wording. I assumed that the poster would want to know his errors, so that he can correct them in the future. It serves no purpose NOT to point out the errors; otherwise the poster would never know about them. Thanks. (JosephASpadaro 20:38, 13 July 2007 (UTC))
You're a complete dick (from a mathematical standpoint)

Confused with F-test

Image:F-test.jpg

I tried to do an F-test in Excel. On the basis of the table, can I conclude that the variance of series 1 is not greater than the variance of series 2 at the 0.05 level of significance?

I know that the higher variance should always be used as the numerator and the lower variance as the denominator, and the F statistic should thus always be greater than or equal to one. But when I did it the other way round and used an Excel formula to calculate the p value from the F statistic calculated this way, the software did not display any error message and returned a p value of 0.9992. Does this mean that the result is acceptable? Is this a case of a one-tailed F-test? --Ilovenepal 11:50, 13 July 2007 (UTC)--Ilovenepal 11:55, 13 July 2007 (UTC)

Without more information on what those series are, and the exact form of the p value computation, it is hard to say for sure. But here are some points to consider.
If the two series are simply independent samples of data, the order of the division really doesn't matter, although in this case you probably would want a two-sided F-test, unless a priori you wish to test whether one particular variance is greater, or not, than the other. As I never use Excel, I could not promise it's smart enough to do that without a lot of effort. I would suspect the test you did is a one-sided one, although if so I can see that if you did a two-sided one, the p-value would be approximately 0.0016, assuming the ordering of the quotient was determined before seeing the variances and not decided post hoc to be less than 1.
A good software package should be comfortable with F quotients less than 1 (the converse is not necessarily true!!).
What you do not want to do in this case is intentionally order them by size and divide, using the quotient as the statistic in a one-sided test. That essentially doubles the level of significance in this case. Doing this ordering with a one-sided test would be appropriate in a sums of squares (SS) analysis, when you know one of the SS terms may or may not have an additional variance component on top of a component common to both terms. Then the first term does go in the numerator, as it being the smaller of the two only suggests there is no additional component. Baccyak4H (Yak!) 13:49, 13 July 2007 (UTC)
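Since the Excel table itself is not visible here, the following Python/SciPy sketch uses made-up sample variances and degrees of freedom purely to illustrate the tail bookkeeping described above; every number in it is an assumption, not the poster's data.
 from scipy.stats import f as f_dist

 # Hypothetical inputs -- the real series behind the Excel screenshot are unknown.
 var1, df1 = 4.0, 15   # sample variance and degrees of freedom of series 1 (assumed)
 var2, df2 = 1.0, 15   # sample variance and degrees of freedom of series 2 (assumed)

 F = var1 / var2                   # quotient in an order fixed before seeing the data
 p_upper = f_dist.sf(F, df1, df2)  # P(F' >= F): one-sided test of "series 1 has the larger variance"
 p_lower = f_dist.cdf(F, df1, df2) # P(F' <= F): the tail you get if the quotient is inverted
 p_two_sided = 2 * min(p_upper, p_lower)

 print(p_upper, p_lower, p_two_sided)
On this reading, a p value of 0.9992 from the inverted quotient corresponds to 1 - 0.9992 = 0.0008 in the other tail, i.e. roughly the 0.0016 two-sided figure mentioned above.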
I thought an F distribution was always a one-sided test, given the shape of the distribution. Is it not right? Tim 13:11, 14 July 2007 (UTC)
Its most typical application is, so it usually is (that would be the SS one I mentioned). But the one here (if I understand it correctly) is not its most typical application.
A distribution need not be symmetrical to have meaningful quantiles in its left (or less-skewed, whichever your point was) tail.
In some sense, you are right that one could do a one-sided test here, but to do it right would require using the 1-α/2 quantile for an α level test. Note the division by two. To demonstrate the validity of that paradoxical halving is not a battle I wished to have. So I described it as two-sided. Baccyak4H (Yak!) 02:58, 15 July 2007 (UTC)
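To make that halving concrete, here is a brief sketch of the two cutoffs; the degrees of freedom are again assumed rather than taken from the poster's data.
 from scipy.stats import f as f_dist

 alpha = 0.05
 df1, df2 = 15, 15   # assumed degrees of freedom

 # Cutoff to use if the larger sample variance goes on top only after looking at
 # the data: the 1 - alpha/2 quantile mentioned above.
 crit_post_hoc = f_dist.ppf(1 - alpha / 2, df1, df2)
 # The usual one-sided cutoff; using it after ordering by size roughly doubles
 # the real significance level.
 crit_naive = f_dist.ppf(1 - alpha, df1, df2)

 print(crit_post_hoc, crit_naive)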

Thank you. I have just assumed the series to be independent and I wanted to see whether series 1 has significantly higher variance than series 2 (one-tailed test, I suppose), not whether the variances are statistically different (two-tailed test). Now, with the above results, can I conclude that series 1 does not have higher variance than series 2? --Ilovenepal 11:53, 15 July 2007 (UTC)

If your question of 1 being higher than 2 was not informed by looking at the data themselves (including the sample variances), but was stated before collecting the data, then your inference is correct. Baccyak4H (Yak!) 19:31, 15 July 2007 (UTC)

Equation

I am having problems solving this equation

 \frac{1}{2}^x + \frac{1}{4}^x = 1

I have tried solving it with logs but this always leads to x = 0, which obviously doesn't work because a^0 = 1, so the left-hand side would equal 2 rather than 1.

I am sure it is significant in some way that it can be written as

 \frac{1}{2}^x + \frac{1}{2}^{2x}=1

but I don't see how.

I'm not looking for an answer - in fact that is the last thing I want - just a helpful hint or two. Algebra man 13:26, 13 July 2007 (UTC)

It looks like you raise the 1 in the numerator to the power x. If you mean a fraction raised to the power x then you should use parentheses:
 \left( \frac{1}{2} \right)^x + \left( \frac{1}{4}\right)^x = 1
to make the notation clear. CiaPan 13:39, 13 July 2007 (UTC)

Sorry. I know I should have but I'm new to writing maths properly on Wikipedia. Plus I thought it would be obvious. Algebra man 13:42, 13 July 2007 (UTC)

As for solving the equation - note that \left(\frac{1}{2}\right)^{2x}=\left(\left(\frac{1}{2}\right)^x\right)^2. So what does this equation look like? -- Meni Rosenfeld (talk) 13:44, 13 July 2007 (UTC)
By the way, how exactly did using logs lead to x = 0? Using an inadequate technique might not lead you to a correct solution, but unless you did something wrong, it should not lead you to an incorrect solution. -- Meni Rosenfeld (talk) 13:47, 13 July 2007 (UTC)
I concur that MR's substitution is useful. Baccyak4H (Yak!) 13:53, 13 July 2007 (UTC)

Using logs, I got x = 0 via the following:

 \left( \frac{1}{2} \right)^x + \left( \frac{1}{4}\right)^x = 1

 \implies log\left( \frac{1}{2} \right)^x +log \left( \frac{1}{4}\right)^x = log{1}

 \implies x log\left( \frac{1}{2} \right) + xlog \left( \frac{1}{4}\right) = log{1}

Setting the base as  \frac{1}{2}

 \implies x + 2x = 0

 \implies 3x = 0

 \implies x = 0

At least I think this is right. Algebra man 14:01, 13 July 2007 (UTC)

No. You have used the incorrect formula \log{(x+y)}=\log{x}+\log{y}\;\! in the first step. -- Meni Rosenfeld (talk) 14:06, 13 July 2007 (UTC)
Yes, you are making the mistake many of my students like to make. The correct formula is log(xy) = logx + logy. Unless I'm doing something dumb, the solution is not nice. (Of course, nice is a matter of opinion.) –King Bee (τγ) 14:31, 13 July 2007 (UTC)
Definitely a matter of opinion ;) Compared to (say) roots of cubics, it's not bad. Baccyak4H (Yak!) 14:46, 13 July 2007 (UTC)

Alright - sticking with logs for the minute, could I do the following:

 \left( \frac{1}{2} \right)^x + \left( \frac{1}{4}\right)^x = 1

 \implies log\left( \frac{1}{2} \right)^x +log \left( \frac{1}{4}\right)^x = log{1}

 \implies log\left( \left( \frac{1}{2} \right)^x \left( \frac{1}{4}\right)^x\right) = log{1}

using the logx + logy = log(xy) formula? Algebra man 14:51, 13 July 2007 (UTC)

No. You have used the incorrect formula \log{(x+y)}=\log{x}+\log{y}\;\! in the first step. -- Meni Rosenfeld (talk) 15:01, 13 July 2007 (UTC)

OK then, with regards to logs, I have no further ideas. Could someone tell me how to start solving this with logs? Algebra man 15:03, 13 July 2007 (UTC)

Not sure there is a way where your first manipulation uses logs. But without a doubt, they will come in handy sometime. In the spirit of Meni's earlier hint, have you tried using the substitution
 y = \left(\frac{1}{2}\right)^{x} \,\!
? Baccyak4H (Yak!) 15:07, 13 July 2007 (UTC)
There is definitely no way to solve this problem by starting with logs. I was hoping my original hint was enough, without explicitly stating the substitution. This is what happens when someone asks for only a hint but doesn't try using it when it is given. -- Meni Rosenfeld (talk) 15:09, 13 July 2007 (UTC)

Sorry, I got distracted with the logs, as that is how I originally tried solving it, but let's not forget you asked me how I did it. Anyway I have taken your hint, hopefully as you intended it, and tried completing the square using the substitution

 y = \left(\frac{1}{2} \right)^x

This took me to

 y^2 + y = 1\,\!

 y^2 + y + \frac{1}{4} = 1 + \frac{1}{4}\,\!

 (y + \frac{1}{2})^2 = \frac{5}{4}\,\!

Taking square roots on both sides

 y + \frac{1}{2} = \pm\frac{ \sqrt{5}}{2}\,\!

 \implies y = \frac{-1 \pm\ \sqrt{5}}{2}\,\!

 \implies \left(\frac{1}{2} \right)^x =  \frac{-1 \pm\ \sqrt{5}}{2}\,\!

Is that right? Algebra man 15:24, 13 July 2007 (UTC)

Yes, indeed. -- Meni Rosenfeld (talk) 15:26, 13 July 2007 (UTC)
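For anyone following along, a quick numeric check in Python that both roots really satisfy y^2 + y = 1:
 import math

 for y in [(-1 + math.sqrt(5)) / 2, (-1 - math.sqrt(5)) / 2]:
     print(y, y**2 + y)   # the second value is 1.0 (up to rounding) for both roots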

Thank you. Finishing off is a little harder than I thought, but now, am I finally allowed to use logs thus:

 \left(\frac{1}{2} \right)^x =  \frac{-1 \pm\ \sqrt{5}}{2}\,\!

 \implies log \left(\frac{1}{2} \right)^x = log\left(\frac{-1 \pm\ \sqrt{5}}{2} \right)\,\!

 \implies x log\left(\frac{1}{2} \right) = log\left(\frac{-1 \pm\ \sqrt{5}}{2} \right)\,\!

 \implies x = log\left(\frac{-1 \pm\ \sqrt{5}}{2} \right) * \frac{1}{log \left(\frac{1}{2} \right)}\,\!

I don't like the look of it but if it's right I don't care. Algebra man 15:38, 13 July 2007 (UTC)

That's what I was getting too. Note that the -\sqrt{5} can't be used if we're keeping it real. - Rainwarrior 15:53, 13 July 2007 (UTC)
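A numeric check of that expression, and of the remark that the -\sqrt{5} branch gives no real solution, might look like this in Python:
 import math

 y_pos = (-1 + math.sqrt(5)) / 2
 x = math.log(y_pos) / math.log(1 / 2)   # the formula above, taking the + sign
 print(x, 0.5**x + 0.25**x)              # x is about 0.694 and the sum comes out as 1.0

 y_neg = (-1 - math.sqrt(5)) / 2
 # math.log(y_neg) raises a ValueError: the - sign gives no real x.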

OK Cheers Algebra man 15:55, 13 July 2007 (UTC)

This expression can be written more simply as x = \log_2\phi\;\!, where \phi\;\! is the golden ratio - which is why I disagree with King Bee characterizing it as "not nice". -- Meni Rosenfeld (talk) 16:34, 13 July 2007 (UTC)
Please! log_2 φ is horrifying to me. =) –King Bee (τγ) 19:38, 13 July 2007 (UTC)

As a, hopefully, final question how do you get rid of the

 \frac{1}{log \left(\frac{1}{2} \right)}\,\!

and why do you choose base 2? I'm sure they're connected but I don't see how. Algebra man 17:11, 13 July 2007 (UTC)

Note that \frac{-1 \pm\ \sqrt{5}}{2} =  \frac{2}{1 \pm\ \sqrt{5}} and \log(1/x) = - \log\,x, so we can also write the answer as
x = \left. \log\left(\frac{1 + \sqrt{5}}{2}\right) \right/ \log\,2\,.
Now we recognize \frac{1 + \sqrt{5}}{2} as the golden ratio, usually denoted by the Greek letter \varphi. Note further that \log\,a / \log\,b is the logarithm of a to the base b, often denoted by \log_b a\,. Using the notation lb a for log_2 a, recommended by the ISO standard Mathematical signs and symbols for use in physical sciences and technology (ISO 31-11:1992), we can even go one step beyond Meni's expression and arrive at:
x = \mathrm{lb}\,\varphi\,,
which doesn't look too bad. (OK, maths is not included in "physical sciences and technology", so this is maybe a step too far.)  --LambiamTalk 17:19, 13 July 2007 (UTC)
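The same number falls out of the lb φ (that is, log_2 φ) form; a one-line check:
 import math

 phi = (1 + math.sqrt(5)) / 2
 x = math.log(phi, 2)          # log of the golden ratio to base 2
 print(x, 0.5**x + 0.25**x)    # the same x of about 0.694, and the sum is again 1.0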
As a general principle, one of the goals in algebraic solutions to problems is to change the problem to be either a linear equation or a quadratic (which can be effectively treated as a pair of linear equations). There are some problems in the math GRE which end up being quadratic equations in disguise. Donald Hosek 17:29, 13 July 2007 (UTC)

There are a couple of things I don't understand about going from

 \frac{1}{log \left(\frac{1}{2} \right)}\,\!

to

x = \left. \log\left(\frac{1 + \sqrt{5}}{2}\right) \right/ \log\,2\,.

First if

\log(1/x) = - \log\,x

then why is it

 log\,2\,.

and not

 -log\,2\,.

Also

 log\left(\frac{1 + \sqrt{5}}{2}\right)

why is it now 1 and not -1? - I do understand however why we disregarded -\sqrt{5}, as we wanted to 'keep it real'. Algebra man 21:17, 13 July 2007 (UTC)

\log{\left(\frac{-1+\sqrt{5}}{2}\right)} = \log{\left(\frac{2}{1+\sqrt{5}}\right)}=-\log{\left(\frac{1+\sqrt{5}}{2}\right)}
and
\log{\frac12}=-\log2
so
\log{\left(\frac{-1+\sqrt{5}}{2}\right)} \Bigg/ \log{\frac12} = \log{\left(\frac{1+\sqrt{5}}{2}\right)} \Bigg/ \log2.
-- Meni Rosenfeld (talk) 21:43, 13 July 2007 (UTC)

Thanks Algebra man 22:50, 13 July 2007 (UTC)

Right, that's one solution covered out of an infinite number of solutions...

x = -1/ln(2) [ ln(|conj{φ}|) + πi(2k + 1) ] for integral k

expressing it in words seems to tie in a lot of mathematical concepts and functions:

the product of "every odd multiple of the product of pi and the imaginary unit plus the natural log of the absolute value of the surd-conjugate of the golden ratio" and the negative multiplicative inverse of the natural log of 2

Yeah, except that instead of "the absolute value of the surd-conjugate of the golden ratio", it's probably better to say "the reciprocal of the golden ratio" or "the golden ratio minus one"; and I think it would be clearer if you said "any odd multiple" instead of "every odd multiple". – b_jonas 09:42, 14 July 2007 (UTC)

Kalam cosmological argument

Here is an excerpt from the Wikipedia article of the above title, which I find absurd. "Craig describes the impossibility of an actual infinite like an endless bookcase. For example, imagine a bookcase that extends infinitely on which there is an infinite number of books, colored green and red, green and red, and so on. Obviously there would be an infinite number of books. But imagine you remove all red colored books. How many are left? An infinite number. Thus infinity divided by two equals infinity, which is illogical given standard definitions of division. Craig thus attempts to show that infinity, as he defines it, cannot be applied to operations in the world." Why is ∞/2 not equal to ∞? e.g. (5/0)/2 = 5/0 = ∞. Why is this illogical? The article just states it without explaining why.

He doesn't say that it's illogical, but that you can't have a physical manifestation of infinity. Donald Hosek 21:59, 13 July 2007 (UTC)
The text of the article strongly suggests that Craig states that ∞/2 = ∞ is illogical. If he does not actually state such a thing, the text should be modified to reflect what he does state. Presumably the source of the bookcase example is the book The Kalam Cosmological Argument by William Lane Craig (1979), Barnes & Noble, ISBN 978-0064913089, although this is not referenced in the article.  --LambiamTalk 22:42, 13 July 2007 (UTC)

It is the text itself that I objected to. The text, as I had already quoted above, states explicitly "Thus infinity divided by two equals infinity, which is illogical given standard definitions of division." It does not say that Craig himself said it, although I guess it would be necessary for his theory to work. But that's beside the point. The question I want answered is: how is the above example of books illogical "given standard definitions of division"? -Original questioner

One answer might be "Standard definitions of division are usually simple, and apply only to the case when both numbers involved are finite (hell, both integers). Finite numbers have the property that x/2 = x never holds for nonzero x." (I think this answer sucks.) Tesseran 07:05, 14 July 2007 (UTC)
But if division is considered to be thus restricted, the argument is similar to saying something like "Nietzsche says that God is dead; thus infinity is zero divided by zero, which is illogical." The addition, making a jump from "God is dead" to "infinity is zero divided by zero", is "original research". The term "illogical" should apply more to this jump than to the supposed consequence. In this specific case, it is commonly accepted in mathematics that some "infinities" may be partitioned into several equally large infinities, so even setting the OR issue aside, "illogical" is clearly inappropriate here. See also Hilbert Hotel.  --Lambiam 08:31, 14 July 2007 (UTC)
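The bijection behind that "equally large" claim is just the Hilbert's Hotel pairing: match the n-th book on the full shelf with the n-th red book. A toy sketch of the pairing rule, printed for a finite prefix of the green, red, green, red, ... shelf (the indexing convention is only an illustration):
 # If the shelf alternates green, red, green, red, ... starting at position 0,
 # the n-th red book sits at position 2n + 1, so the rule n -> 2n + 1 pairs the
 # whole shelf with the red books one-to-one.
 def nth_red_position(n):
     return 2 * n + 1

 for n in range(5):
     print(n, "->", nth_red_position(n))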
It's worth noting that Craig is apparently a philosopher, not a mathematician. Tesseran's interpretation, while he thinks it sucks, would seem to be pretty much on the mark. The key point of the argument is whether infinity is a concept that can exist in the universe. Craig argues that because the conventional definitions of arithmetic don't apply to infinity, it cannot have a physical manifestation. Donald Hosek 17:12, 14 July 2007 (UTC)
The original poster signalled a problem with our own Wikipedia article. You keep addressing the merits of Craig's argument, which, however, has no bearing on the question whether our article sucks.  --Lambiam 19:15, 14 July 2007 (UTC)
Isn't the original red book and green book thing a variation on Hilbert's Hotel? And isn't the point of that to show that something like infinity times 2 is allowed?