Talk:Student's t-test


This article is within the scope of WikiProject Statistics, which collaborates to improve Wikipedia's coverage of statistics. If you would like to participate, please visit the project page.

WikiProject Mathematics
This article is within the scope of WikiProject Mathematics, which collaborates on articles related to mathematics.
Mathematics rating: B Class High Priority  Field: Probability and statistics
One of the 500 most frequently viewed mathematics articles.


[edit] Calculations

I don't suppose anyone wants to add HOW TO DO a t-test??

That seems to be a deficiency of a fairly large number of statistics pages. The trouble seems to be that they're getting written by people who've gotten good grades in statistics courses in which the topics are covered, but whose ability does not exceed what that would imply. Maybe I'll be back.... Michael Hardy 22:04, 7 June 2006 (UTC)
If I have time to learn TeX, maybe I'll do it. I know the calculations, it's just a matter of getting Wikipedia to display it properly. Chris53516 16:17, 19 September 2006 (UTC)
Those who don't know TeX can present useful changes here on the talk page in ASCII (plain text), and others can translate them into TeX. I can do basic TeX; you can contact me on my talk page to ask for help. (i.e. I can generally translate equations into TeX; I may not be able to help with more advanced TeX questions.) --Coppertwig 11:57, 8 February 2007 (UTC)
I uploaded some crappy images of the calculations. I don't have time to mess with TeX, so someone that's a little more TeX-savvy (*snicker*) can do it. Chris53516 16:42, 19 September 2006 (UTC)
User:Michael Hardy converted two of my crappy graphics to TeX, and I used his conversion to do the last. So there you have it, calculations for the t-test. Chris53516 18:21, 19 September 2006 (UTC)
Great. Now, could someone spell out the formula? I assume that N is the sample size and s the standard deviation, but what is df1/dft? ... OK, I found the meaning of df. I still find the notation a bit confusing; it looks a lot like the derivative of a function. Is dft the degrees of freedom of the two groups combined?
What do you mean by "could someone spell out the formula"? N is the sample size of group 1 or group 2, depending on the subscript; s is the standard deviation; and df is the degrees of freedom. There is a degrees-of-freedom value for each group and one for the total. The degrees of freedom for each group is its sample size minus one, and the total degrees of freedom is the sum of the two groups' values, or equivalently the total sample size minus 2. I will change the formula to reflect this and remove the degrees of freedom. Chris53516 13:56, 11 October 2006 (UTC)
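To make that concrete, here is a minimal sketch in Python (the numbers are made up, not from the article): the per-group degrees of freedom are N − 1, the total is N1 + N2 − 2, and the pooled standard error combines the two sample variances as in the formula above.

import math

group1 = [5.1, 4.8, 6.0, 5.5, 5.9]
group2 = [4.2, 4.9, 4.4, 5.0, 4.6, 4.1]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):  # unbiased sample variance (divides by N - 1)
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n1, n2 = len(group1), len(group2)
df = n1 + n2 - 2  # (n1 - 1) + (n2 - 1)
pooled_var = ((n1 - 1) * var(group1) + (n2 - 1) * var(group2)) / df
se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
t = (mean(group1) - mean(group2)) / se
print(t, df)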
Thanks for the help with doing the calculation, I'm feeling comfortable finding a confidence bound on the Mean - but is there any way to also find a confidence bound on the variation? My real goal is to make a confidence statement like "using a student t-test, these measurements offer a 90% confidence that 99% of the POPULATION would be measured below 5000". —Preceding unsigned comment added by 64.122.234.42 (talk) 14:03, 23 October 2007 (UTC)
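If it helps the questioner above: a statement like "90% confidence that 99% of the population is below 5000" is a one-sided normal tolerance limit rather than a confidence interval for the mean, and the exact factor comes from the noncentral t distribution. A rough sketch, with made-up measurements, assuming the data are roughly normal and that SciPy is available:

import numpy as np
from scipy import stats

x = np.array([4100.0, 4350.0, 3980.0, 4500.0, 4230.0, 4050.0, 4400.0, 4150.0])
n = len(x)
coverage, confidence = 0.99, 0.90  # "99% of the population", "90% confidence"

z_p = stats.norm.ppf(coverage)
k = stats.nct.ppf(confidence, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)
upper_limit = x.mean() + k * x.std(ddof=1)
print(upper_limit)  # compare this limit against 5000 to make the statement above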

[edit] independent samples

Should 'assumptions' include the idea that we assume all samples are independent? This seems like a major omission.

[edit] history unclear

"but was forced to use a pen name by his employer who regarded the fact that they were using statistics as a trade secret. In fact, Gosset's identity was unknown not only to fellow statisticians but to his employer - the company insisted on the pseudonym so that it could turn a blind eye to the breach of its rules." What breach? Why didn't the company know? If it didn't know, how is it insisting on a pseudonym?

[edit] Welch (or Satterthwaite) approximation?

"As the variance of each group is different, the Welch (or Satterthwaite) approximation to the degrees of freedom is used in the test"...

Huh?

--Dan|(talk) 15:00, 19 September 2006 (UTC)
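For anyone else puzzled by that sentence: when the two group variances are not assumed equal, the test statistic is compared to a t distribution whose degrees of freedom are estimated from the data. The usual Welch-Satterthwaite form is

\nu \approx \frac{\left(\frac{s_1^2}{N_1} + \frac{s_2^2}{N_2}\right)^2}{\frac{(s_1^2/N_1)^2}{N_1 - 1} + \frac{(s_2^2/N_2)^2}{N_2 - 1}}

which generally gives a non-integer value no larger than N_1 + N_2 - 2.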

[edit] Table?

This article doesn't mention the t-table which appears to be necessary to make sense of the t value. Also, what's the formula used to compute such tables? —Ben FrantzDale 15:07, 12 October 2006 (UTC)

I'm not sure which table you are referring to or what you mean by "make sense of the t value". Perhaps you mean the table for determining whether t is statistically significant or not. That would be a statistical significance matter, not a matter of just the t-test. Besides, that table is pretty big, and for the basic meaning and calculation of t, it isn't necessary. Chris53516 15:24, 12 October 2006 (UTC)
I forgot: computing those table values involves calculus (integrating the t density), which would be rather cumbersome here. It would belong in the statistical significance article anyway, and I don't know the calculus behind p. Chris53516 15:26, 12 October 2006 (UTC)
Duh, Student's t-distribution has the answer to my question. —Ben FrantzDale 14:55, 13 October 2006 (UTC)
Glad to be of not-so-much help. :) Chris53516 15:11, 13 October 2006 (UTC)
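For future readers: the "table values" are just quantiles of the Student's t distribution, and the p-value is the tail area beyond the observed t for the relevant degrees of freedom, so any statistics library can stand in for the printed table. A small illustration (the numbers are arbitrary; this assumes SciPy):

from scipy import stats

t, df = 2.31, 14
p_two_sided = 2 * stats.t.sf(abs(t), df)  # area in both tails beyond |t|
critical = stats.t.ppf(0.975, df)         # the "table value" for a two-sided test at alpha = 0.05
print(p_two_sided, critical)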

[edit] Are the calculations right?

The article says:

t = \frac{\overline{X}_1 - \overline{X}_2}{s_{\overline{X}_1 - \overline{X}_2}}
\quad \text{where} \quad s_{\overline{X}_1 - \overline{X}_2} = \sqrt{\frac{(N_1 - 1)s_1^2 + (N_2 - 1)s_2^2}{N_1 + N_2 - 2}\left(\frac{1}{N_1} + \frac{1}{N_2}\right)}

But if you ignore the -1 and -2, say for the biased estimator or if there are lots of samples, then s simplifies to

s = \sqrt{ s_1^2 / N_2 + s_2^2 / N_1 }


This seems backwards. The external links all divide the standard deviation by its corresponding sample size, which is what I was expecting. So I'd guess there's a typo and the article should have:

t = \frac{\overline{X}_1 - \overline{X}_2}{s_{\overline{X}_1 - \overline{X}_2}}
\quad \text{where} \quad s_{\overline{X}_1 - \overline{X}_2} = \sqrt{\frac{(N_2 - 1)s_1^2 + (N_1 - 1)s_2^2}{N_1 + N_2 - 2}\left(\frac{1}{N_1} + \frac{1}{N_2}\right)}

Can anyone confirm this?

Bleachpuppy 22:14, 17 November 2006 (UTC)

I think it's right as it stands, but I don't have time to check very carefully. When you multiply s_1^2 by N_1 − 1, you just get the sum of squares of deviations from the sample mean in the first sample. Similarly with "2" instead of "1". So the sum in the numerator is the sum of squares due to error for the two samples combined. Then you divide that sum of squares by its number of degrees of freedom, which is N_1 + N_2 − 2. All pretty standard stuff. Michael Hardy 23:23, 17 November 2006 (UTC)
... and I think that just about does it; i.e. I've checked carefully. Michael Hardy 23:29, 17 November 2006 (UTC)
Please provide a citation or derivation. I think Bleachpuppy is right that the subscripts have been switched. Suppose N_1 = 30 and N_2 = 10^9, a very large number, and s_1 and s_2 are of moderate and comparable size (i.e. N_2 is very large in comparison to any of the other numbers involved). In this case \overline{X}_2 is in effect known almost perfectly, so the formula should reduce to a close approximation of the one-sample t statistic, where sample 1 is compared to a fixed null-hypothesis mean μ that is closely estimated by \overline{X}_2. In other words, it should be approximately equal to:
t = \frac{\overline{X}_1 - \mu}{\sigma_1/\sqrt{30}}
But apparently the formula as written does not reduce to this; instead it reduces to approximately:
t = \frac{\overline{X}_1 - \mu}{\sigma_2/\sqrt{30}}
This claims that the test depends critically on \sigma_2. But since N_2 is very large in this example, \sigma_2 should be pretty much irrelevant; we know \overline{X}_2 with great precision regardless of the value of \sigma_2, as long as \sigma_2 is not also very large. And the test should depend on the value of \sigma_1, but does not. --Coppertwig 12:45, 19 November 2006 (UTC)
All I have with me right now is an intro to stat textbook: Jaccard & Becker, 1997. Statistics for the behavioral sciences. On page 265, it verifies the original formula. I have many more advanced books in my office, but I won't be there until tomorrow. -Nicktalk 21:02, 19 November 2006 (UTC)
P.S. none of the external links really have any useful information on them (they especially lack formulas). Everything that I've come across on the web uses the formula as currently listed in the article. -Nicktalk 21:29, 19 November 2006 (UTC)
The original formula is also confirmed by Hays (1994) Statistics p. 326. -Nicktalk 19:36, 20 November 2006 (UTC)
OK! I see what's wrong!! The formula is a correct formula. However, the article does not state to what problem that formula is a solution! I assumed that the variances of the two populations could differ from each other. Apparently that formula is correct if you're looking at a problem where you know the variance of the two distributions is the same, even though you don't know what the value of the variance is. I'll put that into the article. --Coppertwig 03:33, 21 November 2006 (UTC)
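To double-check the equal-variance reading numerically, here is a quick sketch (my own made-up data, assuming SciPy is available) that computes t by hand from the printed formula and compares it with SciPy's pooled-variance two-sample test; the two values should agree.

import numpy as np
from scipy import stats

x1 = np.array([2.1, 2.5, 1.9, 2.8, 2.4, 2.2])
x2 = np.array([1.8, 1.6, 2.0, 1.7, 1.9])

n1, n2 = len(x1), len(x2)
s1, s2 = x1.var(ddof=1), x2.var(ddof=1)   # unbiased sample variances
pooled = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
t_hand = (x1.mean() - x2.mean()) / np.sqrt(pooled * (1 / n1 + 1 / n2))

t_scipy, p_scipy = stats.ttest_ind(x1, x2, equal_var=True)
print(t_hand, t_scipy, p_scipy)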

I know these calculations are correct; I simply didn't have my textbook with me for a citation. Keep in mind that much of the time we strive to have an equal sample size between the groups, which makes the calculation of t much easier. I will clarify this in the text. – Chris53516 (Talk) 14:28, 21 November 2006 (UTC)
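For the record, with equal group sizes N_1 = N_2 = n the pooled expression above simplifies to

s_{\overline{X}_1 - \overline{X}_2} = \sqrt{\frac{s_1^2 + s_2^2}{n}}

with 2n − 2 degrees of freedom, which is presumably the easier calculation being referred to.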

I'm not certain, but it looks like the calculations don't match the graphic formula; n=6 in the problem, but n=8 in the graphic formula. 24.82.209.151 07:54, 23 January 2007 (UTC)


These are wrong; they do not match each other. In the first you need to divide by 2, and in the second you need to drop the multiplication by (1/n_1 + 1/n_2). That makes them match. -DC

[edit] Extra 2?

Where the text reads, "Where s2 is the grand standard deviation..." I can't tell what that two is referring to. It doesn't appear in the formula above or as a reference. 198.60.114.249 23:29, 14 December 2006 (UTC)

The equation you're looking for can be found at standard deviation. It was not included in this page because it would be redundant. However, I will add a link to it in the text you read. — Chris53516 (Talk) 02:38, 15 December 2006 (UTC)
Thanks Chris! 198.60.114.249 07:23, 15 December 2006 (UTC)

[edit] I wanna buy a vowel ...

I may be off my medication or something, but does this make sense to anyone? :

  "In fact, Gosset's identity was unknown not only to fellow statisticians but 
   to his employer—the company insisted on the pseudonym so that it could turn 
   a blind eye to the breach of its rules."

So Gosset works for Guinness. Gosset uses a pen-name cuz Guinness told him to. But, um ... Guinness doesn't know who he is and doesn't want to know. So they can turn a blind eye.

So they told this person - they know not whom - to use the pen-name.

I know this was a beer factory and all but ... somebody help me out here.

CeilingCrash 05:28, 24 January 2007 (UTC)

I don't know the history, but maybe they promulgated a general regulation: If you publish anything on your research, use a pseudonym and don't tell us about it. Michael Hardy 20:13, 3 May 2007 (UTC)
Maybe it should read "a pseudonym" instead of "the pseudonym". I'm not so sure management did not know his identity, however. My recollection of the history is that management gave him permission to publish this important paper, but only under a pseudonym; Guinness did not allow publications for reasons of secrecy. Can someone research this and clear it up?--141.149.181.4 14:45, 5 May 2007 (UTC)

Unfortunately I have no sources at hand, but the story as I heard it is that Guinness had (has?) regulations about confidentiality for all processes used in the factory. Since Gosset used his formulas for grain selection, they fell under the regulations, so he couldn't publish. He then published under the pseudonym, probably with the unofficial knowledge and consent of the company, which officially could not acknowledge the work as his because of the regulations.

Can we just delete that last sentence and keep only that he wrote under a pen name because it was against company rules to publish a paper? —Preceding unsigned comment added by 65.10.25.21 (talk) 12:02, 4 December 2007 (UTC)

[edit] a medical editor's clarification

The correct way of expressing this test is "Student t test", with the t in italics. The word "Student" is not possessive; there is no "apostrophe s" on it. The lowercase "t" is always italicized, and there is no hyphen between the "t" and "test". It's simply "Student t test".

I'm a medical editor, and this is according to the American Medical Association Manual of Style, 9th edition. Sorry I don't really know how to change it - I'm more a word person than a technology person. But I just wanted to correct this. Thank you! -- Carlct1 16:40, 7 February 2007 (UTC)

Your italic markup didn't come out right. To italicize text, enclose it in two apostrophes, like this: ''italic''; for bold, use three: '''bold'''. Please edit your comment above so it displays properly. — Chris53516 (Talk) 17:00, 7 February 2007 (UTC)
I'm not sure you are correct about the possessive use. As the article notes, "Student" was Gosset's pen name, which would call for a possessive s after the name; otherwise, what does the s mean? The italic on t is left off the article name because it can't be used in the heading; there are other limitations like this all over Wikipedia, and it's a technical restriction. By the way, I see both "t-test" and "t test" used on the web, and I'm not sure that either is correct. — Chris53516 (Talk) 17:05, 7 February 2007 (UTC)
I have no solid source on this, but I have definitely seen it both ways. It is Student's test, in that Student invented it. However with age, it has also been referred to as the Student t-test. And I, too, have seen both "t-test" and "t test" alas. 128.200.46.67 (talk) 19:43, 18 April 2008 (UTC)

[edit] Recent edit causing page not to display properly -- needs to be fixed

Re this edit: 10:47, 8 February 2007 by 58.69.201.190. I see useful changes here, but it's not displaying properly, and I also suggest continuing to provide the equation for the unbiased estimate in addition to the link to its definition. I.e. I suggest combining parts of the previous version with this edit. I don't have time to fix it at the moment. --Coppertwig 11:53, 8 February 2007 (UTC)

Looking at it again, I'm not sure any useful material was added by that edit, (I had been confused looking at the diff display), so I've simply reverted it. --Coppertwig 13:02, 8 February 2007 (UTC)

[edit] Equal sample sizes misses a factor sqrt(n)

The formula with equal sample size should be a special case of the formula with unequal sample size. However, looking at the formula for the t-test with unequal sample size:

t = \frac{\overline{X}_1 - \overline{X}_2}{s_{\overline{X}_1 - \overline{X}_2}}
\quad \text{where} \quad s_{\overline{X}_1 - \overline{X}_2} = \sqrt{\frac{(N_1 - 1)s_1^2 + (N_2 - 1)s_2^2}{N_1 + N_2 - 2}\left(\frac{1}{N_1} + \frac{1}{N_2}\right)}

and setting n=N_1=N_2 yields

s_{\overline{X}_1 - \overline{X}_2} = \sqrt{s_{\overline{X}_1}^2 + s_{\overline{X}_2}^2} / \sqrt{n}.

The factor of sqrt(n) should be correct in the limit of large n. However, there might be a problem, since one sets N_1 = N_2, which reduces the degrees of freedom by one. Does anyone know the correct answer?

Oliver.duerr 09:13, 20 February 2007 (UTC)

I don't have the answer, but I agree that the two formulas don't match. 128.186.38.50 15:37, 10 May 2007 (UTC)
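For what it's worth, substituting N_1 = N_2 = n into the pooled formula gives

s_{\overline{X}_1 - \overline{X}_2} = \sqrt{\frac{s_1^2 + s_2^2}{n}} = \sqrt{s_{\overline{X}_1}^2 + s_{\overline{X}_2}^2} \quad \text{with} \quad s_{\overline{X}_i} = \frac{s_i}{\sqrt{n}},

so the apparent extra factor of \sqrt{n} disappears once s_{\overline{X}_i} is read as the standard error of the mean (s_i/\sqrt{n}) rather than the sample standard deviation s_i. The pooled degrees of freedom are 2n − 2 either way.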

[edit] Explaining a revert

I just reverted "t tests" from singular back to plural. The reason is that it's introducing a list of different tests, so there is more than one kind of t test. I mean to put this in the edit summary but accidentally clicked the wrong button. --Coppertwig 19:55, 25 February 2007 (UTC)

[edit] Copied from another source?

Why is this line in the text? [The author erroneously calculates the sample standard deviation by dividing by N. Instead, we should divide by n − 1, so the correct value is 0.0497].

To me, this suggests that portions of the article were copied from another, uncited source. If true, this is copyright infringement and needs to be fixed right away.

I can't find any internet-based source for the text in that part of the article. I think the line might be directed toward the author of the Wikipedia article, as it seems to point out an error. I removed it, and will look into the error. -Nicktalk 00:38, 19 March 2007 (UTC)

By the way, the line should be read carefully: it is correct. As this is an estimate based on an estimate (the sample mean), the sum of squares should have been divided by n − 1, so the correct value is 0.0497. Can someone please change this?
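For clarity, the unbiased estimate being referred to is

s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \overline{x})^2

i.e. the sum of squared deviations from the sample mean divided by n − 1 rather than by n.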

[edit] Testing Normality

The call: I think it would be appropriate to change the wording "normal distribution of data, tested by ..." as these tests for normality are only good for establishing that the data is not drawn from a normal distribution.

The background: Tests for normality (e.g. the Shapiro-Wilk test) test the null hypothesis that the data are normally distributed against the alternative hypothesis that they are not. Failing to reject the null does not establish normality; how much support non-rejection provides can only be evaluated by looking at the power of the test.

The evidence: Spiegelhalter (Biometrika, 1980, Table 2) shows that the power of the Shapiro-Wilk test can be very low. There are non-normal distributions for which, with 50 observations, the test correctly rejects the null hypothesis of normality only 8% (!) of the time.

At least two possible solutions: (1) Drop the statement that the assumption of normality can be tested. (2) Indicate that one can test whether the data are non-normal, pointing out that failure to reject normality does not mean the data are normally distributed, because these tests have low power.

Schlag 11:55, 27 June 2007 (UTC)

If you perform any of these tests before doing a t-test, then the t-test's p-value under the null hypothesis will no longer be uniformly distributed. This entire section is bad statistical advice (although it is commonly done in practice). Hadleywickham 07:27, 9 July 2007 (UTC)
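A rough way to examine this empirically (my own simulation sketch, not from any source; the sample sizes, the exponential distribution and the 0.05 screening threshold are arbitrary choices, and it assumes NumPy/SciPy):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
kept_p = []
for _ in range(10000):
    # The null hypothesis is true: both groups come from the same (skewed) distribution.
    a = rng.exponential(scale=1.0, size=15)
    b = rng.exponential(scale=1.0, size=15)
    # Pre-test each group for normality; only keep replicates that "pass".
    if stats.shapiro(a)[1] > 0.05 and stats.shapiro(b)[1] > 0.05:
        kept_p.append(stats.ttest_ind(a, b)[1])

kept_p = np.array(kept_p)
# For a valid procedure this should be close to 0.05; any systematic drift
# illustrates the distortion introduced by conditioning on the pre-test.
print("empirical rejection rate at alpha = 0.05:", (kept_p < 0.05).mean())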

[edit] Plain Speak?

This article is written to people who already know something about statistics. It would be nice if the intro (at least) explained t-tests to the statistically uninitiated. The current intro says: "A t test is any statistical hypothesis test in which the test statistic has a Student's t distribution if the null hypothesis is true." A lot of people will be thinking, "You what?" Laetoli —Preceding comment was added at 13:44, 3 November 2007 (UTC)


This article may be too technical for a general audience.
Please help improve this article by providing more context and better explanations of technical details to make it more accessible, without removing technical details.
The "history" section is perfectly straightforward, and the "assumptions" and "uses" sections are comprehensible if one reads them closely although they might bear a little expansion, but the "calculations" section should be explained better. What, for example, is meant by a "grand" standard deviation? The explanation "pooled sample standard deviation" might mean something to somebody who remembers what they learned in Statistics, but not all of us remember what we studied in college (:-) I would like to see an article which teaches the average reader:
  1. when to use the tests (the article already explains this, but some textual explanation could supplement some of the wikilinks); and
  2. how to do the tests: although mathematical formulæ are concise, precise and unambiguous, as another Wikipedia article points out, "Not everyone thinks in mathematical symbols," so either a text description or, if a text description would be burdensome, examples would be useful. 69.140.159.215 (talk) 03:57, 10 January 2008 (UTC)

[edit] Dependent t-test

The Dependent t-test section makes very little sense. Where, for example, are the pairs in the first table - or did someone maliciously truncate the table and rename Jon to Jimmy and Jane to Jesse? Why not walk the reader through a paired t-test using the data in the second table? Also, the example in the following section is not very helpful, since a 95% confidence interval is never even calculated. The example isn't related to any of the "uses of t-tests" previously - in part because the construction of confidence interval isn't really a "test" sensu stricto. Also, the claim that: "With the mean and the first five weights it is possible to calculate the sixth weight. Consequently there are five degrees of freedom," is a classic example of the opaque fog statisticians lead their flailing students into when trying to explain the degrees of freedom concept. Surely somebody in the community can clarify this page, as it is certainly a widely visited one. - Eliezg (talk) 21:41, 20 November 2007 (UTC)
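In case it helps whoever reworks that section: a dependent (paired) t-test works on the within-pair differences, so t is the mean difference divided by its standard error, with n − 1 degrees of freedom. A minimal sketch with hypothetical before/after weights (not the article's data; assumes NumPy/SciPy):

import numpy as np
from scipy import stats

before = np.array([72.0, 68.5, 80.2, 77.1, 65.3, 70.8])
after = np.array([70.1, 67.9, 78.0, 76.5, 66.0, 69.2])

d = after - before                           # paired differences
n = len(d)
t = d.mean() / (d.std(ddof=1) / np.sqrt(n))  # mean difference over its standard error
df = n - 1                                   # the mean difference is estimated from the n differences
p = 2 * stats.t.sf(abs(t), df)               # two-sided p-value
print(t, df, p)
print(stats.ttest_rel(after, before))        # should agree with the hand calculation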

[edit] Example

I don't get the example at all. It doesn't mention any of the formulas from the article, and computes a quantity (confidence interval of a mean) that's not discussed anywhere else in the article, with no explanation as to how that quantity was derived. Can someone add a real example, and explain it in terms of the rest of the article? (null hypothesis, etc.) --P3d0 17:16, 4 December 2007 (UTC)
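For reference, the quantity the example apparently computes is the standard t-based confidence interval for a mean,

\overline{x} \pm t_{1-\alpha/2,\ n-1} \frac{s}{\sqrt{n}},

i.e. the sample mean plus or minus the relevant t quantile times the estimated standard error; spelling that out (and tying it back to the one-sample test) would probably address this complaint.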

[edit] Without what?

I commented the following sentence out:

"Modern statistical packages make the test equally easy to do with or without it [to what does "it" refer here?]."

as I also don't know what "it" is. --Slashme (talk) 08:39, 21 February 2008 (UTC)

[edit] Inconsistency in use of N and n

The section "Unequal sample sizes, unequal variance" seems to use both 'N' and 'n' to mean the sample size. Correct?
