Talk:Null hypothesis
From Wikipedia, the free encyclopedia
[edit] References
Seeing as how this is based on a scientific, or at least scholarly topic, I believe this needs references. I'm (I hope appropriately) adding the "Not Verified" tag, so hopefully someone will come through and add references so the poor, confused beginning stats students (myself included) can know this is more than the ramblings of a demented mind. Having had experience using the subject matter, even reading this straight from a textbook can make one crazy. Garnet avi 10:41, 5 September 2006 (UTC)
[edit] Unsorted Comments
Sorry, this stuff was above the contents, but was not grouped together or titled. I thought it would be more appropriate under the contents, so I moved it there. Garnet avi 10:41, 5 September 2006 (UTC)
Is this sentence, from the article, correct?
- But if the null hypothesis is that sample A is drawn from a population whose mean is no lower than the mean of the population from which sample B is drawn, the alternative hypothesis is that sample A comes from a population with a larger mean than the population from which sample B is drawn, and we will proceed to a one-tailed test.
It seems as if the null hypothesis says that mean(A) >= mean(B). Therefore the alternative hypothesis should be the negation of this, or mean(A) < mean(B). But the text states that the alternative hypothesis is that mean(A) > mean(B). Is this right?
- I agree. Fixed. --Bernard Helmstetter 19:54, 8 Jan 2005 (UTC)
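- The fixed one-tailed setup above (H0: mean(A) ≥ mean(B) versus the negation H1: mean(A) < mean(B)) can be sketched numerically. This is a hypothetical illustration using a large-sample z approximation with made-up data; the helper function and the simulated samples are not from the article:

```python
import math
import random

def one_tailed_z_test(a, b):
    """One-tailed test of H0: mean(A) >= mean(B) vs H1: mean(A) < mean(B).

    Large-sample z approximation; an illustrative sketch, not the
    article's own procedure.
    """
    n_a, n_b = len(a), len(b)
    mean_a = sum(a) / n_a
    mean_b = sum(b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n_b - 1)
    se = math.sqrt(var_a / n_a + var_b / n_b)
    z = (mean_a - mean_b) / se
    # Only the LOWER tail counts as evidence against H0, because H1 is
    # one-sided: mean(A) < mean(B).
    p = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # P(Z <= z)
    return z, p

random.seed(0)
# Hypothetical data where A's population mean (0.0) really is below B's (0.5).
a = [random.gauss(0.0, 1.0) for _ in range(200)]
b = [random.gauss(0.5, 1.0) for _ in range(200)]
z, p = one_tailed_z_test(a, b)
print(p < 0.05)  # True: H0 is rejected in favor of the one-sided alternative
```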
The difference between H0:μ1 = μ2 and H0:μ1 - μ2 = 0 is unclear, to say the least. --Bernard Helmstetter 20:02, 8 Jan 2005 (UTC)
This entry is confusing, to say the least. The introduction is somehow split in two sections by the TOC, and paradoxically is too short. The closing sections on controversies and publication bias could be merged as well. I am not attempting a rewrite for I know little about statistics myself - but even so it is evident that the article could be clearer.--Duplode 01:20, 4 April 2006 (UTC)
[edit] ??
I'm a sophomore in high school, here's my request:
Could someone create a "Null hypothesis for dummies" section? As it is now, this article is very hard to comprehend. -- Somebody
"Null hypothesis for dummies" would be useful. In the examples there are null hypotheses stating that "the value of this real number is the same as the value of that real number". Is there some explanation for why such a hypothesis is reasonable? It seems to me that for a very broad class of probability distributions the null hypothesis has probability of 0 and the opposite probability of 1. The article at the moment says this:
However, concerns regarding the high power of statistical tests to detect differences in large samples have led to suggestions for re-defining the null hypothesis, for example as a hypothesis that an effect falls within a range considered negligible. This is an attempt to address the confusion among non-statisticians between significant and substantial, since large enough samples are likely to be able to indicate differences however minor.
So the more data we have, the more likely it is that the null hypothesis is rejected? This is exactly what should happen if the null hypothesis is always false - the only difference is in how much data we need to prove that. Is this the case in actual use? If so, how does the theory justify drawing conclusions from a false premise? Presumably the theory is "robust enough" when there isn't "too much data", but how exactly does this work? 82.103.214.43 14:58, 11 June 2006 (UTC)
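- The point in the quoted passage can be checked with a small simulation: give two populations a tiny, practically negligible difference in means, and watch the test go from "fail to reject" territory to a near-certain rejection as the sample grows. This is a hypothetical sketch with made-up numbers (a 0.05-standard-deviation true difference), using a large-sample z approximation rather than anything from the article:

```python
import math
import random

def two_sided_p(a, b):
    """Two-sided large-sample z test of H0: the two population means are equal.
    An illustrative helper, not the article's method."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

random.seed(1)
# A tiny true difference of 0.05 standard deviations -- "not substantial",
# but still detectable given enough data.
results = {}
for n in (100, 100_000):
    a = [random.gauss(0.00, 1.0) for _ in range(n)]
    b = [random.gauss(0.05, 1.0) for _ in range(n)]
    results[n] = two_sided_p(a, b)
    print(n, results[n])
```

With n = 100 the p-value is usually large (the tiny difference is lost in noise); with n = 100,000 the same tiny difference is detected with near certainty, which is exactly the "significant but not substantial" concern raised above.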
[edit] Elisabeth Anscombe
Who the hell is she and why is she quoted here? Any reference?
- Misspelling. Elizabeth Anscombe. Flapdragon 22:00, 18 May 2006 (UTC)
Are you sure the author of the quote is Elizabeth Anscombe? Francis Anscombe was a statistician who, among other things, applied statistical methods to agriculture and is a much more plausible source for that quote. As stated above, a source for the quote would be nice.--jdvelasc 21:29, 9 October 2006 (UTC)
[edit] example conclusion
"For example, if we want to compare the test scores of two random samples of men and women, a null hypothesis would be that the mean score of the male population was the same as the mean score of the female population, and therefore there is no significant statistical difference between them:"
This is wrong; the two samples can have the same mean and still be statistically totally different (e.g. differ in variance). 84.147.219.67 15:56, 26 June 2006 (UTC)
- I made some changes: I deleted "and therefore there is no significant statistical difference between them:", because it is redundant and arguably incorrect. I also added a few words to the part about assuming they're drawn from the same population, to say that this means they have the same variance and shape of distribution too. I deleted the equation with mu1 - mu0 = 0 because it was out of context IMO given the sentence that was just before it, and because it is practically the same as the previous equation mu1 = mu0. Sorry I forgot again to put an "edit summary". Coppertwig 00:14, 5 November 2006 (UTC)
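- A quick illustration of the point that equal means do not imply identical distributions, which motivated the change above. The data here are made up for the sketch (two samples with the same population mean of 50 but standard deviations of 1 and 10):

```python
import random
import statistics

random.seed(2)
# Same population mean (50.0), very different spreads: a null hypothesis
# about means alone says nothing about the variances.
a = [random.gauss(50.0, 1.0) for _ in range(500)]
b = [random.gauss(50.0, 10.0) for _ in range(500)]

print(statistics.mean(a), statistics.mean(b))    # sample means are close
print(statistics.stdev(a), statistics.stdev(b))  # spreads differ by ~10x
```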
[edit] "File drawer problem"?
What is it, and why does it make a sudden and unexplained appearance near the end of this article? If I hadn't gotten a C- in stats I'd go out and fix it myself. :) --User:Dablaze 13:29, 1 August 2006 (UTC)
- The "file drawer problem" is this: suppose a researcher carries out an experiment and does not find any statistically significant difference between two populations. (For example, tests whether a certain substance cures a certain illness and does not find any evidence that it does.) Then, the researcher may consider that this result (or "non-result") is not very interesting, and put all the notes about it into a file drawer and forget about it, instead of publishing it which is what the researcher would have done if the test had found the interesting result that the substance apparently cures the illness.
- Not publishing it is a problem for several reasons: one, other researchers may waste time carrying out the same test on a useless substance and also not publishing. Two, it is sometimes possible to find a statistically significant result by combining the results of several studies; this can't happen if a study isn't published and nobody knows about it. Three, if various researchers keep repeating the same experiment and not finding statistically significant results, and then one does the same experiment and by a random fluke (luck) does get a statistically significant result, they might publish that and it would look as if the substance cures the illness, although if you combined the results of all the studies you would see that there is no statistically significant result overall.
- It really does make sense if you can guess what "file drawer problem" means. Does it need a few words in the article to explain it? Coppertwig 00:00, 5 November 2006 (UTC)
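- The "combining the results of several studies" step mentioned above can be illustrated with Fisher's method, a standard way of pooling p-values (under H0, −2·Σ ln pᵢ follows a chi-squared distribution with 2k degrees of freedom). The five p-values below are made up for the sketch:

```python
import math

def chi2_sf_even_df(x, df):
    """Survival function P(X > x) for a chi-squared variable with EVEN df,
    which has a closed form: exp(-x/2) * sum_{i<k} (x/2)^i / i!."""
    k = df // 2
    term = 1.0
    total = 1.0
    for i in range(1, k):
        term *= (x / 2) / i
        total += term
    return math.exp(-x / 2) * total

def fisher_combine(p_values):
    """Fisher's method: pool independent p-values into one combined p-value."""
    stat = -2.0 * sum(math.log(p) for p in p_values)
    return chi2_sf_even_df(stat, 2 * len(p_values))

# Five hypothetical studies, each individually NOT significant at 0.05 ...
studies = [0.10, 0.10, 0.10, 0.10, 0.10]
combined = fisher_combine(studies)
print(combined)  # ~0.011: jointly significant at 0.05
```

This is exactly why unpublished null results matter: the combined test can only be run on the studies that made it out of the file drawer.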
[edit] Accept, reject, do not reject Null Hypothesis
After a statistical test (say, determining p-values), one can only reject or not reject the Null Hypothesis. Accepting the alternative hypothesis is wrong because there is always a probability that you are incorrectly accepting or rejecting (alpha and beta; type I and type II error). --70.111.218.254 02:03, 22 November 2006 (UTC)
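- The terminology point above can be stated as a two-outcome decision rule (the function name and p-values here are made up for illustration):

```python
def decide(p_value, alpha=0.05):
    """A test has only two outcomes: reject H0, or fail to reject it.
    "Accept H0" is deliberately not an option, since a non-rejection may
    simply reflect low power (a type II error)."""
    return "reject H0" if p_value < alpha else "fail to reject H0"

print(decide(0.003))  # reject H0
print(decide(0.40))   # fail to reject H0
```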