Testing hypotheses suggested by the data
In statistics, hypotheses suggested by the data must be tested differently from hypotheses formed independently of the data.
How to do it wrong
For example, suppose fifty different researchers, unaware of each other's work, run clinical trials to test whether Vitamin X is efficacious in preventing cancer. Forty-nine of them find no significant difference between measurements made on patients who have taken Vitamin X and those who have taken a placebo. The fiftieth study finds a difference so extreme that, if Vitamin X has no effect, such an extreme difference would be observed in only one study out of fifty. When all fifty studies are pooled, one would conclude that no effect of Vitamin X was found. But it would be reasonable for the investigators running the fiftieth study to consider it likely that they have found an effect, at least until they learn of the other forty-nine studies. Now suppose that the one anomalous study was carried out in Denmark. The data suggest the hypothesis that Vitamin X is more efficacious in Denmark than elsewhere. But Denmark may simply have been the one country in fifty in which an extreme value of the test statistic happened to occur; such extreme values are expected about once in every fifty studies on average when no effect is present. It would therefore be fallacious to cite the data as serious evidence for this particular hypothesis, which was suggested by the data themselves.
However, if another study is then done in Denmark and again finds a difference between the vitamin and the placebo, then the first study strengthens the case provided by the second study. Or, if a second series of studies is done on fifty countries, and Denmark stands out in the second study as well, the two series together constitute important evidence even though neither by itself is at all impressive.
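The arithmetic behind this example can be made concrete with a short simulation. The sketch below is illustrative only; the per-arm sample size, the normally distributed outcomes and the random seed are assumptions, not part of the example. It generates fifty studies in which the vitamin truly has no effect and counts how often at least one of them looks as striking as the Danish study, i.e. has a two-sided p-value below 1/50 = 0.02.

```python
# Simulate repeated programmes of 50 studies with NO true effect and count how
# often at least one study reaches a two-sided p-value below 1/50 = 0.02.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_programmes = 1000      # repeat the whole 50-study programme many times
n_studies = 50
n_per_arm = 100          # assumed number of patients per arm in each study

programmes_with_extreme_study = 0
for _ in range(n_programmes):
    found_extreme = False
    for _ in range(n_studies):
        vitamin = rng.normal(size=n_per_arm)   # no true effect: both arms are
        placebo = rng.normal(size=n_per_arm)   # drawn from the same distribution
        _, p = stats.ttest_ind(vitamin, placebo)
        if p < 0.02:
            found_extreme = True
    programmes_with_extreme_study += found_extreme

print(programmes_with_extreme_study / n_programmes)
# Close to 1 - 0.98**50 ≈ 0.64: in roughly two-thirds of such programmes some
# country plays the role of "Denmark" purely by chance.
```

By contrast, a genuinely independent confirmation study in Denmark is a single test at the 0.02 level, which is why it carries real evidential weight.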
The general problem
A large set of tests as described above greatly inflates the probability of type I error, because all but the data most favourable to the hypothesis are discarded. This is a risk not only in hypothesis testing but in all statistical inference, as it is often problematic to describe accurately the process that has been followed in searching through and discarding data. It is a particular problem in statistical modelling, where many different models are rejected by trial and error before a result is published (see also overfitting). Likelihood and Bayesian approaches are no less at risk, owing to the difficulty of specifying the likelihood function without an exact description of the search-and-discard process.
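The same inflation arises when many models are tried and only the best-looking one is reported. The sketch below is an illustration, not part of the article; the number of candidates, the sample size and the use of correlation tests are assumptions. It screens many pure-noise predictors, keeps whichever fits the outcome best, and then checks the chosen predictor on a fresh sample that played no part in the search.

```python
# Screen many candidate predictors that are pure noise, keep whichever "model"
# looks best in-sample, then test the selection on a confirmation sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_candidates = 50, 100

y = rng.normal(size=n)                   # outcome: pure noise
X = rng.normal(size=(n, n_candidates))   # candidate predictors: pure noise

# "Trial and error": test every candidate and keep the best-looking one.
pvals = [stats.pearsonr(X[:, j], y)[1] for j in range(n_candidates)]
best = int(np.argmin(pvals))
print("selected predictor:", best, "in-sample p =", round(pvals[best], 4))

# A confirmation sample of the selected predictor and the outcome
# (here simply fresh noise, since nothing is real).
x_new = rng.normal(size=n)
y_new = rng.normal(size=n)
print("confirmation-sample p =", round(stats.pearsonr(x_new, y_new)[1], 4))
```

The selected in-sample p-value is typically small purely because it is the minimum of many, while the confirmation sample gives the unremarkable result one would expect from noise.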
The error is particularly prevalent in data mining and machine learning. It also commonly occurs in academic publishing, where reports of positive results tend to be accepted while reports of negative results are not, resulting in the effect known as publication bias.
How to do it right
Strategies to avoid the problem include:
- Collecting confirmation samples
- Cross-validation
- Methods of compensation for multiple comparisons (see the sketch after this list)
- Simulation studies including adequate representation of the multiple-testing actually involved
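As an illustration of compensation for multiple comparisons (a sketch with illustrative numbers, not part of the article), a Bonferroni correction divides the desired overall significance level by the number of tests, so the "one in fifty" result from the Denmark example no longer counts as significant on its own:

```python
# A minimal sketch of Bonferroni compensation for multiple comparisons.
# With 50 studies tested at an overall level of 0.05, each individual study
# must reach 0.05 / 50 = 0.001 to be declared significant.
alpha, n_tests = 0.05, 50
per_test_level = alpha / n_tests

denmark_p = 0.02                     # "one study out of fifty" in the example above
print(denmark_p < per_test_level)    # False: not significant after correction
```

The correction is conservative, but it controls the probability of any false positive across the whole family of fifty studies.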
Henry Scheffé's simultaneous test of all contrasts in multiple comparison problems is the best-known remedy in the case of analysis of variance. It is a method designed for testing hypotheses suggested by the data while avoiding the fallacy described above; see Scheffé, H., "A Method for Judging All Contrasts in the Analysis of Variance", Biometrika, 40, pp. 87–104 (1953).
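As a concrete illustration of the Scheffé procedure (a sketch with made-up data; the group measurements and the chosen contrast are assumptions, not taken from Scheffé's paper), a contrast of the group means is compared against a critical value built from the F distribution, and the same critical value covers every contrast simultaneously, including those suggested by inspecting the data:

```python
# A minimal sketch of Scheffé's simultaneous test for a contrast in one-way ANOVA.
import numpy as np
from scipy import stats

# Made-up data: k groups of observations.
groups = [
    np.array([5.1, 4.9, 5.6, 5.0, 5.3]),
    np.array([5.4, 5.8, 5.5, 6.0, 5.7]),
    np.array([4.8, 5.2, 4.7, 5.0, 4.9]),
]
k = len(groups)
n = np.array([len(g) for g in groups])
N = n.sum()
means = np.array([g.mean() for g in groups])

# Pooled within-group variance (the ANOVA mean square for error), N - k df.
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - k)

# A contrast suggested by the data: group 2 versus the average of groups 1 and 3.
c = np.array([-0.5, 1.0, -0.5])            # coefficients sum to zero
estimate = (c * means).sum()
se = np.sqrt(mse * (c ** 2 / n).sum())

# Scheffé critical value: covers ALL contrasts simultaneously at level alpha.
alpha = 0.05
crit = np.sqrt((k - 1) * stats.f.ppf(1 - alpha, k - 1, N - k))

print("contrast estimate:", round(estimate, 3))
print("significant under Scheffé:", abs(estimate) > crit * se)
```

Because the critical value sqrt((k − 1) F) protects all contrasts at once, a contrast chosen after looking at the group means can still be tested at the stated significance level.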