Sequential analysis
From Wikipedia, the free encyclopedia
In statistics, sequential analysis refers to statistical analysis in which the sample size is not fixed in advance. Instead, data are evaluated as they are collected, and further sampling is stopped in accordance with a pre-defined stopping rule as soon as significant results are observed. A final conclusion may therefore sometimes be reached at a much earlier stage than would be possible with classical fixed-sample hypothesis testing or estimation, at a consequently lower financial and/or human cost.
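The best-known instance of such a stopping rule is Wald's sequential probability ratio test (SPRT), introduced in the 1945 paper cited below. As a minimal sketch (the parameter values and the Bernoulli setting here are illustrative assumptions, not from the article), the test accumulates a log-likelihood ratio after each observation and stops as soon as it crosses an upper or lower threshold derived from the desired error rates:

```python
import math

def sprt_bernoulli(samples, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a Bernoulli rate.

    Tests H0: p = p0 against H1: p = p1 with approximate type I error
    alpha and type II error beta. Stops sampling as soon as the
    cumulative log-likelihood ratio crosses either threshold.
    """
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio contribution of one 0/1 observation
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue sampling", len(samples)

# A stream of all successes triggers acceptance of H1 after only a
# handful of observations, instead of a large fixed sample.
decision, n_used = sprt_bernoulli([1] * 100)
print(decision, n_used)
```

The savings come from the early exit: extreme data streams resolve the test after a few observations, while ambiguous streams keep sampling, which is exactly the behavior the fixed-sample approach cannot offer.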
History
Sequential analysis was first developed by Abraham Wald with Jacob Wolfowitz as a tool for more efficient industrial quality control during World War II.
Essentially the same approach was independently developed at the same time by Alan Turing, as part of the Banburismus technique used at Bletchley Park to test hypotheses about whether different messages enciphered by German Enigma machines should be connected and analysed together. This work remained secret until the early 1980s.
References
- Abraham Wald, "Sequential Tests of Statistical Hypotheses", Annals of Mathematical Statistics, 16 (1945), 117–186
- Abraham Wald, Sequential Analysis (1947)
External links
- Lecture note slides from a course given by Rebecca Betensky at Harvard University
- Software for conducting sequential analysis and applications of sequential analysis in the study of group interaction in computer-mediated communication by Dr. Allan Jeong at Florida State University