Sequential analysis

In statistics, sequential analysis or sequential hypothesis testing is statistical analysis in which the sample size is not fixed in advance. Instead, data are evaluated as they are collected, and further sampling is stopped in accordance with a pre-defined stopping rule as soon as significant results are observed. A conclusion may therefore be reached at a much earlier stage than would be possible with classical fixed-sample hypothesis testing or estimation, at a correspondingly lower financial and/or human cost.
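
The classic instance of such a stopping rule is Wald's sequential probability ratio test (SPRT), introduced in the 1945 paper cited as reference [1] below. The following is a minimal sketch in Python for Bernoulli observations; the function name, error rates, and parameter values are illustrative only, not part of any standard library:

    import math
    import random

    def sprt_bernoulli(observations, p0, p1, alpha=0.05, beta=0.05):
        # Sequential probability ratio test for Bernoulli data:
        # test H0: p = p0 against H1: p = p1, stopping as soon as the
        # cumulative log-likelihood ratio crosses either boundary.
        lower = math.log(beta / (1 - alpha))   # at or below this: accept H0
        upper = math.log((1 - beta) / alpha)   # at or above this: accept H1
        llr = 0.0
        n = 0
        for x in observations:
            n += 1
            # Log-likelihood ratio contribution of one observation x in {0, 1}.
            llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
            if llr >= upper:
                return "accept H1", n
            if llr <= lower:
                return "accept H0", n
        return "undecided", n  # data exhausted before crossing a boundary

    # Data generated under H1 (p = 0.7) typically stop the test after a few
    # dozen observations, far fewer than a fixed-sample design might commit to.
    random.seed(0)
    data = ((1 if random.random() < 0.7 else 0) for _ in range(1000))
    print(sprt_bernoulli(data, p0=0.5, p1=0.7))

Wald showed that boundaries of this form approximately control the type I error at alpha and the type II error at beta regardless of when sampling happens to stop.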

History

Sequential analysis was first developed by Abraham Wald[1] with Jacob Wolfowitz as a tool for more efficient industrial quality control during World War II.

Essentially the same approach was independently developed at the same time by Alan Turing, as part of the Banburismus technique used at Bletchley Park to test hypotheses about whether different messages enciphered by German Enigma machines should be connected and analysed together. This work remained secret until the early 1980s.

Sequential analysis also has a connection to the problem of the gambler's ruin, studied by, among others, Huygens as early as 1657.[2]

[edit] Notes & References

  1. ^ Wald, Abraham (June 1945). "Sequential Tests of Statistical Hypotheses". The Annals of Mathematical Statistics 16 (2): 117–186. doi:10.1214/aoms/1177731118. 
  2. ^ Ghosh, B. K.; Sen, P. K. (1991). Handbook of Sequential Analysis. New York: Marcel Dekker. ISBN 0-8247-8404-1.
