Continuity correction

In probability theory, if a random variable X has a binomial distribution with parameters n and p, i.e., X is distributed as the number of "successes" in n independent Bernoulli trials with probability p of success on each trial, then

P(X ≤ x) = P(X < x + 1)

for any x ∈ {0, 1, 2, ..., n}. If np and n(1 − p) are large (sometimes taken to mean that both are ≥ 5), then the probability above is fairly well approximated by

P(Y ≤ x + 1/2)

where Y is a normally distributed random variable with the same expected value and the same variance as X, i.e., E(Y) = np and var(Y) = np(1 − p). The addition of 1/2 to x is called a continuity correction.
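
As a numerical illustration (not part of the original article), the following Python sketch compares the exact binomial probability P(X ≤ x) with the normal approximation, both with and without the continuity correction, using SciPy. The parameter values n = 20, p = 0.4, and x = 9 are arbitrary choices satisfying np ≥ 5 and n(1 − p) ≥ 5.

    # Illustrative sketch: exact binomial probability vs. normal approximation,
    # with and without the continuity correction. Parameter values are arbitrary.
    from math import sqrt
    from scipy.stats import binom, norm

    n, p, x = 20, 0.4, 9            # np = 8 and n(1 - p) = 12, both >= 5
    mu = n * p                      # E(Y) = np
    sigma = sqrt(n * p * (1 - p))   # sd(Y) = sqrt(np(1 - p))

    exact = binom.cdf(x, n, p)                  # P(X <= x), exact
    uncorrected = norm.cdf(x, mu, sigma)        # P(Y <= x), no correction
    corrected = norm.cdf(x + 0.5, mu, sigma)    # P(Y <= x + 1/2), with correction

    print(f"exact:       {exact:.4f}")
    print(f"uncorrected: {uncorrected:.4f}")
    print(f"corrected:   {corrected:.4f}")

With the correction, the normal approximation is typically noticeably closer to the exact binomial probability than the uncorrected value.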

A continuity correction can also be applied when other discrete distributions supported on the integers are approximated by the normal distribution. For example, if X has a Poisson distribution with expected value λ then the variance of X is also λ, and

P(X ≤ x) = P(X < x + 1) ≈ P(Y ≤ x + 1/2)

if Y is normally distributed with expectation and variance both λ.
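
The Poisson case can be checked in the same way. The sketch below (again illustrative, with arbitrary values λ = 15 and x = 12) compares the exact Poisson probability with the continuity-corrected normal approximation.

    # Illustrative sketch: continuity correction for a Poisson(lambda) variable,
    # approximated by a normal with mean and variance both lambda.
    from math import sqrt
    from scipy.stats import poisson, norm

    lam, x = 15, 12                 # arbitrary example parameters

    exact = poisson.cdf(x, lam)                     # P(X <= x), exact
    corrected = norm.cdf(x + 0.5, lam, sqrt(lam))   # P(Y <= x + 1/2), with correction

    print(f"exact:     {exact:.4f}")
    print(f"corrected: {corrected:.4f}")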

See also Yates' correction for continuity.
