Geometric distribution

Geometric (distribution of the number of trials X, supported on {1, 2, 3, ...})
[Plots of the probability mass function and the cumulative distribution function]
Parameters: 0 < p_1 \leq 1, success probability (real)
Support: k \in \{1, 2, 3, \dots\}
Probability mass function (pmf): (1 - p_1)^{k-1}\,p_1
Cumulative distribution function (cdf): 1 - (1 - p_1)^k
Mean: \frac{1}{p_1}
Median: \left\lceil \frac{-\log 2}{\log(1-p_1)} \right\rceil (not unique if -\log 2 / \log(1 - p_1) is an integer)
Mode: 1
Variance: \frac{1-p_1}{p_1^2}
Skewness: \frac{2-p_1}{\sqrt{1-p_1}}
Excess kurtosis: 6 + \frac{p_1^2}{1-p_1}
Entropy: -\frac{1-p_1}{p_1}\ln(1-p_1) - \ln p_1
Moment-generating function (mgf): \frac{p_1\,e^t}{1-(1-p_1)\,e^t}
Characteristic function: \frac{p_1\,e^{it}}{1-(1-p_1)\,e^{it}}

In probability theory and statistics, the geometric distribution is either of two discrete probability distributions:

  • the probability distribution of the number X of Bernoulli trials needed to get one success, supported on the set { 1, 2, 3, ...}, or
  • the probability distribution of the number Y = X − 1 of failures before the first success, supported on the set { 0, 1, 2, 3, ... }.

Which of these one calls "the" geometric distribution is a matter of convention and convenience.

If the probability of success on each trial is p1, then the probability that k trials are needed to get one success is

\Pr(X = k) = (1 - p_1)^{k-1}\,p_1\,

for k = 1, 2, 3, ....

Equivalently, if the probability of success on each trial is p0, then the probability that there are k failures before the first success is

\Pr(Y=k) = (1 - p_0)^k\,p_0\,

for k = 0, 1, 2, 3, ....

In either case, the sequence of probabilities is a geometric sequence.

For example, suppose an ordinary die is thrown repeatedly until the first time a "1" appears. The probability distribution of the number of times it is thrown is supported on the infinite set { 1, 2, 3, ... } and is a geometric distribution with p1 = 1/6.
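
A minimal simulation sketch of this example (assuming Python with only the standard library; the number of simulated experiments is an arbitrary choice) compares the formula (1 − p1)^{k−1} p1 with empirical frequencies:

# Throw a fair die until the first "1" appears and record the number of throws,
# then compare empirical frequencies with the geometric pmf (1 - p1)^(k-1) * p1.
import random

p1 = 1 / 6
experiments = 100_000

def throws_until_first_one():
    """Return the number of throws of a fair die up to and including the first 1."""
    k = 1
    while random.randint(1, 6) != 1:
        k += 1
    return k

counts = {}
for _ in range(experiments):
    k = throws_until_first_one()
    counts[k] = counts.get(k, 0) + 1

for k in range(1, 7):
    exact = (1 - p1) ** (k - 1) * p1
    empirical = counts.get(k, 0) / experiments
    print(f"k={k}: exact {exact:.4f}, simulated {empirical:.4f}")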

Moments and cumulants

The expected value of a geometrically distributed random variable X is 1 / p1 and the variance is (1-p_1)/p_1^2:

\mathrm{E}(X) = \frac{1}{p_1},  \qquad\mathrm{var}(X) = \frac{1-p_1}{p_1^2}.

Equivalently, the expected value of the geometrically distributed random variable Y is (1 − p0) / p0, and its variance is (1-p_0)/p_0^2:

\mathrm{E}(Y) = \frac{1-p_0}{p_0},  \qquad\mathrm{var}(Y) = \frac{1-p_0}{p_0^2}.

Let μ = (1 − p0) / p0 be the expected value of Y. Then the cumulants κn of the probability distribution of Y satisfy the recursion

\kappa_{n+1} = \mu(\mu+1) \frac{d\kappa_n}{d\mu}.
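
As a quick numerical check (a sketch assuming Python with NumPy; the value p0 = 0.3 and the sample size are arbitrary choices), the formulas for E(Y) and var(Y) can be compared with simulated values. Applying the recursion once to κ1 = μ gives κ2 = μ(μ + 1), which indeed equals the variance (1 − p0)/p0²:

# Check E(Y), var(Y), and the first step of the cumulant recursion
# (kappa_2 = mu * (mu + 1), since kappa_1 = mu and d kappa_1 / d mu = 1).
import numpy as np

rng = np.random.default_rng(0)
p0 = 0.3                                 # arbitrary illustrative value
mu = (1 - p0) / p0                       # E(Y)
var = (1 - p0) / p0**2                   # var(Y)

# NumPy's geometric sampler returns the number of trials X >= 1,
# so Y = X - 1 is the number of failures before the first success.
y = rng.geometric(p0, size=10**6) - 1

print("E(Y):   theory", mu, " simulated", y.mean())
print("var(Y): theory", var, " simulated", y.var())
print("kappa_2 from the recursion, mu*(mu+1):", mu * (mu + 1))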

Parameter estimation

For both variants of the geometric distribution, the parameter p can be estimated by equating the expected value with the sample mean. This is the method of moments, which in this case happens to yield maximum likelihood estimates of p.

Specifically, for the first variant let k_1,\dots,k_n be a sample where k_i \geq 1 for i=1,\dots,n. Then p1 can be estimated as

\widehat{p_1} = \left(\frac1n \sum_{i=1}^n k_i\right)^{-1}. \!
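
A minimal sketch of this estimator (assuming Python with NumPy; the true value of p1 and the sample size are arbitrary illustrative choices):

# Estimate p1 as the reciprocal of the sample mean of k_1, ..., k_n.
import numpy as np

rng = np.random.default_rng(1)
true_p1 = 0.25
k = rng.geometric(true_p1, size=5_000)   # each k_i >= 1: trials until first success

p1_hat = 1.0 / k.mean()
print("true p1:", true_p1, " estimate:", round(p1_hat, 4))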

In Bayesian inference, the Beta distribution is the conjugate prior distribution for the parameter p1. If this parameter is given a Beta(α, β) prior, then the posterior distribution is

p_1 \sim \mathrm{Beta}\left(\alpha+n,\ \beta+\sum_{i=1}^n (k_i-1)\right). \!

The posterior mean E[p1] approaches the maximum likelihood estimate \widehat{p_1} as α and β approach zero.

In the alternative case, let k_1,\dots,k_n be a sample where k_i \geq 0 for i=1,\dots,n. Then p0 can be estimated as

\widehat{p_0} = \left(1 + \frac1n \sum_{i=1}^n k_i\right)^{-1}. \!

The posterior distribution of p0 given a Beta(α, β) prior is

p_0 \sim \mathrm{Beta}\left(\alpha+n,\ \beta+\sum_{i=1}^n k_i\right). \!

Again the posterior mean E[p0] approaches the maximum likelihood estimate \widehat{p_0} as α and β approach zero.
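
The following sketch (assuming Python with NumPy; the prior parameters and the simulated sample are arbitrary illustrative choices) computes the posterior mean for p1 under a Beta(α, β) prior and shows it approaching the maximum likelihood estimate as α and β shrink toward zero:

# Posterior for p1 is Beta(alpha + n, beta + sum(k_i - 1)); its mean is a / (a + b).
import numpy as np

rng = np.random.default_rng(2)
k = rng.geometric(0.2, size=200)         # sample k_1, ..., k_n with k_i >= 1
n, s = len(k), k.sum()

mle = n / s                              # maximum likelihood estimate, 1 / sample mean
for alpha, beta in [(2.0, 2.0), (0.5, 0.5), (0.01, 0.01)]:
    a = alpha + n
    b = beta + (s - n)                   # beta + sum(k_i - 1)
    print(f"alpha = beta = {alpha}: posterior mean {a / (a + b):.4f}, MLE {mle:.4f}")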

Other properties

  • The probability-generating functions of X and Y are, respectively,
G_X(s) = \frac{s\,p_1}{1-s\,(1-p_1)},
G_Y(s) = \frac{p_0}{1-s\,(1-p_0)}, \quad |s| < (1-p_0)^{-1}.
  • Like its continuous analogue (the exponential distribution), the geometric distribution is memoryless: if an experiment is repeated until the first success, then, given that the first success has not yet occurred, the conditional probability distribution of the number of additional trials needed does not depend on how many failures have already been observed. The die one throws or the coin one tosses does not have a "memory" of these failures. The geometric distribution is in fact the only memoryless discrete distribution (a numerical illustration appears after this list).
  • Among all discrete probability distributions supported on {1, 2, 3, ... } with given expected value μ, the geometric distribution X with parameter p1 = 1/μ is the one with the largest entropy.
  • The geometric distribution of the number Y of failures before the first success is infinitely divisible, i.e., for any positive integer n, there exist independent identically distributed random variables Y1, ..., Yn whose sum has the same distribution that Y has. These will not be geometrically distributed unless n = 1; they follow a negative binomial distribution.
  • The decimal digits of the geometrically distributed random variable Y are a sequence of independent (and not identically distributed) random variables. For example, the hundreds digit D has this probability distribution:
\Pr(D=d) = {q^{100d} \over {1 + q^{100} + q^{200} + \cdots + q^{900}}},
where q = 1 − p0; the other digits behave in the same way, and, more generally, the same holds for numeral systems with bases other than 10. When the base is 2, this shows that a geometrically distributed random variable can be written as a sum of independent random variables whose probability distributions are indecomposable.
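
A numerical illustration of the memoryless property (a sketch assuming Python with NumPy; the values of p1, m and n are arbitrary): for X supported on {1, 2, 3, ...}, P(X > m + n | X > m) should equal P(X > n) = (1 − p1)^n.

# Simulate X and compare the conditional tail probability with (1 - p1)^n.
import numpy as np

rng = np.random.default_rng(3)
p1, m, n = 1 / 6, 4, 3
x = rng.geometric(p1, size=10**6)        # trials until the first success

conditional = np.mean(x[x > m] > m + n)  # estimate of P(X > m + n | X > m)
unconditional = (1 - p1) ** n            # exact P(X > n)
print("simulated conditional:", round(float(conditional), 4),
      " exact P(X > n):", round(unconditional, 4))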

Related distributions

  • If Y1, ..., Yr are independent geometrically distributed variables with the same success parameter p1, then the sum
Z = \sum_{m=1}^r Y_m
follows a negative binomial distribution with parameters r and p1 (a simulation check appears after this list).
  • If Y1,...,Yr are independent geometrically distributed variables (with possibly different success parameters p_1^{(m)}), then their minimum
W = \min_{m} Y_m\,
is also geometrically distributed, with parameter p1 given by
1-\prod_{m}(1-p_1^{(m)}).
  • Suppose 0 < r < 1, and for k = 1, 2, 3, ... the random variable Xk has a Poisson distribution with expected value r^k/k. Then
\sum_{k=1}^\infty k\,X_k
has a geometric distribution taking values in the set {0, 1, 2, ...}, with expected value r/(1 − r).
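
A simulation check of the first item above (a sketch assuming Python with NumPy; r, p1 and the sample size are arbitrary illustrative choices), comparing the distribution of the sum of r failure-count geometric variables with the negative binomial pmf:

# Sum r independent failure-count geometric variables and compare the result
# with the negative binomial pmf P(Z = k) = C(k + r - 1, k) * p^r * (1 - p)^k.
import math
import numpy as np

rng = np.random.default_rng(4)
r, p1, size = 4, 0.3, 10**6

y = rng.geometric(p1, size=(r, size)) - 1   # Y_m = failures before the first success
z = y.sum(axis=0)

def negbin_pmf(k, r, p):
    """Probability of k failures before the r-th success."""
    return math.comb(k + r - 1, k) * p**r * (1 - p)**k

for k in range(5):
    print(f"k={k}: simulated {np.mean(z == k):.4f}, "
          f"negative binomial {negbin_pmf(k, r, p1):.4f}")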
