Sufficiency (statistics)
In statistics, a statistic is sufficient for the parameter θ, which indexes the distribution family of the data, precisely when the data's conditional probability distribution, given the statistic's value, no longer depends on θ.
Intuitively, a sufficient statistic for θ captures all of the information about θ contained in the data. Both the statistic and θ can be vectors.
The concept is due to Sir Ronald Fisher.
Mathematical definition
A statistic T(X) is sufficient for θ precisely if the conditional probability distribution of the data X, given the statistic T(X), is independent of the parameter θ, i.e.

    \Pr(X = x \mid T(X) = t, \theta) = \Pr(X = x \mid T(X) = t),

or in shorthand

    \Pr(x \mid t, \theta) = \Pr(x \mid t).
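As an illustration of this definition (an added sketch, not part of the formal development), the following Python simulation checks that, for Bernoulli data with T(X) = X1 + ... + Xn as in the example developed below, the empirical conditional distribution of the sample given T(X) = t is the same for two different parameter values; the sample size, parameter values, and helper names are illustrative choices.

    # Illustrative numerical check of the definition of sufficiency (assumed setup:
    # n = 3 Bernoulli trials, T(X) = sum of the X_i; parameter values 0.3 and 0.7).
    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(0)
    n, t, draws = 3, 2, 200_000

    def conditional_freq(p):
        """Empirical distribution of the sample pattern given T(X) = t."""
        x = rng.binomial(1, p, size=(draws, n))
        keep = x[x.sum(axis=1) == t]
        counts = Counter(map(tuple, keep))
        total = sum(counts.values())
        return {pattern: c / total for pattern, c in sorted(counts.items())}

    print(conditional_freq(0.3))  # roughly 1/3 each for (0,1,1), (1,0,1), (1,1,0)
    print(conditional_freq(0.7))  # essentially the same frequencies: no dependence on p

Both calls return (up to simulation noise) the same conditional distribution, which is what sufficiency of the sum asserts.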
Fisher's factorization theorem
Fisher's factorization theorem provides a convenient characterization of a sufficient statistic. If the likelihood function of X is L(θ; x), then T is sufficient for θ if and only if functions g and h can be found such that

    L(\theta; x) = g\big(T(x), \theta\big)\, h(x),

i.e. the likelihood L can be factored into a product such that one factor, h, does not depend on θ, and the other factor, g, which does depend on θ, depends on x only through T(x).
Interpretation
A way to think about this is to consider varying x in such a way as to maintain a constant value of T(x) and to ask whether such a variation has any effect on inferences one might make about θ. If the factorization criterion above holds, the answer is "none": the likelihood depends on θ only through the factor g(T(x), θ), which is unchanged when T(x) is held constant.
Proof
Sufficiency ⇒ Factorization

If T(X) is sufficient then, treating the discrete case for simplicity, the conditional probability Pr(X = x | T(X) = t) does not depend on θ, and

    \Pr_\theta(X = x) = \Pr_\theta\big(X = x,\, T(X) = T(x)\big) = \Pr_\theta\big(T(X) = T(x)\big)\,\Pr\big(X = x \mid T(X) = T(x)\big),

so taking g(T(x), θ) = Pr_θ(T(X) = T(x)) and h(x) = Pr(X = x | T(X) = T(x)) gives the required factorization of L(θ; x) = Pr_θ(X = x), because the second factor is free of θ by sufficiency.
Factorization ⇒ Sufficiency

On the other hand, if the factorization L(θ; x) = g(T(x), θ) h(x) holds, then for any x with T(x) = t,

    \Pr_\theta(X = x \mid T(X) = t) = \frac{g(t, \theta)\, h(x)}{\sum_{x' : T(x') = t} g(t, \theta)\, h(x')} = \frac{h(x)}{\sum_{x' : T(x') = t} h(x')},

which is independent of θ (and is zero when T(x) ≠ t), so T is sufficient.
Minimal sufficiency
A sufficient statistic is minimal sufficient if it can be represented as a function of any other sufficient statistic.
In other words, S(X) is minimal sufficient iff
- S(X) is sufficient, and
- if T(X) is sufficient, then there exists a function f such that S(X) = f(T(X)).
Intuitively, a minimal sufficient statistic captures all of the available information about the parameter θ as compactly as possible.
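One standard way to check minimal sufficiency (a criterion associated with Lehmann and Scheffé, mentioned here as background rather than taken from this article) is that S is minimal sufficient when the likelihood ratio L(θ; x)/L(θ; y) is free of θ exactly when S(x) = S(y). The symbolic Python sketch below applies this idea to the Bernoulli example treated in the next section; the sample vectors are arbitrary illustrations.

    # Symbolic sketch: for Bernoulli samples, the likelihood ratio between two data
    # vectors is free of p exactly when their sums agree -- consistent with the sum
    # being minimal sufficient.  (Sample vectors chosen arbitrarily for illustration.)
    import sympy as sp

    p = sp.symbols('p', positive=True)

    def likelihood(sample):
        expr = sp.Integer(1)
        for xi in sample:
            expr *= p**xi * (1 - p)**(1 - xi)
        return expr

    x = [1, 0, 1, 0]   # sum = 2
    y = [0, 1, 1, 0]   # sum = 2  -> ratio simplifies to 1, no p left
    z = [1, 1, 1, 0]   # sum = 3  -> ratio still depends on p

    print(sp.simplify(likelihood(x) / likelihood(y)))          # 1
    print(sp.simplify(likelihood(x) / likelihood(z)).has(p))   # True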
Examples
Bernoulli distribution
If X1, ..., Xn are independent Bernoulli-distributed random variables with expected value p, then the sum T(X) = X1 + ... + Xn is a sufficient statistic for p (here 'success' corresponds to Xi = 1 and 'failure' to Xi = 0, so T is the total number of successes).

This is seen by considering the joint probability distribution

    \Pr(X = x) = \Pr(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n).

Because the observations are independent, this can be written as

    p^{x_1}(1 - p)^{1 - x_1}\, p^{x_2}(1 - p)^{1 - x_2} \cdots p^{x_n}(1 - p)^{1 - x_n}

and, collecting powers of p and 1 − p, gives

    p^{\sum_i x_i}(1 - p)^{\,n - \sum_i x_i} = p^{T(x)}(1 - p)^{\,n - T(x)},

which satisfies the factorization criterion, with h(x) = 1 being simply a constant.
Note the crucial feature: the unknown parameter p interacts with the observation x only via the statistic T(x) = Σ xi.
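As a small numerical sanity check (an added illustration, with g and h written out as in the factorization above), the joint probability can be compared against g(T(x), p)·h(x), where g(t, p) = p^t (1 − p)^{n−t} and h(x) = 1:

    # Numerical check of the factorization Pr(X = x) = g(T(x), p) * h(x) for the
    # Bernoulli example, with g(t, p) = p**t * (1 - p)**(n - t) and h(x) = 1.
    import numpy as np

    rng = np.random.default_rng(1)

    def joint_pmf(x, p):
        return np.prod(np.where(x == 1, p, 1 - p))

    def g(t, p, n):
        return p**t * (1 - p)**(n - t)

    for _ in range(5):
        p = rng.uniform(0.05, 0.95)
        x = rng.integers(0, 2, size=10)
        assert np.isclose(joint_pmf(x, p), g(x.sum(), p, x.size) * 1.0)
    print("factorization holds for all sampled (x, p) pairs")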
Uniform distribution
If X1, ..., Xn are independent and uniformly distributed on the interval [0, θ], then T(X) = max(X1, ..., Xn) is sufficient for θ.

To see this, consider the joint probability density of X = (X1, ..., Xn). Because the observations are independent, it can be written as

    f_X(x_1, \ldots, x_n) = \frac{1}{\theta}H(\theta - x_1)\cdot \frac{1}{\theta}H(\theta - x_2) \cdots \frac{1}{\theta}H(\theta - x_n) \qquad (x_1, \ldots, x_n \ge 0),

where H(x) is the Heaviside step function. This may be written as

    f_X(x_1, \ldots, x_n) = \frac{1}{\theta^n}\, H\Big(\theta - \max_i x_i\Big),

which can be viewed as a function of only θ and max_i x_i = T(x). This shows that the factorization criterion is satisfied, again with h(x) = 1 being a constant (on the set where all x_i ≥ 0).
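The following sketch (an added illustration; the sample and the θ values are arbitrary) checks numerically that the product of the individual densities agrees with θ^{-n} H(θ − max_i x_i), so the likelihood depends on the data only through the maximum:

    # Numerical check for the uniform example: the product of the individual
    # densities equals theta**(-n) * H(theta - max(x)) for x_i >= 0.
    import numpy as np

    def H(u):                      # Heaviside step function (1 for u >= 0, else 0)
        return 1.0 if u >= 0 else 0.0

    def joint_density(x, theta):
        return np.prod([(1.0 / theta) * H(theta - xi) for xi in x])

    def factored_form(x, theta):
        return theta**(-len(x)) * H(theta - max(x))

    x = [0.4, 1.7, 0.9, 2.3]       # illustrative sample (all nonnegative)
    for theta in (1.0, 2.3, 3.0, 5.0):
        assert np.isclose(joint_density(x, theta), factored_form(x, theta))
    print("joint density depends on the data only through max(x)")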
Poisson distribution
If X1, ..., Xn are independent and have a Poisson distribution with parameter λ, then the sum T(X) = X1 + ... + Xn is a sufficient statistic for λ.

To see this, consider the joint probability distribution

    \Pr(X = x) = \Pr(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n).

Because the observations are independent, this can be written as

    \frac{e^{-\lambda}\lambda^{x_1}}{x_1!}\cdot \frac{e^{-\lambda}\lambda^{x_2}}{x_2!} \cdots \frac{e^{-\lambda}\lambda^{x_n}}{x_n!},

which may be written as

    e^{-n\lambda}\,\lambda^{x_1 + x_2 + \cdots + x_n}\cdot \frac{1}{x_1!\, x_2! \cdots x_n!},

which shows that the factorization criterion is satisfied, where h(x) is the reciprocal of the product of the factorials and the other factor depends on the data only through the sum T(x) = Σ xi.
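A consequence of sufficiency here (a standard fact, added as an illustration rather than taken from the article) is that, given T = t, the individual count X1 is binomially distributed with t trials and success probability 1/n, with no dependence on λ. The sketch below checks this conditional distribution numerically for two different λ values using scipy; n, t and the λ values are illustrative choices.

    # Check that Pr(X1 = k | T = t) for Poisson data equals the Binomial(t, 1/n)
    # pmf, independently of lambda.  (n, t and the lambda values are illustrative.)
    from scipy.stats import poisson, binom

    n, t = 4, 6

    def conditional_pmf(k, lam):
        # Pr(X1 = k, X2 + ... + Xn = t - k) / Pr(T = t), with T ~ Poisson(n * lam)
        num = poisson.pmf(k, lam) * poisson.pmf(t - k, (n - 1) * lam)
        return num / poisson.pmf(t, n * lam)

    for k in range(t + 1):
        a = conditional_pmf(k, lam=0.5)
        b = conditional_pmf(k, lam=3.0)
        c = binom.pmf(k, t, 1.0 / n)
        assert abs(a - c) < 1e-10 and abs(b - c) < 1e-10
    print("conditional distribution of X1 given T is free of lambda")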
The Rao-Blackwell theorem
Sufficiency finds a useful application in the Rao-Blackwell theorem.
Since the conditional distribution of X given a sufficient statistic T(X) does not depend on θ, neither does the conditional expected value of g(X) given T(X), where g is any function well-behaved enough for the conditional expectation to exist. Consequently that conditional expected value is actually a statistic, and so is available for use in estimation.
The Rao-Blackwell theorem states that if g(X) is any estimator of θ, then typically the conditional expectation of g(X) given T(X) is a better estimator of θ (in the sense of mean squared error), and it is never worse. Sometimes one can very easily construct a very crude estimator g(X), and then evaluate that conditional expected value to get an estimator that is in various senses optimal.
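As a concrete illustration (a textbook-style example added here, not part of the article's own exposition): for Poisson data one can estimate e^{−λ} = Pr(X1 = 0) with the crude estimator 1{X1 = 0}; conditioning on the sufficient statistic T = ΣXi gives E[1{X1 = 0} | T] = (1 − 1/n)^T, since X1 given T = t is Binomial(t, 1/n) as noted above. A simulation sketch comparing the two estimators (n, λ and the number of replications are illustrative choices):

    # Rao-Blackwellization sketch for Poisson data: estimate exp(-lambda) = Pr(X1 = 0).
    # Crude estimator: the indicator 1{X1 = 0}.  Improved estimator: its conditional
    # expectation given T = sum(X), which equals (1 - 1/n)**T.
    import numpy as np

    rng = np.random.default_rng(2)
    n, lam, reps = 10, 2.0, 100_000

    x = rng.poisson(lam, size=(reps, n))
    crude = (x[:, 0] == 0).astype(float)          # 1{X1 = 0}
    rao_blackwell = (1.0 - 1.0 / n) ** x.sum(axis=1)

    print("target        :", np.exp(-lam))
    print("crude mean/var:", crude.mean(), crude.var())
    print("RB    mean/var:", rao_blackwell.mean(), rao_blackwell.var())
    # Both estimators are unbiased for exp(-lambda); the Rao-Blackwellized one has
    # markedly smaller variance.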
See also
- Completeness of a statistic
- Basu's theorem on independence of complete sufficient & ancillary statistics
- The Rao-Blackwell theorem on improving an estimator through conditioning with a sufficient statistic
- The Lehmann-Scheffé theorem, stating that an unbiased estimator based on a complete sufficient statistic is the best estimator of its expectation