Completeness (statistics)

In statistics, completeness is a property of a statistic for which the statistic obtains optimal information, in a certain sense, about the unknown parameters characterizing the distribution of the underlying data.

It is closely related to statistical sufficiency and often occurs in conjunction with it.

Mathematical definition

Suppose a random variable X (which may be a sequence (X1, ..., Xn) of scalar-valued random variables) has a probability distribution belonging to a known family of probability distributions \mathcal{P} parametrized by θ. Let s(X) be any statistic based on X.

Then s(X) is a complete statistic if, for every measurable function g,

\operatorname{E}_\theta(g(s(X))) = 0 \text{ for all } \theta \quad \Longrightarrow \quad \operatorname{P}_\theta(g(s(X)) = 0) = 1 \text{ for all } \theta,

and it is boundedly complete if the implication holds for all bounded measurable g.
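
As a quick illustration of the definition, consider the standard textbook case (not tied to the examples below) in which X is a single Bernoulli observation with success probability p, 0 < p < 1, and s(X) = X. Then

\operatorname{E}_p(g(X)) = (1-p)\,g(0) + p\,g(1),

and this can vanish for every p in (0, 1) only if g(0) = g(1) = 0, so X is a complete statistic for this family.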

Completeness of the family

It is not guaranteed that a complete statistic exists for a given family of probability distributions. In contrast, a minimal sufficient statistic always exists.

In particular, if a complete sufficient statistic exists, then a sufficient statistic is complete if and only if it is minimal sufficient. Taking this fact into account, the family \mathcal{P} of distributions is called complete if its minimal sufficient statistic is complete.
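
For instance, anticipating the example in the next section: for two independent observations X1, X2 from the normal distribution with mean θ and variance 1, the minimal sufficient statistic is X1 + X2, which is shown below to be complete; hence that family of distributions is complete in the above sense.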

Examples

Example of a complete statistic

Suppose (X1, X2) are independent, identically distributed random variables, normally distributed with expectation θ and variance 1. The sum

s((X_1, X_2)) = X_1 + X_2

is a complete statistic. To show this one demonstrates that there is no non-zero function g such that the expectation of

g(s(X_1, X_2)) = g(X_1 + X_2)

remains zero regardless of the value of θ.

That fact may be seen as follows. The probability distribution of X1 + X2 is normal with expectation 2θ and variance 2. Its probability density function in x is therefore proportional to

\exp\left(-(x-2\theta)^2/4\right).

The expectation of g above would therefore be a constant times

\int_{-\infty}^\infty g(x)\exp\left(-(x-2\theta)^2/4\right)\,dx.

A bit of algebra reduces this to

k(\theta) \int_{-\infty}^\infty h(x)e^{x\theta}\,dx

where k(θ) is nowhere zero and

h(x) = g(x)e^{-x^2/4}.
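
Explicitly, the algebra in question is a completion of the square in the exponent:

-(x-2\theta)^2/4 = -x^2/4 + x\theta - \theta^2,

so that

\int_{-\infty}^\infty g(x)\exp\left(-(x-2\theta)^2/4\right)\,dx = e^{-\theta^2}\int_{-\infty}^\infty h(x)e^{x\theta}\,dx,

which identifies k(θ) as exp(−θ²), a factor that is indeed nowhere zero.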

As a function of θ this is a two-sided Laplace transform of h, and it cannot be identically zero unless h(x) is zero almost everywhere. Since the exponential factor exp(−x²/4) is never zero, this can happen only if g(x) is zero almost everywhere.

Counterexample 1

Again suppose (X1, X2) are independent, identically distributed random variables, normally distributed with expectation θ and variance 1.

Then

g((X_1, X_2)) = X_1 - X_2

is an unbiased estimator of zero. Therefore the pair (X1, X2) itself is not a complete statistic (though it is a sufficient statistic).
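
Indeed, the expectation can be checked directly:

\operatorname{E}_\theta(X_1 - X_2) = \operatorname{E}_\theta(X_1) - \operatorname{E}_\theta(X_2) = \theta - \theta = 0 \quad \text{for every } \theta,

while X1 − X2 is not zero almost everywhere, so the defining implication of completeness fails.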

Counterexample 2

Let U be uniformly distributed on [−½, ½], and let X = U + θ, so that the distribution of X is parametrized by its mean θ = E(X).

If g(x) = sin(2πx), then E(g(X)) = 0 irrespective of θ. Therefore X itself is not a complete statistic for θ.
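
This can be checked by direct integration: the density of X equals 1 on the interval [θ − ½, θ + ½], so

\operatorname{E}_\theta(\sin(2\pi X)) = \int_{\theta - 1/2}^{\theta + 1/2} \sin(2\pi x)\,dx = \frac{\cos(2\pi(\theta - 1/2)) - \cos(2\pi(\theta + 1/2))}{2\pi} = 0,

since sin(2πx) has period 1 and therefore integrates to zero over any interval of length one.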

Utility

Lehmann-Scheffé theorem

The major importance of completeness is in the application of the Lehmann-Scheffé theorem, which states that a statistic that is unbiased, complete and sufficient for some parameter θ is the best estimator of θ, i.e., the one that has minimal expected loss under any convex loss function (in typical practice, minimal mean squared error) among all estimators with the same expected value.
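
For instance, in the normal example above, X1 + X2 is a complete sufficient statistic for θ, so the Lehmann-Scheffé theorem implies that the unbiased estimator

\hat\theta = (X_1 + X_2)/2

has minimal mean squared error among all unbiased estimators of θ.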

Basu's theorem

Completeness is also a prerequisite for the applicability of Basu's theorem: a statistic which is both complete and sufficient is independent of any ancillary statistic (a statistic whose probability distribution does not depend on the parameter θ).
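
For instance, combining the example and Counterexample 1 above: for two independent N(θ, 1) observations, X1 + X2 is complete and sufficient, while X1 − X2 is ancillary, since

X_1 - X_2 \sim N(0,\ 2) \quad \text{for every } \theta.

Basu's theorem therefore shows that X1 + X2 and X1 − X2 are independent.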