Chapman–Robbins bound

In statistics, the Chapman–Robbins bound or Hammersley–Chapman–Robbins bound is a lower bound on the variance of estimators of a deterministic parameter. It is a generalization of the Cramér–Rao bound; compared to the Cramér–Rao bound, it is both tighter and applicable to a wider range of problems. However, it is usually more difficult to compute.

The bound was independently discovered by Hammersley[1] in 1950 and by Chapman and Robbins[2] in 1951.

Statement

Let \theta \in {\mathbb R}^n be an unknown, deterministic parameter, and let X \in {\mathbb R}^k be a random variable, interpreted as a measurement of θ. Suppose the probability density function of X is given by p(x;θ). It is assumed that p(x;θ) is well-defined and positive for all values of x and θ.

Suppose δ(X) is an unbiased estimator of an arbitrary function g(θ) of θ, i.e.,

E[\delta(X)] = g(\theta) for all θ.

The Chapman–Robbins bound then states that

\mathrm{Var}(\delta(X)) \ge \sup_{\Delta \neq 0} \frac{\left[ g(\theta+\Delta) - g(\theta) \right]^2}{E\left[ \left( \frac{p(X;\theta+\Delta)}{p(X;\theta)} - 1 \right)^2 \right]}.
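
The denominator is the χ² divergence between p(·;θ+Δ) and p(·;θ), taken with respect to p(·;θ). As a concrete illustration (a minimal numerical sketch, not part of the original statement), the following evaluates the bound for a single Bernoulli(θ) observation with g(θ) = θ; the grid of offsets Δ and the function name are illustrative choices. In this example the quantity being maximized equals θ(1 − θ) for every admissible Δ.

```python
import numpy as np

def chapman_robbins_bound(theta, deltas):
    """Chapman-Robbins bound for estimating g(theta) = theta from a single
    Bernoulli(theta) observation, maximized over a grid of offsets Delta."""
    best = 0.0
    for d in deltas:
        # Skip Delta = 0 and offsets that leave the parameter space (0, 1).
        if d == 0.0 or not (0.0 < theta + d < 1.0):
            continue
        # E[(p(X; theta + Delta) / p(X; theta) - 1)^2], summed over x in {0, 1}:
        # Delta^2 / theta + Delta^2 / (1 - theta) = Delta^2 / (theta * (1 - theta)).
        chi2 = d**2 / theta + d**2 / (1.0 - theta)
        best = max(best, d**2 / chi2)
    return best

theta = 0.3
deltas = np.linspace(-0.29, 0.69, 199)
print(chapman_robbins_bound(theta, deltas))  # ~= theta * (1 - theta) = 0.21
```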

Relation to Cramér–Rao bound

As Δ → 0, the expression inside the supremum converges to the corresponding Cramér–Rao expression, assuming the regularity conditions of the Cramér–Rao bound hold. Since the Chapman–Robbins bound takes a supremum over all admissible Δ, it follows that, when both bounds exist, the Chapman–Robbins bound is always at least as tight as the Cramér–Rao bound; in many cases it is substantially tighter.
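
As an illustration of this limiting behaviour (a sketch under assumed conditions, not part of the original text), consider a single observation X ~ N(θ, 1) with g(θ) = θ. Here E[(p(X;θ+Δ)/p(X;θ) − 1)²] = e^{Δ²} − 1, so the quantity being maximized is Δ²/(e^{Δ²} − 1), which increases toward the Cramér–Rao bound 1/I(θ) = 1 as Δ → 0.

```python
import numpy as np

# Supremand of the Chapman-Robbins bound for one N(theta, 1) observation
# with g(theta) = theta: Delta^2 / (exp(Delta^2) - 1).
deltas = np.array([2.0, 1.0, 0.5, 0.1, 0.01])
supremand = deltas**2 / np.expm1(deltas**2)  # expm1 stays accurate for small Delta
for d, v in zip(deltas, supremand):
    print(f"Delta = {d:5.2f}   bound term = {v:.6f}")
# The printed values increase toward 1, the Cramer-Rao bound 1/I(theta) here.
```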

The Chapman–Robbins bound also holds under much weaker regularity conditions. For example, no assumption is made about the differentiability of the probability density function p(x;θ) with respect to θ. When p(x;θ) is non-differentiable, the Fisher information is not defined, and hence the Cramér–Rao bound does not exist.

Further reading

  • Lehmann, E. L.; Casella, G. (1998). Theory of Point Estimation (2nd ed.). Springer. pp. 113–114. ISBN 0-387-98502-6.

References

  1. ^ Hammersley, J. M. (1950), “On estimating restricted parameters”, J. Roy. Stat. Soc. B 12 (2): 192–240, <http://links.jstor.org/sici?sici=0035-9246%281950%2912%3A2%3C192%3AOERP%3E2.0.CO%3B2-M>
  2. ^ Chapman, D. G. & Robbins, H. (1951), “Minimum variance estimation without regularity assumptions”, Ann. Math. Statist. 22 (4): 581–586, <http://links.jstor.org/sici?sici=0003-4851%28195112%2922%3A4%3C581%3AMVEWRA%3E2.0.CO%3B2-O>