Fiducial inference

From Wikipedia, the free encyclopedia

Fiducial inference was a form of statistical inference put forward by R. A. Fisher in an attempt to perform inverse probability without prior probability distributions.

A fiducial interval may be used in place of a confidence interval or a Bayesian credible interval to express the precision of a statistical estimate. Fiducial inference attracted controversy and was never universally accepted. In 1978, J. G. Pederson wrote that "the fiducial argument has had very limited success and is now essentially dead."[1]

Fisher did not develop fiducial inference substantially, and many of the results he did obtain could also be reached from Bayesian inference (which Fisher rejected philosophically), often using Jeffreys priors.

Background

The concept of a confidence interval with coverage γ is something that students often find difficult. The interpretation does seem rather convoluted: among all confidence intervals computed by the same method, a proportion γ will contain the true value that we wish to estimate (and therefore a proportion 1 − γ will not do so). This is a repeated sampling (or frequentist) interpretation. It is a property of the method for calculating the interval and does not tell us the probability that the true value is in the particular interval we have calculated.
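This repeated-sampling property can be checked directly by simulation. The sketch below (not part of the original article; the function name and constants are illustrative) builds a standard known-σ z-interval for a normal mean many times and counts how often it covers the true value; the fraction comes out close to the nominal γ = 0.95, even though no single interval carries that probability.

```python
import random

def coverage(n_trials=10_000, n=30, mu=5.0, sigma=2.0, z=1.96, seed=0):
    """Fraction of z-intervals (known sigma) that contain the true mean.

    Illustrates the frequentist reading: gamma is a property of the
    method over repeated samples, not of any one computed interval.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        xbar = sum(sample) / n
        half = z * sigma / n ** 0.5  # half-width of the interval
        if xbar - half <= mu <= xbar + half:
            hits += 1
    return hits / n_trials
```

Running this gives a proportion near 0.95; lowering z shrinks the intervals and the coverage together.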

By contrast, Bayesian credible intervals do allow us to give such a probability. However, they make an assumption that many statisticians find objectionable: that before looking at the data we can give a probability distribution (known as the prior distribution) for the unknown parameter. Fisher's fiducial method (from the Latin fiducia, "faith" or "trust") was designed to overcome this objection and at the same time give the more natural interpretation. The fiducial distribution is a measure of the degree of faith we should put in any given value of the unknown parameter.

Unfortunately, Fisher did not give a general definition of the fiducial method, and he denied that the method could always be applied. His only examples were for a single parameter; different generalisations have been given for the case of several parameters.

The fiducial distribution

Fisher required the existence of a sufficient statistic for the fiducial method to apply. Suppose we have a single sufficient statistic for a single parameter; that is, suppose that the conditional distribution of the data given the statistic does not depend on the value of the parameter. For example, suppose that n independent observations are uniformly distributed on the interval [0, ω]. The maximum, X, of the n observations is a sufficient statistic for ω. If we record X and forget the values of the remaining observations, these remaining observations are equally likely to have had any values in the interval [0, X]. This statement does not depend on the value of ω. Thus X contains all the available information about ω, and the other observations could have given no further information.
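The sufficiency claim can be illustrated numerically. In this sketch (not from the article; the function name and sample sizes are ours), the non-maximal observations are rescaled by the maximum X; their conditional distribution given X is uniform on [0, 1] after rescaling, whatever the value of ω, so their sample mean sits near 0.5 for any ω we try:

```python
import random

def scaled_rest_mean(omega, n=5, trials=20_000, seed=1):
    """Mean of (observation / max) over the non-maximal observations.

    Given the maximum X, the remaining observations behave like
    draws from U(0, X), so the rescaled values look U(0, 1) and
    their mean is ~0.5 regardless of omega -- X is sufficient.
    """
    rng = random.Random(seed)
    ratios = []
    for _ in range(trials):
        obs = [rng.uniform(0, omega) for _ in range(n)]
        x = max(obs)
        ratios.extend(o / x for o in obs if o != x)
    return sum(ratios) / len(ratios)
```

Calling this with two very different values of ω returns essentially the same number, which is the point: once X is known, the rest of the sample says nothing more about ω.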

The distribution function of X is

    F(x) = P(X ≤ x) = P(all observations ≤ x) = (x/ω)^n.

We may make probability statements about X/ω. For example, given α with 0 < α < 1, we may choose a with 0 < a < 1 such that

    P(a < X/ω) = 1 − a^n = α,

so that a = (1 − α)^{1/n}. Then Fisher says we may invert that statement and say

    P(ω < X/a) = α.

In this latter statement ω is now regarded as a random variable and X is fixed, whereas previously it was the other way round. This distribution of ω is the fiducial distribution, which may be used to form fiducial intervals.
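The inverted statement can be turned into an interval directly. Setting t = X/a in it gives a fiducial distribution function G(t) = 1 − (x/t)^n for t ≥ x, and solving G(t) = γ yields a one-sided fiducial interval. A minimal sketch (the function name is ours, not Fisher's notation):

```python
def fiducial_interval(x, n, gamma=0.95):
    """One-sided fiducial interval [x, upper] for omega, given the
    maximum x of n observations uniform on [0, omega].

    Fiducial CDF: G(t) = 1 - (x/t)**n for t >= x (omega >= x always),
    so the upper endpoint solves 1 - (x/upper)**n = gamma.
    """
    upper = x / (1.0 - gamma) ** (1.0 / n)
    return x, upper
```

For example, with n = 10 observations and maximum x = 0.9, the 95% fiducial interval runs from 0.9 up to 0.9 / 0.05^{1/10} ≈ 1.21. The same numbers arise as a confidence interval from the pivotal quantity X/ω; only the interpretation differs.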

The calculation is identical to the pivotal method for finding a confidence interval, but the interpretation is different. In fact older books use the terms confidence interval and fiducial interval interchangeably. Notice that the fiducial distribution is uniquely defined when a single sufficient statistic exists.

The pivotal method is based on a random variable that is a function of both the observations and the parameters but whose distribution does not depend on the parameters. Then probability statements about the observations may be made that do not depend on the parameters, and these may be inverted by solving for the parameters in much the same way as in the example above. However, this is only equivalent to the fiducial method if the pivotal quantity is uniquely defined based on a sufficient statistic.

We could define a fiducial interval to be just a different name for a confidence interval and give it the fiducial interpretation. But the definition might not then be unique. Fisher would have denied that this interpretation is correct: for him, the fiducial distribution had to be defined uniquely and it had to use all the information in the sample.

Bibliography

  1. ^ Pederson, J. G. (1978), "Fiducial Inference", International Statistical Review 46 (2): 147–170, <http://links.jstor.org/sici?sici=0306-7734%28197808%2946%3A2%3C147%3AFI%3E2.0.CO%3B2-S>
  • Fisher, R. A. (1956). Statistical Methods and Scientific Inference. New York: Hafner.
  • (1950). In Tukey, J. W. (ed.), R. A. Fisher's Contributions to Mathematical Statistics. New York: Wiley.