Imprecise probability

From Wikipedia, the free encyclopedia

The notion of imprecise probability is used as a generic term to cover all mathematical models which measure chance or uncertainty without sharp numerical probabilities. It includes both qualitative models (comparative probability, partial preference orderings, ...) and quantitative models (interval probabilities, possibility theory, belief functions, upper and lower previsions, upper and lower probabilities, ...). Imprecise probability models are needed in inference problems where the relevant information is scarce, vague or conflicting, and in decision problems where preferences may also be incomplete.

Imprecise probability theory aims not to replace, but to complement and enlarge, the classical Bayesian approach to probability theory, by providing it with tools to work with weaker states of information.


Gambling interpretation of imprecise probabilities according to Walley

A common starting point for researchers in imprecise probability is the 1991 book by Peter Walley. This book is by no means the first attempt to formalize imprecise probabilities, but it is highly cited in the community for both its depth and breadth of coverage. In terms of probability interpretations, Walley's formulation of imprecise probabilities is based on the subjective or Bayesian interpretation of probability. Walley defines upper and lower probabilities as special cases of upper and lower previsions, building on the gambling framework advanced by Bruno de Finetti. In simple terms, a decision maker's lower prevision is the highest price at which the decision maker is sure he or she would buy a gamble, and the upper prevision is the lowest price at which the decision maker is sure he or she would buy the opposite of the gamble (which is equivalent to selling the original gamble). If the upper and lower previsions are equal, then they jointly represent the decision maker's fair price for the gamble, the price at which the decision maker is willing to take either side of the gamble. The existence of a fair price leads to precise probabilities.

The allowance for imprecision, or a gap between a decision maker's upper and lower previsions, is the primary difference between precise and imprecise probability theories.
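One standard way to obtain lower and upper previsions is as the minimum and maximum expected payoff of a gamble over a set of candidate probability distributions (a so-called credal set). The following sketch illustrates this; the particular distributions and payoffs are illustrative assumptions, not taken from Walley's book.

```python
# Sketch: lower and upper previsions of a gamble as the minimum and
# maximum expectation over a credal set of candidate distributions.

def expectation(probs, payoffs):
    """Expected payoff of a gamble under one probability distribution."""
    return sum(p * x for p, x in zip(probs, payoffs))

def lower_upper_prevision(credal_set, payoffs):
    """Lower prevision = highest sure buying price (min expectation);
    upper prevision = lowest sure selling price (max expectation)."""
    expectations = [expectation(probs, payoffs) for probs in credal_set]
    return min(expectations), max(expectations)

# Gamble on a binary event: pays 1 if it occurs, 0 otherwise.
payoffs = [1.0, 0.0]
# Credal set: all the decision maker knows is that the event's
# probability lies somewhere between 0.4 and 0.6 (assumed for illustration).
credal_set = [[0.4, 0.6], [0.5, 0.5], [0.6, 0.4]]

lower, upper = lower_upper_prevision(credal_set, payoffs)
print(lower, upper)  # 0.4 0.6
```

For a gamble that pays 1 on an event and 0 otherwise, the lower and upper previsions reduce to the lower and upper probabilities of that event; if the credal set shrinks to a single distribution, the two prices coincide and a precise probability results.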

Motivation for imprecise probabilities

The general motivation for imprecise probabilities is that the more evidence on which a probability estimate is based, the more confidence a decision-maker can have in it. Thus, the imprecision in the probabilities should be expressed explicitly in order to signal the appropriate level of confidence to ascribe to them.

Walley considers the exercise of determining the probability that a tossed thumbtack lands pin-up. Three experimenters perform this exercise, as follows (adapted from Walley):

  • Experimenter A is in a hurry and does not even look at the thumbtack. Experimenter A employs a non-informative prior distribution (e.g., uses the principle of indifference or insufficient reason) and assumes that the probability of the tack landing pin-up is equal to the probability it lands pin-down, thus ascribing a probability of 0.5 to both outcomes.
  • Experimenter B tosses the thumbtack 10 times and gets 6 pin-ups. Experimenter B’s estimated precise probability of the tack landing pin-up is thus 0.6.
  • Experimenter C tosses the thumbtack 1000 times and gets 400 pin-ups. Experimenter C’s estimated precise probability of the tack landing pin-up is thus 0.4.

If the three experimenters are three analysts that could provide the decision maker with information, which analyst would the decision maker prefer to hire? Because Experimenter C’s estimate was based on more data, it is more precise than Experimenter B’s estimate. Experimenter A’s estimate was based on no data, so it does not seem reasonable to place much confidence in it. Nevertheless, the precise probability estimates of 0.5, 0.6, and 0.4 appear equally credible. By not expressing the imprecision in these estimates, one is arbitrarily eliminating it by assuming precision that has no justification in the available evidence. Advocates claim that this type of problem can be overcome by allowing analysts to state imprecise probabilities.
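One way advocates express such imprecision is with interval estimates whose width shrinks as data accumulates, in the spirit of Walley's imprecise Dirichlet model. The sketch below uses the interval [k/(n+s), (k+s)/(n+s)] for k pin-ups in n tosses; the hyperparameter value s = 2 is an assumption made here for illustration.

```python
# Sketch of interval estimates in the spirit of Walley's imprecise
# Dirichlet model: lower = k / (n + s), upper = (k + s) / (n + s),
# for k pin-ups observed in n tosses (s = 2 assumed).

def interval_estimate(k, n, s=2):
    """Lower and upper probabilities for the pin-up outcome."""
    return k / (n + s), (k + s) / (n + s)

# Experimenter A: no data -> vacuous interval [0, 1].
print(interval_estimate(0, 0))      # (0.0, 1.0)
# Experimenter B: 6 pin-ups in 10 tosses -> a wide interval around 0.6.
print(interval_estimate(6, 10))     # (0.5, 0.666...)
# Experimenter C: 400 pin-ups in 1000 tosses -> a narrow interval near 0.4.
print(interval_estimate(400, 1000))
```

Under this scheme the three experimenters no longer look equally credible: Experimenter A's interval is vacuous, Experimenter B's is wide, and Experimenter C's is narrow, reflecting the differing amounts of evidence.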

Alternative explanation without imprecise probabilities

However, imprecise probabilities are not necessary to resolve this issue: simple probability theory suffices. The posterior probability distributions for the parameter being estimated (here called the 'probability of the tack landing pin-up', although in practice it is a summary of complex physical properties of the tack and its environment and not a probability at all) are very different in the three cases, and characterize in a precise way the imprecision of the estimates. Let us call the parameter to be estimated p. Its value lies between 0 and 1.

  • In case A, the posterior distribution for p is uniform between 0 and 1. The mean of this distribution is 0.5, and its standard deviation is 0.289.
  • In case B, the posterior mean of p is 0.583 (0.6 is the most probable value, however) and its standard deviation is 0.137.
  • In case C, the posterior mean of p is 0.400 and its standard deviation is 0.015.

Thus the standard deviation in case C is roughly nine times smaller than that in case B. Clearly the result of experimenter C is preferable due to the much smaller uncertainty. Imprecise probabilities are thus seen to be unnecessary to explain this example.
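The posterior summaries above follow from standard Bayesian conjugate analysis: with a uniform prior, observing k pin-ups in n tosses gives p a Beta(k + 1, n − k + 1) posterior. A minimal sketch of the computation:

```python
from math import sqrt

# Beta posterior for p under a uniform prior, after observing
# k pin-ups in n tosses: p ~ Beta(a, b) with a = k + 1, b = n - k + 1.

def beta_posterior_stats(k, n):
    """Posterior mean and standard deviation of p (uniform prior)."""
    a, b = k + 1, n - k + 1
    mean = a / (a + b)
    std = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, std

print(beta_posterior_stats(0, 0))       # case A: (0.5, ~0.289)
print(beta_posterior_stats(6, 10))      # case B: (~0.583, ~0.137)
print(beta_posterior_stats(400, 1000))  # case C: (~0.400, ~0.015)
```

Note that the posterior mean in case B is (6 + 1)/(10 + 2) = 0.583 rather than the maximum-likelihood value 0.6, matching the figures quoted above.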

Bibliography

  • Walley, Peter (1991). Statistical Reasoning with Imprecise Probabilities. London; New York: Chapman and Hall. Monographs on Statistics and Applied Probability 42. ISBN 0412286602.
