Admissible decision rule


In classical (frequentist) decision theory, an admissible decision rule is a rule for making a decision such that no other rule is always "better" than it, in the precise sense of dominance defined below. Generally speaking, in most decision problems the set of admissible rules is large, even infinite, but as will be seen there are good reasons to favor admissible rules.


Definition

Define sets \Theta, \mathcal{X} and \mathcal{A}, where \Theta is the set of states of nature, \mathcal{X} the set of possible observations, and \mathcal{A} the set of actions that may be taken. A decision rule is a function \delta : \mathcal{X} \rightarrow \mathcal{A}, i.e., upon observing x \in \mathcal{X}, we choose to take action \delta(x).

In addition, we define a loss function L : \Theta \times \mathcal{A} \rightarrow \Re, where \Re is the set of real numbers, which measures the loss we incur by taking action a \in \mathcal{A} when the true state of nature is \theta \in \Theta. Usually we will take this action after observing data x \in \mathcal{X}, so that the loss will be L(\theta,\delta(x)).
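
For instance, in estimating a real-valued parameter one might take \Theta = \mathcal{A} = \Re and use the squared-error loss

L(\theta,a) = (\theta - a)^2,

under which the risk defined below is the estimator's mean squared error.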

It is possible to recast the theory in terms of a utility function, the negative of the loss. However, admissibility is usually defined in terms of a loss function, and we shall follow this convention.

Let x have cumulative distribution function F(x|\theta). Define the risk function as the expectation

R(\theta,\delta) = E^{\mathcal{X}}[L(\theta,\delta(x))].
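
As a simple illustration, suppose x = (x_1,\ldots,x_n) is a sample from a normal distribution with unknown mean \theta and known variance \sigma^2, and let \delta(x) = \bar{x} be the sample mean. Under the squared-error loss above, the risk is

R(\theta,\delta) = E^{\mathcal{X}}[(\theta - \bar{x})^2] = \sigma^2/n,

which in this particular case does not depend on \theta.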

A decision rule \delta^* dominates a decision rule \delta if and only if R(\theta,\delta^*) \le R(\theta,\delta) for all \theta, with the inequality strict for some \theta.
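
Continuing the illustration above, consider the shifted rule \delta'(x) = \bar{x} + c for a constant c \neq 0. Its risk is

R(\theta,\delta') = \sigma^2/n + c^2,

which exceeds R(\theta,\delta) = \sigma^2/n for every \theta, so the sample mean dominates \delta' and \delta' is inadmissible.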

A decision rule is admissible if and only if no other rule dominates it; otherwise it is inadmissible. An admissible rule should be preferred over an inadmissible rule, since for any inadmissible rule there is an admissible rule that performs at least as well for all states of nature and strictly better for some.

Bayes rules

Let \pi(\theta) be a probability distribution on the states of nature. From a Bayesian point of view, we would regard it as a prior distribution: the probability distribution we believe governs the states of nature before observing any data. For a frequentist, it is merely a function on \Theta with no such special interpretation. The Bayes risk of the decision rule \delta with respect to \pi(\theta) is the expectation

r(\pi,\delta) = E^\pi[R(\theta,\delta)].

If the Bayes risk is finite, we can minimize r(\pi,\delta) with respect to \delta to obtain \delta^\pi(x), a Bayes rule with respect to \pi(\theta). There may be more than one Bayes rule. If the Bayes risk is infinite, then no Bayes rule is defined.
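
For example, in the normal illustration above, take \pi(\theta) to be a normal prior with mean \mu and variance \tau^2. Under squared-error loss the Bayes risk is finite, and the Bayes rule is the posterior mean

\delta^\pi(x) = \frac{\tau^2}{\tau^2 + \sigma^2/n}\,\bar{x} + \frac{\sigma^2/n}{\tau^2 + \sigma^2/n}\,\mu,

which shrinks the sample mean toward the prior mean \mu.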

Admissible rules and Bayes rules

In the Bayesian approach to decision theory, the observed x is considered fixed. Whereas the frequentist approach averages over the possible samples x \in \mathcal{X}, the Bayesian averages over the states of nature \theta \in \Theta. Thus, for our observed x we are interested in computing the expected loss

\rho(\pi,\delta|x) = E^{\pi(\theta|x)}[L(\theta,\delta(x))],

where the expectation is taken over the posterior distribution of \theta given x (obtained from \pi(\theta) and F(x|\theta) via Bayes' theorem).

Since x is considered fixed and known, we can choose \delta(x) to minimize the expected loss for each x; by varying x over its range, we define a function \delta^\pi(x), which is known as a generalized Bayes rule. A generalized Bayes rule coincides with some Bayes rule (relative to \pi), provided that the Bayes risk is finite. Since more than one decision rule may minimize the expected loss, there may not be a unique generalized Bayes rule.
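
For instance, under squared-error loss the expected loss E^{\pi(\theta|x)}[(\theta - a)^2] is minimized over actions a by the posterior mean, so the generalized Bayes rule is

\delta^\pi(x) = E^{\pi(\theta|x)}[\theta]

whenever this posterior expectation exists.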

According to the complete class theorems, under mild conditions every admissible rule is a (generalized) Bayes rule (with respect to some, possibly improper, prior). Thus, in frequentist decision theory it is sufficient to consider only (generalized) Bayes rules.

While Bayes rules with respect to proper priors are virtually always admissible, generalized Bayes rules corresponding to improper priors need not yield admissible procedures. Stein's example is one such famous situation.
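
Concretely, in Stein's example one observes x \sim N(\theta, I_p) with p \ge 3 and estimates \theta under the loss L(\theta,a) = \|\theta - a\|^2. The rule \delta(x) = x is the generalized Bayes rule for the improper flat prior on \Re^p, yet it is inadmissible: it is dominated by the James–Stein estimator

\delta^{JS}(x) = \left(1 - \frac{p-2}{\|x\|^2}\right) x.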
