Optimal decision

From Wikipedia, the free encyclopedia

An optimal decision is a decision such that no other available decision option leads to a better outcome. It is an important concept in decision theory. In order to compare different decision outcomes, one commonly assigns a relative utility to each of them.

Sometimes, the equivalent problem of minimizing loss (negative utility) is considered instead, particularly in financial situations, where utility is defined as economic gain. This highlights the fact that "utility" is merely an arbitrary term for quantifying the desirability of a particular decision outcome and is not necessarily connected to "usefulness". For example, it may well be the optimal decision for someone to buy a sports car rather than a station wagon, if the outcome in terms of, e.g., personal image is more desirable even given the higher cost and lower hauling capacity of the sports car.

When the decision outcome is subject to uncertainty, an optimal decision maximizes the expected utility.

The problem of finding the optimal decision is a mathematical optimization problem. In practice, few people verify that their decisions are optimal; instead, they use more intuitive approaches to obtain decisions that are "good enough".

A more formal approach is typically warranted when the decision is important enough to justify the time it takes to analyze it, and too complex to solve with simpler intuitive approaches, e.g. when there is a large number of available decision options and a complex decision-outcome relationship.


Formal mathematical description

Each decision d in a set D of available decision options will lead to an outcome o = f(d). All possible outcomes form the set O. Assigning a utility U_O(o) to every outcome, we can define the utility of a particular decision d as

U_D(d) = U_O(f(d))

We can then define an optimal decision d_opt as one that maximizes U_D(d):

d_\mathrm{opt} = \arg\max_{d \in D} U_D(d)

Solving the problem can thus be divided into three steps:

  1. predicting the outcome o = f(d) for every decision d
  2. assigning a utility U_O(o) to every outcome o
  3. finding the decision d that maximizes U_D(d)
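The three steps above can be sketched in Python. The decision set, outcome function f, and utility U_O below are purely illustrative assumptions (echoing the sports-car example), not part of any standard formulation:

```python
# Hypothetical decision problem: choosing a vehicle.
decisions = ["sports car", "wagon", "truck"]  # the set D

def f(d):
    # Step 1: predict the outcome o = f(d) of each decision (illustrative data).
    return {"sports car": {"image": 9, "hauling": 2},
            "wagon":      {"image": 5, "hauling": 7},
            "truck":      {"image": 4, "hauling": 9}}[d]

def utility_O(o):
    # Step 2: assign a utility U_O(o) to each outcome (an arbitrary weighting).
    return 2 * o["image"] + o["hauling"]

def utility_D(d):
    # U_D(d) = U_O(f(d))
    return utility_O(f(d))

# Step 3: the optimal decision maximizes U_D over the set D.
d_opt = max(decisions, key=utility_D)
print(d_opt)  # -> sports car  (utility 20 vs. 17 and 17)
```

With these made-up weights, the "image"-heavy utility makes the sports car optimal; a utility that weighted hauling capacity more would change the answer, illustrating that optimality is always relative to the chosen utility function.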

Under uncertainty in outcome

When it is not possible to predict with certainty what the outcome of a particular decision will be, a probabilistic approach is necessary. In its most general form, it can be expressed as follows:

Given a decision d, we know the probability distribution of the possible outcomes, described by the conditional probability density p(o | d). We can then calculate the expected utility of decision d as

U_D(d) = \int p(o|d) \, U_O(o) \, do \ ,

where the integral is taken over the whole set O (DeGroot, p. 121).

An optimal decision d_opt is then, just as above, one that maximizes U_D(d):

d_\mathrm{opt} = \arg\max_{d \in D} U_D(d)
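For a discrete outcome set, the integral becomes a sum over outcomes. A minimal sketch, assuming hypothetical decisions, probabilities p(o|d), and utilities U_O (all invented for illustration):

```python
# Hypothetical discrete decision problem under uncertainty.
U_O = {"gain": 100, "loss": -80, "neutral": 0}  # utility of each outcome

p = {  # conditional probabilities p(o | d), one table per decision
    "invest": {"gain": 0.75, "loss": 0.25},
    "hold":   {"gain": 0.25, "neutral": 0.75},
}

def expected_utility(d):
    # U_D(d) = sum over outcomes o of p(o|d) * U_O(o)
    return sum(prob * U_O[o] for o, prob in p[d].items())

# The optimal decision maximizes expected utility.
d_opt = max(p, key=expected_utility)
print(d_opt, expected_utility(d_opt))  # -> invest 55.0
```

Here "invest" yields 0.75·100 + 0.25·(−80) = 55, beating "hold" at 0.25·100 = 25, so it is the optimal decision under these assumed numbers.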


Example

The Monty Hall problem: under the standard assumptions, the decision to switch doors maximizes the expected utility, since switching wins the car with probability 2/3, while staying wins with probability only 1/3.
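The win probabilities of the two available decisions (switch vs. stay) can be estimated by simulation; a sketch (the exact values are 2/3 and 1/3):

```python
import random

def monty_hall(switch, trials=100_000):
    """Estimate the win probability of the 'switch' or 'stay' decision."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's initial pick
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=True))   # close to 2/3
print(monty_hall(switch=False))  # close to 1/3
```

Switching wins exactly when the initial pick missed the car (probability 2/3), so it is the optimal decision when the utility of winning exceeds that of losing.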

References

  • DeGroot, Morris (1970). Optimal Statistical Decisions. New York: McGraw-Hill. ISBN 0-07-016242-5.
  • Berger, James O. (1980). Statistical Decision Theory and Bayesian Analysis. Second Edition. Springer Series in Statistics. ISBN 0-387-96098-8.