Bayes estimator


In decision theory and estimation theory, a Bayes estimator is an estimator or decision rule that maximizes the posterior expected value of a utility function or minimizes the posterior expected value of a loss function. (See also prior probability.)

Specifically, suppose an unknown parameter θ is known to have a prior distribution Π. Let δ be an estimator of θ (based on some measurements x), and let R(θ,δ) be a risk function, such as the mean squared error. The Bayes risk of δ is defined as EΠ{R(θ,δ)}, where the expectation is taken over the prior distribution Π of θ. An estimator δ is said to be a Bayes estimator if it minimizes the Bayes risk among all estimators.
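As a rough numerical sketch (not from the article), the Bayes risk can be approximated by Monte Carlo: draw θ from the prior, draw data given θ, and average the loss. The Beta prior, binomial sampling model, and the two candidate estimators below are illustrative assumptions; under squared-error loss the posterior mean should show the smaller Bayes risk.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed setup: theta ~ Beta(a, b) prior, data X ~ Binomial(n, theta).
    a, b, n = 2.0, 2.0, 10
    n_sims = 100_000

    def posterior_mean(x):
        # Posterior mean of the conjugate Beta-Binomial model.
        return (a + x) / (a + b + n)

    def sample_mean(x):
        # Classical maximum-likelihood estimate x / n, for comparison.
        return x / n

    def bayes_risk(estimator):
        # Monte Carlo approximation of E_pi[ R(theta, delta) ] under squared-error loss.
        theta = rng.beta(a, b, size=n_sims)
        x = rng.binomial(n, theta)
        return np.mean((estimator(x) - theta) ** 2)

    print("Bayes risk of posterior mean:", bayes_risk(posterior_mean))
    print("Bayes risk of sample mean:   ", bayes_risk(sample_mean))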

If we take the mean squared error as the risk function, then it is not difficult to show that the Bayes estimator of the unknown parameter is simply the posterior mean:

\widehat{\theta}(x) = E[\theta \mid x] = \int \theta\, f(\theta \mid x)\, d\theta
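As a small sketch of this formula (the specific prior and data are assumptions, not part of the article), consider a conjugate Beta prior with Bernoulli trials: the posterior is again a Beta distribution, so the posterior mean is available in closed form and can be checked against a direct numerical evaluation of the integral above.

    import numpy as np
    from scipy.stats import beta

    # Assumed conjugate example: Beta(a, b) prior, x successes in n Bernoulli trials.
    a, b = 2.0, 2.0
    n, x = 10, 7

    # Closed-form posterior is Beta(a + x, b + n - x); its mean is the Bayes estimate.
    closed_form = (a + x) / (a + b + n)

    # Numerical version of E[theta | x] = integral of theta * f(theta | x) d(theta),
    # evaluated with a simple Riemann sum over a grid on [0, 1].
    grid = np.linspace(0.0, 1.0, 10_001)
    step = grid[1] - grid[0]
    posterior_pdf = beta.pdf(grid, a + x, b + n - x)
    numerical = np.sum(grid * posterior_pdf) * step

    print(closed_form, numerical)  # both approximately 0.6429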

See also