MAP estimator
From Wikipedia, the free encyclopedia
In statistics, maximum a posteriori (MAP) estimation obtains a point estimate of an unknown quantity by maximizing the likelihood function multiplied by a prior probability distribution.
For example, if the observations x_1, …, x_n have density f(x | θ), then the likelihood function is given by

L(\theta) = \prod_{j=1}^{n} f(x_j \mid \theta).
When θ is unknown, the method of maximum likelihood uses the value of θ that maximizes L(θ) as an estimate of θ:

\hat{\theta}_{\mathrm{ML}} = \arg\max_{\theta} L(\theta).

This is the maximum likelihood estimator (MLE) of θ.
In contrast, the MAP estimator of θ postulates the existence of an a priori distribution π(θ), and the MAP estimator is given by

\hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} L(\theta)\,\pi(\theta).
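The contrast between the two estimators can be sketched numerically. The following is a minimal illustration (not from the article): it assumes, for the sake of example, a Bernoulli likelihood (k heads in n flips) with a Beta(a, b) prior, for which the MAP estimate has the known closed form (k + a − 1)/(n + a + b − 2).

```python
# Illustrative sketch (assumed example): Bernoulli likelihood with a
# Beta(a, b) prior on the success probability theta.

def likelihood(theta, k, n):
    # L(theta) for k successes in n Bernoulli trials
    return theta**k * (1 - theta) ** (n - k)

def beta_prior(theta, a, b):
    # Unnormalized Beta(a, b) density; the normalizing constant
    # does not affect the location of the maximum.
    return theta ** (a - 1) * (1 - theta) ** (b - 1)

k, n = 7, 10
a, b = 2.0, 2.0

# Maximize over a fine grid of theta values in (0, 1).
grid = [i / 10000 for i in range(1, 10000)]
mle = max(grid, key=lambda t: likelihood(t, k, n))
map_est = max(grid, key=lambda t: likelihood(t, k, n) * beta_prior(t, a, b))

print(mle)      # close to k/n = 0.7
print(map_est)  # close to (k + a - 1)/(n + a + b - 2) = 2/3
```

The prior pulls the MAP estimate away from the raw frequency k/n toward the prior's mode; with a flat prior (a = b = 1) the two estimators coincide.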
Example
Suppose that we are given a sequence x_1, …, x_n of IID N(\mu, \sigma_v^2) random variables and that the a priori distribution of μ is N(\mu_0, \sigma_m^2). We wish to find the MAP estimate of μ.
The function to be maximized is then given by

\pi(\mu) L(\mu) = \frac{1}{\sqrt{2\pi}\,\sigma_m} \exp\!\left(-\frac{(\mu-\mu_0)^2}{2\sigma_m^2}\right) \prod_{j=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma_v} \exp\!\left(-\frac{(x_j-\mu)^2}{2\sigma_v^2}\right),
which is equivalent to minimizing in μ the following

\sum_{j=1}^{n} \frac{(x_j-\mu)^2}{\sigma_v^2} + \frac{(\mu-\mu_0)^2}{\sigma_m^2}.
Thus, we see that the MAP estimator for μ is given by

\hat{\mu}_{\mathrm{MAP}} = \frac{\sigma_m^2 \sum_{j=1}^{n} x_j + \sigma_v^2\,\mu_0}{\sigma_m^2 n + \sigma_v^2}.
Note that as \sigma_m \to \infty,

\hat{\mu}_{\mathrm{MAP}} \to \frac{1}{n} \sum_{j=1}^{n} x_j,

which is the maximum likelihood estimate of μ.
The case of \sigma_m \to \infty is called a non-informative prior and leads to an ill-defined a priori probability distribution.
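This example can be checked numerically. The following sketch (with illustrative synthetic data and parameter values) computes the Gaussian MAP estimate in closed form and shows that a diffuse prior, i.e. large σ_m², recovers the sample mean, while a tight prior pulls the estimate toward the prior mean μ_0.

```python
import random

# Sketch of the worked example: IID N(mu, sigma_v^2) data with a
# N(mu_0, sigma_m^2) prior on mu.  Closed-form MAP estimate:
#   (sigma_m^2 * sum(x) + sigma_v^2 * mu_0) / (sigma_m^2 * n + sigma_v^2)
def map_estimate(xs, sigma_v2, mu_0, sigma_m2):
    n = len(xs)
    return (sigma_m2 * sum(xs) + sigma_v2 * mu_0) / (sigma_m2 * n + sigma_v2)

random.seed(0)
xs = [random.gauss(5.0, 2.0) for _ in range(50)]  # synthetic data
mle = sum(xs) / len(xs)  # sample mean = maximum likelihood estimate

# A tight prior (small sigma_m^2) pulls the estimate toward mu_0 = 0;
# a diffuse prior (large sigma_m^2) recovers the ML estimate.
tight = map_estimate(xs, sigma_v2=4.0, mu_0=0.0, sigma_m2=0.01)
diffuse = map_estimate(xs, sigma_v2=4.0, mu_0=0.0, sigma_m2=1e9)

print(mle, tight, diffuse)
```

Note that the estimator is a weighted average of the sample mean and the prior mean, with weights set by the two variances, which is why the limiting behavior above holds.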