Hierarchical Bayes model
The hierarchical Bayes method is one of the most important topics in modern Bayesian analysis. It is a powerful tool for expressing rich statistical models that more fully reflect the actual problem at hand than a simpler model could.
Given data $x$ and parameters $\theta$, a simple Bayesian analysis starts with a prior probability (prior) $p(\theta)$ and likelihood $p(x \mid \theta)$ to compute a posterior probability $p(\theta \mid x) \propto p(x \mid \theta)\, p(\theta)$.
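As a minimal numerical sketch of this update (the Gaussian model, the grid, and all variable names below are illustrative assumptions, not part of the article), the posterior can be computed on a grid by multiplying prior and likelihood pointwise and normalizing:

```python
import numpy as np

# Illustrative setup (assumed): one observation x drawn from N(theta, 1),
# with a N(0, 2^2) prior on theta.
x = 1.3
theta_grid = np.linspace(-10.0, 10.0, 2001)

prior = np.exp(-theta_grid**2 / (2 * 2.0**2))      # p(theta), unnormalized
likelihood = np.exp(-(x - theta_grid)**2 / 2)      # p(x | theta)

posterior = prior * likelihood                     # Bayes' rule, up to a constant
posterior /= np.trapz(posterior, theta_grid)       # normalize numerically

print("posterior mean:", np.trapz(theta_grid * posterior, theta_grid))
```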
Often, the prior on $\theta$ depends in turn on other parameters $\varphi$ that are not mentioned in the likelihood. This means that the prior $p(\theta)$ must be replaced by a prior $p(\theta \mid \varphi)$. But then a prior $p(\varphi)$ on the newly introduced parameters $\varphi$ is required, resulting in a posterior probability

$$p(\theta, \varphi \mid x) \propto p(x \mid \theta)\, p(\theta \mid \varphi)\, p(\varphi).$$
This is the simplest example of a hierarchical Bayes model.
The process may be repeated; for example, the parameters $\varphi$ may depend in turn on additional parameters $\psi$, which will require their own prior. Eventually the process must terminate, with priors that do not depend on any other unmentioned parameters.
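Spelling out one more level of this recursion makes the pattern explicit (a sketch; the notation simply extends the two-level posterior above):

```latex
% Three-level hierarchy: psi -> phi -> theta -> x.
% The joint posterior factorizes along the chain of conditional priors:
p(\theta, \varphi, \psi \mid x)
  \;\propto\; p(x \mid \theta)\, p(\theta \mid \varphi)\, p(\varphi \mid \psi)\, p(\psi)
```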
Examples
Suppose we have measured $n$ quantities $x_i$, where $i = 1, \dots, n$, and the observed data have been measured with normally distributed errors of known standard deviation $\sigma$, e.g.,

$$x_i \sim N(\theta_i, \sigma^2).$$

Suppose we are interested in estimating the $\theta_i$. A naive approach would be to estimate the $\theta_i$ using maximum likelihood; since the observations are independent, the likelihood factorizes and the maximum likelihood estimate is simply

$$\hat{\theta}_i = x_i.$$
However, if the quantities are related, so that for example we may think that the individual $\theta_i$ have themselves been drawn from an underlying distribution, then this relationship destroys the independence and suggests a more complex model, e.g.,

$$x_i \sim N(\theta_i, \sigma^2),$$
$$\theta_i \sim N(\mu, \tau^2),$$

with improper priors $\mu \sim \text{flat}$, $\tau \sim \text{flat} \in (0, \infty)$. When $n \ge 3$, this is an identified model, and the posterior distributions of the individual $\theta_i$ will tend to move, or shrink, away from the maximum likelihood estimates towards their common mean. This shrinkage is a typical behavior in hierarchical Bayes models.
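A minimal sketch of the shrinkage effect (the data values, and the choice to fix $\mu$ and $\tau$ rather than integrate over their posterior, are illustrative assumptions): conditional on the hyperparameters, the posterior for each $\theta_i$ is normal with a precision-weighted mean, which pulls the maximum likelihood estimates toward the common mean.

```python
import numpy as np

# Illustrative data (assumed): five noisy measurements with known sigma.
x = np.array([2.8, 0.8, -0.3, 1.9, 0.7])
sigma = 1.0

# Conditional on mu and tau (fixed here for illustration; a full analysis
# would average over their posterior), theta_i | x, mu, tau is normal with
# a precision-weighted mean between x_i and mu.
mu, tau = x.mean(), 0.8
post_mean = (x / sigma**2 + mu / tau**2) / (1 / sigma**2 + 1 / tau**2)

print("MLE:           ", x)          # maximum likelihood: theta_hat_i = x_i
print("posterior mean:", post_mean)  # pulled toward the common mean mu
```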
Restrictions on Priors
Some care needs to be taken when choosing priors in a hierarchical model, particularly on scale variables at higher levels of the hierarchy such as the variable $\tau$ in the example above. The usual priors such as the Jeffreys prior often do not work, because the posterior distribution will be improper and not normalizable, and estimates made by minimizing the expected loss will be inadmissible.
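As a sketch of why this happens in the example above (following the standard argument for the hierarchical normal model; see the Gelman et al. reference for details): with the Jeffreys-style prior $p(\tau) \propto 1/\tau$, the likelihood integrated over $\theta$ and $\mu$ tends to a positive constant as $\tau \to 0$, so the marginal posterior is not integrable at the origin.

```latex
% With p(tau) proportional to 1/tau, write L(tau) for the likelihood with
% theta and mu integrated out. Since L(tau) -> c > 0 as tau -> 0,
p(\tau \mid x) \;\propto\; \frac{L(\tau)}{\tau},
\qquad
\int_0^{\epsilon} \frac{L(\tau)}{\tau}\, d\tau = \infty,
% so the posterior has infinite mass near tau = 0 and cannot be normalized.
```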
Representation by Directed Acyclic Graphs (DAGs)
A useful graphical tool for representing hierarchical Bayes models is the directed acyclic graph, or DAG. In this diagram, the likelihood function is represented as the root of the graph; each prior is represented as a separate node pointing to the node that depends on it. In a simple Bayesian model, the data $x$ are at the root of the diagram, representing the likelihood $p(x \mid \theta)$, and the variable $\theta$ is placed in a node that points to the root, as in the following diagram:
    θ ──► x
In the simplest hierarchical Bayes model, where $\theta$ in turn depends on a new variable $\varphi$, a new node labelled $\varphi$ is added, with an arrow pointing towards the node $\theta$. See also Bayesian networks.
    φ ──► θ ──► x
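A small sketch of this structure in code (entirely illustrative; the article does not prescribe any implementation): the two-level DAG above can be represented as a parent map, and a joint sample drawn by visiting nodes in topological order, i.e., ancestral sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# DAG for the two-level model phi -> theta -> x, as a parent map.
parents = {"phi": [], "theta": ["phi"], "x": ["theta"]}

# Conditional samplers, one per node (the Gaussian choices are assumptions).
samplers = {
    "phi":   lambda v: rng.normal(0.0, 2.0),        # p(phi)
    "theta": lambda v: rng.normal(v["phi"], 1.0),   # p(theta | phi)
    "x":     lambda v: rng.normal(v["theta"], 0.5), # p(x | theta)
}

# Ancestral sampling: visit nodes so that parents are sampled first.
values = {}
for node in ["phi", "theta", "x"]:  # a topological order of the DAG
    values[node] = samplers[node](values)

print(values)
```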
References
- Gelman, A., et al. (2004). Bayesian Data Analysis, Second Edition. Boca Raton: Chapman & Hall/CRC. Chapter 5.