Hierarchical Bayes model

The hierarchical Bayes model is a method of Bayesian statistical inference. It is a framework for describing statistical models that can capture dependencies more realistically than non-hierarchical models.

Given data x and parameters \vartheta, a simple Bayesian analysis starts with a prior probability (prior) p(\vartheta) and a likelihood p(x|\vartheta) to compute a posterior probability p(\vartheta|x) \propto p(x|\vartheta)\,p(\vartheta).

Often the prior on \vartheta depends in turn on other parameters \varphi that do not appear in the likelihood. The prior p(\vartheta) must then be replaced by a conditional prior p(\vartheta|\varphi), and a prior p(\varphi) on the newly introduced parameters \varphi is required, resulting in the posterior probability

p(\vartheta,\varphi|x) \propto p(x|\vartheta)p(\vartheta|\varphi)p(\varphi).

This is the simplest example of a hierarchical Bayes model.

The process may be repeated; for example, the parameters \varphi may depend in turn on additional parameters \psi, which will require their own prior. Eventually the process must terminate, with priors that do not depend on any other unmentioned parameters.
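As an illustration of how such a two-level posterior can be handled numerically, the following sketch evaluates the unnormalized posterior p(\vartheta,\varphi|x) \propto p(x|\vartheta)p(\vartheta|\varphi)p(\varphi) on a grid for a toy one-dimensional case. All distributional choices (normal likelihood, normal prior, normal hyperprior) and numerical values are illustrative assumptions, not part of the text above.

import numpy as np
from scipy import stats

# Assumed toy model (illustrative only):
#   x      ~ N(theta, 1)        (likelihood)
#   theta  ~ N(phi, 1)          (prior on theta, depends on phi)
#   phi    ~ N(0, 10^2)         (hyperprior on phi)

x_obs = 2.5                                   # a single observed data point
theta_grid = np.linspace(-10, 10, 401)
phi_grid = np.linspace(-10, 10, 401)
theta, phi = np.meshgrid(theta_grid, phi_grid, indexing="ij")

log_post = (
    stats.norm.logpdf(x_obs, loc=theta, scale=1.0)   # p(x | theta)
    + stats.norm.logpdf(theta, loc=phi, scale=1.0)   # p(theta | phi)
    + stats.norm.logpdf(phi, loc=0.0, scale=10.0)    # p(phi)
)

# Normalize on the grid to obtain the joint posterior, then marginalize.
post = np.exp(log_post - log_post.max())
post /= post.sum()
post_theta = post.sum(axis=1)                 # marginal posterior of theta
print("posterior mean of theta:", np.sum(theta_grid * post_theta))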


Examples

Suppose we have measured quantities x_1,\dots,x_n, each with normally distributed errors of known standard deviation \sigma,


x_i \sim N(\vartheta_i, \sigma^2)

Suppose we are interested in estimating the \vartheta_i. One approach is maximum likelihood: since the observations are independent, the likelihood factorizes and the maximum likelihood estimate is simply


\hat{\vartheta}_i = x_i

However, if the quantities are related, so that for example the individual \vartheta_i may be thought of as having been drawn from a common underlying distribution, then this relationship destroys the independence and suggests a more complex model, e.g.,


x_i \sim N(\vartheta_i,\sigma^2),

\vartheta_i\sim N(\varphi, \tau^2)

with improper priors \varphi \sim \text{flat} on (-\infty,\infty) and \tau \sim \text{flat} on (0,\infty). When n \ge 3, this is an identified model (i.e., there exists a unique solution for the model's parameters), and the posterior distributions of the individual \vartheta_i will tend to move, or shrink, away from the maximum likelihood estimates towards their common mean. This shrinkage is typical behavior in hierarchical Bayes models.
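The shrinkage can be demonstrated by simulation. The following sketch is one possible Gibbs sampler for this example (the data values, number of groups, and number of iterations are illustrative assumptions); under the flat priors above, all full conditionals are standard distributions.

import numpy as np

rng = np.random.default_rng(0)

# Model: x_i ~ N(theta_i, sigma^2) with sigma known, theta_i ~ N(phi, tau^2),
# flat priors on phi and on tau in (0, inf).  Under the flat prior on tau,
# the full conditional of tau^2 is an inverse-gamma distribution, so every
# update below is a conjugate draw.

sigma = 1.0
x = np.array([2.8, 0.8, -0.3, 1.2, 0.7, 2.1, -0.9, 1.5])   # made-up data
n = len(x)

theta = x.copy()            # initialize at the maximum likelihood estimates
phi, tau2 = x.mean(), x.var()
theta_draws = []

for it in range(5000):
    # theta_i | x_i, phi, tau^2  (precision-weighted normal)
    prec = 1.0 / sigma**2 + 1.0 / tau2
    mean = (x / sigma**2 + phi / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1.0 / prec))

    # phi | theta, tau^2 ~ N(mean(theta), tau^2 / n)   (flat prior on phi)
    phi = rng.normal(theta.mean(), np.sqrt(tau2 / n))

    # tau^2 | theta, phi ~ Inv-Gamma((n - 1)/2, S/2)   (flat prior on tau)
    S = np.sum((theta - phi) ** 2)
    tau2 = 1.0 / rng.gamma((n - 1) / 2.0, 2.0 / S)

    if it >= 1000:                       # discard burn-in draws
        theta_draws.append(theta)

post_mean = np.mean(theta_draws, axis=0)
print("MLE:            ", x)
print("posterior mean: ", post_mean.round(2))   # shrunk towards the common mean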

Restrictions on priors

Some care is needed when choosing priors in a hierarchical model, particularly on scale variables at higher levels of the hierarchy such as the variable \tau in the example. The usual priors such as the Jeffreys prior often do not work, because the posterior distribution will be improper (not normalizable), and estimates made by minimizing the expected loss will be inadmissible.
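A sketch of why this happens in the example above (an informal argument, not a full derivation): integrating out the \vartheta_i and \varphi leaves a marginal likelihood p(x|\tau) that tends to a positive constant as \tau \to 0, because the data are always compatible with all the \vartheta_i being equal. With a Jeffreys-type prior p(\tau) \propto 1/\tau, the posterior near zero therefore behaves like

p(\tau|x) \propto p(x|\tau)\,\frac{1}{\tau} \approx \frac{c}{\tau}, \qquad \tau \to 0,

and \int_0^{\epsilon} \tau^{-1}\,d\tau diverges, so the posterior cannot be normalized. The flat prior on \tau used in the example avoids this particular failure when n \ge 3.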

Representation by directed acyclic graphs (DAGs)

A useful graphical tool for representing hierarchical Bayes models is the directed acyclic graph, or DAG. In this diagram, the likelihood function is represented as the root of the graph; each prior is represented as a separate node pointing to the node that depends on it. In a simple Bayesian model, the data x are at the root of the diagram, representing the likelihood p(x|\vartheta), and the variable \vartheta is placed in a node that points to the root, as in the following diagram:

\vartheta \rightarrow x

In the simplest hierarchical Bayes model, where \vartheta in turn depends on a new variable \varphi, a new node labelled \varphi is added, with an arrow pointing towards the node \vartheta, as in the following diagram. See also Bayesian networks.

\varphi \rightarrow \vartheta \rightarrow x
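The DAG also prescribes a simulation order: each node can be drawn once its parents have been drawn (ancestral sampling). A minimal sketch, with distributional choices that are assumptions for illustration rather than part of the article:

import numpy as np

rng = np.random.default_rng(1)

# Ancestral sampling follows the arrows of the DAG  phi -> theta -> x:
# each node is drawn only after all of its parents have been drawn.
# The distributions below are illustrative assumptions.

def sample_joint(n_groups=5):
    phi = rng.normal(0.0, 10.0)                   # phi has no parents: draw first
    theta = rng.normal(phi, 1.0, size=n_groups)   # theta depends on phi
    x = rng.normal(theta, 1.0)                    # x depends on theta: draw last
    return phi, theta, x

print(sample_joint())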
