Gibbs measure

In mathematics, the Gibbs measure, named after Josiah Willard Gibbs, is a probability measure frequently seen in many problems of probability theory and statistical mechanics. It is the measure associated with the Boltzmann distribution, and generalizes the notion of the canonical ensemble. Importantly, when the energy function can be written as a sum of parts, the Gibbs measure has the Markov property (a certain kind of statistical independence), thus leading to its widespread appearance in many problems outside of physics, such as Hopfield networks, Markov networks, and Markov logic networks. In addition, the Gibbs measure is the unique measure that maximizes the entropy for a given expected energy; thus, the Gibbs measure underlies maximum entropy methods and the algorithms derived therefrom.
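
A brief sketch of the last claim, for a finite state space (the general case requires more care): among probability distributions P with a prescribed expected energy \sum_x P(x)E(x) = \langle E \rangle, maximize the entropy -\sum_x P(x)\ln P(x) by introducing Lagrange multipliers \beta and \mu for the energy and normalization constraints,

\mathcal{L} = -\sum_x P(x)\ln P(x) - \beta\Big(\sum_x P(x)E(x) - \langle E\rangle\Big) - \mu\Big(\sum_x P(x) - 1\Big).

Setting \partial\mathcal{L}/\partial P(x) = 0 gives \ln P(x) = -\beta E(x) - \mu - 1, so that

P(x) \propto \exp(-\beta E(x)),

which is exactly the Gibbs form displayed below.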

The measure gives the probability of the system X being in state x (equivalently, of the random variable X having value x) as

P(X=x) = \frac{1}{Z(\beta)} \exp \left( - \beta E(x) \right).

Here, E(x) is a function from the space of states to the real numbers; in physics applications, E(x) is interpreted as the energy of the configuration x. The parameter \beta is free; in physics, it is the inverse temperature. The normalizing constant Z(\beta) = \sum_x \exp(-\beta E(x)) (a sum for a discrete state space, an integral for a continuous one) is the partition function.
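
As a concrete numerical illustration, here is a minimal Python sketch of this definition for a finite state space; the particular states, energies, and value of \beta are illustrative choices, not part of the definition:

    import math

    # Illustrative energies E(x) on a small finite state space.
    energies = {"ground": 0.0, "excited": 1.0, "high": 2.5}
    beta = 1.0  # inverse temperature, a free parameter

    # Partition function Z(beta) = sum_x exp(-beta * E(x)).
    Z = sum(math.exp(-beta * E) for E in energies.values())

    # Gibbs probabilities P(X = x) = exp(-beta * E(x)) / Z(beta).
    probs = {x: math.exp(-beta * E) / Z for x, E in energies.items()}
    print(probs)  # sums to 1; lower-energy states are more probable

Increasing \beta (lowering the temperature) concentrates the measure on the low-energy states, while \beta \to 0 gives the uniform distribution.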

Markov property

An example of the Markov property of the Gibbs measure can be seen in the Ising model. Here, the probability of a given spin \sigma_k being in state s is, in principle, dependent on all other spins in the model; thus one writes

P(\sigma_k = s|\sigma_j,\, j\ne k)

for this probability. However, the interactions in the Ising model are nearest-neighbor interactions, and thus, one actually has

P(\sigma_k = s|\sigma_j,\, j\ne k) = P(\sigma_k = s|\sigma_j,\, j\in N_k)

where N_k is the set of nearest neighbors of site k. That is, the probability at site k depends only on the nearest neighbors. This last equation is in the form of a Markov-type statistical independence. Measures with this property are sometimes called Markov random fields. More strongly, the converse is also true: any strictly positive probability distribution having the Markov property can be represented as a Gibbs measure for an appropriate energy function;[1] this is the Hammersley–Clifford theorem.
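
This conditional independence can be checked numerically. A minimal Python sketch, assuming a short Ising chain with free boundary conditions (the chain length, coupling J, and \beta are illustrative choices): fixing the two neighbors of a site and flipping a distant spin leaves the conditional probability unchanged.

    import itertools
    import math

    J, beta, n = 1.0, 0.7, 4  # illustrative coupling, inverse temperature, length

    def energy(s):
        # Nearest-neighbor Ising energy on a chain with free boundaries.
        return -J * sum(s[i] * s[i + 1] for i in range(len(s) - 1))

    # Boltzmann weights exp(-beta * E) for every configuration of the chain.
    weights = {s: math.exp(-beta * energy(s))
               for s in itertools.product([-1, 1], repeat=n)}

    def cond_prob(k, value, rest):
        # P(sigma_k = value | all other spins fixed to the tuple `rest`).
        num = den = 0.0
        for v in (-1, 1):
            s = rest[:k] + (v,) + rest[k:]
            den += weights[s]
            if v == value:
                num += weights[s]
        return num / den

    # Fix the neighbors of site k = 1 (sites 0 and 2) and flip the distant
    # spin at site 3: the printed conditional probability does not change.
    for far in (-1, 1):
        print(far, cond_prob(1, +1, (1, 1, far)))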

Gibbs measure on lattices

What follows is a formal definition for the special case of a random field on a countable lattice. The idea of a Gibbs measure is, however, much more general than this.

The definition of a Gibbs random field on a lattice requires some terminology:

The lattice: a countable set \mathbb{L}.

The single-spin space: a probability space (S, \mathcal{S}, \lambda).

The configuration space: (\Omega, \mathcal{F}), where \Omega = S^{\mathbb{L}} and \mathcal{F} = \mathcal{S}^{\mathbb{L}}.

Given a configuration \omega\in\Omega and a subset \Lambda\subset\mathbb{L}, the restriction of \omega to \Lambda is \omega_\Lambda = (\omega(t))_{t\in\Lambda}. If \Lambda_1\cap\Lambda_2 = \emptyset and \Lambda_1\cup\Lambda_2 = \mathbb{L}, then \omega_{\Lambda_1}\bar\omega_{\Lambda_2} denotes the configuration whose restrictions to \Lambda_1 and \Lambda_2 are \omega_{\Lambda_1} and \bar\omega_{\Lambda_2}, respectively.

The set \mathcal{L} of all finite subsets of \mathbb{L}.

A potential: a family \Phi = (\Phi_A)_{A\in\mathcal{L}} of functions \Phi_A : \Omega\to\mathbb{R} such that each \Phi_A depends only on the coordinates in A, and such that for all \Lambda\in\mathcal{L} and \omega\in\Omega the series
H_\Lambda^\Phi(\omega) = \sum_{A\in\mathcal{L},\, A\cap\Lambda\neq\emptyset} \Phi_A(\omega)
exists; H_\Lambda^\Phi is called the Hamiltonian in \Lambda.

The Hamiltonian in \Lambda\in\mathcal{L} with boundary conditions \bar\omega:
H_\Lambda^\Phi(\omega | \bar\omega) = H_\Lambda^\Phi(\omega_\Lambda\bar\omega_{\Lambda^c}),
where \Lambda^c = \mathbb{L}\setminus\Lambda.

The partition function in \Lambda with boundary conditions \bar\omega and inverse temperature \beta > 0:
Z_\Lambda^\Phi(\bar\omega) = \int \lambda^\Lambda(\mathrm{d}\omega) \exp(-\beta H_\Lambda^\Phi(\omega | \bar\omega)),
where \lambda^\Lambda = \bigotimes_{t\in\Lambda}\lambda is the product measure on S^\Lambda.

A potential \Phi is \lambda-admissible if Z_\Lambda^\Phi(\bar\omega) is finite for all \Lambda\in\mathcal{L}, \bar\omega\in\Omega and \beta>0.

A probability measure \mu on (\Omega,\mathcal{F}) is a Gibbs measure for a \lambda-admissible potential \Phi if it satisfies the Dobrushin–Lanford–Ruelle (DLR) equations

\int \mu(\mathrm{d}\bar\omega)Z_\Lambda^\Phi(\bar\omega)^{-1} \int\lambda^\Lambda(\mathrm{d}\omega) \exp(-\beta H_\Lambda^\Phi(\omega | \bar\omega)) 1_A(\omega_\Lambda\bar\omega_{\Lambda^c}) = \mu(A),
for all A\in\mathcal{F} and \Lambda\in\mathcal{L}.
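
On an infinite lattice the DLR equations cannot be checked by brute force, but on a finite lattice (where the Gibbs measure is unique and is just the Boltzmann distribution, and where the integrals against \lambda^\Lambda reduce to sums when \lambda is counting measure) they can be verified directly. A minimal Python sketch, assuming a 4-site Ising chain with nearest-neighbor coupling J and field h, the window \Lambda = {1, 2}, and an arbitrary illustrative event A; all parameter values are illustrative choices:

    import itertools
    import math

    J, h, beta, n = 1.0, 0.3, 0.6, 4  # illustrative parameters
    LAMBDA = (1, 2)                   # the window; sites 0 and 3 are its exterior

    def H(s):
        # Full Hamiltonian: nearest-neighbor couplings plus a field at every site.
        return -J * sum(s[i] * s[i + 1] for i in range(len(s) - 1)) - h * sum(s)

    def H_window(s):
        # H_Lambda: the potential terms meeting Lambda = {1, 2} -- the pairs
        # {0,1}, {1,2}, {2,3} and the single-site field terms at sites 1 and 2.
        return -J * (s[0]*s[1] + s[1]*s[2] + s[2]*s[3]) - h * (s[1] + s[2])

    states = list(itertools.product([-1, 1], repeat=n))
    Z_full = sum(math.exp(-beta * H(s)) for s in states)
    mu = {s: math.exp(-beta * H(s)) / Z_full for s in states}  # the Gibbs measure

    def glue(inner, outer):
        # The configuration omega_Lambda omega-bar_{Lambda^c}.
        s = list(outer)
        for site, v in zip(LAMBDA, inner):
            s[site] = v
        return tuple(s)

    inners = list(itertools.product([-1, 1], repeat=len(LAMBDA)))

    def dlr_lhs(event):
        # Left-hand side of the DLR equation: integrate the local Gibbs
        # kernel with boundary condition omega-bar against mu.
        total = 0.0
        for outer in states:
            Z = sum(math.exp(-beta * H_window(glue(i, outer))) for i in inners)
            for i in inners:
                s = glue(i, outer)
                total += mu[outer] * math.exp(-beta * H_window(s)) / Z * event(s)
        return total

    event = lambda s: 1.0 if (s[1] == 1 and s[3] == -1) else 0.0  # an event A
    print(dlr_lhs(event), sum(mu[s] * event(s) for s in states))  # sides agree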

An example

To help understand the above definitions, here are the corresponding quantities in the important example of the Ising model with nearest-neighbour interactions (coupling constant J) and a magnetic field (h), on \mathbb{Z}^d:

\Phi_A(\omega) = \begin{cases}
-J\,\omega(t_1)\omega(t_2) & \mathrm{if\ } A=\{t_1,t_2\} \mathrm{\ with\ } \|t_2-t_1\|_1 = 1 \\
-h\,\omega(t) & \mathrm{if\ } A=\{t\}\\
0 & \mathrm{otherwise}
\end{cases}
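
As a sketch of how this potential assembles the Hamiltonian H_\Lambda^\Phi(\omega) = \sum_{A\cap\Lambda\neq\emptyset}\Phi_A(\omega), here is an illustrative Python fragment for d = 2. The 3x3 patch, the particular configuration, and the values of J and h are assumptions made for the example; sites outside the patch are simply omitted, whereas in the full definition they would be supplied by boundary conditions.

    J, h = 1.0, 0.2  # illustrative coupling and field

    def phi(A, omega):
        # The potential Phi_A of the Ising model with a field on Z^2.
        A = sorted(A)
        if len(A) == 1:
            return -h * omega[A[0]]
        if len(A) == 2:
            t1, t2 = A
            if sum(abs(a - b) for a, b in zip(t1, t2)) == 1:  # ||t2 - t1||_1 = 1
                return -J * omega[t1] * omega[t2]
        return 0.0

    def hamiltonian(Lam, omega):
        # H_Lambda = sum of Phi_A over all A meeting Lambda.  Only singletons
        # and nearest-neighbor pairs contribute, so it suffices to enumerate
        # those sets rather than all finite subsets of Z^2.
        H = 0.0
        seen = set()
        for t in Lam:
            H += phi((t,), omega)
            x, y = t
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                pair = frozenset((t, nb))
                if nb in omega and pair not in seen:  # skip sites off the patch
                    seen.add(pair)
                    H += phi(tuple(pair), omega)
        return H

    # A 3x3 patch of Z^2 with an illustrative alternating +/-1 configuration.
    omega = {(x, y): (1 if (x + y) % 2 == 0 else -1)
             for x in range(3) for y in range(3)}
    print(hamiltonian([(1, 1)], omega))  # 4 frustrated bonds minus the field: 3.8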

References

1. Ross Kindermann and J. Laurie Snell, Markov Random Fields and Their Applications (1980), American Mathematical Society, ISBN 0-8218-5001-6.