Mean field theory


A many-body system with interactions is generally very difficult to solve exactly, except for extremely simple cases (Gaussian field theory, the 1D Ising model). The great difficulty (e.g. when computing the partition function of the system) is the treatment of the combinatorics generated by the interaction terms in the Hamiltonian when summing over all states. The goal of mean field theory (MFT, also known as self-consistent field theory) is to resolve these combinatorial problems.

The main idea of MFT is to replace all interactions acting on any one body with an average or effective interaction. This reduces any many-body problem to an effective one-body problem. The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at relatively low cost.

In field theory, the Hamiltonian may be expanded in terms of the magnitude of fluctuations around the mean of the field. In this context, MFT can be viewed as the "zeroth-order" expansion of the Hamiltonian in fluctuations. Physically, this means an MFT system has no fluctuations, which is consistent with the idea of replacing all interactions with a "mean field". Quite often, in the formalism of fluctuations, MFT provides a convenient starting point for studying first- or second-order fluctuations.

In general, dimensionality plays a strong role in determining whether a mean-field approach will work for any particular problem. In MFT, many interactions are replaced by one effective interaction; it naturally follows that MFT will be more accurate when each field or particle interacts with many others in the original system. This is the case in high dimensionality, or when the Hamiltonian includes long-range forces. The Ginzburg criterion is the formal expression of how fluctuations render MFT a poor approximation, depending upon the number of spatial dimensions in the system of interest.

While MFT arose primarily in statistical mechanics, it has more recently been applied elsewhere, for example to inference in graphical models in artificial intelligence.

Formal approach

The formal basis for mean field theory is the Bogoliubov inequality. This inequality states that the free energy of a system with Hamiltonian

\mathcal{H}=\mathcal{H}_{0}+\Delta \mathcal{H}

has the following upper bound:

F \leq F_{0} \ \stackrel{\mathrm{def}}{=}\  \langle \mathcal{H} \rangle_{0} -T S_{0}

where the average is taken over the equilibrium ensemble of the reference system with Hamiltonian \mathcal{H}_{0}, and S_{0} is the entropy of that reference system. Consider the special case in which the reference Hamiltonian is that of a non-interacting system and can thus be written as

\mathcal{H}_{0}=\sum_{i=1}^{N}h_{i}\left( \xi_{i}\right)


where \xi_{i} denotes the degrees of freedom of the individual components of our statistical system (atoms, spins and so forth). The upper bound can be sharpened by minimizing the right-hand side of the inequality. The minimizing reference system is then the "best" approximation to the true system using non-correlated degrees of freedom, and is known as the mean field approximation.
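
Since the reference system is itself at equilibrium, its free energy is \langle \mathcal{H}_{0} \rangle_{0} -T S_{0}. Writing F_{\mathrm{ref}} for this quantity, the bound may equivalently be stated as

F \leq F_{\mathrm{ref}} + \langle \Delta \mathcal{H} \rangle_{0}

which is the form in which the Gibbs-Bogoliubov inequality is often quoted.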

In the most common case, the target Hamiltonian contains only pairwise interactions, i.e.,

\mathcal{H}=\sum_{(i,j)\in \mathcal{P}}V_{i,j}\left( \xi_{i},\xi_{j}\right)

where \mathcal{P} is the set of pairs that interact. In this case the minimizing procedure can be carried out formally. Define {\rm Tr}_{i}f(\xi_{i}) as the generalized sum of the observable f over the degrees of freedom of the single component (a sum for discrete variables, an integral for continuous ones). The approximating free energy is given by

F_{0} = {\rm Tr}_{1,2,..,N}\mathcal{H}(\xi_{1},\xi_{2},...,\xi_{N})P^{(N)}_{0}(\xi_{1},\xi_{2},...,\xi_{N})
+kT \,{\rm Tr}_{1,2,..,N}P^{(N)}_{0}(\xi_{1},\xi_{2},...,\xi_{N})\log P^{(N)}_{0}(\xi_{1},\xi_{2},...,\xi_{N})

where P^{(N)}_{0}(\xi_{1},\xi_{2},...,\xi_{N}) is the probability of finding the reference system in the state specified by the variables (\xi_{1},\xi_{2},...,\xi_{N}). This probability is given by the normalized Boltzmann weight

P^{(N)}_{0}(\xi_{1},\xi_{2},...,\xi_{N})=\frac{1}{Z^{(N)}_{0}}e^{-\beta \mathcal{H}_{0}(\xi_{1},\xi_{2},...,\xi_{N})}=\prod_{i=1}^{N}\frac{1}{Z_{0}}e^{-\beta h_{i}\left( \xi_{i}\right)}
\ \stackrel{\mathrm{def}}{=}\  \prod_{i=1}^{N} P^{(i)}_{0}(\xi_{i}).

Substituting the factorized distribution, the energy term reduces to a sum of pairwise averages and the entropy term to a sum of single-site entropies. Thus

F_{0}=\sum_{(i,j)\in\mathcal{P}} {\rm Tr}_{i,j}V_{i,j}\left( \xi_{i},\xi_{j}\right)P^{(i)}_{0}(\xi_{i})P^{(j)}_{0}(\xi_{j})+
kT \sum_{i=1}^{N} {\rm Tr}_{i} P^{(i)}_{0}(\xi_{i}) \log P^{(i)}_{0}(\xi_{i}).

In order to minimize, we take the derivative with respect to the single-degree-of-freedom probabilities P^{(i)}_{0}, using a Lagrange multiplier to ensure proper normalisation. The end result is the set of self-consistency equations

P^{(i)}_{0}(\xi_{i})=\frac{1}{Z_{0}}e^{-\beta h_{i}^{MF}(\xi_{i})}\qquad i=1,2,..,N

where the mean field is given by

h_{i}^{MF}(\xi_{i})=\sum_{\{j|(i,j)\in\mathcal{P}\}}{\rm Tr}_{j}V_{i,j}\left( \xi_{i},\xi_{j}\right)P^{(j)}_{0}(\xi_{j})
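
In practice, these self-consistency equations are usually solved by fixed-point iteration: start from a guess for the single-site distributions, evaluate the mean fields, update the distributions from the corresponding Boltzmann weights, and repeat until convergence. The following Python sketch illustrates this procedure for discrete degrees of freedom; the function name, the data layout and the small Ising-chain example are illustrative choices, not part of the derivation above.

import numpy as np

def mean_field_distributions(V, pairs, n_sites, n_states, beta,
                             max_iter=500, tol=1e-10):
    """Solve the mean-field self-consistency equations by fixed-point iteration.

    V[(i, j)] is an (n_states, n_states) array with entries V_ij(xi_i, xi_j)
    for every interacting pair (i, j) listed in `pairs`.
    Returns P with P[i, x] = P_0^(i)(xi_i = x).
    """
    P = np.full((n_sites, n_states), 1.0 / n_states)   # uniform initial guess
    for _ in range(max_iter):
        P_new = np.empty_like(P)
        for i in range(n_sites):
            # mean field h_i^MF: pair potentials averaged over the neighbours' distributions
            h_mf = np.zeros(n_states)
            for (a, b) in pairs:
                if a == i:
                    h_mf += V[(a, b)] @ P[b]       # sum over xi_b of V(xi_i, xi_b) P_b(xi_b)
                elif b == i:
                    h_mf += V[(a, b)].T @ P[a]     # sum over xi_a of V(xi_a, xi_i) P_a(xi_a)
            w = np.exp(-beta * h_mf)
            P_new[i] = w / w.sum()                 # normalized Boltzmann weight
        if np.max(np.abs(P_new - P)) < tol:
            return P_new
        P = P_new
    return P

# Example: three Ising spins (states -1, +1) coupled in a chain with J = 1.
states = np.array([-1.0, 1.0])
V_pair = -1.0 * np.outer(states, states)           # V(s, s') = -J s s'
pairs = [(0, 1), (1, 2)]
V = {p: V_pair for p in pairs}
P = mean_field_distributions(V, pairs, n_sites=3, n_states=2, beta=0.3)
print(P)                                           # magnetizations are P @ states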

Example

Consider the Ising model on a d-dimensional cubic lattice. The Hamiltonian is given by

 H = -J \Sigma^{'} \mathbf{s_i} \mathbf{s_{i'}}

where the Σ' indicates summation over nearest neighbors, and \mathbf{s_i} and \mathbf{s_{i'}} are neighboring Ising spins.

Let us transform our spin variable by introducing the fluctuation from its mean value \langle\mathbf{s}\rangle. We may rewrite the Hamiltonian:

 H = -J \Sigma^{'} (\mathbf{\Delta(s_i) + \langle s_i \rangle}) (\mathbf{\Delta(s_{i'})+ \langle s_{i'}\rangle})

where we define \mathbf{\Delta(s) \ \stackrel{\mathrm{def}}{=}\  s - \langle s\rangle}; this is the fluctuation of the spin. If we multiply out the right-hand side, we obtain one term that is entirely dependent on the mean values of the spins and independent of the spin configurations. This is the trivial term: it merely shifts the energy by a constant and does not affect the statistics of the system. The next term is the one involving the product of the mean value of the spin and the fluctuation value. Finally, the last term involves a product of two fluctuation values.
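
Written out, the product appearing in each summand is

 \mathbf{\langle s_i \rangle \langle s_{i'}\rangle + \langle s_i \rangle \Delta(s_{i'}) + \Delta(s_i) \langle s_{i'}\rangle + \Delta(s_i) \Delta(s_{i'})}

where the first term is the trivial constant, the two middle terms are the mean-fluctuation cross terms, and the last term is the product of two fluctuations.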

If the fluctuations are small, we may neglect this last term; as argued above, this is precisely the regime in which MFT is expected to work well. Dropping it, together with the trivial constant term, leaves

 H \approx -J \Sigma^{'} (\mathbf{ 2 \Delta(s_i) \langle s_{i'}\rangle })

Again, apart from the factor of 2, the summand can be re-expanded as

 \mathbf{(s_i - \langle s_i\rangle) \langle s_{i'}\rangle}

The only term that matters from the partition function's point of view is the first product; the second is again a constant.

 \mathbf{(s_i) \langle s_{i'}\rangle}

By symmetry arguments, the mean value of each spin is site-independent. We can replace  \langle\mathbf{s_{i'}}\rangle with \langle \mathbf{s}\rangle .

We are still stuck with a summation over neighboring pairs, yet the summand now involves only one site of each pair, so it can be regrouped as a sum over single sites. Roughly speaking, each site of the cubic lattice participates in 2d bonds (where d is the dimensionality). Since each bond connects two sites, giving every site a multiplicity of 2d counts each bond twice; this factor of 2, however, is exactly the one set aside from the expanded summand above. The net weight per site is therefore 2d, and the Hamiltonian becomes

 H = -2dJ\langle\mathbf{s}\rangle \Sigma_i \mathbf{s_i}

At this point, the Hamiltonian has been reduced to that of a single-body problem. The drawback is that the effective coupling constant now contains the mean value of the spin, \langle\mathbf{s}\rangle, which must be determined self-consistently.

Substituting this Hamiltonian into the partition function, and solving the effective one-body problem, we obtain

 Z = (2 \cosh(\frac{2dJ}{kT} \langle \mathbf{s}\rangle))^{N}

where N is the number of lattice sites. This is a closed-form expression for the partition function of the mean-field Hamiltonian; it still contains the unknown mean value \langle \mathbf{s}\rangle, which must be determined self-consistently (see below). From it we may obtain the free energy of the system and calculate critical exponents.
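
The mean value \langle \mathbf{s}\rangle is fixed by requiring that the magnetization computed from this partition function reproduce \langle \mathbf{s}\rangle itself, which gives the self-consistency condition

 \langle \mathbf{s}\rangle = \tanh(\frac{2dJ}{kT} \langle \mathbf{s}\rangle)

This condition has a nonzero solution only below the mean-field critical temperature kT_{c} = 2dJ. A minimal Python sketch of solving it by fixed-point iteration (in units with k = 1; the function and parameter names are illustrative):

import numpy as np

def mean_field_magnetization(T, J=1.0, d=3, m0=0.9, max_iter=10000, tol=1e-12):
    """Solve <s> = tanh(2*d*J*<s>/T) by fixed-point iteration (units with k = 1)."""
    m = m0
    for _ in range(max_iter):
        m_new = np.tanh(2.0 * d * J * m / T)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

J, d = 1.0, 3                      # 3-dimensional cubic lattice
Tc = 2.0 * d * J                   # mean-field critical temperature (k = 1)
for T in (0.5 * Tc, 0.9 * Tc, 1.1 * Tc):
    print(f"T/Tc = {T / Tc:.1f}:  <s> = {mean_field_magnetization(T, J, d):.4f}")
# Below Tc the iteration converges to a nonzero spontaneous magnetization;
# above Tc it converges to <s> = 0.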

MFT is known under a great many names and guises. Similar techniques include the Bragg-Williams approximation, models on the Bethe lattice, Landau theory, the Pierre-Weiss approximation, Flory-Huggins solution theory, and Scheutjens-Fleer theory.