Thermal fluctuations

Figure: Atomic diffusion on the surface of a crystal. The shaking of the atoms is an example of thermal fluctuations, and these fluctuations also provide the energy an atom needs to occasionally hop from one lattice site to a neighboring one. (For simplicity, the thermal fluctuations of the blue atoms are not shown.)

In statistical mechanics, thermal fluctuations are random deviations of a system from its average state that occur even in a system at equilibrium.[1] Thermal fluctuations become larger and more frequent as the temperature increases, and they diminish as the temperature approaches absolute zero.

Thermal fluctuations are a basic consequence of the definition of temperature: A system at nonzero temperature does not stay in its equilibrium microscopic state, but instead randomly samples all possible states, with probabilities given by the Boltzmann distribution.
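
A minimal numerical sketch of this idea, assuming a hypothetical system with just four discrete energy levels (the level energies and temperatures below are arbitrary made-up values):

```python
import numpy as np

# Hypothetical discrete energy levels (arbitrary energy units).
energies = np.array([0.0, 0.5, 1.0, 2.0])

def boltzmann_probabilities(energies, kT):
    """Probability of each microstate according to the Boltzmann distribution."""
    weights = np.exp(-energies / kT)   # unnormalized Boltzmann factors
    return weights / weights.sum()     # normalize by the partition function

for kT in (0.1, 1.0, 10.0):
    p = boltzmann_probabilities(energies, kT)
    print(f"kT = {kT:4.1f}  ->  probabilities = {np.round(p, 3)}")
# As kT -> 0 the system is frozen into the ground state (fluctuations vanish);
# as kT grows, all states become nearly equally likely (fluctuations grow).
```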

Thermal fluctuations generally affect all the degrees of freedom of a system: There can be random vibrations (phonons), random rotations (rotons), random electronic excitations, and so forth.

Thermodynamic variables, such as pressure, temperature, or entropy, likewise undergo thermal fluctuations. For example, a system has an equilibrium pressure, but its true pressure fluctuates to some extent about the equilibrium.

Only the 'control variables' of statistical ensembles (such as N, V and E in the microcanonical ensemble) do not fluctuate.

Thermal fluctuations are a source of noise in many systems. The random forces that give rise to thermal fluctuations are a source of both diffusion and dissipation (including damping and viscosity). The competing effects of random drift and resistance to drift are related by the fluctuation-dissipation theorem. Thermal fluctuations play a major role in phase transitions and chemical kinetics.
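
As an illustration of this connection, here is a minimal overdamped Langevin sketch in which the strength of the random forcing is fixed by the friction through the Einstein relation D = k_B T / \gamma, one form of the fluctuation-dissipation theorem (the temperature, friction coefficient, step size and particle number below are arbitrary made-up values):

```python
import numpy as np

rng = np.random.default_rng(0)

kT, gamma = 1.0, 2.0          # arbitrary temperature (energy units) and friction
D = kT / gamma                # Einstein relation: diffusion fixed by dissipation
dt, n_steps, n_particles = 1e-3, 20000, 2000

# Overdamped Langevin dynamics for free particles:
# dx = sqrt(2 D dt) * xi,  with xi ~ N(0, 1) the thermal noise
x = np.zeros(n_particles)
for _ in range(n_steps):
    x += np.sqrt(2 * D * dt) * rng.standard_normal(n_particles)

t = n_steps * dt
print("measured <x^2> =", x.var())
print("expected 2 D t =", 2 * D * t)   # diffusive spreading set by kT / gamma
```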

Central limit theorem for thermal fluctuations

The volume of phase space \mathcal{V} occupied by a system of  2m degrees of freedom is the product of the configuration-space volume  V and the momentum-space volume. Since the energy is a quadratic form of the momenta for a non-relativistic system, the radius of momentum space is proportional to \sqrt{E}, so that the volume of the corresponding hypersphere varies as \sqrt{E}^{\,2m}=E^m, giving a phase volume of

 \mathcal{V}=\frac{(C\cdot E)^m}{\Gamma(m+1)},

where  C is a constant depending upon the specific properties of the system and \Gamma is the gamma function. When this hypersphere has a very high dimensionality  2m, which is the usual case in thermodynamics, essentially all of the volume lies near the surface

 \Omega(E)=\frac{\partial\mathcal{V}}{\partial E}=\frac{C^m\cdot E^{m-1}}{\Gamma(m)},

where we used the recursion formula m\Gamma(m)=\Gamma(m+1).
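
A quick symbolic check of this differentiation, as a sketch in sympy with C treated as a positive constant:

```python
import sympy as sp

E, C, m = sp.symbols('E C m', positive=True)

phase_volume = (C * E)**m / sp.gamma(m + 1)   # V(E) = (C E)^m / Gamma(m+1)
surface = sp.diff(phase_volume, E)            # Omega(E) = dV/dE
expected = C**m * E**(m - 1) / sp.gamma(m)    # C^m E^(m-1) / Gamma(m)

# The ratio reduces to 1 via the recursion m*Gamma(m) = Gamma(m+1).
print(sp.gammasimp(sp.powsimp(surface / expected, force=True)))  # 1
```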

The quantity \Omega(E) straddles two worlds: (i) the macroscopic one, in which it is considered a function of the energy and of the other extensive variables, such as the volume, that were held constant in the differentiation of the phase volume, and (ii) the microscopic world, where it represents the number of complexions that are compatible with a given macroscopic state. It is this quantity that Planck referred to as a 'thermodynamic' probability. It differs from a classical probability inasmuch as it cannot be normalized; that is, its integral over all energies diverges, although it diverges only as a power of the energy and not faster. Since its integral over all energies is infinite, we consider instead its Laplace transform

 \mathcal{Z}(\beta)=\int_0^{\infty}e^{-\beta E}\Omega(E)\,dE,

which can be given a physical interpretation. The exponentially decreasing factor, in which \beta is a positive parameter, overpowers the rapidly increasing surface area, so that an enormously sharp peak develops at a certain energy E^{\star}. Most of the contribution to the integral comes from an immediate neighborhood of this value of the energy. This enables the definition of a proper probability density according to

 f(E;\beta)=\frac{e^{-\beta E}}{\mathcal{Z}(\beta)}\Omega(E),

whose integral over all energies is unity by virtue of the definition of \mathcal{Z}(\beta), which is referred to as the partition function, or generating function. The latter name reflects the fact that the derivatives of its logarithm generate the mean and the central moments, namely,

 \langle E\rangle =-\frac{\partial\ln\mathcal{Z}}{\partial\beta}, \qquad \ \langle(E-\langle E\rangle)^2\rangle=\langle(\Delta E)^2\rangle=\frac{\partial^2\ln\mathcal{Z}}{\partial\beta^2},

and so on, where the first term is the mean energy and the second one is the dispersion in energy.
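
A numerical sketch of these relations, assuming for concreteness the structure function \Omega(E)=E^{m-1}/\Gamma(m) (i.e. C=1) and arbitrary made-up values of m and \beta; the derivatives of \ln\mathcal{Z} are approximated by finite differences and compared with direct integration and with the closed forms m/\beta and m/\beta^2:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

m, beta = 5.0, 2.0   # arbitrary made-up parameters

def omega(E):
    # Structure function Omega(E) = E^(m-1) / Gamma(m), taking C = 1
    return np.exp((m - 1) * np.log(E) - gammaln(m))

def Z(b):
    # Partition function as the Laplace transform of Omega
    return quad(lambda E: np.exp(-b * E) * omega(E), 0, np.inf)[0]

# Mean and dispersion by direct integration of the canonical density f(E; beta)
mean_direct = quad(lambda E: E * np.exp(-beta * E) * omega(E), 0, np.inf)[0] / Z(beta)
second = quad(lambda E: E**2 * np.exp(-beta * E) * omega(E), 0, np.inf)[0] / Z(beta)
var_direct = second - mean_direct**2

# The same quantities from derivatives of ln Z (central finite differences in beta)
h = 1e-3
lnZ = lambda b: np.log(Z(b))
mean_lnZ = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)
var_lnZ = (lnZ(beta + h) - 2 * lnZ(beta) + lnZ(beta - h)) / h**2

print(mean_direct, mean_lnZ, m / beta)     # all close to 2.5
print(var_direct, var_lnZ, m / beta**2)    # all close to 1.25
```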

The fact that \Omega(E) increases no faster than a power of the energy ensures that these moments will be finite.[2] Therefore, we can expand the factor  e^{-\beta E}\Omega(E) about the mean value \langle E\rangle, which coincides with  E^{\star} for Gaussian fluctuations (i.e. the average and most probable values coincide), and retaining the lowest-order terms results in

f(E;\beta)=\frac{e^{-\beta E}}{\mathcal{Z}(\beta)}\Omega(E)\approx\frac{\exp\{-(E-\langle E\rangle)^2/2\langle(\Delta E)^2\rangle\}}{\sqrt{2\pi\langle(\Delta E)^2\rangle}}.

This is the Gaussian, or normal, distribution, which is defined by its first two moments. In general, one would need all the moments to specify the probability density, f(E;\beta), which is referred to as the canonical, or posterior, density in contrast to the prior density \Omega, which is referred to as the 'structure' function.[2] This is the central limit theorem as it applies to thermodynamic systems.[3]
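
A brief sketch of this convergence, using the gamma-density form of the canonical distribution derived below, with \beta=1 and two made-up values of m; the Gaussian built from the first two moments matches the exact density better as the number of degrees of freedom grows:

```python
import numpy as np
from scipy.stats import gamma, norm

beta = 1.0
for m in (5, 500):
    mean, var = m / beta, m / beta**2            # <E> and <(Delta E)^2>
    E = np.linspace(mean - 4 * np.sqrt(var), mean + 4 * np.sqrt(var), 9)
    exact = gamma.pdf(E, a=m, scale=1 / beta)    # beta (beta E)^(m-1) e^(-beta E) / Gamma(m)
    gauss = norm.pdf(E, loc=mean, scale=np.sqrt(var))
    print(f"m = {m}: max |exact - gaussian| / max(exact) = "
          f"{np.abs(exact - gauss).max() / exact.max():.3f}")
# The relative discrepancy shrinks roughly like 1/sqrt(m).
```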

If the phase volume increases as  E^m, its Laplace transform, the partition function, will vary as \beta^{-m}. Rearranging the normal distribution so that it becomes an expression for the structure function and evaluating it at  E=\langle E\rangle gives

\Omega(\langle E\rangle)=\frac{e^{\beta(\langle E\rangle)\langle E\rangle}\mathcal{Z}(\beta(\langle E\rangle))}{\sqrt{2\pi\langle(\Delta E)^2\rangle}}.

It follows from the expression for the first moment that \beta(\langle E\rangle)=m/\langle E\rangle, and from the second central moment that \langle(\Delta E)^2\rangle=\langle E\rangle^2/m. Introducing these two expressions into the expression for the structure function evaluated at the mean value of the energy leads to

 \Omega(\langle E\rangle)=\frac{\langle E\rangle^{m-1}m}{\sqrt{2\pi m}m^me^{-m}}.

The denominator is exactly Stirling's approximation for m!=\Gamma(m+1), and if the structure function retains the same functional dependency for all values of the energy, the canonical probability density,

 f(E;\beta)=\beta\frac{(\beta E)^{m-1}}{\Gamma(m)}e^{-\beta E}

will belong to the family of exponential distributions known as gamma densities. Consequently, the canonical probability density falls under the jurisdiction of the local law of large numbers, which asserts that the sum of a sequence of independent and identically distributed random variables tends to the normal law as the number of terms increases without limit.
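
This can be seen directly by sampling: a gamma-distributed energy with integer m behaves like the sum of m independent exponential contributions, and a minimal sketch (with arbitrary made-up m, \beta and sample size) shows its standardized distribution approaching the normal law:

```python
import numpy as np

rng = np.random.default_rng(1)
beta, m, n_samples = 2.0, 100, 20000   # arbitrary made-up parameters

# A Gamma(m, 1/beta) variate is the sum of m iid Exponential(1/beta) variates.
E = rng.exponential(scale=1 / beta, size=(n_samples, m)).sum(axis=1)

print("sample mean    :", E.mean(), "  expected m/beta  :", m / beta)
print("sample variance:", E.var(),  "  expected m/beta^2:", m / beta**2)

# Standardize and compare a few quantiles with those of the standard normal law.
z = (E - m / beta) / np.sqrt(m / beta**2)
for q, z_normal in [(0.16, -0.99), (0.50, 0.00), (0.84, 0.99)]:
    print(f"quantile {q:.2f}: sample {np.quantile(z, q):+.2f}, normal {z_normal:+.2f}")
```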

Distribution of fluctuations about equilibrium

The expressions given below are for systems that are close to equilibrium and have negligible quantum effects.[4]

Single variable

Suppose x measures the deviation of a thermodynamic variable from its equilibrium value. The probability distribution w(x)dx for x is determined by the entropy S:

 w(x) \propto \exp\left(S(x)\right).

If the entropy is Taylor expanded about its maximum (corresponding to the equilibrium state), keeping only the lowest-order nonvanishing term yields a Gaussian distribution:

 w(x) = \frac{1}{\sqrt{2\pi \langle x^2 \rangle}} \exp\left(-\frac{x^2}{2 \langle x^2 \rangle} \right).

The quantity \langle x^2 \rangle is the mean square fluctuation.[4]

Multiple variables

The above expression has a straightforward generalization to the joint probability distribution w(x_1,x_2,\ldots,x_n)dx_1dx_2\ldots dx_n:

 w = \frac{1}{\sqrt{(2\pi)^n\det A}} \exp\left(-\frac{1}{2}\sum_{i,j=1}^{n}\left(A^{-1}\right)_{ij}x_ix_j\right),

where A is the matrix of second moments, with entries A_{ij}=\langle x_ix_j \rangle equal to the mean value of the product x_ix_j.[4]
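
A small sketch of drawing such fluctuations, with a made-up 2x2 matrix of mean products, and checking that the sample averages reproduce the prescribed \langle x_ix_j\rangle:

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up matrix of mean products A_ij = <x_i x_j> (symmetric, positive definite).
A = np.array([[2.0, 0.6],
              [0.6, 1.0]])

# Zero-mean Gaussian fluctuations with covariance matrix A.
samples = rng.multivariate_normal(mean=np.zeros(2), cov=A, size=200000)

A_sampled = samples.T @ samples / len(samples)   # sample estimate of <x_i x_j>
print(np.round(A_sampled, 3))                    # close to the prescribed A
```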

Fluctuations of the fundamental thermodynamic quantities

In the table below are given the mean square fluctuations of the thermodynamic variables T, V, S and P in any small part of a body. The small part must still be large enough, however, to have negligible quantum effects.

Averages \langle x_ix_j \rangle of thermodynamic fluctuations. Temperature is in energy units (divide by Boltzmann's constant k_B to get degrees). C_P is the heat capacity at constant pressure; C_V is the heat capacity at constant volume.[4]
|          | \Delta T | \Delta V | \Delta S | \Delta P |
|----------|----------|----------|----------|----------|
| \Delta T | \frac{T^2}{C_V} | 0 | T | \frac{T^2}{C_V}\left(\frac{\partial P}{\partial T}\right)_V |
| \Delta V | 0 | -T\left(\frac{\partial V}{\partial P}\right)_T | T\left(\frac{\partial V}{\partial T}\right)_P | -T |
| \Delta S | T | T\left(\frac{\partial V}{\partial T}\right)_P | C_P | 0 |
| \Delta P | \frac{T^2}{C_V}\left(\frac{\partial P}{\partial T}\right)_V | -T | 0 | -T\left(\frac{\partial P}{\partial V}\right)_S |
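
As a rough numerical illustration, a sketch evaluating two of these entries for a hypothetical classical monatomic ideal gas (the particle number, temperature and subvolume below are made-up values; k_B = 1, so temperature is in energy units as in the table):

```python
import math

# Hypothetical classical monatomic ideal gas in a small subvolume (made-up values).
N = 1e20            # number of particles
T = 4.14e-21        # temperature in energy units (joules), roughly 300 K
V = 1e-6            # subvolume in m^3
P = N * T / V       # ideal-gas equation of state P V = N T

C_V = 1.5 * N                 # heat capacity at constant volume (dimensionless, k_B = 1)
dT2 = T**2 / C_V              # <(Delta T)^2> = T^2 / C_V
dV2 = -T * (-V / P)           # <(Delta V)^2> = -T (dV/dP)_T with (dV/dP)_T = -V/P,
                              # which reduces to V^2 / N for the ideal gas

print("relative temperature fluctuation:", math.sqrt(dT2) / T)  # ~ sqrt(2/(3N)) ~ 8e-11
print("relative volume fluctuation     :", math.sqrt(dV2) / V)  # ~ 1/sqrt(N) = 1e-10
```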


Notes

  1. In statistical mechanics they are often simply referred to as fluctuations.
  2. Khinchin 1949
  3. Lavenda 1991
  4. Landau 1985

References

  • Khinchin, A. I. (1949). Mathematical Foundations of Statistical Mechanics. Dover Publications. ISBN 0-486-60147-1. 
  • Lavenda, B. H. (1991). Statistical Physics: A Probabilistic Approach. Wiley-Interscience. ISBN 0-471-54607-0. 
  • Landau, L. D.; Lifshitz, E. M. (1985). Statistical Physics, Part 1 (3rd ed.). Pergamon Press. ISBN 0-08-023038-5. 