Hamilton's principle

In physics, Hamilton's principle is an alternative formulation of the differential equations of motion for a physical system as an equivalent integral equation, using the calculus of variations. The principle is also called the principle of stationary action. Although formulated originally for classical mechanics, Hamilton's principle also applies to classical fields such as the electromagnetic and gravitational fields, and has even been extended to quantum mechanics and quantum field theory.

Mathematical formulation

Hamilton's principle states that the true evolution \mathbf{q}(t) of a system described by N generalized coordinates \mathbf{q} = \left( q_{1}, q_{2}, \ldots, q_{N} \right) between two specified states \mathbf{q}_{1} \ \stackrel{\mathrm{def}}{=}\  \mathbf{q}(t_{1}) and \mathbf{q}_{2} \ \stackrel{\mathrm{def}}{=}\  \mathbf{q}(t_{2}) at two specified times t1 and t2 is a stationary point (i.e., a point where the first-order variation vanishes: a minimum, a maximum or a saddle point) of the action functional

\mathcal{S}[\mathbf{q}(t)] \ \stackrel{\mathrm{def}}{=}\   \int_{t_{1}}^{t_{2}} L(\mathbf{q},\dot{\mathbf{q}},t)\, dt

where L(\mathbf{q},\dot{\mathbf{q}},t) is the Lagrangian function for the system. In other words, any first-order perturbation of the true evolution results in (at most) second-order changes in \mathcal{S}. The action \mathcal{S} is a functional, i.e., something that takes a function as its input and returns a single number, a scalar. In the language of functional analysis, Hamilton's principle states that the true evolution of a physical system is a solution of the functional equation

\frac{\delta \mathcal{S}}{\delta \mathbf{q}(t)}=0
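
To see what stationarity means in practice, the following minimal numerical sketch (assuming Python with NumPy; the harmonic-oscillator Lagrangian L = \frac{1}{2}m\dot{q}^2 - \frac{1}{2}kq^2 is chosen purely as an example) evaluates the action on a true solution and on perturbed paths that agree with it at the endpoints; the change in \mathcal{S} shrinks like the square of the perturbation size.

    import numpy as np

    # Illustrative Lagrangian: harmonic oscillator, L = (1/2) m qdot^2 - (1/2) k q^2
    m, k = 1.0, 1.0
    omega = np.sqrt(k / m)

    t = np.linspace(0.0, 1.0, 2001)      # time grid between t1 = 0 and t2 = 1
    q_true = np.sin(omega * t)           # an exact solution of m qddot = -k q

    def action(q, t):
        """Approximate S[q] = integral of L dt using finite differences and the trapezoidal rule."""
        qdot = np.gradient(q, t)
        L = 0.5 * m * qdot**2 - 0.5 * k * q**2
        return np.sum(0.5 * (L[1:] + L[:-1]) * np.diff(t))

    S_true = action(q_true, t)
    for eps in (1e-1, 1e-2, 1e-3):
        # perturbation that vanishes at both endpoints, scaled by eps
        q_pert = q_true + eps * np.sin(np.pi * t / t[-1])
        print(eps, action(q_pert, t) - S_true)   # differences shrink roughly like eps**2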

Euler-Lagrange equations for the action integral

Requiring that the true trajectory \mathbf{q}(t) be a stationary point of the action functional \mathcal{S} is equivalent to requiring that it satisfy a set of differential equations (the Euler-Lagrange equations), which may be derived as follows.

Let \mathbf{q}(t) represent the true evolution of the system between two specified states \mathbf{q}_{1} \ \stackrel{\mathrm{def}}{=}\  \mathbf{q}(t_{1}) and \mathbf{q}_{2} \ \stackrel{\mathrm{def}}{=}\  \mathbf{q}(t_{2}) at two specified times t1 and t2, and let \boldsymbol\varepsilon(t) be a small perturbation that is zero at the endpoints of the trajectory

\boldsymbol\varepsilon(t_{1}) = \boldsymbol\varepsilon(t_{2}) \ \stackrel{\mathrm{def}}{=}\  0

To first order in the perturbation \boldsymbol\varepsilon(t), the change in the action functional \delta\mathcal{S} would be

\delta \mathcal{S} = \int_{t_{1}}^{t_{2}} \left[ L(\mathbf{q}+\boldsymbol\varepsilon,\dot{\mathbf{q}}+\dot{\boldsymbol\varepsilon}) - L(\mathbf{q},\dot{\mathbf{q}}) \right] dt = \int_{t_{1}}^{t_{2}} \left( \boldsymbol\varepsilon \cdot \frac{\partial L}{\partial \mathbf{q}} + \dot{\boldsymbol\varepsilon} \cdot \frac{\partial L}{\partial \dot{\mathbf{q}}} \right)\,dt

where we have expanded the Lagrangian L to first order in the perturbation \boldsymbol\varepsilon(t).

Applying integration by parts to the last term results in

\delta \mathcal{S} = \left[ \boldsymbol\varepsilon \cdot \frac{\partial L}{\partial \dot{\mathbf{q}}} \right]_{t_{1}}^{t_{2}} + \int_{t_{1}}^{t_{2}} \left( \boldsymbol\varepsilon \cdot \frac{\partial L}{\partial \mathbf{q}} - \boldsymbol\varepsilon \cdot \frac{d}{dt} \frac{\partial L}{\partial \dot{\mathbf{q}}} \right)\,dt

The boundary conditions \boldsymbol\varepsilon(t_{1}) = \boldsymbol\varepsilon(t_{2}) = 0 cause the first term to vanish

\delta \mathcal{S} = \int_{t_{1}}^{t_{2}} \boldsymbol\varepsilon \cdot \left( \frac{\partial L}{\partial \mathbf{q}} - \frac{d}{dt} \frac{\partial L}{\partial \dot{\mathbf{q}}} \right)\,dt

Hamilton's principle requires that this first-order change \delta \mathcal{S} be zero for all possible perturbations \boldsymbol\varepsilon(t), i.e., that the true path be a stationary point of the action functional \mathcal{S} (either a minimum, maximum or saddle point). Since \boldsymbol\varepsilon(t) is otherwise arbitrary, this requirement can be satisfied if and only if

\frac{\partial L}{\partial \mathbf{q}} - \frac{d}{dt}\frac{\partial L}{\partial \dot{\mathbf{q}}} = 0 \qquad \text{(Euler-Lagrange equations)}

These equations are called the Euler-Lagrange equations for the variational problem.
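
The Euler-Lagrange expression can also be formed mechanically with a computer algebra system. The following sketch (assuming Python with SymPy; the harmonic-oscillator Lagrangian is again only an example) builds \frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot q} exactly as in the derivation above, treating q and \dot q as independent variables in the partial derivatives.

    import sympy as sp

    t = sp.symbols('t')
    m, k = sp.symbols('m k', positive=True)

    # Plain symbols make explicit that the partial derivatives of L treat
    # q and qdot as independent variables.
    q_s, qd_s = sp.symbols('q qdot')
    L = sp.Rational(1, 2) * m * qd_s**2 - sp.Rational(1, 2) * k * q_s**2   # example: harmonic oscillator

    q = sp.Function('q')(t)                     # the trajectory q(t)
    on_path = {q_s: q, qd_s: sp.diff(q, t)}     # evaluate the partials along the path

    dL_dq    = sp.diff(L, q_s).subs(on_path)
    dL_dqdot = sp.diff(L, qd_s).subs(on_path)

    print(dL_dq - sp.diff(dL_dqdot, t))         # the Euler-Lagrange expression, -k*q(t) - m*qddot(t)

Setting the printed expression to zero recovers m\ddot{q} = -kq, the expected equation of motion for a mass on a spring.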

The conjugate momentum pk for a generalized coordinate qk is defined by the equation p_{k} \ \stackrel{\mathrm{def}}{=}\  \frac{\partial L}{\partial\dot q_{k}}.

An important special case of these equations occurs when L does not contain a generalized coordinate qk explicitly, i.e.,

if \frac{\partial L}{\partial q_{k}}=0, the conjugate momentum p_{k} \ \stackrel{\mathrm{def}}{=}\  \frac{\partial L}{\partial\dot q_{k}} is constant.

In such cases, the coordinate qk is called a cyclic coordinate. For example, if we describe the planar motion of a particle using the time t and polar coordinates r, θ, and if L does not depend on θ, then the conjugate momentum pθ is the conserved angular momentum.
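
A short symbolic sketch (assuming Python with SymPy; the example of a particle of mass m moving in a plane under a central potential V(r) is an illustration, not part of the text above) shows both facts at once: L contains no θ, and the conjugate momentum \partial L/\partial\dot\theta = m r^{2}\dot\theta is the angular momentum.

    import sympy as sp

    m, r, rdot, theta, thetadot = sp.symbols('m r rdot theta thetadot')
    V = sp.Function('V')                       # central potential V(r): no theta dependence

    # Planar Lagrangian in polar coordinates; r, rdot, theta, thetadot are treated
    # as independent variables for the partial derivatives.
    L = sp.Rational(1, 2) * m * (rdot**2 + r**2 * thetadot**2) - V(r)

    print(sp.diff(L, theta))      # 0: theta is a cyclic coordinate
    print(sp.diff(L, thetadot))   # m*r**2*thetadot: the conjugate momentum p_theta

Because \partial L/\partial\theta = 0, the θ Euler-Lagrange equation reduces to \frac{d}{dt} p_{\theta} = 0, so p_{\theta} = m r^{2}\dot\theta is conserved along every trajectory.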

Example: Free particle in polar coordinates

Trivial examples help to appreciate the use of the action principle via the Euler-Lagrange equations. A free particle (mass m and velocity v) in Euclidean space moves in a straight line. Using the Euler-Lagrange equations, this can be shown in polar coordinates as follows. In the absence of a potential, the Lagrangian is simply equal to the kinetic energy

L = \frac{1}{2} mv^2= \frac{1}{2}m \left( \dot{x}^2 + \dot{y}^2 \right)

in orthonormal (x,y) coordinates, where the dot represents differentiation with respect to the curve parameter (usually the time, t). In polar coordinates (r, φ) the kinetic energy and hence the Lagrangian becomes

L = \frac{1}{2}m \left( \dot{r}^2 + r^2\dot\varphi^2 \right).

The radial (r) and angular (φ) components of the Euler-Lagrange equations become, respectively,

\frac{d}{dt} \left( \frac{\partial L}{\partial \dot{r}} \right) - \frac{\partial L}{\partial r} = 0 \qquad \Rightarrow \qquad \ddot{r} - r\dot{\varphi}^2 = 0
\frac{d}{dt} \left( \frac{\partial L}{\partial \dot{\varphi}} \right) - \frac{\partial L}{\partial \varphi} = 0 \qquad \Rightarrow \qquad \ddot{\varphi} + \frac{2}{r}\dot{r}\dot{\varphi} = 0.

The solution of these two equations is given by

r\cos\varphi = a t + b
r\sin\varphi = c t + d

for a set of constants a, b, c, d determined by initial conditions. Thus, indeed, the solution is a straight line given in polar coordinates.
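
This claim can be checked directly. The following sketch (assuming Python with SymPy) writes the straight line x = at + b, y = ct + d in polar coordinates and substitutes it into the two Euler-Lagrange equations above; both are satisfied identically.

    import sympy as sp

    t, a, b, c, d = sp.symbols('t a b c d', real=True)

    # The straight line x = a*t + b, y = c*t + d, expressed in polar coordinates
    x, y = a*t + b, c*t + d
    r = sp.sqrt(x**2 + y**2)
    phi = sp.atan2(y, x)

    rdot, rddot = sp.diff(r, t), sp.diff(r, t, 2)
    phidot, phiddot = sp.diff(phi, t), sp.diff(phi, t, 2)

    # Both Euler-Lagrange equations hold identically along the line
    print(sp.simplify(rddot - r * phidot**2))              # 0
    print(sp.simplify(phiddot + 2 * rdot * phidot / r))    # 0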

Comparison with Maupertuis' principle

Hamilton's principle and Maupertuis' principle are occasionally confused and both have been called (incorrectly) the principle of least action. They differ in three important ways:

  • their definition of the action...
Maupertuis' principle uses an integral over the generalized coordinates known as the abbreviated action \mathcal{S}_{0} \ \stackrel{\mathrm{def}}{=}\  \int \mathbf{p} \cdot d\mathbf{q}, where \mathbf{p} = \left( p_{1}, p_{2}, \ldots, p_{N} \right) are the conjugate momenta defined above. By contrast, Hamilton's principle uses \mathcal{S}, the integral of the Lagrangian over time (a small numerical comparison of the two actions for a free particle is sketched after this list).
  • the solution that they determine...
Hamilton's principle determines the trajectory \mathbf{q}(t) as a function of time, whereas Maupertuis' principle determines only the shape of the trajectory in the generalized coordinates. For example, Maupertuis' principle determines the shape of the ellipse on which a particle moves under the influence of an inverse-square central force such as gravity, but does not describe per se how the particle moves along that trajectory. (However, this time parameterization may be determined from the trajectory itself in subsequent calculations using the conservation of energy.) By contrast, Hamilton's principle directly specifies the motion along the ellipse as a function of time.
  • ...and the constraints on the variation.
Maupertuis' principle requires that the two endpoint states q1 and q2 be given and that energy be conserved along every trajectory. By contrast, Hamilton's principle does not require the conservation of energy, but does require that the endpoint times t1 and t2 be specified as well as the endpoint states q1 and q2.
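
To make the first difference concrete, here is a minimal numerical sketch (assuming Python with NumPy, a one-dimensional free particle, and arbitrarily chosen values for the mass, speed and time interval). Because L equals the kinetic energy T for a free particle while p\dot{q} = 2T, the abbreviated action on this trajectory comes out exactly twice Hamilton's action.

    import numpy as np

    # One-dimensional free particle: illustrative mass, speed and time interval
    m, v, t1, t2 = 1.0, 2.0, 0.0, 3.0
    t = np.linspace(t1, t2, 1001)
    q = v * t                        # the true (uniform) trajectory
    qdot = np.gradient(q, t)
    p = m * qdot                     # conjugate momentum

    L = 0.5 * m * qdot**2            # Lagrangian = kinetic energy (no potential)
    dt = np.diff(t)

    S  = np.sum(0.5 * (L[1:] + L[:-1]) * dt)                  # Hamilton's action:  integral of L dt
    S0 = np.sum(0.5 * ((p*qdot)[1:] + (p*qdot)[:-1]) * dt)    # abbreviated action: integral of p dq
    print(S, S0)                     # approximately 6.0 and 12.0, so S0 = 2*S here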

Action principle for classical fields

The action principle can be extended to obtain the equations of motion for fields, such as the electromagnetic field or gravity.

The Einstein field equations are obtained by applying a variational principle to the Einstein-Hilbert action.

The path of a body in a gravitational field (i.e., free fall in spacetime, a so-called geodesic) can be found using the action principle.
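
For reference, in one common choice of sign and unit conventions (quoted here rather than derived), the vacuum Einstein equations follow from varying the Einstein-Hilbert action with respect to the metric, and the geodesic equation from varying the world-line action of a point mass:

\mathcal{S}_{\mathrm{EH}} = \frac{c^{4}}{16 \pi G} \int R \sqrt{-g}\, d^{4}x , \qquad \mathcal{S}_{\mathrm{particle}} = -m c \int ds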

Action principle in quantum mechanics and quantum field theory

In quantum mechanics, the system does not follow a single path whose action is stationary; rather, its behavior depends on all imaginable paths and the value of their action. The action corresponding to the various paths is used to calculate the path integral, which gives the probability amplitudes of the various outcomes.
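
Schematically, in the standard path-integral notation, the transition amplitude between the states \mathbf{q}_{1} at t_{1} and \mathbf{q}_{2} at t_{2} is a sum over all paths weighted by the phase factor e^{i\mathcal{S}/\hbar},

K(\mathbf{q}_{2}, t_{2}; \mathbf{q}_{1}, t_{1}) \propto \int \mathcal{D}[\mathbf{q}(t)]\, e^{i \mathcal{S}[\mathbf{q}(t)]/\hbar} .

In the classical limit \mathcal{S} \gg \hbar, contributions from paths far from a stationary point of \mathcal{S} cancel by destructive interference, which is how Hamilton's principle re-emerges from the quantum description.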

Although equivalent to Newton's laws in classical mechanics, the action principle is better suited to generalizations and plays an important role in modern physics. Indeed, this principle is one of the great generalizations in physical science. In particular, it is fully appreciated and best understood within quantum mechanics. Richard Feynman's path integral formulation of quantum mechanics is based on a stationary-action principle, using path integrals. Maxwell's equations can also be derived as conditions of stationary action.
