Lagrange multipliers

From Wikipedia, the free encyclopedia

Fig. 1. Drawn in green is the locus of points satisfying the constraint g(x,y) = c. Drawn in blue are contours of f. Arrows represent the gradient, which points in a direction normal to the contour.

In mathematical optimization problems, the method of Lagrange multipliers, named after Joseph Louis Lagrange, finds the local extrema of a function of several variables subject to one or more constraints. It reduces a problem in n variables with k constraints to a solvable problem in n + k variables with no constraints. The method introduces a new unknown scalar variable, a Lagrange multiplier, for each constraint, and forms a linear combination of the objective and the constraint functions with the multipliers as coefficients.

The method can be justified by standard arguments from partial differentiation, using either total differentials or their close relative, the chain rule. The objective is to find the conditions under which, for some implicit function, the derivative of a function with respect to the independent variables equals zero at some set of inputs.

Introduction

Consider a two-dimensional case. Suppose we have a function, f(x,y), to maximize subject to

g\left( x,y \right) = c,

where c is a constant. We can visualize contours of f given by

f \left( x, y \right)=d_n

for various values of d_n, and the contour of g given by g(x,y) = c. Suppose we walk along the contour g = c. In general the contours of f and g will be distinct, so following the contour g = c means crossing many contours of f, and the value of f increases or decreases as we move. Only when the contour g = c touches a contour of f tangentially, without crossing it, does the value of f neither increase nor decrease. This occurs at the constrained local extrema and at the constrained inflection points of f.

A familiar example can be obtained from weather maps, with their contours for temperature and pressure: the constrained extrema will occur where the superposed maps show touching lines (isopleths).

Geometrically, the tangency condition means that the gradients of f and g are parallel vectors at the maximum. Introducing an unknown scalar, λ, we solve

\nabla \Big[f \left(x, y \right) + \lambda \left(g \left(x, y \right) - c \right) \Big] = 0

for λ ≠ 0.

Once values for λ are determined, we are back to the original number of variables and so can go on to find extrema of the new unconstrained function

F \left( x , y \right) = f \left( x , y \right) + \lambda \left( g \left( x , y \right) - c \right)

in traditional ways. Note that F(x,y) = f(x,y) whenever (x,y) satisfies the constraint, because g(x,y) − c vanishes there, so the constrained extrema of f are among the stationary points of F.
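For instance, the following sketch (in Python with SymPy; the concrete choices f(x,y) = x + y, g(x,y) = x^2 + y^2 and c = 1 are illustrative assumptions, not part of the method) solves the stationarity conditions together with the constraint and confirms that the gradients of f and g are parallel at the solutions:

import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

# Illustrative assumptions: f(x, y) = x + y maximized on the unit circle.
f = x + y
g = x**2 + y**2          # constraint g(x, y) = c with c = 1
F = f + lam * (g - 1)    # the combination f + lambda*(g - c)

# Stationarity in x and y, together with the constraint itself.
solutions = sp.solve([sp.diff(F, x), sp.diff(F, y), g - 1],
                     [x, y, lam], dict=True)

for s in solutions:
    grad_f = sp.Matrix([f.diff(x), f.diff(y)]).subs(s)
    grad_g = sp.Matrix([g.diff(x), g.diff(y)]).subs(s)
    # grad f + lambda * grad g vanishes, so the two gradients are parallel.
    print(s, (grad_f + s[lam] * grad_g).T)

Each printed residual is the zero vector, and the two candidate points (±1/√2, ±1/√2) are the constrained maximum and minimum of f.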

The method of Lagrange multipliers

Let f be a function defined on R^n, and let the constraints be given by g_k(x) = 0 (perhaps by moving the constant to the left, as in g_k(x) − c = 0). Now, define the Lagrangian, Λ, as

\Lambda(\mathbf x, \boldsymbol \lambda) = f + \sum_k \lambda_k g_k.

Observe that both the optimization criterion and the constraints g_k are compactly encoded as stationary points of the Lagrangian:

\nabla_{\mathbf x} \Lambda = 0 \Leftrightarrow \nabla_{\mathbf x} f = - \sum_k \lambda_k \nabla_{\mathbf x} g_k,

and

\nabla_{\boldsymbol \lambda} \Lambda = 0 \Leftrightarrow g_k = 0.
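As a small concrete sketch of these conditions (again in Python with SymPy; the objective and the two constraints below are assumptions chosen only for illustration), one can form Λ and solve ∇_x Λ = 0 together with ∇_λ Λ = 0:

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
l1, l2 = sp.symbols('lambda1 lambda2', real=True)

f = x1**2 + x2**2 + x3**2             # illustrative objective
g1 = x1 + x2 + x3 - 1                 # illustrative constraint g_1(x) = 0
g2 = x1 - x2                          # illustrative constraint g_2(x) = 0

Lam = f + l1 * g1 + l2 * g2           # Lagrangian: f + sum_k lambda_k g_k

# Differentiating in the x's gives grad f = -sum_k lambda_k grad g_k;
# differentiating in the lambda's recovers the constraints g_k = 0.
eqs = [Lam.diff(v) for v in (x1, x2, x3, l1, l2)]
print(sp.solve(eqs, [x1, x2, x3, l1, l2], dict=True))

Here the single stationary point x1 = x2 = x3 = 1/3 (with λ_1 = −2/3, λ_2 = 0) is the constrained minimum of f.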

Often the Lagrange multipliers have an interpretation as some salient quantity of interest. To see why this might be the case, observe that:

\frac{\partial \Lambda}{\partial {g_k}} = \lambda_k.

Thus, λ_k is the rate of change of the quantity being optimized as a function of the constraint variable. As examples, in Lagrangian mechanics the equations of motion are derived by finding stationary points of the action, the time integral of the difference between kinetic and potential energy. The force on a particle due to a scalar potential, F = −∇V, can then be interpreted as a Lagrange multiplier determining the change in action (transfer of potential to kinetic energy) following a variation in the particle's constrained trajectory. In economics, the optimal profit to a player is calculated subject to a constrained space of actions, where a Lagrange multiplier is the value of relaxing a given constraint (e.g. through bribery or other means).

The method of Lagrange multipliers is generalized by the Karush-Kuhn-Tucker conditions.

Examples

Very simple example

Suppose we wish to find the maximum value of

f(x,y) = x^2 y

with the condition that

x^2 + y^2 = 1.

As there is just a single condition, we will use only one multiplier, say λ.

g(x,y) = x^2 + y^2 − 1
Φ(x,y,λ) = f(x,y) − λ g(x,y) = x^2 y − λ(x^2 + y^2 − 1)

The maximum value is among the solutions of the system of equations obtained by setting each of the partial derivatives of Φ equal to zero:

2xy − 2λx = 0
x^2 − 2λy = 0
x^2 + y^2 − 1 = 0
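A sketch of solving this system mechanically (in Python with SymPy; the use of a computer algebra system here is an addition for illustration, not part of the example):

import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
Phi = x**2 * y - lam * (x**2 + y**2 - 1)

eqs = [Phi.diff(x),        # 2xy - 2*lambda*x = 0
       Phi.diff(y),        # x^2 - 2*lambda*y = 0
       x**2 + y**2 - 1]    # the constraint

candidates = sp.solve(eqs, [x, y, lam], dict=True)
f = x**2 * y
for s in candidates:
    print(s, f.subs(s))    # the largest value of f among these is the maximum

The stationary points are (0, ±1) with f = 0 and (±√(2/3), ±1/√3) with f = ±2√3/9, so the constrained maximum value is 2√3/9.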

Another example

Suppose we wish to find the discrete probability distribution with maximal information entropy. Then

f(p_1,p_2,\ldots,p_n) = -\sum_{k=1}^n p_k\log_2 p_k.

Of course, the sum of these probabilities equals 1, so our constraint is g(p) = 1 with

g(p_1,p_2,\ldots,p_n)=\sum_{k=1}^n p_k.

We can use Lagrange multipliers to find the point of maximum entropy over all probability distributions (p_1, p_2, …, p_n). For all k from 1 to n, we require that

\frac{\partial}{\partial p_k}(f+\lambda (g-1))=0,

which gives

\frac{\partial}{\partial p_k}\left(-\sum_{i=1}^n p_i \log_2 p_i + \lambda \left(\sum_{i=1}^n p_i - 1\right) \right) = 0.

Carrying out the differentiation of these n equations, we get

-\left(\frac{1}{\ln 2}+\log_2 p_k \right)  + \lambda = 0.

This shows that all p_k are equal (because they depend on λ only). By using the constraint ∑_k p_k = 1, we find

p_k = \frac{1}{n}.

Hence, the uniform distribution is the distribution with the greatest entropy.
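A quick numerical sanity check of this conclusion (a sketch; the size n = 4 and the use of random distributions are arbitrary choices for illustration):

import math
import random

def entropy(p):
    # Shannon entropy in bits: -sum p_k log2 p_k (with 0 log 0 taken as 0).
    return -sum(q * math.log2(q) for q in p if q > 0)

n = 4
uniform = [1 / n] * n
print(entropy(uniform))            # log2(n) = 2 bits, the claimed maximum

for _ in range(1000):
    w = [random.random() for _ in range(n)]
    p = [v / sum(w) for v in w]    # a random distribution summing to 1
    assert entropy(p) <= entropy(uniform) + 1e-9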

Economics

Constrained optimization plays a central role in economics. For example, the choice problem for a consumer is represented as one of maximizing a utility function subject to a budget constraint. The Lagrange multiplier has an economic interpretation as the shadow price associated with the constraint, in this case the marginal utility of income.
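As a hedged sketch of this interpretation (the Cobb-Douglas utility u = √(xy), the prices p_x, p_y and income m below are textbook assumptions, not taken from this article), the multiplier on the budget constraint equals the derivative of maximized utility with respect to income:

import sympy as sp

x, y, lam = sp.symbols('x y lambda', positive=True)
px, py, m = sp.symbols('p_x p_y m', positive=True)

u = sp.sqrt(x * y)                     # illustrative Cobb-Douglas utility
L = u + lam * (m - px * x - py * y)    # Lagrangian with the budget constraint

sol = sp.solve([L.diff(x), L.diff(y), m - px * x - py * y],
               [x, y, lam], dict=True)[0]

# The multiplier equals d(maximized utility)/d(income): the shadow price.
v = u.subs({x: sol[x], y: sol[y]})     # indirect utility v(p_x, p_y, m)
print(sp.simplify(sol[lam] - v.diff(m)))   # expected output: 0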
