In mathematics, the Karush–Kuhn–Tucker (KKT) conditions (also known as the Kuhn–Tucker conditions) are necessary for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied. Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. The KKT conditions were originally named after Harold W. Kuhn and Albert W. Tucker, who first published the conditions.[1] Later scholars discovered that the necessary conditions for this problem had been stated by William Karush in his master's thesis.[2][3]
We consider the following nonlinear optimization problem:

$$\begin{aligned} \text{optimize } \quad & f(x) \\ \text{subject to } \quad & g_i(x) \le 0, \quad i = 1, \ldots, m, \\ & h_j(x) = 0, \quad j = 1, \ldots, \ell, \end{aligned}$$
where $x$ is the optimization variable, $f$ is the objective or cost function, $g_i$ ($i = 1, \ldots, m$) are the inequality constraint functions, and $h_j$ ($j = 1, \ldots, \ell$) are the equality constraint functions. The numbers of inequality and equality constraints are denoted $m$ and $\ell$, respectively.
Suppose that the objective function $f \colon \mathbb{R}^n \to \mathbb{R}$ and the constraint functions $g_i \colon \mathbb{R}^n \to \mathbb{R}$ and $h_j \colon \mathbb{R}^n \to \mathbb{R}$ are continuously differentiable at a point $x^*$. If $x^*$ is a local minimum that satisfies some regularity conditions (see below), then there exist constants $\mu_i$ ($i = 1, \ldots, m$) and $\lambda_j$ ($j = 1, \ldots, \ell$), called KKT multipliers, such that

Stationarity:
$$\nabla f(x^*) + \sum_{i=1}^{m} \mu_i \nabla g_i(x^*) + \sum_{j=1}^{\ell} \lambda_j \nabla h_j(x^*) = 0;$$

Primal feasibility:
$$g_i(x^*) \le 0, \quad i = 1, \ldots, m, \qquad h_j(x^*) = 0, \quad j = 1, \ldots, \ell;$$

Dual feasibility:
$$\mu_i \ge 0, \quad i = 1, \ldots, m;$$

Complementary slackness:
$$\mu_i \, g_i(x^*) = 0, \quad i = 1, \ldots, m.$$
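As a concrete illustration, the following sketch checks the four conditions numerically at the known optimum of a small problem. The problem, its solution $x^* = (0.5, 0.5)$, and the multiplier value $\mu = 1$ are a hypothetical worked example of ours, not taken from the text:

```python
import numpy as np

# Hypothetical toy problem (our own example):
#   minimize    f(x) = x1^2 + x2^2
#   subject to  g(x) = 1 - x1 - x2 <= 0
# Analytic solution: x* = (0.5, 0.5) with KKT multiplier mu = 1.
x_star = np.array([0.5, 0.5])
mu = 1.0

grad_f = 2 * x_star               # gradient of f at x*
grad_g = np.array([-1.0, -1.0])   # gradient of g at x*
g_val = 1 - x_star.sum()          # g(x*)

assert np.allclose(grad_f + mu * grad_g, 0)  # stationarity
assert g_val <= 1e-12                        # primal feasibility
assert mu >= 0                               # dual feasibility
assert np.isclose(mu * g_val, 0)             # complementary slackness
print("All four KKT conditions hold at x* =", x_star)
```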
In the particular case $m = 0$, i.e., when there are no inequality constraints, the KKT conditions turn into the Lagrange conditions, and the KKT multipliers are called Lagrange multipliers.
In order for a minimum point $x^*$ to satisfy the above KKT conditions, it should satisfy some regularity condition; the most used ones are listed below:

- Linear independence constraint qualification (LICQ): the gradients of the active inequality constraints and the gradients of the equality constraints are linearly independent at $x^*$.
- Mangasarian–Fromovitz constraint qualification (MFCQ): the gradients of the equality constraints are linearly independent at $x^*$, and there exists a vector $d$ such that $\nabla g_i(x^*)^\top d < 0$ for all active inequality constraints and $\nabla h_j(x^*)^\top d = 0$ for all equality constraints.
- Constant rank constraint qualification (CRCQ): for each subset of the gradients of the active inequality constraints and the gradients of the equality constraints, the rank in a vicinity of $x^*$ is constant.
- Constant positive linear dependence constraint qualification (CPLD): for each subset of the gradients of the active inequality constraints and the gradients of the equality constraints, if it is positive-linear dependent at $x^*$, then it is positive-linear dependent in a vicinity of $x^*$.
- Quasi-normality constraint qualification (QNCQ): if the gradients of the active inequality constraints and the gradients of the equality constraints are positive-linearly dependent at $x^*$ with multipliers $\mu_i \ge 0$ for the inequalities and $\lambda_j$ for the equalities, then there is no sequence $x_k \to x^*$ such that $\mu_i \ne 0 \Rightarrow \mu_i g_i(x_k) > 0$ and $\lambda_j \ne 0 \Rightarrow \lambda_j h_j(x_k) > 0$.
- Slater condition: for a convex problem, there exists a point $x$ such that $h_j(x) = 0$ for all $j$ and $g_i(x) < 0$ for all $i$.

Here $(v_1, \ldots, v_n)$ is positive-linear dependent if there exist $a_1 \ge 0, \ldots, a_n \ge 0$, not all zero, such that $a_1 v_1 + \cdots + a_n v_n = 0$.
It can be shown that LICQ ⇒ MFCQ ⇒ CPLD ⇒ QNCQ and LICQ ⇒ CRCQ ⇒ CPLD ⇒ QNCQ (and the converses are not true), although MFCQ is not equivalent to CRCQ.[4] In practice, weaker constraint qualifications are preferred since they provide stronger optimality conditions.
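For example, LICQ at a candidate point can be checked numerically by stacking the gradients of the active inequality constraints and the equality constraints as rows of a matrix and testing for full row rank. The sketch below does this for hypothetical gradient data of our own:

```python
import numpy as np

# Hypothetical data: at a candidate point x*, suppose one inequality
# constraint is active and there is one equality constraint, with gradients:
grad_g_active = np.array([[1.0, 0.0]])   # active inequality constraint
grad_h = np.array([[0.0, 1.0]])          # equality constraint

# LICQ holds iff the stacked gradients are linearly independent,
# i.e. the matrix of gradients has full row rank.
G = np.vstack([grad_g_active, grad_h])
licq_holds = np.linalg.matrix_rank(G) == G.shape[0]
print("LICQ holds at x*:", licq_holds)  # True for this data
```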
In some cases, the necessary conditions are also sufficient for optimality. In general, however, the necessary conditions are not sufficient for optimality and additional information is required, such as the Second Order Sufficient Conditions (SOSC). For smooth functions, SOSC involve the second derivatives, which explains the name.
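To make the second-order check concrete, the following sketch verifies, for a hypothetical equality-constrained problem of our own, that the Hessian of the Lagrangian is positive definite on the tangent space of the constraint, which is one common form of SOSC:

```python
import numpy as np

# Hypothetical problem: minimize x1^2 + x2^2 subject to x1 + x2 - 1 = 0.
# KKT point: x* = (0.5, 0.5) with Lagrange multiplier lambda = -1.
H = 2 * np.eye(2)                # Hessian of the Lagrangian at x* (here 2I)
grad_h = np.array([[1.0, 1.0]])  # gradient of the equality constraint (row)

# Tangent space: directions d with grad_h @ d = 0. A basis for this
# null space comes from the right singular vectors beyond the rank.
_, _, Vt = np.linalg.svd(grad_h)
Z = Vt[1:].T                     # columns span the null space of grad_h

# SOSC check: the reduced Hessian Z^T H Z must be positive definite.
reduced_hessian = Z.T @ H @ Z
sosc_holds = np.all(np.linalg.eigvalsh(reduced_hessian) > 0)
print("SOSC holds at x*:", sosc_holds)  # True here
```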
The necessary conditions are sufficient for optimality if the objective function $f$ and the inequality constraint functions $g_i$ are continuously differentiable convex functions and the equality constraint functions $h_j$ are affine functions.
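For instance, a convex quadratic objective with an affine inequality constraint falls into this class. The sketch below (a hypothetical problem of ours, using the open-source cvxpy modeling library) solves such a problem and reads off the KKT multiplier as the constraint's dual value; by the sufficiency result above, the returned point is a global minimum:

```python
import cvxpy as cp

# Hypothetical convex problem: minimize x1^2 + x2^2
# subject to the affine inequality x1 + x2 >= 1.
x = cp.Variable(2)
constraints = [x[0] + x[1] >= 1]
problem = cp.Problem(cp.Minimize(cp.sum_squares(x)), constraints)
problem.solve()

print("x* =", x.value)                                   # approx [0.5, 0.5]
print("KKT multiplier mu =", constraints[0].dual_value)  # approx 1.0
```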
It was shown by Martin in 1985[5] that the broader class of functions in which the KKT conditions guarantee global optimality is the class of so-called Type 1 invex functions.[6]
If we reconsider the optimization problem as a maximization problem with constant inequality constraints:

$$\begin{aligned} \text{maximize } \quad & f(x) \\ \text{subject to } \quad & g_i(x) \le a_i, \quad i = 1, \ldots, m, \\ & h_j(x) = 0, \quad j = 1, \ldots, \ell. \end{aligned}$$
The value function is defined as

$$V(a_1, \ldots, a_m) = \sup_{x} \{ f(x) : g_i(x) \le a_i \text{ for } i = 1, \ldots, m, \ h_j(x) = 0 \text{ for } j = 1, \ldots, \ell \}.$$

(So the domain of $V$ is $\{ a \in \mathbb{R}^m : \text{for some } x, \ g_i(x) \le a_i \text{ for } i = 1, \ldots, m \}$.)
Given this definition, each coefficient $\mu_i$ is the rate at which the value function increases as $a_i$ increases. Thus if each $a_i$ is interpreted as a resource constraint, the coefficients tell you how much increasing a resource will increase the optimum value of our function $f$. This interpretation is especially important in economics and is used, for instance, in utility maximization problems.
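As a numerical illustration of this shadow-price interpretation, the following sketch uses a hypothetical utility-maximization problem of ours with a known closed-form solution and compares the KKT multiplier against a finite-difference estimate of $V'(a)$:

```python
import numpy as np

# Hypothetical utility-maximization problem (our own worked example):
#   maximize  ln(x1) + ln(x2)  subject to  p1*x1 + p2*x2 <= a.
# Closed form: x_i* = a / (2 * p_i), multiplier mu = 2 / a, and
# value function V(a) = 2*ln(a/2) - ln(p1) - ln(p2).
p1, p2, a = 2.0, 3.0, 10.0

def V(budget):
    return 2 * np.log(budget / 2) - np.log(p1) - np.log(p2)

mu = 2 / a                                     # KKT multiplier at the optimum
eps = 1e-6
dV_da = (V(a + eps) - V(a - eps)) / (2 * eps)  # finite-difference V'(a)

print("mu    =", mu)      # 0.2
print("V'(a) =", dV_da)   # approx 0.2: the multiplier equals the shadow price
```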
With an extra constant multiplier $\mu_0 \ge 0$, which may be zero, in front of $\nabla f(x^*)$, the KKT stationarity conditions turn into

$$\mu_0 \, \nabla f(x^*) + \sum_{i=1}^{m} \mu_i \nabla g_i(x^*) + \sum_{j=1}^{\ell} \lambda_j \nabla h_j(x^*) = 0,$$
which are called the Fritz John conditions.
The KKT conditions belong to a wider class of first-order necessary conditions (FONC), which allow for non-smooth functions using subderivatives.