Quadratic programming

Quadratic programming (QP) is a special type of mathematical optimization problem. It is the problem of optimizing (minimizing or maximizing) a quadratic function of several variables subject to linear constraints on these variables.

Problem formulation

The quadratic programming problem can be formulated as:[1]

Assume \mathbf{x} \in \mathbb{R}^n. Both \mathbf{x} and \mathbf{c} are column vectors with n elements (n×1 matrices), and Q is a symmetric n×n matrix.

Minimize (with respect to \mathbf{x})

f(\mathbf{x}) = \tfrac{1}{2}\mathbf{x}^T Q \mathbf{x} + \mathbf{c}^T \mathbf{x}.

Subject to one or more constraints of the form:

A\mathbf{x} \leq \mathbf{b} (inequality constraint)
E\mathbf{x} = \mathbf{d} (equality constraint)

where \mathbf{x}^T denotes the transpose of \mathbf{x}. The notation A\mathbf{x} \leq \mathbf{b} means that every entry of the vector A\mathbf{x} is less than or equal to the corresponding entry of the vector \mathbf{b}.
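
To make the notation concrete, the following minimal sketch (in Python, with purely illustrative data; it assumes NumPy and SciPy are installed) poses a small convex QP in exactly this form and solves it with SciPy's general-purpose SLSQP routine rather than a dedicated QP solver:

    # Minimal sketch: minimize 1/2 x^T Q x + c^T x  subject to  A x <= b,
    # using SciPy's general-purpose SLSQP solver. Q, c, A, b are illustrative.
    import numpy as np
    from scipy.optimize import minimize

    Q = np.array([[2.0, 0.5],
                  [0.5, 1.0]])          # symmetric positive definite
    c = np.array([-1.0, -2.0])
    A = np.array([[1.0, 1.0]])          # one inequality constraint: x1 + x2 <= 1
    b = np.array([1.0])

    def objective(x):
        return 0.5 * x @ Q @ x + c @ x

    def gradient(x):
        return Q @ x + c

    # SLSQP expects inequalities in the form g(x) >= 0, so A x <= b becomes b - A x >= 0.
    constraints = [{"type": "ineq", "fun": lambda x: b - A @ x}]

    result = minimize(objective, x0=np.zeros(2), jac=gradient,
                      method="SLSQP", constraints=constraints)
    print(result.x, result.fun)         # approximate minimizer and objective value

A dedicated QP solver (several are listed later in the article) would exploit the quadratic structure directly, but a general routine suffices for a small example.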

A related programming problem, quadratically constrained quadratic programming, can be posed by adding quadratic constraints on the variables.

Solution methods

For general problems a variety of methods are commonly used, including interior point, active set,[2] augmented Lagrangian,[3] conjugate gradient, gradient projection, and extensions of the simplex algorithm.[2]

Convex quadratic programming is a special case of the more general field of convex optimization.

Equality constraints

Quadratic programming is particularly simple when there are only equality constraints; specifically, the optimality conditions reduce to a linear system. By using Lagrange multipliers and seeking the extremum of the Lagrangian, it can be readily shown that the solution to the equality-constrained problem is given by the linear system:

\begin{bmatrix} Q & E^T \\ E & 0 \end{bmatrix} \begin{bmatrix} \mathbf{x} \\ \lambda \end{bmatrix} = \begin{bmatrix} -\mathbf{c} \\ \mathbf{d} \end{bmatrix}

where \lambda is a set of Lagrange multipliers which come out of the solution alongside \mathbf{x}.

The easiest means of approaching this system is direct solution (for example, LU factorization), which for small problems is very practical. For large problems, the system poses some unusual difficulties, most notably that the system matrix is never positive definite (even if Q is), making it potentially very difficult to find a good numeric approach, and there are many approaches to choose from, dependent on the problem.[4]
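
For illustration, here is a minimal NumPy sketch (with illustrative data) of the direct approach, assembling the block system above and solving it with a dense LU-based solver:

    # Minimal sketch: direct dense solve of the equality-constrained KKT system
    #   [ Q  E^T ] [ x      ]   [ -c ]
    #   [ E   0  ] [ lambda ] = [  d ]
    # Q, c, E, d are illustrative data, not taken from the article.
    import numpy as np

    Q = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
    c = np.array([1.0, 1.0])
    E = np.array([[1.0, 1.0]])       # one equality constraint: x1 + x2 = 1
    d = np.array([1.0])

    n, m = Q.shape[0], E.shape[0]
    KKT = np.block([[Q, E.T],
                    [E, np.zeros((m, m))]])
    rhs = np.concatenate([-c, d])

    sol = np.linalg.solve(KKT, rhs)  # dense LU factorization under the hood
    x, lam = sol[:n], sol[n:]
    print("x =", x, "lambda =", lam)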

If the constraints don't couple the variables too tightly, a relatively simple attack is to change the variables so that the constraints are unconditionally satisfied. For example, suppose \mathbf{d} = 0 (generalizing to nonzero \mathbf{d} is straightforward). Looking at the constraint equations:

E\mathbf{x} = 0

introduce a new variable \mathbf{y} defined by

Z\mathbf{y} = \mathbf{x}

where the dimension of \mathbf{y} equals the dimension of \mathbf{x} minus the number of constraints. Then

EZ\mathbf{y} = 0

and if Z is chosen so that EZ = 0, the constraint equation will always be satisfied. Finding such a Z entails finding the null space of E, which is more or less simple depending on the structure of E. Substituting into the quadratic form gives an unconstrained minimization problem:

\tfrac{1}{2}\mathbf{x}^T Q \mathbf{x} + \mathbf{c}^T \mathbf{x} \quad \Rightarrow \quad \tfrac{1}{2}\mathbf{y}^T Z^T Q Z \mathbf{y} + (Z^T \mathbf{c})^T \mathbf{y}

the solution of which is given by:

Z^T Q Z \mathbf{y} = -Z^T \mathbf{c}

Under certain conditions on Q, the reduced matrix Z^T Q Z will be positive definite. It is possible to write a variation on the conjugate gradient method which avoids the explicit calculation of Z.[5]
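
The following is a minimal sketch of this null-space substitution (illustrative data, assuming SciPy is available), using scipy.linalg.null_space to obtain a Z with EZ = 0:

    # Minimal sketch of the null-space method: eliminate the constraint E x = 0
    # by writing x = Z y with E Z = 0, then solve the reduced unconstrained problem.
    # Q, c, E are illustrative data.
    import numpy as np
    from scipy.linalg import null_space

    Q = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 0.5],
                  [0.0, 0.5, 2.0]])
    c = np.array([1.0, -1.0, 0.5])
    E = np.array([[1.0, 1.0, 1.0]])   # equality constraint E x = 0

    Z = null_space(E)                 # columns of Z span the null space of E
    # Reduced problem: minimize 1/2 y^T (Z^T Q Z) y + (Z^T c)^T y
    y = np.linalg.solve(Z.T @ Q @ Z, -Z.T @ c)
    x = Z @ y                         # feasible by construction: E x = 0
    print("x =", x, "residual E x =", E @ x)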

Lagrangian duality

The Lagrangian dual of a QP is also a QP. To see this, let us focus on the case where c = 0 and Q is positive definite. We write the Lagrangian function as

L(x, \lambda) = \tfrac{1}{2} x^T Q x + \lambda^T (Ax - b).

Defining the (Lagrangian) dual function as g(\lambda) = \inf_x L(x, \lambda), we find the infimum of L by setting \nabla_x L(x, \lambda) = 0, which gives

x^* = -Q^{-1} A^T \lambda,

hence the dual function is

g(\lambda) = -\tfrac{1}{2} \lambda^T A Q^{-1} A^T \lambda - \lambda^T b

and therefore the Lagrangian dual of the QP is

maximize: -\tfrac{1}{2} \lambda^T A Q^{-1} A^T \lambda - \lambda^T b

subject to: \lambda \geq 0.

Besides the Lagrangian duality theory, there are other duality pairings (e.g. Wolfe duality).
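
As a small numerical check of the Lagrangian dual above (illustrative data with c = 0 and Q positive definite, assuming NumPy), the sketch below forms the dual quadratic form AQ^{-1}A^T, solves the dual for a case where both constraints turn out to be active, and recovers the primal minimizer x^* = -Q^{-1}A^T\lambda:

    # Numerical check of QP duality for  minimize 1/2 x^T Q x  s.t.  A x <= b  (c = 0).
    # Dual:  maximize -1/2 lambda^T A Q^{-1} A^T lambda - lambda^T b,  lambda >= 0.
    # Q, A, b are illustrative data chosen so that both constraints are active.
    import numpy as np

    Q = np.array([[2.0, 0.0],
                  [0.0, 1.0]])          # positive definite
    A = np.array([[-1.0, 0.0],
                  [0.0, -1.0]])
    b = np.array([-1.0, -1.0])          # encodes x1 >= 1, x2 >= 1

    Qinv = np.linalg.inv(Q)
    P = A @ Qinv @ A.T                  # matrix of the dual quadratic form

    # Setting the gradient of the dual objective to zero gives P lambda = -b;
    # for this data the solution is nonnegative, so it solves the dual.
    lam = np.linalg.solve(P, -b)
    x = -Qinv @ A.T @ lam               # primal minimizer recovered from lambda
    dual_value = -0.5 * lam @ P @ lam - lam @ b
    primal_value = 0.5 * x @ Q @ x
    print(lam, x, dual_value, primal_value)   # dual and primal optimal values coincide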

Complexity

For positive definite Q, the ellipsoid method solves the problem in polynomial time.[6] If, on the other hand, Q is indefinite, then the problem is NP-hard.[7] In fact, even if Q has only one negative eigenvalue, the problem is NP-hard.[8]

Solvers and scripting (programming) languages

  • AIMMS
  • AMPL – A popular modeling language for large-scale mathematical optimization.
  • APMonitor
  • CPLEX – Popular solver with an API (C, C++, Java, .Net, Python, Matlab and R). Free for academics.
  • Excel Solver Function
  • GAMS
  • Gurobi – Solver with parallel algorithms for large-scale linear programs, quadratic programs and mixed-integer programs. Free for academic use.
  • IMSL – A set of mathematical and statistical functions that programmers can embed into their software applications.
  • JOptimizer – Open-source library for solving minimization problems with linear equality and convex inequality constraints (implemented in Java).
  • Maple – General-purpose programming language for mathematics. Solving a quadratic problem in Maple is accomplished via its QPSolve command.
  • MATLAB – A general-purpose and matrix-oriented programming language for numerical computing. Quadratic programming in MATLAB requires the Optimization Toolbox in addition to the base MATLAB product.
  • Mathematica – A general-purpose programming language for mathematics, including symbolic and numerical capabilities.
  • MOSEK – A solver for large-scale optimization with APIs for several languages (C++, Java, .Net, Matlab and Python).
  • NAG Numerical Library – A collection of mathematical and statistical routines developed by the Numerical Algorithms Group for multiple programming languages (C, C++, Fortran, Visual Basic, Java and C#) and packages (MATLAB, Excel, R, LabVIEW). The Optimization chapter of the NAG Library includes routines for quadratic programming problems with both sparse and non-sparse linear constraint matrices, together with routines for the optimization of linear, nonlinear, and sums of squares of linear or nonlinear functions with nonlinear, bounded or no constraints. The NAG Library has routines for both local and global optimization, and for continuous or integer problems.
  • OpenOpt – BSD-licensed universal cross-platform numerical optimization framework; see its QP page and the other problem classes covered. Uses NumPy arrays and SciPy sparse matrices.
  • OptimJ – Free Java-based modeling language for optimization supporting multiple target solvers and available as an Eclipse plugin.[9][10]
  • R – GPL-licensed universal cross-platform statistical computation framework; see its quadprog page.
  • TOMLAB – Supports global optimization, integer programming, all types of least squares, linear, quadratic and unconstrained programming for MATLAB. TOMLAB supports solvers like Gurobi, CPLEX, SNOPT and KNITRO.

References

Notes

  1. Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization (2nd ed.). Berlin, New York: Springer-Verlag. p. 449. ISBN 978-0-387-30303-1 .
  2. 2.0 2.1 Murty, Katta G. (1988). Linear complementarity, linear and nonlinear programming. Sigma Series in Applied Mathematics 3. Berlin: Heldermann Verlag. pp. xlviii+629 pp. ISBN 3-88538-403-5. MR 949214. 
  3. Delbos, F.; Gilbert, J.Ch. (2005). "Global linear convergence of an augmented Lagrangian algorithm for solving convex quadratic optimization problems". Journal of Convex Analysis 12: 45–69. 
  4. Google search.
  5. Gould, Nicholas I. M.; Hribar, Mary E.; Nocedal, Jorge (April 2001). "On the Solution of Equality Constrained Quadratic Programming Problems Arising in Optimization". SIAM Journal on Scientific Computing 23 (4): 1376–1395. CiteSeerX: 10.1.1.129.7555. 
  6. Kozlov, M. K.; S. P. Tarasov and Leonid G. Khachiyan (1979). "[Polynomial solvability of convex quadratic programming]". Doklady Akademii Nauk SSSR 248: 1049–1051.  Translated in: Soviet Mathematics - Doklady 20: 1108–1111. 
  7. Sahni, S. (1974). "Computationally related problems". SIAM Journal on Computing 3: 262–279. 
  8. Pardalos, Panos M.; Vavasis, Stephen A. (1991). "Quadratic programming with one negative eigenvalue is NP-hard". Journal of Global Optimization 1 (1): 15–22. 
  9. OptimJ used in an optimization model for mixed-model assembly lines. University of Münster. 
  10. OptimJ used in an Approximate Subgame-Perfect Equilibrium Computation Technique for Repeated Games. 

Bibliography

  • Cottle, Richard W.; Pang, Jong-Shi; Stone, Richard E. (1992). The linear complementarity problem. Computer Science and Scientific Computing. Boston, MA: Academic Press, Inc. pp. xxiv+762 pp. ISBN 0-12-192350-9. MR 1150683. 
  • Garey, Michael R.; Johnson, David S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman. ISBN 0-7167-1045-5.  A6: MP2, pg.245.
