Convex optimization

Convex minimization is a subfield of optimization that studies the problem of minimizing convex functions over convex sets. Convexity makes optimization easier than the general case, since any local minimum must be a global minimum and first-order conditions are sufficient for optimality.[1]

Convex minimization has applications in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design,[2] data analysis and modeling, finance, statistics (optimal design),[3] and structural optimization.[4] With recent improvements in computing and in optimization theory, convex minimization is nearly as straightforward as linear programming. Many optimization problems can be reformulated as convex minimization problems. For example, the problem of maximizing a concave function f can be reformulated equivalently as the problem of minimizing the function −f, which is convex.
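
As a small illustration of this reformulation, the sketch below (assuming SciPy is available; the function and interval are chosen purely for illustration) maximizes the concave function f(x) = log(x) − x by minimizing its convex negative:

    # Maximize the concave f(x) = log(x) - x on x > 0 by minimizing -f, which is convex.
    # The unique maximizer is x = 1, with value -1.
    import numpy as np
    from scipy.optimize import minimize_scalar

    neg_f = lambda x: -(np.log(x) - x)                    # convex on x > 0
    res = minimize_scalar(neg_f, bounds=(1e-6, 10.0), method="bounded")
    print(res.x, -res.fun)                                # approximately 1.0 and -1.0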

Definition

Given a real vector space X together with a convex, real-valued function

    f : C → ℝ

defined on a convex subset C of X, the problem is to find any point x* in C for which the number f(x*) is smallest, i.e., a point x* such that

    f(x*) ≤ f(x) for all x ∈ C.

The convexity of f and of its domain C makes the powerful tools of convex analysis applicable. In finite-dimensional normed spaces, the Hahn–Banach theorem and the existence of subgradients lead to a particularly satisfying theory of necessary and sufficient conditions for optimality, a duality theory generalizing that for linear programming, and effective computational methods.
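
For a differentiable convex f this sufficiency of first-order conditions can be written out explicitly; the following is a standard argument, sketched here rather than quoted from the sources cited above:

    \[
    f(y) \;\ge\; f(x^\ast) + \nabla f(x^\ast)^{\mathsf T}(y - x^\ast)
    \qquad \text{for all } y \in C \quad \text{(gradient inequality for convex } f\text{)},
    \]
    \[
    \text{so if } \nabla f(x^\ast)^{\mathsf T}(y - x^\ast) \ge 0 \text{ for all } y \in C,
    \text{ then } f(y) \ge f(x^\ast) \text{ for all } y \in C.
    \]

Thus the first-order condition alone certifies that x* is a global minimizer over C.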

Convex optimization problem

The general form of an optimization problem (also referred to as a mathematical programming problem or minimization problem) is to find some x* ∈ C such that

    f(x*) = min { f(x) : x ∈ C }

for some feasible set C ⊂ ℝⁿ and objective function f : ℝⁿ → ℝ. The optimization problem is called a convex optimization problem if C is a convex set and f is a convex function defined on ℝⁿ.[5][6]

Alternatively, an optimization problem of the form

    minimize    f(x)
    subject to  g_i(x) ≤ 0,   i = 1, …, m

is called convex if the functions f, g_1, …, g_m : ℝⁿ → ℝ are all convex functions.[7]

Standard form

Standard form is the usual and most intuitive form of describing a convex minimization problem. It consists of the following three parts:

  - a convex function f : ℝⁿ → ℝ to be minimized over the variable x;
  - inequality constraints of the form g_i(x) ≤ 0, where the functions g_i are convex;
  - equality constraints of the form h_i(x) = 0, where the functions h_i are affine; such constraints can be written as h_i(x) = a_iᵀx + b_i for a vector a_i and a scalar b_i.

A convex minimization problem is thus written as

    minimize    f(x)
    subject to  g_i(x) ≤ 0,   i = 1, …, m
                h_i(x) = 0,   i = 1, …, p.

Note that every equality constraint h(x) = 0 can be equivalently replaced by the pair of inequality constraints h(x) ≤ 0 and −h(x) ≤ 0. Therefore, for theoretical purposes, equality constraints are redundant; however, it can be beneficial to treat them specially in practice.

Following from this fact, it is easy to understand why each h_i has to be affine rather than merely convex. If h_i is convex, the inequality constraint h_i(x) ≤ 0 is convex, but the inequality constraint −h_i(x) ≤ 0 is not, because −h_i is concave. Therefore, the only way for the equality constraint h_i(x) = 0 to be convex is for h_i to be affine.
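
As a minimal modeling sketch of this standard form (assuming the CVXPY library and an installed default solver; the problem data are arbitrary illustration values):

    # Convex objective, one convex inequality constraint g(x) <= 0,
    # and one affine equality constraint h(x) = 0, in the standard form above.
    import cvxpy as cp
    import numpy as np

    x = cp.Variable(2)
    f = cp.sum_squares(x - np.array([3.0, 2.0]))      # convex objective
    g = cp.norm(x, 2) - 2.0                           # convex; constraint g <= 0
    h = x[0] + x[1] - 1.0                             # affine; constraint h == 0

    prob = cp.Problem(cp.Minimize(f), [g <= 0, h == 0])
    prob.solve()
    print(prob.status, prob.value, x.value)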

Theory

The following statements are true about the convex minimization problem:

  - if a local minimum exists, then it is a global minimum;
  - the set of all (global) minima is convex;
  - if the objective function is strictly convex, then there exists at most one minimum.

These results are used by the theory of convex minimization along with geometric notions from functional analysis (in Hilbert spaces) such as the Hilbert projection theorem, the separating hyperplane theorem, and Farkas' lemma.
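
The first of these statements can be verified with a short standard argument (a sketch written for this list, not drawn from the cited references): if x* were a local minimum and some y in the feasible set C had a strictly smaller value, convexity would force strictly smaller values arbitrarily close to x*:

    \[
    z_t = (1-t)\,x^\ast + t\,y \in C, \qquad
    f(z_t) \;\le\; (1-t)\,f(x^\ast) + t\,f(y) \;<\; f(x^\ast)
    \qquad \text{for all } t \in (0,1],
    \]
    \[
    \text{and } z_t \to x^\ast \text{ as } t \to 0,\ \text{contradicting local minimality; hence } x^\ast \text{ is a global minimum.}
    \]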

Examples

The following problems are all convex minimization problems, or can be transformed into convex minimization problems via a change of variables:

  - least squares;
  - linear programming;
  - convex quadratic minimization with linear constraints;
  - quadratic minimization with convex quadratic constraints;
  - conic optimization;
  - geometric programming;
  - second-order cone programming;
  - semidefinite programming;
  - entropy maximization with appropriate constraints.
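
As one concrete instance from this list, unconstrained least squares is a convex minimization problem whose minimizer can be computed in closed form; below is a small sketch with randomly generated illustration data (assuming NumPy):

    # Least squares: minimize ||Ax - b||_2^2, an unconstrained convex problem.
    # Its minimizer solves the normal equations A^T A x = A^T b.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 3))
    b = rng.standard_normal(20)

    x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]    # library least-squares solution
    x_normal = np.linalg.solve(A.T @ A, A.T @ b)      # normal-equations solution
    print(np.allclose(x_lstsq, x_normal))             # True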

Lagrange multipliers

Consider a convex minimization problem given in standard form by a cost function f(x) and inequality constraints g_i(x) ≤ 0 for 1 ≤ i ≤ m. Then the domain C is:

    C = { x : g_1(x) ≤ 0, …, g_m(x) ≤ 0 }.

The Lagrangian function for the problem is

    L(x, λ_0, λ_1, …, λ_m) = λ_0 f(x) + λ_1 g_1(x) + ⋯ + λ_m g_m(x).

For each point x in C that minimizes f over C, there exist real numbers λ_0, λ_1, …, λ_m, called Lagrange multipliers, that satisfy these conditions simultaneously:

  1. x minimizes L(y, λ_0, λ_1, …, λ_m) over all y (over the whole space, not merely over C),
  2. λ_0 ≥ 0, λ_1 ≥ 0, …, λ_m ≥ 0, with at least one λ_k > 0,
  3. λ_1 g_1(x) = 0, …, λ_m g_m(x) = 0 (complementary slackness).

If there exists a "strictly feasible point", that is, a point z satisfying

    g_1(z) < 0, …, g_m(z) < 0,

then the statement above can be strengthened to require that λ_0 = 1.

Conversely, if some x in C satisfies conditions (1)–(3) for scalars λ_0, λ_1, …, λ_m with λ_0 = 1, then x is certain to minimize f over C.
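
A short worked instance of conditions (1)–(3), constructed here for illustration (it is not taken from the cited sources): minimize f(x) = x² subject to the single inequality constraint g_1(x) = 1 − x ≤ 0.

    \[
    L(x, \lambda_0, \lambda_1) = \lambda_0\, x^2 + \lambda_1\,(1 - x); \qquad
    \text{take } \lambda_0 = 1,\ \lambda_1 = 2:
    \quad \frac{\partial L}{\partial x} = 2x - 2 = 0 \;\Rightarrow\; x = 1,
    \]
    \[
    \lambda_1\, g_1(1) = 2\,(1 - 1) = 0, \qquad
    g_1(2) = -1 < 0 \ \text{(a strictly feasible point exists)},
    \]

so conditions (1)–(3) hold with λ_0 = 1, and x = 1 indeed minimizes x² over {x : x ≥ 1}.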

Methods

Convex minimization problems can be solved by the following contemporary methods:[8]

  - bundle methods (Wolfe, Lemaréchal, Kiwiel);
  - subgradient projection methods (Polyak);
  - interior-point methods (Nemirovskii and Nesterov).

Other methods of interest include:

  - cutting-plane methods;
  - the ellipsoid method;
  - the subgradient method;
  - dual subgradient methods and the drift-plus-penalty method.

Subgradient methods can be implemented simply and so are widely used.[9] Dual subgradient methods are subgradient methods applied to a dual problem. The drift-plus-penalty method is similar to the dual subgradient method, but takes a time average of the primal variables.
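
A minimal sketch of a subgradient method with a divergent-series step-size rule (the objective, starting point, and step rule here are illustrative choices, not taken from the references):

    # Subgradient method with step sizes 1/(k+1) (a divergent series)
    # on the nonsmooth convex function f(x) = |x_1 - 1| + |x_2 + 2|.
    import numpy as np

    target = np.array([1.0, -2.0])
    f = lambda x: np.sum(np.abs(x - target))
    subgrad = lambda x: np.sign(x - target)           # one valid subgradient at x

    x = np.zeros(2)
    best = x.copy()
    for k in range(5000):
        x = x - (1.0 / (k + 1)) * subgrad(x)          # divergent-series step size
        if f(x) < f(best):
            best = x.copy()                           # subgradient steps are not monotone
    print(best, f(best))                              # best approaches (1, -2)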

Convex minimization with good complexity: Self-concordant barriers

The efficiency of iterative methods is poor for the class of all convex problems, because this class includes "bad guys" whose minimum cannot be approximated without a large number of function and subgradient evaluations;[10] thus, to obtain practically appealing efficiency results, it is necessary to make additional restrictions on the class of problems. Two such classes are problems with special barrier functions: first, self-concordant barrier functions, following the theory of Nesterov and Nemirovskii, and second, self-regular barrier functions, following the theory of Terlaky and coauthors.

Quasiconvex minimization

Problems with convex level sets can be efficiently minimized, in theory. Yurii Nesterov proved that quasi-convex minimization problems could be solved efficiently, and his results were extended by Kiwiel.[11] However, such theoretically "efficient" methods use "divergent-series" stepsize rules, which were first developed for classical subgradient methods. Classical subgradient methods using divergent-series rules are much slower than modern methods of convex minimization, such as subgradient projection methods, bundle methods of descent, and nonsmooth filter methods.

Solving even close-to-convex but non-convex problems can be computationally intractable. Minimizing a unimodal function is intractable, regardless of the smoothness of the function, according to results of Ivanov.[12]

Convex maximization

Conventionally, the definition of a convex optimization problem (we recall) requires that the objective function f to be minimized and the feasible set be convex. In the special case of linear programming (LP), the objective function is both concave and convex, and so LP can also consider the problem of maximizing an objective function without confusion. However, for most convex minimization problems, the objective function is not concave, and therefore maximizing it is not a convex optimization problem as just defined; such problems are instead formulated in the standard form of convex optimization problems, that is, by minimizing the convex objective function.

For nonlinear convex minimization, the associated maximization problem obtained by substituting the supremum operator for the infimum operator is not a problem of convex optimization, as conventionally defined. However, it is studied in the larger field of convex optimization as a problem of convex maximization.[13]

The convex maximization problem is especially important for studying the existence of maxima. Consider the restriction of a convex function to a compact convex set: Then, on that set, the function attains its constrained maximum only on the boundary.[14] Such results, called "maximum principles", are useful in the theory of harmonic functions, potential theory, and partial differential equations.
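
A small numerical illustration of this maximum principle (an example constructed here, not from the cited source): the convex function f(x, y) = x² + y² over the square [−1, 2] × [−1, 2] attains its maximum at a corner of the square, never in the interior.

    # Compare the values of a convex function at the vertices of a box with
    # its values on a fine grid over the box; the maximum is at a vertex.
    import itertools
    import numpy as np

    f = lambda p: p[0] ** 2 + p[1] ** 2
    vertices = list(itertools.product([-1.0, 2.0], repeat=2))

    best_vertex = max(vertices, key=f)
    grid_max = max(f(p) for p in itertools.product(np.linspace(-1, 2, 61), repeat=2))
    print(best_vertex, f(best_vertex))                # (2.0, 2.0) with value 8.0
    print(grid_max <= f(best_vertex))                 # True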

The problem of minimizing a quadratic multivariate polynomial on a cube is NP-hard.[15] In fact, the quadratic minimization problem is NP-hard even when the matrix has only one negative eigenvalue.[16]

Extensions

Advanced treatments consider convex functions that can also attain positive infinity; the indicator function of convex analysis is zero for every x in a given convex set C and positive infinity otherwise.
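
In symbols (a standard formulation; the name I_C is introduced here for illustration), the indicator function of a convex set C, and its use in folding a constraint into the objective, read:

    \[
    I_C(x) =
    \begin{cases}
    0, & x \in C, \\
    +\infty, & x \notin C,
    \end{cases}
    \qquad\qquad
    \min_{x \in C} f(x) \;=\; \min_{x}\, \bigl( f(x) + I_C(x) \bigr).
    \]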

Extensions of convex functions include biconvex, pseudo-convex, and quasi-convex functions. Partial extensions of the theory of convex analysis and iterative methods for approximately solving non-convex minimization problems occur in the field of generalized convexity ("abstract convex analysis").

See also

Notes

  1. Rockafellar, R. Tyrrell (1993). "Lagrange multipliers and optimality" (PDF). SIAM Review. 35 (2): 183–238.
  2. Boyd/Vandenberghe, p. 17.
  3. Christensen/Klarbring, chapter 4.
  4. Boyd/Vandenberghe, chapter 7.
  5. Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1996). Convex analysis and minimization algorithms: Fundamentals. p. 291.
  6. Ben-Tal, Aharon; Nemirovskiĭ, Arkadiĭ Semenovich (2001). Lectures on modern convex optimization: analysis, algorithms, and engineering applications. pp. 335–336.
  7. Boyd/Vandenberghe, p. 7
  8. For methods for convex minimization, see the volumes by Hiriart-Urruty and Lemaréchal (bundle) and the textbooks by Ruszczyński, Bertsekas, and Boyd and Vandenberghe (interior point).
  9. Bertsekas
  10. Hiriart-Urruty & Lemaréchal (1993, Example XV.1.1.2, p. 277) discuss a "bad guy" constructed by Arkadi Nemirovskii.
  11. In theory, quasiconvex programming and convex programming problems can be solved in a reasonable amount of time, where the number of iterations grows like a polynomial in the dimension of the problem (and in the reciprocal of the approximation error tolerated):

    Kiwiel, Krzysztof C. (2001). "Convergence and efficiency of subgradient methods for quasiconvex minimization". Mathematical Programming (Series A). 90 (1). Berlin, Heidelberg: Springer. pp. 1–25. ISSN 0025-5610. MR 1819784. doi:10.1007/PL00011414. Kiwiel acknowledges that Yurii Nesterov first established that quasiconvex minimization problems can be solved efficiently.

  12. Nemirovskii and Judin
  13. Convex maximization is mentioned in the subsection on convex optimization in this textbook: Ulrich Faigle, Walter Kern, and George Still. Algorithmic principles of mathematical programming. Springer-Verlag. Texts in Mathematics. Chapter 10.2, Subsection "Convex optimization", pages 205-206.
  14. Theorem 32.1 in Rockafellar's Convex Analysis states this maximum principle for extended real-valued functions.
  15. Sahni, S. (1974). "Computationally related problems". SIAM Journal on Computing. 3: 262–279.
  16. Pardalos, Panos M.; Vavasis, Stephen A. (1991). "Quadratic programming with one negative eigenvalue is NP-hard". Journal of Global Optimization. 1 (1): 15–22.

References
