Linear matrix inequality

In convex optimization, a linear matrix inequality (LMI) is an expression of the form

LMI(y) := A_0 + y_1 A_1 + y_2 A_2 + \cdots + y_m A_m \geq 0

where

  • y = [y_i,\ i = 1, \dots, m] is a real vector,
  • A_0, A_1, A_2, \dots, A_m are n\times n symmetric matrices, i.e., elements of \mathbb{S}^n,
  • B \geq 0 is a generalized inequality meaning that B is a positive semidefinite matrix, i.e., B belongs to the positive semidefinite cone \mathbb{S}_+^n within the space of symmetric matrices \mathbb{S}^n.

This linear matrix inequality specifies a convex constraint on y.
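
The affinity of LMI(y) in y, together with the convexity of the positive semidefinite cone, is what makes the constraint convex. Concretely, if LMI(y) \geq 0 and LMI(z) \geq 0, then for any 0 \leq \lambda \leq 1

LMI(\lambda y + (1-\lambda) z) = A_0 + \sum_{i=1}^m (\lambda y_i + (1-\lambda) z_i) A_i = \lambda\,LMI(y) + (1-\lambda)\,LMI(z) \geq 0,

since a convex combination of positive semidefinite matrices is again positive semidefinite.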

Applications

There are efficient numerical methods to determine whether an LMI is feasible (i.e., whether there exists a vector y such that LMI(y) \geq 0), or to solve a convex optimization problem with LMI constraints. Many optimization problems in control theory, system identification and signal processing can be formulated using LMIs. LMIs also find application in Polynomial SOS. The prototypical primal and dual semidefinite programs minimize a real linear function subject to the primal and dual convex cones associated with this LMI, respectively.
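
For illustration, such a feasibility problem can be handed directly to a semidefinite-programming solver. The following is a minimal sketch in Python, assuming the NumPy and CVXPY packages are available; the data matrices A0, A1, A2 and the problem size are made up for this example and are not taken from any particular application.

import numpy as np
import cvxpy as cp

# Illustrative symmetric data matrices (n = 2, m = 2); made-up example data.
A0 = np.array([[1.0, 0.0],
               [0.0, 1.0]])
A1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
A2 = np.array([[1.0, 0.0],
               [0.0, -1.0]])

y = cp.Variable(2)  # decision vector y = (y_1, y_2)

# LMI(y) = A_0 + y_1 A_1 + y_2 A_2 must be positive semidefinite.
lmi_expr = A0 + y[0] * A1 + y[1] * A2
constraints = [lmi_expr >> 0]  # ">> 0" is CVXPY's semidefinite inequality

# Feasibility is posed as optimization of a constant objective.
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()

print(problem.status)  # "optimal" indicates a feasible y was found
print(y.value)         # one feasible point (here y = (0, 0) works, since A0 is the identity)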

Solving LMIs

A major breakthrough in convex optimization was the introduction of interior-point methods. These methods were developed in a series of papers and attracted particular interest in the context of LMI problems through the work of Yurii Nesterov and Arkadii Nemirovskii.
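
For LMI problems, such interior-point methods are typically built around the logarithmic barrier for the positive semidefinite cone, a standard choice being

\phi(y) = -\log\det\big(LMI(y)\big),

which is convex and smooth on the strictly feasible set \{y : LMI(y) > 0\} and grows without bound as LMI(y) approaches singularity.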

References

  • Y. Nesterov and A. Nemirovskii, Interior Point Polynomial Methods in Convex Programming. SIAM, 1994.
