Ellipsoid method

In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions. When specialized to solving feasible linear optimization problems with rational data, the ellipsoid method is an algorithm that finds an optimal solution in a finite number of steps.

The ellipsoid method generates a sequence of ellipsoids whose volume uniformly decreases at every step, thus enclosing a minimizer of a convex function.

History

The ellipsoid method has a long history. As an iterative method, a preliminary version was introduced by Naum Z. Shor. In 1972, an approximation algorithm for real convex minimization was studied by Arkadi Nemirovski and David B. Yudin (Judin). As an algorithm for solving linear programming problems with rational data, the ellipsoid algorithm was studied by Leonid Khachiyan: Khachiyan's achievement was to prove the polynomial-time solvability of linear programs.

Following Khachiyan's work, the ellipsoid method was the only algorithm for solving linear programs whose runtime was provably polynomial. However, the interior-point method and variants of the simplex algorithm are much faster than the ellipsoid method in practice. Karmarkar's algorithm is also faster in the worst case.

However, the ellipsoid method allows complexity theorists to achieve (worst-case) bounds that depend on the dimension of the problem and on the size of the data, but not on the number of rows, so it remains important in combinatorial optimization theory.[1][2][3][4]

Description

A convex minimization problem consists of a convex function f_0(x): \mathbb{R}^n \to \mathbb{R} to be minimized over the variable x, convex inequality constraints of the form f_i(x) \leq 0, where the functions f_i are convex, and linear equality constraints of the form h_i(x) = 0. We are also given an initial ellipsoid \mathcal{E}^{(0)} \subset \mathbb{R}^n defined as

\mathcal{E}^{(0)} = \left \{z \in \mathbb{R}^n : (z - x_0)^T P_{(0)}^{-1} (z-x_0) \leq 1 \right \}

containing a minimizer x^*, where P_{(0)} \succ 0 and x_0 is the center of \mathcal{E}^{(0)}. Finally, we require the existence of a cutting-plane oracle for the function f. One example of a cutting-plane is given by a subgradient g of f.
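As a concrete illustration, a subgradient can serve as a cutting-plane oracle for a piecewise-linear convex function f(x) = max_i (a_i^T x + b_i): the gradient of whichever affine piece is active at x is a subgradient. The following sketch (Python/NumPy and the particular data A, b are illustrative assumptions, not from the article) returns f(x) and such a subgradient g; any minimizer then lies in the half-space {z : g^T (z - x) \leq 0}.

```python
import numpy as np

def cutting_plane_oracle(x):
    """Oracle for f(x) = max_i (a_i . x + b_i): return f(x) and a subgradient g.

    The minimizer x* satisfies g . (x* - x) <= 0, so g defines a cutting plane
    through the query point x."""
    # Illustrative data for the example (an assumption, not prescribed above).
    A = np.array([[1.0, 2.0], [-1.0, 0.5], [0.0, -1.0]])
    b = np.array([0.0, 1.0, 0.5])
    vals = A @ x + b
    i = int(np.argmax(vals))       # index of the active affine piece at x
    return vals[i], A[i]           # its gradient is a subgradient of f at x

f_val, g = cutting_plane_oracle(np.array([0.3, -0.2]))
```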

Unconstrained Minimization

At the k-th iteration of the algorithm, we have a point x^{(k)} at the center of an ellipsoid \mathcal{E}^{(k)} = \left \{x \in \mathbb{R}^n : (x-x^{(k)})^T P_{(k)}^{-1} (x-x^{(k)}) \leq 1 \right \}. We query the cutting-plane oracle to obtain a vector g^{(k+1)} \in \mathbb{R}^n such that g^{(k+1)T} (x^* - x^{(k)}) \leq 0. We therefore conclude that

x^* \in \mathcal{E}^{(k)} \cap \{z: g^{(k+1)T} (z - x^{(k)}) \leq 0 \}.

We set \mathcal{E}^{(k+1)} to be the ellipsoid of minimal volume containing the half-ellipsoid described above and compute x^{(k+1)}. The update is given by

x^{(k+1)} = x^{(k)} - \frac{1}{n+1} P_{(k)} \tilde{g}^{(k+1)}
P_{(k+1)} = \frac{n^2}{n^2-1} \left(P_{(k)} - \frac{2}{n+1} P_{(k)} \tilde{g}^{(k+1)} \tilde{g}^{(k+1)T} P_{(k)} \right )

where \tilde{g}^{(k+1)} = \left(1/\sqrt{g^{(k+1)T} P_{(k)} g^{(k+1)}}\right)g^{(k+1)}. The stopping criterion is given by the property that

\sqrt{g^{(k)T}P_{(k)}g^{(k)}} \leq \epsilon \Rightarrow f(x^{(k)}) - f(x^*) \leq \epsilon.
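The update and stopping rule above translate directly into code. The following is a minimal, unoptimized sketch in Python/NumPy; the oracle interface, the quadratic test function, and the values of eps and the iteration cap are illustrative assumptions, not part of the article.

```python
import numpy as np

def ellipsoid_minimize(oracle, x0, P0, eps=1e-6, max_iter=500):
    """Minimize a convex f given oracle(x) -> (f(x), subgradient g),
    starting from the ellipsoid { z : (z-x0)^T P0^{-1} (z-x0) <= 1 }."""
    n = len(x0)
    x, P = x0.astype(float), P0.astype(float)
    best_f, best_x = oracle(x)[0], x.copy()
    for _ in range(max_iter):
        f, g = oracle(x)
        if f < best_f:
            best_f, best_x = f, x.copy()
        denom2 = g @ P @ g
        if denom2 <= eps * eps:          # stopping rule: f(x) - f(x*) <= eps
            break
        gt = g / np.sqrt(denom2)         # normalized subgradient g~
        Pg = P @ gt
        x = x - Pg / (n + 1)             # center update
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
        P = (P + P.T) / 2                # keep P symmetric against round-off
    return best_x, best_f

# Usage: minimize f(x) = ||x - c||^2 (subgradient 2(x - c)); c is made up.
c = np.array([1.0, -2.0])
xs, fs = ellipsoid_minimize(lambda x: ((x - c) @ (x - c), 2 * (x - c)),
                            x0=np.zeros(2), P0=25.0 * np.eye(2))
```

The initial ellipsoid (a ball of radius 5 around the origin) must contain the minimizer c, as the description requires.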

(Figure: a sample sequence of iterates.)

Inequality-Constrained Minimization

At the k-th iteration of the algorithm for constrained minimization, we have a point x^{(k)} at the center of an ellipsoid \mathcal{E}^{(k)} as before. We also must maintain a list of values f_{\rm{best}}^{(k)} recording the smallest objective value of feasible iterates so far. Depending on whether or not the point x^{(k)} is feasible, we perform one of two tasks:

1. If x^{(k)} is feasible, perform essentially the same update as in the unconstrained case, by choosing a subgradient g_0 of f_0 that satisfies

g_0^T(x^*-x^{(k)}) + f_0(x^{(k)}) - f_{\rm{best}}^{(k)} \leq 0.

2. If x^{(k)} is infeasible and violates the j-th constraint, update the ellipsoid with a feasibility cut: a subgradient g_j of f_j, which must satisfy

g_j^T(z-x^{(k)}) + f_j(x^{(k)}) \leq 0

for all feasible z.
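The two-case scheme above can be sketched as follows. For simplicity this sketch uses a plain central objective cut (the bare subgradient g_0) rather than the deep cut involving f_{\rm{best}}; the test problem, tolerances, and function signatures are assumptions made for illustration.

```python
import numpy as np

def ellipsoid_constrained(f0, grad0, cons, x0, P0, eps=1e-6, max_iter=500):
    """Ellipsoid method with feasibility cuts.

    cons is a list of (f_j, grad_j) pairs for constraints f_j(x) <= 0;
    grad0 returns a subgradient of the objective f0."""
    n = len(x0)
    x, P = x0.astype(float), P0.astype(float)
    f_best, x_best = np.inf, None
    for _ in range(max_iter):
        g = None
        for fj, gj in cons:                  # look for a violated constraint
            if fj(x) > 0:
                g = gj(x)                    # case 2: feasibility cut
                break
        if g is None:                        # case 1: x feasible, objective cut
            fx = f0(x)
            if fx < f_best:
                f_best, x_best = fx, x.copy()
            g = grad0(x)
        denom2 = g @ P @ g
        if denom2 <= eps * eps:
            break
        gt = g / np.sqrt(denom2)
        Pg = P @ gt
        x = x - Pg / (n + 1)
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
        P = (P + P.T) / 2
    return x_best, f_best

# Usage: minimize x1 + x2 over the unit disk ||x||^2 <= 1 (optimum -sqrt(2)).
x_star, f_star = ellipsoid_constrained(
    f0=lambda x: x[0] + x[1], grad0=lambda x: np.array([1.0, 1.0]),
    cons=[(lambda x: x @ x - 1.0, lambda x: 2 * x)],
    x0=np.zeros(2), P0=9.0 * np.eye(2))
```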

Application to Linear Programming
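For linear programming the method is typically run as a feasibility search: given constraints Ax \leq b whose feasible region (if nonempty) lies inside a known ball of radius R, a violated row a_i supplies a cut, since every feasible z satisfies a_i^T z \leq b_i < a_i^T x^{(k)}. A minimal sketch follows; the radius R, the tolerance eps, and the example system are assumptions made for illustration, whereas Khachiyan's full analysis derives such bounds from the bit size of the rational data to obtain the polynomial-time guarantee.

```python
import numpy as np

def lp_feasible_point(A, b, R=10.0, eps=1e-7, max_iter=2000):
    """Search for x with A x <= b inside the ball of radius R about the origin."""
    m, n = A.shape
    x = np.zeros(n)
    P = R * R * np.eye(n)                # initial ellipsoid: ball of radius R
    for _ in range(max_iter):
        viol = A @ x - b
        i = int(np.argmax(viol))
        if viol[i] <= 0:
            return x                     # all constraints satisfied
        g = A[i]                         # feasibility cut from the violated row
        denom2 = g @ P @ g
        if denom2 <= eps * eps:
            return None                  # ellipsoid too small: declare infeasible
        gt = g / np.sqrt(denom2)
        Pg = P @ gt
        x = x - Pg / (n + 1)
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
        P = (P + P.T) / 2
    return None

# Usage: x1 >= 1, x2 >= 1, x1 + x2 <= 3 (a small feasible system, made up).
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([-1.0, -1.0, 3.0])
x_feas = lp_feasible_point(A, b)
```

Because every cut preserves the feasible region while the ellipsoid volume shrinks by a fixed factor per step, the center must become feasible (or the region must be declared empty) after a bounded number of iterations.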

Performance

The ellipsoid method is used on low-dimensional problems, such as planar location problems, where it is numerically stable. Even on "small"-sized problems, it suffers from numerical instability and poor performance in practice.

However, the ellipsoid method is an important theoretical technique in combinatorial optimization. In computational complexity theory, the ellipsoid algorithm is attractive because its complexity depends on the number of columns and the digital size of the coefficients, but not on the number of rows.

References

  1. ^ M. Grötschel, L. Lovász, A. Schrijver: Geometric Algorithms and Combinatorial Optimization, Springer, 1988.
  2. ^ L. Lovász: An Algorithmic Theory of Numbers, Graphs, and Convexity, CBMS-NSF Regional Conference Series in Applied Mathematics 50, SIAM, Philadelphia, Pennsylvania, 1986.
  3. ^ V. Chandru and M. R. Rao, Linear Programming, Chapter 31 in Algorithms and Theory of Computation Handbook, edited by M. J. Atallah, CRC Press 1999, 31-1 to 31-37.
  4. ^ V. Chandru and M. R. Rao, Integer Programming, Chapter 32 in Algorithms and Theory of Computation Handbook, edited by M. J. Atallah, CRC Press 1999, 32-1 to 32-45.
