Frank–Wolfe algorithm


The Frank–Wolfe algorithm, also known as the convex combination algorithm, is a classic algorithm in operations research (OR). It was originally proposed by Marguerite Frank and Philip Wolfe in 1956 as a procedure for solving quadratic programming problems with linear constraints. At each step the objective function is linearized, and a step is then taken in a direction that reduces the objective while maintaining feasibility. The algorithm can be seen as a generalization of the primal simplex algorithm for linear programming. In recent years it has been widely used for determining the equilibrium flows in transportation networks.

Problem statement

Minimize  f(\mathbf{x}) = \frac{1}{2} \mathbf{x}^{\mathrm{T}} E\mathbf{x} +  \mathbf{h}^{\mathrm{T}} \mathbf{x}
subject to  \mathbf{x} \in \mathbf{P},

where the n×n matrix E is positive semidefinite, \mathbf{h} is an n×1 vector, and \mathbf{P} represents a feasible region defined by a mix of linear inequality and equality constraints (for example A\mathbf{x} \le \mathbf{b}, C\mathbf{x} = \mathbf{d}).
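For illustration, a small concrete instance of this problem can be written down directly. The following Python sketch (a hypothetical example, not part of the original formulation) defines f, its gradient, and a unit-box feasible region \mathbf{P} expressed as A\mathbf{x} \le \mathbf{b}:

import numpy as np

# Hypothetical two-variable instance of the quadratic program above:
# f(x) = 1/2 x^T E x + h^T x, minimized over the unit box 0 <= x_i <= 1.
E = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # positive definite, hence also positive semidefinite
h = np.array([-1.0, -1.0])

def f(x):
    # Objective value 1/2 x^T E x + h^T x.
    return 0.5 * x @ E @ x + h @ x

def grad_f(x):
    # Gradient of f: E x + h.
    return E @ x + h

# Feasible region P = {x : A x <= b}, here the unit box [0, 1]^2.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])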

Algorithm

Step 1. Initialization. Let k \leftarrow 0 and let x_0 be any point in \mathbf{P}.

Step 2. Convergence test. If \nabla f(x_k)=0, then stop; x_k is a minimum point.

Step 3. Direction-finding subproblem. Approximate the problem by replacing the function f with its first-order Taylor expansion around x_k, and solve the resulting linearized problem for \bar{x}_k:

Minimize  f(x_k) + \nabla f(x_k)^{\mathrm{T}} (\bar{x}_k - x_k)
subject to  \bar{x}_k \in \mathbf{P}
(Note that this is a linear program: x_k is fixed during Step 3, the minimization is over \bar{x}_k, and since the terms not involving \bar{x}_k are constant, the subproblem is equivalent to minimizing \nabla f(x_k)^{\mathrm{T}} \bar{x}_k over \mathbf{P}.)
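Because the subproblem is a linear program over \mathbf{P}, any LP solver can be used for Step 3. A minimal sketch, continuing the hypothetical instance above and assuming \mathbf{P} is given by the inequalities A\mathbf{x} \le \mathbf{b} (scipy.optimize.linprog is one possible solver):

from scipy.optimize import linprog

def direction_finding(x_k, A, b):
    # Step 3: minimize grad_f(x_k)^T s over s in P.
    # The constant terms of the linearization do not affect the minimizer,
    # so only the linear cost grad_f(x_k) is passed to the LP solver.
    c = grad_f(x_k)
    res = linprog(c, A_ub=A, b_ub=b, bounds=(None, None))
    return res.x                      # a vertex of P minimizing the linearization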

Step 4. Step size determination. Find \lambda that minimizes  f(x_k+\lambda(\bar{x}_k-x_k)) subject to 0 \le \lambda \le 1. If \lambda = 0, then stop; x_k is a minimum point.
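For the quadratic objective above, this one-dimensional minimization has a closed form: writing d = \bar{x}_k - x_k, the derivative of f(x_k + \lambda d) with respect to \lambda is \nabla f(x_k)^{\mathrm{T}} d + \lambda\, d^{\mathrm{T}} E d, so the minimizer over [0, 1] is -\nabla f(x_k)^{\mathrm{T}} d / (d^{\mathrm{T}} E d) clipped to that interval. A sketch, continuing the hypothetical instance above:

def step_size(x_k, s_k):
    # Step 4: exact line search for the quadratic objective along d = s_k - x_k.
    d = s_k - x_k
    slope = grad_f(x_k) @ d            # derivative at lambda = 0
    curvature = d @ E @ d              # nonnegative since E is positive semidefinite
    if curvature <= 1e-12:             # f is (nearly) linear along d
        return 1.0 if slope < 0 else 0.0
    return float(np.clip(-slope / curvature, 0.0, 1.0))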

Step 5. Update. Let x_{k+1}\leftarrow x_k+\lambda(\bar{x}_k-x_k), let k \leftarrow k+1 and go back to Step 2.
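Putting the steps together, a minimal sketch of the whole iteration, using the hypothetical instance and helper functions above (the tolerance and iteration cap are illustrative choices, not part of the algorithm as stated):

def frank_wolfe(x0, A, b, max_iter=100, tol=1e-8):
    # Step 1: start from a feasible point x0.
    x_k = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Step 2: stop if the gradient (practically) vanishes.
        if np.linalg.norm(grad_f(x_k)) <= tol:
            break
        # Step 3: direction-finding linear program.
        s_k = direction_finding(x_k, A, b)
        # Step 4: step size by exact line search on [0, 1].
        lam = step_size(x_k, s_k)
        if lam == 0.0:
            break
        # Step 5: update and repeat.
        x_k = x_k + lam * (s_k - x_k)
    return x_k

# Example usage: start from the vertex (0, 0) of the unit box.
x_approx = frank_wolfe(np.zeros(2), A, b)
print(x_approx, f(x_approx))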

Comments

The algorithm generally makes good progress towards the optimum during the first few iterations, but convergence often slows down substantially as the iterates approach the minimum point. For this reason the algorithm is perhaps best used to find an approximate solution. It can be shown that the worst-case convergence rate is sublinear; in practice, however, the convergence rate has been observed to improve when the problem has many constraints.[1]

References

Notes

  1. ^ "Nonlinear Programming", Dimitri Bertsekas, 2003, page 222. Athena Scientific, ISBN 1-886529-00-0.