Robust optimization

In mathematics, robust optimization is an approach to optimization under uncertainty. It is similar to the recourse model of stochastic programming in that some of the parameters are random variables, except that feasibility for all possible realizations (called scenarios) is replaced by a penalty function in the objective. As such, the approach integrates goal programming with a scenario-based description of the problem data. To illustrate, consider the LP:

\min cx + dy: \ Ax = b, \ Bx + Cy = e, \ x, y \ge 0,

where d, B, C and e are random variables with possible realizations \{(d(s), B(s), C(s), e(s)): s \in \{1,...,N\}\}, where N is the number of scenarios. The robust optimization model for this LP is:

\min f(x, y(1), ..., y(N)) + wP(z(1), ..., z(N)): \ Ax = b, \ x \ge 0,
B(s)x + C(s)y(s) + z(s) = e(s), \ y(s) \ge 0, \quad \forall s = 1,...,N,

where f is a function that measures the cost of the policy, P is a penalty function, and w > 0 is a parameter that governs the trade-off between cost and infeasibility. One example of f is the expected value: \ f(x, y) = cx + \sum_s{d(s)y(s)p(s)}, where p(s) is the probability of scenario s. In a worst-case model, \ f(x,y) = \max_s{d(s)y(s)}. The penalty function is defined to be zero if (x, y) is feasible for all scenarios -- i.e., P(0) = 0. In addition, P satisfies a form of monotonicity: worse violations incur a greater penalty. It often has the form \ P(z) = U(z^+) + V(-z^-) -- i.e., separate "up" and "down" penalties, where U and V are strictly increasing functions.
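A minimal sketch of this penalty-based model, assuming the expected-value choice of f and a simple linear penalty (u and v are hypothetical per-unit "up" and "down" rates, and all numerical data are made-up illustrative values, not from the source). It builds the deterministic equivalent LP over x, the y(s), and split variables z(s) = z^+(s) - z^-(s), and solves it with SciPy's linprog:

import numpy as np
from scipy.optimize import linprog

# Deterministic data: min c x  s.t.  A x = b, x >= 0  (first-stage part)
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([4.0])

# Scenario data (N = 2): realizations d(s), B(s), C(s), e(s) with probabilities p(s)
d = [np.array([3.0]), np.array([1.0])]
B = [np.array([[2.0, 0.5]]), np.array([[1.0, 1.5]])]
C = [np.array([[1.0]]), np.array([[1.0]])]
e = [np.array([6.0]), np.array([5.0])]
p = [0.5, 0.5]

w, u, v = 10.0, 1.0, 1.0          # penalty weight and linear "up"/"down" rates
N, n_x, n_y, m = 2, 2, 1, 1       # scenarios, dim(x), dim(y(s)), rows of B(s)

# Decision vector: [x, y(1..N), z+(1..N), z-(1..N)], all components >= 0,
# with z(s) = z+(s) - z-(s) so that violations are penalized linearly.
n_var = n_x + N * (n_y + 2 * m)
obj = np.concatenate(
    [c]
    + [p[s] * d[s] for s in range(N)]           # expected recourse cost
    + [w * u * np.ones(m) for _ in range(N)]    # penalty on z+
    + [w * v * np.ones(m) for _ in range(N)]    # penalty on z-
)

# Equality constraints: A x = b, and B(s) x + C(s) y(s) + z+(s) - z-(s) = e(s)
A_eq = np.zeros((1 + N * m, n_var))
b_eq = np.concatenate([b] + e)
A_eq[0, :n_x] = A
for s in range(N):
    row = 1 + s * m
    A_eq[row:row + m, :n_x] = B[s]
    A_eq[row:row + m, n_x + s * n_y : n_x + (s + 1) * n_y] = C[s]
    A_eq[row:row + m, n_x + N * n_y + s * m : n_x + N * n_y + (s + 1) * m] = np.eye(m)
    A_eq[row:row + m, n_x + N * (n_y + m) + s * m : n_x + N * (n_y + m) + (s + 1) * m] = -np.eye(m)

res = linprog(obj, A_eq=A_eq, b_eq=b_eq, method="highs")  # default bounds give x, y, z+, z- >= 0
print(res.x[:n_x], res.fun)   # first-stage policy x and total (cost + penalty) objective

With w large, the solver favors policies whose scenario constraints can be met with little or no violation; with w small, it tolerates infeasibility in exchange for lower cost.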

The above makes robust optimization similar (at least in the form of the model) to a goal program. More recently, the robust optimization community has defined it differently: optimize against the worst-case scenario. Let the uncertain mathematical program (MP) be given by

\min f(x; s): x \in X(s), \quad s \in S,

where S is some set of scenarios (like parameter values). The robust optimization model (according to this more recent definition) is:

\min_x \{\max_{s \in S} f(x; s)\}: \ x \in X(t) \ \forall t \in S.

The policy x is required to be feasible no matter which parameter value (scenario) occurs; hence it must lie in the intersection of all possible X(s). The inner maximization yields the worst possible objective value over all scenarios. There are variations, such as adjustability (i.e., recourse), in which some decisions may be deferred until after the scenario is revealed.
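For a finite scenario set, this min-max model can be rewritten with the standard epigraph trick: introduce an auxiliary variable t, minimize t, and require f(x; s) \le t together with x \in X(t) for every scenario. The sketch below does this assuming a linear f(x; s) = c(s)x and polyhedral X(s); the scenario data are made-up illustrative values, not from the source, and SciPy is assumed available:

import numpy as np
from scipy.optimize import linprog

# Scenarios: objective vectors c(s) and feasible sets X(s) = {x >= 0 : A(s) x <= b(s)}
c_s = [np.array([1.0, 3.0]), np.array([2.0, 1.0])]
A_s = [np.array([[-1.0, -1.0]]), np.array([[-2.0, -1.0]])]   # e.g. -x1 - x2 <= -2
b_s = [np.array([-2.0]), np.array([-3.0])]

n = 2                          # dim(x); decision vector is [x, t]
N = len(c_s)

# Objective: minimize t, the worst-case cost
obj = np.zeros(n + 1)
obj[-1] = 1.0

# Inequalities: c(s)·x - t <= 0 (epigraph) and A(s) x <= b(s) (feasible in every X(s))
rows, rhs = [], []
for s in range(N):
    rows.append(np.concatenate([c_s[s], [-1.0]]))
    rhs.append(0.0)
    rows.append(np.concatenate([A_s[s].ravel(), [0.0]]))
    rhs.append(b_s[s][0])
A_ub = np.vstack(rows)
b_ub = np.array(rhs)

# x >= 0, t free
bounds = [(0, None)] * n + [(None, None)]
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("robust x:", res.x[:n], "worst-case cost:", res.x[-1])

The reported x is feasible for every scenario, and t equals the largest of the scenario costs c(s)x at that x, i.e. the worst-case objective value being minimized.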

