Optimization problem


In computer science, an optimization problem is the problem of finding the best solution from all feasible solutions. More formally, an optimization problem A is a quadruple (I,f,m,g), where

  • I is a set of instances;
  • given an instance x \in I, f(x) is the set of feasible solutions;
  • given an instance x and a feasible solution y of x, m(x,y) denotes the measure of y, which is usually a positive real;
  • g is the goal function, and is either min or max.

The goal is then to find, for some instance x, an optimal solution, that is, a feasible solution y with

m(x, y) = g \{ m(x, y') \mid y' \in f(x) \} .

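As a minimal sketch (not part of the formal definition above), the quadruple can be written out directly in Python; the names optimal_solution, feasible, measure and goal, as well as the item-selection example, are hypothetical, and the exhaustive search is only sensible when f(x) is a small finite set:

    from itertools import combinations

    def optimal_solution(x, feasible, measure, goal=min):
        """Return a feasible y with m(x, y) = g{ m(x, y') : y' in f(x) }."""
        return goal(feasible(x), key=lambda y: measure(x, y))

    # Hypothetical instance: choose the subset of items with the largest total value.
    items = {"a": 3, "b": 5, "c": 2}        # instance x: item -> value

    def subsets(x):                          # f(x): all subsets of the item names
        names = list(x)
        for r in range(len(names) + 1):
            yield from combinations(names, r)

    best = optimal_solution(items, subsets,
                            measure=lambda x, y: sum(x[i] for i in y),
                            goal=max)
    print(best)  # ('a', 'b', 'c')

For realistic problem classes f(x) is exponentially large, which is why each class comes with dedicated algorithms rather than an exhaustive search.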

For each optimization problem there is a corresponding decision problem, which asks whether there is a feasible solution whose measure is at most (or at least) some threshold m0. For example, given a graph G that contains vertices u and v, an optimization problem might be "find a path from u to v that uses the fewest edges". This problem might have an answer of, say, 4. A corresponding decision problem would be "is there a path from u to v that uses 10 or fewer edges?", which can be answered with a simple 'yes' or 'no'.
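To make this pairing concrete, here is a small sketch in Python (the adjacency-list graph and the function names are hypothetical): breadth-first search solves the optimization version by returning the minimum number of edges on a u-v path, and the decision version simply compares that minimum with the bound:

    from collections import deque

    def fewest_edges(graph, u, v):
        """Optimization version: edge count of a shortest u-v path, or None if no path exists."""
        dist = {u: 0}
        queue = deque([u])
        while queue:
            node = queue.popleft()
            if node == v:
                return dist[node]
            for nbr in graph.get(node, ()):
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        return None

    def path_within(graph, u, v, k):
        """Decision version: is there a u-v path using k or fewer edges?"""
        d = fewest_edges(graph, u, v)
        return d is not None and d <= k

    G = {"u": ["a", "b"], "a": ["c"], "b": ["c"], "c": ["v"], "v": []}
    print(fewest_edges(G, "u", "v"))      # 3
    print(path_within(G, "u", "v", 10))   # True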

An NP optimization problem has the following further restrictions:

  • I can be recognized in polynomial time;
  • the size of each feasible solution y \in f(x) is polynomially bounded by the size of x; and
  • given x and y, membership y \in f(x) can be decided, and the measure m(x,y) computed, in polynomial time.

This implies that the corresponding decision problem is in NP. Since interesting optimization problems usually fulfill these criteria, "optimization problem" is often used synonymously with "NP optimization problem".
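A brief sketch of why these restrictions place the decision problem in NP, assuming hypothetical polynomial-time routines is_feasible and measure: the certificate is a candidate solution y, which by the second restriction has size polynomial in the instance, so the check below also takes polynomial time:

    def verify(x, y, m0, is_feasible, measure, goal=min):
        """NP-style verifier: accept iff the certificate y is a feasible solution
        of x whose measure meets the threshold m0."""
        if not is_feasible(x, y):
            return False
        value = measure(x, y)
        return value <= m0 if goal is min else value >= m0

    # Hypothetical use with the path example: x = (graph, u, v), y = a vertex list.
    def is_path(x, y):
        graph, u, v = x
        return (
            bool(y)
            and y[0] == u
            and y[-1] == v
            and all(b in graph.get(a, ()) for a, b in zip(y, y[1:]))
        )

    x = ({"u": ["a"], "a": ["v"], "v": []}, "u", "v")
    print(verify(x, ["u", "a", "v"], m0=10,
                 is_feasible=is_path, measure=lambda x, y: len(y) - 1))  # True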
