Distributed constraint optimization

Distributed constraint optimization (DCOP) is the distributed analogue of constraint optimization. A DCOP is a problem in which a group of agents must choose values for a set of variables in a distributed fashion, such that the cost of a set of constraints over the variables is either minimized or maximized.

Definitions

DCOP

A DCOP can be defined as a tuple \langle A, V, \mathcal{D}, f, \alpha, \sigma \rangle, where:

  • A is a set of agents;
  • V is a set of variables, \{v_1,v_2,\ldots,v_{|V|}\};
  • \mathcal{D} is a set of domains, \{D_1, D_2, \ldots, D_{|V|}\}, where each D \in \mathcal{D} is a finite set containing the values to which its associated variable may be assigned;
  • f is a function[1][2]
    f : \bigcup_{S \in \mathcal{P}(V)}\prod_{v_i \in S} \left( \{v_i\} \times D_i \right) \rightarrow \mathbb{N} \cup \{\infty\}
    that maps every possible variable assignment to a cost. This function can also be thought of as defining constraints between variables;
  • α is a function \alpha : V \rightarrow A mapping variables to their associated agent. \alpha(v_i) \mapsto a_j implies that it is agent a_j's responsibility to assign the value of variable v_i. Note that α is not necessarily an injection or a surjection; and
  • σ is an operator that aggregates all of the individual f costs for all possible variable assignments. This is usually accomplished through summation:
    \sigma(f) \mapsto \sum_{s \in \bigcup_{S \in \mathcal{P}(V)}\prod_{v_i \in S} \left( \{v_i\} \times D_i \right)} f(s).

The objective of a DCOP is to have each agent assign values to its associated variables in order to either minimize or maximize σ(f) for a given assignment of the variables.
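
As an illustration, the following Python sketch spells out the pieces of this tuple for a toy problem with two agents and two binary variables; all names, values, and the single constraint used here are invented for the example and are not part of any DCOP standard or library.

# A minimal sketch of the tuple <A, V, D, f, alpha, sigma> for a toy problem.
# All names and values below are illustrative only.
agents = ["a1", "a2"]                       # A: the set of agents
variables = ["v1", "v2"]                    # V: the set of variables
domains = {"v1": {0, 1}, "v2": {0, 1}}      # D: one finite domain per variable
alpha = {"v1": "a1", "v2": "a2"}            # alpha: each variable's owning agent

def f(assignment):
    """Cost of a (partial) assignment, given as a dict {variable: value}.
    Here a single binary constraint charges 1 when v1 and v2 take the same value."""
    if "v1" in assignment and "v2" in assignment:
        return 1 if assignment["v1"] == assignment["v2"] else 0
    return 0  # constraints whose variables are unassigned contribute nothing

def sigma(assignment, scopes):
    """Aggregate the individual constraint costs by summation (the usual sigma)."""
    return sum(f({v: assignment[v] for v in scope}) for scope in scopes)

# The agents' goal is to pick values for their own variables so that sigma is minimized.
scopes = [("v1", "v2")]
best = min(
    ({"v1": x, "v2": y} for x in domains["v1"] for y in domains["v2"]),
    key=lambda a: sigma(a, scopes),
)
print(best, sigma(best, scopes))  # e.g. {'v1': 0, 'v2': 1} with cost 0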

Context

A Context is a variable assignment for a DCOP. This can be thought of as a function mapping variables in the DCOP to their current values:

t : V \rightarrow \left( \bigcup_{D_i \in \mathcal{D}} D_i \right) \cup \{\emptyset\}.

Note that a context is essentially a partial solution and need not contain values for every variable in the problem; therefore, t(v_i) \mapsto \emptyset implies that the agent α(v_i) has not yet assigned a value to variable v_i. Given this representation, the "domain" (i.e., the set of input values) of the function f can be thought of as the set of all possible contexts for the DCOP. Therefore, in the remainder of this article we may use the notion of a context (i.e., the t function) as an input to the f function.
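
For concreteness, a context can be sketched in Python as a dictionary mapping each variable either to a value or to None, with None standing in for the empty set; the variable names below are illustrative.

# A context t as a partial assignment; None plays the role of the empty set.
context = {"v1": 0, "v2": None, "v3": None}   # illustrative variables

def assigned(t):
    """The variables that already have a value in context t."""
    return {v: x for v, x in t.items() if x is not None}

def is_complete(t):
    """A context is a full solution only when every variable has been assigned."""
    return all(x is not None for x in t.values())

print(assigned(context))     # {'v1': 0}
print(is_complete(context))  # False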

Example problems

Distributed Graph Coloring

The graph coloring problem is as follows: given a graph G = \langle N, E \rangle and a set of colors C, assign each vertex, n\in N, a color, c\in C, such that the number of adjacent vertices with the same color is minimized.

As a DCOP, there is one agent per vertex, which is assigned to decide the associated color. Each agent has a single variable whose associated domain is of cardinality |C| (there is one domain value for each possible color). For each vertex n_i \in N, create a variable in the DCOP v_i \in V with domain D_i = C. For each pair of adjacent vertices \langle n_i, n_j \rangle \in E, create a constraint of cost 1 if both of the associated variables are assigned the same color:

(\forall c \in C : f(\langle v_i, c \rangle, \langle v_j, c \rangle ) \mapsto 1).

The objective, then, is to minimize σ(f).
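
A small Python sketch of this encoding, using an invented triangle graph and two colors, is given below; the function computes the aggregated cost σ(f), i.e. the number of monochromatic edges.

from itertools import product

# Sketch of the graph-coloring encoding on an illustrative triangle with two colors.
edges = [("n1", "n2"), ("n2", "n3"), ("n1", "n3")]   # E
colors = {"red", "green"}                            # C
domains = {v: colors for v in ("n1", "n2", "n3")}    # D_i = C for every vertex

def total_cost(assignment):
    """sigma(f): number of edges whose endpoints are assigned the same color."""
    return sum(1 for (u, v) in edges
               if u in assignment and v in assignment
               and assignment[u] == assignment[v])

# Brute-force check of the objective: any 2-coloring of a triangle has cost >= 1.
best = min((dict(zip(domains, combo)) for combo in product(*domains.values())),
           key=total_cost)
print(best, total_cost(best))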

Distributed Multiple Knapsack Problem

The distributed multiple knapsack problem, a variant of the knapsack problem, is as follows: given a set of items of varying volume and a set of knapsacks of varying capacity, assign each item to a knapsack such that the total overflow is minimized. Let I be the set of items, K be the set of knapsacks, s : I \rightarrow \mathbb{N} be a function mapping items to their volumes, and c : K \rightarrow \mathbb{N} be a function mapping knapsacks to their capacities.

To encode this problem as a DCOP, for each i \in I create one variable v_i \in V with associated domain D_i = K. Then, for every possible context t:

f(t) \mapsto \sum_{k \in K} \begin{cases}
                    0&  r(t,k) \leq c(k),\\
                    r(t,k)-c(k) & \text{otherwise},
                  \end{cases}

where r(t,k) is a function such that

r(t,k) = \sum_{v_i \in t^{-1}(k)} s(i).
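
A brief Python sketch of this cost function follows, using invented item volumes and knapsack capacities; r sums the volume that a context places in each knapsack and f charges only the overflow.

# Sketch of the multiple-knapsack cost; sizes and capacities are illustrative.
size = {"i1": 4, "i2": 3, "i3": 5}        # s: item -> volume
capacity = {"k1": 6, "k2": 5}             # c: knapsack -> capacity

def r(t, k):
    """Total volume of the items that context t assigns to knapsack k."""
    return sum(size[i] for i, chosen in t.items() if chosen == k)

def f(t):
    """Summed overflow over all knapsacks (0 whenever everything fits)."""
    return sum(max(0, r(t, k) - cap) for k, cap in capacity.items())

t = {"i1": "k1", "i2": "k2", "i3": "k1"}  # an example (complete) context
print(r(t, "k1"), f(t))                   # 9 units in k1 -> overflow of 3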


Algorithms

Each algorithm is listed with its year of introduction, memory complexity, number of messages, correctness/completeness status, and implementations.

  • NCBB (No-Commitment Branch and Bound)[3], 2006. Memory complexity: polynomial (or any-space[4]); number of messages: exponential; correctness/completeness: proven. Implementations: reference implementation not publicly released; DCOPolis (GNU LGPL).
  • DPOP (Distributed Pseudotree Optimization Procedure)[5], 2005. Memory complexity: exponential; number of messages: linear; correctness/completeness: proven. Implementations: reference implementation FRODO (free but proprietary); DCOPolis (GNU LGPL).
  • OptAPO (Asynchronous Partial Overlay)[6], 2004. Memory complexity: polynomial; number of messages: exponential; correctness/completeness: proven, but the proof of completeness has been challenged[7]. Implementations: reference implementation OptAPO; DCOPolis (GNU LGPL, in development).
  • Adopt (Asynchronous Backtracking)[8], 2003. Memory complexity: polynomial (or any-space[9]); number of messages: exponential; correctness/completeness: proven. Implementations: reference implementation Adopt; DCOPolis (GNU LGPL).

Notes and references

  1. ^ "\mathcal{P}(\mathcal{V})" denotes the power set of V
  2. ^ "\times" and "\prod" denote the Cartesian product.
  3. ^ Chechetka, Anton & Sycara, Katia (May 2006), “No-Commitment Branch and Bound Search for Distributed Constraint Optimization”, Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 1427–1429
  4. ^ Chechetka, Anton & Sycara, Katia (March), “An Any-space Algorithm for Distributed Constraint Optimization”, Proceedings of the AAAI Spring Symposium on Distributed Plan and Schedule Management 
  5. ^ Petcu, Adrian & Faltings, Boi (August 2005), “DPOP: A Scalable Method for Multiagent Constraint Optimization”, Proceedings of the 19th International Joint Conference on Artificial Intelligence, IJCAI 2005, Edinburgh, Scotland, pp. 266–271
  6. ^ Mailler, Roger & Lesser, Victor (2004), “Solving Distributed Constraint Optimization Problems Using Cooperative Mediation”, Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, IEEE Computer Society, pp. 438–445
  7. ^ Grinshpoun, Tal; Zazon, Moshe; Binshtok, Maxim & Meisels, Amnon (2007), “Termination Problem of the APO Algorithm”, Proceedings of the Eighth International Workshop on Distributed Constraint Reasoning, pp. 117–124 
  8. ^ Modi, Pragnesh Jay; Shen, Wei-Min; Tambe, Milind & Yokoo, Makoto (2003), “An Asynchronous Complete Method for Distributed Constraint Optimization”, Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, ACM Press, pp. 161–168
  9. ^ Matsui, Toshihiro; Matsuo, Hiroshi & Iwata, Akira (February), “Efficient Method for Asynchronous Distributed Constraint Optimization Algorithm”, Proceedings of Artificial Intelligence and Applications, pp. 727–732