Constraint satisfaction
In artificial intelligence and operations research, constraint satisfaction is the process of finding a solution to a set of constraints. Such constraints express allowed values for variables, and a solution is therefore an evaluation of these variables that satisfies all constraints.
The techniques used in constraint satisfaction depend on the kind of constraints being considered. Constraints on a finite domain are used so often that constraint satisfaction problems are typically identified with problems based on constraints on a finite domain. Such problems are usually solved via search, in particular a form of backtracking or local search. Constraint propagation is another family of methods used on such problems; most of these methods are incomplete in general, that is, they may solve the problem or prove it unsatisfiable, but not always. Constraint propagation methods are also used in conjunction with search to make a given problem simpler to solve. Other kinds of constraints that are considered are over real or rational numbers; problems over these constraints are solved via variable elimination or the simplex algorithm.
Constraint satisfaction originated in the field of artificial intelligence in the 1970s. During the 1980s and 1990s, embeddings of constraints into programming languages were developed. Languages often used for constraint programming are Prolog and C++.
Constraint satisfaction problem
As originally defined in artificial intelligence, constraints enumerate the possible values a set of variables may take. Informally, a finite domain is a finite set of arbitrary elements. A constraint satisfaction problem on such a domain contains a set of variables whose values can only be taken from the domain, and a set of constraints, each constraint specifying the allowed values for a group of variables. A solution to this problem is an evaluation of the variables that satisfies all constraints. In other words, a solution is a way of assigning a value to each variable such that all constraints are satisfied by these values.
Formally, a constraint satisfaction problem is defined as a triple ⟨X, D, C⟩, where X is a set of variables, D is a domain of values, and C is a set of constraints. Every constraint is in turn a pair ⟨t, R⟩, where t is a tuple of variables and R is a set of tuples of values; all these tuples have the same number of elements, so R is a relation. An evaluation of the variables is a function v from variables to values, v: X → D. Such an evaluation satisfies a constraint ⟨t, R⟩ if the tuple of values it assigns to the variables of t belongs to R, that is, if v(t) ∈ R. A solution is an evaluation that satisfies all constraints.
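To make the definition concrete, the following short Python sketch (all names here are illustrative, not part of any library) represents a small problem directly as such a triple and checks whether a given evaluation is a solution; the single constraint lists the allowed pairs of values, which in this case encode x < y over the domain {1, 2, 3}.

    # A small CSP written explicitly as the triple (X, D, C).
    variables = ("x", "y")
    domain = {1, 2, 3}
    constraints = [
        # A constraint is a pair: (tuple of variables, set of allowed value tuples).
        (("x", "y"), {(1, 2), (1, 3), (2, 3)}),  # encodes x < y over this domain
    ]

    def satisfies(evaluation, constraint):
        """A constraint is satisfied if the values of its variables form an allowed tuple."""
        scope, relation = constraint
        return tuple(evaluation[v] for v in scope) in relation

    def is_solution(evaluation):
        """A solution is an evaluation that satisfies every constraint."""
        return all(satisfies(evaluation, c) for c in constraints)

    print(is_solution({"x": 1, "y": 3}))  # True
    print(is_solution({"x": 3, "y": 1}))  # False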
In practice, some constraints that are often used are expressed in compact form, rather than by enumerating all allowed combinations of values of the variables. The constraint expressing that the values of some variables must all be different (alldifferent) is one of the most used such constraints.
Problems that can be expressed as constraint satisfaction problems include the Eight queens puzzle, the Sudoku solving problem, the Boolean satisfiability problem, scheduling problems, and various problems on graphs such as the graph coloring problem.
While usually not included in the above definition of a constraint satisfaction problem, arithmetic equations and inequalities bound the values of the variables they contain and can therefore be considered a form of constraint. Their domain is the set of numbers (either integer, rational, or real), which is infinite; therefore, the relations of these constraints may be infinite as well. For example, X = Y + 1 has an infinite number of pairs of satisfying values. Arithmetic equations and inequalities are often not considered within the definition of a "constraint satisfaction problem", which is limited to finite domains. They are, however, often used in constraint programming.
Solving
Constraint satisfaction problems (on finite domains) are typically solved using a form of search. The most used techniques are variants of backtracking, constraint propagation, and local search.
Backtracking is a recursive algorithm. It maintains a partial assignment of the variables. Initially, all variables are unassigned. At each step, a variable is chosen, and all possible values are assigned to it in turn. For each value, the consistency of the partial assignment with the constraints is checked; in case of consistency, a recursive call is performed. When all values have been tried, the algorithm backtracks. In this basic backtracking algorithm, consistency is defined as the satisfaction of all constraints whose variables are all assigned. Several variants of backtracking exist. Backmarking improves the efficiency of checking consistency. Backjumping allows saving part of the search by backtracking "more than one variable" in some cases. Constraint learning infers and saves new constraints that can be later used to avoid part of the search. Look-ahead is also often used in backtracking to attempt to foresee the effects of choosing a variable or a value, thus sometimes determining in advance when a subproblem is satisfiable or unsatisfiable.
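The following is a minimal Python sketch of this basic backtracking scheme, assuming the triple-based problem representation from the earlier sketch; the names are illustrative, and real solvers add the refinements mentioned above (backmarking, backjumping, constraint learning, look-ahead).

    def consistent(assignment, constraints):
        """Basic consistency: every constraint whose variables are all assigned
        must be satisfied by the current partial assignment."""
        for scope, relation in constraints:
            if all(v in assignment for v in scope):
                if tuple(assignment[v] for v in scope) not in relation:
                    return False
        return True

    def backtrack(assignment, variables, domain, constraints):
        """Return a complete satisfying assignment, or None if none exists."""
        if len(assignment) == len(variables):
            return dict(assignment)
        var = next(v for v in variables if v not in assignment)  # choose an unassigned variable
        for value in domain:                                     # try each value in turn
            assignment[var] = value
            if consistent(assignment, constraints):
                result = backtrack(assignment, variables, domain, constraints)
                if result is not None:
                    return result
            del assignment[var]                                  # undo the choice and backtrack
        return None

    # Example call, using the small x < y problem defined earlier:
    # print(backtrack({}, variables, domain, constraints))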
Constraint propagation techniques are methods used to modify a constraint satisfaction problem. More precisely, they are methods that enforce a form of local consistency, which is a condition related to the consistency of a group of variables and/or constraints. Constraint propagation has various uses. First, it turns a problem into one that is equivalent but is usually simpler to solve. Second, it may prove satisfiability or unsatisfiability of a problem. This is not guaranteed to happen in general; however, it always happens for some forms of constraint propagation and/or for certain kinds of problems. The most known and used forms of local consistency are arc consistency, hyper-arc consistency, and path consistency. The most popular constraint propagation method is the AC-3 algorithm, which enforces arc consistency.
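As an illustration, here is a hedged Python sketch of AC-3 for binary constraints, assuming each constraint between two variables is stored as a set of allowed value pairs and that arcs are recorded in both directions; the algorithm repeatedly deletes domain values that have no support, which is what enforcing arc consistency means.

    from collections import deque

    def ac3(domains, arcs):
        """domains: dict variable -> set of values (pruned in place).
        arcs: dict (x, y) -> set of allowed (value of x, value of y) pairs,
        with each binary constraint stored in both directions.
        Returns False if a domain becomes empty (the problem is unsatisfiable)."""
        queue = deque(arcs.keys())
        while queue:
            x, y = queue.popleft()
            revised = False
            for vx in set(domains[x]):
                # vx must have at least one supporting value vy in the domain of y
                if not any((vx, vy) in arcs[(x, y)] for vy in domains[y]):
                    domains[x].discard(vx)
                    revised = True
            if revised:
                if not domains[x]:
                    return False                     # empty domain: unsatisfiability proved
                # supports may have been lost, so re-examine all arcs pointing into x
                queue.extend((z, w) for (z, w) in arcs if w == x)
        return True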
Local search methods are incomplete satisfiability algorithms. They may find a solution of a problem, but they may fail even if the problem is satisfiable. They work by iteratively improving a complete assignment over the variables. At each step, the values of a small number of variables are changed, with the overall aim of increasing the number of constraints satisfied by this assignment. In practice, local search appears to work well when these changes are also affected by random choices. Integrations of search with local search have been developed, leading to hybrid algorithms.
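A minimal sketch of one such method, the min-conflicts style of local search, is given below in the same illustrative Python representation: it starts from a random complete assignment and repeatedly moves a variable of a violated constraint to the value causing the fewest conflicts. Like local search in general, the sketch may fail to find a solution even when one exists.

    import random

    def conflicts(assignment, var, value, constraints):
        """Number of constraints involving var that are violated if var is set to value."""
        trial = dict(assignment, **{var: value})
        return sum(
            1 for scope, relation in constraints
            if var in scope and tuple(trial[v] for v in scope) not in relation
        )

    def min_conflicts(variables, domain, constraints, max_steps=10000):
        """Iteratively repair a random complete assignment."""
        assignment = {v: random.choice(list(domain)) for v in variables}
        for _ in range(max_steps):
            violated = [
                (scope, relation) for scope, relation in constraints
                if tuple(assignment[v] for v in scope) not in relation
            ]
            if not violated:
                return assignment                    # every constraint is satisfied
            scope, _ = random.choice(violated)       # random choice of a violated constraint
            var = random.choice(list(scope))         # and of one of its variables
            # reassign it to a value that violates as few constraints as possible
            assignment[var] = min(domain, key=lambda val: conflicts(assignment, var, val, constraints))
        return None                                  # may fail even if the problem is satisfiable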
Variable elimination and the simplex algorithm are used for solving linear and polynomial equations and disequations.
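For instance, variable elimination on linear inequalities over the rationals can be done by Fourier–Motzkin elimination; the sketch below (a hedged Python illustration, with each inequality written as a mapping from variables to coefficients plus a bound) removes a variable by combining every lower bound on it with every upper bound.

    from fractions import Fraction

    def eliminate(inequalities, var):
        """One Fourier-Motzkin step.  Each inequality is (coeffs, bound), meaning
        sum(coeffs[v] * v for v in coeffs) <= bound.  Returns an equivalent
        system of inequalities that no longer mentions var."""
        uppers, lowers, rest = [], [], []
        for coeffs, bound in inequalities:
            c = coeffs.get(var, 0)
            if c > 0:
                uppers.append((coeffs, bound, c))    # yields an upper bound on var
            elif c < 0:
                lowers.append((coeffs, bound, c))    # yields a lower bound on var
            else:
                rest.append((coeffs, bound))
        # combining a lower bound with an upper bound cancels var
        for ucoeffs, ubound, uc in uppers:
            for lcoeffs, lbound, lc in lowers:
                combined = {}
                for v in set(ucoeffs) | set(lcoeffs):
                    if v != var:
                        combined[v] = Fraction(ucoeffs.get(v, 0)) / uc + Fraction(lcoeffs.get(v, 0)) / (-lc)
                rest.append((combined, Fraction(ubound) / uc + Fraction(lbound) / (-lc)))
        return rest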
Complexity
Solving a constraint satisfaction problem on a finite domain is an NP-complete problem. Research has shown a number of tractable subcases, some limiting the allowed constraint relations, some requiring the scopes of constraints to form a tree, possibly in a reformulated version of the problem. Research has also established relationships between the constraint satisfaction problem and problems in other areas such as finite model theory.
Constraint programming
Constraint programming is the use of constraints as a programming language to encode and solve problems. This is often done by embedding constraints into a programming language, which is called the host language. Constraint programming originated from a formalization of equalities of terms in Prolog II, leading to a general framework for embedding constraints into a logic programming language. The most common host languages are Prolog, C++, and Java, but other languages have been used as well.
Constraint logic programming
A constraint logic program is a logic program that contains constraints in the bodies of clauses. An example is the clause

    A(X):-X>0,B(X)

which contains the constraint X>0 in its body. Constraints can also be present in the goal. The constraints in the goal and in the clauses used to prove the goal are accumulated into a set called the constraint store. This set contains the constraints the interpreter has assumed satisfiable in order to proceed in the evaluation. As a result, if this set is detected to be unsatisfiable, the interpreter backtracks. Equations of terms, as used in logic programming, are considered a particular form of constraints, one that can be simplified using unification. As a result, the constraint store can be considered an extension of the concept of substitution that is used in regular logic programming. The most common kinds of constraints used in constraint logic programming are constraints over integers/rational/real numbers and constraints over finite domains.
Concurrent constraint logic programming languages have also been developed. They significantly differ from non-concurrent constraint logic programming in that they are aimed at programming concurrent processes that may not terminate. Constraint handling rules can be seen as a form of concurrent constraint logic programming, but they are also sometimes used within a non-concurrent constraint logic programming language. They allow for rewriting constraints or for inferring new ones based on the truth of conditions.
Constraint satisfaction toolkits
Constraint satisfaction toolkits are software libraries for imperative programming languages that are used to encode and solve a constraint satisfaction problem. The most common host language for these toolkits is C++, but implementations exist for Java and other programming languages.
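As a sketch of how such a toolkit is used (here with the Python library python-constraint rather than a C++ toolkit, purely for brevity), the problem is stated by declaring variables, their domains, and constraints, and the library carries out the search:

    # Assumes the python-constraint package is installed (module name: constraint).
    from constraint import Problem, AllDifferentConstraint

    problem = Problem()
    problem.addVariables(["a", "b", "c"], [1, 2, 3])       # three variables over {1, 2, 3}
    problem.addConstraint(AllDifferentConstraint())        # compact "all different" constraint
    problem.addConstraint(lambda a, b: a < b, ("a", "b"))  # constraint given as a Python function

    print(problem.getSolutions())                          # the toolkit performs the search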
Other constraint programming languages
Constraint toolkits are a way of embedding constraints into an imperative programming language. However, they are only used as external libraries for encoding and solving problems. An approach in which constraints are integrated into an imperative programming language is taken in the Kaleidoscope programming language.
Constraints have also been embedded into functional programming languages.