Resolution (logic)
In mathematical logic and automated theorem proving, resolution is a rule of inference leading to a refutation theorem-proving technique for sentences in propositional logic and first-order logic.
Resolution in propositional logic
Resolution rule
The resolution rule in propositional logic is a single valid inference rule that produces, from two clauses, a new clause implied by them. The resolution rule takes two clauses (a clause is a disjunction of literals) containing complementary literals, and produces a new clause with all the literals of both except for the complementary ones. Formally, where the literals ai and bj are complementary, from the two clauses
a1 ∨ ... ∨ ai ∨ ... ∨ an     and     b1 ∨ ... ∨ bj ∨ ... ∨ bm
the rule infers the clause
a1 ∨ ... ∨ ai-1 ∨ ai+1 ∨ ... ∨ an ∨ b1 ∨ ... ∨ bj-1 ∨ bj+1 ∨ ... ∨ bm.
The clause produced by the resolution rule is called the resolvent of the two input clauses.
When the two clauses contain more than one pair of complementary literals, the resolution rule can be applied (independently) for each such pair. However, only the pair of literals resolved upon can be removed: all other pairs of literals remain in the resolvent clause.
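This rule is straightforward to state programmatically. The following is a minimal Python sketch, assuming a clause is encoded as a frozenset of string literals with "~" marking negation; the helper names negate and resolvents are illustrative, not taken from any library.

    def negate(literal):
        """Return the complementary literal: p <-> ~p."""
        return literal[1:] if literal.startswith("~") else "~" + literal

    def resolvents(clause_a, clause_b):
        """Yield one resolvent for each pair of complementary literals.

        Only the pair resolved upon is removed; every other literal of
        both clauses is kept in the resolvent.
        """
        for lit in clause_a:
            if negate(lit) in clause_b:
                yield frozenset((clause_a - {lit}) | (clause_b - {negate(lit)}))

    # Example: resolving (p ∨ q) with (¬p ∨ r) produces the resolvent q ∨ r.
    for resolvent in resolvents(frozenset({"p", "q"}), frozenset({"~p", "r"})):
        print(set(resolvent))                          # {'q', 'r'}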
A resolution technique
When coupled with a complete search algorithm, the resolution rule yields a sound and complete algorithm for deciding the satisfiability of a propositional formula, and, by extension, the validity of a sentence under a set of axioms.
This resolution technique uses proof by contradiction and is based on the fact that any sentence in propositional logic can be transformed into an equivalent sentence in conjunctive normal form. The steps are as follows:
- All sentences in the knowledge base and the negation of the sentence to be proved (the conjecture) are conjunctively connected.
- The resulting sentence is transformed into a conjunctive normal form (treated as a set of clauses, S).
- The resolution rule is applied to all possible pairs of clauses that contain complementary literals. After each application of the resolution rule, the resulting clause is simplified by removing repeated literals. If the clause contains complementary literals, it is discarded (as a tautology). If not, and if it is not yet present in the clause set S, it is added to S, and is considered for further resolution inferences.
- If after applying a resolution rule the empty clause is derived, the complete formula is unsatisfiable (or contradictory), and hence it can be concluded that the initial conjecture follows from the axioms.
- If, on the other hand, the empty clause cannot be derived, and the resolution rule cannot be applied to derive any more new clauses, the conjecture is not a theorem of the original knowledge base.
One instance of this algorithm is the original Davis-Putnam algorithm, which was later refined into the DPLL algorithm, removing the need for an explicit representation of the resolvents.
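The saturation procedure described in the steps above (not the Davis-Putnam or DPLL refinements) can be sketched as follows, reusing the same clause encoding as the earlier sketch; refutes and is_tautology are illustrative names, and the sketch favours clarity over efficiency.

    def negate(literal):
        return literal[1:] if literal.startswith("~") else "~" + literal

    def is_tautology(clause):
        return any(negate(lit) in clause for lit in clause)

    def resolvents(clause_a, clause_b):
        for lit in clause_a:
            if negate(lit) in clause_b:
                yield frozenset((clause_a - {lit}) | (clause_b - {negate(lit)}))

    def refutes(clauses):
        """Return True iff the empty clause is derivable, i.e. the set is unsatisfiable."""
        clauses = {c for c in clauses if not is_tautology(c)}
        if frozenset() in clauses:
            return True
        while True:
            new = set()
            for a in clauses:
                for b in clauses:
                    for r in resolvents(a, b):
                        if not r:                      # empty clause: contradiction
                            return True
                        if not is_tautology(r) and r not in clauses:
                            new.add(r)
            if not new:                                # saturation: nothing new to add
                return False
            clauses |= new

    # Knowledge base {p -> q, p} as clauses, conjecture q: add ~q and refute.
    kb = {frozenset({"~p", "q"}), frozenset({"p"})}
    print(refutes(kb | {frozenset({"~q"})}))           # True: q follows from the KB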
Resolution in first-order logic
In first-order logic, resolution condenses the traditional syllogisms of logical inference down to a single rule.
To understand how resolution works, consider the following example syllogism of term logic:
All Greeks are Europeans. Homer is a Greek. Therefore, Homer is a European.
Or, more generally:
∀X, P(X) implies Q(X).
P(a).
Therefore, Q(a).
To recast the reasoning using the resolution technique, first the clauses must be converted to conjunctive normal form. In this form, all quantification becomes implicit: universal quantifiers on variables (X, Y...) are simply omitted as understood, while existentially-quantified variables are replaced by Skolem functions.
¬P(X) ∨ Q(X)
P(a)
Therefore, Q(a)
So the question is, how does the resolution technique derive the last clause from the first two? The rule is simple:
- Find two clauses containing the same predicate, where it is negated in one clause but not in the other.
- Perform a unification on the two predicates. (If the unification fails, you made a bad choice of predicates. Go back to the previous step and try again.)
- If any variables that were bound by the unification also occur in other predicates in the two clauses, replace them with their bound values (terms) there as well.
- Discard the unified predicates, and combine the remaining ones from the two clauses into a new clause, also joined by the "∨" operator.
To apply this rule to the above example, we find the predicate P occurs in negated form
¬P(X)
in the first clause, and in non-negated form
P(a)
in the second clause. X is an unbound variable, while a is a bound value (atom). Unifying the two produces the substitution
X => a
Discarding the unified predicates, and applying this substitution to the remaining predicates (just Q(X), in this case), produces the conclusion:
Q(a)
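The unification-based rule above can also be sketched in code. The fragment below is a deliberately restricted illustration: it handles only unary predicates whose single argument is a variable or a constant, which is enough for the examples in this article; the names is_var, unify, substitute and resolve are assumptions made for the sketch, not standard APIs.

    def is_var(term):
        """Variables are written with an upper-case initial, constants lower-case."""
        return term[0].isupper()

    def unify(s, t):
        """Return a substitution (dict) unifying two argument terms, or None."""
        if s == t:
            return {}
        if is_var(s):
            return {s: t}
        if is_var(t):
            return {t: s}
        return None                                   # two distinct constants

    def substitute(clause, theta):
        return [(neg, pred, theta.get(arg, arg)) for (neg, pred, arg) in clause]

    def resolve(clause_a, clause_b):
        """Yield resolvents of two clauses, each a list of (negated, predicate, argument)."""
        for i, (neg_a, pred_a, arg_a) in enumerate(clause_a):
            for j, (neg_b, pred_b, arg_b) in enumerate(clause_b):
                if pred_a == pred_b and neg_a != neg_b:
                    theta = unify(arg_a, arg_b)
                    if theta is not None:
                        rest = (clause_a[:i] + clause_a[i+1:]
                                + clause_b[:j] + clause_b[j+1:])
                        yield substitute(rest, theta)

    # ¬P(X) ∨ Q(X) together with P(a) resolves to Q(a).
    clause1 = [(True, "P", "X"), (False, "Q", "X")]
    clause2 = [(False, "P", "a")]
    print(list(resolve(clause1, clause2)))            # [[(False, 'Q', 'a')]]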
For another example, consider the syllogistic form
All Cretans are islanders. All islanders are liars. Therefore all Cretans are liars.
Or more generally,
∀X P(X) implies Q(X)
∀X Q(X) implies R(X)
Therefore, ∀X P(X) implies R(X)
In CNF, the antecedents become:
¬P(X) ∨ Q(X)
¬Q(Y) ∨ R(Y)
(Note that the variable in the second clause was renamed to make it clear that variables in different clauses are distinct.)
Now, unifying Q(X) in the first clause with Q(Y) in the second clause means that X and Y become the same variable. Substituting this into the remaining literals and combining them gives the conclusion:
¬P(X) ∨ R(X)
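Continuing the illustrative sketch given for the previous example (and re-using its resolve function), the same step reproduces this resolvent, up to the order of literals:

    # Re-using resolve() from the sketch above on the two renamed clauses:
    clause1 = [(True, "P", "X"), (False, "Q", "X")]   # ¬P(X) ∨ Q(X)
    clause2 = [(True, "Q", "Y"), (False, "R", "Y")]   # ¬Q(Y) ∨ R(Y)
    print(list(resolve(clause2, clause1)))
    # [[(False, 'R', 'X'), (True, 'P', 'X')]], i.e. the clause ¬P(X) ∨ R(X)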
The resolution rule (with additional factoring) similarly subsumes all the other syllogistic forms of traditional logic.
External links
- Alex Sakharov, Resolution Principle at MathWorld.
- Alex Sakharov, Resolution at MathWorld.
- Notes on computability and resolution