Talk:Knuth-Bendix completion algorithm
From Wikipedia, the free encyclopedia
I added a motivation section so hopefully the article is no longer too technical. I'd like to hear other people's thoughts before I remove the tag, though. TooMuchMath 01:45, 27 April 2006 (UTC)
- Well, I've got a Ph.D. in computer science. My first thesis (roughly B.Sc. equivalent) was on analysing and transforming unfailing completion proofs, my second one (M.Sc.) was on automatic learning of search heuristics for completion-based theorem proving, and my Ph.D. and PostDoc work was on superposition-based proving. I've taught rewriting systems (at a T.A. level) for several years. I barely recognize KB completion from the current article. Maybe it's because I come from the CS side. I consider the language and notation from BDP89 or BN98 to be nice and natural. I don't e.g. see why we need a monoid (what is our associative operation?), and I don't know what "generators" are in this context.
@InProceedings{BDP89,
  author    = {L. Bachmair and N. Dershowitz and D.A. Plaisted},
  title     = {Completion Without Failure},
  key       = {BDP89},
  booktitle = {Resolution of Equations in Algebraic Structures},
  editor    = {H. Ait-Kaci and M. Nivat},
  year      = {1989},
  volume    = {2},
  publisher = {Academic Press},
  pages     = {1--30},
}

@Book{BN:REWRITING-98,
  author    = {F. Baader and T. Nipkow},
  title     = {Term Rewriting and All That},
  publisher = {Cambridge University Press},
  year      = {1998},
}
- --Stephan Schulz 07:12, 27 April 2006 (UTC)
Thanks for the comments Stephan. I encountered the subject of rewriting systems in Sims' book "Computation With Finitely Presented Groups" so it does sound like we are coming from different places. In fact my understanding is limited to rewriting systems wrt monoids and groups so I'm probably not the best person to rewrite this from the CS perspective. I would welcome any information from the CS perspective, though, and would really like to see this article develop into something that is useful for everyone.
To clarify the case of the monoid, the language Σ* over the alphabet Σ is the same thing as the free monoid F = Σ* over the generators X = Σ. The associative binary operation is string concatenation and the identity is the empty string. The finite presentation M = < X | R > is basically a way of assigning a partition to Σ* (by taking the transitive, reflexive, symmetric closure of R) so that elements of M are equivalence classes of Σ*. The goal (in my mind) of KB is to produce a system to find a minimum (wrt a reduction order <) representative of an equivalence class starting from any of its representatives.
TooMuchMath 17:20, 27 April 2006 (UTC)
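The reduction-to-a-minimal-representative idea described above can be sketched in a few lines of Python. This is only an illustration of string rewriting over a monoid presentation; the rule set (aa -> empty, bb -> empty) is an invented example, not one from the discussion, and termination is only guaranteed here because every rule shortens the word.

```python
def normal_form(word, rules):
    """Reduce `word` with string-rewriting rules (lhs, rhs) until no
    left-hand side occurs as a substring. Naive sketch: always applies
    the first applicable rule at its leftmost occurrence."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in word:
                word = word.replace(lhs, rhs, 1)
                changed = True
                break
    return word

# Invented example presentation: generators a, b with aa = bb = empty.
rules = [("aa", ""), ("bb", "")]
print(repr(normal_form("abba", rules)))  # the whole word cancels: ''
```

For a convergent (terminating and confluent) rule set this normal form is unique, which is exactly what makes it usable as the canonical representative of an equivalence class.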
- Ok, now it makes more sense. It looks like your version is restricted to what we call "string rewrite systems". You can define reduction systems and completion on a more general level, with a reduction system (R, =>) given by an arbitrary set R and a binary relation =>, and transforming => into a convergent relation using completion. The concrete case then uses first-order terms as the carrier set and equations/rules over terms to describe the relation. String rewrite systems are a further specialization for the case that you only have unary function symbols. --Stephan Schulz 20:24, 27 April 2006 (UTC)
I'd like to be able to tackle the general case but there are still a few things I don't quite grasp. If we start with an arbitrary set R and a relation => then it makes sense to talk about the transitive closure relation =>* given by reduction sequences. However I don't see how the notion of term and subterm fits in here. That is, we need a way to identify elements of R as compositions of other elements of R so that we may apply a rule a => b to terms other than a (i.e. applying this rule to ab we get the reduction ab => bb). Perhaps this is somehow captured in a grammar for R but we have taken R to be an arbitrary set, not a formal language.
Also, I don't understand what you mean by "transforming => into a convergent relation using completion." Is this simply the transitive closure =>*? I've never taken a course in logic so an example of a rewriting system that isn't a string rewriting system would be very helpful. TooMuchMath 21:47, 27 April 2006 (UTC)
- In the abstract case, you don't use composite elements. You assume an arbitrary binary relation over arbitrary elements. That each equation potentially defines equality for an infinite number of instances in an infinite number of contexts is irrelevant in that abstract view. Manipulating equations manipulates all the instances in one go. As for a concrete example, here is a completion of the group axioms as in the KB70 paper (straight out of my prover E):
1   : : [++equal(f(X1,0), X1)] : initial("GROUP.lop", at_line_9_column_1)
2   : : [++equal(f(X1,i(X1)), 0)] : initial("GROUP.lop", at_line_12_column_1)
3   : : [++equal(f(f(X1,X2),X3), f(X1,f(X2,X3)))] : initial("GROUP.lop", at_line_15_column_1)
5   : : [++equal(f(X1,X2), f(X1,f(0,X2)))] : pm(3,1)
6   : : [++equal(f(X1,f(X2,i(f(X1,X2)))), 0)] : pm(2,3)
7   : : [++equal(f(0,X2), f(X1,f(i(X1),X2)))] : pm(3,2)
27  : : [++equal(f(X1,0), f(0,i(i(X1))))] : pm(7,2)
36  : : [++equal(X1, f(0,i(i(X1))))] : rw(27,1)
46  : : [++equal(f(X1,X2), f(X1,i(i(X2))))] : pm(5,36)
52  : : [++equal(f(0,X1), X1)] : rw(36,46)
60  : : [++equal(i(0), 0)] : pm(2,52)
63  : : [++equal(i(i(X1)), f(0,X1))] : pm(46,52)
64  : : [++equal(f(X1,f(i(X1),X2)), X2)] : rw(7,52)
67  : : [++equal(i(i(X1)), X1)] : rw(63,52)
74  : : [++equal(f(i(X1),X1), 0)] : pm(2,67)
79  : : [++equal(f(0,X2), f(i(X1),f(X1,X2)))] : pm(3,74)
83  : : [++equal(X2, f(i(X1),f(X1,X2)))] : rw(79,52)
134 : : [++equal(f(i(X1),0), f(X2,i(f(X1,X2))))] : pm(83,6)
151 : : [++equal(i(X1), f(X2,i(f(X1,X2))))] : rw(134,1)
165 : : [++equal(f(i(X1),i(X2)), i(f(X2,X1)))] : pm(83,151)
239 : : [++equal(f(X1,0), X1)] : 1 : 'final'
240 : : [++equal(f(X1,i(X1)), 0)] : 2 : 'final'
241 : : [++equal(f(f(X1,X2),X3), f(X1,f(X2,X3)))] : 3 : 'final'
242 : : [++equal(f(0,X1), X1)] : 52 : 'final'
243 : : [++equal(i(0), 0)] : 60 : 'final'
244 : : [++equal(i(i(X1)), X1)] : 67 : 'final'
245 : : [++equal(f(i(X1),X1), 0)] : 74 : 'final'
246 : : [++equal(f(X1,f(i(X1),X2)), X2)] : 64 : 'final'
247 : : [++equal(f(i(X1),f(X1,X2)), X2)] : 83 : 'final'
248 : : [++equal(i(f(X1,X2)), f(i(X2),i(X1)))] : 165 : 'final'
- I hope the format is more or less self-explanatory. Notice non-unary terms like "f(X1,0)" with variables and constants. We start with the three initial equations for the group (neutral element, inverse element, associativity). The 10 equations marked with "final" represent the resulting convergent rewrite system. "pm" is short for "paramodulation", a generalization of critical pairs to non-unit clauses (we only have units in this example, though). "rw" is rewriting. Ordering of equations is done implicitly and not recorded. --Stephan Schulz 22:00, 27 April 2006 (UTC)
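As a reader's sanity check, the ten "final" rules above really are a usable rewrite system. The following sketch runs them on a ground term; it represents terms as nested tuples and variables as the strings "X1", "X2", "X3". This is purely an illustration (it is not how E represents terms, and it makes no attempt at efficiency or ordering).

```python
def is_var(t):
    """Variables are strings starting with 'X', as in the E output."""
    return isinstance(t, str) and t.startswith("X")

def match(pat, term, subst):
    """Try to extend subst so that pat instantiated equals term."""
    if is_var(pat):
        if pat in subst:
            return subst if subst[pat] == term else None
        s = dict(subst)
        s[pat] = term
        return s
    if (isinstance(pat, tuple) and isinstance(term, tuple)
            and pat[0] == term[0] and len(pat) == len(term)):
        for p, t in zip(pat[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pat == term else None

def apply_subst(t, subst):
    if is_var(t):
        return subst[t]
    if isinstance(t, tuple):
        return (t[0],) + tuple(apply_subst(a, subst) for a in t[1:])
    return t

def rewrite_once(t, rules):
    """One rewrite step at the root or at some subterm, or None."""
    for lhs, rhs in rules:
        s = match(lhs, t, {})
        if s is not None:
            return apply_subst(rhs, s)
    if isinstance(t, tuple):
        for i, arg in enumerate(t[1:], 1):
            r = rewrite_once(arg, rules)
            if r is not None:
                return t[:i] + (r,) + t[i + 1:]
    return None

def normal_form(t, rules):
    while True:
        r = rewrite_once(t, rules)
        if r is None:
            return t
        t = r

# The ten "final" equations, oriented left to right as rewrite rules.
X1, X2 = "X1", "X2"
RULES = [
    (("f", X1, "0"), X1),                                   # 239
    (("f", X1, ("i", X1)), "0"),                            # 240
    (("f", ("f", X1, X2), "X3"), ("f", X1, ("f", X2, "X3"))),  # 241
    (("f", "0", X1), X1),                                   # 242
    (("i", "0"), "0"),                                      # 243
    (("i", ("i", X1)), X1),                                 # 244
    (("f", ("i", X1), X1), "0"),                            # 245
    (("f", X1, ("f", ("i", X1), X2)), X2),                  # 246
    (("f", ("i", X1), ("f", X1, X2)), X2),                  # 247
    (("i", ("f", X1, X2)), ("f", ("i", X2), ("i", X1))),    # 248
]

# i(a) * (a * b) normalizes to b via rule 247.
print(normal_form(("f", ("i", "a"), ("f", "a", "b")), RULES))  # b
```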
Wrong Algorithm
The algorithm described in the article appears to me to be a special case of the Knuth-Bendix algorithm that is described in Knuth and Bendix's paper from Computational Problems in Abstract Algebra.
In that paper, the algorithm operated not on strings, but on expression trees that could contain wildcard terms. Rather than finding a pair of confluent string-rewriting rules by finding two rules whose left sides shared a prefix and suffix, it would perform an expression unification on the left-hand sides; if the rules could be unified, a new rule was added to the system.
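For the string case mentioned above, where two left-hand sides share a suffix/prefix, the critical-pair computation can be sketched as follows. The rule pair used is an invented example, not taken from the article under discussion.

```python
def overlaps(rule1, rule2):
    """Critical pairs of two string-rewriting rules: whenever a proper
    suffix of lhs1 equals a prefix of lhs2, the overlap word
    lhs1[:-k] + lhs2 can be rewritten in two different ways; the two
    results form a critical pair."""
    (l1, r1), (l2, r2) = rule1, rule2
    pairs = []
    for k in range(1, min(len(l1), len(l2))):
        if l1[-k:] == l2[:k]:
            pairs.append((r1 + l2[k:], l1[:-k] + r2))
    return pairs

# With ab -> empty and ba -> empty, the overlap word "aba" reduces to
# "a" both ways: a trivially joinable critical pair.
print(overlaps(("ab", ""), ("ba", "")))  # [('a', 'a')]
```

Completion orients each non-joinable critical pair into a new rule and repeats; term-level completion replaces the prefix/suffix check with unification against subterms, as the paragraph above describes.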
A typical example might involve a (prefix) binary operator symbol x, a unary operator symbol i, and a nullary operator symbol e. One might have the rules:
- xxABC -> xAxBC (associative law)
- xDiD -> e (right inverse)
Where A, B, C, and D are term variables that can stand for arbitrary terms. Unifying the left-hand sides yields the assignments C = ixAB and D = xAB (A and B are unchanged), so that:
- xxABixAB -> xAxBixAB (a special case of the first rule)
- xxABixAB -> e (a special case of the second rule)
From which one can then derive:
- xAxBixAB -> e
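The unification step in this example can be replayed with a small sketch. Terms are nested tuples in prefix form, and single capital letters stand for term variables; this is an illustration only, and it omits the occurs check (which the example does not need).

```python
def is_var(t):
    return isinstance(t, str) and len(t) == 1 and t.isupper()

def resolve(t, s):
    """Apply substitution s to t, chasing variable bindings."""
    while is_var(t) and t in s:
        t = s[t]
    if isinstance(t, tuple):
        return (t[0],) + tuple(resolve(a, s) for a in t[1:])
    return t

def unify(a, b, s):
    """Extend substitution s to make a and b equal, or return None.
    (No occurs check; fine for this example.)"""
    a, b = resolve(a, s), resolve(b, s)
    if a == b:
        return s
    if is_var(a):
        s2 = dict(s)
        s2[a] = b
        return s2
    if is_var(b):
        return unify(b, a, s)
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and a[0] == b[0] and len(a) == len(b)):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

# xxABC is x(x(A,B),C); xDiD is x(D,i(D)).
assoc_lhs = ("x", ("x", "A", "B"), "C")
inv_lhs = ("x", "D", ("i", "D"))
s = unify(assoc_lhs, inv_lhs, {})
# s binds D = xAB and C = ixAB; applying s to the associative rule's
# right-hand side xAxBC yields xAxBixAB, the left side of the new rule.
print(resolve(("x", "A", ("x", "B", "C")), s))
```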
The original Knuth-Bendix paper used this technique, and the algorithm, to find minimal sets of axioms for group theory, among other examples. Group theory normally starts with axioms like:
- xeA -> A (left identity)
- xAe -> A (right identity)
- xiAA -> e (left inverse)
- xAiA -> e (right inverse)
(Taking e to mean the identity element, iA to mean the inverse of A, and xAB to mean the product of A and B.) But in fact any three of these imply the fourth, and the algorithm was able to find a proof of this and to eliminate one of the axioms. The example I showed above has inferred the group-theoretic identity A(B(AB)^-1) = e from the associative law and the right inverse law.
The algorithm described in the Wikipedia article appears to be a much more limited special case.
-- Dominus 02:06, 16 August 2006 (UTC)
Addendum: The paper An Introduction to Knuth–Bendix Completion, A.J.J. Dick, The Computer Journal, 1991; 34: 2-15 would seem to be a more readable explanation than the original Knuth-Bendix paper itself, and bears out my assertion that the Wikipedia article is not describing the Knuth-Bendix algorithm fully. -- Dominus 02:55, 16 August 2006 (UTC)
- I doubt the current version is detailing the algorithm in its full generality also. It would be nice to describe the algorithm for a general equational specification and the resultant term rewriting system -- I have such a description (Term Rewriting Systems, Terese), but I don't quite have the time currently to provide it. Dysprosia 05:05, 16 August 2006 (UTC)
- You're right, see the older discussion. It's just that nobody has had time to do anything about it. If this goes on for long enough (weeks or months), I might find some. This is not a promise ;-). --Stephan Schulz 05:49, 16 August 2006 (UTC)
- The new tag about dispute of accuracy might be a little misleading. The current algorithm is quite correct as stated, and is the (unoptimized) Knuth-Bendix completion procedure for finding a reduced confluent rewriting system for a finitely presented monoid. The scope of the algorithm is fairly clearly indicated in that section. The only dispute is whether to offer the algorithm for finitely presented monoids as the algorithm. Resolving the dispute might be as simple as renaming the section. I'll do that and remove the dispute tag later this week if no one objects. JackSchmidt 00:15, 9 October 2007 (UTC)
- I agree that would be preferable; thanks. Moreover, I don't think there is any dispute that the algorithm as presented is not the algorithm. In this case, at least, there is a canonical source, namely the Knuth-Bendix paper reprinted in Leech. The algorithm as presented in that paper is the algorithm. -- Dominus 01:29, 9 October 2007 (UTC)