Logic programming
Logic programming (which might better be called logical programming, by analogy with mathematical programming, linear programming, etc.) is, broadly speaking, the use of mathematical logic for computer programming. John McCarthy [1958] was the first to publish a proposal that mathematical logic be used for programming. Arguably, the first logic programming language was Planner, which featured pattern-directed invocation of procedural plans from both assertions and goals. To cope with the very limited memory systems available when it was developed, Planner used a backtracking control structure so that only one possible computation path had to be stored at a time. Prolog was subsequently developed as a simplification of Planner, also based on backtracking, but with pattern-directed invocation from goals only. Planner gave rise to the programming languages QA-4, Popler, Conniver, and QLISP, while Mercury, Visual Prolog, Oz and Fril developed from Prolog. There are also concurrent logic programming languages (not based on backtracking) derived from Planner (e.g., Ether) and from Prolog (see Shapiro [1989] for a survey).
Basis in mathematical logic
The point of logic programming is to bring the style of mathematical logic to computer programming. Mathematicians and philosophers find logic a useful tool for developing bodies of theory, and many problem domains are naturally expressed as a theory. To say that a problem needs solving is often equivalent to asking whether a new hypothesis is consistent with, or is a consequence of, an existing theory. Logic provides a way of proving whether or not the hypothesis holds, and because the process of constructing a proof is well understood, logic is a reliable way to answer such questions. Logic programming systems automate this process. Artificial intelligence was an important influence on the development of logic programming.
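For example (a minimal illustrative sketch, with predicate names invented here rather than taken from any historical system), a small theory and a query about its consequences can be written directly as a logic program:

human(socrates).        % the theory: a fact
mortal(X) :- human(X).  % and a rule: every human is mortal

?- mortal(socrates).    % query: is mortal(socrates) a consequence of the theory?
                        % the system answers yes by constructing a proof

Answering the query amounts to constructing a proof from the clauses of the theory.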
Prolog
The programming language Prolog was developed in 1972 by Alain Colmerauer. It emerged from a collaboration between Colmerauer in Marseille and Robert Kowalski in Edinburgh. Colmerauer was working on natural language understanding, using logic to represent semantics and using resolution for question-answering. During the summer of 1971, Colmerauer and Kowalski discovered that the clausal form of logic could be used to represent formal grammars and that resolution theorem provers could be used for parsing. They saw that some theorem provers, like hyper-resolution, behave as bottom-up parsers and others, like SL-resolution [1971], behave as top-down parsers.
During the following summer of 1972, Kowalski, again working with Colmerauer, observed that SL-resolution treats universally quantified clauses in the declarative form of implications
B1 and ... and Bn implies H
as goal-reduction procedures
to show/solve H, show/solve B1 and ... and Bn.
This dual declarative/procedural interpretation later became formalised in the Prolog notation
H :- B1, ..., Bn.
which can be read (and used) both declaratively and procedurally. It also became clear that such clauses could be restricted to definite clauses or Horn clauses, where H, B1, ..., Bn are all atomic predicate logic formulae, and that SL-resolution could be restricted (and generalised) to LUSH or SLD-resolution.
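A standard textbook clause (not taken from the original Marseille programs) illustrates the two readings:

grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

Read declaratively, it states that X is a grandparent of Z if X is a parent of some Y and Y is a parent of Z. Read procedurally, it says: to solve grandparent(X, Z), first solve parent(X, Y) and then solve parent(Y, Z).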
Colmerauer, with Philippe Roussel, used this dual interpretation of clauses as the basis of Prolog, which was implemented in the summer and autumn of 1972. The first Prolog program, also written in 1972 and implemented in Marseille, was a French question-answering system. Kowalski's procedural interpretation and LUSH were later described in a 1973 memo, published in 1974.
The close relationship between the declarative and the procedural interpretations remains a distinctive feature of logic programming languages, though the relationship becomes more complex when negation, disjunction and other quantifiers are allowed in programs.
Negation in logic programming
With the inclusion of negation, the relationship between the declarative and procedural interpretations becomes less straightforward. The so-called negation-as-failure inference rule states that, if a goal Q cannot be proven from a program, then its negation not(Q) is regarded as proven. This does not hold in general in classical logic: it is quite possible that neither Q nor not(Q) is a consequence of a set of axioms. As a result, the rule leads to some logical anomalies and practical difficulties. A backward-chaining proof rule augmented with negation as failure is incomplete, meaning that it is unable to prove all of the logical consequences that follow from a declarative reading of its programs. Nevertheless, most Prolog dialects implement negation as failure, typically written with the operator \+.
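A small, purely illustrative program shows how the rule behaves:

man(john).
man(pete).
married(pete).
bachelor(X) :- man(X), \+ married(X).   % X is a bachelor if X is a man and married(X) cannot be proven

?- bachelor(john).   % succeeds: married(john) is not provable, so \+ married(john) succeeds
?- bachelor(pete).   % fails: married(pete) is provable

Note that married(john) is not asserted to be false anywhere; it is merely unprovable, which is precisely the non-classical step that negation as failure takes.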
Though not complete, the negation-as-failure rule is a sound inference rule (under certain restrictions) with respect to the completion of the program. The completion of a logic program was first defined by Keith Clark; roughly, it amounts to regarding the set of all program clauses with the same predicate on the left-hand side, say
H :- Body1. ... H :- Bodyk.
as the single equivalence formula
H iff (Body1 or ... or Bodyk)
where iff means equivalence (if and only if). Writing the completion also requires explicit use of the equality predicate and the inclusion of a set of appropriate axioms for equality. The notion of completion is closely related to McCarthy's circumscription technique for handling non-monotonic reasoning, and the assumption of a closed world.
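For a schematic propositional example, the clauses

p :- q.
p :- r.

have the completion p iff (q or r). If the program contains no clauses for q or r, their completions are q iff false and r iff false, so the completed program entails not(p), which agrees with what negation as failure computes for the goal p.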
Prolog implementation
The first Prolog implementation was Marseille Prolog, developed in 1972. The use of Prolog as a practical programming language was given great momentum by the development of a compiler by David Warren in Edinburgh in 1977. Experiments demonstrated that Edinburgh Prolog could compete with the processing speed of other symbolic programming languages such as Lisp. Edinburgh Prolog became the de facto standard and strongly influenced the definition of ISO standard Prolog.
Limitations of using mathematical logic for programming
John McCarthy proposed that mathematical logic be used as the foundation for the epistemology of computer systems. At MIT, under the leadership of Marvin Minsky and Seymour Papert, a different, procedural approach was developed. When Planner appeared, the question arose of how the two approaches were related. Robert Kowalski developed the thesis that “computation could be subsumed by deduction” and, in his 1988 paper on the early history of Prolog, approvingly quoted the slogan “Computation is controlled deduction”, which he attributed to Pat Hayes. Contrary to Kowalski and Hayes, Carl Hewitt developed the thesis that logical deduction is incapable of carrying out concurrent computation in open systems. (See Indeterminacy in computation.)
The answer to the question about the relationship between the logical and procedural approaches is that the procedural approach has a different mathematical semantics (see Denotational semantics) from the semantics of mathematical logic (see Model theory).
Concurrent logic programming
Keith Clark, Hervé Gallaire, Steve Gregory, Vijay Saraswat, Ehud Shapiro, Kazunori Ueda and others developed a family of Prolog-like concurrent message-passing systems that use unification of shared variables and data-structure streams for messages. Efforts were made to base these systems on mathematical logic, and they served as the basis of the Japanese Fifth Generation project (ICOT).
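A characteristic example in this family is the merging of two message streams, represented as lists that are built incrementally through shared logic variables. The clauses below are written in plain Horn-clause form, omitting the guard and commit operators that each concurrent dialect adds:

merge([X|Xs], Ys, [X|Zs]) :- merge(Xs, Ys, Zs).   % pass on a message from the first stream
merge(Xs, [Y|Ys], [Y|Zs]) :- merge(Xs, Ys, Zs).   % or a message from the second stream
merge([], Ys, Ys).                                % a closed stream ends the merge
merge(Xs, [], Xs).

In a concurrent logic language, a merge process suspends on an unbound stream variable until some producer process binds it, so that unification of shared variables plays the role of message passing.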
Like the Actor model, the Prolog-like concurrent systems were based on message passing and consequently were subject to the same indeterminacy. This was the basis of an argument by Carl Hewitt and Gul Agha [1988] claiming that the Prolog-like concurrent systems were neither deductive nor logical.
Higher-order logic programming
Several researchers have extended logic programming with higher-order programming features derived from higher-order logic, such as predicate variables. Such languages include the Prolog extensions HiLog and λProlog.
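In these languages a variable may appear in the position of a predicate. Standard Prolog can only approximate this with the built-in call/N, and the sketch below uses that approximation rather than the native syntax of HiLog or λProlog:

% apply a binary predicate P to corresponding elements of two lists
map_pred(_, [], []).
map_pred(P, [X|Xs], [Y|Ys]) :- call(P, X, Y), map_pred(P, Xs, Ys).

double(X, Y) :- Y is 2 * X.

?- map_pred(double, [1, 2, 3], Ys).   % Ys = [2, 4, 6]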
Linear logic programming
Basing logic programming on linear logic has resulted in the design of logic programming languages that are considerably more expressive than those based on classical logic. Horn clause programs can represent state change only through changes in the arguments to predicates. In linear logic programming, the ambient linear logic itself can be used to support state change. Some early designs of logic programming languages based on linear logic include LO [Andreoli & Pareschi, 1991], Lolli [Hodas & Miller, 1994], ACL [Kobayashi & Yonezawa, 1994], and Forum [Miller, 1996]. Forum provides a goal-directed interpretation of all of linear logic.
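The difference can be shown schematically. In linear logic, the resources consumed by an implication are used up, so a formula such as (written in plain linear logic notation, not in the concrete syntax of any of the languages above)

dollar ⊸ (coffee ⊗ change)

where ⊸ is linear implication and ⊗ is multiplicative conjunction, describes a state change: once the coffee and the change have been produced, the dollar is no longer available. A Horn clause can express such a change only by threading an explicit state argument through its predicates.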
Application domains
- Expert systems, where the program generates a recommendation or answer from a large model of the application domain (a small illustrative rule base is sketched after this list).
- Automated theorem proving, where the program generates novel theorems to extend some existing body of theory.
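A minimal sketch of the expert-system style, with fault names and symptoms invented purely for illustration, shows how domain knowledge and recommendation rules are both just clauses:

% illustrative rules for a toy diagnostic domain
symptom(engine_fails_to_start).
symptom(lights_dim).
recommend(check_battery) :- symptom(engine_fails_to_start), symptom(lights_dim).
recommend(check_fuel)    :- symptom(engine_fails_to_start), \+ symptom(lights_dim).

?- recommend(Action).   % Action = check_battery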
History
Logic programming is an idea that has been investigated in the context of artificial intelligence since at least the time of John McCarthy [1958], who proposed
- "programs to manipulate in a suitable formal language (most likely a part of the predicate calculus) common instrumental statements. The basic program will draw immediate conclusions from a list of premises. These conclusions will be either declarative or imperative sentences. When an imperative sentence is deduced the program takes a corresponding action.”
McCarthy justified his proposal as follows:
- "'The main advantages we expect the advice taker to have is that its behavior will be improvable merely by making statements to it, telling it about its symbolic environment and what is wanted from it. To make these statements will require little if any knowledge of the program or the previous knowledge of the advice taker. One will be able to assume that the advice taker will have available to it a fairly wide class of immediate logical consequences of anything it is told and its previous knowledge. This property is expected to have much in common with what makes us describe certain humans as having common sense. We shall therefore say that a program has common sense if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows."
See also
- Constraint logic programming
- Formal methods
- Functional programming
- Programming paradigm
- Inductive logic programming
References
- John McCarthy. Programs with common sense Symposium on Mechanization of Thought Processes. National Physical Laboratory. Teddington, England. 1958.
- Fisher Black. A deductive question answering system Harvard University. Thesis. 1964.
- James Slagle. Experiments with a Deductive Question-Answering Program CACM. December, 1965.
- Cordell Green. Application of Theorem Proving to Problem Solving IJCAI 1969.
- Carl Hewitt. Planner: A Language for Proving Theorems in Robots IJCAI 1969.
- Gerry Sussman and Terry Winograd. Micro-planner Reference Manual AI Memo No. 203, MIT Project MAC, July 1970.
- Carl Hewitt. Procedural Embedding of Knowledge In Planner IJCAI 1971.
- Terry Winograd. Procedures as a Representation for Data in a Computer Program for Understanding Natural Language MIT AI TR-235. January 1971.
- Bruce Anderson. Documentation for LIB PICO-PLANNER School of Artificial Intelligence, Edinburgh University. 1972
- Bruce Baumgart. Micro-Planner Alternate Reference Manual Stanford AI Lab Operating Note No. 67, April 1972.
- Julian Davies. Popler 1.6 Reference Manual University of Edinburgh, TPU Report No. 1, May 1973.
- Jeff Rulifson, Jan Derksen, and Richard Waldinger. QA4, A Procedural Calculus for Intuitive Reasoning SRI AI Center Technical Note 73, November 1973.
- Robert Kowalski and Donald Kuehner. Linear Resolution with Selection Function Artificial Intelligence, Vol. 2, 1971, pp. 227-60.
- Robert Kowalski Predicate Logic as a Programming Language Memo 70, Department of Artificial Intelligence, Edinburgh University. 1973. Also in Proceedings IFIP Congress, Stockholm, North Holland Publishing Co., 1974, pp. 569-574.
- Drew McDermott and Gerry Sussman. The Conniver Reference Manual MIT AI Memo 259A. January 1974.
- Earl Sacerdoti, et al. QLISP: A Language for the Interactive Development of Complex Systems AFIPS National Computer Conference. 1976.
- Bill Kornfeld and Carl Hewitt. The Scientific Community Metaphor IEEE Transactions on Systems, Man, and Cybernetics. January 1981.
- Bill Kornfeld. The Use of Parallelism to Implement a Heuristic Search IJCAI 1981.
- Bill Kornfeld. Parallelism in Problem Solving MIT EECS Doctoral Dissertation. August 1981.
- Bill Kornfeld. Combinatorially Implosive Algorithms CACM. 1982
- Carl Hewitt. The Challenge of Open Systems Byte Magazine. April 1985.
- Robert Kowalski. The limitation of logic Proceedings of the 1986 ACM fourteenth annual conference on Computer science.
- Ehud Shapiro (Editor). Concurrent Prolog MIT Press. 1987.
- Robert Kowalski. The Early Years of Logic Programming CACM. January 1988.
- Ehud Shapiro. The family of concurrent logic programming languages ACM Computing Surveys. September 1989.
- Carl Hewitt and Gul Agha. Guarded Horn clause languages: are they deductive and Logical? International Conference on Fifth Generation Computer Systems, Ohmsha 1988. Tokyo. Also in Artificial Intelligence at MIT, Vol. 2. MIT Press 1991.
- Shunichi Uchida and Kazuhiro Fuchi. Proceedings of the FGCS Project Evaluation Workshop Institute for New Generation Computer Technology (ICOT). 1992.
- Carl Hewitt. The repeated demise of logic programming and why it will be reincarnated What Went Wrong and Why: Lessons from AI Research and Applications. Technical Report SS-06-08. AAAI Press. March 2006.
- J. W. Lloyd. Foundations of Logic Programming (2nd edition). Springer-Verlag 1987.
- Jean-Marc Andreoli and Remo Pareschi. Linear Objects: Logical Processes with Built-In Inheritance, Proceedings of the Seventh International Conference on Logic Programming, Jerusalem, May 1990.
- Joshua Hodas and Dale Miller. Logic Programming in a Fragment of Intuitionistic Linear Logic, Information and Computation, 1994, 110(2), 327-365.
- Naoki Kobayashi and Akinori Yonezawa. Asynchronous communication model based on linear logic, Formal Aspects of Computing, 1994, 279-294.
- Dale Miller. Forum: A Multiple-Conclusion Specification Language, Theoretical Computer Science, 1996, 165 (1), 201-232.
- Dale Miller. Overview of linear logic programming, in Linear Logic in Computer Science, edited by Thomas Ehrhard, Jean-Yves Girard, Paul Ruet, and Phil Scott. Cambridge University Press. London Mathematical Society Lecture Note, Volume 316.