Backward chaining
Backward chaining (or backward reasoning) is an inference method that can be described (in lay terms) as working backward from the goal(s). It is used in automated theorem provers, inference engines, proof assistants and other artificial intelligence applications.[1]
In game theory, its application to (simpler) subgames in order to find a solution to the game is called backward induction. In chess, it is called retrograde analysis, and it is used to generate endgame tablebases for computer chess.
Backward chaining is implemented in logic programming by SLD resolution. Both are based on the modus ponens inference rule. It is one of the two most commonly used methods of reasoning with inference rules and logical implications – the other is forward chaining. Backward chaining systems usually employ a depth-first search strategy, as Prolog does.[2]
How it works
Backward chaining starts with a list of goals (or a hypothesis) and works backwards from the consequent to the antecedent to see if any available data supports any of these consequents.[3] An inference engine using backward chaining searches the inference rules until it finds one whose consequent (Then clause) matches a desired goal. If the antecedent (If clause) of that rule is not known to be true, it is added to the list of goals (for the original goal to be confirmed, data confirming this new goal must also be provided).
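In outline, this goal-reduction loop can be sketched as follows. This is a minimal illustration under simplifying assumptions (propositional rules with no placeholders or variables, an acyclic rule base, and the hypothetical names prove, rules, and facts), not a full inference engine:

```python
def prove(goal, rules, facts):
    """Backward chaining over a propositional rule base:
    a goal holds if it is a known fact, or if some rule concludes it
    and every antecedent of that rule can in turn be proved
    (the antecedents become the new goals)."""
    if goal in facts:
        return True
    return any(
        consequent == goal and all(prove(a, rules, facts) for a in antecedents)
        for antecedents, consequent in rules  # each rule is (antecedents, consequent)
    )
```

A real engine additionally handles variables (see the placeholder in the worked example below) and, like Prolog, performs the search depth-first with backtracking.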
For example, suppose that the goal is to conclude whether Tweety or Fritz is a frog, given information about each of them, and that the rule base contains the following four rules:
- If X croaks and eats flies – Then X is a frog
- If X chirps and sings – Then X is a canary
- If X is a frog – Then X is green
- If X is a canary – Then X is yellow
Let us illustrate backward chaining by following the steps a computer would take as it evaluates the rules. Assume the following facts:
- Fritz croaks
- Fritz eats flies
- Tweety eats flies
- Tweety chirps
- Tweety is yellow
With backward reasoning, the computer can answer the question "Who is a frog?" in four steps, using a placeholder (here, a question mark) for the answer:
1. ? is a frog
Based on rule 1, the computer can derive:
2. ? croaks and eats flies
Based on logic, the computer can derive:
3. ? croaks and ? eats flies
Based on the facts, the computer can derive:
4. Fritz croaks and Fritz eats flies
This derivation will cause the computer to produce Fritz as the answer to the question "Who is a frog?".
Note that the computer has not used any knowledge about Tweety to compute that Fritz is a frog.
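The same derivation can be reproduced with a small backward-chaining sketch. The code below is illustrative only (the names RULES, FACTS, and who are hypothetical); it encodes the four rules and five facts given above, with the string "X" standing in for the question-mark placeholder:

```python
# Rules pair a list of antecedents with a consequent; "X" is the placeholder.
RULES = [
    ([("X", "croaks"), ("X", "eats flies")], ("X", "is a frog")),    # rule 1
    ([("X", "chirps"), ("X", "sings")],      ("X", "is a canary")),  # rule 2
    ([("X", "is a frog")],                   ("X", "is green")),     # rule 3
    ([("X", "is a canary")],                 ("X", "is yellow")),    # rule 4
]

FACTS = {
    ("Fritz", "croaks"),
    ("Fritz", "eats flies"),
    ("Tweety", "eats flies"),
    ("Tweety", "chirps"),
    ("Tweety", "is yellow"),
}

def who(goal):
    """Backward chaining: find every individual that can replace the
    placeholder so that `goal` holds, either directly as a fact or via a
    rule whose consequent matches the goal (its antecedents then become
    the new goals). Assumes an acyclic rule base, as here."""
    answers = {subject for subject, predicate in FACTS if predicate == goal}
    for antecedents, (_, consequent) in RULES:
        if consequent == goal:
            # Steps 2-3 above: the antecedents are the new goals; an
            # individual qualifies only if it satisfies all of them.
            per_goal = [who(predicate) for _, predicate in antecedents]
            answers |= set.intersection(*per_goal)
    return answers

print(who("is a frog"))   # prints {'Fritz'}
```

This set-based formulation collects all answers at once; a depth-first engine such as Prolog would instead try one candidate binding at a time and backtrack on failure.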
Note that the goals always match the affirmed versions of the consequents of implications (not the negated versions, as in modus tollens), and that even then their antecedents are treated as the new goals (not as conclusions, as in affirming the consequent), which ultimately must match known facts (usually defined as consequents whose antecedents are always true). Thus, the inference rule actually used is modus ponens.
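Schematically, writing P for an antecedent and Q for the matching consequent:

\[
\frac{P \to Q \qquad P}{Q}\ \text{(modus ponens)}
\qquad
\frac{P \to Q \qquad \neg Q}{\neg P}\ \text{(modus tollens)}
\qquad
\frac{P \to Q \qquad Q}{P}\ \text{(affirming the consequent – invalid)}
\]

Backward chaining reads an implication from right to left only to select its goals (Q is the current goal, P becomes the new one); the inference it ultimately records, once P has been established, is the left-to-right modus ponens step.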
Because the list of goals determines which rules are selected and used, this method is called goal-driven, in contrast to data-driven forward-chaining inference. The backward chaining approach is often employed by expert systems.
Programming languages such as Prolog, Knowledge Machine and ECLiPSe support backward chaining within their inference engines.[4]
References
- ↑ Feigenbaum, Edward (1988). The Rise of the Expert Company. Times Books. p. 317. ISBN 0-8129-1731-6.
- ↑ Michel Chein; Marie-Laure Mugnier (2009). Graph-based knowledge representation: computational foundations of conceptual graphs. Springer. p. 297. ISBN 978-1-84800-285-2.
- ↑ Definition of backward chaining as a depth-first search method: Russell & Norvig 2009, p. 337.
- ↑ Languages that support backward chaining: Russell & Norvig 2009, p. 339.