Forward chaining
From Wikipedia, the free encyclopedia
Forward chaining is one of the two main methods of reasoning when using inference rules (in artificial intelligence). The other is backward chaining.
Forward chaining starts with the available data and uses inference rules to extract more data (from an end user, for example) until a goal is reached. An inference engine using forward chaining searches the inference rules until it finds one whose If clause (antecedent) is known to be true. It can then conclude, or infer, the Then clause (consequent), adding this new information to its data set.
Inference engines will often cycle through this process until the goal is reached.
For example, suppose that the goal is to conclude the color of a pet named Fritz, given that he croaks and eats flies, and that the rule base contains the following two rules:
- If Fritz croaks and eats flies - Then Fritz is a frog
- If Fritz is a frog - Then Fritz is green
The given data (that Fritz croaks and eats flies) is first added to the knowledge base, and the rule base is searched for a rule whose antecedent matches the known facts. This is true of the first rule, so its conclusion (that Fritz is a frog) is also added to the knowledge base, and the rule base is searched again. This time the second rule's antecedent matches the newly inferred fact, so its conclusion (that Fritz is green) is added as well. Nothing more can be inferred from this information, but the goal of determining the color of Fritz has been accomplished.
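The match-and-fire cycle above can be sketched in a few lines of Python. This is an illustrative toy, not the engine of any real system such as CLIPS: rules are assumed to be pairs of (antecedent facts, consequent fact), and the loop simply repeats until no rule can add anything new.

```python
def forward_chain(facts, rules):
    """Repeatedly fire any rule whose antecedents are all known,
    adding its consequent to the fact set, until no new fact
    can be inferred (a fixed point is reached)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            # Fire the rule only if it yields a genuinely new fact.
            if consequent not in facts and all(a in facts for a in antecedents):
                facts.add(consequent)
                changed = True
    return facts

# The Fritz rule base from the example above.
rules = [
    ({"Fritz croaks", "Fritz eats flies"}, "Fritz is a frog"),
    ({"Fritz is a frog"}, "Fritz is green"),
]

facts = forward_chain({"Fritz croaks", "Fritz eats flies"}, rules)
print("Fritz is green" in facts)  # True
```

Starting from the two given facts, the first pass fires the first rule, the second pass fires the second rule, and a third pass finds nothing new, so the loop terminates with the goal fact in the knowledge base.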
Forward-chaining inference is often called data-driven, in contrast to backward-chaining inference, which is referred to as goal-driven reasoning. The bottom-up approach of forward chaining is commonly used in expert systems, such as CLIPS. One of the advantages of forward chaining over backward chaining is that the reception of new data can trigger new inferences, which makes the engine better suited to dynamic situations in which conditions are likely to change.