Inference engine

In computer science, and specifically the branches of knowledge engineering and artificial intelligence, an inference engine is a computer program that tries to derive answers from a knowledge base. It is the "brain" that expert systems use to reason about the information in the knowledge base for the ultimate purpose of formulating new conclusions. Inference engines are considered to be a special case of reasoning engines, which can use more general methods of reasoning.

Architecture

The separation of inference engines as a distinct software component stems from the typical production system architecture. This architecture relies on a data store, or working memory, serving as a global database of symbols representing facts or assertions about the problem; on a set of rules which constitute the program, stored in a rule memory or production memory; and on an inference engine, required to execute the rules. (Executing rules is also referred to as firing rules.) The inference engine must determine which rules are relevant to a given data store configuration and choose which one(s) to apply. The control strategy used to select rules is often called conflict resolution.
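
As a concrete illustration, here is one minimal, hypothetical way this architecture could be represented in Python; the fact fields, rule name, and overall layout are inventions of this example rather than any particular engine's format.

```python
# Hypothetical, minimal representation of a production-system architecture.

# Working memory ("data store"): a global collection of facts about the problem.
working_memory = [
    {"id": 1, "is_bird": True, "can_fly": None, "time_tag": 0},
]

# Production memory ("rule memory"): condition-action rules. Each condition is a
# predicate tested against a fact; the action runs when the rule fires.
production_memory = [
    {
        "name": "birds-can-fly",
        "conditions": [lambda fact: fact["is_bird"] and fact["can_fly"] is None],
        "action": lambda facts: facts[0].update(can_fly=True),
    },
]
```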

An inference engine has three main elements (see the sketch after this list). They are:

  1. An interpreter. The interpreter executes the chosen agenda items by applying the corresponding base rules.
  2. A scheduler. The scheduler maintains control over the agenda by estimating the effects of applying inference rules in light of item priorities or other criteria on the agenda.
  3. A consistency enforcer. The consistency enforcer attempts to maintain a consistent representation of the emerging solution.
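
A skeletal sketch of how these three elements might map onto a single engine object follows; the class and method names below are illustrative assumptions, not a standard API.

```python
# Illustrative skeleton: the three elements of an inference engine as methods.
class InferenceEngine:
    def __init__(self, rules, working_memory):
        self.rules = rules
        self.working_memory = working_memory
        self.agenda = []  # instantiations (rule + matched facts) awaiting execution

    def schedule(self):
        """Scheduler: order the agenda, e.g. by priority or recency of the facts."""
        self.agenda.sort(key=lambda inst: inst.get("priority", 0), reverse=True)

    def interpret(self):
        """Interpreter: execute chosen agenda items by applying their rules."""
        for inst in self.agenda:
            inst["rule"]["action"](list(inst["facts"]))

    def enforce_consistency(self):
        """Consistency enforcer: keep the emerging solution consistent, e.g.
        retract facts that are contradicted by newly derived ones."""
        pass
```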

The recognize-act cycle

The inference engine can be described as a form of finite state machine with a cycle consisting of three action states: match rules, select rules, and execute rules.

In the first state, match rules, the inference engine finds all of the rules that are satisfied by the current contents of the data store. When rules are in the typical condition-action form, this means testing the conditions against the working memory. The rule matchings that are found are all candidates for execution: they are collectively referred to as the conflict set. Note that the same rule may appear several times in the conflict set if it matches different subsets of data items. The pair of a rule and a subset of matching data items is called an instantiation of the rule.
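
A sketch of the match state follows, under the simplifying assumption that every condition is a predicate over a single fact (real engines such as Rete use far more elaborate pattern matching); the conflict set then collects every rule/fact-combination pair, i.e. every instantiation.

```python
from itertools import product

# Sketch of the match state: compute the conflict set of instantiations.
# Assumes facts are dicts and each rule holds a list of single-fact predicates.
def match(rules, working_memory):
    conflict_set = []
    for rule in rules:
        # For each condition, collect the facts that satisfy it.
        per_condition = [[f for f in working_memory if cond(f)]
                         for cond in rule["conditions"]]
        if all(per_condition):
            # A rule may yield several instantiations, one per combination
            # of matching facts.
            for combo in product(*per_condition):
                conflict_set.append({"rule": rule, "facts": combo})
    return conflict_set
```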

In many applications, where large volumes of data are concerned or where execution time is critical, computing the conflict set is a non-trivial problem. Earlier research on inference engines therefore focused on better algorithms for matching rules to data. The Rete algorithm, developed by Charles Forgy, is an example of such a matching algorithm; it was used in the OPS series of production system languages. Daniel P. Miranker later improved on Rete with TREAT, a matching algorithm that incorporated optimization techniques derived from relational database systems.

The inference engine then passes the conflict set along to the second state, select rules. In this state, the inference engine applies some selection strategy to determine which rules will actually be executed. The selection strategy can be hard-coded into the engine or may be specified as part of the model. In the larger context of AI, these selection strategies are often referred to as heuristics, following Allen Newell's unified theory of cognition.
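
In code, the select state reduces to applying a pluggable conflict-resolution strategy to the conflict set; the default below (first instantiation wins) is only a placeholder, and the `strategy` parameter stands in for heuristics such as the OPS5 strategies described next.

```python
# Sketch of the select state: conflict resolution as a pluggable strategy.
def select(conflict_set, strategy=None):
    if not conflict_set:
        return None
    if strategy is None:
        return conflict_set[0]      # placeholder: first instantiation wins
    return strategy(conflict_set)   # e.g. a recency- or specificity-based heuristic
```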

In OPS5, for instance, the programmer can choose between two conflict resolution strategies. The LEX strategy orders instantiations by the recency of the time tags attached to their data items: instantiations whose data items carry more recent time tags are given higher priority. Within this ordering, instantiations are further sorted by the complexity of the conditions in the rule. The other strategy, MEA, puts special emphasis on the recency of the working memory element that matches the first condition of the rule. (The latter heuristic is heavily used in means-ends analysis.)
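
The following is a rough approximation of a LEX-style ordering, not OPS5 itself: it assumes each fact carries a `time_tag` recording the cycle in which it was added or last modified, prefers instantiations with the most recent tags, and breaks ties by the number of conditions as a crude stand-in for rule complexity.

```python
# Approximate LEX-style conflict resolution: most recent time tags first,
# ties broken by the number of conditions in the rule.
def lex_like_strategy(conflict_set):
    def key(inst):
        recency = sorted((fact["time_tag"] for fact in inst["facts"]), reverse=True)
        specificity = len(inst["rule"]["conditions"])
        return (recency, specificity)
    return max(conflict_set, key=key)
```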

Finally, the selected instantiations are passed to the third state, execute rules. The inference engine executes, or fires, the selected rules, with the instantiation's data items as parameters. Usually the actions in the right-hand side of a rule change the data store, but they may also trigger further processing outside of the inference engine (interacting with users through a graphical user interface or calling local or remote programs, for instance). Since the data store is usually updated by firing rules, a different set of rules will match during the next cycle after these actions are performed.
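
Continuing the same sketch, the execute state passes the instantiation's facts to the rule's action and re-stamps the updated facts so that recency-based strategies see the change on the next cycle; the `clock` parameter is an assumption of this example rather than a fixed part of any engine.

```python
# Sketch of the execute ("fire") state.
def fire(instantiation, clock):
    rule, facts = instantiation["rule"], instantiation["facts"]
    rule["action"](list(facts))    # the action usually changes the data store
    for fact in facts:             # re-stamp facts so recency is visible later
        fact["time_tag"] = clock
```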

The inference engine then cycles back to the first state and is ready to start over again. This control mechanism is referred to as the recognize-act cycle. The inference engine stops either after a given number of cycles, controlled by the operator, or when the data store reaches a quiescent state in which no rules match the data.
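
Tying the three states together, here is a toy recognize-act loop built from the match, select, and fire sketches above; it stops on quiescence (an empty conflict set) or after a caller-supplied maximum number of cycles, and ignores refinements such as refractoriness (not re-firing the same instantiation twice).

```python
# Toy recognize-act cycle built from the match/select/fire sketches above.
def recognize_act(rules, working_memory, strategy=None, max_cycles=100):
    for cycle in range(1, max_cycles + 1):
        conflict_set = match(rules, working_memory)   # 1. match rules
        if not conflict_set:
            break                                     # quiescence: nothing matches
        chosen = select(conflict_set, strategy)       # 2. select rules
        fire(chosen, clock=cycle)                     # 3. execute rules
    return working_memory
```

With the example working memory and production memory from the architecture sketch, `recognize_act(production_memory, working_memory)` fires the birds-can-fly rule once, after which no conditions match and the loop stops on quiescence.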

Data-driven computation versus procedural control

The inference engine control is based on the frequent reevaluation of the data store states, not on any static control structure of the program. The computation is often qualified as data-driven or pattern-directed in contrast to the more traditional procedural control. Rules can communicate with one another only by way of the data, whereas in traditional programming languages procedures and functions explicitly call one another. Unlike instructions, rules are not executed sequentially and it is not always possible to determine through inspection of a set of rules which rule will be executed first or cause the inference engine to terminate.

In contrast to a procedural computation, in which knowledge about the problem domain is mixed in with instructions about the flow of control—although object-oriented programming languages mitigate this entanglement—the inference engine model allows a more complete separation of the knowledge (in the rules) from the control (the inference engine).
