Horizon effect
From Wikipedia, the free encyclopedia
The horizon effect (or horizon problem) is a problem in artificial intelligence.
When searching a large game tree (for instance using minimax or alpha-beta pruning) it is often not feasible to search the entire tree, so the tree is searched only down to a certain depth. This gives rise to the horizon effect: a significant change may exist just over the "horizon", slightly beyond the depth to which the tree has been searched, so evaluating the partial tree gives a misleading result.
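The depth-limited search described above can be sketched as follows. This is a minimal illustration, not a production engine: the tree, the positions, and the static scores are all invented for the example.

```python
# A minimal sketch of depth-limited minimax over a toy game tree.
# `children` maps a position to its successors; `evaluate` gives a
# static score, which is all the search can see once it hits its horizon.

def minimax(position, depth, maximizing, children, evaluate):
    """Return the minimax value of `position`, looking `depth` plies ahead."""
    moves = children.get(position, [])
    if depth == 0 or not moves:          # the search horizon, or a terminal node
        return evaluate(position)
    values = (minimax(m, depth - 1, not maximizing, children, evaluate)
              for m in moves)
    return max(values) if maximizing else min(values)

# Toy tree in which the static score of "b" is misleadingly optimistic:
children = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a": 1, "b": 4, "a1": 3, "a2": 5, "b1": 2, "b2": 9}
evaluate = scores.__getitem__

shallow = minimax("root", 1, True, children, evaluate)   # sees only a and b
deep = minimax("root", 2, True, children, evaluate)      # sees the leaves
```

Searching one ply deeper changes the result from 4 to 3: the shallow search trusts the misleading static score of "b", while the deeper search sees what actually lies behind it. That depth-dependence is the horizon effect in miniature.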
An example of the horizon effect occurs when some negative event is inevitable but postponable: because only a partial game tree has been analysed, it appears to the system that the event can be avoided when in fact it cannot. For example, suppose that in chess Black is searching the game tree to a depth of six plies and, from the current position, sees that it will lose its queen on the sixth ply. Suppose further that there is a combination in which, by sacrificing a rook, Black can push the loss of the queen back to the eighth ply. This is of course a worse move, because it loses not only the queen but a rook as well. However, since the loss of the queen has been pushed over the search horizon, the search never discovers or evaluates it. Sacrificing the rook appears better than losing the queen, so the sacrificing move is returned as the best option.
The horizon effect can be mitigated by extending the search algorithm with a quiescence search. This gives the search the ability to look beyond its horizon for a certain class of moves of major importance to the game state, such as captures.
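A quiescence extension can be sketched as below, grafted onto a depth-limited negamax search. The position encoding is invented purely for this example: a position is a tuple of (static score for the side to move, quiet successor positions, capture successor positions).

```python
# A minimal sketch of quiescence search on top of depth-limited negamax.
# Position encoding (illustrative only):
#   (static_score_for_side_to_move, quiet_children, capture_children)

INF = float("inf")

def evaluate(pos):
    return pos[0]

def quiet_moves(pos):
    return pos[1]

def capture_moves(pos):
    return pos[2]

def quiesce(pos, alpha, beta):
    """Past the main horizon, keep searching 'noisy' moves (here: captures)."""
    stand_pat = evaluate(pos)         # score if the side to move stops capturing
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    for child in capture_moves(pos):  # extend only tactically loud moves
        score = -quiesce(child, -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

def search(pos, depth, alpha, beta):
    """Depth-limited negamax that ends in quiescence rather than a raw cutoff."""
    if depth == 0:
        return quiesce(pos, alpha, beta)
    moves = quiet_moves(pos) + capture_moves(pos)
    if not moves:
        return evaluate(pos)
    for child in moves:
        score = -search(child, depth - 1, -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

# A position that looks fine statically (+5) but allows a winning capture:
after_capture = (-9, [], [])          # -9 for the opponent, i.e. +9 for us
pos = (5, [], [after_capture])
```

At a bare depth cutoff the static evaluation of `pos` reports +5; the quiescence extension plays out the capture and reports +9 instead, so the tactical swing just beyond the horizon is no longer missed.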
Rewriting the evaluation function for leaf nodes, or analysing sufficiently many more nodes, will solve many instances of the horizon effect.