TD-Gammon

TD-Gammon was a computer backgammon program developed in 1992 by Gerald Tesauro at IBM's Thomas J. Watson Research Center. Its name reflects the fact that it is an artificial neural network trained by a form of temporal-difference learning, specifically TD(λ).

TD-Gammon achieved a level of play just slightly below that of the top human backgammon players of the time. It explored strategies that humans had not pursued and led to advances in the theory of correct backgammon play.

Algorithm for play and learning

During play, TD-Gammon examines on each turn all possible legal moves and all their possible responses (two-ply look-ahead), feeds each resulting board position into its evaluation function, and chooses the move leading to the board position with the highest score. In this respect, TD-Gammon is no different from almost any other computer board-game program. TD-Gammon's innovation was in how it learned its evaluation function.
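
As a concrete illustration, here is a minimal sketch of this evaluation-driven move selection. The helper names (legal_moves, apply_move, evaluate) are assumptions for the example, not TD-Gammon's actual code, and only one ply is expanded for brevity:

    # Minimal sketch of evaluation-based move selection (hypothetical helpers;
    # a full two-ply search would also expand the opponent's replies and back
    # up their values).
    def choose_move(board, dice, evaluate, legal_moves, apply_move):
        """Return the legal move whose resulting position evaluates highest."""
        best_move, best_score = None, float("-inf")
        for move in legal_moves(board, dice):
            position = apply_move(board, move)   # board after making the move
            score = evaluate(position)           # neural-net estimate of outcome
            if score > best_score:
                best_move, best_score = move, score
        return best_move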

TD-Gammon's learning algorithm is as follows:[1]

1. Each example consists of feeding the program a complete transcript of a game: all board positions from beginning to end, and a vector Y consisting of four bits, indicating the outcome of the game: White wins normally, Black wins normally, White wins a gammon, Black wins a gammon. (Backgammons are ignored because of their extreme rarity.)

2. The final Y vector is compared against the evaluation of the final board position, and the weights in the neural net are updated to bring that evaluation closer to Y. Then, for each preceding board position, the weights are updated to bring its evaluation Y(t) closer to the evaluation Y(t+1) of the position that followed it.

Thus the evaluation function is made increasingly internally consistent: the evaluation of each board position is pushed closer to the evaluation of the position reached after the following move (hence "temporal-difference learning").
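
A compact sketch of this update using eligibility traces, under two simplifying assumptions: a linear evaluation function in place of TD-Gammon's multilayer network, and a scalar outcome in place of the four-bit Y vector:

    import numpy as np

    def td_lambda_game_update(weights, positions, outcome, alpha=0.1, lam=0.7):
        """Apply the TD(lambda) update over one game transcript.

        positions: list of feature vectors, one per board position.
        outcome:   final game result (a scalar here; TD-Gammon's actual
                   target is the four-component Y vector described above).
        """
        trace = np.zeros_like(weights)                # eligibility trace of gradients
        for t in range(len(positions)):
            x_t = positions[t]
            y_t = weights @ x_t                       # evaluation Y(t)
            if t + 1 < len(positions):
                target = weights @ positions[t + 1]   # evaluation Y(t+1)
            else:
                target = outcome                      # true outcome at game's end
            # For a linear evaluator the gradient of Y(t) is simply x_t.
            trace = lam * trace + x_t
            weights = weights + alpha * (target - y_t) * trace
        return weights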

Experiments and stages of training

Unlike previous neural-net backgammon programs such as Neurogammon (also written by Tesauro), where an expert trained the program by supplying the "correct" evaluation of each position, TD-Gammon was at first programmed "knowledge-free".[1] In early experimentation, using only a raw board encoding with no human-designed features, TD-Gammon reached a level of play comparable to Neurogammon: that of an intermediate-level human backgammon player.

Even though TD-Gammon discovered insightful features on its own, Tesauro wondered whether its play could be improved by adding hand-designed features like Neurogammon's. Indeed, the self-training TD-Gammon with expert-designed features soon surpassed all previous computer backgammon programs. It stopped improving after about 1,500,000 self-play games, using a network of 80 hidden units.[2]
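
For reference, a rough sketch of an evaluation network of this shape: one hidden layer of 80 sigmoid units feeding four sigmoid outputs (the outcome estimates Y). The 198-unit input encoding follows the description in Sutton and Barto[2]; the random weight initialization is an arbitrary choice for the example, and biases are omitted for brevity:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    n_inputs, n_hidden, n_outputs = 198, 80, 4   # 80 hidden units, 4 outcomes
    W1 = rng.normal(scale=0.1, size=(n_hidden, n_inputs))
    W2 = rng.normal(scale=0.1, size=(n_outputs, n_hidden))

    def evaluate(board_features):
        """Map an encoded board position (length-198 vector) to the four
        outcome estimates used as the Y vector of the learning algorithm."""
        hidden = sigmoid(W1 @ board_features)
        return sigmoid(W2 @ hidden)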

Advances in backgammon theory

TD-Gammon's exclusive training through self-play (rather than tutelage) enabled it to explore strategies that humans previously hadn't considered or had ruled out erroneously. Its success with unorthodox strategies had a significant impact on the backgammon community.[1]

For example, on the opening play, the conventional wisdom was that given a roll of 2-1, 4-1, or 5-1, White should move a single checker from point 6 to point 5. Known as "slotting", this technique trades the risk of a hit for the opportunity to develop an aggressive position. TD-Gammon found that the more conservative play of 24-23 was superior. Tournament players began experimenting with TD-Gammon's move, and found success. Within a few years, slotting had disappeared from tournament play (though it has recently reappeared for the 2-1 roll[3]).

Backgammon expert Kit Woolsey found that TD-Gammon's positional judgement, especially its weighing of risk against safety, was superior to his own or that of any other human.

TD-Gammon's excellent positional play was undercut by occasional poor endgame play. The endgame requires a more analytic approach, sometimes with extensive look-ahead. TD-Gammon's limitation to two-ply look-ahead put a ceiling on what it could achieve in this part of the game. Its strengths and weaknesses were the opposite of those of symbolic artificial intelligence programs and of most computer software in general: it was good at matters that require an intuitive "feel" but bad at systematic analysis.

References

  1. ^ a b c Tesauro, Gerald (March 1995). "Temporal Difference Learning and TD-Gammon". Communications of the ACM 38 (3). http://www.research.ibm.com/massive/tdl.html. Retrieved 2010-02-08.
  2. ^ Sutton, Richard S.; Barto, Andrew G. (1998). Reinforcement Learning: An Introduction. MIT Press. Table 11.1. http://webdocs.cs.ualberta.ca/~sutton/book/ebook/node108.html.
  3. ^ "Backgammon: How to Play the Opening Rolls". http://www.bkgm.com/openings.html.