Multi-armed bandit

A multi-armed bandit, also sometimes called a K-armed bandit, is a simple machine learning problem based on an analogy with a traditional slot machine (one-armed bandit), but with more than one lever. When pulled, each lever provides a reward drawn from a distribution associated with that specific lever. The objective of the gambler is to maximize the sum of rewards collected through iterative pulls. It is classically assumed that the gambler has no initial knowledge about the levers. The crucial tradeoff the gambler faces at each trial is between "exploitation" of the lever that has the highest expected payoff and "exploration" to get more information about the expected payoffs of the other levers.

Empirical motivation

The multi-armed bandit problem, originally described by Robbins in 1952, is a simple model of an agent that simultaneously attempts to acquire new knowledge and to optimize its decisions based on existing knowledge. Practical examples include clinical trials where the effects of different experimental treatments need to be investigated while minimizing patient losses, and adaptive routing efforts for minimizing delays in a network. The questions arising in these cases are related to the problem of balancing reward maximization based on the knowledge already acquired with attempting new actions to further increase knowledge. This is known as the exploitation vs. exploration tradeoff in reinforcement learning.

The model can also be used to control dynamic allocation of resources to different projects, answering the question "which project should I work on" given uncertainty about the difficulty and payoff of each possibility.

The multi-armed bandit model

The multi-armed bandit (or just bandit for short) can be seen as a set of real distributions B = \{R_1, \dots, R_K\}, each distribution being associated with the rewards delivered by one of the K levers. Let \mu_1, \dots, \mu_K be the mean values associated with these reward distributions. The gambler iteratively plays one lever per round and observes the associated reward. The objective is to maximize the sum of the collected rewards. The horizon H is the number of rounds that remain to be played. The bandit problem is formally equivalent to a one-state Markov decision process. The regret \rho after T rounds is defined as the difference between the reward sum associated with an optimal strategy and the sum of the collected rewards: \rho = T \mu^* - \sum_{t=1}^T \widehat{r}_t, where \mu^* is the maximal reward mean, \mu^* = \max_k \{\mu_k\}, and \widehat{r}_t is the reward at time t. A strategy whose average regret per round \rho / T tends to zero with probability 1 when the number of played rounds tends to infinity is a zero-regret strategy. Intuitively, zero-regret strategies are guaranteed to converge to a (not necessarily unique) optimal strategy if enough rounds are played.
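
The following is a minimal simulation sketch of these definitions in Python; the three Gaussian reward distributions, their means, and the uniformly random placeholder strategy are assumptions made purely for illustration, not part of the standard model.

    import random

    # A bandit as a set of K reward distributions; here each R_i is Gaussian
    # with mean mu_i (the means are unknown to the gambler).
    means = [0.2, 0.5, 0.9]          # assumed values of mu_1, ..., mu_K
    mu_star = max(means)             # maximal reward mean mu*

    def pull(arm):
        """Draw one reward from the distribution of the chosen lever."""
        return random.gauss(means[arm], 1.0)

    T = 1000                         # number of rounds played
    collected = 0.0
    for t in range(T):
        arm = random.randrange(len(means))   # placeholder strategy: pull a lever uniformly at random
        collected += pull(arm)

    # Regret after T rounds: rho = T * mu* - sum of the collected rewards.
    regret = T * mu_star - collected
    print(regret)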

Variations

Another formulation of the multi-armed bandit has each arm representing an independent Markov machine. Each time a particular arm is played, the state of that machine advances to a new one, chosen according to the Markov state evolution probabilities. There is also a reward that depends on the current state of the machine.
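
A minimal sketch of this Markovian formulation in Python is given below; the two arms, their two-state transition matrices, and the state rewards are arbitrary values assumed for the example.

    import random

    class MarkovArm:
        """An arm whose internal state advances only when the arm is played."""
        def __init__(self, transitions, rewards):
            self.transitions = transitions  # transitions[s][s2] = P(next state s2 | state s)
            self.rewards = rewards          # reward received when the arm is played in each state
            self.state = 0

        def play(self):
            reward = self.rewards[self.state]
            # Advance the state according to the Markov state evolution probabilities.
            states = range(len(self.transitions[self.state]))
            self.state = random.choices(states, weights=self.transitions[self.state])[0]
            return reward

    # Two arms, each a two-state Markov chain (all numbers assumed for illustration).
    arms = [MarkovArm([[0.9, 0.1], [0.5, 0.5]], rewards=[1.0, 0.0]),
            MarkovArm([[0.3, 0.7], [0.6, 0.4]], rewards=[0.0, 2.0])]
    total = sum(arms[random.randrange(len(arms))].play() for _ in range(100))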

In a generalisation called the Restless Bandit Problem, the states of non-played arms can also evolve over time.

Whittle has also discussed systems in which the number of choices (about which arm to play) increases over time: the arm-acquiring bandit.

Common bandit strategies

Many strategies exist that provide an approximate solution to the bandit problem; they can be grouped into the three broad categories detailed below.

Semi-uniform strategies

Semi-uniform strategies were the earliest (and simplest) strategies discovered to approximately solve the bandit problem. All these strategies have in common a greedy behaviour in which the best lever (based on previous observations) is always pulled, except when a (uniformly) random action is taken.

  • Epsilon-greedy strategy: The best lever is selected for a proportion 1 − ε of the trials, and another lever is randomly selected (with uniform probability) for a proportion ε. A typical parameter value might be ε = 0.1, but this can vary widely depending on circumstances and predilections. A sketch of this strategy is given after this list.
  • Epsilon-first strategy: A pure exploration phase is followed by a pure exploitation phase. For N trials in total, the exploration phase occupies εN trials and the exploitation phase (1 − ε)N trials. During the exploration phase, a lever is randomly selected (with uniform probability); during the exploitation phase, the best lever is always selected.
  • Epsilon-decreasing strategy: Similar to the epsilon-greedy strategy, except that the value of ε decreases as the experiment progresses, resulting in highly explorative behaviour at the start and highly exploitative behaviour at the finish.
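
The sketch below, in Python, illustrates the epsilon-greedy strategy from the list above; the ε value, the number of levers, and the Bernoulli reward model used in the usage example are assumptions chosen for illustration. Epsilon-first and epsilon-decreasing differ only in how ε is scheduled over the trials.

    import random

    def epsilon_greedy(pull, K, T, epsilon=0.1):
        """Run the epsilon-greedy strategy for T rounds on a K-armed bandit.

        `pull(arm)` must return one sampled reward for the chosen lever.
        """
        counts = [0] * K          # number of times each lever has been pulled
        estimates = [0.0] * K     # running estimate of each lever's mean reward
        total = 0.0
        for _ in range(T):
            if random.random() < epsilon:
                arm = random.randrange(K)                        # explore: uniform random lever
            else:
                arm = max(range(K), key=lambda a: estimates[a])  # exploit: current best lever
            reward = pull(arm)
            counts[arm] += 1
            # Incremental update of the sample mean for the chosen lever.
            estimates[arm] += (reward - estimates[arm]) / counts[arm]
            total += reward
        return total, estimates

    # Usage with three assumed Bernoulli levers of success probabilities 0.3, 0.5, 0.7.
    probs = [0.3, 0.5, 0.7]
    total, estimates = epsilon_greedy(lambda a: float(random.random() < probs[a]), K=3, T=10000)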

Probability matching strategies

Probability matching strategies reflect the idea that the number of pulls for a given lever should match its actual probability of being the optimal lever.
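
Thompson sampling is a well-known strategy of this type: each lever's posterior is sampled once per round and the lever whose sample is largest is pulled, so that the pull frequency of a lever roughly matches the posterior probability that it is optimal. The Python sketch below assumes Bernoulli (0/1) rewards with a Beta(1, 1) prior on each lever; these modelling choices are assumptions made for the example.

    import random

    def thompson_sampling(pull, K, T):
        """Probability matching via Thompson sampling for Bernoulli (0/1) rewards."""
        # Beta(1, 1) prior on each lever's success probability (an assumed choice).
        successes = [1] * K
        failures = [1] * K
        for _ in range(T):
            # Sample one plausible success probability per lever from its posterior,
            # then pull the lever whose sample is largest.
            samples = [random.betavariate(successes[a], failures[a]) for a in range(K)]
            arm = max(range(K), key=lambda a: samples[a])
            reward = pull(arm)           # expected to be 0 or 1
            successes[arm] += reward
            failures[arm] += 1 - reward
        return successes, failures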

Pricing strategies

Pricing strategies establish a price for each lever. The lever with the highest price is always pulled.
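
One concrete way to set such a price is an index of the UCB1 type, in which the price of a lever is its estimated mean reward plus an exploration bonus that shrinks as the lever is pulled more often. The Python sketch below uses the standard UCB1 index as an illustrative choice of price; it is not the only possible pricing rule.

    import math

    def ucb1(pull, K, T):
        """Index ("pricing") strategy: always pull the lever with the highest price."""
        counts = [0] * K
        estimates = [0.0] * K
        # Pull every lever once so that each has a defined price.
        for arm in range(K):
            estimates[arm] = pull(arm)
            counts[arm] = 1
        for t in range(K, T):
            # Price of a lever = estimated mean + exploration bonus (UCB1 index).
            prices = [estimates[a] + math.sqrt(2 * math.log(t + 1) / counts[a])
                      for a in range(K)]
            arm = max(range(K), key=lambda a: prices[a])
            reward = pull(arm)
            counts[arm] += 1
            estimates[arm] += (reward - estimates[arm]) / counts[arm]
        return estimates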

References

  • H. Robbins. Some Aspects of the Sequential Design of Experiments. Bulletin of the American Mathematical Society, 58:527–535, 1952.
  • Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. (available online)
  • Bandit project (bandit.sourceforge.net), an open source implementation of many bandit strategies.
  • S. Dayanik, W. Powell, and K. Yamazaki. Index policies for discounted bandit problems with availability constraints. 2007.
  • Peter Whittle. Arm-acquiring bandits. Ann. Probab., 9:284–292, 1981. (available online)
  • Peter Whittle. Restless bandits: Activity allocation in a changing world. Appl. Prob., 25(A):287–298, 1988.
  • Sudipto Guha, Kamesh Munagala, and Peng Shi. On Index Policies for Restless Bandit Problems. 2007. arXiv:0711.3861v1
  • Warren B. Powell. Approximate Dynamic Programming. John Wiley and Sons, New York, 2007. (Chapter 10)

See also