Multi-armed bandit
A multi-armed bandit, also sometimes called a K-armed bandit, is a simple machine learning problem based on an analogy with a traditional slot machine (a one-armed bandit), but with more than one lever. When pulled, each lever provides a reward drawn from a distribution associated with that specific lever. The objective of the gambler is to maximize the sum of rewards collected through iterative pulls. It is classically assumed that the gambler has no initial knowledge about the levers, but through repeated trials can learn to focus on the most rewarding ones.
Empirical motivation
The multi-armed bandit problem, originally described by Robbins in 1952, is a simple model of an agent that simultaneously attempts to acquire new knowledge and to optimize its decisions based on existing knowledge. Practical examples include clinical trials, where the effects of different experimental treatments need to be investigated while minimizing patient losses, and adaptive routing for minimizing delays in a network. The questions arising in these cases concern balancing reward maximization based on the knowledge already acquired against attempting new actions to further increase that knowledge. This is known as the exploitation vs. exploration tradeoff in reinforcement learning.
The multi-armed bandit model
The multi-armed bandit (bandit for short) can be seen as a set of real distributions B = {R_1, ..., R_K}, each distribution being associated with the rewards delivered by one of the K levers. Let μ_1, ..., μ_K be the mean values associated with these reward distributions. The gambler iteratively plays one lever per round and observes the associated reward. The objective is to maximize the sum of the collected rewards. The horizon H is the number of rounds that remain to be played. The bandit problem is formally equivalent to a one-state Markov decision process. The regret ρ after T rounds is defined as the difference between the reward sum associated with an optimal strategy and the sum of the collected rewards: ρ = Tμ* − Σ_{t=1}^{T} r_t, where μ* is the maximal reward mean, μ* = max_k{μ_k}, and r_t is the reward at time t. A strategy whose average regret per round ρ/T tends to zero with probability 1 when the number of played rounds tends to infinity is a zero-regret strategy. Intuitively, zero-regret strategies are guaranteed to converge to an optimal strategy, not necessarily unique, if enough rounds are played.
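As a concrete illustration of this model, here is a minimal Python sketch of a K-armed bandit with Bernoulli reward distributions together with the regret computation defined above; the Bernoulli assumption and the names BernoulliBandit and regret are illustrative choices, not part of the original formulation.

    import random

    class BernoulliBandit:
        """K-armed bandit whose k-th lever pays 1 with a fixed hidden probability."""
        def __init__(self, means):
            self.means = means            # hidden success probabilities mu_1, ..., mu_K
            self.best_mean = max(means)   # mu*, used only to measure regret

        def pull(self, k):
            """Play lever k and return a 0/1 reward drawn from its distribution."""
            return 1 if random.random() < self.means[k] else 0

    def regret(bandit, rewards):
        """Regret after T rounds: T * mu* minus the sum of the collected rewards."""
        return len(rewards) * bandit.best_mean - sum(rewards)

    # Example: pull levers uniformly at random for 1000 rounds and measure the regret.
    bandit = BernoulliBandit([0.2, 0.5, 0.7])
    rewards = [bandit.pull(random.randrange(3)) for _ in range(1000)]
    print(regret(bandit, rewards))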
Common bandit strategies
Many strategies are known to solve the bandit problem, and can be put into the three broad categories detailed below.
Semi-uniform strategies
Semi-uniform strategies were the earliest (and simplest) strategies discovered to solve the bandit problem. All these strategies share a greedy behaviour: the lever believed best on the basis of previous observations is always pulled, except when a (uniformly) random action is taken.
- Epsilon-greedy strategy: The best lever is selected for a proportion 1 − ε of the trials, and another lever is randomly selected (with uniform probability) for a proportion ε. A typical parameter value might be ε = 0.1, but this can vary widely depending on circumstances and predilections. (A code sketch of this strategy follows the list.)
- Epsilon-first strategy: A pure exploration phase is followed by a pure exploitation phase. For N trials in total, the exploration phase occupies εN trials and the exploitation phase (1 − ε)N trials. During the exploration phase a lever is randomly selected (with uniform probability); during the exploitation phase the best lever is always selected.
- Epsilon-decreasing strategy: Similar to the epsilon-greedy strategy, except that the value of ε decreases as the experiment progresses, resulting in highly explorative behaviour at the start and highly exploitative behaviour at the finish.
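The following is a minimal sketch of the epsilon-greedy strategy described above, reusing the BernoulliBandit interface from the earlier sketch; the default ε = 0.1 and the incremental-mean bookkeeping are illustrative choices.

    import random

    def epsilon_greedy(bandit, K, T, eps=0.1):
        """Pull the empirically best lever with probability 1 - eps,
        and a uniformly random lever with probability eps."""
        counts = [0] * K         # number of pulls per lever
        estimates = [0.0] * K    # empirical mean reward per lever
        rewards = []
        for _ in range(T):
            if random.random() < eps:
                k = random.randrange(K)                        # explore
            else:
                k = max(range(K), key=lambda i: estimates[i])  # exploit current best estimate
            r = bandit.pull(k)
            counts[k] += 1
            estimates[k] += (r - estimates[k]) / counts[k]     # incremental mean update
            rewards.append(r)
        return rewards

The epsilon-first and epsilon-decreasing strategies differ only in how ε is handled: the former sets ε to 1 for the first εN trials and to 0 afterwards, while the latter replaces the constant eps with a schedule that shrinks as the rounds progress.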
Probability matching strategies
Probability matching strategies reflect the idea that the number of pulls for a given lever should match its actual probability of being the optimal lever.
- Softmax strategy: A lever is pulled with probability proportional to the exponential of its estimated mean reward divided by a temperature parameter, so that better-looking levers are pulled more often while every lever retains a nonzero probability of being selected (see the sketch after this list).
- Exp3 strategy: An exponential-weighting strategy designed for the adversarial setting, in which no statistical assumption is made about how the rewards are generated.
- Gaussmatch strategy: to be completed.
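As an illustration of this family, the following sketch implements softmax (Boltzmann) action selection, which pulls each lever with probability proportional to the exponential of its estimated mean reward; the default temperature of 0.1 is an illustrative choice.

    import math
    import random

    def softmax_pick(estimates, temperature=0.1):
        """Pick a lever index with probability proportional to exp(estimate / temperature)."""
        best = max(estimates)
        # Subtracting the maximum before exponentiating keeps the weights numerically stable.
        weights = [math.exp((e - best) / temperature) for e in estimates]
        return random.choices(range(len(estimates)), weights=weights, k=1)[0]

A high temperature makes the choice nearly uniform (pure exploration), while a temperature close to zero makes it nearly greedy (pure exploitation).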
Pricing strategies
Pricing strategies establish a price for each lever. The lever with the highest price is always pulled.
- Interval estimation strategy: The price of a lever is an optimistic estimate of its mean reward, typically the upper bound of a confidence interval around the empirical mean, so that rarely pulled levers keep a high price and are eventually re-explored (see the sketch after this list).
- Poker strategy: to be completed.
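As an illustration of a pricing strategy, the following sketch prices each lever by an optimistic upper bound on its empirical mean, in the spirit of interval estimation; the particular confidence width used here is the well-known UCB1 bonus and is an illustrative choice.

    import math

    def lever_price(estimate, pulls, total_pulls):
        """Optimistic price: the empirical mean plus a confidence width
        that shrinks as the lever is pulled more often."""
        if pulls == 0:
            return float("inf")  # untried levers get an infinite price, so each is tried once
        return estimate + math.sqrt(2.0 * math.log(total_pulls) / pulls)

    # Each round, the lever with the highest price is pulled, and its empirical
    # mean and pull count are updated from the observed reward.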
References
H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527–535, 1952.
Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. (Available online.)
Bandit project (bandit.sourceforge.net), an open-source implementation of many bandit strategies.