In statistics, particularly in the design of sequential experiments, a multi-armed bandit takes its name from a traditional slot machine (one-armed bandit). The motivating applications in statistics consider multiple levers; when pulled, each lever provides a reward drawn from a distribution associated with that specific lever. The objective of the gambler is to maximize the sum of rewards earned through a sequence of lever pulls.[1][2]
In practice, multi-armed bandits have been used to model the problem of managing research projects in a large organization, such as a science foundation or a pharmaceutical company. Given a fixed budget, the problem is to allocate resources among the competing projects, whose properties are only partially known at present but may become better understood as time passes.[1][2]
In the early versions of the multi-armed bandit problem, the gambler has no initial knowledge about the levers. The crucial tradeoff the gambler faces at each trial is between "exploitation" of the lever that has the highest expected payoff and "exploration" to get more information about the expected payoffs of the other levers.
The multi-armed bandit is sometimes called a K-armed bandit or N-armed bandit.
The multi-armed bandit problem models an agent that simultaneously attempts to acquire new knowledge and to optimize its decisions based on existing knowledge, a situation that arises in many practical applications.
Such practical problems require balancing reward maximization based on the knowledge already acquired with attempting new actions to further increase knowledge. This is known as the exploitation vs. exploration tradeoff in reinforcement learning.
The model can also be used to control dynamic allocation of resources to different projects, answering the question "which project should I work on" given uncertainty about the difficulty and payoff of each possibility.
Originally considered by Allied scientists in World War II, it proved so intractable that it was proposed the problem be dropped over Germany so that German scientists could also waste their time on it.[4] The version of the problem now commonly analyzed was formulated by Herbert Robbins in 1952.
The multi-armed bandit (or just bandit for short) can be seen as a set of real distributions $B = \{R_1, \dots, R_K\}$, each distribution being associated with the rewards delivered by one of the $K$ levers. Let $\mu_1, \dots, \mu_K$ be the mean values associated with these reward distributions. The gambler iteratively plays one lever per round and observes the associated reward. The objective is to maximize the sum of the collected rewards. The horizon $H$ is the number of rounds that remain to be played. The bandit problem is formally equivalent to a one-state Markov decision process. The regret $\rho$ after $T$ rounds is defined as the difference between the reward sum associated with an optimal strategy and the sum of the collected rewards: $\rho = T\mu^* - \sum_{t=1}^{T} \hat{r}_t$, where $\mu^* = \max_k \{\mu_k\}$ is the maximal reward mean and $\hat{r}_t$ is the reward at time $t$. A strategy whose average regret per round $\rho/T$ tends to zero with probability 1 when the number of played rounds tends to infinity is a zero-regret strategy. Intuitively, zero-regret strategies are guaranteed to converge to an optimal strategy (not necessarily a unique one) if enough rounds are played.
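To make these definitions concrete, the following sketch simulates the regret of a deliberately naive policy over $T$ rounds; the Bernoulli reward distributions, the specific means and the uniformly random policy are assumptions of the illustration, not part of the model.

```python
import random

# Minimal sketch of the formal model above, assuming Bernoulli reward
# distributions and a uniformly random policy (neither is required by the model).
means = [0.2, 0.5, 0.7]      # hypothetical reward means mu_1 .. mu_K
T = 10_000
mu_star = max(means)         # mu*, the maximal reward mean

collected = 0.0
for t in range(T):
    k = random.randrange(len(means))                         # placeholder policy: pull a lever at random
    collected += 1.0 if random.random() < means[k] else 0.0  # observed reward r_t

regret = T * mu_star - collected    # rho = T*mu* - sum_t r_t
print(f"regret after {T} rounds: {regret:.1f}")
```

A better policy would drive the average regret per round toward zero as $T$ grows; the uniform policy above does not, since it keeps pulling suboptimal levers at a constant rate.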
Another formulation of the multi-armed bandit has each arm representing an independent Markov machine. Each time a particular arm is played, the state of that machine advances to a new one, chosen according to the Markov state evolution probabilities. There is a reward depending on the current state of the machine. In a generalisation called the "restless bandit problem", the states of non-played arms can also evolve over time.[5] There has also been discussion of systems where the number of choices (about which arm to play) increases over time.[6]
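A minimal sketch of a single arm under this formulation, assuming a two-state Markov chain with illustrative transition probabilities and state-dependent rewards; only the played arm changes state (in the restless variant, unplayed arms would evolve as well):

```python
import random

class MarkovArm:
    """One arm modeled as a Markov machine: playing it yields a reward that
    depends on its current state, after which the state advances according to
    the Markov evolution probabilities. The transition matrix and rewards
    below are illustrative assumptions."""
    def __init__(self, transitions, rewards, state=0):
        self.transitions = transitions   # transitions[s][s'] = P(next state s' | current state s)
        self.rewards = rewards           # rewards[s] = reward collected when played in state s
        self.state = state

    def play(self):
        reward = self.rewards[self.state]
        states = range(len(self.transitions))
        self.state = random.choices(states, weights=self.transitions[self.state])[0]
        return reward

arm = MarkovArm(transitions=[[0.9, 0.1], [0.5, 0.5]], rewards=[0.0, 1.0])
print(sum(arm.play() for _ in range(100)))   # total reward from playing this arm 100 times
```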
Computer science researchers have studied multi-armed bandits under worst-case assumptions, obtaining positive results for finite numbers of trials with both stochastic[7] and nonstochastic[8] arm payoffs.
Many strategies exist which provide an approximate solution to the bandit problem, and they can be put into the four broad categories detailed below.
Semi-uniform strategies were the earliest (and simplest) strategies discovered to approximately solve the bandit problem. All those strategies have in common a greedy behavior where the best lever (based on previous observations) is always pulled except when a (uniformly) random action is taken.
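A minimal sketch of the best-known semi-uniform strategy, epsilon-greedy; the exploration rate ε = 0.1 and the Bernoulli arms below are assumptions of the illustration:

```python
import random

def epsilon_greedy(arms, T, eps=0.1):
    """Semi-uniform sketch (epsilon-greedy): pull the empirically best lever,
    except that with probability eps a lever is chosen uniformly at random.
    `arms` is a list of zero-argument functions returning sampled rewards
    (an assumption of this sketch, not part of the general problem)."""
    counts = [0] * len(arms)
    sums = [0.0] * len(arms)
    total = 0.0
    for t in range(T):
        if t < len(arms):
            k = t                                   # play each lever once to initialise the estimates
        elif random.random() < eps:
            k = random.randrange(len(arms))         # explore: uniformly random lever
        else:
            k = max(range(len(arms)), key=lambda i: sums[i] / counts[i])  # exploit: best empirical mean
        r = arms[k]()
        counts[k] += 1
        sums[k] += r
        total += r
    return total

arms = [lambda p=p: 1.0 if random.random() < p else 0.0 for p in (0.2, 0.5, 0.7)]
print(epsilon_greedy(arms, T=10_000))
```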
Probability matching strategies reflect the idea that the number of pulls for a given lever should match its actual probability of being the optimal lever.
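The best-known probability matching strategy is Thompson sampling. A minimal sketch for Bernoulli arms with Beta(1, 1) priors (both assumptions of the illustration): each round, a plausible mean is sampled from each lever's posterior and the lever with the largest sample is pulled, so each lever is chosen with a probability that matches its posterior probability of being optimal.

```python
import random

def thompson_sampling(arms, T):
    """Probability matching sketch (Thompson sampling) for Bernoulli arms with
    Beta(1, 1) priors; `arms` is a list of zero-argument functions returning
    0 or 1 (assumptions of this illustration)."""
    alpha = [1] * len(arms)   # posterior Beta parameter: 1 + number of successes
    beta = [1] * len(arms)    # posterior Beta parameter: 1 + number of failures
    total = 0
    for _ in range(T):
        # Sample a plausible mean for each lever from its posterior,
        # then pull the lever whose sampled mean is largest.
        samples = [random.betavariate(alpha[i], beta[i]) for i in range(len(arms))]
        k = max(range(len(arms)), key=lambda i: samples[i])
        r = arms[k]()
        alpha[k] += r
        beta[k] += 1 - r
        total += r
    return total

arms = [lambda p=p: 1 if random.random() < p else 0 for p in (0.2, 0.5, 0.7)]
print(thompson_sampling(arms, T=10_000))
```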
Pricing strategies establish a price for each lever. The lever of highest price is always pulled.
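As a hedged illustration of a pricing strategy, the sketch below uses an optimistic index (the empirical mean plus an exploration bonus, as in UCB1) as the price of each lever; this particular choice of price is an assumption of the example, not the only pricing rule in the literature.

```python
import math
import random

def pricing_strategy(arms, T):
    """Pricing-style sketch: each round, assign every lever a 'price' and pull
    the highest-priced lever. Here the price is the UCB1 index (empirical mean
    plus an exploration bonus), an illustrative choice of pricing rule."""
    n = len(arms)
    counts = [0] * n
    sums = [0.0] * n
    total = 0.0
    for t in range(T):
        if t < n:
            k = t    # pull each lever once to initialise the estimates
        else:
            prices = [sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i]) for i in range(n)]
            k = max(range(n), key=lambda i: prices[i])   # pull the highest-priced lever
        r = arms[k]()
        counts[k] += 1
        sums[k] += r
        total += r
    return total

arms = [lambda p=p: 1.0 if random.random() < p else 0.0 for p in (0.2, 0.5, 0.7)]
print(pricing_strategy(arms, T=10_000))
```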
Strategies with ethical constraints minimize the assignment of any patient to an inferior arm ("physician's duty"). In a typical case, they minimize expected successes lost (ESL), that is, the expected number of favorable outcomes that were missed because of assignment to an arm later proved to be inferior. Another version minimizes resources wasted on any inferior, more expensive, treatment.[3]