Reinforcement learning

Reinforcement learning refers to a class of problems in machine learning in which an agent explores an environment, perceiving its current state and taking actions. The environment, in return, provides a reward (which can be positive or negative). Reinforcement learning algorithms attempt to find a policy that maximizes the agent's cumulative reward over the course of the problem.

The environment is typically formulated as a finite-state Markov decision process (MDP), and reinforcement learning algorithms for this context are closely related to dynamic programming techniques. State transition probabilities and reward probabilities in the MDP are typically stochastic but stationary over the course of the problem.
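
As a concrete illustration, a finite-state MDP of this kind can be written down directly as its transition probabilities and expected rewards. The two-state example below, including its state and action names and all numbers, is a hypothetical sketch invented for illustration only.

```python
import random

# P[(s, a)] maps each next state s' to P(s' | s, a); stochastic but stationary.
P = {
    ("low", "wait"):    {"low": 0.9, "high": 0.1},
    ("low", "search"):  {"low": 0.6, "high": 0.4},
    ("high", "wait"):   {"high": 0.8, "low": 0.2},
    ("high", "search"): {"high": 0.7, "low": 0.3},
}

# R[(s, a)] is the expected immediate reward for taking action a in state s.
R = {
    ("low", "wait"): 0.0,   ("low", "search"): 1.0,
    ("high", "wait"): 0.5,  ("high", "search"): 2.0,
}

def step(state, action):
    """Sample one transition of the MDP: returns (next state, reward)."""
    probs = P[(state, action)]
    next_state = random.choices(list(probs), weights=list(probs.values()))[0]
    return next_state, R[(state, action)]

print(step("low", "search"))
```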

Reinforcement learning differs from the supervised learning problem in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected. Further, there is a focus on on-line performance, which involves finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge). The exploration vs. exploitation tradeoff in reinforcement learning has been mostly studied through the multi-armed bandit problem.
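
The tradeoff is easiest to see in the multi-armed bandit setting itself: the agent must balance pulling the arm it currently believes is best (exploitation) against trying the others (exploration). Below is a minimal epsilon-greedy sketch on a hypothetical Bernoulli bandit; the arm probabilities, the value of epsilon, and the number of pulls are illustrative assumptions, not values from the article.

```python
import random

# Hypothetical Bernoulli bandit: each arm pays 1 with the given probability.
TRUE_PROBS = [0.2, 0.5, 0.7]
EPSILON = 0.1            # fraction of pulls spent exploring at random
PULLS = 10_000

counts = [0] * len(TRUE_PROBS)     # pulls per arm
values = [0.0] * len(TRUE_PROBS)   # running estimate of each arm's mean reward

for _ in range(PULLS):
    if random.random() < EPSILON:
        arm = random.randrange(len(TRUE_PROBS))                      # explore
    else:
        arm = max(range(len(TRUE_PROBS)), key=lambda a: values[a])   # exploit
    reward = 1.0 if random.random() < TRUE_PROBS[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]              # incremental mean

print("estimated arm values:", [round(v, 2) for v in values])
```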

Formally, the basic reinforcement learning model consists of:

  1. a set of environment states S;
  2. a set of actions A; and
  3. a set of scalar "rewards" in ℝ.

At each time t, the agent perceives its state s_t ∈ S and the set of possible actions A(s_t). It chooses an action a ∈ A(s_t) and receives from the environment the new state s_{t+1} and a reward r_{t+1}. Based on these interactions, the reinforcement learning agent must develop a policy π: S → A which maximizes the quantity R = r_0 + r_1 + ... + r_n for MDPs which have a terminal state, or the quantity R = Σ_t γ^t r_t for MDPs without terminal states (where γ is some "future reward" discounting factor between 0.0 and 1.0).
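
In code, one episode of this interaction loop, together with the discounted return R = Σ_t γ^t r_t, might look like the sketch below. The environment dynamics, the placeholder policy, and the fixed horizon are hypothetical stand-ins; only the accumulation of the return follows the definition above.

```python
import random

GAMMA = 0.9   # "future reward" discounting factor

def policy(state):
    """Placeholder policy pi: S -> A (here simply a random choice of two actions)."""
    return random.choice(["left", "right"])

def env_step(state, action):
    """Hypothetical environment: returns the new state and a reward."""
    next_state = (state + 1) % 5
    reward = 1.0 if action == "right" else 0.0
    return next_state, reward

# One episode: accumulate the discounted return R = sum_t gamma^t * r_t.
state, discounted_return = 0, 0.0
for t in range(100):                 # a fixed horizon stands in for a terminal state
    action = policy(state)
    state, reward = env_step(state, action)
    discounted_return += (GAMMA ** t) * reward

print("discounted return for this episode:", round(discounted_return, 3))
```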

Thus, reinforcement learning is particularly well suited to problems which include a long-term versus short-term reward tradeoff. It has been applied successfully to various problems, including robot control, elevator scheduling, telecommunications, backgammon and chess.

Algorithms

After we have defined an appropriate return function to be maximised, we need to specify the algorithm that will be used to find the policy with the maximum return. There are two main approaches, the value function approach and the direct approach.

The direct approach entails two steps: (a) for each possible policy, sample returns while following it; (b) choose the policy with the largest expected return. One problem with this is that the number of policies can be extremely large, or even infinite. Another is that returns might be stochastic, in which case a large number of samples will be required to accurately estimate the return of each policy. The direct approach is the basis for the algorithms used in Evolutionary robotics.
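
A minimal sketch of the direct approach under these assumptions: a small finite set of candidate policies and a simulator that returns one noisy sampled return per call. Both the policy names and the simulator are hypothetical stand-ins.

```python
import random

POLICIES = ["cautious", "greedy", "random"]   # hypothetical candidate policies
SAMPLES_PER_POLICY = 200                      # stochastic returns need many samples

def sample_return(policy_id):
    """Hypothetical simulator: one noisy sampled return while following policy_id."""
    true_value = {"cautious": 1.0, "greedy": 2.5, "random": 0.5}[policy_id]
    return true_value + random.gauss(0.0, 1.0)

# (a) sample returns while following each policy, (b) pick the largest average.
estimates = {
    p: sum(sample_return(p) for _ in range(SAMPLES_PER_POLICY)) / SAMPLES_PER_POLICY
    for p in POLICIES
}
best = max(estimates, key=estimates.get)
print(estimates, "-> chosen policy:", best)
```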

The problems with the direct approach might be ameliorated if we assume some structure in the problem and somehow allow samples generated from one policy to influence the estimates made for another. Value function approaches do this by only maintaining a set of estimates of expected returns for one policy π (usually either the current or the optimal one). In such approaches one attempts to estimate either the expected return starting from state s and following π thereafter,

V(s) = E[R|s,π],

or the expected return when taking action a in state s and following π thereafter,

Q(s,a) = E[R|s,a,π].

If someone gives us Q for the optimal policy, we can always choose optimal actions by simply choosing the action with the highest value at each state. In order to do this using V, we must either have a model of the environment, in the form of probabilities P(s'|s,a), which allow us to calculate Q simply through

Q(s,a) = Σ_{s'} V(s') P(s'|s,a),

or we can employ so-called Actor-Critic methods, in which the model is split into two parts: the critic, which maintains the state value estimate V, and the actor, which is responsible for choosing the appropriate actions at each state.
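
For the model-based case above, the sum over next states is a one-line expectation once V and P(s'|s,a) are stored explicitly. The sketch below is a direct transcription of that formula; the particular states, actions, and numbers are made up for illustration.

```python
# Q(s, a) = sum over s' of V(s') * P(s' | s, a), using a known transition model.

V = {"low": 1.0, "high": 2.0}                       # hypothetical value estimates V(s)
P = {                                               # P[(s, a)][s'] = P(s' | s, a)
    ("low", "search"): {"low": 0.6, "high": 0.4},
    ("low", "wait"):   {"low": 0.9, "high": 0.1},
}

def q_from_v(s, a):
    """Expected value of the next state when taking action a in state s."""
    return sum(V[s2] * p for s2, p in P[(s, a)].items())

print(q_from_v("low", "search"))   # 0.6*1.0 + 0.4*2.0 = 1.4
print(q_from_v("low", "wait"))     # 0.9*1.0 + 0.1*2.0 = 1.1
```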

Given a fixed policy π, estimating E[R|·] for γ = 0 is trivial, as one only has to average the immediate rewards. The most obvious way to do this for γ > 0 is to average the total return after each state. However, this type of Monte Carlo sampling requires the MDP to terminate.
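
A minimal Monte Carlo sketch of this averaging, assuming terminating episodes recorded as lists of (state, reward) pairs and a first-visit convention (the return following the first visit to each state is averaged). The sample episodes and the value of γ are fabricated for illustration.

```python
from collections import defaultdict

GAMMA = 0.9

def first_visit_returns(episode):
    """Map each state to the discounted return following its first visit.

    Each pair in the episode is (state visited, reward received on leaving it);
    the episode is assumed to end at a terminal state."""
    returns, G = {}, 0.0
    for state, reward in reversed(episode):   # walk backwards, accumulating the return
        G = reward + GAMMA * G
        returns[state] = G                    # earlier visits overwrite: first-visit value wins
    return returns

# Fabricated terminating episodes generated by some fixed policy pi.
episodes = [
    [("A", 0.0), ("B", 1.0), ("A", 0.0), ("B", 5.0)],
    [("B", 1.0), ("A", 0.0), ("B", 5.0)],
]

totals, counts = defaultdict(float), defaultdict(int)
for ep in episodes:
    for s, G in first_visit_returns(ep).items():
        totals[s] += G
        counts[s] += 1

V = {s: totals[s] / counts[s] for s in totals}
print(V)   # Monte Carlo estimate of V under the fixed policy
```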

Thus, carrying out this estimation for γ > 0 in the general case does not seem obvious. In fact, it is quite simple once one realises that the expectation of R forms a recursive Bellman equation: E[R|s_t] = r_t + γ E[R|s_{t+1}].

By replacing those expectations with our estimates, V, and performing gradient descent with a squared error cost function, we obtain the temporal difference learning algorithm TD(0). In the simplest case, the set of states and actions are both discrete and we maintain tabular estimates for each state. Similar state-action pair methods are SARSA and Q-Learning. All methods feature extensions whereby some approximating architecture is used, though in some cases convergence is not guaranteed. The estimates are usually updated with some form of gradient descent, though there have been recent developments with least squares methods for the linear approximation case.
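
A tabular TD(0) sketch under the simplest assumptions named above: discrete states, a fixed behaviour policy, and value estimates updated toward the bootstrapped target r + γV(s'). The chain environment, the random stand-in policy, and the constants are hypothetical, not from the article.

```python
import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9          # step size and discount factor

def policy(state):
    """Fixed behaviour policy pi (a uniformly random stand-in)."""
    return random.choice([-1, +1])

def env_step(state, action):
    """Hypothetical chain: move left/right on states 0..10, reward at the right end."""
    next_state = min(max(state + action, 0), 10)
    reward = 1.0 if next_state == 10 else 0.0
    return next_state, reward

V = defaultdict(float)           # tabular state-value estimates

for episode in range(500):
    state = 5
    for t in range(50):
        action = policy(state)
        next_state, reward = env_step(state, action)
        # TD(0): move V(s) toward the bootstrapped target r + gamma * V(s').
        td_error = reward + GAMMA * V[next_state] - V[state]
        V[state] += ALPHA * td_error
        state = next_state

print({s: round(V[s], 2) for s in sorted(V)})
```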

The above methods not only all converge to the correct estimates for a fixed policy, but can also be used to find the optimal policy. This is usually done by following a policy π that is somehow derived from the current value estimates, i.e. by choosing the action with the highest evaluation most of the time, while still occasionally taking random actions in order to explore the space. Proofs for convergence to the optimal policy also exist for the algorithms mentioned above, under certain conditions. However, all those proofs only demonstrate asymptotic convergence and little is known theoretically about the behaviour of RL algorithms in the small-sample case, apart from within very restricted settings.
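
The sketch below combines tabular Q-Learning with such an exploration scheme: an epsilon-greedy choice that takes the highest-valued action most of the time and a random action otherwise. The small chain environment and all constants are illustrative assumptions.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = [-1, +1]
GOAL = 5                         # terminal state of a small hypothetical chain

def env_step(state, action):
    """Hypothetical chain: states 0..GOAL, reward only on reaching GOAL."""
    next_state = min(max(state + action, 0), GOAL)
    return next_state, (1.0 if next_state == GOAL else 0.0)

Q = defaultdict(float)           # tabular estimates Q[(state, action)]

def epsilon_greedy(state):
    """Mostly the highest-valued action, occasionally a random one."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(500):
    state = 0
    for t in range(100):         # cap episode length
        action = epsilon_greedy(state)
        next_state, reward = env_step(state, action)
        # Q-Learning: move Q(s,a) toward r + gamma * max over a' of Q(s',a').
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state
        if state == GOAL:
            break

print("greedy action in state 0:", max(ACTIONS, key=lambda a: Q[(0, a)]))
```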

An alternative method to find the optimal policy is to search directly in policy space. Policy space methods define the policy as a parametrised function π(s,θ) with parameters θ. Commonly, a gradient method is employed to adjust the parameters. However, the application of gradient methods is not trivial, since no gradient information is assumed. Rather, the gradient itself must be estimated from noisy samples of the return. Since this greatly increases the computational cost, it can be advantageous to use a more powerful gradient method than steepest gradient descent. Policy space gradient methods have received a lot of attention in the last 5 years and have now reached a relatively mature stage, but they remain an active field. There are many other approaches, such as simulated annealing, that can be taken to explore the policy space. Work on these other techniques is less well developed.
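
One simple way to estimate the gradient from noisy samples of the return is by finite differences: perturb each parameter of π(s,θ) in turn, compare average sampled returns, and step θ uphill. Everything below, including the sigmoid policy parametrisation, the toy one-step task, and the constants, is a hypothetical sketch rather than any particular published method.

```python
import math
import random

def policy_prob_right(state, theta):
    """Parametrised policy pi(s, theta): probability of choosing 'right' (a sigmoid)."""
    return 1.0 / (1.0 + math.exp(-(theta[0] * state + theta[1])))

def sample_return(theta, episodes=200):
    """Noisy estimate of the expected return of pi(., theta) on a toy one-step task."""
    total = 0.0
    for _ in range(episodes):
        state = random.uniform(-1.0, 1.0)
        went_right = random.random() < policy_prob_right(state, theta)
        # The toy task rewards 'right' in positive states and 'left' in negative ones.
        total += 1.0 if went_right == (state > 0) else 0.0
    return total / episodes

theta = [0.0, 0.0]
STEP, DELTA = 0.5, 0.1

for iteration in range(100):
    grad = []
    for i in range(len(theta)):
        bumped = list(theta)
        bumped[i] += DELTA
        # Finite-difference estimate of the i-th gradient component from sampled returns.
        grad.append((sample_return(bumped) - sample_return(theta)) / DELTA)
    theta = [t + STEP * g for t, g in zip(theta, grad)]

print("learned parameters:", [round(t, 2) for t in theta],
      "estimated return:", round(sample_return(theta), 2))
```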

Current research

Current research topics include: alternative representations (such as the Predictive State Representation approach), gradient descent in policy space, small-sample convergence results, algorithms and convergence results for partially observable MDPs, and modular and hierarchical reinforcement learning. Recently, reinforcement learning has been used in the domain of psychology to explain human learning and performance. In particular, it has been used in cognitive models that simulate human performance during problem solving and/or skill acquisition (e.g., Fu & Anderson, 2006).

References

Leslie Pack Kaelbling; Michael L. Littman; Andrew W. Moore, Reinforcement Learning: A Survey, Journal of Artificial Intelligence Research 4 (1996) pp. 237–285

Richard S. Sutton; Andrew G. Barto, Reinforcement Learning: An Introduction, MIT Press, 1998, ISBN 0262193981 (full text online)

Dimitri P. Bertsekas; John Tsitsiklis, Neuro-Dynamic Programming, Athena Scientific, 1996, ISBN 1886529108

Jan Peters; Sethu Vijayakumar; Stefan Schaal, Reinforcement Learning for Humanoid Robotics, IEEE-RAS International Conference on Humanoid Robots

Fu, W.-T.; Anderson, J. R. (2006). From Recurrent Choice to Skill Learning: A Reinforcement-Learning Model. Journal of Experimental Psychology: General, 135 (2), 184–206.
