Q-learning

Q-learning is a reinforcement learning technique that works by learning an action-value function giving the expected utility of taking a given action in a given state and following a fixed policy thereafter. One of the strengths of Q-learning is that it can compare the expected utility of the available actions without requiring a model of the environment. A more recent variation called delayed Q-learning has shown substantial improvements, bringing probably approximately correct (PAC) learning bounds to Markov decision processes.[1]

Algorithm

The problem model consists of an agent, a set of states S, and a set of actions per state A. By performing an action a \in A, the agent can move from state to state. Each state provides the agent with a reward (a real or natural number). The goal of the agent is to maximize its total reward. It does this by learning which action is optimal for each state.

The algorithm therefore has a function which calculates the Quality of a state-action combination:

Q: S \times A \to \mathbb{R}
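In the simplest case this function can be stored as a lookup table. Below is a minimal Python sketch, assuming states and actions are hashable objects; the names used here are illustrative, not part of the algorithm:

    from collections import defaultdict

    # Tabular Q-function: maps (state, action) pairs to estimated values.
    # The default of 0.0 stands in for the designer-chosen initial value.
    Q = defaultdict(float)

    def q_value(state, action):
        """Return the current estimate of Q(s, a)."""
        return Q[(state, action)]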

Before learning has started, Q returns a fixed value, chosen by the designer. Then, each time the agent is given a reward (the state has changed), a new value is calculated for the combination of a state s from S and an action a from A. The core of the algorithm is a simple value iteration update. It takes the old value and makes a correction based on the new information.

Q(s_t,a_t) \leftarrow \underbrace{Q(s_t,a_t)}_{\rm old~value} + \underbrace{\alpha_t(s_t,a_t)}_{\rm learning~rate} \times \left[ \overbrace{\underbrace{R(s_{t})}_{\rm reward} + \underbrace{\gamma}_{\rm discount~factor} \underbrace{\max_{a_{t+1}}Q(s_{t+1}, a_{t+1})}_{\rm max~future~value}}^{\rm learned~value} - \underbrace{Q(s_t,a_t)}_{\rm old~value}\right]

where R(s_{t}) is the reward observed from s_{t}, and \alpha_t(s_t, a_t) (with 0 < \alpha \le 1) is the learning rate, which may be the same for all pairs. The discount factor \gamma satisfies 0 \le \gamma < 1.

The above formula is equivalent to:

Q(s_t,a_t) \leftarrow Q(s_t,a_t)(1-\alpha_t(s_t,a_t)) + \alpha_t(s_t,a_t) [R(s_{t}) + \gamma \max_{a_{t+1}}Q(s_{t+1}, a_{t+1})]

An episode of the algorithm ends when state s_{t+1} is a final state (or "absorbing state").

Note that for all final states s_f, Q(s_f, a) is never updated and thus retains its initial value.
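
Putting the pieces together, the following is a rough Python sketch of a single training episode, reusing a table like the one above. The environment interface (env.reset(), env.step(action) returning (next_state, reward, done)) and the epsilon-greedy exploration scheme are illustrative assumptions, not prescribed by Q-learning itself:

    import random

    def run_episode(env, actions, Q, alpha=0.1, gamma=0.9, epsilon=0.1):
        """Run one Q-learning episode on a hypothetical environment.

        env.reset() is assumed to return the initial state, and
        env.step(a) to return (next_state, reward, done); these are
        illustrative assumptions, not part of the algorithm itself.
        """
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy action selection (one common exploration scheme).
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])

            next_state, reward, done = env.step(action)

            # Value of the best action in the next state; zero when the next
            # state is final (absorbing), since its Q-values are never updated.
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in actions)

            # Core update: old value plus learning-rate-weighted correction.
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

            state = next_state
        return Q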

Influence of variables on the algorithm

Learning rate

The learning rate determines to what extent the newly acquired information will override the old information. A factor of 0 will make the agent not learn anything, while a factor of 1 would make the agent consider only the most recent information.

Discount factor

The discount factor determines the importance of future rewards. A factor of 0 will make the agent "opportunistic" by only considering current rewards, while a factor approaching 1 will make it strive for a long-term high reward. If the discount factor meets or exceeds 1, the Q values may diverge.

Implementation

Q-learning at its simplest uses tables to store data. This quickly becomes impractical as the complexity of the system being monitored or controlled increases. One answer to this problem is to use an (adapted) artificial neural network as a function approximator, as demonstrated by Tesauro in his temporal difference learning research on backgammon. An adaptation of the standard neural network is required because the target value (from which the error signal is generated) is itself generated at run time.
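
As a rough illustration of the idea (not Tesauro's actual network), the sketch below uses a linear function approximator over hand-built features; the feature encodings and dimensions are hypothetical. The key point is that the training target is computed at run time from the approximator's own output:

    import numpy as np

    def q_approx(weights, features):
        """Approximate Q(s, a) as a linear function of a feature vector."""
        return float(np.dot(weights, features))

    def td_update(weights, phi_sa, reward, next_features, gamma=0.9, alpha=0.01):
        """One gradient step toward the bootstrapped target.

        phi_sa is the feature vector of the current (state, action) pair and
        next_features is a list of feature vectors, one per action available
        in the next state; both are hypothetical, problem-specific encodings.
        """
        # The target is generated at run time from the approximator itself.
        target = reward + gamma * max(q_approx(weights, f) for f in next_features)
        error = target - q_approx(weights, phi_sa)
        # For a linear approximator the gradient w.r.t. the weights is phi_sa.
        return weights + alpha * error * phi_sa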

Early study

Q-learning was first introduced by Watkins[2] in 1989.

The convergence proof was presented later by Watkins and Dayan[3] in 1992.

References

  1. ^ Alexander L. Strehl, Lihong Li, Eric Wiewiora, John Langford, and Michael L. Littman. PAC model-free reinforcement learning. In Proc. 23rd ICML, pages 881–888, 2006.
  2. ^ Watkins, C.J.C.H., (1989), Learning from Delayed Rewards. Ph.D. thesis, Cambridge University.
  3. ^ Watkins, C.J.C.H. and Dayan, P., (1992), 'Q-learning', Machine Learning, 8:279–292.