Q-learning

Q-learning is a reinforcement learning technique that works by learning an action-value function giving the expected utility of taking a given action in a given state and following the optimal policy thereafter. A strength of Q-learning is that it can compare the expected utility of the available actions without requiring a model of the environment. A recent variant called delayed Q-learning has shown substantial improvements, bringing PAC (probably approximately correct) bounds to learning in Markov decision processes.

Algorithm

The core of the algorithm is a simple value-iteration update. For each state s in the state set S and each action a in the action set A, the estimate of the expected discounted reward, Q(s, a), is updated with the following expression:

Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha_t(s_t,a_t) [r_t + \gamma \max_{a}Q(s_{t+1}, a) - Q(s_t,a_t)]

where r_t is the reward observed at time t, \alpha_t(s,a) is the learning rate (0 ≤ \alpha_t(s,a) ≤ 1), and \gamma is the discount factor (0 ≤ \gamma < 1).
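
In code, the update is a single assignment into the table of Q-values. The following Python sketch is illustrative only: the dictionary representation, the action labels, and the values of alpha and gamma are assumptions made for this example, not details from the article.

from collections import defaultdict

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    # Target: observed reward plus the discounted value of the best action
    # available in the next state.
    best_next = max(Q[(s_next, a_next)] for a_next in actions)
    # Move Q(s, a) a fraction alpha toward the target.
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Q-values default to 0 for state-action pairs that have not been visited yet.
Q = defaultdict(float)
actions = ["left", "right"]
q_learning_update(Q, s=0, a="right", r=1.0, s_next=1, actions=actions)

In a full agent this update would be applied once per observed transition while the agent explores, for example with an epsilon-greedy policy over the current Q-values.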

Implementation

At its simplest, Q-learning stores its data in a table, with one Q-value per state-action pair. This quickly becomes infeasible as the complexity of the system being monitored or controlled increases. One answer to this problem is to use an (adapted) artificial neural network as a function approximator, as demonstrated by Tesauro in his temporal difference learning backgammon research (TD-Gammon). An adaptation of the standard training procedure is required because the target value (from which the error signal is generated) is itself generated at run-time.
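
As a concrete illustration of training against run-time targets, the sketch below replaces the table with a linear function approximator rather than a neural network, purely to keep the example short; the feature function, its dimensions, and the step sizes are assumptions made for this example, not details from the article.

import numpy as np

def features(s, a):
    # Illustrative hand-coded feature vector for numeric states and actions
    # (an assumption for this sketch, not something specified above).
    return np.array([1.0, s, a, s * a, s ** 2, a ** 2], dtype=float)

def q_value(w, s, a):
    return float(w @ features(s, a))

def semi_gradient_update(w, s, a, r, s_next, actions, alpha=0.01, gamma=0.9):
    # The training target r + gamma * max_a Q(s_next, a) is computed at run-time
    # from the approximator's own current estimates, which is why the standard
    # supervised training procedure has to be adapted.
    target = r + gamma * max(q_value(w, s_next, a_next) for a_next in actions)
    td_error = target - q_value(w, s, a)
    # Semi-gradient step: treat the target as a constant and move w toward it.
    return w + alpha * td_error * features(s, a)

w = np.zeros(6)
w = semi_gradient_update(w, s=0, a=1, r=1.0, s_next=1, actions=[0, 1])

Swapping the linear approximator for a neural network changes only how q_value and its gradient are computed; the construction of the target at run-time stays the same.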
