Delta rule


The delta rule is a gradient descent learning rule for updating the weights of the artificial neurons in a single-layer perceptron. For a neuron j with activation function g(x), the delta rule for j's ith weight w_{ji} is given by

\Delta w_{ji} = \alpha (t_j - y_j) g'(h_j) x_i,

where α is a small constant called the learning rate, g(x) is the neuron's activation function, t_j is the target output, y_j is the actual output, and x_i is the ith input. Here h_j = \sum_i x_i w_{ji} is the neuron's total input and y_j = g(h_j). The delta rule is commonly stated in simplified form for a perceptron with a linear activation function as

\Delta w_{ji} = \alpha (t_j - y_j) x_i.
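As a concrete illustration, here is a minimal sketch in Python (the function and variable names are assumptions, not from this article) of a single delta-rule update for a linear neuron:

```python
import numpy as np

def delta_rule_update(w, x, t, alpha=0.1):
    """One delta-rule step for a linear neuron: Delta w_i = alpha * (t - y) * x_i."""
    y = np.dot(w, x)                # linear activation, so y_j = h_j
    return w + alpha * (t - y) * x  # update all weights at once

# Example: start from zero weights and apply one update.
w = delta_rule_update(np.zeros(3), x=np.array([1.0, 0.5, -0.2]), t=1.0)
print(w)  # [0.1, 0.05, -0.02]
```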

Derivation of the delta rule

The delta rule is derived by attempting to minimize the error in the output of the perceptron through gradient descent. The error for a perceptron with multiple output neurons, indexed by j, can be measured as

E=\sum_{j} \frac{1}{2}(t_j-y_j)^2.
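For instance, if a network has two output neurons with targets t_1 = 1 and t_2 = 0 and actual outputs y_1 = 0.8 and y_2 = 0.3 (illustrative values), the error would be

E = \frac{1}{2}(1-0.8)^2 + \frac{1}{2}(0-0.3)^2 = 0.02 + 0.045 = 0.065.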

In this case, we wish to move through the "weight space" of the neuron (the space of all possible values of all of the neuron's weights) in proportion to the gradient of the error function with respect to each weight. In order to do that, we calculate the partial derivative of the error with respect to each weight. For the ith weight, this derivative can be written as

\frac{\partial E}{ \partial w_{ji} }.

Because we are only concerning ourselves with the jth neuron, we can substitute the error formula above while omitting the summation:

\frac{\partial E}{ \partial w_{ji} } = \frac{ \partial \left ( \frac{1}{2} \left( t_j-y_j \right ) ^2 \right ) }{ \partial w_{ji} }

Next we use the chain rule to split this into two derivatives:

= \frac{ \partial \left ( \frac{1}{2} \left( t_j-y_j \right ) ^2 \right ) }{ \partial y_j } \frac{ \partial y_j }{ \partial w_{ji} }

To find the left derivative, we apply the general power rule; differentiating t_j - y_j with respect to y_j contributes the leading factor of -1:

= - \left ( t_j-y_j \right ) \frac{ \partial y_j }{ \partial w_{ji} }

To find the right derivative, we again apply the chain rule, this time differentiating with respect to the total input to j, h_j:

= - \left ( t_j-y_j \right ) \frac{ \partial y_j }{ \partial h_j } \frac{ \partial h_j }{ \partial w_{ji} }

Note that the output of the neuron y_j is just the neuron's activation function g() applied to the neuron's input h_j. We can therefore write the derivative of y_j with respect to h_j simply as g()'s first derivative:

= - \left ( t_j-y_j \right ) g'(h_j) \frac{ \partial h_j }{ \partial w_{ji} }
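As a concrete example, if g is the logistic (sigmoid) activation function, this derivative can be written in terms of the neuron's own output:

g(h_j) = \frac{1}{1+e^{-h_j}}, \qquad g'(h_j) = g(h_j)\left(1 - g(h_j)\right) = y_j(1-y_j).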

Next we rewrite h_j in the last term, \partial h_j / \partial w_{ji}, as the sum over all k weights of each weight w_{jk} times its corresponding input x_k:

= - \left ( t_j-y_j \right ) g'(h_j) \frac{ \partial \left ( \sum_{k} x_k w_{jk} \right ) }{ \partial w_{ji} }

Because we are only concerned with the ith weight, the only term of the summation that is relevant is x_i w_{ji}. Clearly,

\frac{ \partial x_i w_{ji} }{ \partial w_{ji} }=x_i,

giving us our final equation for the gradient:

\frac{\partial E}{ \partial w_{ji} } = - \left ( t_j-y_j \right ) g'(h_j) x_i
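This result can be checked numerically. The sketch below (an illustration with assumed names, using a single logistic neuron and random values, not part of the derivation) compares the analytic gradient -(t_j - y_j) g'(h_j) x_i with a central finite-difference estimate of \partial E / \partial w_{ji}:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)   # inputs x_i
w = rng.normal(size=3)   # weights w_ji of a single neuron j
t = 1.0                  # target output t_j

def g(h):
    return 1.0 / (1.0 + np.exp(-h))   # logistic activation g

def g_prime(h):
    return g(h) * (1.0 - g(h))        # its derivative g'

def error(w):
    """E = 1/2 (t - y)^2 with y = g(h) and h = sum_i x_i w_i."""
    return 0.5 * (t - g(np.dot(x, w))) ** 2

h = np.dot(x, w)
analytic = -(t - g(h)) * g_prime(h) * x   # gradient from the derivation above

eps = 1e-6
numeric = np.array([
    (error(w + eps * np.eye(3)[i]) - error(w - eps * np.eye(3)[i])) / (2 * eps)
    for i in range(3)
])
print(np.allclose(analytic, numeric))     # expected: True
```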

As noted above, gradient descent tells us that our change for each weight should be proportional to the gradient. Choosing a proportionality constant α and eliminating the minus sign (so that the weight moves in the negative direction of the gradient, minimizing the error), we arrive at our target equation:

\Delta w_{ji} = \alpha (t_j - y_j) g'(h_j) x_i.
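Putting the rule into practice, the following is a minimal training sketch (the function names and the toy OR data set are illustrative assumptions, not from this article) that repeatedly applies this update to a single sigmoid neuron:

```python
import numpy as np

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

def train_delta_rule(X, T, alpha=0.5, epochs=1000, seed=0):
    """Train a single sigmoid neuron with the delta rule.

    X: (n_samples, n_inputs) inputs x_i; T: (n_samples,) targets t_j.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, T):
            y = sigmoid(np.dot(w, x))               # actual output y_j = g(h_j)
            w += alpha * (t - y) * y * (1 - y) * x  # Delta w_ji = alpha (t_j - y_j) g'(h_j) x_i
    return w

# Toy usage: learn the OR function; the last input column is a constant bias of 1.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 1.0])
w = train_delta_rule(X, T)
print(np.round(sigmoid(X @ w), 2))  # outputs should approach [0, 1, 1, 1]
```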
