Early stopping

In machine learning, early stopping is a form of regularization used when a machine learning model (such as a neural network) is trained by on-line gradient descent. In early stopping, the training set is split into a new training set and a validation set. Gradient descent is applied to the new training set; after each sweep through it, the network is evaluated on the validation set. When performance on the validation set stops improving, the algorithm halts. The network with the best performance on the validation set is then tested on a separate set of data, because the validation set has already been used during learning to decide when to stop.

This technique is a simple but efficient way to address overfitting. Overfitting is a phenomenon in which a learning system, such as a neural network, becomes very good at fitting one data set at the expense of becoming very bad at dealing with other data sets. Because training is halted before the weights have moved far from their small initial values, early stopping effectively limits the set of weights the network can use and thus imposes a form of regularization, lowering the network's effective VC dimension.

Early stopping is a very common practice in neural network training and often produces networks that generalize well. However, although it often improves generalization, it does not do so in a mathematically well-defined way.

Method

  1. Divide the available data into training and validation sets.
  2. Use a large number of hidden units.
  3. Use very small random initial values.
  4. Use a slow learning rate.
  5. Compute the validation error rate periodically during training.
  6. Stop training when the validation error rate "starts to go up" (see the sketch after this list).
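
As a concrete illustration, here is a minimal sketch of steps 1 through 6, assuming a linear least-squares model trained by full-batch gradient descent; the function name, model, and hyperparameters are illustrative assumptions, not a prescribed implementation, and the procedure above would typically use on-line updates on a neural network instead:

```python
import numpy as np

def train_with_early_stopping(X_train, y_train, X_val, y_val,
                              lr=0.01, max_epochs=1000):
    """Gradient descent with naive early stopping (illustrative sketch)."""
    rng = np.random.default_rng(0)
    # Step 3: very small random initial weights.
    w = rng.normal(scale=0.01, size=X_train.shape[1])

    def mse(X, y, w):
        return np.mean((X @ w - y) ** 2)

    best_val, best_w = mse(X_val, y_val, w), w.copy()
    for epoch in range(max_epochs):
        # One gradient step on the new training set (step 4: slow learning rate).
        grad = 2.0 * X_train.T @ (X_train @ w - y_train) / len(y_train)
        w -= lr * grad
        # Step 5: compute the validation error periodically (here, every epoch).
        val = mse(X_val, y_val, w)
        if val < best_val:
            best_val, best_w = val, w.copy()
        else:
            break  # Step 6: halt as soon as the validation error goes up.
    # Return the weights with the best validation performance.
    return best_w
```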

It is crucial to realize that the validation error is not an unbiased estimate of the generalization error, because it is used to select the stopping point. One method for obtaining an unbiased estimate is to run the network on a third set of data, the test set, which is not used at all during the training process. The error on the test set estimates how well the network's outputs approximate the target values for inputs that are not in the training set.
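
A hedged sketch of such a three-way split follows, assuming scikit-learn's train_test_split is available and reusing the hypothetical train_with_early_stopping helper from the earlier sketch; the synthetic data merely stands in for a real data set:

```python
import numpy as np
from sklearn.model_selection import train_test_split

from early_stopping_sketch import train_with_early_stopping  # the earlier sketch

# Synthetic regression data stands in for a real data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = X @ rng.normal(size=20) + 0.1 * rng.normal(size=500)

# Three-way split: 60% training, 20% validation, 20% test.
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)

# The validation set steers early stopping; the test set is never
# touched during training, so its error is an unbiased estimate.
w = train_with_early_stopping(X_train, y_train, X_val, y_val)
test_mse = np.mean((X_test @ w - y_test) ** 2)
print(f"test MSE: {test_mse:.4f}")
```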

Advantages

Early stopping has several advantages: it is fast, since training is simply halted early rather than run to convergence; it can be applied even to networks in which the number of weights far exceeds the number of training cases; and it demands few decisions from the user beyond how to split the data and when to declare that the validation error is rising.

Issues

The main practical issue is deciding when the validation error has "really" started to go up. The validation error typically fluctuates during training and may pass through several local minima, so a single increase is not a reliable sign that the best stopping point has been reached. A common heuristic is to continue training until the validation error has failed to improve for a fixed number of consecutive evaluations (a "patience" window) and then restore the weights that achieved the lowest validation error, as in the sketch below.
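
The following variant of the earlier sketch illustrates one such patience heuristic; the patience parameter and function name are illustrative assumptions:

```python
import numpy as np

def train_with_patience(X_train, y_train, X_val, y_val,
                        lr=0.01, max_epochs=1000, patience=10):
    """Early stopping with a patience window (illustrative sketch)."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X_train.shape[1])

    def mse(X, y, w):
        return np.mean((X @ w - y) ** 2)

    best_val, best_w, bad_checks = mse(X_val, y_val, w), w.copy(), 0
    for epoch in range(max_epochs):
        grad = 2.0 * X_train.T @ (X_train @ w - y_train) / len(y_train)
        w -= lr * grad
        val = mse(X_val, y_val, w)
        if val < best_val:
            best_val, best_w, bad_checks = val, w.copy(), 0
        else:
            # Tolerate fluctuations: only stop after `patience`
            # consecutive checks without improvement.
            bad_checks += 1
            if bad_checks >= patience:
                break
    return best_w  # weights from the lowest validation error seen
```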
