Stochastic gradient descent
Stochastic gradient descent is a general optimization algorithm, but it is typically used to fit the parameters of a machine learning model.
In standard (or "batch") gradient descent, the true gradient of the cost function is used to update the parameters of the model. The true gradient is usually the sum of the gradients contributed by each individual training example. The parameter vector is adjusted by the negative of the true gradient multiplied by a step size. Batch gradient descent therefore requires a full sweep through the training set before any parameters can be updated.
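In symbols (the parameter vector w, per-example cost terms Q_i, and step size \eta are notation introduced here for illustration, not taken from the text above), the batch update can be written as

    w \leftarrow w - \eta \nabla Q(w) = w - \eta \sum_{i=1}^{n} \nabla Q_i(w),

where Q(w) = \sum_{i=1}^{n} Q_i(w) is the total cost over the n training examples.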
In stochastic (or "on-line") gradient descent, the true gradient is approximated by the gradient of the cost function evaluated on only a single training example. The parameters are then adjusted by an amount proportional to this approximate gradient, so the model's parameters are updated after each training example. For large data sets, on-line gradient descent can be much faster than batch gradient descent.
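In the same assumed notation, the on-line update for the i-th training example uses only that example's gradient:

    w \leftarrow w - \eta \nabla Q_i(w),

with the examples typically visited one at a time in a random or shuffled order.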
A compromise between the two forms, often called "mini-batch" gradient descent, approximates the true gradient by a sum over a small number of training examples.
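The following is a minimal sketch of mini-batch gradient descent, not taken from the article: the least-squares objective, the function name minibatch_sgd, and all parameter values are illustrative assumptions. Setting batch_size to 1 recovers on-line gradient descent, and setting it to the size of the training set recovers batch gradient descent.

```python
import numpy as np

def minibatch_sgd(X, y, eta=0.01, batch_size=32, epochs=100, seed=0):
    """Mini-batch SGD for least-squares linear regression (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = rng.permutation(n)                    # shuffle examples each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            grad = Xb.T @ (Xb @ w - yb) / len(idx)    # gradient of the mean squared error on the batch
            w -= eta * grad                           # gradient step
    return w

# Example: recover w = [2.0, -3.0] from noisy synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
y = X @ np.array([2.0, -3.0]) + 0.01 * rng.normal(size=1000)
print(minibatch_sgd(X, y))
```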
Stochastic gradient descent is a form of stochastic approximation. The theory of stochastic approximation gives conditions under which stochastic gradient descent converges.
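One classical set of sufficient conditions, due to Robbins and Monro, requires a decreasing sequence of step sizes \eta_t satisfying

    \sum_{t=1}^{\infty} \eta_t = \infty \quad \text{and} \quad \sum_{t=1}^{\infty} \eta_t^2 < \infty,

for example \eta_t = a/(b + t) for constants a, b > 0, together with regularity conditions on the cost function.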
Some of the most popular stochastic gradient descent algorithms are the least mean squares (LMS) adaptive filter and the backpropagation algorithm.
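As an illustration of this connection, a minimal sketch of an LMS adaptive filter follows; the function name and parameter values are assumptions, not from the article. Each LMS step is a stochastic gradient update on the instantaneous squared prediction error of a linear (FIR) predictor.

```python
import numpy as np

def lms_filter(x, d, num_taps=4, eta=0.05):
    """LMS adaptive filter: stochastic gradient steps on the instantaneous squared error."""
    w = np.zeros(num_taps)
    errors = np.zeros(len(d))
    for t in range(num_taps, len(d)):
        x_t = x[t - num_taps:t][::-1]    # most recent input samples, newest first
        errors[t] = d[t] - w @ x_t       # instantaneous prediction error e_t
        w += eta * errors[t] * x_t       # gradient of 0.5*e_t**2 w.r.t. w is -e_t*x_t
    return w, errors
```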