Stochastic approximation
Stochastic approximation methods are a family of iterative stochastic optimization algorithms that attempt to find zeroes or extrema of functions which cannot be computed directly but can only be estimated via noisy observations. The first, and prototypical, algorithms of this kind were the Robbins–Monro and Kiefer–Wolfowitz algorithms.
In the Robbins–Monro algorithm, introduced in 1951,[1] one has a function M(x) and wishes to find the value x0 satisfying M(x0) = α. What is observable, however, is not M(x) but a random variable N(x) such that E(N(x) | x) = M(x). The algorithm then constructs a sequence x1, x2, ... which satisfies
    x_{i+1} = x_i + a_i (α − N(x_i)).
Here, a1, a2, ... is a sequence of positive step sizes. Robbins and Monro proved that if N(x) is uniformly bounded, M(x) is nondecreasing, M'(x0) exists and is positive, and the ai satisfy a set of bounds (fulfilled if one takes ai = 1/i), then xi converges in L2 (and hence also in probability) to x0 ([1], Theorem 2). In general, the ai need not equal 1/i. However, to ensure convergence they should converge to zero, and in order to average out the noise in N(x) they should converge slowly; these requirements are commonly formalized as Σ ai = ∞ together with Σ ai² < ∞.
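As an illustration, the following minimal Python sketch runs the Robbins–Monro iteration on a hypothetical problem. The regression function M(x) = tanh x (bounded and nondecreasing), the noise level, the starting point, and the step count are all illustrative assumptions, not part of the original paper.

    import math
    import random

    def robbins_monro(noisy_oracle, alpha, x0, n_steps=10000):
        # Seek the root x0 of M(x) = alpha given only noisy observations N(x).
        x = x0
        for i in range(1, n_steps + 1):
            a_i = 1.0 / i  # satisfies sum a_i = infinity, sum a_i^2 < infinity
            x = x + a_i * (alpha - noisy_oracle(x))
        return x

    # Hypothetical example: M(x) = tanh(x), observed through additive
    # Gaussian noise of standard deviation 0.1.
    def N(x):
        return math.tanh(x) + random.gauss(0.0, 0.1)

    # Solve tanh(x0) = 0.5; the exact root is atanh(0.5), about 0.5493.
    print(robbins_monro(N, alpha=0.5, x0=0.0))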
In the similar Kiefer–Wolfowitz algorithm, introduced a year later,[2] one wishes to find the maximum x0 of the unknown function M(x) and constructs a sequence x1, x2, ... such that
    x_{i+1} = x_i + (a_i / c_i) (N(x_i + c_i) − N(x_i − c_i)).
Here, a1, a2, ... is a sequence of positive step sizes which serve the same function as in the Robbins–Monro algorithm, and c1, c2, ... is a sequence of positive step sizes which are used to estimate, via finite differences, the derivative of M. Kiefer and Wolfowitz showed that if the ai and ci satisfy various bounds (fulfilled by taking ai = 1/i, ci = i^(−1/3)), and M(x) and N(x) satisfy some technical conditions, then the sequence xi converges in probability to x0.
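A corresponding Python sketch of the Kiefer–Wolfowitz iteration follows; the concave objective M(x) = −(x − 1)² and the noise model are again illustrative assumptions.

    import random

    def kiefer_wolfowitz(noisy_oracle, x0, n_steps=10000):
        # Seek the maximizer x0 of M given only noisy observations N(x).
        x = x0
        for i in range(1, n_steps + 1):
            a_i = 1.0 / i            # gain sequence
            c_i = i ** (-1.0 / 3.0)  # finite-difference width
            # Two-sided finite-difference estimate of the slope of M at x.
            diff = noisy_oracle(x + c_i) - noisy_oracle(x - c_i)
            x = x + (a_i / c_i) * diff
        return x

    # Hypothetical example: M(x) = -(x - 1)^2 attains its maximum at x0 = 1,
    # observed through additive Gaussian noise.
    def N(x):
        return -(x - 1.0) ** 2 + random.gauss(0.0, 0.1)

    print(kiefer_wolfowitz(N, x0=0.0))  # converges toward 1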
An extensive theoretical literature has grown up around these algorithms, concerning conditions for convergence, rates of convergence, multivariate and other generalizations, proper choice of step size, possible noise models, and so on.[3][4] These methods are also applied in control theory, in which case the unknown function whose zero or extremum we seek may vary in time. In this case, the step size ai should not converge to zero but should be chosen so as to track the function ([3], 2nd ed., chapter 3).
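For instance, a minimal sketch of such tracking, under the assumed model of a root that drifts slowly in time, replaces the decreasing step sizes with a small constant gain:

    import math
    import random

    # Hypothetical time-varying target: the root of M_t(x) = tanh(x - 0.001 * t)
    # drifts slowly to the right as t grows.
    def N_t(x, t):
        return math.tanh(x - 0.001 * t) + random.gauss(0.0, 0.05)

    x = 0.0
    a = 0.05  # constant gain: a_i does not converge to zero
    for t in range(10000):
        x = x + a * (0.0 - N_t(x, t))  # chase the moving root of M_t(x) = 0
    print(x)  # stays near the drifting root, here about 0.001 * 10000 = 10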
References
1. Herbert Robbins and Sutton Monro, "A Stochastic Approximation Method", Annals of Mathematical Statistics 22, #3 (September 1951), pp. 400–407.
2. J. Kiefer and J. Wolfowitz, "Stochastic Estimation of the Maximum of a Regression Function", Annals of Mathematical Statistics 23, #3 (September 1952), pp. 462–466.
3. Harold J. Kushner and G. George Yin, Stochastic Approximation Algorithms and Applications, New York: Springer-Verlag, 1997. ISBN 038794916X; 2nd ed., titled Stochastic Approximation and Recursive Algorithms and Applications, 2003, ISBN 0387008942.
4. Mikhail Borisovich Nevel'son and Rafail Zalmanovich Has'minskiĭ, Stochastic Approximation and Recursive Estimation, translated by the Israel Program for Scientific Translations and B. Silver, Providence, RI: American Mathematical Society, 1973, 1976. ISBN 0821815970.