Cybenko Theorem
From Wikipedia, the free encyclopedia
Formulated by George Cybenko in 1989, the theorem states:
Let $\sigma$ be any continuous sigmoid-type function (e.g., $\sigma(\xi) = 1/(1+e^{-\xi})$). Then, given any continuous real-valued function $f$ on $[0,1]^n$ (or any other compact subset of $\mathbb{R}^n$) and $\varepsilon > 0$, there exist vectors $w_1, w_2, \ldots, w_N$, $\alpha$, and $\theta$, and a parameterized function $G(\cdot, w, \alpha, \theta) : [0,1]^n \to \mathbb{R}$ such that

$$|G(x, w, \alpha, \theta) - f(x)| < \varepsilon \quad \text{for all } x \in [0,1]^n,$$

where

$$G(x, w, \alpha, \theta) = \sum_{j=1}^{N} \alpha_j \, \sigma(w_j^{\mathsf{T}} x + \theta_j)$$

and $w_j \in \mathbb{R}^n$, $\alpha_j, \theta_j \in \mathbb{R}$, $w = (w_1, w_2, \ldots, w_N)$, $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_N)$, and $\theta = (\theta_1, \theta_2, \ldots, \theta_N)$.
This theorem says that a single-hidden-layer feedforward neural network is capable of approximating any continuous multivariate function on a compact domain to the desired degree of accuracy, and that failure to approximate a function arises from poor choices of the parameters $w$, $\alpha$, and $\theta$ or an insufficient number $N$ of hidden neurons.
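The sum-of-sigmoids form can be made concrete with a small sketch in Python. This is not Cybenko's proof technique, only an illustrative construction under assumed names (`build_G`, `n_hidden`, `steepness` are all hypothetical): each steep sigmoid acts as an approximate step function, and summing the steps yields a staircase that tracks a target function on $[0,1]$.

```python
import math

def sigmoid(t):
    """Logistic sigmoid, a continuous sigmoid-type function."""
    if t < -30.0:
        return 0.0  # avoid OverflowError in math.exp for very negative t
    return 1.0 / (1.0 + math.exp(-t))

def build_G(f, n_hidden=50, steepness=1000.0):
    """Return G(x) = sum_j alpha_j * sigmoid(w_j * x + theta_j) on [0, 1].

    Illustrative construction: each steep sigmoid approximates a unit step,
    so G is a staircase matching f at the grid points x_j = j / n_hidden.
    """
    h = 1.0 / n_hidden
    params = []  # (alpha_j, w_j, theta_j) triples
    prev = 0.0
    for j in range(n_hidden + 1):
        x_j = j * h
        alpha = f(x_j) - prev                 # step height up to f(x_j)
        w = steepness
        theta = -steepness * (x_j - h / 2.0)  # step centred at x_j - h/2
        params.append((alpha, w, theta))
        prev = f(x_j)

    def G(x):
        return sum(a * sigmoid(w * x + th) for a, w, th in params)
    return G
```

Increasing `n_hidden` shrinks the staircase error, mirroring the theorem's requirement of a sufficient number of hidden neurons for a given $\varepsilon$.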
References
- Hassoun, M. (1995). Fundamentals of Artificial Neural Networks. MIT Press, p. 48.