Extreme learning machine

Extreme learning machines are feedforward neural networks for classification or regression with a single layer of hidden nodes, where the weights connecting inputs to hidden nodes are randomly assigned and never updated (i.e., they form a random projection). The weights between hidden nodes and outputs are learned in a single step, which essentially amounts to fitting a linear model. The name "extreme learning machine" (ELM) was given to such models by Guang-Bin Huang.

According to their creators, these models are able to produce good generalization performance and learn thousands of times faster than networks trained using backpropagation.[1]

Algorithm

The simplest ELM training algorithm learns a model of the form

    Ŷ = W2 σ(W1 x)

where W1 is the matrix of input-to-hidden-layer weights, σ is some activation function, and W2 is the matrix of hidden-to-output-layer weights. The algorithm proceeds as follows:

  1. Fill W1 with Gaussian random noise;
  2. Estimate W2 by a least-squares fit to a matrix of response variables Y, computed using the pseudoinverse ⁺, given a design matrix X:

         W2 = σ(W1 X)⁺ Y
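
As a concrete illustration, the following NumPy sketch implements these two steps under the row-per-sample convention (activations σ(X W1) rather than σ(W1 x)). The function names, the hidden width of 100, and the logistic activation are illustrative assumptions, not part of any reference implementation.

    import numpy as np

    def elm_fit(X, Y, n_hidden=100, seed=0):
        """Basic ELM training sketch: random hidden layer, least-squares read-out.

        X: (n_samples, n_features) design matrix; Y: (n_samples, n_outputs) targets.
        """
        rng = np.random.default_rng(seed)
        # Step 1: fill W1 with Gaussian random noise; it is never updated.
        W1 = rng.standard_normal((X.shape[1], n_hidden))
        # Hidden-layer activations H = sigma(X W1), here a logistic sigmoid.
        H = 1.0 / (1.0 + np.exp(-X @ W1))
        # Step 2: least-squares read-out via the pseudoinverse: W2 = H⁺ Y.
        W2 = np.linalg.pinv(H) @ Y
        return W1, W2

    def elm_predict(X, W1, W2):
        # Apply the fixed random projection, then the learned linear read-out.
        H = 1.0 / (1.0 + np.exp(-X @ W1))
        return H @ W2

Note that the entire fit reduces to a single pseudoinverse computation, which is the basis of the speed claims above.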

Reliability

The black-box character of neural networks in general, and of extreme learning machines (ELMs) in particular, is one of the major concerns that deters engineers from applying them in safety-critical automation tasks. This issue has been approached with several different techniques. One approach is to reduce the dependence on the random input weights.[2][3] Another incorporates continuous constraints, derived from prior knowledge about the specific task, into the learning process of ELMs.[4][5] This is reasonable, because machine learning solutions have to guarantee safe operation in many application domains. These studies showed that the special form of ELMs, with its functional separation and linear read-out weights, is particularly well suited to the efficient incorporation of continuous constraints in predefined regions of the input space.
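
One concrete instance of reducing the dependence on the random input, suggested by the ridge-regression approach of [3], is to replace the plain pseudoinverse read-out with a regularized least-squares solve. The sketch below assumes the same conventions as the example in the Algorithm section; the regularization strength lam is an arbitrary illustrative value, not taken from the cited papers.

    import numpy as np

    def elm_fit_ridge(X, Y, n_hidden=100, lam=1e-2, seed=0):
        """ELM with a ridge-regularized read-out (illustrative sketch)."""
        rng = np.random.default_rng(seed)
        # Random, fixed input weights, as in the basic ELM.
        W1 = rng.standard_normal((X.shape[1], n_hidden))
        H = 1.0 / (1.0 + np.exp(-X @ W1))  # hidden activations
        # Ridge read-out: W2 = (H'H + lam*I)^(-1) H'Y; the penalty damps the
        # solution's sensitivity to the particular random projection drawn.
        W2 = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)
        return W1, W2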

Controversy

The purported invention of the ELM provoked some debate. In 2008, a letter to the editor of IEEE Transactions on Neural Networks pointed out that the idea of using a hidden layer connected to the inputs by random, untrained weights had already been suggested in the original papers on RBF networks in the late 1980s, and that experiments with multi-layer perceptrons with similar randomness had appeared in about the same timeframe; Guang-Bin Huang replied by pointing out subtle differences.[6] In a 2015 paper, Huang responded to complaints about his coining of the name ELM for already-existing methods, objecting to "very negative and unhelpful comments on ELM in neither academic nor professional manner due to various reasons and intentions" and an "irresponsible anonymous attack which intends to destroy harmony research environment", and arguing that his work "provides a unifying learning platform" for various types of neural nets,[7] including hierarchically structured ELMs.[8] Later research has replaced the random weights with constrained random weights.[9]

References

  1. Huang, Guang-Bin; Zhu, Qin-Yu; Siew, Chee-Kheong (2006). "Extreme learning machine: theory and applications". Neurocomputing. 70 (1): 489–501. CiteSeerX 10.1.1.217.3692. doi:10.1016/j.neucom.2005.12.126.
  2. "Batch intrinsic plasticity for extreme learning machines". Proc. of International Conference on Artificial Neural Networks.
  3. "Optimizing extreme learning machines via ridge regression and batch intrinsic plasticity". Neurocomputing: 23–30.
  4. "Reliable integration of continuous constraints into extreme learning machines". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems. 21 (supp02): 35–50. 2013-10-31. ISSN 0218-4885. doi:10.1142/S021848851340014X.
  5. Neumann, Klaus (2014). Reliability. University Library Bielefeld. pp. 49–74.
  6. Wang, Lipo P.; Wan, Chunru R. (2008). "Comments on "The Extreme Learning Machine"". IEEE Trans. Neural Networks. CiteSeerX 10.1.1.217.2330.
  7. Huang, Guang-Bin (2015). "What are Extreme Learning Machines? Filling the Gap Between Frank Rosenblatt's Dream and John von Neumann's Puzzle" (PDF). Cognitive Computation. 7. doi:10.1007/s12559-015-9333-0.
  8. Zhu, W.; Miao, J.; Qing, L.; Huang, G. B. (2015-07-01). "Hierarchical Extreme Learning Machine for unsupervised representation learning". 2015 International Joint Conference on Neural Networks (IJCNN): 1–8. doi:10.1109/IJCNN.2015.7280669.
  9. Zhu, W.; Miao, J.; Qing, L. (2014-07-01). "Constrained Extreme Learning Machine: A novel highly discriminative random feedforward neural network". 2014 International Joint Conference on Neural Networks (IJCNN): 800–807. doi:10.1109/IJCNN.2014.6889761.