Neuroevolution

From Wikipedia, the free encyclopedia

Neuroevolution, or neuro-evolution, is a form of machine learning that uses genetic algorithms to train artificial neural networks. It is useful for applications such as games and robot motor control, where it is easy to measure a network's performance at a task but difficult or impossible to create a syllabus of correct input-output pairs for use with a supervised learning algorithm. In the classification scheme for neural network learning, these methods usually belong in the reinforcement learning category.
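The basic loop common to these methods can be sketched as follows. This is a minimal illustration, not any particular published algorithm: the network is reduced to a bare weight vector, and the fitness function is a stand-in (distance to a hidden target vector) for running the network on the actual task, such as a game or a robot controller. The population size, selection scheme, and mutation rate are all illustrative assumptions.

```python
import random

random.seed(0)  # for reproducibility of this sketch

def evaluate(weights):
    # Stand-in fitness: negative squared distance to a hidden target.
    # A real application would instead run the network on the task
    # (a game, a motor-control problem) and score its performance.
    target = [0.5, -1.0, 2.0]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def mutate(weights, sigma=0.1):
    # Gaussian perturbation of every connection weight.
    return [w + random.gauss(0, sigma) for w in weights]

def neuroevolve(pop_size=50, generations=200, n_weights=3):
    # Initial population of random weight vectors.
    population = [[random.uniform(-2, 2) for _ in range(n_weights)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=evaluate, reverse=True)
        parents = population[:pop_size // 5]          # truncation selection
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children               # elitism: parents survive
    return max(population, key=evaluate)

best = neuroevolve()
```

Note that only a scalar fitness score is needed, which is what makes the approach attractive when no input-output training pairs are available.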

Features

There are many neuroevolutionary algorithms. A distinction is made between those that evolve only the values of the connection weights for a network of pre-specified topology and those that evolve the topology of the network in addition to the weights. Although there are no standardized terms for this distinction as a whole, adding or removing a network's connections during evolution may be referred to as complexification or simplification, respectively.[1] Networks that have both their connection weights and topology evolved are referred to as TWEANNs (Topology and Weight Evolving Artificial Neural Networks).
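The complexification and simplification operators can be illustrated with a toy direct encoding. The genome representation (one gene per connection) and the node numbering below are assumptions made for the sake of the example, not a specific published TWEANN:

```python
import random

random.seed(0)  # for reproducibility of this sketch

# A directly encoded network: each gene is one connection
# (source node, target node, weight). In this toy genome,
# nodes 0 and 1 are inputs and node 2 is the output.
genome = [(0, 2, 0.7), (1, 2, -0.3)]

def complexify(genome, nodes):
    # Structural mutation: add a new connection with a random weight.
    src = random.choice(nodes)
    dst = random.choice(nodes)
    return genome + [(src, dst, random.gauss(0, 1))]

def simplify(genome):
    # Structural mutation: remove one existing connection.
    victim = random.randrange(len(genome))
    return [g for i, g in enumerate(genome) if i != victim]

larger = complexify(genome, nodes=[0, 1, 2])
smaller = simplify(genome)
```

A weight-only method would apply numerical mutation to the third element of each gene while leaving the set of genes fixed; a TWEANN applies both kinds of mutation.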

A further distinction is made between methods that evolve the structure (topology) of the neural networks in parallel with the parameters (e.g. synaptic weights) and those that develop them separately. A comparison between two such methods applied to robot control has been published.[10]

Direct and Indirect Encoding of Networks

Direct encoding schemes specify in the genome every connection and node that appears in the network. In contrast, indirect encoding schemes usually specify only rules for constructing a network.[2][3]
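The contrast can be shown with a toy example. The genomes and the weight-generating rule below are illustrative assumptions, not a specific published encoding: the direct genome lists every weight of a small fixed network explicitly, while the indirect genome holds only two parameters of a rule from which the weights are developed.

```python
import math

# Direct encoding: the genome literally lists every connection
# weight of a fixed 2-3-1 network (2*3 + 3*1 = 9 weights).
direct_genome = [0.1, -0.4, 0.8, 0.2, 0.5, -0.9, 0.3, 0.7, -0.2]

# Indirect encoding: the genome is a small parameter set for a
# rule that generates each weight from the indices of the nodes
# it connects (the sinusoidal rule is an illustrative assumption).
indirect_genome = (1.5, 0.3)   # (scale, frequency)

def develop(indirect_genome, n_in=2, n_hidden=3):
    # Generate the input-to-hidden weights from the rule.
    scale, freq = indirect_genome
    weights = []
    for i in range(n_in):
        for j in range(n_hidden):
            weights.append(scale * math.sin(freq * (i + 2 * j)))
    return weights
```

The indirect genome stays the same size however large the developed network becomes, which is the usual motivation for such encodings.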

Examples

Examples of neuroevolution methods that evolve both network structure and parameters can be found in the references.[4][5][6][7][8][9][10]

References

  1. ^ http://www.ucs.louisiana.edu/~dxj2534/james_gecco04.pdf
  2. ^ /c/1997c/tops/dvips
  3. ^ Yohannes Kassahun, Mark Edgington, Jan Hendrik Metzen, Gerald Sommer and Frank Kirchner. Common Genetic Encoding for Both Direct and Indirect Encodings of Networks. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2007), London, UK, pages 1029–1036, 2007.
  4. ^ Peter J. Angeline, Gregory M. Saunders, and Jordan B. Pollack. An evolutionary algorithm that constructs recurrent neural networks. IEEE Transactions on Neural Networks, 5:54–65, 1994.
  5. ^ Xin Yao and Yong Liu. A new evolutionary system for evolving artificial neural networks. IEEE Transactions on Neural Networks, 8(3):694–713, May 1997.
  6. ^ http://nn.cs.utexas.edu/downloads/papers/stanley.ieeetec05.pdf
  7. ^ http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf
  8. ^ http://nn.cs.utexas.edu/downloads/papers/stanley.ieeetec05.pdf
  9. ^ Yohannes Kassahun and Gerald Sommer. Efficient reinforcement learning through evolutionary acquisition of neural topologies. In Proceedings of the 13th European Symposium on Artificial Neural Networks (ESANN 2005), pages 259–266, Bruges, Belgium, April 2005.
  10. ^ Nils T. Siebel and Gerald Sommer. Evolutionary reinforcement learning of artificial neural networks. International Journal of Hybrid Intelligent Systems, 4(3):171–183, October 2007.
