Geoffrey Hinton

Geoffrey Hinton is a British-born computer scientist best known for his work on the mathematics and applications of neural networks and their relationship to information theory.

A simple introduction to Geoffrey Hinton's research can be found in his Scientific American articles of September 1992 and October 1993. He investigates ways of using neural networks for learning, memory, perception and symbol processing, and has over 200 publications in these areas. He was one of the researchers who introduced the back-propagation algorithm, which has been widely used in practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural networks, mixtures of experts, Helmholtz machines and products of experts. His current main interest is unsupervised learning procedures for neural networks with rich sensory input.
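
Back-propagation trains a multi-layer network by computing the gradient of an error measure with respect to every weight, passing error signals backwards through the layers, and adjusting each weight in the direction that reduces the error. The following short Python sketch illustrates the idea for a small one-hidden-layer network learning the XOR function; the network size, learning rate and training data are illustrative assumptions rather than details taken from Hinton's publications.

import numpy as np

# Illustrative sketch of back-propagation: a 2-4-1 sigmoid network learning XOR.
# The network size, learning rate and epoch count are assumptions for this example.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(scale=1.0, size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=1.0, size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # network output

    # Backward pass: propagate error gradients from the output towards the input
    d_out = (out - y) * out * (1 - out)      # gradient at the output unit
    d_h = (d_out @ W2.T) * h * (1 - h)       # gradient at the hidden units

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))   # outputs approach [0, 1, 1, 0]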

Hinton was the first winner of the David E. Rumelhart Prize. He is currently a professor in the computer science department at the University of Toronto and the founding director of the Gatsby Computational Neuroscience Unit at University College London.

Hinton was born on December 6, 1947. He is the great-great-grandson of the logician George Boole, whose work eventually became one of the foundations of modern computer science, and of the surgeon and author James Hinton.[1]
