Geoffrey Hinton
Geoffrey Hinton (born December 6, 1947) is a British-born computer scientist best known for his work on the mathematics and applications of neural networks, and on their relationship to information theory.
Hinton graduated from Cambridge in 1970 with a B.A. in Experimental Psychology, and from Edinburgh in 1978 with a Ph.D. in Artificial Intelligence. He has worked at Sussex, UCSD, Cambridge, Carnegie Mellon University and University College London. He was the founding director of the Gatsby Computational Neuroscience Unit at University College London, and is currently a professor in the computer science department at the University of Toronto.
A simple introduction to Geoffrey Hinton's research can be found in his Scientific American articles of September 1992 and October 1993. He investigates ways of using neural networks for learning, memory, perception and symbol processing, and has over 200 publications in these areas. He was one of the researchers who introduced the back-propagation algorithm, which has been widely used for practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural networks, mixtures of experts, Helmholtz machines and products of experts. His current main interest is in unsupervised learning procedures for neural networks with rich sensory input.
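The back-propagation algorithm mentioned above adjusts a network's weights by propagating the derivative of an error measure backwards through the layers via the chain rule. The sketch below is a minimal illustration of that idea on a toy problem; the network size, activation function, loss, learning rate and XOR data are assumptions chosen for demonstration and are not taken from Hinton's publications.

```python
# Minimal illustrative sketch of back-propagation: gradient descent on a
# one-hidden-layer network. All sizes, data and hyperparameters below are
# assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic problem a purely linear model cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: compute hidden activations and the output prediction.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the derivative of the squared error back
    # through each layer using the chain rule.
    d_out = (p - y) * p * (1 - p)          # error signal at the output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)   # error signal at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(np.round(p, 3))  # predictions should approach [[0], [1], [1], [0]]
```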
Hinton was the first winner of the David E. Rumelhart Prize.
Hinton is the great-great-grandson of the logician George Boole, whose work eventually became one of the foundations of modern computer science, and of the surgeon and author James Hinton [1].
External links
- Published papers (chronological)
- Homepage (at UofT)
- Full CV
- Gatsby Computational Neuroscience Unit (founding director)