Geoffrey Hinton

Geoffrey Hinton
Born: 6 December 1947, Wimbledon, London
Residence: Canada
Fields: Neural computation, artificial intelligence, machine learning
Thesis: Relaxation and Its Role in Vision (1977)
Doctoral advisor: H. Christopher Longuet-Higgins
Known for: Backpropagation, Boltzmann machines, deep learning
Notable awards: AAAI Fellow (1990); Rumelhart Prize (2001); IJCAI Award for Research Excellence (2005)
Website: www.cs.toronto.edu/~hinton/

Geoffrey (Geoff) Everest Hinton FRS (born 6 December 1947) is a British-born cognitive psychologist and computer scientist, most noted for his work on artificial neural networks. He divides his time between Google and the University of Toronto.[1] He is a co-inventor of the backpropagation and contrastive divergence training algorithms and a leading figure in the deep learning movement.[2]

Career

Hinton graduated from the University of Cambridge in 1970 with a Bachelor of Arts in experimental psychology, and from the University of Edinburgh in 1978 with a PhD in artificial intelligence. He has worked at the University of Sussex, the University of California, San Diego, Cambridge, Carnegie Mellon University and University College London. He was the founding director of the Gatsby Computational Neuroscience Unit at University College London, and is currently a professor in the computer science department at the University of Toronto, where he holds a Canada Research Chair in Machine Learning. He is the director of the program on "Neural Computation and Adaptive Perception", which is funded by the Canadian Institute for Advanced Research. In 2012, Hinton taught a free online course on neural networks on the education platform Coursera.[3] He joined Google in March 2013 when his company, DNNresearch Inc., was acquired, planning to "divide his time between his university research and his work at Google".[4]

Research interests

An accessible introduction to Geoffrey Hinton's research can be found in his Scientific American articles of September 1992 and October 1993. He investigates ways of using neural networks for learning, memory, perception and symbol processing, and has authored over 200 publications in these areas. He was one of the researchers who introduced the backpropagation algorithm for training multi-layer neural networks, which has been widely used in practical applications. He co-invented Boltzmann machines with Terry Sejnowski. His other contributions to neural network research include distributed representations, time-delay neural networks, mixtures of experts, Helmholtz machines and products of experts. His current main interest is in unsupervised learning procedures for neural networks with rich sensory input.
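Backpropagation, mentioned above, trains a multi-layer network by propagating error derivatives backwards from the output layer to earlier layers. As an illustration only (a minimal sketch, not code from Hinton's work; the XOR dataset, the 2-4-1 layer sizes, the learning rate and the squared-error loss are arbitrary choices for the example), training a small network with backpropagation might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, which a network with no hidden layer cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialised weights for a 2-4-1 network (sizes chosen for the example).
W1 = rng.normal(0.0, 1.0, (2, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))

# Loss before training, for comparison.
out = sigmoid(sigmoid(X @ W1) @ W2)
mse_before = float(np.mean((out - y) ** 2))

lr = 0.5
for step in range(5000):
    # Forward pass: compute hidden and output activations.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: propagate error derivatives layer by layer
    # (derivatives of squared error through the sigmoid nonlinearity).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

out = sigmoid(sigmoid(X @ W1) @ W2)
mse_after = float(np.mean((out - y) ** 2))
print(f"MSE before training: {mse_before:.3f}, after: {mse_after:.3f}")
```

The backward pass is the key idea: the same chain-rule computation that gives the output layer's gradient is reused, layer by layer, to assign error to the hidden units.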

Honours and awards

Hinton was the first winner of the David E. Rumelhart Prize. He was elected a Fellow of the Royal Society in 1998.[5]

In 2001, Hinton was awarded an Honorary Doctorate from the University of Edinburgh.

Hinton was the 2005 recipient of the IJCAI Award for Research Excellence, a lifetime-achievement award.

He has also been awarded the 2011 Herzberg Canada Gold Medal for Science and Engineering.[6]

In 2013, Hinton was awarded an Honorary Doctorate from the Université de Sherbrooke.

Personal life

Hinton is the great-great-grandson of both the logician George Boole, whose work eventually became one of the foundations of modern computer science, and the surgeon and author James Hinton.[7] His father was the entomologist Howard Hinton.

References

  1. Daniela Hernandez, "The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI", Wired, 7 May 2013. Retrieved 10 May 2013.
  2. Kate Allen, "How a Toronto professor's research revolutionized artificial intelligence", Toronto Star, 17 April 2015.
  3. https://www.coursera.org/course/neuralnets
  4. "U of T neural networks start-up acquired by Google" (press release), Toronto, ON, 12 March 2013. Retrieved 13 March 2013.
  5. "Fellows of the Royal Society", The Royal Society. Retrieved 14 March 2013.
  6. "Artificial intelligence scientist gets $1M prize", CBC News, 14 February 2011.
  7. "The Isaac Newton of logic".
