Timeline of machine learning
This page is a timeline of machine learning. Major discoveries, achievements, milestones and other notable events are included.
Overview
Decade | Summary |
---|---|
Pre-1950s | Statistical methods are discovered and refined. |
1950s | Pioneering machine learning research is conducted using simple algorithms. |
1960s | Bayesian methods are introduced for probabilistic inference in machine learning.[1] |
1970s | An 'AI Winter' is caused by pessimism about the effectiveness of machine learning. |
1980s | Rediscovery of backpropagation causes a resurgence in machine learning research. |
1990s | Work on machine learning shifts from a knowledge-driven approach to a data-driven approach. Scientists begin creating programs for computers to analyze large amounts of data and draw conclusions — or “learn” — from the results.[2] Support vector machines and recurrent neural networks become popular. |
2000s | Kernel methods grow in popularity,[3] and competitive machine learning becomes more widespread.[4] |
2010s | Deep learning becomes feasible, which leads to machine learning becoming integral to many widely used software services and applications. |
Timeline
Year | Event Type | Caption | Event |
---|---|---|---|
1763 | Discovery | The Underpinnings of Bayes' Theorem | Thomas Bayes's work An Essay towards solving a Problem in the Doctrine of Chances is published two years after his death, having been amended and edited by a friend of Bayes, Richard Price.[5] The essay presents work which underpins Bayes' theorem (stated in modern notation below). |
1805 | Discovery | Least Squares | Adrien-Marie Legendre describes the "méthode des moindres carrés", known in English as the least squares method.[6] The least squares method is widely used in data fitting (a worked sketch appears below). |
1812 | | Bayes' Theorem | Pierre-Simon Laplace publishes Théorie Analytique des Probabilités, in which he expands upon the work of Bayes and defines what is now known as Bayes' Theorem.[7] |
1913 | Discovery | Markov Chains | Andrey Markov first describes techniques he used to analyse a poem. The techniques later become known as Markov chains (a toy version of the analysis appears below).[8] |
1950 | | Turing's Learning Machine | Alan Turing proposes a 'learning machine' that could learn and become artificially intelligent. Turing's specific proposal foreshadows genetic algorithms.[9] |
1951 | | First Neural Network Machine | Marvin Minsky and Dean Edmonds build SNARC, the first neural network machine, which is able to learn.[10] |
1952 | | Machines Playing Checkers | Arthur Samuel joins IBM's Poughkeepsie Laboratory and begins working on some of the very first machine learning programs, first creating programs that play checkers.[11] |
1957 | Discovery | Perceptron | Frank Rosenblatt invents the perceptron while working at the Cornell Aeronautical Laboratory.[12] The invention generates a great deal of excitement and is widely covered in the media (the learning rule is sketched below).[13] |
1967 | | Nearest Neighbor | The nearest neighbor algorithm is created, marking the start of basic pattern recognition. The algorithm is used to map routes (a one-nearest-neighbor sketch appears below).[14] |
1969 | | Limitations of Neural Networks | Marvin Minsky and Seymour Papert publish their book Perceptrons, describing some of the limitations of perceptrons and neural networks. The widespread interpretation of the book as showing that neural networks are fundamentally limited is seen as a hindrance to research into neural networks.[15][16] |
1970 | | Automatic Differentiation (Backpropagation) | Seppo Linnainmaa publishes the general method for automatic differentiation (AD) of discrete connected networks of nested differentiable functions.[17][18] This corresponds to the modern version of backpropagation, but is not yet named as such.[19][20][21][22] |
1979 | | Stanford Cart | Students at Stanford University develop a cart that can navigate and avoid obstacles in a room.[23] |
1980 | Discovery | Neocognitron | Kunihiko Fukushima first publishes his work on the Neocognitron, a type of artificial neural network.[24] The Neocognitron later inspires convolutional neural networks.[25] |
1981 | | Explanation Based Learning | Gerald Dejong introduces explanation-based learning, in which a computer algorithm analyses data and creates a general rule it can follow, discarding unimportant data.[26] |
1982 | Discovery | Recurrent Neural Network | John Hopfield popularizes Hopfield networks, a type of recurrent neural network that can serve as content-addressable memory systems.[27] |
1985 | | NetTalk | Terry Sejnowski develops NetTalk, a program that learns to pronounce words the same way a baby does.[28] |
1986 | Discovery | Backpropagation | The process of backpropagation is described by David Rumelhart, Geoffrey Hinton and Ronald J. Williams (a minimal sketch appears below).[29] |
1989 | Discovery | Reinforcement Learning | Christopher Watkins develops Q-learning, which greatly improves the practicality and feasibility of reinforcement learning (the update rule is sketched below).[30] |
1989 | Commercialization | Commercialization of Machine Learning on Personal Computers | Axcelis, Inc. releases Evolver, the first software package to commercialize the use of genetic algorithms on personal computers.[31] |
1992 | Achievement | Machines Playing Backgammon | Gerald Tesauro develops TD-Gammon, a computer backgammon program that utilises an artificial neural network trained using temporal-difference learning (hence the 'TD' in the name). TD-Gammon is able to rival, but not consistently surpass, the abilities of top human backgammon players.[32] |
1995 | Discovery | Random Forest Algorithm | Tin Kam Ho publishes a paper describing Random decision forests.[33] |
1995 | Discovery | Support Vector Machines | Corinna Cortes and Vladimir Vapnik publish their work on support vector machines.[35][36] |
1997 | | IBM Deep Blue Beats Kasparov | IBM's Deep Blue beats world chess champion Garry Kasparov.[34] |
1997 | Discovery | LSTM | Sepp Hochreiter and Jürgen Schmidhuber invent long short-term memory (LSTM) recurrent neural networks,[37] greatly improving the efficiency and practicality of recurrent neural networks. |
1998 | | MNIST database | A team led by Yann LeCun releases the MNIST database, a dataset comprising a mix of handwritten digits from American Census Bureau employees and American high school students.[38] The MNIST database has since become a benchmark for evaluating handwriting recognition. |
2002 | | Torch Machine Learning Library | Torch, a software library for machine learning, is first released.[39] |
2006 | | The Netflix Prize | The Netflix Prize competition is launched by Netflix. The aim of the competition is to use machine learning to beat the accuracy of Netflix's own recommendation software at predicting a user's rating for a film, given their ratings for previous films, by at least 10%.[40] The prize was won in 2009. |
2010 | | Kaggle Competition | Kaggle, a website that serves as a platform for machine learning competitions, is launched.[41] |
2011 | Achievement | Beating Humans in Jeopardy | Using a combination of machine learning, natural language processing and information retrieval techniques, IBM's Watson beats two human champions in a Jeopardy! competition.[42] |
2012 | Achievement | Recognizing Cats on YouTube | The Google Brain team, led by Andrew Ng and Jeff Dean, create a neural network that learns to recognise cats by watching unlabeled images taken from frames of YouTube videos.[43][44] |
2014 | | Leap in Face Recognition | Facebook researchers publish their work on DeepFace, a system that uses neural networks to identify faces with 97.35% accuracy. The results are an improvement of more than 27% over previous systems and rival human performance.[45] |
2014 | | Sibyl | Researchers from Google detail their work on Sibyl,[46] a proprietary platform for massively parallel machine learning used internally by Google to make predictions about user behavior and provide recommendations.[47] |
2016 | Achievement | Beating Humans in Go | Google's AlphaGo program becomes the first Computer Go program to beat an unhandicapped professional human player[48] using a combination of machine learning and tree search techniques.[49] |
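Illustrative examples
The sketches in this section are modern, simplified illustrations of some of the methods named in the timeline above. They are written for exposition only: the data, parameters and names in them are hypothetical choices, not details of the original systems.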
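The 1763 and 1812 entries concern Bayes' theorem. In modern notation, for a hypothesis A and observed evidence B, the theorem and the usual expansion of the evidence term read:

```latex
% Bayes' theorem: posterior = likelihood * prior / evidence
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)},
\qquad
P(B) = P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A).
```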
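The 1805 entry concerns Legendre's least squares method, which chooses parameters minimising the sum of squared residuals. A minimal sketch, fitting a line to four hypothetical points with NumPy's `lstsq` solver:

```python
import numpy as np

# Least squares line fit: choose slope m and intercept b minimising
# sum((y - (m*x + b))**2). The data points here are hypothetical.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.9])
A = np.column_stack([x, np.ones_like(x)])    # design matrix [x, 1]
(m, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"fit: y = {m:.3f}x + {b:.3f}")
```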
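The 1913 entry describes Markov's analysis of vowel and consonant patterns in Eugene Onegin. A toy version of that analysis, run on a placeholder English sentence rather than Pushkin's text:

```python
from collections import Counter

# Estimate how often a vowel follows a vowel, a consonant follows a
# vowel, and so on: the empirical transition probabilities of a
# two-state Markov chain. The sample sentence is a placeholder.
text = "onegin was a young man of fashion"
states = ["v" if c in "aeiou" else "c" for c in text if c.isalpha()]
pairs = Counter(zip(states, states[1:]))     # counts of (state, next state)
totals = Counter(states[:-1])                # counts of each starting state
for (a, b), n in sorted(pairs.items()):
    print(f"P({b} | {a}) = {n / totals[a]:.2f}")
```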
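The 1957 entry describes Rosenblatt's perceptron. A minimal sketch of the perceptron learning rule on a toy linearly separable problem (logical AND); the data, labels and learning rate are illustrative choices, not details of the original hardware:

```python
import numpy as np

# Perceptron learning rule: on each misclassified example, nudge the
# weights toward the correct side of the decision boundary.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])                # AND, labels in {-1, +1}
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                          # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:           # misclassified: update
            w += lr * yi * xi
            b += lr * yi
print([1 if w @ xi + b > 0 else -1 for xi in X])   # expected: [-1, -1, -1, 1]
```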
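The 1967 entry describes the nearest neighbor rule. A minimal one-nearest-neighbor classifier on hypothetical points:

```python
import numpy as np

# 1-nearest-neighbor: classify a query point with the label of the
# closest stored example. Points and labels are hypothetical.
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.2, 0.1], [0.9, 1.2]])
labels = np.array([0, 1, 0, 1])
query = np.array([0.8, 0.8])
nearest = np.argmin(np.linalg.norm(X - query, axis=1))
print(labels[nearest])                       # -> 1
```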
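The 1970 and 1986 entries concern backpropagation. A minimal sketch of a forward and backward pass for a one-hidden-layer network under squared error; the architecture, data and step size are arbitrary toy choices:

```python
import numpy as np

# Gradients of 0.5 * ||yhat - t||^2 are propagated backward through
# the network by the chain rule, then used for gradient descent.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                  # 4 samples, 3 features
t = rng.normal(size=(4, 1))                  # targets
W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(5, 1))
for _ in range(100):
    h = np.tanh(x @ W1)                      # forward pass
    yhat = h @ W2
    err = yhat - t                           # dLoss/dyhat
    gW2 = h.T @ err                          # backward pass (chain rule)
    gh = err @ W2.T
    gW1 = x.T @ (gh * (1 - h ** 2))          # tanh' = 1 - tanh^2
    W1 -= 0.01 * gW1
    W2 -= 0.01 * gW2
print(float(0.5 * (err ** 2).sum()))         # loss should have decreased
```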
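The 1989 entry describes Watkins's Q-learning. A minimal sketch of the tabular update Q(s,a) <- Q(s,a) + alpha * (r + gamma * max Q(s',a') - Q(s,a)) on a hypothetical five-state chain where only the rightmost state gives reward:

```python
import numpy as np

# Tabular Q-learning with epsilon-greedy exploration on a 5-state
# chain. Actions: 0 = step left, 1 = step right; reaching the last
# state yields reward 1 and ends the episode. All numbers are toy.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)
for _ in range(500):
    s = 0
    while s != n_states - 1:
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
print(Q.round(2))                            # "right" should dominate in every state
```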
See also
- History of artificial intelligence
- Machine learning
- Timeline of artificial intelligence
- Timeline of machine translation
References
- ↑ Solomonoff, Ray J. "A formal theory of inductive inference. Part II." Information and control 7.2 (1964): 224-254.
- ↑ Marr, Bernard. "A Short History of Machine Learning - Every Manager Should Read". Forbes. Retrieved 28 Sep 2016.
- ↑ Hofmann, Thomas, Bernhard Schölkopf, and Alexander J. Smola. "Kernel methods in machine learning." The annals of statistics (2008): 1171-1220.
- ↑ Bennett, James, and Stan Lanning. "The netflix prize." Proceedings of KDD cup and workshop. Vol. 2007. 2007.
- ↑ Bayes, Thomas (1 January 1763). "An Essay towards solving a Problem in the Doctrine of Chances" (PDF). Philosophical Transactions. 53: 370–418. doi:10.1098/rstl.1763.0053. Retrieved 15 June 2016.
- ↑ Legendre, Adrien-Marie (1805). Nouvelles méthodes pour la détermination des orbites des comètes (in French). Paris: Firmin Didot. p. viii. Retrieved 13 June 2016.
- ↑ O'Connor, J J; Robertson, E F. "Pierre-Simon Laplace". School of Mathematics and Statistics, University of St Andrews, Scotland. Retrieved 15 June 2016.
- ↑ Hayes, Brian. "First Links in the Markov Chain". American Scientist. Sigma Xi, The Scientific Research Society. 101 (March–April 2013): 92. doi:10.1511/2013.101.1. Retrieved 15 June 2016.
Delving into the text of Alexander Pushkin’s novel in verse Eugene Onegin, Markov spent hours sifting through patterns of vowels and consonants. On January 23, 1913, he summarized his findings in an address to the Imperial Academy of Sciences in St. Petersburg. His analysis did not alter the understanding or appreciation of Pushkin’s poem, but the technique he developed—now known as a Markov chain—extended the theory of probability in a new direction.
- ↑ Turing, Alan (October 1950). "COMPUTING MACHINERY AND INTELLIGENCE". MIND. 59 (236): 433–460. doi:10.1093/mind/LIX.236.433. Retrieved 8 June 2016.
- ↑ Crevier 1993, pp. 34–35 and Russell & Norvig 2003, p. 17
- ↑ McCarthy, John; Feigenbaum, Ed. "Arthur Samuel: Pioneer in Machine Learning". AI Magazine (3). Association for the Advancement of Artificial Intelligence. p. 10. Retrieved 5 June 2016.
- ↑ Rosenblatt, Frank (1958). "THE PERCEPTRON: A PROBABILISTIC MODEL FOR INFORMATION STORAGE AND ORGANIZATION IN THE BRAIN" (PDF). Psychological Review. 65 (6): 386–408. doi:10.1037/h0042519.
- ↑ Mason, Harding; Stewart, D; Gill, Brendan (6 December 1958). "Rival". The New Yorker. Retrieved 5 June 2016.
- ↑ Marr, Bernard. "A Short History of Machine Learning - Every Manager Should Read". Forbes. Retrieved 28 Sep 2016.
- ↑ Cohen, Harvey. "The Perceptron". Retrieved 5 June 2016.
- ↑ Colner, Robert. "A brief history of machine learning". SlideShare. Retrieved 5 June 2016.
- ↑ Seppo Linnainmaa (1970). The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's Thesis (in Finnish), Univ. Helsinki, 6-7.
- ↑ Seppo Linnainmaa (1976). Taylor expansion of the accumulated rounding error. BIT Numerical Mathematics, 16(2), 146-160.
- ↑ Griewank, Andreas (2012). Who Invented the Reverse Mode of Differentiation?. Optimization Stories, Documenta Mathematica, Extra Volume ISMP (2012), 389-400.
- ↑ Griewank, Andreas and Walther, A.. Principles and Techniques of Algorithmic Differentiation, Second Edition. SIAM, 2008.
- ↑ Jürgen Schmidhuber (2015). Deep learning in neural networks: An overview. Neural Networks 61 (2015): 85-117. ArXiv
- ↑ Jürgen Schmidhuber (2015). Deep Learning. Scholarpedia, 10(11):32832. Section on Backpropagation
- ↑ Marr, Bernard. "A Short History of Machine Learning - Every Manager Should Read". Forbes. Retrieved 28 Sep 2016.
- ↑ Fukushima, Kunihiko (1980). "Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position" (PDF). Biological Cybernetics. 36: 193–202. PMID 7370364. doi:10.1007/bf00344251. Retrieved 5 June 2016.
- ↑ Le Cun, Yann. "Deep Learning". Retrieved 5 June 2016.
- ↑ Marr, Bernard. "A Short History of Machine Learning - Every Manager Should Read". Forbes. Retrieved 28 Sep 2016.
- ↑ Hopfield, John (April 1982). "Neural networks and physical systems with emergent collective computational abilities" (PDF). Proceedings of the National Academy of Sciences of the United States of America. 79: 2554–2558. PMC 346238 . PMID 6953413. doi:10.1073/pnas.79.8.2554. Retrieved 8 June 2016.
- ↑ Marr, Bernard. "A Short History of Machine Learning - Every Manager Should Read". Forbes. Retrieved 28 Sep 2016.
- ↑ Rumelhart, David; Hinton, Geoffrey; Williams, Ronald (9 October 1986). "Learning representations by back-propagating errors" (PDF). Nature. 323: 533–536. doi:10.1038/323533a0. Retrieved 5 June 2016.
- ↑ Watkins, Christopher (1 May 1989). "Learning from Delayed Rewards" (PDF).
- ↑ Markoff, John (29 August 1990). "BUSINESS TECHNOLOGY; What's the Best Answer? It's Survival of the Fittest". New York Times. Retrieved 8 June 2016.
- ↑ Tesauro, Gerald (March 1995). "Temporal Difference Learning and TD-Gammon". Communications of the ACM. 38 (3). doi:10.1145/203330.203343.
- ↑ Ho, Tin Kam (August 1995). "Random Decision Forests" (PDF). Proceedings of the Third International Conference on Document Analysis and Recognition. Montreal, Quebec: IEEE. 1: 278–282. ISBN 0-8186-7128-9. doi:10.1109/ICDAR.1995.598994. Retrieved 5 June 2016.
- ↑ Marr, Bernard. "A Short History of Machine Learning - Every Manager Should Read". Forbes. Retrieved 28 Sep 2016.
- ↑ Golge, Eren. "BRIEF HISTORY OF MACHINE LEARNING". A Blog From a Human-engineer-being. Retrieved 5 June 2016.
- ↑ Cortes, Corinna; Vapnik, Vladimir (September 1995). "Support-vector networks" (PDF). Machine Learning. Kluwer Academic Publishers. 20 (3): 273–297. ISSN 0885-6125. doi:10.1007/BF00994018. Retrieved 5 June 2016.
- ↑ Hochreiter, Sepp; Schmidhuber, Jürgen (1997). "LONG SHORT-TERM MEMORY" (PDF). Neural Computation. 9 (8): 1735–1780. PMID 9377276. doi:10.1162/neco.1997.9.8.1735.
- ↑ LeCun, Yann; Cortes, Corinna; Burges, Christopher. "THE MNIST DATABASE of handwritten digits". Retrieved 16 June 2016.
- ↑ Collobert, Ronan; Bengio, Samy; Mariethoz, Johnny (30 October 2002). "Torch: a modular machine learning software library" (PDF). Retrieved 5 June 2016.
- ↑ "The Netflix Prize Rules". Netflix Prize. Netflix. Retrieved 16 June 2016.
- ↑ "About". Kaggle. Kaggle Inc. Retrieved 16 June 2016.
- ↑ Markoff, John (17 February 2011). "Computer Wins on ‘Jeopardy!’: Trivial, It’s Not". New York Times. p. A1. Retrieved 5 June 2016.
- ↑ Le, Quoc; Ranzato, Marc’Aurelio; Monga, Rajat; Devin, Matthieu; Chen, Kai; Corrado, Greg; Dean, Jeff; Ng, Andrew (12 July 2012). "Building High-level Features Using Large Scale Unsupervised Learning". CoRR. arXiv:1112.6209 .
- ↑ Markoff, John (26 June 2012). "How Many Computers to Identify a Cat? 16,000". New York Times. p. B1. Retrieved 5 June 2016.
- ↑ Taigman, Yaniv; Yang, Ming; Ranzato, Marc’Aurelio; Wolf, Lior (24 June 2014). "DeepFace: Closing the Gap to Human-Level Performance in Face Verification". Conference on Computer Vision and Pattern Recognition. Retrieved 8 June 2016.
- ↑ Canini, Kevin; Chandra, Tushar; Ie, Eugene; McFadden, Jim; Goldman, Ken; Gunter, Mike; Harmsen, Jeremiah; LeFevre, Kristen; Lepikhin, Dmitry; Llinares, Tomas Lloret; Mukherjee, Indraneel; Pereira, Fernando; Redstone, Josh; Shaked, Tal; Singer, Yoram. "Sibyl: A system for large scale supervised machine learning" (PDF). Jack Baskin School Of Engineering. UC Santa Cruz. Retrieved 8 June 2016.
- ↑ Woodie, Alex (17 July 2014). "Inside Sibyl, Google’s Massively Parallel Machine Learning Platform". Datanami. Tabor Communications. Retrieved 8 June 2016.
- ↑ "Google achieves AI 'breakthrough' by beating Go champion". BBC News. BBC. 27 January 2016. Retrieved 5 June 2016.
- ↑ "AlphaGo". Google DeepMind. Google Inc. Retrieved 5 June 2016.