AI winter
An AI winter is a collapse in the perception of artificial intelligence research. The term was coined by analogy with the idea of a nuclear winter: a chain reaction of pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[1][2]
It first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association for Artificial Intelligence"). Two leading AI researchers, Roger Schank and Marvin Minsky, warned the business community that enthusiasm for AI had spiraled out of control and that disappointment would certainly follow. They were right. Just three years later, the billion-dollar AI industry began to collapse.[1]
The process of hype, disappointment and funding cuts is common in many emerging technologies (consider the railway mania or the dot-com bubble), but the problem has been particularly acute for AI. The pattern has occurred many times:
- 1966: the failure of machine translation,
- 1970: the abandonment of connectionism,
- 1971−75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon University,
- 1973: the large decrease in AI research in the United Kingdom in response to the Lighthill Report,
- 1973−74: DARPA's cutbacks to academic AI research in general,
- 1987: the collapse of the Lisp machine market,
- 1993: the slow decline of expert systems,
- 1990 or so: the quiet disappearance of the fifth-generation computer project's original goals,
- and the generally bad reputation AI has had since.
The worst times for AI have been 1974−80 and 1987 to the present. Sometimes one or the other of these periods (or some part of them) is referred to as the AI winter.[3]
The historical episodes known as AI winters are collapses only in the perception of AI by government bureaucrats and venture capitalists. Despite the rise and fall of AI's reputation, the field has continued to develop new and successful technologies. AI researcher Rodney Brooks would complain in 2002 that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day."[4] Ray Kurzweil agrees: "Many observers still think that the AI winter was the end of the story and that nothing since has come of the AI field. Yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry."[5] He adds unequivocally: "the AI winter is long since over."[6]
Early episodes
Machine translation and the ALPAC report of 1966
- See also: History of machine translation
During the Cold War, the US government was particularly interested in the automatic, instant translation of Russian documents and scientific reports. The government aggressively supported efforts at machine translation starting in 1954. At the outset, the researchers were optimistic. Noam Chomsky's new work in grammar was streamlining the translation process and there were "many predictions of imminent 'breakthroughs'".[7]
However, researchers had underestimated the profound difficulty of disambiguation. In order to translate a sentence, a machine needed to have some idea what the sentence was about, otherwise it made ludicrous mistakes. An anecdotal example was "the spirit is willing but the flesh is weak." Translated back and forth with Russian, it became "the vodka is good but the meat is rotten."[8] Similarly, "out of sight, out of mind" became "blind idiot." Later researchers would call this the commonsense knowledge problem.
By 1964, the National Research Council had become concerned about the lack of progress and formed the Automatic Language Processing Advisory Committee (ALPAC) to look into the problem. The committee concluded, in a famous 1966 report, that machine translation was more expensive, less accurate and slower than human translation. After spending some 20 million dollars, the NRC ended all support. Careers were destroyed and research ended.[9][7]
Machine translation is still an open research problem in the 21st century.
The abandonment of perceptrons in 1969
- See also: Perceptron and Frank Rosenblatt
A perceptron is a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he made optimistic claims about its power, predicting that the "perceptron may eventually be able to learn, make decisions, and translate languages."[10] An active research program into the paradigm was carried out throughout the 60s but came to a sudden halt with the publication of Minsky and Papert's 1969 book Perceptrons. They showed that there were severe limitations to what perceptrons could do and that Frank Rosenblatt's claims had been grossly exaggerated. Famously, the book showed that a single-layer perceptron could not compute the XOR function.[11] The effect of the book was devastating: virtually no research at all was done in connectionism for 10 years. This was despite Stephen Grossberg's papers of 1972 and 1973 describing networks capable of solving XOR and other problems.[12]
Eventually, the work of Hopfield and others would revive the field and thereafter it would become a vital and useful part of artificial intelligence. The specific problems brought up by Perceptrons were ultimately addressed using backpropagation and other modern machine learning techniques, developed by Paul Werbos in 1974 and championed by David Rumelhart in the early 80s.[13] Rosenblatt would not live to see this, as he died in a boating accident shortly after the book was published.[10]
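The limitation Minsky and Papert identified, and the later fix, can be made concrete with a small sketch (an illustrative example only, not code from any of the works cited here; it uses NumPy, and the network size, learning rate and iteration counts are arbitrary choices):

    # Illustrative sketch: a single-layer perceptron cannot represent XOR,
    # but a small two-layer network trained with backpropagation can learn it.
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)  # XOR truth table

    # Single-layer perceptron with Rosenblatt's learning rule.
    w, b = np.zeros(2), 0.0
    for _ in range(100):  # never converges: XOR is not linearly separable
        for xi, ti in zip(X, y):
            pred = float(w @ xi + b > 0)
            w += (ti - pred) * xi
            b += (ti - pred)
    print("perceptron:", [float(w @ xi + b > 0) for xi in X])  # never equals [0, 1, 1, 0]

    # Two-layer network trained with backpropagation (squared-error loss).
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # 4 hidden units (arbitrary choice)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)                       # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y[:, None]) * out * (1 - out)   # backward pass
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * (h.T @ d_out)
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * (X.T @ d_h)
        b1 -= 0.5 * d_h.sum(axis=0)
    print("backprop:", np.round(out.ravel(), 2))       # approaches [0, 1, 1, 0]

The first half fails for exactly the reason Minsky and Papert gave: no single linear threshold unit separates the XOR cases. The second half shows why a trainable hidden layer, made practical by the backpropagation work of the 1970s and 80s, removes the limitation.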
The setbacks of 1974
The S.U.R. debacle
DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at Carnegie Mellon University. DARPA had hoped, and felt it had been promised, a system that could respond to voice commands from a pilot. The SUR team had developed a system which could recognize spoken English, but only if the words were spoken in a particular order. DARPA felt it had been duped and cancelled a three-million-dollar-a-year grant.[14][15]
Many years later, successful commercial speech recognition systems would use the technology developed by the Carnegie Mellon team (such as hidden Markov models) and the market for speech recognition systems would reach $4 billion by 2001.[15]
The Lighthill report
- See also: Lighthill report
Professor Sir James Lighthill was asked by Parliament to evaluate the state of AI research in the United Kingdom. His report, now called the Lighthill report, criticized the utter failure of AI to achieve its "grandiose objectives." He concluded that nothing being done in AI couldn't be done in other sciences. The report led to the complete dismantling of AI research in that country.[16][17]
DARPA's funding cuts
After the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund "mission-oriented direct research, rather than basic undirected research." Researchers now had to show that their work would soon produce some useful military technology. The Lighthill report and DARPA's own study (the American Study Group) suggested that most AI research was unlikely to produce anything truly useful in the foreseeable future. As a result, AI research proposals were held to a very high standard. Pure undirected research of the kind that had gone on in the 60s would no longer be funded by DARPA.[18]
By 1974, funding for AI projects was hard to find. AI researcher Hans Moravec believed that "it was literally phrased at DARPA that 'some of these people were going to be taught a lesson [by] having their two-million-dollar-a-year contracts cut to almost nothing!'"[16] and he blamed the crisis on the unrealistic predictions of his colleagues: "Many researchers were caught up in a web of increasing exaggeration. Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that. But they felt they couldn't in their next proposal promise less than in the first one, so they promised more."[19]
The setbacks of the late 80s and early 90s
The collapse of the Lisp machine market in 1987
In the 1980s a form of AI program called an "expert system" was adopted by corporations around the world. The first commercial expert system was XCON, developed at Carnegie Mellon for Digital Equipment Corporation, and it was an enormous success: it was estimated to have saved the company 40 million dollars over just six years of operation. Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including software companies like Teknowledge and Intellicorp (KEE), and hardware companies like Symbolics and Lisp Machines Inc., which built specialized computers, called Lisp machines, optimized to run the programming language Lisp, the preferred language for AI.[20]
In 1987, three years after Minsky and Schank's prediction, the market for specialized AI hardware collapsed. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines.[21] There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.[22]
Commercially, many Lisp machine vendors failed or left the market, including Symbolics, Lisp Machines Inc., Lucid Inc., and the Lisp machine businesses of Texas Instruments and Xerox. However, a number of customer companies (that is, companies using systems written in Lisp and developed on Lisp machine platforms) continued to maintain those systems, in some cases taking over the support work themselves. The maturation of Common Lisp saved many systems, such as ICAD, which found application in knowledge-based engineering. Other systems, such as Intellicorp's KEE, moved from Lisp to a C++ variant on the PC and helped establish object-oriented technology (including providing major support for the development of UML).
The fall of expert systems
Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier in research on nonmonotonic logic. Expert systems proved useful, but only in a few special contexts.[23][2] Another problem was the computational hardness of truth maintenance for general knowledge. KEE used an assumption-based approach (see NASA, TEXSYS) supporting multiple-world scenarios that was difficult to understand and apply.
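The brittleness complaint can be illustrated with a toy sketch (a purely illustrative Python example, not the actual XCON or KEE rule base; the rules and facts below are invented):

    # Toy forward-chaining rule engine: each rule maps a set of required facts
    # to a conclusion. On inputs the rule author anticipated it works; on a
    # slightly unusual input it silently concludes nothing, which is the
    # "brittleness" complained about above.
    RULES = [
        ({"cpu_ordered", "no_disk_ordered"}, "add_default_disk"),
        ({"cpu_ordered", "rack_small"}, "warn_rack_too_small"),
    ]

    def infer(facts):
        conclusions = set()
        changed = True
        while changed:  # keep firing rules until nothing new is derived
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in conclusions:
                    conclusions.add(conclusion)
                    facts = facts | {conclusion}
                    changed = True
        return conclusions

    print(infer({"cpu_ordered", "no_disk_ordered"}))     # {'add_default_disk'}
    print(infer({"cpu_ordered", "disk_unknown_model"}))  # set(): no rule covers this case

A real expert system contained thousands of such hand-written rules, so keeping the rule base consistent as the domain changed was expensive, and inputs the rule authors had not anticipated produced silence or nonsense rather than graceful degradation.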
The few remaining expert system shell companies were eventually forced to downsize and search for new markets and software paradigms, like case-based reasoning or universal database access.[citation needed]
The fizzle of the fifth generation
- See also: Fifth generation computer
In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings. By 1991, the impressive list of goals penned in 1981 had not been met. Indeed, some had still not been met in 2001. As with other AI projects, expectations had run much higher than what was actually possible.[24]
Mechanisms behind AI winters
The AI winters can be partly understood as the same sequence of over-inflated expectations and subsequent crash seen in stock markets, exemplified by the railway mania and the dot-com bubble. The hype cycle concept for new technology examines the perception of a technology in more detail. It describes a common pattern in the development of new technology, in which an event, typically a technological breakthrough, creates publicity which feeds on itself to create a "peak of inflated expectations" followed by a "trough of disillusionment" and later recovery and maturation of the technology. The key point is that since scientific and technological progress can't keep pace with the publicity-fueled increase in expectations among investors and other stakeholders, a crash must follow. AI technology seems to be no exception to this rule.
Another factor is AI's place in the organisation of universities. Research on AI often takes the form of interdisciplinary research. One example is the Master of Artificial Intelligence program at K.U. Leuven, which involves lecturers from philosophy to mechanical engineering. AI is therefore prone to the same problems other types of interdisciplinary research face. Funding is channeled through the established departments and, during budget cuts, there will be a tendency to shield the "core contents" of each department at the expense of interdisciplinary and less traditional research projects.
Downturns in the national economy cause budget cuts in universities. The "core contents" tendency worsens the effect on AI research, and investors in the market are likely to put their money into less risky ventures during a crisis. Together this may amplify an economic downturn into an AI winter. It is worth noting that the Lighthill report came at a time of economic crisis in the UK,[25] when universities had to make cuts and the question was only which programs should go.
The state of AI today
The winter that wouldn't end
A survey of recent reports suggests that AI's reputation is still less than pristine:
- Alex Castro in The Economist, 2007: "[Investors] were put off by the term 'voice recognition' which, like 'artificial intelligence', is associated with systems that have all too often failed to live up to their promises."[26]
- Patty Tascarella in Pittsburgh Business Times, 2006: "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding"[27]
- John Markoff in the New York Times, 2005: "At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."[28]
AI behind the scenes
Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems, and its solutions proved to be useful throughout the technology industry,[29] in areas such as machine translation, data mining, industrial robotics, logistics,[30] speech recognition,[31] banking software,[32] medical diagnosis[32] and Google's search engine,[33] to name a few.
The field of AI receives little or no credit for these successes. Now no longer considered a part of AI, each has been reduced to the status of just another item in the tool chest of computer science.[34] Nick Bostrom explains "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[35] This is called the "AI effect" and is expressed most succinctly by Tesler's Theorem: "AI is whatever hasn't been done yet."[36]
In fact, many researchers in AI today deliberately call their work by other names, such as informatics, machine learning, knowledge-based systems or computational intelligence. In part, this may be because they consider their field to be fundamentally different from AI, but the new names also help to procure funding.
Fear of another winter
Concerns are sometimes raised that a new AI winter could be triggered by any overly ambitious or unrealistic promise by prominent AI scientists. For example, some researchers feared that the widely publicised promises in the early 1990s that Cog would show the intelligence of a human two-year-old might lead to an AI winter. In fact, the Cog project and the success of Deep Blue seem to have led to an increase of interest in strong AI in that decade from both government and industry.[citation needed]
Hope of another spring
There are also constant reports that another AI spring is imminent:
- Jim Hendler and Devika Subramanian in AAAI Newsletter, 1999: "Spring is here! Far from the AI winter of the past decade, it is now a great time to be in AI."[37]
- Ray Kurzweil in his book The Singularity is Near, 2005: "The AI Winter is long since over"[38]
- Heather Havenstein in Computerworld, 2005: "Researchers now are emerging from what has been called an 'AI winter'"[39]
- John Markoff in The New York Times, 2005: "Now there is talk about an A.I. spring among researchers"[28]
AI now
Technologies developed by AI researchers have achieved commercial success in a number of domains. For example, fuzzy logic controllers have been developed for automatic gearboxes in automobiles: the 2006 Audi TT, VW Touareg and VW Caravelle feature the DSG transmission, which uses fuzzy logic, and a number of Škoda variants (such as the Škoda Fabia) also include a fuzzy-logic-based controller. Camera sensors also make wide use of fuzzy logic to enable autofocus.
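As a rough sketch of how such a controller works (an invented toy example in Python, not the actual gearbox or autofocus code; the membership functions, rule outputs and variable names are all assumptions made for illustration):

    # Toy fuzzy controller: maps engine load (0..1) to shift aggressiveness (0..1)
    # by fuzzifying the input, applying three simple rules, and defuzzifying
    # with a weighted average (a zero-order Sugeno-style scheme).
    def tri(x, a, b, c):
        # Triangular membership function peaking at b, zero outside [a, c].
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def shift_aggressiveness(load):
        # Fuzzify: degree to which the load counts as "low", "medium", "high".
        mu = {
            "low":    tri(load, -0.5, 0.0, 0.5),
            "medium": tri(load,  0.0, 0.5, 1.0),
            "high":   tri(load,  0.5, 1.0, 1.5),
        }
        # Rules: low load -> gentle shifts, medium -> moderate, high -> aggressive.
        rule_output = {"low": 0.2, "medium": 0.5, "high": 0.9}
        num = sum(mu[k] * rule_output[k] for k in mu)
        den = sum(mu.values())
        return num / den if den else 0.5  # defuzzify by weighted average

    for load in (0.1, 0.5, 0.85):
        print(load, round(shift_aggressiveness(load), 2))

The appeal for control applications is that the output varies smoothly with the input while the behaviour is specified as a handful of readable rules rather than an explicit equation.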
Heuristic Search and Data Analytics are both technologies that have developed from the Evolutionary Computing and Machine Learning subdivision of the AI research community. Again, these techniques have been applied to a wide range of real-world problems with considerable commercial success.
In the case of Heuristic Search, ILOG has developed a large number of applications, including deriving job shop schedules for many manufacturing installations. Many telecommunications companies also make use of this technology in the management of their workforces; for example, BT Group has deployed heuristic search[40] in a scheduling application that provides the work schedules of 20,000 engineers.
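To give a feel for what a heuristic scheduler does (an illustrative toy example only, not ILOG's or BT's actual algorithms; the job data is invented):

    # Toy heuristic scheduler: dispatch each job (longest first) to the engineer
    # who becomes free the soonest. This "longest processing time first" rule is
    # a classic dispatch heuristic; commercial schedulers typically add many real
    # constraints (skills, travel, shift windows) and refine the plan with local search.
    import heapq

    def schedule(job_durations, n_engineers):
        free_at = [(0, e) for e in range(n_engineers)]  # (time engineer is free, id)
        heapq.heapify(free_at)
        assignment = {}
        for job, dur in sorted(job_durations.items(), key=lambda kv: -kv[1]):
            t, e = heapq.heappop(free_at)         # engineer available earliest
            assignment[job] = (e, t, t + dur)     # (engineer, start, end)
            heapq.heappush(free_at, (t + dur, e))
        makespan = max(end for _, _, end in assignment.values())
        return assignment, makespan

    jobs = {"J1": 4, "J2": 2, "J3": 3, "J4": 1, "J5": 5}  # invented durations (hours)
    plan, makespan = schedule(jobs, n_engineers=2)
    print(plan, "makespan:", makespan)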
Data Analytics technology utilizing algorithms for the automated formation of classifiers, developed in the supervised machine learning community in the 1990s (for example, TDIDT, Support Vector Machines, Neural Nets, IBL), is now used pervasively by companies for marketing survey targeting and the discovery of trends and features in data sets.
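A present-day equivalent of such a classifier might be built as follows (a sketch only; it uses scikit-learn, which is an assumption about tooling, and the customer features, response rule and data are invented):

    # Sketch of supervised classifier induction (a TDIDT-style decision tree) for a
    # marketing-like task: predict whether a customer responds to an offer.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 1000
    age = rng.integers(18, 80, n)
    prior_purchases = rng.integers(0, 20, n)
    # Invented ground truth: younger, frequent buyers tend to respond.
    responded = ((age < 40) & (prior_purchases > 5)).astype(int)

    X = np.column_stack([age, prior_purchases])
    X_train, X_test, y_train, y_test = train_test_split(X, responded, random_state=0)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X_train, y_train)
    print("test accuracy:", tree.score(X_test, y_test))

On this synthetic data the induced tree recovers thresholds close to the ones used to generate it, producing rules of the form "if age < 40 and prior purchases > 5 then likely responder", which is essentially the kind of output the 1990s tree-induction systems gave marketers.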
AI funding
Researchers and economists judge the status of an AI winter primarily by reviewing which AI projects are being funded, how much, and by whom. Trends in funding are often set by major funding agencies in the developed world. Currently, DARPA and a civilian funding program called EU-FP7 provide much of the funding for AI research in the US and the European Union.
As of 2007, DARPA is soliciting AI research proposals under a number of programs, including the Grand Challenge Program, the Cognitive Technology Threat Warning System (CT2WS), "Human Assisted Neural Devices (SN07-43)", "Autonomous Real-Time Ground Ubiquitous Surveillance-Imaging System (ARGUS-IS)" and "Urban Reasoning and Geospatial Exploitation Technology (URGENT)".
Perhaps best known is DARPA's Grand Challenge Program, which has developed fully automated road vehicles that can successfully navigate real-world terrain in a fully autonomous fashion.
DARPA has also supported programs on the Semantic Web with a great deal of emphasis on intelligent management of content and automated understanding. However, James Hendler, the manager of the DARPA program at the time, expressed some disappointment with the program's outcome.
The EU-FP7 funding program provides financial support to researchers within the European Union. Currently it funds AI research under the Cognitive Systems: Interaction and Robotics Programme (€193m), the Digital Libraries and Content Programme (€203m) and the FET programme (€185m).[41]
See also
- History of artificial intelligence
- Artificial intelligence
- Economic bubble
- Software crisis
- AI effect
Notes
- ^ a b Crevier 1993, p. 203
- ^ a b AI Expert Newsletter: W is for Winter
- ^ Two examples: (1) Howe 1994: "Lighthill's [1973] report provoked a massive loss of confidence in AI by the academic establishment in the UK (and to a lesser extent in the US). It persisted for a decade ― the so-called 'AI Winter'", (2) Russell & Norvig 2003, p. 24: "Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988. Soon after that came a period called the 'AI Winter'".
- ^ Quoted in Kurzweil 2005, p. 263
- ^ Kurzweil 2005, p. 264
- ^ Kurzweil 2005, p. 289
- ^ a b John Hutchins 2005 The history of machine translation in a nutshell.
- ^ Russell & Norvig 2003, p. 21
- ^ Crevier 1993, p. 203
- ^ a b Crevier 1993, pp. 102−5
- ^ Minsky & Papert 1969
- ^ Grossberg, Contour enhancement, short-term memory, and constancies in reverberating neural networks. Studies in Applied Mathematics, 52 (1973), 213−57,
- ^ Crevier 1993, pp. 214−6 and Russell & Norvig 2003, p. 25
- ^ Crevier 1993, p. 110
- ^ a b NRC 1999, under "Success in Speech Recognition"
- ^ a b Crevier 1993, p. 117
- ^ Howe 1994
- ^ NRC 1999, under "Shift to Applied Research Increases Investment" (only the sections before 1980 apply to the current discussion).
- ^ Crevier 1993, p. 115
- ^ Crevier 1993, pp. 161−2, 197−203
- ^ One reason is that, as processor chips become more complex, the cost of designing one becomes greater relative to the cost of producing each copy. The value of high production volumes becomes correspondingly greater. Eventually, the cost-performance of a chip made in high volumes passes that of a specialized chip with lower production volumes, even if the latter is architecturally more suited to a given application area.[citation needed]
- ^ Crevier 1993, pp. 209-210
- ^ Crevier 1993, pp. 204-208
- ^ Crevier 1993, pp. 211-212
- ^ http://www.guardian.co.uk/obituaries/story/0,,2122424,00.html obituary of Donald Michie in The Guardian
- ^ Alex Castro in Are you talking to me? The Economist Technology Quarterly (June 7, 2007)
- ^ Robotics firms find fundraising struggle, with venture capital shy. By Patty Tascarella. Pittsburgh Business Times (August 11, 2006)
- ^ a b Markoff, John. "Behind Artificial Intelligence, a Squadron of Bright Real People", The New York Times, 2005-10-14. Retrieved on 2007-07-30.
- ^ NRC 1999 under "Artificial Intelligence in the 90s", and Kurzweil 2005, p. 264
- ^ Russell & Norvig 2003, p. 28
- ^ For the new state of the art in AI based speech recognition, see Are You Talking to Me?
- ^ a b "AI-inspired systems were already integral to many everyday technologies such as internet search engines, bank software for processing transactions and in medical diagnosis." Nick Bostrom, AI set to exceed human brain power CNN.com (July 26, 2006)
- ^ For the use of AI at Google, see Google's man behind the curtain, Google backs character recognition and Spying an intelligent search engine.
- ^ Kurzweil 2005, p. 265
- ^ AI set to exceed human brain power CNN.com (July 26, 2006)
- ^ As quoted in Hofstadter 1979:601. Larry Tesler actually feels he was misquoted: see his note at the bottom of Larry Tesler's Resume
- ^ The Sixteenth National Conference on Artificial Intelligence
- ^ Kurzweil 2005, p. 289
- ^ Heather Havenstein Spring comes to AI Winter, Computer World, 2/14/2005
- ^ Success Stories.
- ^ Information and Communication Technologies in FP7, overview document for European Union funding. Retrieved 20 September 2007.
References
- Crevier, Daniel (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks, ISBN 0-465-02997-3
- Howe, J. (November 1994), Artificial Intelligence at Edinburgh University : a Perspective, <http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html>
- Kurzweil, Ray (2005), The Singularity is Near, Viking Press
- Lighthill, Professor Sir James (1973), “Artificial Intelligence: A General Survey”, Artificial Intelligence: a paper symposium, Science Research Council
- Minsky, Marvin & Papert, Seymour (1969), Perceptrons: An Introduction to Computational Geometry, The MIT Press
- NRC (1999), “Developments in Artificial Intelligence”, Funding a Revolution: Government Support for Computing Research, National Academy Press
- Russell, Stuart J. & Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, NJ: Prentice Hall, ISBN 0-13-790395-2, <http://aima.cs.berkeley.edu/>
External links
- ComputerWorld article (February 2005)
- AI Expert Newsletter (January 2005)
- "If It Works, It's Not AI: A Commercial Look at Artificial Intelligence startups"
- Patterns of Software- a collection of essays by Richard P. Gabriel, including several autobiographical essays
- Review of "Artificial Intelligence: A General Survey" by John McCarthy