AI winter
In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.[1] The term was coined by analogy to the idea of a nuclear winter. The field has experienced several cycles of hype, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later. There were two major winters in 1974–80 and 1987–93[2] and several smaller episodes, including:
- 1966: the failure of machine translation,
- 1970: the abandonment of connectionism,
- 1971–75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon University,
- 1973: the large decrease in AI research in the United Kingdom in response to the Lighthill report,
- 1973–74: DARPA's cutbacks to academic AI research in general,
- 1987: the collapse of the Lisp machine market,
- 1988: the cancellation of new spending on AI by the Strategic Computing Initiative,
- 1993: the slow decline of expert systems, and
- 1990s: the quiet disappearance of the fifth-generation computer project's original goals.
The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association for Artificial Intelligence"). The phenomenon was described as a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[3] At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the '80s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.[3]
Hype is common in many emerging technologies, as seen in the railway mania or the dot-com bubble. An AI winter is primarily a collapse in the perception of AI by government bureaucrats and venture capitalists. Despite the rise and fall of AI's reputation, the field has continued to develop new and successful technologies. AI researcher Rodney Brooks complained in 2002 that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day."[4] Ray Kurzweil agrees: "Many observers still think that the AI winter was the end of the story and that nothing since has come of the AI field. Yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry."[5] He adds: "the AI winter is long since over."[6]
Early episodes
Machine translation and the ALPAC report of 1966
During the Cold War, the US government was particularly interested in the automatic, instant translation of Russian documents and scientific reports. The government aggressively supported efforts at machine translation starting in 1954. At the outset, the researchers were optimistic. Noam Chomsky's new work in grammar was streamlining the translation process and there were "many predictions of imminent 'breakthroughs'".[7]
However, researchers had underestimated the profound difficulty of disambiguation. In order to translate a sentence, a machine needed to have some idea what the sentence was about; otherwise it made ludicrous mistakes. An anecdotal example was "the spirit is willing but the flesh is weak." Translated into Russian and back, it became "the vodka is good but the meat is rotten."[8] Similarly, "out of sight, out of mind" became "blind idiot." Later researchers would call this the commonsense knowledge problem.
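The anecdote itself is likely apocryphal, but the underlying failure mode is easy to see. The sketch below (an invented toy lexicon, not any actual 1960s system) shows why word-by-word substitution without disambiguation produces exactly this kind of nonsense:

```python
# A toy illustration of translation without disambiguation: each word is
# replaced by the first sense in a dictionary, regardless of context.
# The lexicon and sense ordering here are invented for illustration.
LEXICON = {
    "spirit": ["alcohol (drink)", "soul / inner will"],
    "willing": ["eager"],
    "flesh": ["meat (food)", "the body"],
    "weak": ["feeble"],
}

def translate(sentence: str) -> str:
    out = []
    for word in sentence.lower().split():
        senses = LEXICON.get(word)
        out.append(senses[0] if senses else word)  # no disambiguation at all
    return " ".join(out)

print(translate("the spirit is willing but the flesh is weak"))
# -> "the alcohol (drink) is eager but the meat (food) is feeble"
# Picking the right sense requires knowing what the sentence is about --
# the commonsense knowledge problem described above.
```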
By 1964, the National Research Council had become concerned about the lack of progress and formed the Automatic Language Processing Advisory Committee (ALPAC) to look into the problem. They concluded, in a famous 1966 report, that machine translation was more expensive, less accurate and slower than human translation. After spending some 20 million dollars, the NRC ended all support. Careers were destroyed and research ended.[3][7]
Machine translation remained an open research problem into the 21st century, when systems such as Google Translate and Yahoo! Babel Fish met with some success.
The abandonment of connectionism in 1969
- See also: Perceptrons and Frank Rosenblatt
Some of the earliest work in AI used networks or circuits of connected units to simulate intelligent behavior. Examples of this kind of work, called "connectionism", include Walter Pitts and Warren McCulloch's first description of a neural network for logic and Marvin Minsky's work on the SNARC system. In the late '50s, most of these approaches were abandoned when researchers began to explore symbolic reasoning as the essence of intelligence, following the success of programs like the Logic Theorist and the General Problem Solver.[9]
However, one type of connectionist work continued: the study of perceptrons, invented by Frank Rosenblatt, who kept the field alive with his salesmanship and the sheer force of his personality.[10] He optimistically predicted that the perceptron "may eventually be able to learn, make decisions, and translate languages".[11] Mainstream research into perceptrons came to an abrupt end in 1969, when Marvin Minsky and Seymour Papert published the book Perceptrons, which was perceived as outlining the limits of what perceptrons could do.
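The best-known limitation highlighted by Minsky and Papert is that a single-layer perceptron cannot compute functions that are not linearly separable, XOR being the classic case. A minimal sketch of Rosenblatt-style perceptron training (toy data, illustrative only) makes the contrast concrete:

```python
# Single-layer perceptron with Rosenblatt's learning rule.
# It converges on linearly separable data (AND) but can never reach
# perfect accuracy on XOR, which is not linearly separable.
def train_perceptron(samples, epochs=1000, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def accuracy(w, b, samples):
    hits = sum((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == t
               for x, t in samples)
    return hits / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train_perceptron(AND)
print("AND:", accuracy(w, b, AND))  # converges to 1.0
w, b = train_perceptron(XOR)
print("XOR:", accuracy(w, b, XOR))  # never reaches 1.0
```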
Connectionist approaches were abandoned for the next decade or so. While important work, such as Paul Werbos' discovery of backpropagation, continued in a limited way, major funding for connectionist projects was difficult to find in the 1970s and early '80s.[12] The "winter" of connectionist research came to an end in the middle '80s, when the work of John Hopfield, David Rumelhart and others revived large scale interest in neural networks.[13] Rosenblatt did not live to see this, however. He died in a boating accident shortly after Perceptrons was published.[11]
The setbacks of 1974
The Lighthill report
In 1973, Professor Sir James Lighthill was asked by the UK Parliament to evaluate the state of AI research in the United Kingdom. His report, now called the Lighthill report, criticized the utter failure of AI to achieve its "grandiose objectives." He concluded that nothing being done in AI could not be done in other sciences. He specifically mentioned the problem of "combinatorial explosion" or "intractability", which implied that many of AI's most successful algorithms would grind to a halt on real-world problems and were only suitable for solving "toy" versions.[14]
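A back-of-the-envelope calculation shows what "combinatorial explosion" means in practice. The example below (travelling-salesman tours, chosen here purely for illustration) counts how fast an exhaustive search space grows with problem size:

```python
from math import factorial

# Number of distinct tours in a symmetric travelling-salesman problem:
# (n-1)!/2. Exhaustive search is feasible on toy sizes and hopeless beyond.
for n_cities in (5, 10, 15, 20):
    tours = factorial(n_cities - 1) // 2
    print(f"{n_cities:2d} cities -> {tours:,} tours to check by brute force")
# 20 cities already gives roughly 6 * 10**16 tours, so algorithms that rely
# on enumeration alone cannot scale from "toy" problems to real-world inputs.
```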
The report was contested in a debate broadcast in the BBC "Controversy" series in 1973. The debate "The general purpose robot is a mirage", held at the Royal Institution, pitted Lighthill against the team of Donald Michie, John McCarthy and Richard Gregory.[15] McCarthy later wrote that "the combinatorial explosion problem has been recognized in AI from the beginning."[16]
The report led to the near-complete dismantling of AI research in the United Kingdom.[14] AI research continued in only a few top universities (Edinburgh, Essex and Sussex). This "created a bow-wave effect that led to funding cuts across Europe," writes James Hendler.[17] Research would not revive on a large scale until 1983, when Alvey (a research project of the British government) began to fund AI again from a war chest of £350 million, in response to the Japanese Fifth Generation Project (see below). Alvey had a number of UK-only requirements which did not sit well internationally, especially with US partners, and it lost Phase 2 funding.
DARPA's funding cuts of the early '70s
During the 1960s, the Defense Advanced Research Projects Agency (then known as "ARPA", now known as "DARPA") provided millions of dollars for AI research with almost no strings attached. J. C. R. Licklider, who directed DARPA's computing research in those years, believed in "funding people, not projects"[18] and allowed AI's leaders (such as Marvin Minsky, John McCarthy, Herbert A. Simon and Allen Newell) to spend it almost any way they liked.
This attitude changed after the passage of the Mansfield Amendment in 1969, which required DARPA to fund "mission-oriented direct research, rather than basic undirected research."[19] Pure undirected research of the kind that had gone on in the '60s would no longer be funded by DARPA. Researchers now had to show that their work would soon produce some useful military technology. AI research proposals were held to a very high standard. The situation was not helped when the Lighthill report and DARPA's own study (the American Study Group) suggested that most AI research was unlikely to produce anything truly useful in the foreseeable future. DARPA's money was directed at specific projects with identifiable goals, such as autonomous tanks and battle management systems. By 1974, funding for AI projects was hard to find.[19]
AI researcher Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues: "Many researchers were caught up in a web of increasing exaggeration. Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that. But they felt they couldn't in their next proposal promise less than in the first one, so they promised more."[20] The result, Moravec claims, is that some of the staff at DARPA had lost patience with AI research. "It was literally phrased at DARPA that 'some of these people were going to be taught a lesson [by] having their two-million-dollar-a-year contracts cut to almost nothing!'" Moravec told Daniel Crevier.[21]
While the autonomous tank project was a failure, the battle management system (the Dynamic Analysis and Replanning Tool) proved to be enormously successful, saving billions in the first Gulf War, repaying all of DARPA's investment in AI[22] and justifying DARPA's pragmatic policy.[23]
The SUR debacle
DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at Carnegie Mellon University. DARPA had hoped for, and felt it had been promised, a system that could respond to voice commands from a pilot. The SUR team had developed a system which could recognize spoken English, but only if the words were spoken in a particular order. DARPA felt it had been duped and, in 1974, it cancelled a three-million-dollar-a-year grant.[24]
Many years later, successful commercial speech recognition systems would use the technology developed by the Carnegie Mellon team (such as hidden Markov models) and the market for speech recognition systems would reach $4 billion by 2001.[25]
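The core computation in HMM-based recognizers of this lineage is decoding the most likely sequence of hidden states given the observations, typically with the Viterbi algorithm. Below is a minimal sketch; the states, transition and emission probabilities are invented toy values, not a real acoustic model:

```python
# Minimal Viterbi decoding for a two-state hidden Markov model.
states = ("S1", "S2")
start_p = {"S1": 0.6, "S2": 0.4}
trans_p = {"S1": {"S1": 0.7, "S2": 0.3}, "S2": {"S1": 0.4, "S2": 0.6}}
emit_p = {"S1": {"a": 0.5, "b": 0.4, "c": 0.1},
          "S2": {"a": 0.1, "b": 0.3, "c": 0.6}}

def viterbi(obs):
    # V[t][s] = probability of the best state sequence ending in s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                             for p in states)
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return V[-1][best], path[best]

print(viterbi(("a", "b", "c")))  # probability and most likely hidden state path
```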
The setbacks of the late '80s and early '90s
The collapse of the Lisp machine market in 1987
In the 1980s, a form of AI program called an "expert system" was adopted by corporations around the world. The first commercial expert system was XCON, developed at Carnegie Mellon for Digital Equipment Corporation, and it was an enormous success: it was estimated to have saved the company 40 million dollars over just six years of operation. Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including software companies like Teknowledge and Intellicorp (KEE), and hardware companies like Symbolics and Lisp Machines Inc., which built specialized computers, called Lisp machines, that were optimized to process the programming language Lisp, the preferred language for AI.[26]
In 1987, three years after Minsky and Schank's prediction, the market for specialized AI hardware collapsed. Workstations from companies like Sun Microsystems offered a powerful alternative to Lisp machines, and companies like Lucid offered a Lisp environment for this new class of workstations. The performance of these general-purpose workstations became an increasingly difficult challenge for Lisp machines to match, as vendors such as Lucid and Franz offered ever more powerful implementations of Lisp; published benchmarks showed workstations maintaining a performance advantage over Lisp machines.[27] Later desktop computers built by Apple and IBM offered a simpler and more popular architecture on which to run Lisp applications, and by 1987 they had become more powerful than the more expensive Lisp machines. Desktop computers also had rule-based engines such as CLIPS available.[28] These alternatives left consumers with no reason to buy an expensive machine specialized for running Lisp. An entire industry worth half a billion dollars was replaced in a single year.[29]
Commercially, many Lisp companies failed, including Symbolics, Lisp Machines Inc. and Lucid Inc. Other companies, like Texas Instruments and Xerox, abandoned the field. However, a number of customer companies (that is, companies using systems written in Lisp and developed on Lisp machine platforms) continued to maintain these systems, in some cases taking on the support work themselves.
The fall of expert systems
By the early '90s, the earliest successful expert systems, such as XCON, had proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier in research on nonmonotonic logic. Expert systems proved useful, but only in a few special contexts.[1][30] Another problem was the computational hardness of truth maintenance for general knowledge. KEE used an assumption-based approach (see NASA, TEXSYS) supporting multiple-world scenarios that was difficult to understand and apply.
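The "brittleness" complaint is easiest to see in a toy forward-chaining rule engine. The sketch below is loosely in the spirit of configurators like XCON, but the rules and facts are invented for illustration, not taken from any real system:

```python
# A small, deliberately brittle rule-based configurator.
# Each rule fires only when its exact conditions are present; there is no
# learning and no graceful handling of inputs the rule authors never foresaw.
RULES = [
    # (conditions that must all hold, facts to add)
    ({"order:server", "disk:large"}, {"add:extra_power_supply"}),
    ({"order:server"}, {"add:rack_mount_kit"}),
    ({"order:workstation"}, {"add:desk_stand"}),
]

def configure(facts):
    facts = set(facts)
    changed = True
    while changed:                      # forward-chain until no rule adds anything
        changed = False
        for conditions, additions in RULES:
            if conditions <= facts and not additions <= facts:
                facts |= additions
                changed = True
    return facts

print(configure({"order:server", "disk:large"}))  # anticipated case: works
print(configure({"order:laptop"}))                # unanticipated case: nothing fires
# A slightly mis-specified fact, or a case outside the rule base, silently
# yields an incomplete or nonsensical configuration -- the brittleness
# described above.
```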
The few remaining expert system shell companies were eventually forced to downsize and search for new markets and software paradigms, such as case-based reasoning and universal database access. The maturation of Common Lisp saved many systems, such as ICAD, which found application in knowledge-based engineering. Other systems, such as Intellicorp's KEE, moved from Lisp to a C++ variant on the PC and helped establish object-oriented technology (including providing major support for the development of UML).
The fizzle of the fifth generation
In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth Generation computer project. Its objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings. By 1991, the impressive list of goals penned in 1981 had not been met; indeed, some of them had still not been met in 2001 or 2011. As with other AI projects, expectations had run much higher than what was actually possible.[31]
Cutbacks at the Strategic Computing Initiative
In 1983, in response to the fifth generation project, DARPA again began to fund AI research through the Strategic Computing Initiative. As originally proposed, the project would begin with practical, achievable goals, which even included strong AI as a long-term objective. The program was under the direction of the Information Processing Technology Office (IPTO) and was also directed at supercomputing and microelectronics. By 1985 it had spent $100 million, and 92 projects were underway at 60 institutions, half in industry, half in universities and government labs. AI research was generously funded by the SCI.[32]
Jack Schwartz, who ascended to the leadership of IPTO in 1987, dismissed expert systems as "clever programming" and cut funding to AI "deeply and brutally," "eviscerating" SCI. Schwartz felt that DARPA should focus its funding only on those technologies which showed the most promise; in his words, DARPA should "surf" rather than "dog paddle", and he felt strongly that AI was not "the next wave". Insiders in the program cited problems in communication, organization and integration. A few projects survived the funding cuts, including the pilot's assistant and an autonomous land vehicle (which were never delivered) and the DART battle management system, which (as noted above) was successful.[33]
Lasting effects of the AI winters
The winter that wouldn't end
A survey of reports from the mid-2000s suggests that AI's reputation was still less than stellar:
- Alex Castro, quoted in The Economist, 7 June 2007: "[Investors] were put off by the term 'voice recognition' which, like 'artificial intelligence', is associated with systems that have all too often failed to live up to their promises."[34]
- Patty Tascarella in Pittsburgh Business Times, 2006: "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding."[35]
- John Markoff in the New York Times, 2005: "At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."[36]
AI under different names
Many researchers in AI today deliberately call their work by other names, such as informatics, machine learning, knowledge-based systems, business rules management, cognitive systems, intelligent systems, intelligent agents or computational intelligence, to indicate that their work emphasizes particular tools or is directed at a particular sub-problem. Although this may be partly because they consider their field to be fundamentally different from AI, it is also true that the new names help to procure funding by avoiding the stigma of false promises attached to the name "artificial intelligence."[36]
AI behind the scenes
"Many observers still think that the AI winter was the end of the story and that nothing since come of the AI field," writes Ray Kurzweil, "yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry."[5] In the late '90s and early 21st century, AI technology became widely used as elements of larger systems,[37][5] but the field is rarely credited for these successes. Nick Bostrom explains "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[38] Rodney Brooks adds "there's this stupid myth out there that AI has failed, but AI is around you every second of the day."[4]
Technologies developed by AI researchers have achieved commercial success in a number of domains, such as machine translation, data mining, industrial robotics, logistics,[39] speech recognition,[40] banking software,[41] medical diagnosis[41] and Google's search engine.[42]
Fuzzy logic controllers have been developed for automatic gearboxes in automobiles: the 2006 Audi TT, VW Touareg[43] and VW Caravelle feature the DSP transmission, which utilizes fuzzy logic, and a number of Škoda variants (such as the Škoda Fabia) also include a fuzzy-logic-based controller. Camera sensors widely utilize fuzzy logic to control focus.
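A minimal sketch of what such a controller does follows. The membership functions, rules and gear numbers below are invented toy values, not the actual gearbox or camera firmware: crisp inputs are fuzzified, simple rules combine them, and a weighted average defuzzifies the output.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def shift_decision(speed_kmh, throttle_pct):
    slow = tri(speed_kmh, -1, 0, 60)
    fast = tri(speed_kmh, 40, 120, 200)
    light = tri(throttle_pct, -1, 0, 50)
    heavy = tri(throttle_pct, 30, 100, 101)

    # Rule strengths (min acts as fuzzy AND); each rule votes for a gear.
    rules = [
        (min(slow, heavy), 2),   # hard acceleration from low speed -> low gear
        (min(slow, light), 3),
        (min(fast, light), 5),   # cruising -> high gear
        (min(fast, heavy), 4),
    ]
    total = sum(strength for strength, _ in rules)
    return sum(strength * gear for strength, gear in rules) / total if total else 3

print(shift_decision(speed_kmh=30, throttle_pct=80))   # -> 2.0 (downshift)
print(shift_decision(speed_kmh=110, throttle_pct=10))  # -> 5.0 (high gear)
```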
Heuristic search and data analytics are both technologies that have developed from the evolutionary computing and machine learning subdivision of the AI research community. Again, these techniques have been applied to a wide range of real world problems with considerable commercial success.
In the case of heuristic search, ILOG has developed a large number of applications, including deriving job-shop schedules for many manufacturing installations.[44] Many telecommunications companies also make use of this technology in the management of their workforces; for example, BT Group has deployed heuristic search[45] in a scheduling application that provides the work schedules of 20,000 engineers.
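A tiny greedy dispatch rule gives the flavour of heuristic scheduling of this general kind; the jobs, engineers and rule (shortest job first, earliest-available engineer) below are invented for illustration and are not ILOG's or BT's actual method:

```python
# Toy workforce scheduler: assign each job to the earliest-free engineer,
# processing the shortest jobs first.
jobs = [("repair", 3), ("install", 1), ("survey", 2), ("upgrade", 4)]  # (name, hours)
engineers = {"eng_a": 0, "eng_b": 0}  # engineer -> hour at which they become free

schedule = []
for name, hours in sorted(jobs, key=lambda j: j[1]):   # shortest job first
    eng = min(engineers, key=engineers.get)            # earliest-available engineer
    start = engineers[eng]
    engineers[eng] = start + hours
    schedule.append((eng, name, start, start + hours))

for eng, name, start, end in schedule:
    print(f"{eng}: {name} from hour {start} to hour {end}")
# Heuristics like this do not guarantee optimal schedules, but they scale to
# thousands of jobs where exhaustive search (see "combinatorial explosion"
# above) cannot.
```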
Data analytics technology utilizing algorithms for the automated formation of classifiers, developed in the supervised machine learning community in the 1990s (for example, TDIDT, support vector machines, neural networks, IBL), is now used pervasively by companies for marketing-survey targeting and for the discovery of trends and features in data sets.
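All of these classifiers are used in the same supervised fashion: fit on labelled examples, then predict labels for new data. The sketch below implements the simplest member of the instance-based (IBL) family, a 1-nearest-neighbour classifier, on an invented two-feature data set:

```python
# Minimal 1-nearest-neighbour classifier (instance-based learning).
def nearest_neighbour(train, query):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda row: dist2(row[0], query))
    return label

# Invented training data: (features, label) pairs.
train = [
    ((1.0, 0.2), "unlikely_buyer"),
    ((0.9, 0.1), "unlikely_buyer"),
    ((0.2, 0.8), "likely_buyer"),
    ((0.1, 0.9), "likely_buyer"),
]
print(nearest_neighbour(train, (0.15, 0.85)))  # -> "likely_buyer"
print(nearest_neighbour(train, (0.95, 0.20)))  # -> "unlikely_buyer"
```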
AI funding
Researchers and economists judge the status of an AI winter primarily by reviewing which AI projects are being funded, how much, and by whom. Trends in funding are often set by major funding agencies in the developed world. Currently, DARPA and a civilian funding program called EU-FP7 provide much of the funding for AI research in the US and the European Union.
As of 2007, DARPA is soliciting AI research proposals under a number of programs, including the Grand Challenge Program, the Cognitive Technology Threat Warning System (CT2WS), "Human Assisted Neural Devices (SN07-43)", "Autonomous Real-Time Ground Ubiquitous Surveillance-Imaging System (ARGUS-IS)" and "Urban Reasoning and Geospatial Exploitation Technology (URGENT)".
Perhaps best known is DARPA's Grand Challenge Program,[46] which has developed road vehicles that can successfully navigate real-world terrain[47] in a fully autonomous fashion.
DARPA has also supported programs on the Semantic Web with a great deal of emphasis on intelligent management of content and automated understanding. However, James Hendler, the manager of the DARPA program at the time, expressed some disappointment[48] with the outcome of the program.
The EU-FP7 funding program provides financial support to researchers within the European Union. Currently it funds AI research under the Cognitive Systems: Interaction and Robotics Programme (€193m), the Digital Libraries and Content Programme (€203m) and the FET programme (€185m).[49]
Fear of another winter
Concerns are sometimes raised that a new AI winter could be triggered by overly ambitious or unrealistic promises by prominent AI scientists. For example, some researchers feared that the widely publicized promises in the early 1990s that Cog would show the intelligence of a human two-year-old might lead to an AI winter. In fact, the Cog project and the success of Deep Blue seem to have led to an increase of interest in strong AI in that decade from both government and industry.
James Hendler observed in 2008 that AI funding in both the EU and the US was being channeled more into applications and cross-breeding with traditional sciences, such as bioinformatics.[28] This shift away from basic research was happening at the same time as a drive towards applications of, for example, the semantic web. Invoking the pipeline argument (see underlying causes), Hendler saw a parallel with the '80s winter and warned of a coming AI winter in the '10s.
Hope of another spring
There are also constant reports that another AI spring is imminent or has already occurred:
- Raj Reddy, in his presidential address to AAAI, 1988: "[T]he field is more exciting than ever. Our recent advances are significant and substantial. And the mythical AI winter may have turned into an AI spring. I see many flowers blooming."[50]
- Pamela McCorduck in Machines Who Think: "In the 1990s, shoots of green broke through the wintry AI soil."[51]
- Jim Hendler and Devika Subramanian in AAAI Newsletter, 1999: "Spring is here! Far from the AI winter of the past decade, it is now a great time to be in AI."[52]
- Ray Kurzweil in his book The Singularity is Near, 2005: "The AI Winter is long since over"[4]
- Heather Havenstein in Computerworld, 2005: "Researchers now are emerging from what has been called an 'AI winter'"[53]
- John Markoff in The New York Times, 2005: "Now there is talk about an A.I. spring among researchers"[36]
- James Hendler, in the Editorial of the 2007 May/June issue of IEEE Intelligent Systems (Hendler 2007): "Where Are All the Intelligent Agents?"
Underlying causes behind AI winters
Several explanations have been put forth for the cause of AI winters in general. As AI progressed from government funded applications to commercial ones, new dynamics came into play. While hype is the most commonly cited cause, the explanations are not necessarily mutually exclusive.
Hype
The AI winters can be partly understood as instances of the sequence of over-inflated expectations and subsequent crashes seen in stock markets and exemplified by the railway mania and the dot-com bubble. In a common pattern in the development of new technology, an event, typically a technological breakthrough, creates publicity which feeds on itself to create a "peak of inflated expectations" followed by a "trough of disillusionment". Since scientific and technological progress cannot keep pace with the publicity-fueled increase in expectations among investors and other stakeholders, a crash must follow. AI technology seems to be no exception to this rule.
Institutional factors
Another factor is AI's place in the organisation of universities. Research on AI often takes the form of interdisciplinary research. One example is the Master of Artificial Intelligence[54] program at K.U. Leuven, which involves lecturers from philosophy to mechanical engineering. AI is therefore prone to the same problems other types of interdisciplinary research face. Funding is channeled through the established departments, and during budget cuts there is a tendency to shield the "core contents" of each department at the expense of interdisciplinary and less traditional research projects.
Economic factors
Downturns in the national economy cause budget cuts in universities. The "core contents" tendency worsens the effect on AI research, and investors in the market are likely to put their money into less risky ventures during a crisis. Together this may amplify an economic downturn into an AI winter. It is worth noting that the Lighthill report came at a time of economic crisis in the UK,[55] when universities had to make cuts and the question was only which programs should go.
Empty pipeline
It is common to see the relationship between basic research and technology as a pipeline. Advances in basic research give birth to advances in applied research, which in turn lead to new commercial applications. From this it is often argued that a lack of basic research will lead to a drop in marketable technology some years down the line. This view was advanced by James Hendler in 2008,[28] who claimed that the fall of expert systems in the late '80s was not due to an inherent and unavoidable brittleness of expert systems, but to funding cuts in basic research in the '70s. These expert systems advanced in the '80s through applied research and product development, but by the end of the decade the pipeline had run dry, and expert systems were unable to produce improvements that could have overcome the brittleness and secured further funding.
Failure to adapt
The fall of the Lisp machine market and the failure of the fifth generation computers were cases of expensive advanced products being overtaken by simpler and cheaper alternatives. This fits the definition of a low-end disruptive technology, with the Lisp machine makers being marginalized. Expert systems were carried over to the new desktop computers by, for instance, CLIPS, so the fall of the Lisp machine market and the fall of expert systems are, strictly speaking, two separate events. Still, the failure to adapt to such a change in the outside computing milieu is cited as one reason for the 1980s AI winter.[28]
Arguments and debates on past and future of AI
Several philosophers, cognitive scientists and computer scientists have speculated on where AI might have failed and what lies in its future. Hubert Dreyfus gave arguments for why AI is impossible to achieve. Other critics, such as Noam Chomsky, have argued that AI is headed in the wrong direction, in part because of its heavy reliance on statistical techniques.[56] Chomsky's comments fit into a larger debate with Peter Norvig, centered around the role of statistical methods in AI. The exchange between the two started with comments made by Chomsky at a symposium at MIT,[57] to which Norvig wrote a response.[58]
Notes
- ↑ 1.0 1.1 AI Expert Newsletter: W is for Winter Archived 9 November 2013 at the Wayback Machine
- ↑ Different sources use different dates for the AI winter. Consider: (1) Howe 1994: "Lighthill's [1973] report provoked a massive loss of confidence in AI by the academic establishment in the UK (and to a lesser extent in the US). It persisted for a decade ― the so-called 'AI Winter'", (2) Russell & Norvig 2003, p. 24: "Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988. Soon after that came a period called the 'AI Winter'".
- ↑ 3.0 3.1 3.2 Crevier 1993, p. 203.
- ↑ 4.0 4.1 4.2 Kurzweil 2005, p. 263.
- ↑ 5.0 5.1 5.2 Kurzweil 2005, p. 264.
- ↑ Kurzweil 2005, p. 289.
- ↑ 7.0 7.1 John Hutchins 2005 The history of machine translation in a nutshell.
- ↑ Russell & Norvig 2003, p. 21.
- ↑ McCorduck 2004, pp. 52–107
- ↑ Pamela McCorduck quotes one colleague as saying, "He was a press agent's dream, a real medicine man." (McCorduck 2004, p. 105)
- ↑ 11.0 11.1 Crevier 1993, pp. 102–5
- ↑ Crevier 1993, pp. 102–105, McCorduck 2004, pp. 104–107, Russell & Norvig 2003, p. 22
- ↑ Crevier 1993, pp. 214–6 and Russell & Norvig 2003, p. 25
- ↑ 14.0 14.1 Crevier 1993, p. 117, Russell & Norvig 2003, p. 22, Howe 1994 and see also Lighthill 1973
- ↑ "BBC Controversy Lighthill debate 1973". BBC "Controversy" debates series. ARTIFICIAL_INTELLIGENCE-APPLICATIONS¯INSTITUTE. 1973. Archived from the original on 1 May 2013. Retrieved 13 August 2010.
- ↑ McCarthy, John (1993). "Review of the Lighthill Report". Archived from the original on 30 September 2008. Retrieved 10 September 2008.
- ↑ Hendler, James. "Avoiding Another AI Winter" (PDF). Archived from the original (PDF) on 12 February 2012.
- ↑ Crevier 1993, p. 65
- ↑ 19.0 19.1 NRC 1999, under "Shift to Applied Research Increases Investment" (only the sections before 1980 apply to the current discussion).
- ↑ Crevier 1993, p. 115
- ↑ Crevier 1993, p. 117
- ↑ Russell & Norvig 2003, p. 25
- ↑ NRC 1999
- ↑ Crevier 1993, pp. 115–116 (on whom this account is based). Other views include McCorduck 2004, pp. 306–313 and NRC 1999 under "Success in Speech Recognition".
- ↑ NRC 1999 under "Success in Speech Recognition".
- ↑ Crevier 1993, pp. 161–2, 197–203
- ↑ Brooks, Rodney. "Design of an Optimizing, Dynamically Retargetable Compiler for Common LISP" (PDF). Lucid, Inc. Archived from the original (PDF) on 20 August 2013.
- ↑ 28.0 28.1 28.2 28.3 Avoiding another AI Winter, James Hendler, IEEE Intelligent Systems (March/April 2008 (Vol. 23, No. 2) pp. 2–4
- ↑ Crevier 1993, pp. 209–210
- ↑ Crevier 1993, pp. 204–208
- ↑ Crevier 1993, pp. 211–212
- ↑ McCorduck 2004, pp. 426–429
- ↑ McCorduck 2004, pp. 430–431
- ↑ Alex Castro in Are you talking to me? The Economist Technology Quarterly (7 June 2007) Archived 13 June 2008 at the Wayback Machine
- ↑ Robotics firms find fundraising struggle, with venture capital shy. By Patty Tascarella. Pittsburgh Business Times (11 August 2006) Archived 26 March 2014 at the Wayback Machine
- ↑ 36.0 36.1 36.2 Markoff, John (14 October 2005). "Behind Artificial Intelligence, a Squadron of Bright Real People". The New York Times. Retrieved 30 July 2007.
- ↑ NRC 1999 under "Artificial Intelligence in the 90s"
- ↑ AI set to exceed human brain power CNN.com (26 July 2006) Archived 3 November 2006 at the Wayback Machine
- ↑ Russell & Norvig 2003, p. 28
- ↑ For the new state of the art in AI based speech recognition, see Are You Talking to Me? Archived 13 June 2008 at the Wayback Machine
- ↑ 41.0 41.1 "AI-inspired systems were already integral to many everyday technologies such as internet search engines, bank software for processing transactions and in medical diagnosis." Nick Bostrom, AI set to exceed human brain power CNN.com (26 July 2006) Archived 3 November 2006 at the Wayback Machine
- ↑ For the use of AI at Google, see Google's man behind the curtain, Google backs character recognition and Spying an intelligent search engine.
- ↑ Touareg Short Lead Press Introduction, Volkswagen of America Archived 16 February 2012 at the Wayback Machine
- ↑ http://findarticles.com/p/articles/mi_m0KJI/is_7_117/ai_n14863928
- ↑ Success Stories. Archived 4 October 2011 at the Wayback Machine
- ↑ Grand Challenge Home Archived 24 December 2010 at the Wayback Machine
- ↑ DARPA Archived 6 March 2009 at the Wayback Machine
- ↑ untitled
- ↑ Information and Communication Technologies in FP7, overview document for European Union funding. Retrieved 20 September 2007.
- ↑ Reddy, Raj (1988). "Foundations and Grand Challenges of Artificial Intelligence" (PDF). Association for the Advancement of Artificial Intelligence. Archived from the original (PDF) on 5 June 2012.
- ↑ McCorduck 2004, p. 418
- ↑ The Sixteenth National Conference on Artificial Intelligence Archived 2 October 2013 at the Wayback Machine
- ↑ Heather Havenstein Spring comes to AI Winter, Computer World, 14 February 2005 Archived 31 January 2009 at the Wayback Machine
- ↑ Master Artificial Intelligence Archived 6 July 2011 at the Wayback Machine
- ↑ http://www.guardian.co.uk/obituaries/story/0,,2122424,00.html obituary of Donald Michie in The Guardian Archived 27 January 2008 at the Wayback Machine
- ↑ Yarden Katz, "Noam Chomsky on Where Artificial Intelligence Went Wrong", The Atlantic, 1 November 2012 Archived 2 November 2012 at WebCite
- ↑ Noam Chomsky, "Pinker/Chomsky Q&A from MIT150 Panel" Archived 17 May 2013 at the Wayback Machine
- ↑ Peter Norvig, "On Chomsky and the Two Cultures of Statistical Learning" Archived 31 May 2011 at WebCite
References
- Crevier, Daniel (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks, ISBN 0-465-02997-3
- Hendler, James (2007). "Where Are All the Intelligent Agents?". IEEE Intelligent Systems 22 (3): 2–3. doi:10.1109/MIS.2007.62
- Howe, J. (November 1994). "Artificial Intelligence at Edinburgh University : a Perspective". Archived from the original on 17 August 2007. Retrieved 30 August 2007
- Kurzweil, Ray (2005). "The Singularity is Near". Viking Press
- Lighthill, Professor Sir James (1973). "Artificial Intelligence: A General Survey". Artificial Intelligence: a paper symposium. Science Research Council
- Minsky, Marvin; Papert, Seymour (1969). "Perceptrons: An Introduction to Computational Geometry". The MIT Press
- McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., ISBN 1-56881-205-1
- NRC (1999). "Developments in Artificial Intelligence". Funding a Revolution: Government Support for Computing Research. National Academy Press. Retrieved 30 August 2007
- Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2
External links
- ComputerWorld article (February 2005)
- AI Expert Newsletter (January 2005)
- "If It Works, It's Not AI: A Commercial Look at Artificial Intelligence startups"
- Patterns of Software- a collection of essays by Richard P. Gabriel, including several autobiographical essays
- Review of "Artificial Intelligence: A General Survey" by John McCarthy
- Other Freddy II Robot Resources: includes a link to the 90-minute 1973 "Controversy" debate from the Royal Institution, Lighthill versus Michie, McCarthy and Gregory, in response to Lighthill's report to the British government.