AI winter

The term AI winter generally denotes a period beginning in the late 1970s during which artificial intelligence research, funding, and interest declined. According to AINewsletter, the term was "coined by analogy with 'nuclear winter'". It is now sometimes used to describe any real or perceived downturn in funding for, or interest in, AI.

AI winters are characterized by several symptoms, and it is sometimes difficult to distinguish causes from effects. Suggested causes include the increased prevalence of microcomputers, the failure of several AI-related companies, and the end of several government funding programs. During an AI winter, AI researchers tend not to call their research AI but something else, for example knowledge-based systems or informatics. Machines and languages associated with AI (such as Lisp and Prolog) lose popularity. Because this is a positive feedback loop, symptoms become causes, which in turn produce further symptoms.

Typically, during an AI winter, certain subdisciplines are scapegoated and singled out with the label "AI". Nevertheless, the history of artificial intelligence shows continuous progress in the field.

Onset

The onset of AI winter in the US was blamed partially on the book Perceptrons by Marvin Minsky and Seymour Papert. The book showed that the naïve enthusiasm for neural networks of the time was ill-founded: single-layer perceptrons are provably incapable of learning certain simple classes of problems, such as the XOR function, and no effective training algorithm for multilayer networks was then known. This had the greatest impact on neural network research, but because of the high profile of that research at the time, and the lack of progress over the previous fifteen years in other areas (particularly machine translation), it triggered a general loss of both faith and funding in the field. The specific limitations raised by Perceptrons were ultimately addressed by backpropagation and other modern machine learning techniques. The embarrassment caused to the field resulted in a much greater emphasis on formal or "neat" methods.
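
The point about linear separability can be made concrete. What follows is a minimal sketch in plain Python, not drawn from the article itself, with illustrative (assumed) network sizes, learning rates, and epoch counts: a single linear threshold unit trained with the perceptron rule never reproduces the XOR truth table, while a small sigmoid network trained with backpropagation does.

  # Minimal sketch: a single-layer perceptron cannot learn XOR,
  # but a small sigmoid network trained with backpropagation can.
  # Hyperparameters (hidden units, learning rates, epochs) are illustrative.
  import math
  import random

  XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

  def train_perceptron(data, epochs=1000, lr=0.1):
      """Classic perceptron rule on one linear threshold unit."""
      w, b = [0.0, 0.0], 0.0
      for _ in range(epochs):
          for x, target in data:
              out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
              err = target - out  # weights keep oscillating; XOR is not linearly separable
              w[0] += lr * err * x[0]
              w[1] += lr * err * x[1]
              b += lr * err
      return lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

  def train_backprop(data, hidden=4, epochs=10000, lr=0.5):
      """Tiny one-hidden-layer sigmoid network trained with backpropagation."""
      rnd = random.Random(0)
      sig = lambda z: 1.0 / (1.0 + math.exp(-z))
      w1 = [[rnd.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
      b1 = [0.0] * hidden
      w2 = [rnd.uniform(-1, 1) for _ in range(hidden)]
      b2 = 0.0
      for _ in range(epochs):
          for x, target in data:
              # Forward pass.
              h = [sig(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(hidden)]
              o = sig(sum(w2[j] * h[j] for j in range(hidden)) + b2)
              # Backward pass: gradients of the squared error through the sigmoids.
              d_o = (o - target) * o * (1 - o)
              d_h = [d_o * w2[j] * h[j] * (1 - h[j]) for j in range(hidden)]
              for j in range(hidden):
                  w2[j] -= lr * d_o * h[j]
                  b1[j] -= lr * d_h[j]
                  for i in range(2):
                      w1[j][i] -= lr * d_h[j] * x[i]
              b2 -= lr * d_o
      def predict(x):
          h = [sig(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(hidden)]
          return 1 if sig(sum(w2[j] * h[j] for j in range(hidden)) + b2) > 0.5 else 0
      return predict

  p, n = train_perceptron(XOR), train_backprop(XOR)
  print("perceptron:", [(x, p(x)) for x, _ in XOR])  # never matches all four XOR targets
  print("backprop:  ", [(x, n(x)) for x, _ in XOR])  # matches all four

Run as a script, the perceptron's predictions always disagree with at least one of the four XOR cases, while the backpropagation network reproduces all of them; this is the gap that the later techniques mentioned above closed.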

The onset of AI winter in the UK was blamed primarily on the Lighthill report. Professor Sir James Lighthill reported to the British Parliament that nothing being done in AI could not be done in other sciences. Although Lighthill was shown to be fundamentally mistaken in a public debate broadcast by the BBC, funding did not recover for over a decade. The debate was won by a team composed of Donald Michie, Richard Gregory, and John McCarthy.

Giving sole blame to a single document in either country probably oversimplifies the situation. Certainly both the British and American governments were growing impatient with AI by the late 1960s, having spent tens of millions of dollars on the field since the 1956 Dartmouth Conference. In the US, one of the main motivations for this funding was the promise of machine translation. Because of Cold War concerns, the US government was particularly interested in the automatic, instant translation of Russian. Machine translation remains an open research problem in the 21st century.

Some influence might be attributable to the Language/Action Perspective, developed further by Terry Winograd and Fernando Flores in their 1986 book Understanding Computers and Cognition: A New Foundation for Design. They "offered a sharp critique of the field" of AI, concluding "that software is unlikely to ever exhibit intelligent behavior" (both quotes from an article in the Communications of the ACM, May 2006).

Concerns are sometimes raised that a new AI winter could be triggered by overly ambitious or unrealistic promises from prominent AI scientists. For example, some researchers feared that the widely publicised promises in the early 1990s that Cog would show the intelligence of a human two-year-old might lead to an AI winter. In fact, the Cog project and the success of Deep Blue seem to have led to an increase in interest in strong AI in that decade from both government and industry.

Analysis

The AI winter is an example of a human tendency specifically associated with computing: expectations inflate and then collapse. That AI is a prime candidate for suffering the consequences of this tendency results from several factors of a foundational and fundamental nature.

One of the primary factors concerns expectations and how they are managed. AI had made impressive strides, but expectations far exceeded reality. Systems could describe small, restricted situations in human language and interpret simple commands, but the jump from parlor trick to practical viability proved larger than expected. Several things complicate the matter: the "magic" associated with computing and all things artificial, the possibility that some system developers may exploit this human weakness, the fact that intelligence had until then been thought of as principally human, and the evidence of smartness shown by some AI systems.

With the onset of the AI winter, AI did not cease; rather, it became more pervasive as an embedded capability, an observation humorously expressed as "AI is whatever we do not understand".

Impact on Lisp and Lisp Machines

The AI winter resulted in a shift of interest away from functional programming languages such as the Lisp dialect Common Lisp.

Commercially, many Lisp machine companies failed, including Symbolics, Lisp Machines Inc., and Lucid Inc. However, a number of customer companies (that is, companies using systems written in Lisp and developed on Lisp machine platforms) continued to maintain their systems, in some cases taking on the support work themselves. The maturation of Common Lisp saved many systems such as ICAD.

The timeframe of the AI winter corresponds with the advent of personal computing, which has led a number of commentators, including Richard P. Gabriel, Harvey Newquist, and Andrew Binstock (editor-in-chief of UNIX Review), to assert that Lisp machines were viable only for a short time. They took advantage of Moore's Law to be temporarily superior in cost/performance to both workstations and mainframes, but when Moore's Law and other advances in computer technology began to favor workstations, the companies in the affected sector of the industry refused to switch from Lisp machine hardware and operating systems (such as Symbolics' Genera) to Unix and x86 platforms.

It has been suggested that functional languages have enjoyed a renaissance in the past few years, that very capable multi-paradigm languages (such as Python) have emerged, and that an "AI spring" may have already occurred. Figures at high-profile high-tech companies are currently willing to publicly express interest in building "human-level AI" (Markoff 2006). Some companies, such as the online search company Google, are rumoured to hire any Lisp programmers they can find, in addition to their better-known appetite for Python programmers.

References

  • Professor Sir James Lighthill, FRS, "Artificial Intelligence: A General Survey", in Artificial Intelligence: A Paper Symposium, Science Research Council, 1973.
  • John Markoff, "Brainy Robots Start Stepping Into Daily Life", The New York Times, July 18, 2006, Section A, Page 1.
  • Marvin Minsky and Seymour Papert, Perceptrons: An Introduction to Computational Geometry. The MIT Press, 1969.
  • Harvey Newquist, The Brain Makers, Sams Publishing, 1994. ISBN 0-672-30412-0
