ELIZA effect
The ELIZA effect, in computer science, is the tendency to subconsciously assume computer behaviors are analogous to human behaviors, despite conscious knowledge to the contrary. It is the result of a subtle cognitive dissonance between the user's awareness of programming limitations and their behavior towards the output of the program. The discovery of the ELIZA effect was an important development in artificial intelligence, demonstrating the principle of using social engineering rather than explicit programming to pass a Turing test.
Origin
The effect is named for the 1966 chatterbot ELIZA, developed by MIT computer scientist Joseph Weizenbaum. ELIZA parodied a Rogerian psychotherapist, largely by rephrasing the "patient's" replies as questions, as in the following exchange (a minimal code sketch of the technique appears after it):
- Human: Well, my boyfriend made me come here.
- ELIZA: Your boyfriend made you come here?
- Human: He says I'm depressed much of the time.
- ELIZA: I am sorry to hear you are depressed.
- Human: It's true. I'm unhappy.
- ELIZA: Do you think coming here will help you not to be unhappy?[1]
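As a rough illustration of how little machinery this requires, the rephrasing technique can be sketched in a few lines of Python. This is not Weizenbaum's original implementation (ELIZA was written in MAD-SLIP); the rule patterns, the REFLECTIONS table, and the reflect/respond helpers below are invented for illustration.

```python
import re

# Pronoun reflections used to turn the speaker's words back on them.
# (Illustrative only; ELIZA's actual script was far larger.)
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# (pattern, response template) pairs, tried in order; the first match wins.
RULES = [
    (re.compile(r"(.*) made me (.*)", re.I), "{0} made you {1}?"),
    (re.compile(r"i'?m (.*)", re.I), "I am sorry to hear you are {0}."),
    (re.compile(r"(.*)", re.I), "Please go on."),  # catch-all fallback
]

def reflect(fragment):
    # Swap first- and second-person words; case is normalized to lowercase.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement):
    statement = statement.strip().rstrip(".!")
    for pattern, template in RULES:
        match = pattern.match(statement)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("Well, my boyfriend made me come here."))
# -> well, your boyfriend made you come here?
print(respond("I'm depressed much of the time."))
# -> I am sorry to hear you are depressed much of the time.
```

A handful of such rules, with no model of emotion or meaning anywhere in the program, is enough to produce replies that read as attentive; this economy is exactly the trade-off discussed under consequences below.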
ELIZA was found to be surprisingly successful in eliciting emotional responses from users.[2] However, none of ELIZA's code was designed to do so. Upon observation, researchers discovered that users were subconsciously assuming ELIZA's questions implied interest in the topics discussed, even when they consciously knew that ELIZA did not simulate emotion.[3]
Positive and negative consequences
AI and human–computer interaction programmers may intentionally use the ELIZA effect as part of computational semiotics or as a strategy to pass the Turing test. While this strategy permits efficient coding (a few lines of code, as in the sketch above, can have a large effect on how humans perceive the output), it is also risky: if the user notices that the ELIZA effect has occurred, rejecting the unconscious assumptions often lets them deduce the programming method used, which constitutes failure of the Turing test. AI programmers also try to avoid the ELIZA effect during testing, as it can blind them to other deficiencies in the program's output.
The ELIZA effect can also have negative consequences when the user's assumptions do not match the program's behavior. For instance, it may interfere with debugging by obscuring the actual causes of that behavior. Programming languages are usually designed to prevent unintended ELIZA effects by restricting keywords and carefully avoiding potential misinterpretations.[citation needed]
Notes
- ^ Güzeldere, Güven. "Dialogues with Colorful Personalities of Early AI". <http://www.stanford.edu/group/SHR/4-2/text/dialogues.html>. Retrieved 30 July 2007.
- ^ Weizenbaum, Joseph. Die Macht der Computer und die Ohnmacht der Vernunft (German edition of Computer Power and Human Reason).
- ^ Billings, Lee. "Rise of Roboethics". Seed, 16 July 2007. "(Joseph) Weizenbaum had unexpectedly discovered that, even if fully aware that they are talking to a simple computer program, people will nonetheless treat it as if it were a real, thinking being that cared about their problems - a phenomenon now known as the 'Eliza effect'."
References
- Hofstadter, Douglas. "Preface 4: The Ineradicable Eliza Effect and Its Dangers". In Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. New York: Basic Books, 1995.
- Turkle, Sherry. "Eliza Effect: Tendency to Accept Computer Responses as More Intelligent Than They Really Are". In Life on the Screen: Identity in the Age of the Internet. London: Phoenix Paperbacks, 1997.
- "ELIZA effect", from the Jargon File, version 4.4.7. Retrieved 8 October 2006.
This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.