Chinese room

See also: Philosophy of artificial intelligence

The Chinese Room argument is a thought experiment, and associated line of argument, devised by John Searle (Searle 1980) to show that a symbol-processing machine such as a computer can never properly be described as having a "mind" or "understanding", regardless of how intelligently it may behave.

The experiment

Searle asks his audience to imagine that, many years from now, people have constructed a computer that behaves as if it understands Chinese. The computer takes Chinese characters as input and, following a program, produces other Chinese characters as output. Suppose that this computer performs the task so convincingly that it easily passes the Turing test: every question a human Chinese speaker asks is answered appropriately, so that the speaker is convinced he or she is talking to another Chinese-speaking human. The conclusion that proponents of artificial intelligence would like to draw is that the computer understands Chinese, just as the person does.

Now, Searle asks the audience to suppose that he is in a room in which he receives Chinese characters, consults a book containing an English version of the computer program, and processes the Chinese characters according to the instructions in the book. Searle notes that he does not understand a word of Chinese. He simply manipulates what to him are meaningless squiggles, using the book and whatever other equipment is provided in the room, such as paper, pencils, erasers, and filing cabinets. After manipulating the symbols, Searle produces the answer in Chinese. Since the computer passes the Turing test, so does Searle running its program by hand: "Nobody just looking at my answers can tell that I don't speak a word of Chinese," Searle writes.[1]

Searle argues that his lack of understanding shows that computers do not understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is. They don't understand what they're "saying", just as he doesn't. Since they do not have conscious mental states like "understanding", they cannot properly be said to have minds.

History

Searle's argument originally appeared in his paper "Minds, Brains, and Programs", published in the journal Behavioral and Brain Sciences in 1980.[2] It would eventually become the journal's "most influential target article"[3] and considerable literature has grown up around it. Most of the discussion consists of attempts to interpret and refute it: as editor Stevan Harnad notes, "the overwhelming majority still think that the Chinese Room Argument is dead wrong."[4] Pat Hayes quipped that the field of cognitive science should be defined as "the ongoing research program of showing Searle's Chinese Room Argument to be false."[3]

Searle's targets: "strong AI" and computationalism

Although the Chinese Room argument was originally presented to refute the statements of artificial intelligence researchers, philosophers have come to see it as a part of the philosophy of mind—a challenge to functionalism and the computational theory of mind,[5] and related to such questions as the mind-body problem,[6] the problem of other minds,[7] the symbol grounding problem and the hard problem of consciousness.[8]

AI founder Herbert Simon announced in 1955 that "there are now in the world machines that think, that learn and create"[9] and claimed they had "solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind."[10] John Haugeland summarizes the philosophical position of early AI researchers as follows:

"AI wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves."[11]

Statements like these assume a philosophical position that Searle calls "strong AI":

"The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[12]

Searle also ascribes these positions to proponents of strong AI:

  • AI systems can be used to explain the mind.[13]
  • The brain is irrelevant to understanding the mind.[14]
  • The Turing test is definitive.[15]

Stevan Harnad argues that these positions can be reformulated as "recognizable tenets of computationalism, a position (unlike 'strong AI') that is actually held by many thinkers, and hence one worth refuting."[16] He characterizes the key components of strong AI as "mental states are computational states" (which is why computers can have mental states, and why computers can help explain the mind), "computational states are implementation-independent" (which is how the brain is irrelevant), and, since the implementation is not important, the only empirical data that matters is how the system functions (which is why the Turing test is definitive). This last point is a version of functionalism.[17]

Searle's argument centers on the question of whether computers can be programmed to have mental states like understanding (that is, mental states with what philosophers call "intentionality") and thus have a "mind" in the same way people do. Although Searle only addresses "mind", "mental states", "intentionality" and "understanding", David Chalmers has argued that "it is fairly clear that consciousness is at the root of the matter".[18] Searle disagrees and maintains that intentionality is independent of consciousness.

Searle's argument does not limit how intelligent machines can behave. (Searle's "strong AI" should not be confused with strong AI as used by futurists to describe artificial intelligence that rivals human intelligence.) The Chinese room argument does not address that issue directly: it leaves open the possibility that a machine could be built that acts intelligently but does not have a mind or intentionality in the same way brains do.[19] Since the primary mission of AI research is to create useful systems that act intelligently, Searle's arguments are not considered an issue for AI research. As Stuart Russell and Peter Norvig write, "most AI researchers ... don't care about the strong AI hypothesis."[20]

Replies

The replies to Searle's argument can be classified by what they claim to show.[21]

  • Those that identify who it is who speaks Chinese.
  • Those that demonstrate how meaningless symbols can become meaningful.
  • Those that suggest that the Chinese room should be redesigned more along the lines of a brain.
  • Those that demonstrate ways that Searle's argument is misleading.

Some of the arguments (robot and brain simulation, for example) fall into multiple categories.

System and virtual mind replies: finding the mind

These two replies attempt to answer the question: since the man in the room doesn't speak Chinese, where is the "mind" that does?

These replies address the key ontological issues of mind vs. body and simulation vs. reality.

Systems reply.[22] The "systems reply" argues that it is the whole system that understands Chinese, consisting of the room, the book, the man, the paper, the pencil and the filing cabinets. While Searle can only understand English, the complete system can understand Chinese. The system doesn't understand English, just as Searle doesn't understand Chinese. The man is part of the system, just as the hippocampus is a part of the brain. The fact that the man understands nothing is irrelevant, and is no more surprising than the fact that the hippocampus understands nothing by itself.

Searle's response is to consider what happens if the man memorizes the rules and keeps track of everything in his head. Then the only component of the system is the man himself. Since the man still doesn't understand Chinese and since Searle believes that it is obvious that there is nothing else there, he concludes that nothing understands Chinese, and the fact that the man appears to understand Chinese proves nothing.[23] Since his critics insist that there is something else there, Searle accuses them of dualism, at least in the limited sense that the Chinese mind does not seem connected to the brain the same way a normal mind is.[24]

Virtual mind reply.[25] A more precise response is that there is a Chinese speaking mind in Searle's room, but that it is virtual. A fundamental property of computing machinery is that one machine can "implement" another: any (Turing complete) computer can do a step-by-step simulation of any other machine.[26] In this way, a machine can simultaneously be two machines at once: for example, it can be a Macintosh and a word processor at the same time. A virtual machine depends on the hardware (in that if you turn off the Macintosh, you turn off the word processor as well), yet is different from the hardware. (This is how the position resists dualism, the idea that the mind is a separate "substance". There can be two machines in the same place, both made of the same substance, if one of them is virtual.) A virtual machine is also "implementation independent" in that it doesn't matter what sort of hardware it runs on: a PC, a Macintosh, a supercomputer, a brain or Searle in his Chinese room.[27] Cole extends this argument to show that a program could be written that implements two minds at once – for example, one speaking Chinese and the other Korean. While there is only one system and only one man in the room, there may be an unlimited number of "virtual minds."[28]
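The notion of one machine "implementing" another can be made concrete with a small sketch. The Python toy below (a hypothetical illustration, not a real language processor) shows a single host interpreter running two independent rule-driven "virtual machines" at once; the rule tables, states, and symbols are invented for the example.

```python
# A minimal illustration of implementation: one host (this function)
# can run any number of independent virtual machines, each defined
# purely by its rule table. Rule tables here are hypothetical toys.

def step_virtual_machine(rules, state, symbol):
    """Advance one virtual machine: look up (state, symbol) and
    return (reply, new_state)."""
    reply, new_state = rules[(state, symbol)]
    return reply, new_state

# Two unrelated "virtual minds", hosted by the same interpreter.
chinese_rules = {("start", "ni hao"): ("ni hao!", "greeted")}
korean_rules  = {("start", "annyeong"): ("annyeong!", "greeted")}

reply_a, _ = step_virtual_machine(chinese_rules, "start", "ni hao")
reply_b, _ = step_virtual_machine(korean_rules, "start", "annyeong")
```

The same rule tables would behave identically on any Turing complete host, whether a PC, a supercomputer, or a person with paper and pencil, which is the sense in which a virtual machine is "implementation independent."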

Searle would respond that such a mind is only a simulation. He writes: "No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched."[29] Nicholas Fearn responds that, for some things, simulation is as good as the real thing. "When we call up the pocket calculator function on a desktop computer, the image of a pocket calculator appears on the screen. We don't complain that 'it isn't really a calculator', because the physical attributes of the device do not matter."[30] The question is, is the human mind like the pocket calculator, essentially composed of information? Or is it like the rainstorm, which can't be duplicated using digital information alone? (The issue of simulation is also discussed in the article synthetic intelligence.)

What they do and don't prove. These replies provide an explanation of exactly who it is that understands Chinese. If there is something besides the man in the room that can understand Chinese, Searle can't argue that (1) the man doesn't understand Chinese, therefore (2) nothing in the room understands Chinese. This, according to those who make this reply, shows that Searle's argument fails to prove that "strong AI" is false.[31]

However, the replies, by themselves, do not prove that strong AI is true, either: they provide no evidence that the system (or the virtual mind) understands Chinese, other than the hypothetical premise that it passes the Turing test. As Searle writes, "the systems reply simply begs the question by insisting that the system must understand Chinese."[23] Without additional evidence, both Searle and his critics are left with the intuitions they had at the start: Searle can't imagine that a simulated mind can "understand", while his critics can.

Robot and semantics replies: finding the meaning

As far as the man in the room is concerned, the symbols he writes are just meaningless "squiggles." But if the Chinese room really "understands" what it's saying, then the symbols must get their meaning from somewhere. These arguments attempt to connect the symbols to the things they symbolize.

These replies address Searle's concerns about intentionality, symbol grounding and syntax vs. semantics.

Robot reply.[32] Suppose that instead of a room, the program was placed into a robot that could wander around and interact with its environment. This would allow a "causal connection" between the symbols and the things they represent. Hans Moravec comments: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."[33]

Searle's reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs he is receiving come directly from a camera mounted on a robot, and some of his outputs are used to manipulate the robot's arms and legs. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean. Searle writes "he doesn't see what comes into the robot's eyes."[34] (See Mary's Room for a similar thought experiment.)

Derived meaning.[35] Some respond that the room, as Searle describes it, is connected to the world: through the Chinese speakers that it is "talking" to and through the programmers who designed the knowledge base in his file cabinet. The symbols he manipulates are already meaningful, they're just not meaningful to him.

Searle complains that the symbols only have a "derived" meaning, like the meaning of words in books. The meaning of the symbols depends on the conscious understanding of the Chinese speakers and the programmers outside the room. The room, according to Searle, has no understanding of its own.[36]

Commonsense knowledge / contextualist reply.[37] Some have argued that the meanings of the symbols would come from a vast "background" of commonsense knowledge encoded in the program and the filing cabinets. This would provide a "context" that would give the symbols their meaning.

Searle agrees that this background exists, but he does not agree that it can be built into programs. Hubert Dreyfus has also criticized the idea that the "background" can be represented symbolically.[38]

What they do and don't prove. To each of these suggestions, Searle's response is the same: no matter how much knowledge is written into the program and no matter how the program is connected to the world, he is still in the room manipulating symbols according to rules. His actions are syntactic and this can never explain to him what the symbols stand for. Searle writes "syntax is insufficient for semantics."[39]

However, for those who accept that Searle's actions simulate a mind, separate from his own, the important question is not what the symbols mean to Searle, but what they mean to the virtual mind. While Searle is trapped in the room, the virtual mind is not: it is connected to the outside world through the Chinese speakers it speaks to, through the programmers who gave it world knowledge, and through the cameras and other sensors that roboticists can supply.

Brain simulation and connectionist replies: redesigning the room

These arguments are all versions of the systems reply that identify a particular kind of system as being important. They try to outline what kind of a system would be able to pass the Turing test and give rise to conscious awareness in a machine. (Note that the "robot" and "commonsense knowledge" replies above also specify a certain kind of system as being important.)

Brain simulator reply.[40] Suppose that the program simulated in fine detail the action of every neuron in the brain of a Chinese speaker. This strengthens the intuition that there would be no significant difference between the operation of the program and the operation of a live human brain.

Searle replies that such a simulation will not have reproduced the important features of the brain — its causal and intentional states. Searle is adamant that "human mental phenomena [are] dependent on actual physical-chemical properties of actual human brains."[24] His position, that (only) "brains cause minds" is called "biological naturalism" (as opposed to alternatives like behaviorism, functionalism, identity theory or dualism).[41]

Two variations on the brain simulator reply are:

Chinese nation.[42] What if we ask each citizen of China to simulate one neuron, using the telephone system to simulate the connections between axons and dendrites? In this version, it seems obvious that no individual would have any understanding of what the brain might be saying.
Brain replacement scenario.[43] In this, we are asked to imagine that engineers have invented a tiny computer that simulates the action of an individual neuron. What would happen if we replaced one neuron at a time? Replacing one would clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer that simulates a brain. If Searle is right, then conscious awareness must disappear during the procedure (either gradually or all at once). Searle's critics argue that there would be no point during the procedure when he can claim that conscious awareness ends and mindless simulation begins.[44]

Connectionist replies.[45] Closely related to the brain simulator reply, this claims that a massively parallel connectionist architecture would be capable of understanding.

Combination reply.[46] This response combines the robot reply with the brain simulation reply, arguing that a brain simulation connected to the world through a robot body would surely be able to think.

What they do and don't prove. Arguments such as these (and the robot and commonsense knowledge replies above) recommend that Searle's room be redesigned. They can be interpreted in three ways:

  1. The room as Searle describes it can't pass the Turing test. However, if some improvements are made to the design of the room or the program, a room can be constructed that would both pass the test and have a "mind", "understanding" and "consciousness".[47]
  2. The room can pass the Turing test, but it would not have a mind. However, (as with the first case) with some improvements, a room can be constructed that would.[47]
  3. The room does, in fact, have a mind, but it's difficult to see—Searle's description is correct, but misleading. Redesigning the room more realistically will make this more obvious.

Searle's replies all point out that, however the program is written or however it is connected to the world, it is still being simulated by a simple, step-by-step, Turing complete machine (or machines). Every one of these machines is still, at the ground level, just like Searle in the room: it understands nothing and doesn't speak Chinese.

Searle also argues that, if features like a robot body or a connectionist architecture are required, then strong AI (as he understands it) has been abandoned.[48] Either (1) Searle's room can't pass the Turing test, because formal symbol manipulation is not enough,[49] or (2) Searle's room could pass the Turing test, but the Turing test is not sufficient to determine if the room has a "mind." Either way, it denies one or the other of the positions Searle thinks of as "strong AI", proving his argument. The brain arguments also suggest that computation can't provide an explanation of the human mind (another aspect of what Searle thinks of as "strong AI"). They assume that there is no simpler way to describe the mind than to create a program that is just as mysterious as the brain was. He writes "I thought the whole idea of strong AI was that we don't need to know how the brain works to know how the mind works."[50]

In the third case, these arguments are being used as "appeals to intuition" (which are discussed in more detail in the next section). By making the program more realistic, they help AI researchers to visualize how the program might work. Searle's intuition, however, is never shaken. He writes: "I can have any formal program you like, but I still understand nothing."[51]

In fact, the room can just as easily be redesigned to weaken our intuitions. Ned Block's "blockhead" argument suggests that the program could, in theory, be rewritten into a simple lookup table of rules of the form "if the user writes S, reply with P and goto X". Any program can be rewritten (or "refactored") into this form, even a brain simulation.[52] It is hard for most to imagine that such a program would give rise to a "mind" or have "understanding". In the blockhead scenario, the entire mental state is hidden in the letter X, which represents a memory address—a number associated with the next rule. It is hard to visualize that an instant of our conscious experience can be captured in a single large number, yet this is exactly what "strong AI" claims.
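Block's rule format, "if the user writes S, reply with P and goto X", can be sketched directly as a lookup table. The Python toy below is a hypothetical miniature (a real conversational table would be astronomically large); the entries and state numbers are invented for illustration.

```python
# Blockhead as a lookup table: each entry maps (state X, input S)
# to (reply P, next state X'). Between turns, the entire "mental
# state" of the system is just the single number X.
# All entries are hypothetical toy examples.

table = {
    (0, "hello"):        ("hello!", 1),
    (1, "how are you?"): ("fine, thanks", 2),
    (2, "goodbye"):      ("bye", 0),
}

def blockhead(state, user_input):
    """Look up the rule for (state, input); return (reply, next state)."""
    reply, next_state = table[(state, user_input)]
    return reply, next_state

state = 0
reply, state = blockhead(state, "hello")         # state advances to 1
reply, state = blockhead(state, "how are you?")  # state advances to 2
```

In this sketch, as in Block's scenario, whatever "memory" of the conversation the system has is carried entirely by the integer passed between calls.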

Speed, complexity and other minds: appeals to intuition

The following arguments (and the intuitive interpretations of the arguments above) do not directly explain how a Chinese speaking mind could exist in Searle's room, or how the symbols he manipulates could become meaningful. However, by raising doubts about Searle's intuitions they support other positions, such as the system and robot replies.

The central point of these replies is that Searle's description of the Chinese room is profoundly misleading. Ned Block writes "Searle's argument depends for its force on intuitions that certain entities do not think."[53] Daniel Dennett describes the Chinese room argument as an "intuition pump"[54] and writes "Searle's thought experiment depends, illicitly, on your imagining too simple a case, an irrelevant case, and drawing the 'obvious' conclusion from it."[55]

Speed and complexity replies.[56] The speed at which our brains process information is (by some estimates) 100,000,000,000 operations per second.[57] Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.

An especially vivid version of the speed and complexity reply is from Paul and Patricia Churchland. They propose this analogous thought experiment:

Churchland's luminous room.[58] Suppose a philosopher finds it inconceivable that light is caused by waves of electromagnetism. He could go into a dark room and wave a magnet up and down. He would see no light, of course, and he could claim that he had proved light is not an electromagnetic wave and that he has refuted Maxwell's equations. The problem is that he would have to wave the magnet up and down something like 450 trillion times a second in order to see anything.

Several of the replies above address the issue of complexity. The connectionist reply emphasizes that a working artificial system would have to be as complex and as interconnected as the human brain. The commonsense knowledge reply emphasizes that any program that passed a Turing test would have to be "an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge," as Daniel Dennett explains.[59]

Stevan Harnad is critical of speed and complexity replies when they stray beyond addressing our intuitions. He writes "Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental. It should be clear that is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of 'complexity.')"[60]

Other minds reply.[61] Searle's argument is just a version of the problem of other minds, applied to machines. Since it's difficult to decide if people are "actually" thinking, we shouldn't be surprised that it's difficult to answer the same question about machines.

The most radical view is that the Chinese room argument actually proves that humans don't have minds, at least not in the sense that Searle insists that we do. Searle argues that there are "causal properties" in our neurons that give rise to the mind. What if these properties don't exist? How could we tell? Perhaps each neuron in the brain is just like Searle, following his rules, utterly unable to give rise to what Searle calls "understanding." Searle's argument suggests that the human mind is epiphenomenal: that it "casts no shadow."[62] To make this point clear, Daniel Dennett suggests this version of the "other minds" reply:

Dennett's reply from natural selection.[63] Suppose that, by some mutation, a human being is born that does not have Searle's "causal properties" but nevertheless acts exactly like a human being. (This sort of animal is called a "zombie" in thought experiments in the philosophy of mind.) This new animal would reproduce just as any other human and eventually there would be more of these zombies. Natural selection would favor the zombies, since their design is (we could suppose) a bit simpler. Eventually the humans would die out. Therefore, if Searle is right, it's most likely that human beings (as we see them today) are actually "zombies," who nevertheless insist they are conscious. This suggests it's unlikely that Searle's "causal properties" would have ever evolved in the first place. Nature has no incentive to create them.

Formal arguments

In 1984 Searle produced a more formal version of the argument of which the Chinese Room forms a part. He listed four premises:

  1. Brains cause minds.
  2. Syntax is not sufficient for semantics.
  3. Computer programs are entirely defined by their formal, or syntactical, structure.
  4. Minds have mental contents; specifically, they have semantic contents.

The second premise is supposedly supported by the Chinese Room argument, since Searle holds that the room follows only formal syntactical rules, and does not “understand” Chinese. Searle posits that these lead directly to four conclusions:

  1. No computer program by itself is sufficient to give a system a mind. Programs, in short, are not minds, and they are not by themselves sufficient for having minds.
  2. The way that brain functions cause minds cannot be solely in virtue of running a computer program.
  3. Anything else that caused minds would have to have causal powers at least equivalent to those of the brain.
  4. The procedures of a computer program would not by themselves be sufficient to grant an artifact possession of mental states equivalent to those of a human; the artifact would require the capabilities and powers of a brain.

Searle describes this version as "excessively crude." There has been considerable debate about whether this argument is indeed valid. These discussions center on the various ways in which the premises can be parsed. One can read premise 3 as saying that computer programs have syntactic but not semantic content, and so premises 2, 3 and 4 validly lead to conclusion 1. This leads to debate as to the origin of the semantic content of a computer program.
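One way to schematize the reading of the premises described above (an informal reconstruction, not Searle's own notation; the predicate names are invented for the illustration):

```latex
% Prog(x): x is constituted by running a computer program
% Syn(x):  x has only formal/syntactic properties
% Sem(x):  x has semantic content;   Mind(x): x is a mind
\begin{align*}
&\text{P3: } \forall x\,\bigl(\mathit{Prog}(x) \rightarrow \mathit{Syn}(x)\bigr)
  && \text{(programs are entirely syntactic)}\\
&\text{P2: } \neg\,\forall x\,\bigl(\mathit{Syn}(x) \rightarrow \mathit{Sem}(x)\bigr)
  && \text{(syntax is not sufficient for semantics)}\\
&\text{P4: } \forall x\,\bigl(\mathit{Mind}(x) \rightarrow \mathit{Sem}(x)\bigr)
  && \text{(minds have semantic contents)}\\
&\text{C1: } \text{running a program is not, by itself, sufficient for having a mind.}
\end{align*}
```

Whether C1 follows depends on how P2 is read: as the weak claim that syntax does not always yield semantics, or as the stronger claim that syntax alone can never yield semantics. That ambiguity is exactly the parsing debate described above.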

Notes

  1. ^ Searle 1980, p. 2-3
  2. ^ Searle 1980
  3. ^ a b (Harnad 2001, p. 1) Harnad edited the journal BBS during the years the Chinese Room argument was introduced.
  4. ^ Harnad 2001, p. 2
  5. ^ Harnad (2005) writes that Searle's argument is against the thesis that "has since come to be called 'computationalism,' according to which cognition is just computation, hence mental states are just computational states". Cole (2004) writes "the argument also has broad implications for functionalist and computational theories of meaning and of mind".
  6. ^ See the "Systems reply" below
  7. ^ See "Other minds reply" below.
  8. ^ The relationship between Searle's argument and consciousness is detailed in Chalmers 1996
  9. ^ Quoted in Russell & Norvig 2003, p. 21. Simon, along with Allen Newell and Cliff Shaw, had just completed the first true AI program, the Logic Theorist.
  10. ^ Quoted in Crevier 1993, p. 46 and Russell & Norvig 2003, p. 17.
  11. ^ Haugeland 1986, p. 2. (Italics his)
  12. ^ This version is from Searle (1998), also quoted in Dennett 1991, p. 435 and at AI Topics. His original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." (Searle 1980, p. 1). Strong AI is defined similarly by Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." An equivalent definition is given in the Oxford University Press Dictionary of Psychology (quoted in "High Beam Encyclopedia").
  13. ^ For example, Searle writes "Partisans of strong AI claim that in this question and answer sequence the machine is not only simulating a human ability but also (1) that the machine can literally be said to understand the story and provide the answers to questions, and (2) that what the machine and its program do explains the human ability to understand the story and answer questions about it." (Searle 1980, p. 2)
  14. ^ Searle writes, "I thought the whole idea of strong AI was that we don't need to know how the brain works to know how the mind works." (Searle 1980, p. 8) The phrasing of this position is due to Harnad (2001).
  15. ^ Searle writes, "One of the points at issue is the adequacy of the Turing test." (Searle 1980, p. 6) The phrasing of this position is due to Harnad (2001).
  16. ^ (Harnad 2001, p. 3). Computationalism is associated with Jerry Fodor and Hilary Putnam. (Horst 2005, p. 1) Harnad also cites Allen Newell and Zenon Pylyshyn.
  17. ^ Harnad 2001, pp. 3-5
  18. ^ Chalmers 1996, p. 322, quoted in Larry Hauser's annotated bibliography
  19. ^ Cole (2004, p. 14) attributes to AI researchers Simon and Eisenstadt this view: "whereas Searle refutes "logical strong AI", the thesis that a program that passes the Turing Test will necessarily understand, Searle's argument does not impugn "Empirical Strong AI" — the thesis that it is possible to program a computer that convincingly satisfies ordinary criteria of understanding."
  20. ^ Russell & Norvig 2003, p. 947
  21. ^ Cole (2004, pp. 5-6). He combines the middle two categories.
  22. ^ Searle 1980, pp. 5-6, Cole 2004, pp. 6-7, Hauser 2006, pp. 2-3, Russell & Norvig 2003, p. 959, Dennett 1991, p. 439, Fearn 2007, p. 44, Crevier 1993, p. 269. Among those who hold to this position (according to Cole (2004, p. 6)) are Ned Block, Jack Copeland, Daniel Dennett, Jerry Fodor, John Haugeland, Ray Kurzweil and Georges Rey.
  23. ^ a b Searle 1980, p. 6
  24. ^ a b Searle 1980, p. 13
  25. ^ Cole (2004, pp. 7-9) ascribes this position to Marvin Minsky, Tim Maudlin, David Chalmers, and David Cole.
  26. ^ This is the point of the universal Turing machine and the Church-Turing thesis: what makes a system Turing complete is its ability to do a step-by-step simulation of any other machine.
  27. ^ The terminology "implementation independent" is due to Harnad (2001, p. 4).
  28. ^ Cole 2004, p. 8
  29. ^ Searle 1980, p. 12
  30. ^ Fearn 2007, p. 47
  31. ^ Cole (2004, p. 21) writes "From the intuition that in the CR thought experiment he would not understand Chinese by running a program, Searle infers that there is no understanding created by running a program. Clearly, whether that inference is valid or not turns on a metaphysical question about the identity of persons and minds. If the person understanding is not identical with the room operator, then the inference is unsound."
  32. ^ Searle 1980, p. 7, Cole 2004, pp. 9-11, Hauser 2006, p. 3, Fearn 2007, p. 44. Cole (2004, p. 9) ascribes this position to Margaret Boden, Tim Crane, Daniel Dennett, Jerry Fodor, Stevan Harnad, Hans Moravec and Georges Rey.
  33. ^ Quoted in Crevier 1993, p. 272. Cole (2004, p. 18) calls this the "externalist" account of meaning.
  34. ^ Searle 1980, p. 7
  35. ^ Hauser 2006, p. 11, Cole 2004, p. 19. This argument is supported by Daniel Dennett and others.
  36. ^ Searle distinguishes between "intrinsic" intentionality and "derived" intentionality. "Intrinsic" intentionality is the kind that involves "conscious understanding", such as a human mind has. Daniel Dennett disputes that there is such a distinction; Cole (2004, p. 19) writes "derived intentionality is all there is, according to Dennett."
  37. ^ Cole 2004, p. 18 (where he calls this the "internalist" approach to meaning.) Proponents of this position include Roger Schank, Doug Lenat, Marvin Minsky and (with reservations) Daniel Dennett, who writes "The fact is that any program [that passed a Turing test] would have to be an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge." (Dennett 1997, p. 438)
  38. ^ Dreyfus 1979. See "the epistemological assumption".
  39. ^ Searle 1984. He also writes "Formal symbols by themselves can never be enough for mental contents, because the symbols, by definition, have no meaning (or interpretation, or semantics) except insofar as someone outside the system gives it to them" Searle 1989, p. 45 quoted in Cole 2004, p. 16.
  40. ^ Searle 1980, pp. 7-8, Cole 2004, pp. 12-13, Hauser 2006, pp. 3-4, Churchland & Churchland 1990. Cole (2004, p. 12) ascribes this position to Paul Churchland, Patricia Churchland and Ray Kurzweil.
  41. ^ Hauser 2006, p. 8
  42. ^ Cole 2004, p. 4, Hauser 2006, p. 11. Early versions of this argument were put forward in 1974 by Lawrence Davis and in 1978 by Ned Block. Block's version used walkie-talkies and was called the "Chinese Gym". Churchland & Churchland (1990) described this scenario as well.
  43. ^ Russell & Norvig 2003, pp. 956-958, Cole 2004, p. 20, Moravec 1988, Kurzweil 2005, p. 262, Crevier 1993, pp. 271 and 279. An early version of this argument was put forward by Clark Glymour in the mid-70s and was touched on by Zenon Pylyshyn in 1980. Moravec (1988) presented a vivid version of it, and it is now associated with Ray Kurzweil's version of transhumanism.
  44. ^ Searle predicts that, while undergoing the brain prosthesis, "you find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when doctors test your vision, you hear them say 'We are holding up a red object in front of you; please tell us what you see.' You want to cry out 'I can't see anything. I'm going totally blind.' But you hear your voice saying in a way that is completely out of your control, 'I see a red object in front of me.' ... [Y]our conscious experience slowly shrinks to nothing, while your externally observable behavior remains the same." Searle 1992 quoted in Russell & Norvig 2003, p. 957.
  45. ^ Cole (2004, pp. 12 & 17) ascribes this position to Andy Clark and Ray Kurzweil. Hauser (2006, p. 7) associates this position with Paul and Patricia Churchland.
  46. ^ Searle 1980, pp. 8-9, Hauser,
  47. ^ a b This is how Cole (2004, p. 6) characterizes some of these arguments.
  48. ^ Searle (1980, p. 7) writes that the robot reply "tacitly concedes that cognition is not solely a matter of formal symbol manipulation." Harnad (2001, p. 14) makes the same point, writing: "Now just as it is no refutation (but rather an affirmation) of the CRA to deny that [the Turing test] is a strong enough test, or to deny that a computer could ever pass it, it is merely special pleading to try to save computationalism by stipulating ad hoc (in the face of the CRA) that implementational details do matter after all, and that the computer's is the 'right' kind of implementation, whereas Searle's is the 'wrong' kind."
  49. ^ Note that Searle-in-the-room is a Turing complete machine.
  50. ^ Searle 1980, p. 8
  51. ^ Searle 1980, p. 3
  52. ^ That is, any program running on a machine with a finite amount of memory.
  53. ^ Quoted in Cole 2004, p. 13.
  54. ^ Dennett 1991, pp. 437 & 440
  55. ^ Dennett 1991, p. 438
  56. ^ Cole 2004, pp. 14-15, Crevier 1993, pp. 269-270, Pinker 1997, p. 95. Cole (2004, p. 14) ascribes this "speed" position to Daniel Dennett, Tim Maudlin, David Chalmers, Steven Pinker, Paul Churchland, Patricia Churchland and others. Dennett (1991, p. 438) points out the complexity of world knowledge.
  57. ^ Crevier 1993, p. 269
  58. ^ Churchland & Churchland 1990, Cole 2004, p. 12, Crevier 1993, p. 270, Hearn 2007, pp. 45-46, Pinker 1997, p. 94
  59. ^ Dennett 1997, p. 438
  60. ^ Harnad (2001, p. 7) and Tim Maudlin (Cole 2004, p. 14) both criticize these replies, which are versions of strong emergentism (what Daniel Dennett derides as "Woo woo West Coast emergence" (Crevier 1993, p. 275)). Harnad ascribes this view to Paul Churchland and Patricia Churchland. Kurzweil (2005) also makes this kind of argument.
  61. ^ Searle 1980, Cole 2004, p. 13, Hauser 2006, pp. 4-5. Turing (1950) makes this reply to what he calls "The Argument from Consciousness." Cole (2004, pp. 12-13) ascribes this position to Daniel Dennett, Ray Kurzweil and Hans Moravec.
  62. ^ Russell & Norvig 2003, p. 957
  63. ^ Cole 2004, p. 22, Crevier 1993, p. 271, Harnad 2004, p. 4
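
The "step-by-step simulation" that notes 26, 49 and 52 rely on can be made concrete with a short sketch. The following is purely illustrative and comes from no cited source: a few lines of Python act as a universal interpreter for a one-tape Turing machine given as a rule table (the `run_tm` function name and the binary-increment machine are this example's own inventions). This is the sense in which a Turing complete system, including Searle-in-the-room, can run any program without regard to what the symbols mean.

```python
# Illustrative sketch: simulating an arbitrary one-tape Turing machine
# step by step, the mechanical symbol shuffling the notes describe.

def run_tm(rules, tape, state, blank="B", max_steps=1000):
    """Run a Turing machine. rules maps (state, symbol) to
    (new_state, written_symbol, move), with move in {-1, 0, +1}."""
    cells = dict(enumerate(tape))  # sparse tape, index -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    # Read the tape back as a string, skipping blank cells.
    return "".join(cells[i] for i in sorted(cells) if cells[i] != blank)

# A sample machine: increment a binary number. Scan right to the end
# of the input, then carry leftward.
rules = {
    ("right", "0"): ("right", "0", +1),
    ("right", "1"): ("right", "1", +1),
    ("right", "B"): ("carry", "B", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt", "1", 0),
    ("carry", "B"): ("halt", "1", 0),
}

print(run_tm(rules, "1011", "right"))  # 1011 + 1 = 1100
```

The interpreter never inspects what "0" and "1" stand for; it only matches rules, which is exactly the formal symbol manipulation at issue in the argument.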

[edit] References

[edit] Further reading