Talk:The Emperor's New Mind

For me, the current article misses the mark. The Emperor's New Mind does contain much interesting background material on computation, physics, mathematics, and other topics, but all this background material is simply to prepare the reader to understand Penrose's main argument. The main argument boils down to this: the human brain may exploit certain quantum mechanical phenomena, key to intelligence and/or consciousness, that effectively make the brain's activity uncomputable, and hence beyond the reach of Turing machines/classical computers. To allow for this, Penrose suggests that current models of quantum physics are flawed, and hints at how they might be modified.

Although Penrose's expertise and authority on physics are undisputed, many have found the ideas suggested in The Emperor's New Mind unconvincing and unnecessary, though admittedly plausible. Furthermore, even if Penrose turned out to be right, there is no reason why quantum computers would not be able to exploit the same quantum phenomena that the brain does, and thus become just as intelligent as humans. Thus, The Emperor's New Mind is really an argument against strong AI in classical computers, not against strong AI in artificially created systems.

Regarding the follow-up book Shadows of the Mind: in chapter 2 of that book, Penrose presents an argument that appears to prove that the reader has some insight that a computer could not have. However, there is a subtle mistake in his argument, and I vaguely remember something written by Hofstadter where he succinctly points out the mistake. I can't find a reference to what Hofstadter wrote; however, I think there have been other reviewers of Shadows of the Mind. A quick web search turned up a detailed review by David J. Chalmers at [1] MichaelMcGuffin 12:49, 3 Jul 2004 (UTC)
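
For readers new to this page, it may help to pin down what "uncomputable" means in the Turing sense used in the comment above. Below is a minimal Python sketch of the standard halting-problem diagonalization (Turing's argument, which the book's argument builds on, not Penrose's own reasoning); the names halts and paradox are hypothetical, and the whole point of the sketch is that no genuine halts can ever be written.

    def halts(program_source, input_data):
        """Hypothetical total decider: True iff the program halts on the input.
        Assumed to exist only for the sake of contradiction; no correct
        implementation is possible, which is the point of the argument."""
        raise NotImplementedError("no such decider can exist")

    def paradox(program_source):
        """Halts exactly when program_source, run on its own source, does not halt."""
        if halts(program_source, program_source):
            while True:   # the decider says it halts, so loop forever
                pass
        # the decider says it loops forever, so halt immediately

    # Running paradox on its own source is contradictory either way: if it halts,
    # then halts() returned True and it loops forever; if it loops forever, then
    # halts() returned False and it halts. So no total halts() can exist, and the
    # halting problem is uncomputable; this is the standard sense of "beyond the
    # reach of Turing machines".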

This article doesn't just miss the mark, it is on another planet. Most of the references have to do with the first couple of chapters and are largely irrelevant to the theme of the book. The guts of the book must either not have been read by the author of this article, or have been largely misunderstood. In reference to the discussion that quantum computers could exploit quantum phenomena, as the brain is claimed to: quantum computers are purely computational and deterministic, and have no net gain over classical computers other than raw speed. They simply serve to assist with the practical complexity of computation, but achieve no greater abilities in principle. The quantum nature of quantum computers is not exploiting the same concept as the quantum nature which Penrose suggests may explain human reasoning.
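
On the point that quantum computers gain raw speed but no new abilities in principle: one way to see this (my own sketch, not anything from the book, and it assumes numpy is installed) is that a quantum circuit can be simulated exactly on a classical machine with ordinary linear algebra. The catch is that the state vector has 2^n entries for n qubits, so brute-force classical simulation scales exponentially, which is where the speed difference comes from.

    import numpy as np

    # Classical state-vector simulation of a 2-qubit circuit: a Hadamard on the
    # first qubit followed by a CNOT, producing the Bell state (|00> + |11>)/sqrt(2).
    # Everything here is ordinary matrix arithmetic on a classical machine.
    H = (1 / np.sqrt(2)) * np.array([[1, 1],
                                     [1, -1]])   # Hadamard gate
    I2 = np.eye(2)                               # identity on one qubit
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])              # controlled-NOT, first qubit as control

    state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
    state = np.kron(H, I2) @ state                 # apply H to the first qubit
    state = CNOT @ state                           # entangle the two qubits

    probs = np.abs(state) ** 2                     # Born-rule measurement probabilities
    print(dict(zip(["00", "01", "10", "11"], probs.round(3).tolist())))
    # {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}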

[edit] I seem to recall both emotions and Gödel's incompleteness theorem as well

Yeah, I had similar issues with Penrose's book. I felt that some of the chapters were completely unnecessary (he didn't really connect his famous tiles to the topic at hand), and some chapters were in subjects that were outside his expertise (the chapters on biology were unenlightening).

He also summarized his arguments at the end with 'ask a computer how it feels', which is a pretty inane argument. Don't claim to be making a scientific argument and then get philosophical!

All in all, I didn't find the arguments to be very strong. I guess it's a hard position to take, though. If you state that it is impossible for a machine to be intelligent because it cannot perform x, you then have to define x. Then someone will build a machine that specifically does x. Like playing chess.

Anyway...just my two cents

-t. [note:--just noticed I didn't sign this back when i wrote it. sorry, i was probably still learning the ropes. in any case, this is tristanreid, the same user as below]


  • Penrose does make it fairly clear that if you make a machine that achieves x, you can then apply the same argument to the new machine, so that it cannot do, say, x2. If you then make a machine that achieves x and x2, the same argument again shows it cannot achieve, say, x3. So there is no machine that can ever encompass all of the possible non-computable truths, because for every machine there is an x which it cannot do. It is the existence of such an x for every single sufficiently complex machine that prevents any machine from achieving all x's. Remy B 13:54, 4 December 2005 (UTC)
    • Hi Remy. If I understand your point correctly (forgive me if wrong), it is that it is not necessary to specifically name the tasks which a particular machine cannot perform, since it can be shown that any particular machine will have at least one such task that it is unable to do, and therefore no machine could be sufficiently complex to perform all such tasks. If I've got that right, I think it's a good point! I still don't see, though, that it prevents the same argument from being made about human intelligence. For any given task that is associated with intelligence, I think there exists a human that can't perform that task: reading, mathematical/logical reasoning, chess playing, etc. I think it's interesting to read about people who have sustained brain damage and must relearn how to function without a formerly vital part of their mental capabilities. One more thing, in regard to a previous poster's comments about quantum computers: I think it's probably accurate to say that quantum computers have no advantage over 'classical' computers, if we're debating whether a computer can perform the computations necessary to be classified as intelligent. On the other hand, I think there's something missing from the argument (not yours, the argument in general). I think too much of the AI debate focuses on Turing machines and computability. Has it ever been proved that humans really have some ability that transcends this? Penrose hints that it may be so because of some quantum link, but doesn't really explain what's so special about that link; he just takes it for granted that we definitely have something that computers never could. Tristanreid 19:45, 17 December 2005 (UTC)
      • I think you are close to my point, but it's not quite what I meant. Penrose was not only stating that it can be shown that for every machine there is a truth which that machine cannot prove, but also that each of those truths is one that a human CAN prove. This means that for every machine there is some truth accessible to humans that is not accessible to that machine. Penrose uses this as his basis to state that human beings must be achieving some kind of non-computable process in accessing truths. The reason the AI debate focuses on Turing machines and computability is because AI only deals with computable processes, and Turing machines can achieve any computable process. AI has no capability to deal with non-computable processes because there are currently no known physical processes in the Universe that achieve this. Penrose suggests that further research into quantum mechanics may bring to light the physical processes that the human brain uses to achieve non-computability, but hasn't said that it has to be that in particular. However, he does assert that the fact that humans achieve non-computability means there must ultimately be some form of non-computability in any complete physics model of the Universe, since the human brain follows the laws of physics. Remy B 12:20, 18 December 2005 (UTC)
        • The reason I think the AI discussion focuses too much on computability is not because I didn't understand what the argument was; I just think we've reached the limit of how much such arguments can prove about AI, and I don't know that any new insights are being gained. Wouldn't it be more interesting to try to isolate the things that make us intelligent that computers currently can't do, and use those insights to gain self-knowledge and to enhance computers as a tool? Aside from that (back to computability), I've never seen a proof that humans achieve non-computability, only assertions that humans are definitely not sophisticated Turing machines. But let's say that humans CAN achieve non-computability. If humans have access to some method of non-computability that follows the laws of physics, why could this method never be used to create a computer that doesn't deal only with computable processes? Thanks in advance for any insight you can share with me, I enjoy this type of discussion immensely. Tristanreid 19:32, 18 December 2005 (UTC)
          • Penrose draws the conclusion that humans achieve non-computability from the assertion that humans are not representable as Turing machines. This is because Turing machines can prove all computable truths (i.e. do anything computable), so if there is a truth that humans can prove but no Turing machine can, that truth must be attained using a non-computable process. Considering your other point, you are right that if we could find and master the non-computable physics that the human mind uses, we could indeed build a machine that also reaches non-computable conclusions. The debate exists because AI proponents state that all human reasoning is computable, and that this new physics is not necessary. By this definition any man-made machine that uses non-computable processes would not be considered AI. Remy B 10:51, 19 December 2005 (UTC)
            • There is still no proof that humans can prove any truth that a Turing machine cannot, just an assertion. A conclusion can't be based solely on an assertion. As to the last point, when you say "AI proponents think this", you're not being accurate. I could just as easily say that "AI opponents think that humans could never build something as smart as themselves, because of Gödel's incompleteness theorem", which I've actually heard someone say. It's a strawman argument, ultimately. I'm an AI proponent, and I believe that if cognitive science discovered that there was some aspect of human thought that could only be achieved by using a certain type of physics, we could build a machine using that type of physics. Any intelligence created by man is 'artificial', regardless of what area of physics is used in the underlying process. Why would that area become 'out of bounds'? Further, if physics is used to compute something 'non-computable', hasn't it become computable? Tristanreid 15:34, 19 December 2005 (UTC)
              • Penrose doesn't say that humans can prove ANY truth that a Turing machine cannot, but he certainly believes he has proven that humans can prove SOME truths that Turing machines cannot. He shows this in a much more rigorous and convincing manner in his follow-up book 'Shadows of the Mind'. Penrose uses a variation on the diagonal slash: he assumes the human intellect can be represented as an algorithm (i.e. anything computable) and then demonstrates a contradiction, showing that the assumption must be false (the general shape of that argument is sketched after this thread). As for the statement you have heard about humans never being able to build something as smart as themselves due to Gödel's incompleteness theorem, that only applies to computable machines, because the theorem only applies to formal systems, which are by definition computable. You cannot use it to show that humans can not build a machine that uses non-computable processes, because that is outside the domain of formal systems. On the next point, if you say you are an AI proponent but would allow AI to include non-computable processes, then I guess we just have a difference of definition. My general reading of the AI community is that they follow the stricter definition that I also use, which is that AI only encompasses computable processes, but that's only semantics and doesn't really change anything. On your last point, Penrose does mention (I think in Shadows of the Mind) that human access to non-computable physics cannot be as simple as considering that physics to be an Oracle machine, because then it does indeed become computable. My interpretation of what he said on that point is that he doesn't have a good answer to your concern, but he believes it is inevitable that the concern will be answered, because he has proven that humans are doing *something* non-computable, even if we can't yet define exactly what that is or how it makes sense. I think the philosophy of a non-computable mind is in its very early infancy, and there is a long way to go before we can get and comprehend all of the answers. Remy B 06:31, 20 December 2005 (UTC)
                • Sorry, I should have typed that more clearly. When I said ANY truth, I meant "ANY AT ALL". In other words, I still don't see that Penrose has shown that humans are doing something non-computable. If you know the substance of Penrose's argument against an algorithmic representation of human intellect, I'd love to see it; I read SOTM when it first came out but don't remember the details very well. I do remember him (and others) using Cantor's diagonal slash to demonstrate the halting problem, as an example of non-computable problems, but I don't remember an example or proof that human intellect is different, beyond his argument that a computer could never be creative. To stay focused in this discussion, I'll concede what you said about limiting the AI discussion to computability in general. That's what my original point was about: I've always felt that the AI discussion should be expanded to include any physically deterministic process that could conceivably be used to create an artificial intelligence, but I also think there is a lot of value in trying to figure out how human intelligence works. Tristanreid 17:01, 20 December 2005 (UTC)
                  • I would like to have a go at my own wording of what I consider the convincing reasoning that Penrose made for the non-computability of human intellect. Maybe that would justify its own Wikipedia article if it were written as an NPOV article rather than an essay? I'll definitely leave a note on your user talk page if I do get around to doing that. Remy B 17:32, 20 December 2005 (UTC)
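
For anyone who wants the shape of the argument being debated in this thread without digging out the books, here is one schematic paraphrase of the Gödel-style (Penrose-Lucas) reasoning. This is a sketch in my own wording, not a quotation from either book, using the usual notation F for a formal system and G(F) for its Gödel sentence:

    % Schematic paraphrase of the argument discussed in the thread above.
    \begin{enumerate}
      \item Suppose human mathematical reasoning is algorithmic, i.e.\ captured by a
            consistent formal system $F$ whose theorems are exactly the statements a
            human mathematician can, in principle, establish.
      \item By G\"odel's first incompleteness theorem there is a sentence $G(F)$,
            informally ``this sentence is not provable in $F$'', which $F$ cannot
            prove, yet which is true provided $F$ is consistent.
      \item A human who accepts that $F$ is consistent can therefore see that $G(F)$
            is true, i.e.\ can establish a truth lying outside what $F$ proves.
      \item This contradicts step 1, so (on this argument) no such $F$ exists and
            human mathematical insight is not wholly algorithmic.
    \end{enumerate}
    % The standard objection, in the spirit of the Hofstadter and Chalmers reviews
    % mentioned near the top of this page, is that step 3 assumes the human knows
    % that $F$ is consistent (and that $F$ really captures their reasoning), which
    % is exactly what is in dispute.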

[edit] US or Brit spelling?

Since this article is about a British writer, should it use British English, hence "modelled" rather than "modeled"? I'm not so fussy as to actually go ahead and make the change and tread on anyone's toes, just interested how the language policy is generally applied in this kind of situation. — PhilHibbs | talk 16:15, 24 October 2007 (UTC)

[edit] WikiProject class rating

This article was automatically assessed because at least one WikiProject had rated the article as start, and the rating on other projects was brought up to start class. BetacommandBot 04:29, 10 November 2007 (UTC)