Talk:Artificial consciousness/Archive 10
Comment
The link to the Enticy Institute on the article page is a link to a patent nonsense site. Matt Stan 14:09, 2 May 2004 (UTC)
- Yep, don't appreciate it either. Is it really an institute? Tkorrovi 16:22, 2 May 2004 (UTC)
"Supposed Experts" in Artificial Consciousness
Let us attempt to reach consensus on who are the leading proponents of artificial consciousness and its relation to artificial intelligence. OK? Matt Stan 08:44, 26 Apr 2004 (UTC)
Various suggestions (please add to list):
- Douglas Hofstadter - consciousness is a central interest as is AC
- Daniel Dennett - wrote Consciousness Explained and deals with AC
- Roger Penrose - denies AC though most scientists dismiss his reasoning
- Thomas Nagel - subjective experience
- Igor Aleksander - AC
- Owen Holland - AC
- Rod Goodman - AC
- Sam S. Adams - AC - Joshua Blue project, IBM Research
- Gerald Edelman - Nobel prize winner, respected ideas on the theory of mind
- Cynthia Breazeal
- Stephen Jones [1] [2]
- But Hofstadter, Dennett, Penrose, and Nagel are philosophers of AI or philosophers of mind. They use the term "AI" or "consciousness", but not "artificial consciousness". I don't really have information on the others. Wikiwikifast 16:05, 26 Apr 2004 (UTC)
- AC can perhaps be seen as a part of AI or as a part of Consciousness. Interestingly Wikipedia's article on Hofstadter says he is interested in consciousness not intelligence. Readers of his work will know, of course, that he is interested in both. But in his case it may be wrong to say that he is interested in AC only as a part of AI. I believe the same comments apply to Dennett who wrote a book entitled Consciousness Explained, not a book called Intelligence Explained. Penrose is a celebrated mathematical physicist, but a philosopher of dubious importance. Paul Beardsell 22:16, 26 Apr 2004 (UTC)
- I included Nagel only because his subjective experience concept has importance for AC. Dennett talked about what should be considered AC, but the same thing (AC) is unfortunately often referred to under different names, which may not be proper. The only importance of Penrose is that he denies AC (but not all subfields of AI); Hofstadter writes a general philosophy similar to Dennett's, which also touches AC. At present Igor Aleksander, Owen Holland and Rod Goodman work on a big project to create a conscious robot http://www.guardian.co.uk/uk_news/story/0,3604,1028776,00.html Tkorrovi 19:13, 26 Apr 2004 (UTC)
AC and Strong AI
- How about moving 'artificial consciousness' over to strong AI instead? Wikiwikifast 16:19, 26 Apr 2004 (UTC)
I disagree: strong AI is a limited proposition about information processors (digital computers/Turing machines), whereas artificial consciousness is about any type of machine, including those based on a 20,000-gene set of DNA strands.
- There is no strong AI article; the link refers to the AI article, where strong AI is also mentioned. And there is no strong AI theory; there are also no strong AI projects and no strong AI programs; it is usually mentioned only in comparison to weak AI. As far as I know, so far only AC is something that is theoretically supposed to perform what strong AI is supposed to, but it is not the same as strong AI, and is likely more genuine compared to consciousness than strong AI. There is no place in the AI article where information about AC can be written; also, why can this approach not be a separate article when the subfields of AI are? If there were enough to say about strong AI, then it could be a separate article as well; the reason it was not a separate article was probably just that nobody knew what to write about it, since it is not determined at all what strong AI is. Tkorrovi 19:30, 26 Apr 2004 (UTC)
- Another tiresome set of assertions wrongly presented as hard fact. There is a strong AI theory, there are strong AI projects, the field of AI is vehemently split between believers of the weak and strong AI theories, strong AI is not only mentioned in contrast to weak AI. No one who has read widely could believe otherwise. Paul Beardsell 22:16, 26 Apr 2004 (UTC)
- Then name one strong AI theory, or one strong AI project, or even one good strong AI definition. I have read AI forums and followed AI for a long time, and saw that it's not clear to almost anybody what it is. Tkorrovi 01:10, 27 Apr 2004 (UTC)
- Now you deny even a definition! If I provide an example will you admit you are wrong? Paul Beardsell 11:23, 27 Apr 2004 (UTC)
- I think the science is plain: AC can be real/genuine/true consciousness. This is a minority opinion on this talk page but is nevertheless a popular view among computer scientists and philosophers of mind. As such I think AC fits into the Strong AI section of the artificial intelligence article rather well but I think a separate article is a better idea. Those here who (in defiance of the Copernican principle and Occam's razor and the Church-Turing thesis) think that AC can only ever be a simulation of real/genuine/true consciousness can not, in my opinion, be happy with including AC into Strong AI because Strong AI is real/genuine/true intelligence - not a simulation of it. Paul Beardsell 22:16, 26 Apr 2004 (UTC)
- It may be plain to you, but at least it's not that plain to Thomas Nagel; also, without understanding or considering everything, things look much simpler than they are. Tkorrovi 01:24, 27 Apr 2004 (UTC)
- Even if AC does turn out to be a branch of AI, I think there is still room for an article to clarify that point - separate from the AI article. If it does turn out that AC can be developed as an end in its own right, as I had always assumed, then there is even more point in its having a separate article. I have no great learning in this field, and I come here primarily to learn. Can we have summaries of the main proponents' arguments about consciousness (as distinct from intelligence) in the main article, please? My offerings, for what they are worth, are as follows:
- 1) There isn't, from what I have gleaned, a project to pursue the development solely of a machine implementation of consciousness for its own sake, or even as part of some other purposeful endeavour;
- I think for many researchers AC is the holy grail. Growing up with Lucy by Steve Grand is perhaps not an example with much promise but that is surely his goal? Paul Beardsell 09:45, 30 Apr 2004 (UTC)
- "Non-disciplinary" philosophy of "general purpose building blocks," no software "yet". Like building a house from bricks. Tkorrovi 15:25, 30 Apr 2004 (UTC)
- 2) I do not agree that there should necessarily be a strict dichotomy between weak and strong consciousness - the distinction certainly doesn't arise with natural forms of consciousness, and therefore a coherent definition of AC per se is required before we can perhaps make distinctions that were developed for the AI Topic;
- If by strong and weak you mean genuine/real/true and non-genuine/simulated/pretend then, of course, the distinction does not arise in natural consciousness. Except when we pretend to be asleep! Paul Beardsell 09:45, 30 Apr 2004 (UTC)
- 3) I can imagine implementations of AC in various contexts that might be considered just as art - if they have no obvious function - or as entertainment. Though I agree about Wikipedia not being primary research, it does have the propensity to make connections (links) between subject-matter that doesn't occur anywhere else. An article that bridges the gap between what SF writers imagine in their stories and what is both theoretically and practically possible from an engineering perspective is a useful endeavour and doesn't, I think, conflict with what Wikipedia is about. Matt Stan 00:41, 27 Apr 2004 (UTC)
- Yes! You can buy solar powered (non-flying) decorative butterflies using muscle wires on some robotics sites. I reckon with a bit of tweaking they could have a genuine consciousness in excess of the most advanced thermostat. Paul Beardsell 09:45, 30 Apr 2004 (UTC)
- As far as I know, no scientist ever argued that a thermostat is conscious. The thermostat is an example Chalmers gave, not because he himself thinks that a thermostat is conscious, but as a reductio ad absurdum of Lloyd's argument that connectionist models might shed light on the subjective aspects of consciousness described by Nagel. "On the face of it, this approach is put forward as a way of dealing with Nagel's worries about consciousness, where the central mystery is: why is there something it is like to be us at all? There is a huge prima facie mystery about how any sort of physical system could possess conscious experience. Lloyd holds out the promise that connectionist models might shed light on this question, but at the end of the day the models seem to leave the key explanatory question unanswered. Even if we were to go out on a limb and suppose that these simple systems are conscious, the question of explanation would still remain untouched." http://jamaica.u.arizona.edu/~chalmers/notes/lloyd-comments.html (article by Chalmers). Tkorrovi 16:33, 1 May 2004 (UTC)
- Even this selective quote does not say quite what Tkorrovi seems to think. See below discussion in Thermostat section. Paul Beardsell 12:15, 3 May 2004 (UTC)
The reason there is no "strong AI project" in the way you think of it is that the real work done in "strong AI" is in philosophy. You can be an engineer and say, "Since no one is creating strong AI, I will do it!" But how do you start? What makes your project not weak AI? Obviously, we haven't figured out what is necessary or sufficient for consciousness.
Weak AI is important for strong AI because it serves as inductive evidence for consciousness/intelligence. That is, if we can make something that seems conscious/intelligent, there is a possibility that it is truly conscious. If we can't even make something that exhibits intelligent/conscious behaviour, then that it is truly conscious is out of the question. Hence, the idea behind the Turing Test is worth mentioning.
Lastly, Dennett, Nagel, etc. are not "experts" of AC/strong AI in that what they say must be true. Philosophy of mind and AI is an ongoing debate, and what they say is only one side of the picture. Characteristic of philosophy, there are many competing theories that exist.
--Wikiwikifast 10:33, 27 Apr 2004 (UTC)
- Whilst AI seems to have been dominated by philosophers and its theories disappearing up their own arse (i.e. not leading to particular progress in the engineering field), research into consciousness is focused in psychoneurology, in particular using scanners to correlate perceptual and cognitive activity with neural activity. There may one day be (if there isn't already) a mental map based on this research that equates to the human genome project (already completed). My suggestion is that artificial implementations that attempt to simulate the operation of conscious processes, as evinced from such neurological research, are the path towards AC actualisation. This will leave philosophy way behind. I am not concerned with the philosophers' views on what constitutes consciousness or not - they'll never reach a consensus, just as they've never reached consensus even on logic or ethics. However, if I can produce and sell a machine whose advertising claims that that machine is conscious, or even artificially conscious -- and that claim isn't disallowed by the Advertising Standards Authority -- then I'll be able to claim strongly that my AC is real AC. Some implementations that I've considered as candidate AC products are:
- A conscious bus stop that recognises the people who regularly stand next to it, welcomes them, tells them when their bus is due, introduces them to each other, shares in their complaints about the bus service, answers questions by sending voice-recognised text to Google, plays music, replays horrifying images of muggings that have taken place near it, etc, etc.
- (Related to the previous) A conscious policeman on the beat that collects evidence, helps the public in simple ways like answering questions about the way to the nearest bus stop, and never wants promotion
- A conscious sexual partner that never thinks, never complains, and whose heuristic is designed to ensure increasing satisfaction the more it is used. This is perhaps the most promising idea in that there is already a well-established market for sex toys
- A conscious (and articulated) computer screen combined with webcam, upon which appears an animated representation of a human (which a camera on this or on other instances had previously observed and which uses morphing software to drive its images), which proactively interacts with its user (partner?) and takes on personae which are effectively caricatures of people with whom it has interacted previously. (Of course one could buy characters, much like people buy mobile phone ring tones, and people who had used the product to the extent required for the machine to build virtual representations of them could sell their characters. A useful pastime and money-spinner perhaps for old celebrities)
- A conscious driving assistant (already marketed by BMW on its latest model) that keeps your car going in the right speed and direction even when the actual driver becomes inattentive
- A conscious dinner party host that ensures guests' glasses are filled, coordinates the timing of the cooking, makes trivial conversation (AI component required here, perhaps), and interrogates bus stops across the internet to trace late-comers
- If any of these ideas were freshly patentable then they aren't now! They are in the public domain on wikipedia. And I don't think you need a philosopher to tell you whether they are implementable. Matt Stan 08:06, 1 May 2004 (UTC)
- As I said before, the results of scanning neural activity http://www-inst.eecs.berkeley.edu/~cs182/readings/ns/article.html suggest that awareness is awareness of processes. The human genome project is completed only in the sense that the DNA sequences are mapped; the function of only a very few fragments of DNA is known. The complexity of that problem is so huge that it's not feasible to understand everything that way, nor probably by scanning the whole neural activity. In addition, such scanning only shows when neurons fire, but nothing about what happens inside the neuron, and a neuron may be as complex as a computer. And even if we could do all that, just having a map would not by itself explain anything. There is also far from enough information in DNA for all the brain activity that even a child has, which shows again that brain activity is not pre-programmed but is a result of learning. What we must understand is how that happens, not what *exactly* is in the brain. AC is useful for providing an experimental method to test philosophy. I thought at first just about a regulator that could model processes based on its input and predict how they would develop: natural processes, such as the friction of the tyres of your car, which depend on so many things and cannot be uniformly modelled. Tkorrovi 15:54, 1 May 2004 (UTC)
- "What makes your project not weak AI?" "Objective Less Genuine AC" should do that. Well, "Genuine AC" should do that as well. Not sure though how Paul interprets it, and why he said that thermostat can considered to be "Genuine AC", concerning that you should ask him, I don't support "Genuine AC" because I don't think that it would be ever possible to model consciosness completely. Tkorrovi 09:40, 28 Apr 2004 (UTC)
You use project and expert in a restrictive way which suits your argument. Granted that, I agree with what you say. Please do not misunderstand me: I never said the problems are solved and now all we need is enough Meccano to construct Sentient Expert Mk 1. Paul Beardsell 11:13, 27 Apr 2004 (UTC)
- The 'you' in my first paragraph was directed at Tk, and my comments about projects and experts were directed more at him as well. I mostly agree with you (PB) when you said that there are strong AI projects, etc., but I was addressing the restrictive sense of project as in an engineering project, which was what I took Tk to mean. Wikiwikifast 00:35, 28 Apr 2004 (UTC)
Obviously there isn't a Strong AI project like the Boeing 7E7 engineering project. How many research grants, how many funded scientists do there have to be before a project is admitted? There are, of course, many Strong AI research projects. And each computer scientist / philosopher of mind engaged in Strong AI has a theory, or pretends to have one, to get his/her funding. The theory is not established like, for example, Special Relativity, but to say there is no theory (or rather no set of candidate theories) is just to redefine the word. Paul Beardsell 11:40, 27 Apr 2004 (UTC)
- Yes, there are engineering projects that are created for a purpose other than making something useful for humans. But calling them 'strong AI' projects makes it sound like there are conscious robots out there. However, people have created robots that act like insects that can move around autonomously, etc. Wikiwikifast 00:35, 28 Apr 2004 (UTC)
Upon reconsideration, I agree that AC should remain a separate article and should not be merged with AI nor moved to Strong AI. Wikiwikifast 03:20, 28 Apr 2004 (UTC)
The concept of machine
Daniel Dennett compared the mind to a machine, but the problem is that he never said what he means by machine (if you find where he did, then please say). A machine is usually interpreted as something human-made or something humans can make. Then there are virtual machines, like the Turing machine, which are theoretical concepts and cannot always be built (like a Turing machine with an endless tape), or which we cannot implement because it's impossible to obtain all the necessary information to implement them. So machine and virtual machine are not the same. We can say that the mind is equivalent to a certain virtual machine that satisfies the Church-Turing thesis, which it certainly is, but this is not the same as saying that the mind is a machine. We may call such a theoretical machine a Church-Turing machine, to name it somehow. Then we may say that the mind is some kind of Church-Turing machine. We cannot follow Dennett's logic as long as we don't know what he meant by machine, especially when the question lies exactly in the difference between consciousness and a machine. So this is a theoretical problem, I hope that is understood. Tkorrovi 11:41, 28 Apr 2004 (UTC)
Fortunately for Dennett the Church-Turing thesis says that all machines are equivalent in computing ability except for speed and memory capacity. So Dennett does not have to say what type of machine he is talking about. One idealised computing machine, the Turing machine, is shown by Turing to be equal or superior in capability to other computing machines. A Turing machine is of course implementable (and there are examples on the web) except for the infinite memory size. But any program which runs in finite time can only access a finite amount of tape. The human being has a finite life time and so doesn't need an infinite tape. According to the Church-Turing thesis, any computer can mimic any other. The unavoidable conclusion is: If your Sinclair ZX-80 cannot mimic the human being, replace the C30 cassette tape with a C60. Of course, there is also the problem of writing the software. Paul Beardsell 12:02, 28 Apr 2004 (UTC)
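To make "implementable" concrete, here is a minimal Turing machine simulator; a sketch only, where the names and the flip-the-bits transition table are illustrative toys of mine, not anyone's proposed model of mind. The tape simply grows on demand, which is harmless: any halting run touches only finitely many cells.

```python
def run_tm(transitions, tape, state="start"):
    """transitions maps (state, symbol) -> (new_state, written_symbol, move)."""
    cells = dict(enumerate(tape))            # sparse tape, grows on demand
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")        # "_" is the blank symbol
        state, cells[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Toy program: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "10110"))  # -> 01001_
```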
So, are you saying that humans are not machines? That there is something spiritual about them: a soul or a magic spark? Or are you questioning the Church-Turing thesis? Or are you questioning its applicability to humans on the basis of new physics? Or are you saying that the software cannot be written? Paul Beardsell 11:46, 28 Apr 2004 (UTC)
- Are you saying that humans are machines? The argument for strong AI goes:
- (1) Given that the mind is the software/hardware brain, and
- (2) Given the Church-Turing thesis,
- (3) The possibility of Strong AI must be accepted.
- I do not disagree with (2), but (1) is questionable. --Wikiwikifast 02:37, 5 May 2004 (UTC)
- Yes, that humans are but machines is a very common view amongst many/most scientists and many/most philosophers. Even Penrose agrees with (1) but he disagrees with (2). See functionalism. But the point is not whether you or I agree with either (1) or (2) and of course you can logically disagree with either but that if you accept them both then the conclusion (3) is unavoidable. Paul Beardsell 02:48, 5 May 2004 (UTC)
- Many/most scientists and philosophers believe in some form of materialism, but I don't think many/most believe that the mind is a machine qua symbol manipulation. Wikiwikifast 04:03, 5 May 2004 (UTC)
- If I change "many/most" into "many" then the point is incontrovertible: Many scientists and philosophers do believe that the hardware/software brain is a computing machine.
- My point is that the mind is not necessarily a virtual machine. Wikiwikifast 04:08, 5 May 2004 (UTC)
- If by "virtual" you mean not-real then I agree with you. But I think we disagree because many do believe the mind is really the software/hardware machine. That I strongly believe so is hardly pertinent! Your position is, of course, not an unusual one: Most religions are on your side. Paul Beardsell 11:43, 5 May 2004 (UTC)
What??! By virtual machine I mean a finite state machine or Turing machine. The mind may not be analogous to concepts of software/hardware, because it is controversial that cognition consists of merely symbol manipulation—symbol manipulation as in the reading and writing of discrete tokens such as 0 and 1. In the connectionist framework, cognition isn't transformation of discrete symbols, but rather patterns of activity. Wikiwikifast 14:02, 5 May 2004 (UTC)
I think you may be redefining "virtual machine". The view you express, if backed up by appropriate citings, should go in the article. But others think (and I reckon I can find references to back this up) that "patterns of activity" is just like the flashing of lights on the panel of a 1970's computer. What is going on, underneath the patterns, is symbol manipulation. Paul Beardsell 22:22, 5 May 2004 (UTC)
- "Virtual machine" I took from Tkorrovi's post, the first one in this section. Connectionist nets are modelled after neurons in the brain. The firing of one node activates certain other nodes in its vicinity, depending on the connection strength between it and a neighboring node, which is supposed to simulate how neurons work. In neural nets, I believe connections are given numerical weightings, e.g. 0.55. I think the numbers aren't supposed to be discrete, but are theoretically supposed be real numbers (i.e. continuous). If you accept this model, cognition cannot be symbol manipulation, because there are no discrete tokens like either 0 or 1, but rather, it uses the set of real numbers (of which there are infinite, and have no discrete divisions). Hence, if you accept this model, the mind cannot be a Turing machine. Wikiwikifast 00:12, 6 May 2004 (UTC)
- I think the use of "virtual" in this context is a red herring. As a software professional, I have always understood a virtual machine to be an implementation of a machine within another machine, e.g. the Java Virtual Machine within the Windows operating system (machine). Now whilst it might be worthwhile to think of "mind" as a virtual machine within the "body" machine, I do not think that is the intention here and I don't think that analogy holds anyway. It is more likely, I think, that Tkorrovi looked up "virtual" in the dictionary and thought that it would be useful to describe mind as "a bit like a machine", or "virtually a machine", which is of course misleading anyway, as we are aiming to indicate whether mind is a machine at all, and as Wikiwikifast points out, from a scientific materialist perspective, it is. The salient question, as Wikiwikifast again correctly identifies, is to ask what sort of machine the mind is, because that will throw light on whether it can be emulated using the methods available to us. Matt Stan 06:49, 6 May 2004 (UTC)
- "Virtual machine" is another of those compound nouns the meaning of which can be perfectly well understood from each of the constituent words. It also has its own article. The first usage of that term in this section is incorrect. Paul Beardsell 09:39, 6 May 2004 (UTC)
Do real numbers exist in nature? This is an interesting philosophical question. Even distance, we are told by the quantum theorists, is discrete. Maybe only rational numbers exist. Certainly our vision is digital and discrete. The pixels are close together and we do not see pixellation - there must be smoothing happening in software - but the retina consists of discrete rods and cones. In the connectionist nets the values assigned to the nodes are not chosen from the whole set of real numbers but from a subset of the rationals: This is a proven limitation of all computers. As the firing of a neuron in the brain must involve an integer number of electrons we also know that not all real numbers can be represented at the synapse junctions. The neuron cannot be set to fire after the transfer of a fractional number of electrons. The brain does not process real numbers. Brain processing is chemical and electrical. Everything is a multiple of a number of molecules or a multiple of a number of electrons. Brain processing is discrete. Paul Beardsell 01:17, 6 May 2004 (UTC)
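The claim that machine "reals" are really a finite subset of the rationals can be seen directly; a small demonstration (the weight 0.55 is just the example used above):

```python
from fractions import Fraction
import math

x = 0.55                  # a typical connection weight, stored as a double
print(Fraction(x))        # 2476979795053773/4503599627370496: an exact rational
print(math.nextafter(x, 1.0))  # the next representable value (Python 3.9+);
                               # there is a finite gap, so no continuum here
```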
See philosophical implications of the Church-Turing thesis. Is the Universe computable? Paul Beardsell 01:35, 6 May 2004 (UTC)
- You have a good point. However, I just wanted to point out that not everyone believes in symbolicism.
- symbolicism - An approach to understanding human cognition that is committed to language like symbolic processing as the best method of explanation. ... The commitments of symbolicism have been challenged by connectionist research and most recently the dynamical systems theory approach to cognition. - Dictionary of Philosophy of Mind
- See also John Lucas' Minds, Machines and Gödel for an argument against the claim that the mind is a Turing machine. Instead of debating incessantly over talk pages, I should probably use my time constructively by contributing to the AI and cognitive science articles... Wikiwikifast 02:38, 6 May 2004 (UTC)
- As to what type of machine the brain is, this is settled for most scientists. Turing showed that anything which does symbol manipulation is no more powerful than the Turing machine. And anything passing about discrete lumps of stuff (e.g. electrons and molecules) is doing symbol manipulation in the computer science sense. It is a finite state machine. All finite state machines are no more powerful than the Turing machine. Proven computer science result. It's maths. No argument. The onus is on the dissenters to explain what is wrong with any of this reasoning. Penrose understands this and attempts to do so, but established science finds serious flaws in his arguments. But Penrose understands what it is he has to show; most other dissenters ignore the scientific method. That is what is happening here. Paul Beardsell 09:39, 6 May 2004 (UTC)
- The cognitive science article would indeed be a better place for that. This is not necessary for AC, considering Thomas Nagel's argument that subjective experience cannot be reduced (modelled), which is included in the article. Considering that, it doesn't matter whether subjective experience is physical. I think it's physical and Nagel thinks it's physical, but this doesn't matter for AC. Tkorrovi 08:59, 6 May 2004 (UTC)
- If the mind is no more powerful than the Turing machine then what accounts for subjective experience? A magic spark? The soul? Penrose quantum quackery? Maybe Nagel is wrong. Paul Beardsell 09:45, 6 May 2004 (UTC)
It (that the mind is a machine) does matter for genuine/true/real/strong AC. Obviously. Paul Beardsell 09:39, 6 May 2004 (UTC)
All machines are equivalent in computing ability except for speed and memory, but there is a difference between a virtual (theoretical) machine and a man-made machine, also in that it may not be possible to obtain all the necessary information to make the machine, even though theoretically some type of virtual machine could implement it. Please read Thomas Nagel's "What is it like to be a bat" [3] to get an idea. Tkorrovi 12:19, 28 Apr 2004 (UTC)
I have read the book*. I have demonstrated the equivalence of the Sinclair ZX-80 and the human being (except for speed and memory capacity). Neither of these are theoretical machines. Trying my best to interpret what you have said I think you have chosen the last of the presented alternatives: You think the software cannot be written. Paul Beardsell 12:28, 28 Apr 2004 (UTC)
- (*)I have read the book "The Mind's I" in which Nagel's ideas are quoted at length and the bat essay may appear in its entirety. The critique of those ideas which appears in that book, and especially how they are contrasted with other ideas, is interesting and entertaining. But, I suggest, not entirely relevant to the narrow point we are trying to resolve here. Paul Beardsell 12:43, 28 Apr 2004 (UTC)
- Yes, it may not be entirely relevant. It is said in the article Thomas Nagel that "While many philosophers of mind and cognitive neuroscientists accept the fundamental distinction between the subjective and the objective, they often have not accepted Nagel's dismal conclusions." This is almost exactly what is relevant for AC. But concerning dismal conclusions, maybe Nagel is not so well understood: he doesn't deny the reduction of what is objective. Tkorrovi 12:56, 28 Apr 2004 (UTC)
Whether written or taught, we cannot make sure that it can be developed so that it implements consciousness completely, but we can develop it so that it implements consciousness partly, if we omit nothing except what is not objective. Tkorrovi 12:39, 28 Apr 2004 (UTC)
OK, I cannot deny that view but that is just one way things might work out. At some point the built-consciousness might reach a critical mass, transfer itself to a local super-computer cluster of 100,000 Trituim 86666MHz processors running MacOS XII, attach several Terabytes of NAS memory and, changing gear, evolve a consciousness which is a superset of human consciousness. But, please, neglect this flight of fancy for the moment. I return to your view: Will your partially implemented consciousness be a genuine manifestation of consciousness or not? And, if not, why not? Paul Beardsell 12:52, 28 Apr 2004 (UTC)
- The problem is that whether it is genuine consciousness or not can never be found out. Because of that, as science must be objective, in scientific terms we must say no. But there is a possibility that it may one day seem very genuine, especially if implemented on a quantum computer; it may even exceed some human abilities, as in theory there seems to be nothing that restricts such a system. And then one day we may believe that this is genuine consciousness, but there would be no scientific way to find out; all we could test even in that case is that it is artificial consciousness. Tkorrovi 13:13, 28 Apr 2004 (UTC)
- But, as an aside, and this is another fact to trip up Penrose, the Church-Turing thesis still applies to the quantum computer. Paul Beardsell 13:47, 28 Apr 2004 (UTC)
- Sure, I also think that the Church-Turing thesis applies, and I think it's not Gödel's theorem that is wrong, but rather the way Penrose uses it. This question has been discussed endlessly on the Internet. Tkorrovi 20:59, 28 Apr 2004 (UTC)
Dennett deals with this issue well in both The Intentional Stance and Consciousness Explained. Essentially, he denies the human any special place in the universe: he says we grant another human consciousness because he says he is conscious, and that it is mere arrogance to deny something else consciousness if it acts as if it is conscious and if it claims it is conscious. He makes the point better than I do, doubtless. Paul Beardsell 13:24, 28 Apr 2004 (UTC)
As I understand it, he is saying that you fail to apply your own use of subjective and objective to humans themselves. Subjectively, you say you are conscious. Objectively, how do I know? Paul Beardsell 13:26, 28 Apr 2004 (UTC)
Yes, the only reason we can say that we are conscious is that we are humans, and capable humans are considered to be conscious when they don't happen to be in a coma. We cannot even completely compare our consciousness to that of somebody else. This doesn't mean that we are better; this is just how we determine consciousness. Maybe dolphins are better than we are, but we can never know. Tkorrovi 13:40, 28 Apr 2004 (UTC)
Except that dolphins act as if they are conscious, they do all but say they are conscious. I have seen a conscious dolphin, I am sure. Neither you nor I can be sure that the other is conscious: Indeed, for all you know I have tapped this out on a panel in my tank. [[User:Flipper|Flipper]] Paul Beardsell 13:47, 28 Apr 2004 (UTC).
Counter-example
Oliver Sacks describes in his book The Man Who Mistook His Wife For A Hat the case of autistic-savant identical twins who were able to compute large prime numbers within a timescale that was several orders of magnitude less than what would be required by the most powerful computer to perform the same feat [4]. These twins were not articulate enough to describe the methods they used, and I think their ability remains a mystery twenty years later. They certainly did not have the mathematical ability to compute prime numbers using any standard algorithm such as the sieve of Eratosthenes - they could not even do simple multiplication. Such pathologies as Sacks describes raise questions about the operation of the brain that could possibly be used to counter the Church-Turing thesis, or at least to question it until a viable explanation had been found. The twins in Sacks' story seemed to have a particular consciousness of primeness (and not much else besides) and savoured each new prime number they encountered as one might savour a newly discovered vintage wine. Sacks mentions a number of examples of enhanced and deficient mental abilities that surely throw some light on our conception of consciousness, and which perhaps should be taken into account in our discussions here. Matt Stan 07:13, 6 May 2004 (UTC)
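For reference, the standard algorithm the twins certainly were not running; a textbook sieve of Eratosthenes (the limit of 30 and the function name are just for illustration):

```python
def sieve(limit):
    """Return all primes up to limit using the sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False  # cross out every multiple of n
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```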
But you invent magic. We do not know how we remember the spelling of "Eratosthenes" but we do not invent magic to explain it. The scientific method does not say "invent science to support your prejudices". Paul Beardsell 09:34, 6 May 2004 (UTC)
Experiments on primates have shown that there is an area of the brain concerned with visual cognition that does pattern-matching and which, interestingly, operates at a level of resolution (topological complexity, to be more precise) commensurate with alphabetic and other writing system symbols (e.g. Chinese). This is used to show that our brains were not designed for reading, but that we have innate capabilities (shared with other primates) that lend themselves to being able to interpret writing. So our spelling ability is capable of being understood at an elementary level. However, the recognition of prime numbers is held to be a difficult feat to perform algorithmically, and is indeed the basis of modern cryptography. Therefore if we cannot explain how certain individuals' brains can divine prime numbers, surely that is a good argument to suggest that the Church-Turing thesis may not hold in all circumstances when it comes to understanding consciousness, as I suggest above. If we could explain it then there is the possibility that cryptography would founder. Matt Stan 09:52, 6 May 2004 (UTC)
Firstly, you overstate the ability of the idiots savants. They could not do any prime number problem. Related problems, e.g. ones which a mathematician would be able to work out if they knew what the twins knew, stumped the twins completely. You have managed to remember and store away a large vocab, to store away a fairly detailed map of London, to remember many facts about scores of people. These idiots savants did none of that. The storage capacity of the brain is known to be enormous. Yet I can store a lot of prime numbers on my laptop. No one is saying that the brain is not remarkable. But no new science should be invented until necessary. It is difficult for people to appreciate they are not God's chosen ones. We feel special. A more plausible explanation than the brain not being finite state sub-Turing is that it's drugs in the water that make you think this way. No new science required for that. Paul Beardsell 10:03, 6 May 2004 (UTC)
A much better counterexample is the finite state machine known as Beethoven. Paul Beardsell 10:06, 6 May 2004 (UTC)
- Please would you provide a reference to Beethoven in this context, unless you mean the composer. If the latter, why should the ability to produce musical patterns be cited as a counter-example for the Church-Turing thesis in relation to mind? Matt Stan 10:38, 6 May 2004 (UTC)
- I agree with you! Beethoven, even though he was much more impressive than Sacks's twins, is no evidence for humans being more than FSMs. Paul Beardsell 10:47, 6 May 2004 (UTC)
In the story that Oliver Sacks tells (and he was a neurologist invited to examine the twins), at first no one was aware that the twins were savouring prime numbers. He listened to them, noted down the numbers they mentioned, and took his notes to a mathematician who indicated that they were all primes. Sacks then wrote down some more primes - bigger ones - and re-joined the twins. He mentioned his primes and the twins paused to consider them. They had never encountered such large primes and were delighted with the new player (Sacks) in their game. I forget how large they went in terms of the numbers mentioned (and the twins did take longer to recognise larger primes), but I think it's fair to say that it's unlikely they had memorised a list previously presented to them. They also, incidentally, had the ability correctly to state (without delay) the day of the week of any date in the previous or next 10,000 years. I doubt whether they'd had the chance to memorise such a large number of calendars. Whatever the explanation, I don't think it's one of them having an eidetic memory, though some idiot savants do have this. I also don't think their ability was learned - the fact that they were identical twins being important here. Matt Stan 10:30, 6 May 2004 (UTC)
- Identical twins important? Did you forget to provide this link? My point being that even a cluster of computing machines is no more capable except in speed and memory capacity than any other computing machine. Paul Beardsell 12:00, 6 May 2004 (UTC)
If you know all the prime numbers to 1000, say, then it is not super-humanly difficult to test numbers to 1000000 for primeness. Not magic. The calendars might have impressed Oliver Sacks the mystic, but they would not have impressed Oliver Sacks the mathematician. You only have to learn 14 calendars to know them all. The magic feat is simply one of working out which of the 14 a given year uses and applying an offset correction for Pope Gregory. I can tell you whether any number of any size is divisible by 11. But then I am not like you machines. I take Super Unleaded. Paul Beardsell 10:45, 6 May 2004 (UTC)
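Both feats are mechanical. The divisibility-by-11 one, for instance, is just the well-known alternating digit sum (a sketch; the function name and test numbers are arbitrary):

```python
def divisible_by_11(digits: str) -> bool:
    """A number is divisible by 11 iff the alternating sum of its digits is.
    Works digit by digit, so the number can be of any size."""
    alternating = sum(int(d) if i % 2 == 0 else -int(d)
                      for i, d in enumerate(digits))
    return alternating % 11 == 0

print(divisible_by_11("918082"))   # True:  9-1+8-0+8-2 = 22
print(divisible_by_11("123456"))   # False: 1-2+3-4+5-6 = -3
```

Trial division by the primes up to 1000 similarly settles primality for anything up to 1000000, since any composite below 1000000 has a prime factor below 1000.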
Perhaps I am digressing, but it occurs to me, from reading Sacks and others, that part of our normal humanness consists of constraints (attenuators) on our consciousness. See hrair limit (also relevant in the autistic savant context), and remember Sacks' example of the man who smelt like a dog (i.e. he could smell water at a distance) temporarily after experimenting with drugs. I am primarily concerned here, though, with the example of primeness. It is not easy to test a number for primeness in your head, even with relatively small numbers. See http://primes.utm.edu/prove/ for the known methods of proving primeness, and bear in mind that the twins couldn't do simple arithmetic. What I think they demonstrate is that there was some mysterious method of determining primeness (perhaps by means of some form of pattern matching and/or symbol manipulation) which was not learnt (the only reason why the twins are important is that they had the same genotype, which gave them each the same mysterious capability) and was not explainable by them - they didn't know how they did it, i.e. they didn't do it by any known mathematical/algorithmic process, and it couldn't be fathomed by anyone else, and still can't, as far as I am aware. Matt Stan 18:38, 6 May 2004 (UTC)
Brain size
In terms of learning ability, I am reminded of Richard Feynman's calculation that if you were to encode alphabetic symbols at the atomic level, using 100 atoms per character, and build a block of material encoded in such a way, then the entire content of all the world's reference libraries could be stored in a piece of matter (I think he said metal, actually) as small as a grain of sand. I don't know what the volumetric storage requirements of biological memory are, but assuming they are not more than an order of magnitude different from Feynman's model, then one could theoretically perhaps store the entire contents of the Internet in a small corner of one's brain. What this makes me wonder is whether there might be some kind of critical mass of grey matter below which consciousness is impossible, and that therefore there are physical constraints on artificial implementations using current technology. If this could be demonstrated then one might come up with a calculation that one would need a disk platter as large as the rings of Saturn, or perhaps as large as the orbit of Pluto round the sun, to store enough data to make consciousness possible. How many neurons are there in a typical brain? I think we should be told. Matt Stan 18:38, 6 May 2004 (UTC)
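A rough check of the Feynman figure; a sketch with assumptions that are mine, not Feynman's exact numbers: roughly 10^15 characters for the world's reference libraries, 100 atoms per character as above, and an atom about 2.5*10^(-10) m across:

```python
# Back-of-envelope version of the Feynman packing estimate (all inputs are
# assumed round numbers, not measured values).
chars = 1e15                     # assumed characters in the world's libraries
atoms = 100 * chars              # 100 atoms per character, as above
atom_side = 2.5e-10              # m, rough linear size of an atom
volume = atoms * atom_side ** 3  # total volume in m^3
side = volume ** (1 / 3)         # edge of the equivalent cube
print(f"cube of side {side * 1e3:.2f} mm")  # ~0.12 mm: grain-of-sand scale
```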
The unit of biological memory has to be the neuron or the synapse because the reading and writing of material within the cell cannot be done in seconds. A neuron is huge when measured in numbers of atoms: 25 microns = 2.5*10^(-5)m. Feynman's info density packing will be based on the size of an atom: about 2.5*10^(-10)m. 10^5 difference in linear dimension. But to your question: The volume of a human brain is 1600 cc or 1.6 litres or 0.0016m^3. Linear dimension of a neuron is 2.5*10^(-5)m. Volume is 1.6*10^(-14)m^3. At maximum packing density there are about 10^11 neurons in the brain. The easy way tells us 100bn neurons. 100*10^9 = 10^11. I was spot on: super unleaded! Is that 100 gigabits, i.e. about 12.5 gigabytes, at one bit per neuron? Paul Beardsell 23:45, 6 May 2004 (UTC)
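The same arithmetic done mechanically (the 1600 cc brain and 25-micron neuron are the assumptions stated above; one bit per neuron is a crude floor, since synapses would multiply it by orders of magnitude):

```python
brain_volume = 1.6e-3                # m^3, i.e. 1600 cc
neuron_side = 2.5e-5                 # m, linear dimension of a neuron
neurons = brain_volume / neuron_side ** 3
print(f"{neurons:.2e} neurons")                    # ~1.02e+11, about 100 billion
print(f"{neurons / 8e9:.1f} GB at 1 bit/neuron")   # ~12.8 GB
```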