Talk:Artificial consciousness
- ===Objective less Genuine AC===
- By "less Genuine" we mean not as real as "Genuine" but more real than "Not-genuine". It is alternative view to "Genuine AC", by that view AC is less genuine only because of the requirement that AC study must be as objective as the scientific method demands, but by Thomas Nagel consciousness includes subjective experience that cannot be objectively observed. It does not intend to restrict AC in any other way.
- An AC system that appears conscious must be theoretically capable of achieving all known objectively observable abilities of consciousness possessed by a capable human, even if it does not need to have all of them at any particular moment. Therefore AC is objective and always remains artificial and is only as close to consciousness as we objectively understand about the subject. Because of the demand to be capable of achieving all these abilities, computers that appear conscious are a form of AC that may considered to be strong artificial intelligence, but this also depends on how strong AI is defined.
To start with, I'd say this needs to be much clearer about what the actual point/position is and who holds it, and to have the sub-issues disentangled (e.g., scientific observation of consciousness vs. whether it is there, objective vs. artificial, AC vs. strong AI, etc.). And the first sentence of the second paragraph is either part of the "less Genuine" position, which then needs to be explained, sourced, and related to that position; or it is a general statement about AC, which is then either original research or mistaken about many people's views. It implies that there could not be a dog-mentality AC, because it would not be "capable of achieving all known objectively observable abilities of consciousness possessed by a capable human", and that there could not be aliens very different from humans (e.g., not capable of pain, etc.) but nonetheless conscious. One could hold this view, but many people do not. Much more to say, but that should be enough; thanks again and hope that helps, "alyosha" (talk) 06:40, 3 January 2006 (UTC)
Suggest:
Artificial Consciousness does not have to be as genuine as Strong AI; it must be as objective as the scientific method demands and capable of achieving the known, objectively observable abilities of consciousness, except subjective experience, which according to Thomas Nagel cannot be objectively observed.
The point is to differentiate AC from Strong AI, which by some approaches means just copying the content of the brain; in no paper is this stated as the aim of AC. There are no such terms as "genuine AC", "not genuine AC", etc.; these were invented by a person who was banned from editing this article indefinitely by decision of the Arbitration Committee. Overall, the text of the article is too long; I have always said it should be shortened so that the reader can follow it. Tkorrovi 01:54, 7 January 2006 (UTC)
If one really wants to improve this article, please notice that the very first sentence was edited incorrectly: nowhere is it said that the aim of AC is to produce a definition of consciousness; the aim of AC is to implement known and objective abilities or aspects of consciousness. The definition by Igor Aleksander says "defining that which would have to be synthesized were consciousness to be found in an engineered artefact", which, for anyone who can follow it, says something very different from producing a "rigorous definition of consciousness". Tkorrovi 02:48, 7 January 2006 (UTC)
Suggest:
This article interchangeably uses the words intelligence, consciousness, and sentience. In fact, an artificially conscious program would more correctly be described as an artificial "sapience", as sapience implies complex reasoning on the level of a human. Sentience, by contrast, merely reflects the ability to feel. Almost all multicellular animals have the ability to react to their environment through a nervous system, and so can to some degree be considered 'feeling' and therefore sentient. It is quite possible that a program might be written in such a way as to not incorporate feeling at all. Additionally, the difference between intelligence and consciousness is quite great. An intelligent being might be capable of performing a complex task, reacting to its environment accordingly, without actually using any reasoning, and it is reasoning that defines sapience.
(gigacannon, 01:26 GMT 15 May 06)
We cannot use our own terms here, especially since there has been a lot of criticism about "original research" in this article; the article is about what was written in various papers. The term "consciousness" has mostly been used regarding humans, so it is about the kind of awareness which humans have, or anything else which may have the same kind of awareness. Even bacteria have the ability to react to their environment; the difference is how advanced such awareness is. The awareness of humans, for example, is so advanced that the brain can model every kind of external process. But then, all these thoughts are for us to understand the subject; what we can write in the article is how exactly these things were explained in various papers. Tkorrovi 15:03, 24 May 2006 (UTC)