Talk:Philosophy of artificial intelligence
== Discussion of ethics moved ==
A discussion that appeared here about the ethics of artificial intelligence has been moved to the talk page of that article.
== Maudlin ==
The article might benefit from a discussion of Maudlin's "Olympia" argument. 1Z 00:32, 12 January 2007 (UTC)
== The Real Debate ==
This article should contain more discussion of the serious academic debates about the possibility/impossibility of artificial intelligence, including such critics as John Lucas, Hubert Dreyfus, Joseph Weizenbaum and Terry Winograd, and such defenders as Daniel Dennett, Marvin Minsky, Hans Moravec and Ray Kurzweil. John Searle is the only person of this caliber who is discussed.
In my view, issues derived from science fiction are far less important than these. Perhaps they should be discussed on a page about artificial intelligence in science fiction. Is there such a page? CharlesGillingham 11:02, 26 June 2007 (UTC)
- Yes. -- Schaefer (talk) 12:59, 26 June 2007 (UTC)
- Some text could be moved to Artificial intelligence in fiction. Critics can be listed here, but maybe a discussion of the debate belongs in Strong AI vs. Weak AI? --moxon 15:20, 12 July 2007 (UTC)
== Some interesting stuff re the Turing Test, Marvin Minsky and List of open problems in computer science ==
I cc'd this over from the Talk:Turing machine page:
> Turing's paper that introduces his Turing Test:
- Turing, A.M. (1950), "Computing Machinery and Intelligence", Mind, 59, pp. 433–460. At http://www.loebner.net/Prizef/TuringArticle.html
"Can machines think?" Turing asks. In §6 he discusses nine objections, then in §7 admits he has "no convincing arguments of a positive nature to support my views." He suggests that including a random element in a learning machine would probably be wise. His "Contrary Views on the Main Question":
- (1) The Theological Objection
- (2) The "Heads in the Sand" Objection
- (3) The Mathematical Objection
- (4) The Argument from Consciousness
- (5) Arguments from Various Disabilities
- (6) Lady Lovelace's Objection
- (7) Argument from Continuity in the Nervous System [i.e. it is not a discrete-state machine]
- (8) The Argument from Informality of Behavior
- (9) The Argument from Extrasensory Perception [apparently Turing believed that "the statistical evidence, at least for telepathy, is overwhelming"]
Re Marvin Minsky: I was reading the above comment describing him as a defender of AI, which I was unaware of (the ones I do know about are Dennett and his zombies -- of "we are all zombies" fame -- and Searle). Then I was reading Minsky's 1967 book and saw this:
- "ARTIFICIAL INTELLIGENCE"
- "The author considers "thinking" to be within the scope of effective computation, and wishes to warn the reader against subtly defective arguments that suggest that the difference between minds and machines can solve the unsolvable. There is no evidence for this. In fact, there couldn't be -- how could you decide whether a given (physical) machine computes a noncomputable number? Feigenbaum and Feldman [1963] is a collection of source papers in the field of programming computers that behave intelligently." (Minsky 1967:299)
- Marvin Minsky, 1967, Computation: Finite and Infinite Machines, Prentice-Hall, Inc., Englewood Cliffs, N.J. ISBN: none. Library of Congress Card No. 67-12342.
I have so many wiki-projects going that I shouldn't undertake anything here. I'll add stuff here as I run into it. (My interest is "consciousness" as opposed to "AI", which I think is a separable topic.) But on the other hand, I have something going on at the List of open problems in computer science article (see the talk page) -- I'd like to enter "Artificial Intelligence" into that article ... any help there would be appreciated. wvbailey Wvbailey 02:28, 2 October 2007 (UTC)
== List of open problems in computer science: Artificial Intelligence ==
Here it is, as far as I got:
Source:
In the article "Prospects for Mathematical Logic in the Twenty-First Century", Sam Buss suggests a "three-fold view of proof theory" (his Table 1, p. 9) that includes in column 1, "Constructive analysis of second-order and stronger theories", in column 2, "Central problem is P vs. NP and related questions", and in column 3, "Central problem is the "AI" problem of developing "true" artificial intelligence" (Buss, Kechris, Pillay, Shore 2000:4).
- "I wish to avoid philosophical issues about consciousness, self-awareness and what it means to have a soul, etc., and instead seek a purely operational approach to articial intelligence. Thus, I define artificial intelligence as being constructed systems which can reason and interact both syntactically and semantically. To stress the last word in the last sentence, I mean that a true artifical intelligence system should be able to take the meaning of statements into account, or at least act as if it takes the meaning into account." (Buss on p. 4-5)
He goes on to mention the use of neural nets (i.e. analog-like computation that seems not to use logic -- I don't agree with him here: logic is used in the simulations of neural nets -- but that's the point -- this stuff is open). Moreover, I am not sure that Buss can eliminate "consciousness" from the discussion. Or is consciousness a necessary ingredient for an AI?
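To make the simulation point concrete, here is a minimal sketch of my own (not anything from Buss): a "simulated" neuron bottoms out in ordinary discrete arithmetic and a logical threshold.
<pre>
# A hypothetical illustration, not from Buss: a 'neural' unit simulated in
# software is just discrete arithmetic followed by a logical decision.

def neuron(inputs, weights, threshold):
    """Weighted sum followed by a hard threshold (a McCulloch-Pitts unit)."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0  # a purely logical step

# The unit can even realize an ordinary logic gate (here, two-input AND):
assert neuron([1, 1], [0.6, 0.6], 1.0) == 1
assert neuron([1, 0], [0.6, 0.6], 1.0) == 0
</pre>
However "analog-like" the net looks from the outside, the simulation is logic all the way down; whether that settles anything about real nervous systems is exactly what is open.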
Description:
Mary Shelley's Frankenstein and some of the stories of Edgar Allan Poe (e.g. The Tell-Tale Heart) opened the question. Also Lady Lovelace [??]. Since the 1950s the use of the Turing Test has been a measure of success or failure of a purported AI. But is this a fair test? [quote here?] (Turing, Alan, 1950, "Computing Machinery and Intelligence", Mind, 59, pp. 433–460. http://www.loebner.net/Prizef/TuringArticle.html)
A problem statement requires both a definition of "intelligence" and a decision as to whether, and if so how much, to fold "consciousness" into the debate.
> Philosophers of mind call an intelligence without a mind a zombie (cf. Dennett, Daniel, 1991, Consciousness Explained, Little, Brown and Company, Boston, ISBN 0-316-18066-1 (pb)):
- "A philosopher's zombie, you will recall, is behaviorally indistinguishable from a normal human being, but is not conscious. There is nothing it is like to be a zombie; it just seems that way to observers (including itself, as we saw in the previous chapter)." (italics added for emphasis) (Dennett loc. cit.:405)
Can an artificial, mindless zombie truly be an AI? No, says Searle:
- "Information processing is typically in the mind of an observer . . . the addition in the calculator is not intrinsic to the circuit, the addition in me is intrinsic to my mental life.... Therefore, we cannot explain the notion of consciousness in terms of information processing and symbol manipulations" (Searle 2002:34). "Nothing is intrinsically computational. Computation exists only relative to some agent or observer who imposes a computational interpretation on some phenomenon" (Searle 2002:17).
Yes, says Dennett:
- "There is another way to address the possibility of zombies, and in some regards I think it is more satisfying. Are zombies possible? They're not just possible, they're actual. We're all zombies [Footnote 6 warns not to quote out of context here]. Nobody is conscious -- not in the systematically mysterious way that supports such doctrines as epiphenomenalism!" (Dennett 1991:406)
> Gandy 1980 throws around the word "free will". For him it seems an undefined concept, interpreted by some (Sieg?) to mean something on the order of "randomness put to work in an effectively-infinite computational environment", as opposed to "deterministic" or "nondeterministic", both in a finite computational environment (e.g. a computer).
> Gödel's quote: "...the term "finite procedure" ... is understood to mean "mechanical procedure" ... [the] concept of a formal system whose essence it is that reasoning is completely replaced by mechanical operations on formulas ... [but] the results mentioned in this postscript do not establish any bounds for the powers of human reason, but rather for the potentialities of pure formalism in mathematics." (Gödel 1964 in The Undecidable:72)
Importance:
> AC (artificial consciousness, an AI with a feeling mind) would be no less than an upheaval in human affairs
> AI as helper or scourge or both (robot warriors)
> Philosophy: the nature of "man", "man versus machine", how would man's world change with AIs (robots)? Will it be a good or an evil act to create a conscious AI? What will it be like to be an AI? (cf. Nagel, Thomas, 1974, "What Is It Like to Be a Bat?", Philosophical Review 83:435–50. Reprinted on p. 219ff in Chalmers, David J., 2002, Philosophy of Mind: Classical and Contemporary Readings, Oxford University Press, New York, ISBN 0-19-514581-X.)
> Law: If conscious, does the AI have rights? What would be those rights?
Current Conjecture:
An AI is feasible/possible and will appear within this century.
This outline is just a first pass at throwing down stuff. Suggestions are welcome. wvbailey Wvbailey 16:13, 6 September 2007 (UTC)
cc'd from Talk:List of open problems in computer science. wvbailey Wvbailey 02:41, 2 October 2007 (UTC)
== The role of randomness in AI ==
Some years ago (late 1990s?) I attended a lecture given by Dennett at Dartmouth. I was hoping for a lecture re "consciousness" but got one re "the role of randomness" in creative thought (i.e. mathematical proofs, for instance). I know that Dennett wrote something in a book re this issue (he was testing his arguments in his lecture) -- he talked about "skyhooks" that lift a mind up by its bootstraps -- but I haven't read the book (I'm not crazy about Dennett); I've just seen this analogy recently in some paper or other.
== The problem of "minds", of "a consciousness" vs. "an artificial intelligence": what do the words mean? ==
In your article you may want to consider "forking" "the problem" into sub-problems. And try to carefully define the words (or suggest that even the definitions and boundaries are blurred).
I've spent a fair amount of time studying consciousness (call it C). My conclusion is this --
- Consciousness is sufficient for an AI, but consciousness is not necessary for an AI.
Relative to "AI" this is not off-topic, although some naifs may think so. Proof: given that you accept the premise "consciousness is sufficient for an AI", when an "artificial consciousness" is in place, the "intelligence" part is assured.
In other words, "diminished minds" that are not C but are highly "intelligent" are possible (expert systems come to mind, or machines with a ton of sensors that monitor their own motions -- Japanese robots, cars that self-navigate in the desert test). There may be an entire scale of "intelligences", from thermostats (there's actually a chapter in a book titled "What is it like to be a thermostat?") up to robot cars, that are not C. In these cases, I see no moral issues. But suppose we accidentally create a C, or are even now creating Cs and don't know it, or are cruelly creating Cs for the sheer sadistic pleasure of it (AI: "Please, please, I beg you, don't turn me off!" Click. Us: "Ooh, that was fun, let's do it again...") -- that's where the moral issues lurk.
Where I arrived in my studies (finally, after what, 5 years?) is that the problem of consciousness revolves around an explanation for the ontological (i.e. experienced from the inside-out) nature of "being" and "experiencing" (e.g. knowing what it's like to experience the gradations of Life-Saver candy flavors) -- what it's like to be a bat, what it's like to be an AI. Is it like anything at all? All that we as mathematicians, scientists and philosophers know for certain about the experiential component of being/existence is this: we know "what it's like" to be human (we are they). We suspect primates and some of our pets -- dogs and cats -- are conscious to a degree, but we don't have the foggiest sense of what it is like to be them.
Another way of stating the question: Is it possible for an AI zombie to go through its motions and still be an effective AI? Or does it need a degree of consciousness (and what do the words "degree of consciousness" mean)?
If anyone wants a bibliography on "mind" lemme know, I could assemble one here. The following is a really excellent collection of original-source papers (bear in mind that these are slanted toward C, not AI). The book cost me $45, but is worth every penny:
- David J. Chalmers (ed.), 2002, Philosophy of Mind: Classical and Contemporary Readings, Oxford University Press, New York, ISBN 0-19-514581-X (pbk.: alk. paper). Includes 63 papers by approximately 60 authors, including "What Is It Like to Be a Bat?" by Thomas Nagel and "Can Computers Think?" by John R. Searle.
Bill Wvbailey 15:24, 10 October 2007 (UTC)
- Since you bring up the word consciousness, I've added it to the top of the article, because it's basically the same idea as having a mind and mental states. (This article will bog down in confusion if we distinguish "having a mind" from "being conscious.") I'll use the word in the section on Searle's Strong AI as well, when I finish it.
- Is it clear from the structure of the article that there are three separate issues?
- Can a machine (or symbol processing system) demonstrate general intelligence? (The basic premise/physical symbol systems hypothesis)
- Is human intelligence due to a machine (or symbol processing system)? (Mechanism/computationalism)
- Can a machine (or symbol processing system) have a mind, consciousness, and mental states? (Searle's STRONG AI)
- I've written the first, haven't touched the second and have started the third.
- The issue you bring up "Is consciousness necessary for general intelligence?" is interesting, and I suppose I could find a place for it.
It's an issue that no one, to my knowledge, has addressed directly -- I'm not aware of any argument that you need consciousness or mental states to display intelligence. (Perhaps this is why some find Searle's arguments so frustrating -- he doesn't directly say that you can't have intelligent machines, just that your intelligent machines aren't "conscious". He doesn't commit himself.)
- (While we're sharing our two cents, my own (speculative) opinion would be this: "consciousness" is a method used by human brains to focus our attention onto a particular thought. It's the way the brain directs most of its visual, verbal and other sub-units to work together on a single problem. It evolved from the "attention" system that our visual cortex developed to attend to certain objects in the visual field. It is an optimization method that the brain uses to make efficient use of its limited resources. As such, it is neither necessary nor sufficient for general intelligent action.) ---- CharlesGillingham 17:35, 10 October 2007 (UTC)
- Totally agree, see my next. It may feel like "just two cents" but I believe you've hit on the definition of "intelligence".
- I was just thinking this over, and I realized that "consciousness" is what Dreyfus is talking about with his "background". The question of whether consciousness is necessary for intelligent machines falls under the first question (can a machine demonstrate intelligence?). Dreyfus (and Moravec and situated AI) answer: not unless it has this specific form of "situated" or "embodied" background knowledge. This background knowledge provides "meaning" and allows the mental state of "understanding", and we experience this as "consciousness". (I would need to find a source that says this.) More later. ---- CharlesGillingham 18:56, 10 October 2007 (UTC)
Whoops, circular-definition alert: the link intelligence says that it is a property of mind. I disagree, and so does my dictionary. "IF (consciousness ≡ mind) THEN intelligence", i.e. "intelligence" is a property or a by-product of "consciousness ≡ mind". Given this implication, "mind ≡ consciousness" forces the outcome. We have no opportunity to study "intelligence" without the bogeyman of "consciousness ≡ mind" looking over our shoulder. Ick...
So I pull my trusty Merriam-Webster's 9th Collegiate dictionary and read: "intelligence: (1) The ability to learn or understand or deal with new and trying situations". I look at "Intelligent; fr L intellegere to understand, fr. inter + legere to gather, to select."
There's nothing here at all about consciousness.
When I first delved into the notion of "consciousness" I began with an etymological tree with "conscious" at the top. The first production, you might say, was "aware", from "wary", as in "observant but cautious." Since then, I've never been quite able to disabuse myself of the notion that that is the key element in, if not "consciousness", then "intelligence" -- i.e. focused attention. Indeed, above, you say the same thing, exactly. Example: I can build a state machine out of real discrete parts (I've done it a number of times, in fact) using digital and analog input-selectors driving a state machine, so that the machine can "turn its attention toward" various aspects of what it is testing or monitoring. I've done this also with micros, and with spreadsheet modelling. I would argue that such machines are "aware" in the sense of "focused", "gathering", "selecting". Therefore (ergo the dictionary's and my definition) the state machines have rudimentary intelligence. Period. The cars in the desert auto-race, and the Mars robots, are quite intelligent, iff they are behaving autonomously (all are criteria: "selective, focused attention" (attendere, fr. L. "to stretch toward"), "autonomous" and "behavior"). A toy sketch of what I mean follows below.
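Here is that toy sketch (hypothetical names, and far simpler than the real discrete-parts machines): the "attention" is nothing but a selector that routes one of several inputs into the state-transition logic.
<pre>
# A hypothetical toy, my own illustration: a state machine whose 'attention'
# is an input selector.  Only the currently selected sensor drives the
# state-transition logic.

class AttentiveMachine:
    def __init__(self, sensors):
        self.sensors = sensors                 # name -> zero-argument reader
        self.focus = next(iter(sensors))       # currently attended input
        self.state = "idle"

    def attend(self, name):
        """Turn attention toward a different input."""
        self.focus = name

    def step(self):
        reading = self.sensors[self.focus]()   # gather from the selected input
        self.state = "alarm" if reading > 0.8 else "idle"   # select a response
        return self.state

# Usage: behavior depends entirely on where the machine 'attends'.
m = AttentiveMachine({"temp": lambda: 0.9, "light": lambda: 0.2})
print(m.step())       # 'alarm' -- attending to temp
m.attend("light")
print(m.step())       # 'idle'  -- attending to light
</pre>
Nothing but routing and thresholds, yet the machine "gathers" and "selects" in exactly the dictionary's sense.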
"Awareness" versus "consciousness": My son the evolutionary biologist believes consciousness just "happens" when the right stuff is in place. I am agnostic. On one day I agree with Dennett that we're zombies, the next day I agree with Searle that something special about wet grey goo causes consciousness, the next day I agree with my son, the 4th day I agree with none of the above. I share your frustration with Searle, Searle just says no, no, no, but never produces a firm suggestion. But it was only after a very careful read of Dennett that I found his "zombie" assertion in "Consciousness Explained".
Self-awareness: what the intelligent machines I defined above lack is self-awareness. Does it make sense to have a state machine monitor itself to "know" that it has changed state (see the sketch below)? Or know that it knows that it is aware? Or is C a kind of damped reverberation of "knowing that it knows that it knows", with "a mystery" producing the "consciousness" as experienced by its owner? Does garlic taste different than lemon because, if they tasted the same, we could not discriminate them? There we go again: distinguishing -- di-stinguere, as I recall, "to pick apart". Unlike the Terminator, we don't have little digital readouts behind our eyeballs.
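In the same hypothetical vein as the sketch above (and reusing its AttentiveMachine), self-monitoring is trivial to mechanize, which is exactly why it seems too weak to account for C:
<pre>
# Another hypothetical sketch: wrap the machine above so it records its own
# state transitions -- 'knowing' that it has changed state, in the weakest
# possible sense of 'knowing'.

class SelfMonitoring:
    def __init__(self, machine):
        self.machine = machine
        self.history = []            # the machine's record of its own changes

    def step(self):
        before = self.machine.state
        after = self.machine.step()
        if after != before:
            self.history.append((before, after))   # it 'notices' the change
        return after

monitored = SelfMonitoring(AttentiveMachine({"temp": lambda: 0.9}))
monitored.step()
print(monitored.history)   # [('idle', 'alarm')] -- a log, not an experience
</pre>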
To summarize: you're on the right track but I suggest that the definitions that you're working from -- your premises in effect -- have to be (i) clearly stated, not linked to flawed definitions, but rather stated explicitly in the article and derived from quoted sources, and (ii) effective (i.e. successful, good, agreeable) for your presentation. Bill Wvbailey 22:09, 10 October 2007 (UTC)
- I think you're right. I hope that I'm working from the standard philosophical definitions of words like consciousness, mind, and mental states. But, of course, there are other definitions floating around -- for example, some new age writers use the word "consciousness" as a synonym for "soul" or "élan vital". I'll try to add something that brings "consciousness" and "mind" back down to earth. They're really not that mysterious -- everybody knows that they have "thoughts in their head" and what that feels like. It's an experience we all share. Explaining how we have "thoughts in our head" is where the real philosophical/neurological/cognitive science mystery is. ---- CharlesGillingham 00:14, 12 October 2007 (UTC)
== Interesting article, possible references ==
http://www.msnbc.msn.com/id/21271545/
Bill Wvbailey 16:44, 13 October 2007 (UTC)
- I think this might be useful for the ethics of artificial intelligence. (Which is completely disorganized at this point.) ---- CharlesGillingham 18:49, 27 October 2007 (UTC)
== Plan ==
I've added some information on how Searle is using the word "consciousness" and a one paragraph introduction to the idea. I've also added a section raising the issue of whether consciousness is even necessary for general intelligence. I think these issues must be discussed here, rather than anywhere else.
The article is now too long, so I plan to move some of this material out of here and into Chinese Room, Turing Test, physical symbol system hypothesis and so on. ---- CharlesGillingham 18:49, 27 October 2007 (UTC)
== Rewrite is more or less complete ==
- I think this article should be fairly stable now. ---- CharlesGillingham 19:44, 8 November 2007 (UTC)