Philosophy of artificial intelligence
The philosophy of artificial intelligence concerns questions of artificial intelligence (AI) such as:
- What is intelligence? How can one recognize its presence and applications? Is it possible for machines to exhibit intelligence?
- Does the presence of human-like intelligence imply consciousness and emotions?
- Is creating human-like artificial intelligence moral? What ethical stances should such intelligences take? What ethical stances should humans take toward them?
AI may be considered a goal, an academic field of study within computer science, or the set of techniques developed by such study. The philosophy of AI studies many topics that overlap with the philosophy of mind.
Conditions for intelligence
The Turing test proposes a sufficient condition for intelligence: the ability to converse with a human in such a way that the human is fooled into thinking the conversation is with another human. (To remove biases based on how the AI looks, the conversation is normally imagined to take place through a text medium such as a modern instant-messaging chat.)
Such a test is not a necessary condition; it seems, for example, that E.T. was intelligent even though it could not have convinced anyone of this fact, owing to language barriers and the like. Others doubt that it is even a sufficient condition: chatbots, for example, employ increasingly sophisticated techniques for sounding intelligent without any actual understanding of the conversation.
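As a rough illustration of the set-up (not any standard implementation), the sketch below stages the imitation game in Python. The functions human_reply and machine_reply are hypothetical stand-ins for a human participant and a candidate AI; the judge sees only anonymized text.

    import random

    def human_reply(question):
        # In a real test a person would type the answer; stdin stands in here.
        return input("(human) " + question + "\n> ")

    def machine_reply(question):
        # Placeholder for the candidate AI system under test.
        return "I would have to think about that."

    def imitation_game(questions):
        # Randomly assign the two respondents to anonymous labels so the
        # judge sees only text and cannot rely on appearance or voice.
        a, b = random.sample([human_reply, machine_reply], 2)
        for q in questions:
            print("Q:", q)
            print("  A:", a(q))
            print("  B:", b(q))
        guess = input("Judge: which respondent is the machine, A or B? ")
        actual = "A" if a is machine_reply else "B"
        print("The judge was right." if guess.strip().upper() == actual
              else "The judge was fooled.")

    imitation_game(["What is your favourite season, and why?"])

A machine passes such a test to the extent that judges guess at chance; the point of the text channel is that nothing but conversational behavior is available as evidence.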
In his famous Chinese room thought experiment, John Searle argues that AI is impossible: syntax is not sufficient for semantics, so mere symbol manipulation, no matter how complicated, cannot provide genuine meaning or understanding. Most professional philosophers in the area believe that Searle failed to establish that AI is impossible, but there is disagreement about exactly what is wrong with his argument; the Systems Reply, the Robot Reply, and the Brain Simulator Reply are among the objections.[1]
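Searle's point can be made concrete with a toy sketch (the rule entries below are invented for illustration): the program maps Chinese input symbols to output symbols by pattern alone, and no part of the system represents what any character means.

    # The "rule book": a purely syntactic mapping from input symbols to
    # output symbols. The entries are invented for illustration.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",      # "How are you?" -> "Fine, thank you."
        "你叫什么名字？": "我没有名字。",  # "What's your name?" -> "I have no name."
    }

    def chinese_room(symbols):
        # The operator matches the shape of the input against the rule book
        # and copies out the listed response; nothing here knows what any
        # character means.
        return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你好吗？"))

Whether scaling such a table up (or replacing it with any other program) could ever yield understanding is exactly what the argument and its replies dispute.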
Ethical issues
There are many ethical problems associated with working to create intelligent creatures.
- AI rights: if an AI is comparable in intelligence to humans, then should it have comparable moral status?
- Would it be wrong to engineer robots that want to perform tasks unpleasant to humans?
- Would a technological singularity be a good result or a bad one? If bad, what safeguards can be put in place, and how effective could any such safeguards be?
- Could a computer simulate an animal or human brain in such a way that the simulation should receive the same animal or human rights as the actual creature?
- Under what preconditions could such a simulation be allowed to happen at all?
A major influence on the AI ethics dialogue was Isaac Asimov, who created the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was spent testing the boundaries of his three laws to see where they would break down or create paradoxical or unanticipated behavior. Ultimately, his work suggests that no set of fixed laws can anticipate all possible behavior of AI agents in human society. A criticism of Asimov's robot laws is that installing unalterable laws into a sentient consciousness would be a limitation of free will and therefore unethical. Consequently, Asimov's robot laws would be restricted to explicitly non-sentient machines, which possibly could not be made to reliably understand the laws under all circumstances.
The movie The Thirteenth Floor suggests a future in which simulated worlds with sentient inhabitants are created by computer game consoles for entertainment. The movie The Matrix suggests a future in which the dominant species on Earth are sentient machines and humanity is treated with utmost speciesism. The short story The Planck Dive suggests a future in which humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction is between sentient and non-sentient software. The same idea appears in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who created the system, with the best of motives, to give medical assistance in emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of creating sentient computers.
Over time, debates have tended to focus less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. According to de Garis, a Cosmist actively seeks to build more intelligent successors to the human species.
Expectations of AI
AI methods are often employed in cognitive science research, which tries to model subsystems of human cognition. Historically, AI researchers aimed for the loftier goal of so-called strong AI: simulating complete, human-like intelligence. This goal is epitomised by the fictional strong AI computer HAL 9000 in the film 2001: A Space Odyssey. It is unlikely to be met in the near future and is no longer the subject of most serious AI research. The label "AI" has something of a bad name due to the failure of these early expectations, aggravated by various popular science writers and media personalities such as Professor Kevin Warwick, whose work has raised expectations of AI research far beyond its current capabilities. For this reason, many AI researchers say they work in cognitive science, informatics, statistical inference or information engineering. Recent research areas include Bayesian networks and artificial life.
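To give a flavour of the Bayesian-network work mentioned above, here is a minimal sketch using the textbook-style rain/sprinkler/wet-grass example; the network structure and probabilities are illustrative assumptions, not figures from any cited source.

    # Rain influences whether the sprinkler runs; rain and sprinkler
    # together determine whether the grass is wet.
    P_RAIN = 0.2
    P_SPRINKLER = {True: 0.01, False: 0.4}             # P(Sprinkler | Rain)
    P_WET = {(True, True): 0.99, (True, False): 0.90,  # P(Wet | Sprinkler, Rain)
             (False, True): 0.80, (False, False): 0.0}

    def joint(rain, sprinkler, wet):
        # Chain rule for the network: P(R) * P(S | R) * P(W | S, R).
        p = P_RAIN if rain else 1 - P_RAIN
        ps = P_SPRINKLER[rain]
        p *= ps if sprinkler else 1 - ps
        pw = P_WET[(sprinkler, rain)]
        p *= pw if wet else 1 - pw
        return p

    # Infer P(Rain | grass is wet) by enumerating the hidden variable.
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
    print("P(Rain | grass wet) = %.3f" % (num / den))

The appeal of such networks is that beliefs are updated by ordinary probability theory rather than by hand-written special cases.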
The vision of artificial intelligence replacing human professional judgment has arisen many times in the history of the field, and today "expert systems" are routinely used to augment or replace professional judgment in some specialized areas of engineering and medicine.
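At their core, many classic expert systems amount to forward chaining over if-then rules supplied by human experts. The sketch below shows that mechanism in miniature; the rules and facts are invented for illustration and are not real medical guidance.

    # Each rule: if all condition facts hold, conclude a new fact.
    RULES = [
        ({"fever", "cough"}, "possible_flu"),
        ({"possible_flu", "fatigue"}, "recommend_rest"),
    ]

    def forward_chain(facts, rules):
        # Repeatedly fire any rule whose conditions are all satisfied,
        # adding its conclusion to the fact base, until nothing changes.
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fever", "cough", "fatigue"}, RULES))

Real deployed systems add certainty factors, explanation facilities, and far larger rule bases, but the inference loop is essentially this one.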
References
See also
External links
- BBC News: Games to take on a life of their own
- 3 Laws Unsafe Campaign - Asimov's Laws & I, Robot
- Who's Afraid of Robots?, an article on humanity's fear of artificial intelligence.
- Part 1. Lectures in Philosophy of AI
- Part 2. Lectures in Philosophy of AI
- Part 3. Lectures in Philosophy of AI