Ethics of artificial intelligence
From Wikipedia, the free encyclopedia
Treating AIs Ethically
Many ethical problems are associated with the attempt to create intelligent beings.
- AI rights: if an AI is comparable in intelligence to humans, then should it have comparable moral status?
- Would it be wrong to engineer robots that want to perform tasks unpleasant to humans?
- Could a computer simulate an animal or human brain closely enough that the simulation should receive the same animal rights or human rights as the actual creature?
- Under what preconditions could such a simulation be allowed to happen at all?
Robot rights
Robot rights are the analogue of human rights applied to robots. By analogy with a dictionary definition of human rights, they could be defined as "the basic rights and freedoms to which all robots are entitled, often held to include the right to life and liberty, freedom of thought and expression, and equality before the law."[1]
Legal rights of robots
With the emergence of advanced robots, there is a growing need to define legal rights, as well as legal responsibilities, for robots. More specific and detailed laws will probably be necessary.[2]
Necessity
The time when the rights of robots must be considered may not be far away. By 2020 there may be a robot in every South Korean household, alongside far more advanced robots elsewhere.[3] Since technological progress keeps accelerating, robot rights may be a question that many people alive today will need to confront. However, even the most enthusiastic researchers concede that at least 50 years will have to pass before anything resembling real artificial intelligence exists;[4] until then, robot rights are unlikely to be of practical significance.
Creating AIs that Behave Ethically
- Would a technological singularity be a good result or a bad one? If bad, what safeguards can be put in place, and how effective could any such safeguards be?
A major influence on the AI ethics dialogue was Isaac Asimov, who, at the insistence of his editor John W. Campbell Jr., proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. Ultimately, his work suggests that no set of fixed laws can sufficiently anticipate all possible behavior of AI agents within human society. One criticism of Asimov's robot laws is that installing unalterable laws into a sentient consciousness would limit free will and would therefore be unethical. Consequently, the laws would be restricted to explicitly non-sentient machines, which in turn might not be capable of reliably understanding them under all possible circumstances.
Robot Ethics in Fiction
The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with the utmost speciesism. The short story The Planck Dive suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction between types of software is sentient and non-sentient. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, with the best of motives, created the system to give medical assistance in emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of creating sentient computers.
Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to de Garis, actively seeks to build more intelligent successors to the human species.
External links
- Research Paper: Philosophy of Consciousness and Ethics in Artificial Intelligence
- Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture
- BBC News: Games to take on a life of their own
- 3 Laws Unsafe Campaign - Asimov's Laws & I, Robot
- Who's Afraid of Robots?, an article on humanity's fear of artificial intelligence.
- Governing Lethal Robot Behavior on the Battlefield
Notes
1. ^ The American Heritage Dictionary of the English Language, Fourth Edition.
2. ^ The Legal Rights of Robots.
3. ^ The Scientist: "A Robot Code of Ethics", by Glenn McGee.
4. ^ New World Technologies (NWT): "Should we be worried by the rise of robots?"