Talk:Ethics of artificial intelligence
Comments moved from "Talk:Philosophy of artificial intelligence"
"A major influence in the AI ethics dialogue was Isaac Asimov who fictitiously created Three Laws of Robotics to govern artificial intelligent systems." I've removed "fictitiously." While the Three Laws of Robotics were created for a fictitious universe, Asimov really did create them. It might be appropriate to somehow add that he developed them for his science fiction books. goaway110 21:57, 22 June 2006 (UTC)
I don't see why "Ethical issues of AI" should be an independent encyclopedia entry. The lexicographic lemma here is surely "Artificial Intelligence". --Fasten 14:41, 7 October 2005 (UTC)
I disagree, Fasten; I think the main AI article should briefly mention ethical issues and we should keep this as a separate article. The subject can be extended much further to include uses of AI (in wars, in saving people from dangerous conditions, in working under unhealthy circumstances (like mining)), AI as a possible future Technological singularity (i.e., what will happen if AI eventually becomes more intelligent and capable than humans). It could also include deeper discussions about the possibility of sensations (qualia) and consciousness in AI, some comments on what will happen if AI becomes widespread in future society with behavior, appearance, and activities very similar to ours, and issues such as "should AI have rights and obligations?" and "does it make sense to create laws for AI beings to obey?" Rend 01:29, 17 October 2005 (UTC)
The first question I would have regarding the ethics of AI is whether it is possible for a machine to be capable of consciousness. This is obviously a very difficult question, given that no human being can really know anyone's internal existence other than his own. Hell, maybe computers really do have consciousness. But if they do, they would be the only ones who would know this for certain, since it is difficult to ask a computer whether it exists without programming it to say it exists beforehand. I believe animals have consciousness and are capable of feeling emotions even though they cannot tell us this. Also, would the fact that a computer wasn't capable of consciousness, much less emotions, mean that it should not be protected and given rights? This may seem improper, but I can't help but bring to mind the Terri Schiavo case. It is very possible that she was fully conscious and fully capable of emotions even though she was in a permanently catatonic state. 207.157.121.50 12:49, 25 October 2005 (UTC)
- That might get difficult without OR. I changed the merge suggestion from "Artificial Intelligence" to "Artificial intelligence (philosophy)", which is referred to by Portal:Artificial_intelligence. --Fasten 13:51, 19 October 2005 (UTC)
The subject could be extended much further to include:
- Use of AI in wars
- Use of AI in conditions hazardous to humans (saving people from fire, drowning, or poisoned or radioactive areas)
- Use of AI in human activities (doing human work, AI failing, replacing human jobs, doing unhealthy or dangerous work (e.g., mining), what will happen if AI gets better than us at most of our work activities, what AI will not be able to do (at least in the near future))
- AI as a possible future Technological singularity: what will happen if AI eventually becomes more intelligent and capable than humans, able to produce even more intelligent AIs, possibly to a level that we won't be able to understand.
- Deeper discussions about the possibility of sensations (qualia) and consciousness in AI
- Some comments on what will happen if AI becomes widespread in future society, with behavior, appearance, and activities very similar to or even better than ours (could this raise problems about machine "treatment"? I mean, could we still throw them away as if they were simply an expensive toy if they become better than us at all our practical activities?)
- As AI usage and presence become greater and more widespread, should we discuss issues such as "should AI have rights and obligations?" and "does it make sense to create laws for AI beings to obey?"
I ask anyone who has references and content to include them properly. Rend 23:11, 21 October 2005 (UTC)
Just following up on some of Rend's questions...
- Is it ethical for a person to own an AI "being"? Would an AI being necessarily "prefer" to be unowned?
- A computer is owned by whoever owns the factory that makes it (until the factory sells the computer to a person) -- is the same true of an AI being?
- If an AI being is unowned, and it builds another AI being, then does the first AI being own the second one?
- Are the interests of human society served by incorporating unowned AI beings into it? Would humans in such a society be at a competitive disadvantage?
- Would the collective wisdom of AI beings come to the conclusion that humans are but one of many forms of life on the planet, and therefore humans don't deserve any more special treatment than, say, mice? Or lichen?
Whichever way these questions are answered, more questions lie ahead. For example, if we say it isn't ethical for a person to own an AI being, then can or should society as a whole constrain the behavior of AI beings through ownership or through "laws of robotics"? If we are able to predict that the behavior of AI beings will not be readily channeled to the exclusive benefit of humans, then is there a "window of opportunity" to constrain their behavior before it gets "out of hand" (from human society's point of view)?
A survey of current philosophical thought on questions such as these (and the slippery slope issues surrounding them) would be very helpful here.—GraemeMcRaetalk 05:55, 3 November 2005 (UTC)
Robot rights
Moved Robot rights from Artificial intelligence in fiction to Ethics of artificial intelligence. Thomas Kist 18:57, 15 October 2007 (UTC)