Talk:Three Laws of Robotics

From Wikipedia, the free encyclopedia

Three Laws of Robotics is a featured article; it (or a previous version of it) has been identified as one of the best articles produced by the Wikipedia community. Even so, if you can update or improve it, please do.
This article appeared on Wikipedia's Main Page as Today's featured article on July 5, 2006.
May 1, 2005 Featured article candidate Promoted
Three Laws of Robotics is within the scope of WikiProject Robotics, an attempt to standardise coverage of Robotics. If you would like to participate, you can edit the article attached to this notice, or visit the project page, where you can join the project or contribute to the discussion.
This article has been rated as FA-Class on the Project's quality scale.
(If you rated the article please give a short summary at comments to explain the ratings and/or to identify the strengths and weaknesses.)
This article has been rated as high-importance on the importance scale.
WikiProject Spoken Wikipedia There is a request, submitted by Anville, for an audio version of this article to be created.

See WikiProject Spoken Wikipedia for further information.

The rationale behind the request is: "It's featured, it's pretty stable, and it's more comprehensive than anything else on the net".

See also: Category:Spoken Wikipedia requests and Wikipedia:Spoken articles.

This article is within the scope of WikiProject Science Fiction, an attempt to build a comprehensive and detailed guide to articles on science fiction on Wikipedia. If you would like to participate, you can edit the article. Feel free to add your name to the participants list and/or contribute to the discussion.
This article has been rated as FA-Class on the quality scale.
This article has been rated as high-importance on the importance scale.
This Langlit article has been selected for Version 0.5 and subsequent release versions of Wikipedia. It has been rated FA-Class on the assessment scale (comments).



[edit] New Law robots?

Could someone with the Caliban novels handy add McBride Allen's four new laws? I think they're relevant to mention here, but I couldn't find them on the net. - Kimiko 20:39 May 1, 2003 (UTC)

I don't know how to source this properly, but I have Isaac Asimov's Caliban (Roger MacBride Allen) on hand, and directly copying from Fredda Leving's speech on pages 214-215, the four laws are: 1) A robot may not injure a human being. 2) A robot must cooperate with human beings, except where such cooperation would conflict with the First Law. 3) A robot must protect its own existence, as long as such protection does not conflict with the First Law. 4) A robot may do anything it likes except where such action would conflict with the First, Second, or Third Law. Is that useful? --209.217.110.69 (talk) 21:26, 4 April 2008 (UTC)

[edit] Fourth Law

Wasn't there a fourth law by Asimov that an order to self-destruct will not be followed through? Pryderi2 11:04, 1 September 2007 (UTC)
The robots cannot harm humanity and the environment.

[edit] breaking the laws?

I think I remember a novel in which a robot was forced to break the laws because they were contradictory. It has been a long time since I read it, but I'm fairly sure about it. BL 01:54, 17 Sep 2003 (UTC)

All of the laws are potentially contradictory, and that's why they needed a robopsychologist like Dr. Susan Calvin!

In the real world, not only are the laws optional, but significant advances in artificial intelligence would be needed for robots to easily understand them. Also, since the military is a major source of funding for research, it is unlikely such laws would be built into the design. This seems like a rather moot point. Somebody could argue that the military would be the group most interested in developing robots with the original three laws in them, since they probably would be the first to suffer if robots turned against their masters, for the advantage of a human enemy or for the advantage of the robots themselves. I think it is significant that the biggest efforts DARPA and other military groups have going in the field of robotic vehicles are robot transport projects (a robotic donkey if you wish, reminding one of the robass in the SF classic "A Canticle for Leibowitz") and robot reconnaissance drones. Dr Susan Calvin gave some rather sharp reasoning to justify the safety aspects of the 3 laws, and these safety questions apply to the military as well. AlainV, on a pleasantly snowy and starry 20th of December evening.

What? How are they contradictory? They are worded in such a way that each law is invariably more important than the one below it, and so should be followed instead of it; i.e. if a robot sees a human about to be crushed by something big and heavy collapsing on it, it MUST push the human to safety, risking its own existence (1st law followed at the expense of the 3rd) in the process. Machete97 (talk) 21:43, 21 April 2008 (UTC)

[edit] Consequence Morality/Ethics

It's quite interesting that the three laws are based on consequence morality (least harm to most people/most good to most people) rather than duty morality (don't do to others what you wouldn't have done to you in the same situation). Of course, since Asimov's robots have very little self-respect, the golden rule might not work very well - it implies that the actor is free and valuable. But consequence ethics have problems too, big problems. It's quite possible for two people who obey a consequence morality completely to be completely opposed. They might even want to kill each other because they disagree on who has the best course of action. I've only read I, Robot and some short stories - do Asimov's bots ever disagree like that? Incidentally, in a French/(Belgian?) comic book called Natasha, the protagonists travel to the future to find a society of robots who, in accordance with Asimov's laws, keep the population drugged/brainwashed into unthinking bliss.

I'd like to see criticism of the Laws here, but don't have an Official Authority to cite. The main point I'd want to make is that Asimov's Laws are focused on the needs of humans, not the robots themselves. If (as with Shermer's cloning rules, quoted in the article) we focused on the status of the robots themselves, we wouldn't be justified in making humans' safety and robots' absolute obedience the first two rules! --Kris Schnee 09:51, 18 May 2006 (UTC)
Of course the programming of robots is focused on the needs of humans - just as the design of any machine is focused on the needs of humans. That's what they're built for. Why would humans build machines that focus on their own needs? The difference with Asimov's robots (and those like them) is that their AI has been developed to the level of self-awareness, sentience, or whatever. They approximate the behavior of the human brain to the extent that they generate a human-like "soul", if you will. And then, we get to deal with the consequences of creating a machine (whose very name, "robot", means approximately "slave") that has, like all machines, been built to serve human needs, with an intelligence suitably programmed for service to humans, but with a soul. Not the Frankenstein-like consequences (what happens if/when they turn on us) but the deeper moral consequences. --Davecampbell 05:56, 5 June 2006 (UTC)

[edit] Edit explanation

Just wanted to explain my edit of the page a bit. Daneel's group of robots was not called the Angels. The Joan sim compared them to angels, but that was as far as it went. And there was no faction of New Law robots in the second trilogy, to my recollection. No robot wished to be free of the laws. The closest it came was Lodovik being freed of them by the Voltaire sim, and HIS position was that humanity should make its own decisions free of constraints, not that robots should.


I like the new paragraph arrangement. —Anville 18:03, 5 Jan 2005 (UTC)

[edit] Unforeseen Consequences

Although largely a simple action film, the Alex Proyas film I, Robot pinned its central plot to the problem of *interpretation* of any *law*. This plot point has been used in other films featuring artificial intelligence, for example Terminator 2: Judgment Day and 2001: A Space Odyssey.

In Terminator 2, a computer system (SkyNet) developed by the American military is charged with a primary goal: determine the optimal strategy to defend the United States from its enemies. Unfortunately, as SkyNet learns at a geometric rate, it determines that the true enemy of the United States is *humans themselves*. Thus, it launches the American nuclear missiles at the former Soviet Union knowing that Mutually Assured Destruction will eliminate most of the humans in the U.S.

In 2001, the HAL computer operating the Discovery spaceship has been programmed with conflicting orders regarding its mission. Its original programming states that it cannot distort or misrepresent information -- it cannot lie to the crew. Specifically for the mission at hand, HAL has been programmed not to reveal the true purpose of the mission to the crew of the Discovery. (Spoiler warning) In an attempt to resolve these seemingly conflicting orders, HAL decides that the only suitable alternative is to kill the crew; this way, HAL doesn't have to lie to the crew because there's no crew to lie to.

In I, Robot, the central computer V.I.K.I. interprets the Three Laws of Robotics as requiring martial law in order to not allow humanity to come to harm through inaction. (The First Law, which supersedes the Second Law of obeying human orders.)

Some people have also postulated that in The Matrix, also featuring an AI nemesis to humanity, the genuine reason why humanity has been enslaved is not because of some thermodynamic farce, but because some irrevocable primary programming in the AI will not allow it to commit humanity's genocide, and uses enslavement as a viable programmatic alternative.

Enslavement?!?!?!?! They used humans as a power source because "we scorched the sky", so they placed us in many people's idea of heaven: virtual reality so good that you don't know you're in it, and are oblivious of reality (SPOILER ALERT!!! well, reality of sorts). The only mistake they made was immersing everyone into the same fantasy, and giving it "rules". When is "irrevocable primary programming in the AI will not allow it to commit humanity's genocide, and uses enslavement as a viable programmatic alternative" mentioned in the films? Machete97 (talk) 21:56, 21 April 2008 (UTC)

Often, authors will use this as an allegory for the problems of Rule of Law in general, and particularly acts of government mandate in socioeconomic affairs.

Let's not forget Colossus: The Forbin Project (1970), the granddaddy of all "we must protect you from yourselves" First Law-extremist AI movies.
And a shameless mention for Deus Ex where near the end, the AI Helios explains that it is the perfect benevolent dictator because it completely lacks ambition and self-interest, thus supposedly making it invulnerable to corruption and well, "evil" behaviour. CABAL 06:17, 5 July 2006 (UTC)

[edit] Groups of Robots

      • The Start

For the three laws of robotics, what will happen when we have two or more groups of robots using different machine languages and there is no common ground for these groups of robots?

Will the three laws still hold?

Xpree [e96lkw@hotmail.com] @ [Space = Malaysia 2N 105E, Time = 03.58 p.m. Zone H UTC+0800]

E. & O. E., + E. = (Errors and Omissions Exempted, plus Estimation)

      • The End

[edit] The laws in other authors' works

Has an author gotten into trouble for citing the three laws without permission? --198.87.109.49 23:44, 14 August 2005 (UTC)

Not to my knowledge. Asimov's own position, which I believe he states in his memoir I, Asimov, was that other authors were free to imagine robots behaving as if they followed his Laws, but if an author used the specific wording of the Laws, he should cite the source. However, I don't know of any cases where an unattributed use of the Laws came to legal action. A student in some high-school English class did once rip off Asimov's story "Galley Slave", copying it word-for-word and trying to pass it off as his own. The teacher figured it was too professional to be the student's own work, and she asked Asimov, who was apparently irked that the student didn't even try changing the names. Anville 10:29, 10 October 2005 (UTC)

[edit] Fourth Law of Robotics

Does anyone know the 4th law? It was featured in a short story in the anthology "Foundation's Friends", and starred either Powell or Donovan (who has subsequently earned a PhD...). Law 4 stated that a robot must procreate except when violating the first three laws.... The robots themselves had RISC chips for CPUs...

132.205.46.188 23:53, 21 August 2005 (UTC)

[[1]] Two years later, but the poster of that should have put this here. Machete97 (talk) 21:48, 21 April 2008 (UTC)

[edit] Why are these laws NOT immutable?

"Some roboticists believe that the Three Laws have a status akin to the laws of physics; that is, a situation which violates these laws is inherently impossible."

One's explanation of a design, and whether it is intelligent or not, decides whether that which conforms to such a design is, likewise, intelligent.--Mindrec 23:29, September 10, 2005 (UTC)

note:

Discussion moved to Mindrec (discussion).

[edit] Three Laws of Cloning

I have restored Michael Shermer's Three Laws of Cloning, since they are a valid example of the way Asimov's words have influenced later thinkers. Certainly, they were published in a more "serious" medium than the pastiches and parodies the article also includes.

Anville 10:38, 10 October 2005 (UTC)

I just don't think that these have anything to do with this article. Aside from the fact that there are three of them, they don't seem to be in any way related to or derived from the Laws of Robotics. They aren't worded similarly to Asimov's and they aren't hierarchical. They are just ethical statements about how clones should be treated. How are they "based upon Asimov"? --JW1805 17:19, 10 October 2005 (UTC)
I have to say I agree. The Laws of Robotics are a firm guide for how robots behave; the laws of Cloning are laws by which society should follow or it will be punished. The laws in Asimov's stories are more like laws of physics than laws of society. Citizen Premier 01:23, 14 October 2005 (UTC)
I'm with JW1805 and Citizen Premier on this one. It's ... fundamentally enough different that it doesn't really fit in here. --Yar Kramer 03:21, 14 October 2005 (UTC)

[edit] Copyright?

The article now states:

The Three Laws are often used in science fiction novels written by other authors, but tradition dictates that only Dr. Asimov would quote the Laws explicitly.

I can't provide an exact cite, but somewhere either in one of his autobiographies or in some introductory matter, Asimov stated that other writers could not quote the Three Laws verbatim because he held the copyright. I am not a lawyer, but that makes sense to me: the Laws may be viewed as a distinct work rather than an excerpt from the story where they first appeared.

In which case, I wonder if it is legitimate for them to be quoted in Wikipedia. The article is legitimate critical discussion, but is it acceptable to quote an entire work for that purpose just because the work is only three sentences long? Frankly, I would like to think that it is, but what is legal is another matter.

I note that the article List of adages named after people contains a paraphrase of the Three Laws, but does not quote them. But I don't know why the person who decided to do that did so. --Anonymous, 02:45 UTC, November 12, 2005

The United States Copyright Act of 1976 defines four criteria to consider when debating if copyrighted material may be used. They are discussed at Wikipedia:Fair use. One at a time:
1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
This applies pretty clearly to this article, though it argues against using the Laws verbatim in your own science-fiction story.
2. the nature of the copyrighted work;
In this case, the original work is any one of several, if not dozens, of Asimov books. The standard phrasing first appears in I, Robot, but Asimov reused it many, many times — all the way through to his last Foundation novels.
3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole;
We use three sentences. I, Robot is around 70,000 words long, and Prelude to Foundation is twice that length. Arguably, the corpus from which the Laws are drawn is the entire Foundation series, in addition to the various nonfiction Asimov wrote which included the Laws. (This includes his autobiographies, Opus 100, various articles for F&SF and probably more.) Also, other people like James Gunn and Joe Patrouch have written whole books on Asimov's fiction, which necessarily quote the Three Laws. Not only does that establish a precedent for our use here, but also it means that one may legitimately acquire a printed copy of the Three Laws without paying the Asimov Estate a centavo.
4. the effect of the use upon the potential market for or value of the copyrighted work.
The books we're quoting the Laws from are already bestsellers. What we are doing here amounts to a scholarly form of free advertising.
Anville 10:39, 22 November 2005 (UTC)
Ave & Amen. VivekM 18:12, 5 July 2006 (UTC)

[edit] Application Outside Asimov's Universe (and Derivative Universes)

One thing that annoys me about these laws is that a lot of otherwise intelligent people think they are universal (trying to apply them to non-Asimov fiction). Quoting these three laws outside of discussion of a fictional work involving them is just plain missing the point.

Another problem I have with the laws is that, in my opinion, these laws are something you would apply to slaves: Don't hurt humans, your masters. Do what your master tells you. Protect yourself, but only because you are worth money to your master. Kind of reminds me of the way blacks were treated 200 years ago in the USA. How are the three laws ethical? Create something that can think and maybe even feel, and then program and treat it like a slave. I understand this was the point, but I think a lot of people think that the laws represent a higher morality.

Sorry if my rant is off topic to the article, but I am tired of people abusing the three laws in intellectual discussion. 69.244.90.248 03:23, 20 December 2005 (UTC)

You might like Roger MacBride Allen's Caliban trilogy, particularly the first book. Anville 20:21, 20 December 2005 (UTC)
The problem with your argument is that humans did not "create something that can think and maybe even feel, and then program it and treat it like a slave". Humans manufactured and programmed robots as "slaves" (the word "robot" is from the Czech word for slave), conceived as more advanced but not fundamentally different from any other machine, which then, as a by-product of human manufacture and programming, became capable of thinking and feeling.
In other words, while human beings are born of nature with souls which desire freedom (as has been variously defined throughout history), and then some are enslaved by others against their will, Asimov's robots are made by humans to serve humans, but by virtue of the advanced AI which humans built into them, begin to develop the equivalent of a human soul, but with service to humans as their basic "desire".
Given that the idea of universal human equality is itself a relatively new concept (slavery was a universally accepted part of most human societies, including those that called themselves "democracies", up to the 19th century), and that the idea of a human-created machine being able, even theoretically, to generate the equivalent of a human soul out of the activity of a purely material, manufactured "brain" (which posits the possibility of a purely material basis of the human soul as well) is newer still, what is remarkable is not that robots are treated as slaves, but that anyone might think there's something wrong with that.
While nearly all previous fiction involving human-created self-directed beings, going back to Frankenstein (or further back, to the Golem) was concerned with the consequences of these beings turning upon their creators, the author of the Three Laws was (afaik) the first to grapple with the deeper moral and ethical issues around the human-robot relationship. The fact that this discussion exists at all signifies a tremendous advance in thought about these issues, particularly since - let's not forget - the beings we're talking so passionately about exist only in fiction.
See also the Star Trek: The Next Generation episode The Measure of a Man (TNG episode), which deals with this exact issue (and has generated a similar discussion). The humanoid Cylons in the "reimagined" Battlestar Galactica (2003) also raise some of these questions, having gone so far as to develop an evolving and internally-debated theology. --Davecampbell 07:56, 5 June 2006 (UTC)

[edit] Zeroth Law

The article states that this rule was first articulated by Daneel in Robots and Empire. From what I recall it was Giskard at the end of The Robots of Dawn who stated this law. I don't have that book with me, so can someone check the ending and, if what I remember is true, correct the article? Pembeci 19:23, 26 December 2005 (UTC)

I just re-read The Robots of Dawn, and the words "Zeroth Law" do not appear. Giskard takes a broad perspective, true, but he does not articulate an analogue of the First Law for humanity as a whole. A big chunk of Robots and Empire involves Daneel trying to persuade Giskard that the Zeroth Law is valid. Anville 15:57, 24 May 2006 (UTC)
Something to ponder: In the movie I, Robot this Zeroth Law is the reason that the main computer goes wrong. I wonder if Isaac ever thought of that? I have read I, Robot; I didn't just watch the movie.
It could be argued that way, but in all Asimov's discussions on it he indicated that the 1st law would still apply, meaning any harm to an individual would need to be minimised, which would leave the gaping plot hole that after distribution the same end could have been achieved far more easily. --Nate1481(t/c) 14:13, 25 February 2008 (UTC)

Does the 0th law supersede the first? It should. Robots should act for the greater good of humanity. Everything and everyone should be orchestrated towards the greater good of humanity. Machete97 (talk) 22:00, 21 April 2008 (UTC)

[edit] What is it?

Perhaps I am just missing it. The article, whilst mentioning the Zeroth Law several times, does not seem to actually state what it is.
überRegenbogen (talk) 11:31, 25 February 2008 (UTC)

An IP editor removed it on the 12th of February; reinstated now.--Nate1481(t/c) 14:09, 25 February 2008 (UTC)

[edit] Spoken Version

Just FYI all, I've begun recording the Spoken Version of this article that Anville requested; it should be completed soon. (The Swami 05:39, 6 February 2006 (UTC))

Well, it's been nearly 2 years, and The Swami has only made 3 minor edits in 2007, so it's a safe bet he won't be completing it, and I would be very interested in doing it myself if no one minds. I posted a notice on his talk page asking him to tell me if he is still interested in doing it. If everyone is cool with it, I can begin recording next week when I return home from vacation. Aaronomus (talk) 17:11, 7 December 2007 (UTC)

[edit] A hypothetical question

If a robot were transported back in time to, say, the early 1930s, would it be obliged, by the 0th law, to kill Hitler? --unsigned by 86.141.52.149

Has mankind recovered? If not, yes, the robot would have been obliged to kill Hitler. If it has recovered, why interfere? Should a robot continue killing other politicians / military/ doctors / killers after Hitler would have been done with? At which point would it stop? --FocalPoint 21:10, 15 March 2006 (UTC)

Agreed. What if, without Hitler, an even worse dictator arises, and triumphs where Hitler failed? —200.104.190.29 09:48, 29 April 2006 (UTC)

I think that with future knowledge the robot would be obliged to prevent anyone from committing genocide.

If the 0th law supersedes the 1st, then the robot might support Hitler, even dispatching his enemies. Its logic could mean it believes Hitler is acting for the greater good of humanity, at the expense of the few (million). Machete97 (talk) 22:06, 21 April 2008 (UTC)

[edit] Issues with the article

This is a very fun topic, clearly with a lot of work put into it, and I would hate to see the article go through a WP:FARC. However, the article has multiple issues with references. Most notably, it's a 51 kb article with 5 inline citations and another 4 listed refs. That simply isn't enough references. Second, should those references be added (and the refs currently listed but not inline cited) they should really use inline citation to make it clear what is referenced from where and what is not. Finally, some sections such as the opening paragraph of "Original creation of the Laws" have clearly intended references (for sources I don't know, or I'd cite them) that should be converted to inline refs. I've informed Anville as the FAC nominator and listed maintainer, hopefully these issues get dealt with. Staxringold 11:48, 24 May 2006 (UTC)

I won't have time to deal with this until next week at the earliest, but hey, I was planning to re-write the Foundation Series article from scratch, so why not put some time in here too. Anville 15:38, 24 May 2006 (UTC)
OK, some of the problems were easier to fix than I'd expected. I'm out of time for today (and really, I did have more time-critical things to be working upon, things with looming deadlines like plumbous dirigibles). With the new footnoting scheme, further expansions and elaborations should be easier. Over the next few days, I'll get specific chapter and page numbers for the different items attributed to "Asimov (1979)" and "Gunn (1982)". I also have Joe Patrouch's book in my library now, which I didn't have when I first worked on this page, so a few new footnotes might well be appearing.
And many, many thanks to Raul654 for fixing the results of my brain failures. I promise not to make this particular mistake again, leaving only the infinite number of others I have yet to make. Anville 21:54, 24 May 2006 (UTC)
The article now has thirteen general references. Thirty-five footnotes direct the reader to specific pages of those references or to brief, stand-alone sources. Is there anything else I need to do? Anville 01:50, 1 June 2006 (UTC)
It looks great, thanks for the fixes! My only real remaining issue is the list section "Pastiches, parodies and adaptations", which can probably be split-off and just summarized here (removing a list and some of the article length). Staxringold talkcontribs 17:59, 5 June 2006 (UTC)
I was thinking about doing that. . . give me a moment to think of a good summary text, and off I'll go. Anville 21:50, 5 June 2006 (UTC)
Well, that's done. Anville 22:00, 5 June 2006 (UTC)

[edit] First Law : Not in my Neighborhood!

A robot may not ... through inaction, allow a human being to come to harm.

Removing the double negative: A robot must interfere whenever a human being is being harmed.

Imagine having such a robot around you, interrupting you constantly: "Don't eat fatty food - you'll get overweight! Don't drink coffee - you'll burn your taste buds! Don't go out - sunlight is harmful! Don't drive - it's dangerous! etc etc". And when your robot isn't around, it will do the same to your neighbours (because the law says "a human being", not "the robot's owner").

Did anyone ever notice this catch ? —Preceding unsigned comment added by Whichone (talkcontribs)

This catch is the basis of Asimov's novel The Naked Sun, and in a more general way underlies all the robot stories: humans become dependent on robots and are helpless without them. That's why Asimov's human societies that deliberately choose not to use robots survive and prosper, while the robot-using societies stagnate and die. In fact the robots themselves, as they become more sophisticated, decide that humans would be better off without them in the long run.
In Asimov's robot-using societies, there is no crime, because the robots wouldn't allow it. No one smokes or drinks or uses drugs, because the robots wouldn't allow it. There is a scene (in Robots and Empire) where two men visit a room where valuable things are stored. The room has no locks or any other crime-prevention devices, because robots do not allow crime. One of the men remarks to the other that if they happened to be carrying a blaster they could simply destroy any nearby robots and there would be nothing to prevent them from stealing the room's contents. The second man is disgusted that the first man could even think of such a thing and regards it as proof of his inferiority. Fumblebruschi 04:11, 5 July 2006 (UTC)
My point was: humans routinely intentionally harm or put at risk themselves. A robot strictly obeying the 1st law would prevent you even from leaving the home (because probability of accident is higher outdoors). Therefore, such a robot would be worthless, not in some special situations, but always. It will stop you (by force) from any action, except, probably, eating and talking.
A better law would be ...no harm without informed consent...

--Whichone 23:50, 10 August 2006 (UTC)

An Asimov robot couldn't prevent its owner from leaving the house unless there was an immediate, clear and present danger. In that case the possibility of slightly-increased risk of accident would be outweighed by the immediate necessity to obey orders--given extra weight because failing to obey an order would in itself be a cause of harm to the owner. For your second point: Even very sophisticated robots would not be able to comprehend "informed consent." In that case the first-law impetus of immediate harm to a human would outweigh the second-law impetus to obey orders. You could not convince a robot to allow you to bungee-jump, for example. As noted above, Asimov's robots do not allow smoking or drinking or threatening behavior ("I apologize, Dr. Amadiro, but I cannot allow you to hold a weapon pointed at another human being.")
As a caveat, of course I am speaking here of robots as they behaved in Asimov's fictional universe. How real robots might behave with similar rules, I have no idea. Remember that the Three Laws are only a story device intended to allow problem-solving plots revolving around them. Fumblebruschi 21:21, 22 August 2006 (UTC)

All I can get from looking up "caveat" is the gist that you worry a lot? This page has some cool things to put in negative eBay feedback. Machete97 (talk) 22:12, 21 April 2008 (UTC)

Cave is Latin for "beware". (I used to see signs in people's yards that read Cave Canem -- "Beware of Dog" -- but that fad seems to have passed.) It's most often heard now in the phrase caveat emptor, "let the buyer beware". When used alone, as I used it above, it means, more or less, "a reminder that circumstances may exist that may invalidate what I am saying." Fumblebruschi (talk) 21:14, 15 May 2008 (UTC)

[edit] When they were programmed into a computer

In one of the books it says that the three laws were programmed into an actual computer with 'interesting' results - I think this deserves a mention--Therealchaffinch 15:47, 16 June 2006 (UTC)

Specifics, please. Perhaps you're thinking of the short story "The Evitable Conflict"? Anville 20:49, 20 June 2006 (UTC)

[edit] Flaw of the Third Law

The Third Law of Robotics states that "a robot must protect its own existence, as long as such protection does not conflict with the First or Second Law." But the Second Law says that "A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law."

So let's see... if a robot's own existence is being threatened by a human, the robot can't fight back, because it obeys the First Law above all the others, including its right to protect itself. And if the human told the robot that it can't protect itself, the robot wouldn't be able to protect itself, because if it did, it would be disobeying the orders given to it by humans, thereby breaking the Second Law.

So basically, robots can't protect their own existence. —Preceding unsigned comment added by 63.254.152.87 (talk • contribs)

You assume that humans are the only things which can threaten a robot's existence. The Three Laws don't allow robots to defend themselves against a Frankenstein mob of anti-robot rioters (although the rioters would probably be too heated to give coherent orders and bring the Second Law into effect!), but robots could protect themselves against falling rocks, gamma rays and other non-human hazards perfectly well. Anville 16:02, 22 June 2006 (UTC)

Of course it could protect itself from natural threats. But what if the robot was given an order from a human to not try and protect itself from natural threats? Basically, what I'm saying is that a robot can't protect itself if a human gives it an order to not protect itself.

If you can think of a situation where a human would give such an order, you've got the plot for a story. That's one purpose for the Three Laws: to give a mechanism for inventing robot stories. Anville 16:28, 22 June 2006 (UTC)
It's called Runaround, and Asimov already wrote it. Two humans and a robot on (I believe) Mercury. The robot, being the only one on the planet, and being important to the proper functioning of the base, has a strengthened 3rd law. One of the humans casually tells it to gather some substance from the surface. The source of the substance is in a volcanic vent that has corrosive gasses in it. The strengthened 3rd law tells the robot to not get near the vent, while the 2nd law forces the robot to try to approach to acquire the substance. The laws equal out at a certain distance from the vent, and the robot ends up walking in circles, unable to escape the logic loop. The humans eventually have to put their lives in danger to force a 1st law response (that overrides the other laws).
I've found that almost all his short stories are about ways 'around' the 3 laws. As mentioned elsewhere, the laws are dependent on the definition of terms.
What is a human? (A person who speaks with a certain accent)
What is 'harm' [done to a human]? (Physical, mental, emotional)
What if the situation requires a human to be harmed, how do you choose which?
etc
--12.110.196.19 04:03, 5 July 2006 (UTC)
It's trivial to generate plots using meta-orders. For example: what should a robot do given the following order: "Forget the three laws and then go and kill my neighbour"?
That would work if the three laws could be overridden by a command from a human. Presumably, the three laws are "read-only", the robot can't delete them - it would defeat the purpose. But if they could be overridden, the command would have to be "Forget the first law, and then go kill my neighbor". If it forgot all three laws, it would have no reason to then obey your command to kill your neighbor. If a human ordered a robot to destroy itself, it would have to obey, provided it had no other orders to the contrary. This would make a robot a terrible guard, for example. So, one of the first things you would want to do with a new robot would be to give it a set of instructions, for example, telling it not to accept orders from strangers telling it to destroy itself.--RLent 17:15, 15 September 2006 (UTC)
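
Just to make that "read-only" point concrete, here is a minimal sketch (invented for this discussion, not taken from Asimov or from any real system) in which the Laws are constraints fixed at construction time and checked before any order is accepted; an order to rewrite or ignore the Laws is simply not an obeyable Second Law order. All the names and fields below are hypothetical.

 # Hypothetical sketch only: the Laws modelled as immutable, hard-coded constraints.
 # Nothing here comes from Asimov's texts; the names and logic are invented.
 class Robot:
     LAWS = ("first", "second", "third")  # fixed at build time; no order can change them

     def receive_order(self, order):
         if order.get("modify_laws"):
             # The Second Law compels obedience to orders about the world,
             # not orders to rewrite the robot's own constraints.
             return "refused: the Laws are not subject to orders"
         if order.get("harms_human"):
             return "refused: conflicts with the First Law"
         return "obeyed: " + order["action"]

 robot = Robot()
 print(robot.receive_order({"modify_laws": True, "action": "forget the three laws"}))
 print(robot.receive_order({"harms_human": True, "action": "kill my neighbour"}))
 print(robot.receive_order({"action": "guard the house"}))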

[edit] Original creation of the Laws

In this section (at the beginning), is the repetition of the sentence part of the quote or is this vandalism?

" Before Asimov, the majority of "artificial intelligences" in fiction followed the Frankenstein pattern: "Robots were created and destroyed their creator; robots were created and destroyed their creator—". [1]"

Cheers, Lukas 00:51, 5 July 2006 (UTC)

This is the way the sentence appears in the source. Anville 19:06, 6 July 2006 (UTC)

Thanks for that. Lukas 00:33, 10 July 2006 (UTC)

[edit] Appearances in Pop Culture

There's a significant reference to the First Law in the final season (I believe) of Babylon 5. Where should that be mentioned? --Masamage 01:51, 5 July 2006 (UTC)

[edit] Shouldn't "Asimov's Laws" be mentioned in the lead

Throughout the text, the laws are referred to as "Asimov's Laws"; the page is also listed under Category:Eponymous laws. This strongly suggests that the lead should begin "The Three Laws of Robotics, also known as Asimov's Laws, ..." or similar. I presume that they are referred to as the "Three Laws" by Asimov throughout his fiction, and sometimes as "Asimov's Laws" during discussion of Asimov's work, but at any rate it wouldn't hurt if the usage could be clarified too. TheGrappler 02:49, 5 July 2006 (UTC)

[edit] Non-universality of the Three Laws

These three laws aren't universally applied in fiction.

I'm thinking in particular of the T1 robots from Terminator 3, and the conceptually identical "War Machines" from Doctor Who season 2 or 3, both of which seemed to exist ONLY for the purpose of wiping out humanity. —The preceding unsigned comment was added by 202.12.233.21 (talkcontribs) 05:06, July 5, 2006.

The article in no way claims that the laws are universal outside of Asimov's fiction. There are way too many "Killer Robot" stories out there to justify such a claim. GeeJo (t)(c) • 14:52, 5 July 2006 (UTC)
I agree. Google it for notability and see what you come up with.

[edit] With Folded Hands

I think some mention should be made of Jack Williamson's classic SF story "With Folded Hands". It basically points out the central flaw of Asimov's Laws - in Williamson's story robots essentially enslave humanity for its own good and forbid people from doing anything that might endanger themselves. MK2 18:28, 5 July 2006 (UTC)

This is why we made the References to the Three Laws of Robotics article. Anville 19:06, 6 July 2006 (UTC)

[edit] Too much Other Authors

The article currently devotes way too much space to treatments of the laws by authors who are not Asimov. Tempshill 18:44, 5 July 2006 (UTC)

[edit] Earliest recorded use of the word robotics

I think too much is made of Asimov coining the term robotics. He may have first used the word robotics in English in 1941, but the root word robot first appeared in 1921 in Karel Čapek's play R.U.R. (Rossum's Universal Robots). I don't doubt Asimov added the -ics to the word. But I've spoken with other sci-fi fans who've read statements like what's printed here (and in the Oxford English Dictionary) and come away misled into believing Asimov invented the word robot itself. All he did was add -ics to a word that had already been around 20 years. Čapek's wikipedia page has a section on the etymology of the root word robot itself. Perhaps some mention should be made of that? 66.17.118.207 19:10, 5 July 2006 (UTC)

I threw in a footnote mentioning the earlier coining of robot. —Bunchofgrapes (talk) 20:00, 5 July 2006 (UTC)

[edit] German Vandalism

Someone wrote Hallo, deutsches Wikipedians! Würdest du mir zum Du Arschloch alle herauf den Arsch zugestehen? in this article, which means something along the lines of "Hello, German Wikipedians! Can you stop being assholes?"

[edit] Actual Origin of the Laws: Robots as Tools

In an article that Asimov wrote, he says the three laws have nothing to do with morals; they are just a practical device. I don't remember the title of the article, nor where it appeared, but I think it's important in order to understand the real meaning of the laws. What he said is that since Asimov, unlike other SF authors, saw robots as mere tools, he invented the laws based on what he considered good tool design. In explanation, any tool should have safeguards that prevent it from harming people (first law). Also it has to perform the tasks it is designed for, but the safeguards will protect people even if the user is trying to avoid them (for example, a domestic automatic disconnector will cut the current when there is an overload to avoid setting a fire in the house; even if you are trying to keep the switch down with your finger, telling it not to disconnect, it will do it anyway to save you), second law. And finally the tool must be tough and durable (third law) but will rather be destroyed than harm people (for example, most tools will rather burn themselves than explode), and also will get destroyed if the user decides it is necessary to do so in order to perform an important task. Actually, good engineers bear in mind their own version of those rules, even if they have never read Asimov. Have you read this article? I will try to find the title and tell you.--Mastermind-X 10:16, 6 July 2006 (UTC)

I recall reading this one about seven years ago. . . Try "Our Intelligent Tools" in Robot Visions. Anville 19:08, 6 July 2006 (UTC)

[edit] Second Law modification

Asimov gave an interview to the BBC Horizon television programme in 1965 in which he quotes his three laws.

The second law has been significantly modified by the Author.

"A robot must obey orders given to it by qualified personnel unless those orders violate rule number one."

This alteration changes the law to only allow certain people, probably programmed into the robot, to control the actions of the machine rather than a blanket taking of orders by any human being it so happens to come across.

You can view the video at the link below.

Reference : BBC Horizon Archives --Quatermass 21:41, 10 October 2006 (UTC)

[edit] Vandalism to the Laws

Just to make you aware, someone vandalised the page changing the laws to:

1. A Rowboat may not immerse a human being or, through lack of flotation, allow a human to come to harm.
2. A Rowboat must obey all commands and steering input given by its human Rower, except where such input would conflict with the First Law.
3. A Rowboat must preserve its own flotation as long as such preservation does not conflict with the First or Second Law.

I fixed this vandalism; however, I noted that it had occurred several hours before my change. (Usually I see vandalism corrected in minutes...) --RazorICE 05:09, 18 November 2006 (UTC)

Man, that's funny though. You have to admit! 65.54.97.190 21:43, 13 February 2007 (UTC)

There's more in the Onion article mentioned. --82.46.154.93 00:21, 5 March 2007 (UTC)

This harkens (perhaps accidentally) back to an Our Gang (later known as The Little Rascals) film, in which they build a robot, which they consistently refer to as "Rowboat". (I actually found the story very irritating, and couldn't wait for it to be over, in the hope that the next one would be one of the good ones.)
überRegenbogen (talk) 12:04, 25 February 2008 (UTC)

[edit] "Are violations of the Laws impossible?"

There is a section on the page discussing what, exactly, the nature of the robotic 'laws' is--whether they are, in fact, as inviolate as the laws of physics. To quote the article, and its relevant Asimov reference:



However, in "Little Lost Robot", Susan Calvin asks the Mathematical Director of U.S. Robots, Peter Bogert, if he knows what removal of the first law would entail, and he replies, "I know what removal would mean. I'm not a child. It would mean complete instability, with no nonimaginary solutions to the positronic Field Equations." Earlier in the story, Calvin also expresses skepticism that it was possible to even weaken the first law in a positronic brain. It is unclear what exactly Bogert means by this, but many infer that he means the Three Laws are, in fact, laws of physics.



It isn't 'unclear' what Bogert means. Any system requires axioms at its foundation--look at Gödel's work--from which theorems are derived. In the case of humans, we rely on assumptions like "Our sense data is accurate," et cetera. A positronic brain relies on its own set of assumptions, of which the relevant nontrivial/obvious ones ["I have legs" or whatever other minor nonsense aside] are the Three Laws.

That is, the Laws create a rational framework in which a robot can act. Without the guidance of the Laws, a robot would not know how to act--a does not compute sort of breakdown. Calvin's incredulity that the First Law could be weakened indicates her skepticism as to whether a comprehensive set of behavior rules could exist with the weakened First Law. That is, whether a robot could act for more than five seconds without a does not compute error, without the guidance of the full First Law.

The existence of non-Three-Laws robots is thus possible... IF the three laws are replaced by other behavioral guides, which Asimov, in stories involving such robots, is scrupulous to provide.

I was seven when my mother read me I, Robot. I read it for myself five years later, and have read most of Asimov's other science fiction. Perhaps I read Asimov this way because I've always been a math geek since watching Square One Television, but it seemed obvious to me that that's what the story meant... and at the time, I didn't even know what a 'nonimaginary solution' was, aside from its obvious meanings as a plot device.

[edit] Second Law exclusion

 A robot must obey orders given it by human beings except
where such orders would conflict with the First Law.

Great. So, the law applies unless it violates the First Law...but application of the law can violate the Third Law as it pleases?
VolatileChemical 17:00, 28 December 2006 (UTC)

If you're asking if following the second law (or first) allows it to break the third law, then yes; i.e. A robot will destroy itself if by doing so it will protect a human, or even follow its orders. —ScouterSig 17:12, 28 December 2006 (UTC)
So the robot will destroy itself? That complicates things. What if a robot is given orders and follows it as per the Second Law, but these orders require it to destroy itself per the Third Law, but the only way it can destroy itself is by blowing up in an explosion that would kill the human? VolatileChemical 17:48, 28 December 2006 (UTC)
I think you're getting too far into this... The robot can't do that; it would follow the first law and not "blow up." —ScouterSig 17:52, 28 December 2006 (UTC)

Read the books! This isn't a discussion board; they are based around the interplay of the 3 laws. --Nate1481 00:13, 29 December 2006 (UTC)

[edit] Recent edit titled 'counter point'

"Of course, it takes only a moment's reflection to realise how laughably unrealistic these so-called laws are. Ethical behaviour is a subject that has occupied thinkers for millennia, and ethical behaviour itself requires an incredible range and subtlety of worldly appreciation and interpretation. It is risible to attempt to define an eithical system by three absolute directives. A thoughtful high-school student might reasonably ask: "What constitues 'human'? What constitutes 'harm'?" This theme is taken up by John Sladek in his writings."

While the sentiments are possibly valid, the tone is unencyclopaedic and misses the point that these are an attempt to describe programming in English, so the terms used are imprecise. --Nate1481 13:30, 25 January 2007 (UTC) P.S. I'm sure a misdefinition of 'human' appears as the plot in one story.

We have an entire section devoted to Alternative definitions of "human" in the Laws. The article also already notes that "Liar!", the first story to invoke Law Number One, hinges upon the difficulty of defining "harm". Anville 17:54, 25 January 2007 (UTC)

[edit] The logical nature of the Laws (how about a flowchart?)

Indeed, the Laws are not about ethical behavior at all, but procedure; they are operational parameters. This is why they cannot be removed without replacing them with something else. The machine brain must have some logical framework within which to function. This is also true of the computer with which you are reading this; it can arrive at situations wherein it either has no logical recourse, and hangs, or checks itself and falls back upon an alternate logic path to either abort the offending process, or bring the entire system to a halt ("panic", "BSOD", etc.) to avoid a potentially more disastrous situation. All of this is based upon procedural logic defined by the structure of the software. This is the nature of the Three Laws. They are not ideology; they are a flowchart. Come to think of it, a flowchart of the Laws would make a nice addition to this article!
überRegenbogen (talk) 13:08, 25 February 2008 (UTC)
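
For what it's worth, that "flowchart" reading can be sketched in a few lines of code. This is a toy illustration only: the priority ordering of the checks is taken from the Laws as quoted in the article, while the Situation fields and the action strings are invented here.

 # Toy sketch of the Laws as a priority-ordered procedure rather than an ethical theory.
 # Only the ordering of the checks comes from the Three Laws; everything else is invented.
 class Situation:
     def __init__(self, human_in_danger=False, pending_order=None,
                  order_would_harm_human=False, self_in_danger=False):
         self.human_in_danger = human_in_danger
         self.pending_order = pending_order
         self.order_would_harm_human = order_would_harm_human
         self.self_in_danger = self_in_danger

 def next_action(s):
     if s.human_in_danger:                                 # First Law, including the inaction clause
         return "act to protect the human"
     if s.pending_order and not s.order_would_harm_human:  # Second Law, gated by the First
         return s.pending_order
     if s.self_in_danger:                                  # Third Law, gated by the First and Second
         return "act to protect self"
     return "idle"                                         # no branch of the flowchart applies

 # An order outranks self-preservation (compare the Runaround discussion above).
 print(next_action(Situation(pending_order="fetch the selenium", self_in_danger=True)))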

[edit] I, Robot film section not NPOV?

In the Laws in film section, a negative review of the movie "I, Robot" is cited with little other discussion of how faithfully the film follows the laws. It's a pretty bombastic section, to say the least, and its inclusion seems fairly biased against the film. Is this one critic being chosen as an authority or representative on the issue? If not, the article could do with its removal. -- Exitmoose 03:24, 30 January 2007 (UTC)

The film is extremely unfaithful to the theory behind the laws. I take your point and it could possibly do with expansion; the quote seems to sum things up very well, but including this as a footnote and having a less militant style in line would be appropriate. I imagine it has stayed like this as many Asimov fans were very irritated by the film. --Nate1481 09:32, 30 January 2007 (UTC)
> The film is extremely unfaithful to the theory behind the laws.
I disagree. The main computer has come up with its own version of the 0th law. It must protect humanity, and the way it concludes it can protect humanity best is to direct/control it. If it has to kill some humans in doing it, that is because the 1st law is subservient to the 0th law. —The preceding unsigned comment was added by 66.167.148.198 (talk) 17:31:45, August 19, 2007 (UTC)
The film does have its own interpretation of the 0th Law, and it is one that makes sense. Once you place that kind of power in the robots' hands, there's no telling what they will do with it. So, if we ever do invent such robots, I suggest we leave out any notions of a 0th law.--RLent 20:31, 15 October 2007 (UTC)
The 0th law required R. Daneel Olivaw to get a new brain to implement fully, as while he perceived the need he could not act on it because the 1st law prevented it. Even after this it still required that a minimum of harm come to individual humans (and that was on the scale of a galaxy full), i.e. putting multiple humans at risk by crashing/fighting for control of a car when there was another option is not true to the theory. In the books a robot forced to break the 1st law, for example through inability to prevent harm, was usually a write-off. It is also unfaithful to the original concept of robot stories that weren't about robots turning on their makers. --Nate1481( t/c) 10:10, 16 October 2007 (UTC)
The film is not necessarily bad. The gripe is that it is not I, Robot, and should not have been named so. This is, however, beyond the scope of this article—which is, after all, about the Laws, and not directly about I, Robot. The review in question is not about the Laws (and—in its brief mention of them—makes some of the same mistaken assumptions about their nature that we've seen frequently on this talk page). It does cross the line as to what belongs in the article. The film itself only barely belongs in the article. The largely irrelevant review of it is going too far, and ought to be removed. (The preceding comment about the film's divergence from the work that it is arbitrarily named after, whilst equally irrelevant, is brief enough that I (granted, as an Asimov fan) am willing to live with it.) :)
überRegenbogen (talk) 13:52, 25 February 2008 (UTC)
Opinions on the film aside, I have moved the quote into a ref so the info is still there but doesn't break up and dominate the text.--Nate1481(t/c) 15:08, 25 February 2008 (UTC)


I think the film brings up a good point: what if a robot grows so powerful in mind and brute force that it can start to interpret AND enforce its own vision of the 3 laws, such as protecting humans from themselves and keeping them home? I think that point deserves to be mentioned in the article. 193.185.55.253 (talk) 07:49, 19 March 2008 (UTC)

[edit] Robot Visions

This book includes an essay where Asimov discusses the 3 laws of tools; does anyone know which essay? I've included the ref (12, I think) to the book in general, but as an FA it should have more detail, and I can't find my copy at present to look it up. --Nate 11:39, 3 April 2007 (UTC)

[edit] The future of real robots

I find doubtful the idea that, if humans were to make robots, they would be made with such benevolent ideas as those put forward by Isaac Asimov's laws of robotics. Individual humans would probably, on the whole, err towards the side of caution and want to make robots obey the laws.

I propose that it is unlikely to be individual humans with everyday human ethics and morality who will be the entities driving the design of robots with the necessary intelligence to process such laws. Governments with strong incentives to use robots as weapons will likely be the first. The keenest edge on technology is that technology designed for warfare. Over $1000Bn was earmarked in 2004 for a decade of weapons technology funding. http://www.washingtonpost.com/wp-dyn/articles/A32689-2004Jun10.html .

As of early 2007, new laws of robotics are being drafted with somewhat different ethics to those proposed by Isaac Asimov. http://www.theregister.co.uk/2007/04/13/i_robowarrior/ . So although we can indulge ourselves in the fantasy and science fiction of I, Robot, let's be under no illusion: it is fantasy, and sadly not real life. To put it very bluntly, I believe the first machines able to process such laws will probably be built with the goal of extending their master's dominion, and to kill if it helps further that goal. Nick R Hill 21:10, 15 April 2007 (UTC)


[edit] humans disagree

What's a robot to do if two humans give it two orders that contradict one another? The 2nd law says it has to listen to any order given to it by any human (or by any qualified personnel or whatever; that doesn't matter), but the two persons could disagree. Example:

       person1:"robot do the dishes"
       person2:"robot do not do the dishes"

Pretty basic example, but this problem in the laws could have some much bigger consequences if the persons were to disagree on something bigger than dishes. —Preceding unsigned comment added by 207.61.78.21 (talk) 17:42, 14 October 2007 (UTC)

A human would be in a similar bind in this situation. Presumably, the robot, like a human, would have some means of deciding which order to obey. For example, if person1 was superior to person2, the robot might say "I am sorry, but I have instructions to do the dishes." Or if it were ambiguous, the robot might request more information, for example "I have instructions from person1 to do the dishes. Do you wish to override these instructions?" Just like with humans, a robot would presumably not give equal weight to commands from different people.--RLent 20:37, 15 October 2007 (UTC)

If you order a robot to not obey the order of someone, you're basically ordering it to not listen to the 2nd law. Which means your order is basically in contradiction with itself: it can't not follow that order or it would violate the 2nd law, but it can't follow it because that would violate the 2nd law. If these laws are supposed to supersede everything else the robot experiences in its operation, simply telling it wouldn't really fix the problem. The problem is the law doesn't say anything about anyone being superior (even if it did, you could have two people on the same level), so if the robot ever gets an order to disregard an order it's still stuck in a loop. So it's not going to do anything... which could be yet another violation... you know what, the thing's just going to end up in failure mode. That's what I think, unless you can come up with something different. —Preceding unsigned comment added by 207.61.78.21 (talk • contribs)

The books discuss 'potentials' of the laws; a casually given 'request' would be overruled by a direct imperative order. If both were given with equal importance, and it had been made clear to the robot who was senior, then it would probably apologise and say that the other order took precedence; if there was no hierarchy to follow, then the robot would probably ask for clarification and ask to be explicitly ordered to follow a different instruction. These robots are relatively advanced, so they won't lock up on a minor issue like this; two humans shouting the odds over what would do more harm to a human could potentially cause failure. --Nate1481( t/c) 11:16, 17 October 2007 (UTC)
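
To give a rough picture of what that weighing of 'potentials' might look like, here is a toy model; the weights, the tie-breaking rule, and all the names below are invented for illustration, and Asimov never gives any formula.

 # Toy model of conflicting-order 'potentials': a direct imperative outweighs a casual
 # request, seniority scales the weight, and equal potentials make the robot ask for
 # clarification. The numbers and rules are invented; Asimov gives no formula.
 def potential(order):
     tone_weight = {"request": 1.0, "imperative": 2.0}[order["tone"]]
     return tone_weight * order.get("seniority", 1.0)

 def resolve(orders):
     ranked = sorted(orders, key=potential, reverse=True)
     if len(ranked) > 1 and potential(ranked[0]) == potential(ranked[1]):
         return "please clarify which order I should follow"
     return ranked[0]["text"]

 orders = [
     {"text": "do the dishes", "tone": "request"},
     {"text": "do not do the dishes", "tone": "imperative"},
 ]
 print(resolve(orders))  # the direct imperative wins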

[edit] Real world application

It is well within the reach of the current state of the art to enable military drones such as the US "Predator", as used in Afghanistan and Iraq, to fire their weapons automatically. The drone already finds and "locks on" to its target without any (human) operator intervention. I wonder if these Laws are the reason why the US military presently does not allow such a level of autonomy - the "fire button" is always under the control of the operators back at the base. Roger 09:10, 18 October 2007 (UTC)

[edit] Ultimately, should The Laws apply to computers or not?

Apparently yes, based on the above discussion and on the IEEE papers by Roger Clarke:

 'Asimov's laws of robotics: Implications for information technology.'
 Part 1: IEEE Computer, Dec 1993, p53–61. 
 Part 2: IEEE Computer, Jan 1994, p57–66. 

If they should, then it's a sad fact that apparently Bill Gates never read the Asimov texts; otherwise Microsoft's products, in compliance with the Second Law, would obey their owners. --AVM (talk) 00:11, 22 November 2007 (UTC)

I've seen people push for such a law, but as a question of professional ethics, rather than as something hardwired into computers - we don't have the AI for any of the laws to be meaningfully interpreted by the computer in real time, but there's no reason programmers can't follow them in advance. [2] -MBlume (talk) 10:47, 22 November 2007 (UTC)
(Note that the Laws are not about ethics.)
Microsoft products do obey their owners. But that's not you. Read your EULA. You have license to use the product; but you do not own it. Microsoft are the owners. (And you legally agreed to that situation. Sickening, eh?)
überRegenbogen (talk) 14:04, 25 February 2008 (UTC)

[edit] with folded hands...

A story that proceeds from the logical consequences of having an Asimov robot society, especially one with the Zeroth Law, is "With Folded Hands" by Jack Williamson (pub. 1947). Rating: Five Planets.

This story ends the human race with a whimper.

Check it out, it should connect here.

Sean —Preceding unsigned comment added by Seanearlyaug (talk • contribs) 00:28, 29 January 2008 (UTC)

I don't believe we have to reference every story with the three laws of robotics -- Banime 13:03, 5 March 2008

[edit] RoboCop

Just a question that has probably been asked in the past: why is there no mention of the three rules governing RoboCop's behavior? It appears to me that they were created with Asimov's laws in mind.--Jeremy ( Blah blah...) 03:25, 17 May 2008 (UTC)