Three Laws Of Robotics

IsaacAsimov's Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

-- StevenNewton

IsaacAsimov wrote wonderful stories and novels around these laws. Oh, and he invented the three laws. The ThreeLawsOfRobotics are first spelled out in the story "Runaround."

ArthurCeeClarke reports that Asimov said that he first heard the laws from John Campbell. This may be a joke by Asimov, or possibly Clarke.


The 0th Law

In the chapter "The Duel" in Robots and Empire, Asimov first presents another law, which he calls the Zeroth Law of Robotics, and adjusts the other ones accordingly:

0. A robot may not harm humanity, or through inaction allow humanity to come to harm.

1. A robot may not harm a human, or through inaction allow a human to come to harm, unless this interferes with the zeroth law.

2. A robot must obey orders given to it by a human being unless such orders interfere with the zeroth or first laws.

3. A robot must defend its own existence unless such defense interferes with the zeroth, first or second laws.

Does the zeroth law apply to programs in general? If so, there are so many programs whose heads must roll. The pile of spaghetti that I'm currently working on is certainly damaging my humanity.

The problem I see with the Zeroth law is that it's based on groupthink - it's a way of saying, "I have a right to do to you anything I choose to do, ''for the good of humanity.''" It's the rationalization for all collectivist systems, which have proven to lead to tyranny and mass murder '''''EVERY SINGLE TIME!''''' - Rich Grise, Radical Libertarian Loon

The problem with the Zeroth (if you're using it), First, and Second Laws is, of course, perception and definition. Define "human", and "harm", and figure out how to write a program to perceive both with reasonably high accuracy, and these laws become implementable.

It's not the only problem. Another is that often any course of action allows some people to come to harm who could have been helped. You can pretend to get around this by amending the Laws to talk about the "greatest good of the greatest number" or something of the kind, but that doesn't really help, because of another problem: you often can't know enough about the consequences of your actions to tell whom they will help and whom they will harm.

Difficult subject, ethics.

It's supposed to be probabilistic. The robot takes the course of action that should violate the laws the least. For instance (allow me to steal this example from one of Asimov's novels), if one man is trying to kill ten, and the robot can only stop him by killing him, the robot will kill the man.
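
A minimal sketch of that least-violation reading, in Python (nothing from the books; the action names, harm estimates, and law weights below are invented purely for illustration):

 # Score each candidate action by a weighted sum of expected law
 # violations and pick the lowest.  All numbers here are made up.
 W_FIRST, W_SECOND, W_THIRD = 1_000_000, 1_000, 1   # First Law dominates

 def penalty(action):
     """Weighted sum of expected law violations for one action."""
     return (W_FIRST * action["expected_human_harm"]
             + W_SECOND * action["orders_disobeyed"]
             + W_THIRD * action["self_damage"])

 candidates = [
     # Do nothing: the one man goes on to kill the ten.
     {"name": "stand by", "expected_human_harm": 10,
      "orders_disobeyed": 0, "self_damage": 0.0},
     # Stop him with lethal force: one human harmed instead of ten.
     {"name": "stop the attacker", "expected_human_harm": 1,
      "orders_disobeyed": 0, "self_damage": 0.2},
 ]

 best = min(candidates, key=penalty)
 print(best["name"])   # -> "stop the attacker"

The huge gaps between the weights are what make the First Law effectively trump the Second, and the Second trump the Third.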

The Will Smith movie I, Robot (not the book) hints at the Zeroth Law.


Proposed 4th Laws

What I wonder about is Asimov's Fourth Law, imprinted in all humans, which prevents them from erasing the first three laws from robots. That's the one I have trouble seeing implemented.

Someone (not me) came up with a 4th law:

4. A robot must practice proper d[/m]ental hygiene as long as this does not conflict with the First, Second, or Third Law. [ :-) can be subsumed under 3. -fp]

Pardon me? Assuming this robot has any contact with humanity, I think it's fair to say this can be subsumed under the first law! -- AdamBerger

What... a robot should have good dental hygiene? Or should it be mental hygiene?

If the robot is Gladia's "husband" R. Jander, he'd better have good dental hygiene!

As mentioned below, the laws are supposed to be so hardwired into the robot's brain that it would be completely inoperable before one failed. Extensive testing of each one before release apparently guaranteed this.


Proposed Modification

Asimov could have moved the inaction clause of the first law to a separate law, something like:

1. A robot may not harm a human

2. A robot must obey humans

3. A robot must not through inaction allow harm to a human

To use the chauffeur example, you could explain to the robot that if he doesn't drive you to work, you will drive yourself. The robot, knowing that it is a better driver than you are, would thereby drive you where you wanted to go.

But if you did this, you'd open situations where a robot is an accomplice to murder.

''This is simply incorrect (it has been analyzed to death, decades ago). For instance, the above proposed rewrite does not cover "a robot must not, through inaction, allow an order to be disobeyed, so long as that inaction does not violate the first law". Yet that condition is in fact a consequence of the original laws. No shooting from the hip, please. :-)''


The Ten Ethical Laws of Robotics

See http://www.ethicalvalues.com/ for the "The Ten Ethical Laws of Robotics," an updated, expanded effort to codify virtue for the use of AI.


First Law of Computing

User interface pioneer JefRaskin has coined the First Law of Computing: "No computer shall harm a user's data, or through inaction allow harm to come to a user's data."


Asimov's Laws and ArtificialIntelligence

This is a good set of rules for programming any intelligent system. Actual implementation of the three laws is at least a few years away, however. -- MikeGodfrey

It is a safe set of rules for programming an intelligent system, but it does have its problems. Asimov went into exquisite detail on these issues in I, Robot and a plethora of short stories. The chief problem is that the First Law forbids a robot from letting a human take normal human risks.

Consider, for instance, an Asimovian chauffeur. Such a machine would not obey your order to drive you to work, because that would be putting you on the road, in harm's way. No matter how good the chauffeur is, it would put you at risk of all those incompetent drunk drivers out there. The only way to get it to drive you would be to threaten to drive yourself there (which would be much more dangerous!). Even then, it might consider physically restraining you to keep you from being on the dangerous street.

Part of the difficulty is that Asimovian robots are often "dumber" than humans, and thus not able to see the long-term safety of a choice (going out means making money, and poverty is a killer). However, one major lesson of the ThreeLawsOfRobotics is that we humans often trade off safety for other gains.

That being said, remember that Asimov wrote these stories when everyone else's ideas of robots were mechanical monsters sent to destroy humanity. These laws are great for hog-tying a potentially hostile AI.

-- AnonymousDonor

Obviously you could implement the ThreeLawsOfRobotics on a sliding scale, much like any other fuzzy system. For example, getting a robot to drive you to work wouldn't be too hard given a sufficiently strong order, even if the robot judged that the risks of the drive were not outweighed by whether or not you lose your job. If these Laws were ever implemented as an ArtificialIntelligenceParadigm, robots built on them would probably be able to distinguish degrees of harm. -- MikeGodfrey

Having recently re-read I, Robot, I can say that Asimov did consider them a sliding scale. In "Little Lost Robot", a robot is told emphatically to "get lost", and that outweighs all but the most direct threats to human life, making it hard to find.
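
A toy sketch of that sliding scale (the gains and numbers are invented, not Asimov's): an order carries a second-law "potential" proportional to how emphatically it was given, and it is overridden only when the first-law potential of a perceived threat exceeds it.

 FIRST_LAW_GAIN = 100.0    # threats to humans weigh far more than orders
 SECOND_LAW_GAIN = 1.0

 def obeys_order(order_emphasis, perceived_threat_to_human):
     """True while the order's second-law pull outweighs the first-law pull."""
     order_potential = SECOND_LAW_GAIN * order_emphasis
     harm_potential = FIRST_LAW_GAIN * perceived_threat_to_human
     return order_potential >= harm_potential

 # An emphatic "get lost!" (emphasis 90) survives vague worries about harm
 # (threat 0.3) but not a direct threat to a human life (threat 1.0).
 print(obeys_order(90, 0.3))   # True  -> the robot stays lost
 print(obeys_order(90, 1.0))   # False -> a direct threat overrides the order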


Robots, Society, and Positronic Brains

Asimov's laws are not courtroom-style laws but engineering or design laws for a fictional civilization with the technology to implement them.

Note that the technology predicated is far in advance of today's, let alone that of the era when Asimov started writing about it. He doesn't give any clues to the processing power of the robots' brains, but says that they are not electronic but positronic; the fictional engineers have set aside the sluggish electron-based computing devices that we still use. There is a large risk that I don't think Asimov explores. A positronic brain must be interfaced at some point to machinery made of normal matter, but a single stray positron colliding with the normal matter outside of the robot's brain would liberate a large amount of energy in the form of gamma and other radiation. A robot's brain would have to incorporate considerable safeguards against accident if we wanted to avoid a nuclear-sized explosion the first time one was crushed in a collapsing building. So we can conclude that a positronic processor is considerably more powerful than a comparably-sized electronic one, or the risks would not be worthwhile. :-) Or maybe some simple way of handling anti-matter is just around the corner.
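
For what it's worth, a back-of-envelope check on that annihilation worry (assuming, purely for illustration, that a brain contains a gram of positrons; nothing in the stories gives a figure):

 C = 3.0e8                # speed of light, m/s
 KILOTON_TNT = 4.184e12   # joules per kiloton of TNT

 positron_mass = 1.0e-3   # kg of positrons in the brain (pure assumption)
 # Each positron annihilates with an electron, releasing both rest energies.
 energy = 2 * positron_mass * C**2

 print(f"{energy:.1e} J, about {energy / KILOTON_TNT:.0f} kilotons of TNT")
 # -> roughly 1.8e14 J, on the order of 40 kilotons: nuclear-sized indeed.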

A large company, US Robots and Mechanical Men, owns positronic brain technology. It appears to monopolize the design and building of robots, and has made the commendable decision to make the three laws a fundamental component of the architecture of positronic brains.

It's important to realize that by the time Asimov sets the earliest stories, the three laws were essentially set in stone. Nobody could make a functioning robot without incorporating the three laws. Defective or badly designed robots were fail-safe in the sense that they would be the equivalent of comatose or dead before they could act contrary to the laws.

Without the three laws, robots would not be accepted by the general public, and US Robots still had to work very hard to transfer the technology from industrial applications such as space exploration to domestic ones such as house servants.

By the way, this state of affairs is reportedly something Asimov aimed for in writing the stories. At the time he started writing about robots, science fiction was full of robots going mad, taking over the world, and so forth. Asimov reportedly wanted to write stories that were completely different and a bit less silly, perhaps.

We in present-day software engineering clearly have a way to go before we can approach anything like the world of the stories. Interestingly for Microsoft watchers, US Robots seems to be a pretty powerful monopoly - I don't think anyone else made positronic brains for some time, until the spread of humanity to the stars.


As interesting as the topic is, it is on the other hand well known that no rigid set of rules can cover all real world eventualities effectively. In real-world (not science-fictional) AI, this is in part called the "brittleness of expert system rules". In another interesting real world area, this is called the trade-off between justice (or mercy...the two sometimes being mutually exclusive) and the rule of law.

The rule of law is obviously easier to analyze. That's not to say that it can be, even in theory, made to work as an absolute. And Asimov's Robot stories explore precisely those difficulties. So Asimov's Laws are very interesting food for thought, but are not, and cannot, be made into perfectly effective absolutes. -- DougMerritt

