Saturday, November 19, 2011

Why Asimov's Three Laws of Robotics Won't Work

In science fiction, the Three Laws of Robotics are a set of three rules written by Isaac Asimov, which most robots that appear in his fiction must obey. Introduced in his 1942 short story "Runaround," the Laws state the following:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

If we could actually build robots intelligent enough to be self-aware, would these laws actually make sense? I propose not. Take the First Law. In the first place, how could the robot tell a human being from another robot that looked like a human being, or from a hologram of a human being? You might say, so what? As long as the robot cannot harm a human being or anything that resembles one, that is all to the good. But what if a humanoid robot or hologram and a real human being are both in danger? How would the robot know which one to save? For that matter, if two human beings are in danger at the same time, how does the robot know which one to save? (Note: this exact situation is shown in the movie I, Robot. In the movie the robot made the wrong choice.)
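To make the problem concrete, here is a toy sketch of the kind of decision routine the First Law would require. Everything in it (the Entity record, the choose_rescue_target function) is invented for illustration; the point is that the Law itself says nothing about how to classify a "human being" or how to break the tie between two of them.

    # Hypothetical sketch of the First-Law decision problem described above;
    # not a real robotics API. All names here are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Entity:
        looks_human: bool   # what the robot's sensors report
        is_human: bool      # ground truth the robot cannot directly observe
        in_danger: bool

    def choose_rescue_target(entities):
        """Pick one endangered 'human' to save.

        The Law says nothing about how to tell a real human from a humanoid
        robot or a hologram, nor which of two endangered humans to prefer,
        so this routine has to guess on both counts.
        """
        candidates = [e for e in entities if e.in_danger and e.looks_human]
        if not candidates:
            return None
        # Arbitrary tie-break: the Law gives no ordering, so take the first
        # candidate, which may well be the "wrong" choice.
        return candidates[0]

    scene = [
        Entity(looks_human=True, is_human=False, in_danger=True),  # humanoid robot
        Entity(looks_human=True, is_human=True, in_danger=True),   # real person
    ]
    print(choose_rescue_target(scene))  # may save the robot and let the person die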

For certain uses, a manufacturer would not want to apply the Laws in that order. For example, suppose the robots are to be used for military purposes. In this case, the Laws built into the robot might go something like this:

A robot must obey the orders given to it by its superior officer.

A robot must protect its own existence, and that of other soldier robots, except where doing so would conflict with the First Law.

A robot may only harm those human beings or robots designated as "the Enemy" by its superior officer, and only if they are not under a flag of truce, surrendering, or designated as "Prisoners of War."
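Here is a toy illustration, not anything from the book, of how the same kinds of constraints produce a very different machine once their priority order changes. The Action record and the two rule functions are a rough simplification invented for this sketch.

    # Toy comparison of Asimov's ordering versus the reordered military
    # version above. All names are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool
        ordered_by_superior: bool
        endangers_robot: bool
        target_is_designated_enemy: bool = False

    def civilian_permitted(a: Action) -> bool:
        # Asimov's ordering: the First Law outranks orders and self-preservation.
        if a.harms_human:
            return False
        if a.ordered_by_superior:
            return True
        return not a.endangers_robot

    def military_permitted(a: Action) -> bool:
        # Reordered version: orders come first, and the prohibition on harm
        # now applies only to targets not designated as the Enemy.
        if a.ordered_by_superior:
            return not (a.harms_human and not a.target_is_designated_enemy)
        return not a.endangers_robot

    strike = Action(harms_human=True, ordered_by_superior=True,
                    endangers_robot=True, target_is_designated_enemy=True)
    print(civilian_permitted(strike))   # False: the First Law blocks it
    print(military_permitted(strike))   # True: orders and targeting rules allow it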

In my novel, The Isaac Project, the military's desire to change the Three Laws provides part of the conflict.

One error that Isaac Asimov made was to assume that the intelligence of a robot would somehow reside in its electronic circuitry. We now know that the intelligence of a robot would more likely be in its software. This changes the situation quite a bit, since software can have errors that are not always detected during testing. It can also be modified. Depending upon how the software is installed, it might be vulnerable to viruses, worms, and other malicious tricks by unscrupulous hackers, just as our computers are now.
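To put the point another way: if the Laws live in software, they are just data and code, and anything with write access can change them. This is a minimal sketch of that idea; the SafetyGovernor class and its flag are invented for illustration.

    # Minimal sketch: a "Law" implemented in software is only a flag that
    # a buggy update or malicious code could flip. Invented for illustration.
    class SafetyGovernor:
        def __init__(self):
            # The "Laws" are ordinary data, loaded at startup like any config.
            self.laws = {"never_harm_humans": True}

        def permits(self, action_harms_human: bool) -> bool:
            if self.laws.get("never_harm_humans") and action_harms_human:
                return False
            return True

    governor = SafetyGovernor()
    print(governor.permits(action_harms_human=True))   # False, as intended

    # Nothing in hardware enforces the rule: anything with access to this
    # object can simply turn it off.
    governor.laws["never_harm_humans"] = False
    print(governor.permits(action_harms_human=True))   # True: the "Law" is gone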
