As machines begin to take on more executive functions, the question of ethics has appropriately been raised. Who is responsible if a self-driving car runs over a mailbox? In the 1940s, Isaac Asimov conceived of a solution where machines would be imbued with rules to prevent them from behaving badly. Those rules were known as the Three Laws of Robotics and are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These rules form a plausible ethical system for robots, but even Asimov knew they would be insufficient. In *I, Robot*, he wrote a number of stories showing how the laws could break down. The problems with the rules are ambiguity and the possibility of internal contradictions. In the stories, poorly constructed rules for guiding behavior led robots to commit all manner of misdeeds.
Inconsistent rules plague not only Asimov's fictional world but the real world as well.