As machines begin to take on more executive functions, the question of ethics has appropriately been raised. Who is responsible if a self-driving car runs over a mailbox? In the 1940s, Isaac Asimov conceived of a solution where machines would be imbued with rules to prevent them from behaving badly. Those rules were known as the Three Laws of Robotics and are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
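The priority ordering among the three laws can be made concrete with a small sketch. This is a hypothetical illustration, not anything from Asimov: the `Action` fields and the `permitted` helper are invented here to show how lower-numbered laws override higher ones.

```python
# Hypothetical sketch: the Three Laws as an ordered rule check,
# where lower-numbered laws override higher-numbered ones.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would this action injure a human?
    allows_human_harm: bool  # would inaction here let a human come to harm?
    ordered_by_human: bool   # was this action ordered by a human?
    endangers_robot: bool    # does this action risk the robot itself?

def permitted(action: Action) -> bool:
    # First Law: no harm to humans, by action or inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders (any First Law conflict was caught above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two.
    return not action.endangers_robot

# An ordered action that harms a human is still forbidden:
order = Action(harms_human=True, allows_human_harm=False,
               ordered_by_human=True, endangers_robot=False)
print(permitted(order))  # False
```

Even in this toy form, the fragility is visible: everything hinges on how the boolean questions are answered, which is exactly where ambiguity creeps in.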
These rules form a plausible ethical system for robots, but even Asimov knew they would be insufficient. In the stories collected in his book I, Robot, he showed how the laws could break down. The problems with the rules are ambiguity and the possibility of internal contradictions, and in the stories these poorly constructed rules for guiding behavior led robots to commit all manner of misdeeds.
Inconsistent rules plague not only Asimov’s fictional world but the real world as well. People struggle to find solutions for extreme hypothetical scenarios that push the limits of their ethical intuition, and real-world contradictions, such as having separate ethical rules and laws for government employees and for regular individuals, lead to outrageous and unending crimes.
Fortunately, there is a solution to both of these problems. An ethical system has been developed that, if implemented, would ensure not only the peaceful coexistence of humans and robots in the future, but also the peaceful coexistence of humans and humans in the present. This ethical system is called libertarianism.
Instead of three laws, libertarianism is based on a single principle called the non-aggression principle, or NAP. The NAP simply states that an individual should not cause conflict to occur. Causing conflict might mean starting fights, provoking others into fights, stealing things, and so on. What makes the NAP superior to the First Law of Robotics is that it uses praxeology to take a very abstract view of behavior. Thus, while it does not give specific insight into whether any particular course of action is ethical, it gives a general rule that can be applied to every situation. Humans, and eventually machines, can then derive more specific rules like “do not murder” from the NAP. These derived rules would constitute a libertarian legal system.
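The derivation step described above can be sketched in code. This is a minimal, hypothetical illustration: the `initiates_conflict` test and the dictionary fields are invented here to show the structure of deriving specific prohibitions from one abstract principle, not a real formalization of the NAP.

```python
# Hypothetical sketch: one abstract principle, with specific rules
# derived as special cases of it. Field names are illustrative only.
def initiates_conflict(act: dict) -> bool:
    # The NAP asks a single abstract question: does this act start a
    # conflict over another person or their property?
    return act.get("uses_force", False) and not act.get("consented", False)

# Specific rules fall out as instances of the general test.
def is_murder(act: dict) -> bool:
    return act.get("kind") == "kill" and initiates_conflict(act)

def is_theft(act: dict) -> bool:
    return act.get("kind") == "take" and initiates_conflict(act)

print(is_theft({"kind": "take", "uses_force": True, "consented": False}))  # True
print(is_theft({"kind": "take", "uses_force": True, "consented": True}))   # False (a gift)
```

The design point is the inversion relative to the Three Laws: instead of enumerating prohibitions up front, every specific rule is checked against the same single predicate, so new cases never require new axioms.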
This system has not been fully fleshed out yet, but fortunately there are still a few decades until it is needed to immunize strong AI against evil. Unfortunately, natural intelligence has been in need of such medicine for a long time, so the sooner it is available, the better.