In the television series Star Trek, characters often confront new and interesting ethical dilemmas. One of the best such challenges occurs in the episode "Tuvix," from the second season of Star Trek: Voyager.
In Ancapistan, only things that violate the non-aggression principle (NAP) are illegal. Normally, assassination violates the NAP, but there are important exceptions. Suppose, for example, that someone commits a crime so heinous that the private court system decides his death would be appropriate recompense for the victim. Before he can be executed, however, he escapes to some state-controlled territory where nobody has any interest in bringing him to justice.
In this case, it is ethical to hire someone to go find him and kill him. So, Ancapistan would likely have assassination companies. In the long run it would not be a big market, but in the short run it might be big business.
Libertarianism says that people should not cause conflict; it wants everyone to get along. That is why the non-aggression principle, on which libertarianism is based, is so simple. It does not tell you how to live your life. It just says not to cause problems in the lives of other people. This rule is great in theory but hard to follow in practice. Not because libertarianism isn't practical; it is. Rather, applying the non-aggression principle to everyday situations can be quite difficult.
The reason is that people do many different things each day. They make choices and take chances that can potentially affect the lives of other people. Trying to evaluate whether any particular thing you might do will cause conflict, and thus violate the NAP, could take a long time. Try doing that for everything you might do in a day and you won’t have time to do anything else.
So how do we protect liberty without bringing life to a halt?
As machines begin to take on more executive functions, the question of ethics has appropriately been raised. Who is responsible if a self-driving car runs over a mailbox? In the 1940s, Isaac Asimov conceived of a solution where machines would be imbued with rules to prevent them from behaving badly. Those rules were known as the Three Laws of Robotics and are as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These rules form a plausible ethical system for robots, but even Asimov knew they would be insufficient. He wrote a number of stories showing how the laws could break down, collected in his book I, Robot. The problems with the rules are ambiguity and the possibility of internal contradictions. In the stories, poorly constructed rules for guiding behavior led robots to commit all manner of misdeeds.
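The kind of breakdown Asimov dramatized can be sketched mechanically. Here is a minimal Python illustration (the scenario, function names, and action labels are all my own invention, not from the stories) of a priority-ordered rule list that leaves an agent with no permitted action:

```python
# A naive, priority-ordered rule check in the spirit of Asimov's laws.
# The scenario and predicate names are invented for illustration.

def first_violation(action, laws):
    """Return the name of the highest-priority law the action violates,
    or None if the action violates no law."""
    for law in laws:  # laws are listed in priority order
        if law(action):
            return law.__name__
    return None

# Each "law" is a predicate that flags an action as a violation.
def harms_human(action):
    # Contrived dilemma: both of the available actions harm someone.
    return action in {"swerve_into_crowd", "stay_on_course"}

def disobeys_order(action):
    # Suppose a passenger ordered the vehicle to stay on course.
    return action == "swerve_into_crowd"

laws = [harms_human, disobeys_order]

# Every available action violates the first law, so a rule system
# like this deadlocks: there is no permitted choice.
for action in ("swerve_into_crowd", "stay_on_course"):
    print(action, "->", first_violation(action, laws))
```

The point of the sketch is not the particular rules but the structure: once every option trips some rule, a purely prohibitive rule system gives no guidance at all, which is one of the failure modes Asimov's stories explore.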
Inconsistent rules plague not only Asimov's fictional world but the real world as well.