In his book, A Spontaneous Order: The Capitalist Case for a Stateless Society, Chase Rachels does an excellent job conveying insights from both libertarianism and economics. He uses clear explanations of basic concepts and persuasive examples for applications. He relentlessly identifies aggression as the root cause of society’s problems, and the state as the primary source of aggression. Most importantly, the book is permeated by a Rothbardian hatred of the state, which will make it an enjoyable read for any ancap.
Rachels makes frequent use of long passages quoted from other works. Thankfully, these are drawn from some of the best sources on libertarianism and economics.
In the television series Star Trek, characters are often challenged with new and interesting ethical dilemmas. One of the best such challenges occurs during the episode “Tuvix” from the second season of Star Trek: Voyager.
In Ancapistan, only things that violate the non-aggression principle are illegal. Normally assassination violates the NAP, but there are important exceptions. For example, suppose someone commits a crime so heinous that the private court system decides that his death would be appropriate recompense for the victim. However, before he is executed he escapes and moves to some state-controlled territory where nobody has any interest in bringing him to justice.
In this case, it is ethical to hire someone to go find him and kill him. So, Ancapistan would likely have assassination companies. In the long run it would not be a big market, but in the short run it might be big business.
Libertarianism says that people should not cause conflict. It wants everyone to get along. That’s why the non-aggression principle, which libertarianism is based on, is so simple. It does not tell you how to live your life. It just says not to cause problems in the lives of other people. This rule is simple in theory, but harder to follow in practice. That is not because libertarianism isn’t practical. It is. Rather, applying the non-aggression principle to everyday situations can be quite difficult.
The reason is that people do many different things each day. They make choices and take chances that can potentially affect the lives of other people. Trying to evaluate whether any particular thing you might do will cause conflict, and thus violate the NAP, could take a long time. Try doing that for everything you might do in a day and you won’t have time to do anything else.
As machines begin to take on more executive functions, the question of ethics has appropriately been raised. Who is responsible if a self-driving car runs over a mailbox? In the 1940s, Isaac Asimov conceived of a solution where machines would be imbued with rules to prevent them from behaving badly. Those rules were known as the Three Laws of Robotics and are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These rules form a plausible ethical system for robots, but even Asimov knew they would be insufficient. He wrote a number of stories showing how the laws could break down, collected in his book I, Robot. The problems with the rules are ambiguity and the possibility of internal contradictions. In the stories, poorly constructed rules for guiding behavior led robots to commit all manner of misdeeds.
Inconsistent rules plague not only Asimov’s fictional world, but the real world as well.