In his book, Ethics: A Very Short Introduction, Simon Blackburn takes the reader on a semi-structured tour of various ethical topics. He tackles a variety of bad ideas that have made their way into the ethical arena and spends the majority of the book focused on putting them down. Blackburn mostly refrains, however, from developing or advocating any particular ethical theory.
Surprisingly, given the title, the book is not overly friendly to the uninitiated. The reader is often expected to already be familiar with major ideas, figures, and schools of thought in ethics and philosophy. While in the beginning Blackburn does a good job of explicitly motivating why ethical systems are important, by spending the bulk of the work focused on flawed systems, the book might discourage individuals looking for an ethical system to live by. Blackburn does drop a few hints at what he thinks a good ethic might look like, but sadly it seems to be some sort of democratic socialism.
In his book, A Spontaneous Order: The Capitalist Case for a Stateless Society, Chase Rachels does an excellent job conveying insights from both libertarianism and economics. He uses clear explanations of basic concepts and persuasive examples for applications. He relentlessly identifies aggression as the root cause of society’s problems, and the state as the primary source of aggression. Most importantly, the book is permeated by a Rothbardian hatred of the state, which will make it an enjoyable read for any ancap.
Rachels makes frequent use of long passages quoted from other works. Thankfully these are drawn from some of the best sources on libertarianism and economics.
In the television series Star Trek, characters are often challenged with new and interesting ethical dilemmas. One of the best such challenges occurs during the episode “Tuvix” from the second season of Star Trek: Voyager.
Libertarianism is a system for resolving conflict. In other words, it is an ethical system. Libertarianism simply tells you not to commit crimes like theft and murder. So, it only applies to how you interact with other people, and even then only sets some bare minimum of acceptable behavior.
This invariably leads people to ridicule libertarianism because it does not give any guidance on activities that are not crimes. Should you donate to charity? Libertarianism doesn’t say. People who don’t like libertarianism phrase this as, “libertarianism does not support charity,” which is technically true but very misleading. Many libertarians give to charity, but they do not do it because they are libertarian. They have other codes of behavior that motivate them.
These other codes are called moral systems. They help people decide what is good and what is bad. For example, Jainism says that drinking alcohol is bad. Like libertarianism, a moral code might dictate how to interact with others. On the other hand, moral codes can also deal with how to behave when you are all alone.
Dudley Do-Right fails to commit a crime in “The Disloyal Canadians”
How do intentions factor into ethical analysis? Can one be forgiven for doing something evil if one intended to do something good? Deontological libertarians only care about whether the NAP is violated, so intentions do not appear to be relevant at first glance: all that matters is who is responsible for conflict.
So if someone attempts to do something evil, but ends up doing something not-evil, then from a libertarian perspective that is okay. If you try to build a death ray, shoot your neighbor with it, and unintentionally cure his cancer then you have probably not done anything unethical, even though you tried to. Similarly, if you try to do something peaceful, like give someone a massage, but you accidentally kill them then you have unintentionally done something unethical.
So intentions are not sufficient to determine ethical outcomes, and cannot be used as an excuse for some crime. Honor killing is still murder. However, intentions can affect who is responsible for a crime and thus indirectly affect an ethical conclusion under certain circumstances.
A common attack on libertarianism is that it prohibits certain behaviors that seem to make sense from a utilitarian point of view. For example, if you could save your village from King Kong just by giving the beast one of the young women who live there, that might seem like a good idea, especially if the alternative is that everyone dies. So, while it might be evil to sacrifice her to the monster, maybe it is a good thing to do since you end up saving everyone else.
To fully appreciate this kind of argument, it is necessary to understand that the idea of evil is an objective quality of human interaction, while the idea of good is a subjective quality of any kind of behavior. Whether something is evil or not-evil can be defined in such a way that everyone can agree on what is evil and what is not. So the town saviour in our example could recognize that it is evil to sacrifice a young woman, but he might think that it is a good thing to do. There is no contradiction here because evil does not mean “very bad”. In fact, whether behavior is evil is totally independent of whether it is good or bad.
Just as Ayn Rand and Murray Rothbard might disagree on whether it is good or bad to smoke cigarettes, they would both agree that it is not evil. In the same way, anarchists and minarchists agree that stealing is evil, but anarchists believe that all taxes are bad and minarchists think that some low level of taxation is good.
As machines begin to take on more executive functions, the question of ethics has appropriately been raised. Who is responsible if a self-driving car runs over a mailbox? In the 1940s, Isaac Asimov conceived of a solution where machines would be imbued with rules to prevent them from behaving badly. Those rules were known as the Three Laws of Robotics and are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These rules form a plausible ethical system for robots, but even Asimov knew they would be insufficient. He wrote a number of stories showing how the laws could break down in his book, I, Robot. The problems with the rules are ambiguity and the possibility of internal contradictions. In the stories, poorly constructed rules for guiding behavior led robots to commit all manner of misdeeds.
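The kind of breakdown Asimov dramatized can be sketched mechanically. The hypothetical Python sketch below (all names and scenarios are illustrative, not drawn from Asimov's stories) encodes each law as a simple veto on a proposed action. Because this naive encoding drops the precedence clauses ("except where such orders would conflict with the First Law"), a dilemma in which every available option harms someone leaves the robot with no permissible action at all:

```python
def choose(actions, laws):
    """Return the first action no law vetoes, or None if the rules deadlock."""
    for action in actions:
        if all(law(action) for law in laws):
            return action
    return None

# Each law is a predicate that vetoes an action; this flat encoding
# deliberately omits the precedence clauses of the original laws.
laws = [
    lambda a: not a["harms_human"],     # First Law (simplified)
    lambda a: a["obeys_order"],         # Second Law (simplified)
    lambda a: not a["destroys_robot"],  # Third Law (simplified)
]

# A dilemma: intervening harms a human, but so does standing by,
# and standing by also disobeys an order.
actions = [
    {"name": "intervene", "harms_human": True, "obeys_order": True,  "destroys_robot": False},
    {"name": "stand_by",  "harms_human": True, "obeys_order": False, "destroys_robot": False},
]

print(choose(actions, laws))  # prints None: every option violates some law
```

With the precedence clauses restored, the robot could at least rank the bad options; without them, the rule system simply has no answer, which is one flavor of the internal contradiction the stories explore.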
Inconsistent rules plague not only Asimov’s fictional world, but the real world as well.