What is ethical?
Ethics: the principles of right and wrong that are accepted by an individual or a social group (from WordNet)
This is a good definition, for it suggests a useful test for determining whether something is ethical. Based on this definition, we can evaluate whether something is ethical by asking a group if it is right. This works when there is consensus, but it is unclear what to do when group members disagree - and these are precisely the situations we are interested in. One simple answer is to suggest that there is no group ethics at all - a position at the base of moral relativism.
Moral relativists hold that an unsharable, personal, and aesthetic moral core lies at the foundation of personal choices. They deny the possibility of a shared morality at all, except by convention.
At first glance, this leaves our debaters in a quandary. How can we compare ethical valuations, when they are based on aesthetics? In practice, relativism merely changes the name of what is being debated. While a group may not have a shared morality, this does not prevent them from creating and reasoning about workable ethical conventions. This set of conventions is essentially the same as what non-relativists call ethics.
Either way, we're left with determining a procedure whereby we can evaluate an option by an ethical standard. Two basic mechanisms at our disposal are deontology and utilitarianism.
Deontological ethics systems start by defining a set of actions that are wrong and/or a set that are right. For example, we might use the rule "killing is wrong" as a part of our system. Alternatively, the rules can be more abstract like "do unto others as you would have them do unto you". Whether or not our system contains such abstract rules, we need not define rules for all possible situations. Instead, we can allow for generalization.
Most countries' legal systems can be thought of as deontological rule systems. Laws typically demonstrate a primary benefit of deontological systems: as a system matures it typically becomes easier to evaluate common actions. For example, one generally need not consider all of the consequences, motivations, and factors involved in a killing to say it is unethical. When killing is considered ethical (perhaps in self-defense), other rules can be set up as branches of the general rule. We are left with a tidy, easy-to-use decision tree.
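The decision-tree structure described above can be sketched in code: general rules with narrower exceptions branching off them. This is a minimal, illustrative sketch - the specific rules and the "self-defense" exception are placeholder examples, not a real legal or ethical system.

```python
# A minimal sketch of a deontological decision tree: general rules,
# with narrower exceptions branching off them. The rules here are
# illustrative placeholders, not a real ethical system.

def evaluate(action, context):
    """Return 'wrong' or 'permitted' by checking general rules first,
    then any exceptions that branch off them."""
    if action == "killing":
        # General rule: killing is wrong...
        if context.get("self_defense"):
            return "permitted"  # ...unless a narrower exception applies.
        return "wrong"
    if action == "stealing":
        return "wrong"
    # No rule matched: a mature system would generalize from the
    # nearest rule; here that is crudely modeled as a default.
    return "permitted"

print(evaluate("killing", {}))                      # wrong
print(evaluate("killing", {"self_defense": True}))  # permitted
```

Note how the common cases are cheap to evaluate: no weighing of consequences is needed, which is exactly the benefit the text describes.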
There are still problems in setting up this decision tree - typically due to lack of consensus. Our primary tool for resolving disputes here is generalization, but it is not always enough. For example, much of the current debate on abortion is based on competing generalizations:
-Abortion is like killing.
-Abortion is not like killing.
-Abortion resembles other health decisions that a woman is free to make.
How do we resolve this problem, which seems to have stalled the debate?
The first general approach is to order rules in a hierarchy. For example, a system might say that it is alright to steal a loaf of bread for a starving child. In such a system, there is likely a rule that vilifies stealing - but another, higher rule, that calls for the preservation of life.
In setting up such a hierarchy, however, we're left with the question of how to rank our rules. However we decide to do this, we will create a system that is not purely deontological. If we decide that killing is worse than stealing for some reason, then that reason is our real ethical system. Our deontology, in that case, is more of an index of some deeper ethical calculus.
Another approach to rescue deontology is to propose an extensible "master rule" which covers all situations. The most famous rule in this vein is Kant's Categorical Imperative, which basically tests if behavior is in accordance with a universalizable maxim. The linked article gives a better explanation than I'm likely to muster here - do read it if you're interested. You may also want to read up on virtue ethics, which is another thing altogether but beyond the scope of a short article.
Perhaps the most natural way of evaluating actions is utilitarianism - the practice of judging options based on their expected consequences. In employing this method, we face the problem of how to judge these consequences. Is a consequence good if it makes people happy? Satisfied? Free? What is the goal of our ethical system? If this were a formula, what variable are we trying to maximize? For the sake of moving forward, I suggest that the clearest expression of what we're trying to maximize is "the degree to which individuals' preferences are met".
Our next problem is how to resolve situations where preferences collide; we have to decide how to "distribute" satisfaction. One can imagine a kind of market-based solution to this problem wherein everyone gets an amount of points to spend as they see fit to influence decisions. Under such a system, everyone's preferences are weighed and decisions are made that best fit those collective preferences. To what extent is this a complete ethical system?
Imagine (it won't be hard) that most people would prefer seeing Carrot Top beaten to death with a bag of walnuts. Under a pure preference market, Mr. Top's preference for not being beaten, however strong, cannot trump the collective preference of millions. Why, then, would few people consider this beating ethical, and how do we express our desire to avoid this kind of abuse? We need not only to maximize net satisfaction but also to place a lower limit on individual satisfaction; we want to make the worst case livable.
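The point-based market, amended with a satisfaction floor, can be sketched as follows. Everything here is invented for illustration - the ballot numbers, the floor value, and the option names are assumptions, not a worked-out theory of satisfaction.

```python
# A sketch of the point-based preference market with a satisfaction floor.
# Each person spends points across options (negative = opposed); we pick
# the option with the highest total, but discard any option that pushes
# someone below a minimum. All names and numbers are invented.

SATISFACTION_FLOOR = -50  # the worst case we are willing to impose on anyone

def choose(options, ballots):
    """options: list of option names.
    ballots: one dict per person, mapping option -> points spent.
    Returns the highest-scoring option that keeps everyone above the floor,
    or None if no option does."""
    viable = [
        opt for opt in options
        if all(ballot.get(opt, 0) >= SATISFACTION_FLOOR for ballot in ballots)
    ]
    if not viable:
        return None  # every option makes some worst case unlivable
    return max(viable, key=lambda opt: sum(b.get(opt, 0) for b in ballots))

ballots = [
    {"beating": 50, "no_beating": 0},    # many mild preferences...
    {"beating": 50, "no_beating": 0},
    {"beating": 50, "no_beating": 0},
    {"beating": -100, "no_beating": 5},  # ...vs. one very strong objection
]
print(choose(["beating", "no_beating"], ballots))  # no_beating
```

On raw totals, "beating" wins (150 - 100 = 50 points to 5), but the floor disqualifies it: the one person driven to -100 falls below the worst case we will tolerate. The floor, not net satisfaction, decides the outcome - which is the amendment argued for above.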
Taking this into account, and presuming the perfect system for distributing satisfaction, do we end up with a reasonable ethical system? I think we end up with an interesting system, but one very different from the ethical systems we see in practice. Why is this? In large part, it is because practical systems are limited by the fact that we can't know all the consequences of decisions.
This causes problems. If an ethical evaluation becomes too difficult, then we cannot really evaluate or sanction unethical behavior. A set of ethics need not be a true Nash equilibrium (people aren't gaming robots) to function, but an ethical system needs to be somewhat evolutionarily stable. To the extent that behaving unethically can realize rewards, the ethical convention breaks down. To combat this, stable ethical systems have a sense of "justice" - a sense that someone's preferences should only be expressed through ethical outlets.
A good system of ethics allows people to make decisions despite their inability to consider all consequences. To achieve this goal, ethical systems treat "actions" much differently than "omissions". There are lots of things you're not doing, while there are only a few things you are. Primarily, evaluating only actions serves to reduce the number of consequences that must be routinely considered. For omissions, people are typically held ethically responsible for only a limited set of things.
The modern Western ethical landscape is difficult to summarize - but here is an attempt:
For everyday decisions, people rely on deontological "rules of thumb" - most of which are based on law and traditional cultural norms. To a greater and greater extent, however, these norms are being challenged by, and evaluated against, appeals to utility. While the ethics of Christianity are still a powerful force in shaping ethics on a personal level, the wider group ethic is being defined by a few medium-term trends:
1. A greater focus on mundane costs and benefits in evaluating the actions of nations, including war.
2. Through environmentalism, an increased consciousness of the long-term consequences of collective behavior.
3. The exclusion of personal sexual behavior from ethical consideration. In general, there is a focus on "minding one's own business".
4. The melding of religious deontologies into a vague, general Oprah-ism of universal love and altruism with much less focus on any religion's particular rule set.
The common thread here is the gradual replacement of deontological ethics with systems based on utility.
Old taboos, from reproductive technology to profanity, fall when they interfere with preferences. This trend is not universal, but is clearly observable over the last century.
Ethics in Practice
In most matters, humans are prone to the errors of stubbornness and over-generalization. We take bad risks and we repeat mistakes. To avoid these errors in our ethical determinations, we need to approach these determinations in a coordinated manner. Hopefully this article, while only a starting point, has given some ideas on how this might be done.