Thinking and Deciding

MORAL JUDGMENT AND CHOICE

    is natural to everything to keep itself in ‘being,’ as far as possible. And yet, though proceeding from a good intention, an act may be rendered unlawful, if it be out of proportion to the end.

Other deontological rules might specify just treatment in terms of balancing or retribution: Reward good behavior and punish bad behavior, even if there is no good consequence. Still others concern truth telling or promise keeping. The important aspect of these rules is that they are at least somewhat independent of consequences. A blanket deontological rule concerning truth telling would prohibit white lies to save people's feelings. The rule could be tailored to allow exceptions, but the exceptions would be described in terms of the situation, not the expected consequences. Many laws are of this form, although some legal distinctions are based on intended or expected consequences (e.g., manslaughter vs. murder).

I shall suggest that many deontological rules are the result of cognitive biases. Because they are moral, we endorse them for others. They are not just heuristics in the usual sense. We become committed to them. I shall shortly discuss some possible biases that correspond to deontological rules.

Rule utilitarianism

Some philosophers have resolved the conflict between utilitarianism and deontological theories by arguing that utilitarianism does not apply to specific acts (as we would assume using utility theory), but rather to the moral rules that we adopt and try to live by. This approach is called rule utilitarianism, as distinct from act utilitarianism.
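The difference between the two views can be made concrete with a toy calculation. The sketch below is not from the text; the scenario, utility numbers, and function names are invented for illustration. It scores each act separately, as an act utilitarian would, and scores whole rules by the total utility of following them everywhere, as a rule utilitarian would.

    # Toy contrast between act and rule utilitarianism.
    # All situations and utility numbers are invented for this sketch.
    situations = [
        {"name": "routine report", "truth": 10, "lie": -5},
        {"name": "routine report", "truth": 10, "lie": -5},
        {"name": "spare feelings", "truth": -2, "lie": 3},  # lying is locally better here
    ]

    def act_utilitarian_total(sits):
        """Act utilitarianism: evaluate each act on its own and pick the best one."""
        return sum(max(s["truth"], s["lie"]) for s in sits)

    def rule_utilitarian_best(sits):
        """Rule utilitarianism: adopt the rule whose general observance yields the most
        utility, and follow it even where a particular act under it is not the best act."""
        rules = {"always tell the truth": "truth", "always lie": "lie"}
        totals = {name: sum(s[opt] for s in sits) for name, opt in rules.items()}
        best = max(totals, key=totals.get)
        return best, totals[best]

    print("act-by-act total utility:", act_utilitarian_total(situations))        # 23
    print("best rule and its total utility:", rule_utilitarian_best(situations)) # ('always tell the truth', 18)

In this toy case the act-by-act total is higher, but only because the utility of every act is assumed to be known exactly; the rule utilitarian's point, developed in the next paragraph, is that in practice we often cannot tell when breaking the rule would really maximize utility.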

Rule utilitarianism takes as its starting point the fact that most conventional moral rules seem designed to maximize utility, and we might do best to follow them even when we think that breaking them would maximize utility. For example, rights of autonomy, such as the right to refuse medical treatment, are justified by the fact that individuals generally know best how to achieve their own goals. We take away the rights of autonomy when this is probably false, as with young children and patients with cognitive impairment. In many cases, by this argument, the assumption that people know what is good for them is not true, but it is true more often than not. Because it is impossible to ascertain (for adults outside of institutions) just when it is true, we maximize utility by assuming that it is generally true of everyone. This argument implies that autonomy is not a fundamental moral principle but rather a generally well-justified intuition, suitable as a guide to daily life but not as the fundamental basis of a moral theory.
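The reasoning here is an expected-utility argument under uncertainty about who the exceptions are. A minimal numerical sketch, with assumed proportions and utilities (none of them from the text):

    # Assumed numbers only, to illustrate the structure of the argument.
    p_knows_own_good = 0.8   # assumed fraction of adults who do know what is good for them
    u_respect_right = 1.0    # utility of respecting a choice when the person does know
    u_respect_wrong = -0.5   # utility of respecting a choice when the person does not
    u_override = 0.2         # assumed average utility of overriding everyone's choices

    # Expected utility per person of the blanket rule "respect autonomy,"
    # given that we cannot tell in advance who the exceptions are.
    eu_respect = p_knows_own_good * u_respect_right + (1 - p_knows_own_good) * u_respect_wrong

    print(f"respect autonomy: {eu_respect:.2f}")  # 0.70
    print(f"override choices: {u_override:.2f}")  # 0.20
    # The blanket rule wins although it is wrong for some individuals,
    # because the assumption behind it is true more often than not.

On these assumptions the blanket autonomy rule maximizes utility, which is the sense in which the passage treats autonomy as a generally well-justified intuition rather than a fundamental principle.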

The idea that following rules can maximize utility also helps us understand the importance of moral motives and goals. Take the classic dilemma in which a cruel dictator offers you the following choice: Either you shoot one of his political prisoners for him, or he will shoot ten others, as well as the one. There is no way out, and the choice you make will not be known to anyone else. Clearly, the utilitarian solution here is to shoot one prisoner. One death is not as bad as eleven; but many
