
Thinking and Deciding


BIASES IN MORAL JUDGMENT?

Second, intuitions that oppose producing the best overall consequences might actually lead to worse consequences (Baron, 1994, 1998). Those people who are concerned with why the outcomes of human decisions are not as good as possible might find it useful to know that some of the explanation lies in our moral intuitions. Utilitarians will ask why anyone should suffer harm or forgo benefit because of someone else’s moral intuitions. Perhaps others can provide an answer, in which case utilitarianism will not turn out to be the ultimate normative theory.

Can intuitions be values?

Sometimes people say that an intuition is itself a value or goal, so these biases are not really biases at all, even in utilitarian terms, because such values count as part of people’s utility. For example, people say that they personally find it worse to harm others through an act than through an omission. They say that this is one of their values. So, when we carry out a utilitarian calculation of costs and benefits, we must count the individual’s disutility arising from performing the harmful act. The problem is how we distinguish a value from a deontological moral intuition. Such a moral intuition is a kind of opinion about what should be done, not a judgment about the consequences.

This issue has many manifestations. One is the question of whether we should count the anti-Semite’s attitude toward Jews as a value, or the racist’s attitude toward blacks. It may seem that the values of those who will not inflict harm to prevent greater harm are quite different from these. From the inside, though, the values of the racist may feel much the same. Racists may think of their values as moral ones, even though those who bear the brunt of these values, and most other people, certainly do not see them as moral.

A more acceptable example is the intuitions that people have about allocation of resources. Suppose that a government health program has decided not to pay for bone-marrow transplants for certain types of cancer, because the transplants are expensive and very rarely effective. Paying for such transplants would require denying other people services that are more effective, even at preventing death. An analysis of utilities supports this decision. But public attitudes go against this analysis and want the transplants paid for, even at the expense of more beneficial services to others. If the government decides to give in to these attitudes (because, for example, it sees itself as democratic), then some people will get marrow transplants and others will be hurt, perhaps even die, as a result of the denial of other services. In this case, a moral opinion or value has caused harm to some people and benefit to others, but the benefit is, we have assumed, less than the harm. We cannot justify the harm by pointing to greater benefits. The harm results from people’s values or intuitions.

What about moralistic values (p. 396)? Some people are offended by nudity (for example, at private beaches), homosexuality, or women in public with bare faces or legs. They want the government to ban these things. Again, they have a value that seems moral to them but actually goes against the values of others. Should we count these moralistic values as something worth considering in a utilitarian calculation?
