68 Chapter 4: Framing and the Reversal of Preferences
Now consider an even nastier version of Russian Roulette in which the revolver
has two bullets in it. How much would you pay to remove one of the bullets,
reducing your chances of imminent death by 17 percent (from one-third to one-sixth)?
Most people would see that as a much less satisfying change, and consider
it to be less valuable than the certainty of reducing the chance of imminent death
to zero. This is true despite the fact that your probability of death is reduced by
the same amount in both instances.
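The arithmetic behind this comparison is easy to check. A minimal sketch in Python (the six-chamber revolver is the text's setup; the helper name is mine):

```python
from fractions import Fraction

def p_death(bullets: int) -> Fraction:
    """Chance of firing a live round from a six-chamber revolver."""
    return Fraction(bullets, 6)

# One-bullet game: removing the bullet takes the risk from 1/6 to 0.
to_certainty = p_death(1) - p_death(0)

# Two-bullet game: removing one bullet takes the risk from 1/3 to 1/6.
still_risky = p_death(2) - p_death(1)

print(to_certainty, still_risky)  # 1/6 1/6 -- the same absolute reduction
```

Both moves buy the same one-in-six reduction in the chance of death (about 17 percentage points), yet only the first feels worth a premium.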
Kahneman and Tversky (1979) were the first to document the human tendency to
underweight high-probability events (such as the 83 percent chance of living to tell
about your adventure playing the one-bullet version of Russian Roulette) but appropriately
weight events that are certain (such as the certainty of living to tell about the
zero-bullet version of the game). If an event has a probability of 1.0 or zero, we tend
to evaluate the event’s probability accurately. However, if the event has a high probability
(say, 83 percent), we tend to respond as the expected-utility framework would
predict for a probability lower than .83; that is, we underweight it. As a result, Slovic, Fischhoff,
and Lichtenstein (1982) observe that ‘‘any protective action that reduces the probability
of harm from, say, .01 to zero will be valued more highly than an action that
reduces the probability of the same harm from .02 to .01’’ (p. 24). In other words,
people value the creation of certainty more highly than an equally large shift in the level of
uncertainty.
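This pattern of treating certainty faithfully while underweighting high probabilities can be captured with a probability-weighting function. The following sketch uses the single-parameter form from Tversky and Kahneman's later (1992) cumulative prospect theory, which is not given in this passage; the parameter value 0.61 is their median estimate:

```python
def weight(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman (1992) probability-weighting function:
    w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# Certainty is weighted fully, but an 83 percent chance is underweighted:
print(weight(1.0))   # 1.0
print(weight(0.83))  # roughly 0.63, well below 0.83
```

The function passes through 0 and 1 exactly, which is why eliminating the last bullet (a move to certainty) feels categorically different from removing one bullet of two.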
Interestingly, the perception of certainty (that is, the perception that the probability
of an event is zero or 1.0) can be easily manipulated. Slovic, Fischhoff, and Lichtenstein
(1982) considered the best way to advertise a disaster insurance policy that covers
fire but not flood. The policy can be accurately advertised either as ‘‘full protection’’
against fire or as a reduction in the overall probability of loss from natural disasters.
The researchers found that the full-protection advertisement makes the policy most
attractive to potential buyers. Why? Because the full-protection option reduces perceived
uncertainty for loss from fire to zero, whereas the overall disaster policy reduces
uncertainty by some incremental amount to a value that is still above zero. The perceived
certainty that results from the full-protection framing of the advertisement has been
labeled ‘‘pseudocertainty’’ because it provides certainty regarding a subset of the relevant
uncertainties (Slovic, Fischhoff, & Lichtenstein, 1982).
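The objective equivalence of the two advertisements is easy to verify with made-up numbers. A sketch with hypothetical probabilities (the .01 figures are my assumption, not values from the study):

```python
# Hypothetical annual loss probabilities, assumed for illustration only:
p_fire, p_flood = 0.01, 0.01

# Framing A: "full protection against fire" -- fire risk drops to zero,
# flood risk remains.
residual_full_protection = 0.0 + p_flood

# Framing B: "reduces overall disaster risk" -- total risk falls
# from 0.02 to 0.01. Both framings describe the same policy.
residual_overall = (p_fire + p_flood) - p_fire

# Same policy, same objective residual risk; only the frame differs.
assert residual_full_protection == residual_overall
```

The pseudocertainty effect is precisely that buyers respond to Framing A as if it delivered more protection, even though the residual risk is identical.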
Slovic, Fischhoff, and Lichtenstein (1982) provided empirical evidence of the
strength of the pseudocertainty effect in the context of disease vaccination. The researchers
created two versions of a questionnaire. Version 1 described a disease that
was expected to afflict 20 percent of the population. Research participants in this
condition were asked if they would receive a vaccine that protected half of the
individuals vaccinated. Version 2 described two mutually exclusive and equally probable
strains of the disease, each of which was expected to afflict 10 percent of the
population. In that case, vaccination was said to give complete protection (certainty)
against one strain and no protection against the other. Would you take the vaccine
described in Version 1? What about the vaccine described in Version 2? In either
case, the vaccine would objectively reduce one’s overall risk from 20 percent to
10 percent. Slovic, Fischhoff, and Lichtenstein found that Version 2 (pseudocertainty)
was more appealing than Version 1 (probabilistic). Some 57 percent of