Self-Serving Reasoning
attributed failure to reach agreement to the rigidity of the other side. President Ronald
Reagan told reporters, "We came to Iceland to advance the cause of peace ... and
although we put on the table the most far-reaching arms control proposal in history,
the General Secretary rejected it." On the very same day, General Secretary Mikhail
Gorbachev stated: "I proposed an urgent meeting here because we had something to
propose ... the Americans came to this meeting empty-handed." Kramer (1994) cites
these leaders' memoirs as proof that these quotes are more than political representations:
they reflect the leaders' underlying egocentrism.
As we discussed in our review of the confirmation heuristic in Chapter 2, when
people encounter favorable information, they are likely to accept it uncritically. Negative
information, however, produces more critical and suspicious evaluation. Dawson,
Gilovich, and Regan (2002) nicely document our tendency to select standards of evidence
in self-serving ways. They note that it sounds completely reasonable to accept an
argument when the available data are consistent with it. On the other hand, it also
seems reasonable to require that the data be overwhelmingly supportive before accepting
an argument. Dawson, Gilovich, and Regan (2002) argue that when we want to believe
an argument, we tend to ask, "Can I believe this?" When we do not want to believe an
argument, we instead ask, "Must I believe this?"
Illustrating this phenomenon, Ditto and Lopez (1992) told their research participants
that they had to pick a colleague with whom they would work on a collaborative
project. Each participant was told to pick the more intelligent of two potential coworkers.
The participants were given information about the performances of the two
coworkers on several tasks and were told to review the information until they were satisfied
that they had picked the more intelligent partner. Participants were led to believe
that one of the two coworkers was friendly and helpful and that the other was rude and
inconsiderate. When the evidence seemed to suggest that the friendly coworker was the
smarter one, people stopped searching for information and quickly chose him. When
the evidence favored the jerk, however, people kept seeking more and more information,
hoping to be able to justify the choice they wanted to make.
Evidence for the automatic nature of biased perception comes from Balcetis and
Dunning (2006). They told participants that they would be taking a taste test of one of
two drinks standing before them: either (1) freshly squeezed orange juice or (2) a gelatinous,
chunky, green, foul-smelling, somewhat viscous concoction labeled as a veggie
smoothie. Which drink they would have to taste would be determined by the random
appearance of either a farm animal or a sea creature on a computer screen. For some
participants, seeing a farm animal meant that they had a veggie smoothie in their future;
for others, the sea creature had the same ominous significance. Participants were
then shown an ambiguous picture that had features of both a horse and a seal. Balcetis
and Dunning found that those who were hoping to see a farm animal saw only a horse
and never consciously registered the possibility of interpreting the same picture as a
seal, and vice versa. In other words, the filters and choices that drove their selective
perception occurred at an unconscious level.
If these biases occur at an unconscious level, then it ought to come as no surprise
that people are unaware of their own vulnerability to bias (Pronin, Gilovich, & Ross,
2004). Intelligent and well-intentioned people come to biased conclusions even while