
ACCURACY OF PROBABILITY JUDGMENTS 141

childbirth taken together).

They noted that the probability scale is limited at the top. It does not allow people to go beyond 100%, so any error may tend to make people give a spuriously large number of 100% judgments. To test this theory, they asked subjects to express their judgments as odds instead of percentages. It then became impossible to indicate 100% confidence, for that would correspond to infinite odds. This helped a little, but the basic effect was still present.
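The relation between the two response scales can be made concrete. As a small illustration (my own, not code from the study), a probability p corresponds to odds of p/(1 − p) to 1, so a judgment of 100% has no finite odds equivalent:

```python
def prob_to_odds(p):
    """Convert a probability to 'X to 1' odds in favor.

    A judgment of 0.98 corresponds to odds of 49 to 1; a judgment
    of 1.0 would require infinite odds, which cannot be written down.
    """
    if not 0.0 <= p < 1.0:
        raise ValueError("odds are finite only for probabilities below 1")
    return p / (1.0 - p)
```

For example, prob_to_odds(0.5) returns 1.0, the "even odds" of a coin flip, while values approaching 1.0 produce arbitrarily large odds.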

Another possible explanation of biased confidence judgments is that people have little idea what probability means: “People’s inability to assess appropriately a probability of .80 may be no more surprising than the difficulty they might have in estimating brightness in candles or temperature in degrees Fahrenheit. Degrees of certainty are often used in everyday speech (as are references to temperature), but they are seldom expressed numerically nor is the opportunity to validate them often available” (Fischhoff et al., 1977, p. 553).

This argument does not explain all the results. Even if people do not have a good feeling for what “80% confidence” means, they must have a good idea of what 100% confidence means: It clearly means absolute certainty — no chance at all of being incorrect — yet these extreme expressions of confidence are the judgments that show the most bias.

In addition to the finding of overconfidence at 100%, another finding suggests that overconfidence is not merely a result of misuse of the probability scale. People themselves are willing to act on their judgments. Fischhoff and his colleagues asked the subjects whether they would be willing to play a game. After making the confidence judgments (still using odds for their estimates), the subjects were told (p. 558):

Look at your answer sheet. Find the questions where you estimated the odds of your being correct as 50 to 1 or greater.... We’ll count how many times your answers to these questions were wrong. Since a wrong answer in the face of such high certainty would be surprising, we’ll call these wrong answers ‘your surprises.’

The researcher then explained:

I have a bag of poker chips in front of me. There are 100 white chips and 2 red chips in the bag. If I reach in and randomly select a chip, the odds that I will select a white chip are 100 to 2, or 50 to 1, just like the odds that your ‘50 to 1’ answers are correct. For every ‘50 to 1 or greater’ answer you gave, I’ll draw a chip out of the bag.... Since drawing a red chip is unlikely, every red chip I draw can be considered ‘my surprise.’ Every time you are surprised by a wrong answer ..., you pay me $1. Every time I am surprised by a red chip, I’ll pay you $1.

Of course, since the subjects’ confidence was usually greater than 50 to 1, they stood to come out ahead if their estimates were well calibrated. Of forty-two subjects
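The arithmetic behind “they stood to come out ahead” can be sketched explicitly. The illustration below (my own, not code from the study) computes the subject’s expected gain per flagged answer: the researcher’s red-chip probability is 2/102, so a subject whose ‘50 to 1’ answers are right exactly 50 times in 51 breaks even, and any higher actual accuracy yields a profit:

```python
def expected_gain_per_question(p_correct):
    """Subject's expected dollar gain per answer flagged at 50:1 or better.

    The subject pays $1 when a flagged answer is wrong; the researcher
    pays $1 when a red chip (2 of the 102 chips) is drawn.
    """
    p_red = 2 / 102          # researcher's "surprise" probability
    p_wrong = 1 - p_correct  # subject's "surprise" probability
    return p_red - p_wrong
```

At an actual accuracy of 50/51 the game is fair; accuracy above that favors the subject, while overconfidence — flagging answers at 50:1 that are right less often than 50 times in 51 — favors the researcher.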
