MIT Encyclopedia of the Cognitive Sciences - Cryptome

Judgment Heuristics

…small samples would affirm research hypotheses, leading them to propose study designs with surprisingly low statistical power. Of course, well-trained scientists can calculate the correct value for power analyses. However, to do so, they must realize that their heuristic judgment is faulty. Systematic reviews have found a high rate of published studies with low statistical power, suggesting that practicing scientists often lack this intuition (Cohen 1962).

Kahneman and Tversky (1972) subsequently subsumed this tendency under the more general representativeness heuristic. Users of this rule assess the likelihood of an event by how well it captures the salient properties of the process producing it. Although sometimes useful, this heuristic will produce biases whenever features that determine likelihood are insufficiently salient (or when irrelevant features capture people's attention). As a result, predicting the behavior of people relying on representativeness requires both a substantive understanding of how they judge salience and a normative understanding of what features really matter. Bias arises when the two are misaligned, or when people apply appropriate rules ineffectively.

Sample size is one normatively relevant feature that tends to be neglected. A second is the population frequency of a behavior, when making predictions for a specific individual. People feel that the observed properties of the individual (sometimes called "individuating" or "case-specific" information) need to be represented in future events, even when those observations are not that robust (e.g., small sample, unreliable source).
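
As a hypothetical illustration of why that neglect matters, Bayes' rule in odds form shows how a population base rate tempers individuating evidence; the 30 percent base rate and 4:1 likelihood ratio below are invented for the example.

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = likelihood ratio * prior odds."""
    prior_odds = prior / (1 - prior)
    post_odds = likelihood_ratio * prior_odds
    return post_odds / (1 + post_odds)

# Individuating evidence favoring the hypothesis 4:1, base rate 30 percent:
print(f"{posterior(0.30, 4.0):.2f}")   # ~0.63, not the 0.80 the evidence alone implies
```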

Bias can also arise when normatively relevant features are recognized but misunderstood. Thus, people know that random processes should show variability, but expect too much of it. One familiar expression is the "gambler's fallacy," leading people to expect, say, a "head" coin flip after four tails, but not after four alternations of head-tail. An engaging example is the unwarranted perception that basketball players have a "hot hand," caused by not realizing how often such (unrandom-looking) streaks arise by chance (Gilovich, Vallone, and Tversky 1985).
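
How often do such streaks arise by chance? A small Monte Carlo sketch suggests that a run of four or more "hits" in twenty 50-percent shots is itself close to a coin flip; this is an illustration of the general claim, not the Gilovich, Vallone, and Tversky analysis, and the parameters are assumed.

```python
import random

def has_streak(n_shots, k, p=0.5):
    """True if a sequence of n Bernoulli(p) shots contains a run of k or more hits."""
    run = 0
    for _ in range(n_shots):
        run = run + 1 if random.random() < p else 0
        if run >= k:
            return True
    return False

random.seed(1)
trials = 100_000
frac = sum(has_streak(20, 4) for _ in range(trials)) / trials
print(f"P(run of 4+ hits in 20 shots) ~ {frac:.2f}")   # roughly 0.48
```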

In a sense, representativeness is a metaheuristic, a very general rule from which more specific ones are derived for particular situations. As a result, researchers need to predict how a heuristic will be used in order to generate testable predictions for people's judgments. Where those predictions fail, it may be that the heuristic was not used at all or that it was not used in that particular way.

Two other (meta)heuristics are availability and anchoring and adjustment. Reliance on availability means judging an event as likely to the extent that one can remember examples or imagine it happening. It can lead one astray when instances of an event are disproportionately (un)available in MEMORY. Reliance on anchoring and adjustment means estimating a quantity by thinking of why it might be larger or smaller than some initial value. Typically, people adjust too little, leaving them unduly "anchored" in that initial value, however arbitrarily it has been selected. Obviously, there are many ways in which examples can be produced, anchors selected, and adjustments made. The better these processes are understood, the sharper the predictions that can be made for heuristic-based judgments.

These seminal papers have produced a large research literature (Kahneman, Slovic, and Tversky 1982). Their influence can be traced to several converging factors (Dawes 1997; Jungermann 1983), including: (1) The initial demonstrations have proven quite robust, facilitating replications in new domains and the exploration of boundary conditions (e.g., Plous 1993). (2) The effects can be described in piquant ways, which present readers and investigators in a flattering light (able to catch others making mistakes). (3) The perspective fits the cognitive revolution's subtext of tracing human failures to unintended side effects of generally adaptive processes. (4) The heuristics operationalize Simon's (1957) notions of BOUNDED RATIONALITY in ways subject to experimental manipulation.

The heuristics-and-biases metaphor also provides an organizing theme for the broader literature on failures of human DECISION MAKING. For example, many studies have found people to be insensitive to the extent of their own knowledge (Keren 1991; Yates 1990). When this trend emerges as overconfidence, one contributor is the tendency to look for reasons supporting favored beliefs (Koriat, Lichtenstein, and Fischhoff 1980). Although that search is a sensible part of hypothesis testing, it can produce bias when done without a complementary sensitivity to disconfirming evidence (Fischhoff and Beyth-Marom 1983). Other studies have examined hindsight bias, the tendency to exaggerate the predictability of past events (or reported facts; Fischhoff 1975). One apparent source of that bias is automatically making sense of new information as it arrives. Such rapid updating should facilitate learning, at the price of obscuring how much has been learned. Underestimating what one had to learn may mean underestimating what one still has to learn, thereby promoting overconfidence.

Scientists working within this tradition have, naturally, worried about the generality of these behavioral patterns. One central concern has been whether laboratory results extend to high-stakes decisions, especially ones with experts working on familiar tasks. Unfortunately, it is not that easy to provide significant positive stakes (or threaten significant losses) or to create appropriate tasks for experts. Those studies that have been conducted suggest that stakes alone neither eliminate bias nor lead people, even experts, to abandon faulty judgments (Camerer 1995).

In addition to experimental evidence, there are anecdotal reports and systematic observations of real-world expert performance showing biases that can be attributed to using heuristics (Gilovich 1991; Mowen 1993). For example, overconfidence has been observed in the confidence assessments of particle physicists, demographers, and economists (Henrion and Fischhoff 1986). A noteworthy exception is weather forecasters, whose assessments of the probability of precipitation are remarkably accurate (e.g., it rains 70 percent of the times that they forecast a 70 percent chance of rain; Murphy and Winkler 1992).
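
A minimal sketch of the calibration check behind that claim, run on invented forecast records (the numbers are hypothetical, not Murphy and Winkler's data):

```python
from collections import defaultdict

def calibration(forecasts, outcomes):
    """For each stated probability, compare it with the observed
    frequency of the event among the occasions it was issued."""
    bins = defaultdict(lambda: [0, 0])          # forecast -> [events, occasions]
    for p, hit in zip(forecasts, outcomes):
        bins[p][0] += hit
        bins[p][1] += 1
    return {p: events / total for p, (events, total) in sorted(bins.items())}

# Hypothetical records: 1 = it rained on that occasion.
forecasts = [0.7] * 10 + [0.3] * 10
outcomes  = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0] + [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
print(calibration(forecasts, outcomes))   # {0.3: 0.3, 0.7: 0.7} -- well calibrated
```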

These experts make many judgments under conditions conducive to LEARNING: prompt, unambiguous feedback that rewards them for accuracy (rather than, say, for bravado or hedging). Thus, judgment of this kind may be a learnable cognitive skill. That process may involve using conventional heuristics more effectively or acquiring better ones.
