

consumer products across stores. The price variance increased markedly with price level, but was fairly constant when expressed as a percentage of price.

Often reference point effects underlie sunk-cost fallacies and failures to treat opportunity costs as being real. As an illustration, consider the following actual example offered by Richard Thaler:

    Mr. R bought a case of good wine in the late 50's for about $5 a bottle. A few years later his wine merchant offered to buy the wine back for $100 a bottle. He refused, although he never paid more than $35 for a bottle of wine [1980, p. 43].

Although such behavior could be rationalized through income effects and transaction costs, it probably involves a psychological difference between opportunity costs and actual out-of-pocket costs.

Finally, there is a fifth major factor that underlies EU violations, namely the psychology of probability judgments (Hogarth, 1975a). We shall briefly touch on three aspects: (1) the impact of objective (or stated) probabilities on choice, (2) quantitative expressions of degrees of belief through subjective probabilities, and (3) probabilistic reasoning, particularly inference.

A general finding of subjective expected utility research is that subjective probabilities relate non-linearly to objective ones (e.g., Edwards, 1953; 1954a). Typically, the subjective probability curves overweight low probabilities and underweight high ones (Wayne Lee, 1971, p. 61). For example, Menahem Yaari (1965) explained the acceptance of actuarially unfair gambles as resulting from optimism regarding low-probability events, as opposed to convexities in the utility function. Although the evidence is inconclusive as to the general nature of probability transformations, particularly with respect to their stability across tasks and their dependence on outcomes (Richard Rosett, 1971), R. W. Marks (1951), Francis Irwin (1953), and Slovic (1966) found that subjective probabilities tend to be higher as outcomes become more desirable (i.e., a type of wishful thinking). Furthermore, subjective "probabilities" often violate mathematical properties of probability (Kahneman and Tversky, 1979). In such cases we refer to them as decision weights w(p), as noted earlier.
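To make the claimed shape of such probability transformations concrete, the following Python sketch is offered from outside the article: it assumes one hypothetical inverse-S functional form for w(p), with an arbitrary curvature parameter gamma = 0.6, purely to illustrate what overweighting of low and underweighting of high stated probabilities means numerically.

    # Illustrative sketch only: a hypothetical inverse-S decision-weight
    # function w(p); the functional form and gamma are assumptions chosen
    # for demonstration, not the article's own specification.
    def w(p, gamma=0.6):
        """Map an objective probability p to a decision weight w(p)."""
        return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

    for p in (0.01, 0.10, 0.50, 0.90, 0.99):
        print(f"p = {p:4.2f}  ->  w(p) = {w(p):.3f}")
    # Low probabilities come out above p (overweighting) and high
    # probabilities below p (underweighting), so w is non-linear in p.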

In expressing degrees of confidence probabilistically, other biases are encountered. A robust result is that people are generally overconfident. For instance, if one were to check all instances where a person claimed 90 percent confidence about the occurrence of events, typically that person would be correct only about 80 percent of the time (Lichtenstein et al., 1981). This same bias manifests itself in subjective confidence intervals, which are typically too tight (M. Alpert and Raiffa, 1981).
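As a rough illustration of how such calibration is checked, the sketch below groups hypothetical (stated confidence, outcome) judgments by confidence level and compares each group's empirical hit rate with the stated level; the data values are invented, and only the bookkeeping mirrors the kind of tally described above.

    from collections import defaultdict

    # Hypothetical data: (stated confidence, did the event occur?) pairs.
    # Real calibration studies collect thousands of such judgments; these
    # few values are made up for illustration.
    judgments = [(0.9, True), (0.9, True), (0.9, False), (0.9, True),
                 (0.9, False), (0.7, True), (0.7, False), (0.7, True)]

    hits = defaultdict(int)
    totals = defaultdict(int)
    for confidence, occurred in judgments:
        totals[confidence] += 1
        hits[confidence] += occurred

    for confidence in sorted(totals):
        rate = hits[confidence] / totals[confidence]
        # Overconfidence shows up as a hit rate below the stated confidence.
        print(f"stated {confidence:.0%}: correct {rate:.0%} of the time")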

Many biases in subjective probability judgments stem from the type of heuristics (i.e., simplifying rules) people use to estimate relative likelihoods. For instance, a doctor may judge the likelihood of a patient having disease A versus B solely on the basis of the similarity of the symptoms to textbook stereotypes of these diseases. This so-called representativeness heuristic, however, ignores possible differences in the a priori probabilities of these diseases (Kahneman and Tversky, 1972).
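A small worked example, with made-up numbers, shows what neglecting the a priori probabilities can do: even when the symptoms fit disease A far better than disease B, Bayes' rule can still favor B if A is rare enough.

    # Hypothetical numbers, purely to illustrate base-rate neglect.
    prior_A, prior_B = 0.01, 0.99    # disease A is rare a priori
    lik_A, lik_B = 0.80, 0.10        # symptoms resemble A's textbook picture

    # Bayes' rule: P(disease | symptoms) is proportional to
    # P(symptoms | disease) * P(disease).
    post_A = lik_A * prior_A / (lik_A * prior_A + lik_B * prior_B)
    post_B = 1.0 - post_A
    print(f"P(A | symptoms) = {post_A:.2f}, P(B | symptoms) = {post_B:.2f}")
    # Judging by similarity alone (0.80 vs 0.10) points to A, yet the
    # posterior favors B (about 0.93) because A's prior is so low.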

Another common heuristic is estimation on the basis of availability (Tversky and Kahneman, 1973). In judging the chances of dying from a car accident versus lung cancer, people may base their estimates solely on the frequencies with which they hear of them. Due to unrepresentative news coverage in favor of car accidents, these estimates will be systematically biased (Lichtenstein et al., 1978).

Finally, people suffer from a serious hindsight bias resulting in suboptimal learning. Events that did (not) happen appear in retrospect more (less) likely than they did before the outcome was known. Baruch Fischhoff (1975) explains this phenome-
