
Utility Theory

neoclassical theory. Recognition of this role was the result of the so-called marginal utility revolution of the 1870s, in which Carl Menger, W. Stanley Jevons, Francis Ysidro Edgeworth, Léon Walras, and other leading "marginalists" demonstrated that values/prices could be founded on utility.

The standard theory of utility starts with a preference order, ≾, typically taken to be a complete preorder over the outcome space, ϑ. Here y ≾ x means that x is (weakly) preferred to y, and if in addition ¬(x ≾ y), x is strictly preferred. Granting certain topological assumptions, the preference order can be represented by a real-valued utility function, U: ϑ → R, in the sense that U(y) ≤ U(x) if and only if y ≾ x. If U represents ≾, then so does ϕ ∘ U, for any monotone function ϕ on the real numbers. Thus, utility is an ordinal scale (Krantz et al. 1971).
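The ordinal character of utility can be illustrated numerically (a minimal sketch; the particular utility function and transform are invented for illustration): any strictly increasing transform of a utility function induces exactly the same ranking of outcomes.

```python
import math

# Hypothetical utility function over scalar outcomes (illustrative choice).
def U(x):
    return math.log(1 + x)

# A monotone (strictly increasing) transform phi on the reals.
def phi(u):
    return 3 * u + math.exp(u)

outcomes = [0.5, 1.0, 2.0, 5.0]

# The rankings induced by U and by phi ∘ U coincide: utility is ordinal.
rank_U = sorted(outcomes, key=U)
rank_phiU = sorted(outcomes, key=lambda x: phi(U(x)))
assert rank_U == rank_phiU
```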

Under UNCERTAINTY, the relevant preference comparison is over prospects rather than outcomes. One can extend the utility-function representation to prospects by taking expectations with respect to the utility for constituent outcomes. Write [F, p; F′] to denote the prospect formed by combining prospects F and F′ with probabilities p and 1 − p, respectively. If F(ω) denotes the probability of outcome ω in prospect F, then

[F, p; F′](ω) ≡ pF(ω) + (1 − p)F′(ω).
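Representing a finite prospect as a mapping from outcomes to probabilities (the dictionary encoding here is ours, not the article's), the mixture [F, p; F′] is computed pointwise from this definition:

```python
def mix(F, p, Fp):
    """Return the prospect [F, p; F'] as a dict mapping outcomes to probabilities."""
    outcomes = set(F) | set(Fp)
    return {w: p * F.get(w, 0.0) + (1 - p) * Fp.get(w, 0.0) for w in outcomes}

# Two illustrative prospects (outcome labels are invented).
F  = {"win": 0.6, "lose": 0.4}
Fp = {"win": 0.1, "draw": 0.9}
M = mix(F, 0.5, Fp)

# The mixture is itself a probability distribution.
assert abs(sum(M.values()) - 1.0) < 1e-12
```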

The independence axiom of utility theory states that if F′ ≾ F″, then for any prospect F and probability p,

[F′, p; F] ≾ [F″, p; F].

In other words, preference is decomposable according to the prospect's exclusive possibilities.

Given the properties of an order relation, the independence axiom, and an innocuous continuity condition, preference for a prospect can be reduced to the expected value of the utilities of the outcomes in its probability distribution. The expected utility Û of a prospect F is defined by

Û(F) ≡ E_F[U] = Σ_{ω∈ϑ} U(ω)F(ω).

For the continuous case, replace the sum by an appropriate integral and interpret F as a probability density function. Because expectation is generally not invariant with respect to monotone transformations, the measure of utility for the uncertain case must be cardinal rather than ordinal. As with preferences over outcomes, the utility-function representation is not unique: if U is a utility function representing ≾ and ϕ a positive linear (affine) function on the reals, then ϕ ∘ U also represents ≾.
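A small numerical sketch of the definition (the outcome values and probabilities are invented for illustration): expected utility sums U(ω)F(ω) over outcomes, and a positive affine transform of U leaves the ranking of prospects unchanged, while an arbitrary monotone transform generally would not.

```python
def expected_utility(U, F):
    # Û(F) = Σ_ω U(ω) F(ω) for a finite prospect F: outcome -> probability.
    return sum(U(w) * p for w, p in F.items())

U = lambda x: x ** 0.5          # illustrative concave utility
F1 = {100: 0.5, 0: 0.5}         # risky prospect: 100 or 0, fifty-fifty
F2 = {49: 1.0}                  # sure thing

# Ranking of the two prospects under U ...
better_U = expected_utility(U, F1) < expected_utility(U, F2)
# ... is preserved by any positive affine transform phi(u) = a*u + b, a > 0.
V = lambda x: 2 * U(x) + 7
better_V = expected_utility(V, F1) < expected_utility(V, F2)
assert better_U == better_V
```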

Frank Plumpton Ramsey (1964/1926) was the first to derive expected utility from axioms on preferences and belief. The concept achieved prominence in the 1940s, when John VON NEUMANN and Oskar Morgenstern presented an axiomatization in their seminal volume on GAME THEORY (von Neumann and Morgenstern 1953). (Indeed, many still refer to "vN-M utility.") Savage (1972) presented what is now considered the definitive mathematical argument for expected utility from the Bayesian perspective.

Although it stands as the cornerstone of accepted decision theory, the doctrine is not without its critics. Allais (1953) presented a compelling early example in which most individuals would make choices violating the expectation principle. Some have accounted for this by expanding the outcome description to include determinants of regret (see Bell 1982), whereas others (particularly researchers in behavioral DECISION MAKING) have constructed alternate preference theories (Kahneman and TVERSKY's 1979 prospect theory) to account for this as well as other phenomena. Among those tracing the observed deviations to the premises, the independence axiom has been the greatest source of controversy. Although the dispute centers primarily around its descriptive validity, some also question its normative status. See Machina (1987, 1989) for a review of alternate approaches and discussion of descriptive and normative issues.

Behavioral models typically posit more about preferences than that they obey the expected utility axioms. One of the most important qualitative properties is risk aversion, the tendency to prefer the expected value of a prospect to the prospect itself. For scalar outcomes, the risk aversion function (Pratt 1964),

r(x) = −U″(x) / U′(x),

is the standard measure of this tendency. Properties of the risk aversion measure (e.g., whether it is constant, proportional, or decreasing) correspond to analytical forms for utility functions (Keeney and Raiffa 1976), or to stochastic dominance tests for decision making (Fishburn and Vickson 1978).
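For instance (a sketch with an assumed exponential utility function, not an example from the article), the Pratt measure can be evaluated by central finite differences; for U(x) = 1 − e^(−ax) it is constant and equal to a, the constant-absolute-risk-aversion case.

```python
import math

def pratt_risk_aversion(U, x, h=1e-4):
    # r(x) = -U''(x) / U'(x), estimated via central finite differences.
    U1 = (U(x + h) - U(x - h)) / (2 * h)
    U2 = (U(x + h) - 2 * U(x) + U(x - h)) / (h * h)
    return -U2 / U1

a = 0.5
U = lambda x: 1 - math.exp(-a * x)   # exponential (CARA) utility, assumed form

# The measure is (numerically) constant in x, equal to a.
for x in (0.5, 1.0, 3.0):
    assert abs(pratt_risk_aversion(U, x) - a) < 1e-4
```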

When outcomes are multiattribute (nonscalar), the outcome space is typically too large to consider specifying preferences without imposing some structure on the utility function. Independence concepts for preferences (Bacchus and Grove 1996; Gorman 1968; Keeney and Raiffa 1976)—analogous to those for probability—define conditions under which preferences for some attributes are invariant with respect to others. Such conditions lead to separability of the multiattribute utility function into a combination of subutility functions of lower dimensionality.

Modeling risk aversion, attribute independence, and other utility properties is part of the domain of decision analysis (Raiffa 1968; Watson and Buede 1987), the methodology of applied decision theory. Decision analysts typically construct preference models by asking decision makers to make hypothetical choices (presumably easier than the original decision), and combining these with analytical assumptions to constrain the form of a utility function.

Designers of artificial agents must also specify preferences for their artifacts. Until relatively recently, Artificial Intelligence PLANNING techniques have generally been limited to goal predicates, binary indicators of an outcome state's acceptability. Recently, however, decision-theoretic methods have become increasingly popular, and many developers encode utility functions in their systems. Some researchers have attempted to combine concepts from utility theory and KNOWLEDGE REPRESENTATION to develop flexible preference models suitable for artificial agents (Bacchus and Grove 1996; Haddawy and Hanks 1992; Wellman and Doyle 1991), but this work is still at an early stage of development.
