
Lecture 5. Choice under Uncertainty II

1. Stochastic Dominance

Comparison of payoff distributions

First-order stochastic dominance - two equivalent definitions:

i) F(.) first-order stochastically dominates F'(.) if for every nondecreasing function u : R → R it holds:

\[ \int u(x)\, dF(x) \;\ge\; \int u(x)\, dF'(x) \]

"Every expected utility maximizer who appreciates money prefers F(.) over F'(.)."



ii) F(.) first-order stochastically dominates F'(.) if

\[ F(x) \;\le\; F'(x) \quad \text{for all } x. \]

"The probability that the realized payoff is above a given threshold x is at least as large under F(.) as under F'(.), for every such threshold."
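A minimal numerical sketch of both definitions, for discrete lotteries (the payoffs and probabilities below are made-up illustrations, not from the slides):

import numpy as np

# Illustrative lotteries on the common support (1, 2, 3):
payoffs = np.array([1.0, 2.0, 3.0])
p_F  = np.array([0.2, 0.3, 0.5])   # distribution F
p_F2 = np.array([0.5, 0.3, 0.2])   # distribution F'

# Definition (ii): compare the CDFs pointwise, F(x) <= F'(x) for all x.
cdf_F, cdf_F2 = np.cumsum(p_F), np.cumsum(p_F2)
print("F first-order dominates F':", bool(np.all(cdf_F <= cdf_F2)))

# Definition (i): every nondecreasing u yields a weakly higher expected utility under F.
# Spot-check a few nondecreasing utility functions.
for u in (lambda x: x, np.sqrt, lambda x: x**3):
    print(f"E_F[u] = {p_F @ u(payoffs):.3f} >= E_F'[u] = {p_F2 @ u(payoffs):.3f}")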

Second-order stochastic dominance:

For any two distributions F(.) and F'(.) with the same mean, F(.) second-order stochastically dominates F'(.) if for every nondecreasing concave function u : R → R it holds:

\[ \int u(x)\, dF(x) \;\ge\; \int u(x)\, dF'(x) \]

"Provided that both distributions give the same expected monetary payoff, every risk-averse agent prefers F(.) over F'(.)."
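A similarly small sketch (hypothetical lotteries): F pays the common mean for sure, F' is a mean-preserving spread of it, and every nondecreasing concave u weakly prefers F:

import numpy as np

payoffs = np.array([1.0, 2.0, 3.0])
p_F  = np.array([0.0, 1.0, 0.0])   # F: pays 2 for sure
p_F2 = np.array([0.5, 0.0, 0.5])   # F': pays 1 or 3 with equal probability
assert np.isclose(p_F @ payoffs, p_F2 @ payoffs)   # same mean, as the definition requires

# Spot-check a few nondecreasing concave utility functions.
for u in (np.sqrt, np.log, lambda x: -1.0 / x):
    print(f"E_F[u] = {p_F @ u(payoffs):.3f} >= E_F'[u] = {p_F2 @ u(payoffs):.3f}")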



2. State-dependent utility

The agent cares not only about consequences, but also about the reasons for those consequences.

S: finite set of states of nature; the actual state is unknown.

π_s > 0: probability that state s occurs; probabilities are objective.

A function g : S → R_+ maps states of nature into monetary outcomes.

Every g(.) induces a lottery F(.) with

\[ F(x) \;=\; \sum_{\{s \,:\, g(s) \le x\}} \pi_s \]

Each g is also represented by the payoff vector (x_1, ..., x_S); the set of (nonnegative) acts g is R_+^S.
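For concreteness, a short sketch of how an act g and the objective probabilities π_s induce the lottery F(.); the states, probabilities, and payoffs are made up for illustration:

import numpy as np

pi = np.array([0.2, 0.5, 0.3])     # pi_s > 0, summing to 1
g  = np.array([10.0, 4.0, 10.0])   # g(s): monetary payoff in state s

def induced_cdf(x):
    # F(x) = sum of pi_s over the states s with g(s) <= x
    return pi[g <= x].sum()

for x in (3.0, 4.0, 9.0, 10.0):
    print(f"F({x}) = {induced_cdf(x):.2f}")   # 0.00, 0.50, 0.50, 1.00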



⪰ : rational preferences defined on R_+^S.

Definition: ⪰ has an extended expected utility representation iff for every s ∈ S there is a function u_s : R_+ → R such that

\[ (x_1, \dots, x_S) \succsim (x'_1, \dots, x'_S) \quad \text{if and only if} \quad \sum_{s \in S} \pi_s u_s(x_s) \;\ge\; \sum_{s \in S} \pi_s u_s(x'_s). \]

u_s: state-dependent utility function (before: state-independent, or state-uniform, utility functions).

Furthermore, we allow the monetary payoff within each state to be not a certain amount but a lottery with distribution F_s(.).

Hence, an alternative is L = (F_1, F_2, ..., F_S).



Extended independence: ⪰ satisfies the extended independence axiom if for all L, L', L'' and α ∈ (0, 1) we have:

\[ L \succsim L' \quad \text{iff} \quad \alpha L + (1-\alpha) L'' \;\succsim\; \alpha L' + (1-\alpha) L''. \]

Proposition: Suppose ⪰ is rational, continuous, and satisfies the extended independence axiom. Then we can assign utility functions u_s(.) for money in every state s such that for any L = (F_1, F_2, ..., F_S) and L' = (F'_1, F'_2, ..., F'_S) we have:

\[ L \succsim L' \quad \text{iff} \quad \sum_{s \in S} \pi_s \left( \int u_s(x_s)\, dF_s(x_s) \right) \;\ge\; \sum_{s \in S} \pi_s \left( \int u_s(x_s)\, dF'_s(x_s) \right). \]
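A minimal sketch of this criterion for discrete within-state lotteries; the state probabilities, state-dependent utilities, and lotteries below are illustrative assumptions, not from the slides:

import numpy as np

pi  = np.array([0.4, 0.6])          # objective state probabilities
u_s = [np.sqrt, np.log1p]           # state-dependent Bernoulli functions u_1, u_2

# Each F_s is given as (payoffs, probabilities); L and L' are two alternatives.
L  = [(np.array([1.0, 4.0]), np.array([0.5, 0.5])),
      (np.array([2.0, 2.0]), np.array([0.5, 0.5]))]
L2 = [(np.array([0.0, 9.0]), np.array([0.5, 0.5])),
      (np.array([1.0, 3.0]), np.array([0.5, 0.5]))]

def extended_eu(alt):
    # sum over s of pi_s * (expected value of u_s under F_s)
    return sum(p * (probs @ u(x)) for p, u, (x, probs) in zip(pi, u_s, alt))

print("L preferred to L':", extended_eu(L) >= extended_eu(L2))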



3. Subjective Probability Theory

Until now: objective probabilities.

Now: no objective probabilities - uncertainty.

Goal: derive subjective probabilities from preferences.

"The agent has preferences over the uncertain alternatives as if he maximizes an expected utility function with a probability distribution that looks like..."

S: set of states, without objective probabilities.

g is again represented by (x_1, ..., x_S); the set of (nonnegative) acts g is R_+^S.



Again, we allow the monetary payoff within each state to be not a certain amount but a lottery with distribution F_s(.).

Hence, again an alternative is L = (F_1, F_2, ..., F_S).

State-dependent preferences: ⪰ = (⪰_1, ⪰_2, ..., ⪰_S).

Preferences are assumed to be rational, continuous, and to satisfy the extended independence axiom. Then we have functions u_s(.) that are Bernoulli functions for every state.

Definition: The state preferences (⪰_1, ⪰_2, ..., ⪰_S) are state uniform iff ⪰_s = ⪰_{s'} for all s, s' ∈ S.

"The risk attitude towards money is the same in all states."



Proposition: Suppose ⪰ is rational, continuous, satisfies the extended independence axiom, and is state uniform. Then there are probabilities (π_1, π_2, ..., π_S) >> 0 and a Bernoulli function u(.) on amounts of money such that for any (x_1, ..., x_S) and (x'_1, ..., x'_S) we have:

\[ (x_1, \dots, x_S) \succsim (x'_1, \dots, x'_S) \quad \text{if and only if} \quad \sum_{s} \pi_s u(x_s) \;\ge\; \sum_{s} \pi_s u(x'_s). \]

Moreover, the probabilities are uniquely determined, and the utility function is unique up to a scalar transformation.
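One way to see how the π_s are pinned down by preferences (an illustrative sketch under the proposition's assumptions, with made-up numbers): normalize u(0) = 0 and u(1) = 1; the bet that pays 1 in state s and 0 elsewhere then has expected utility π_s, so comparing such bets reveals the subjective probabilities.

import numpy as np

u  = np.sqrt                         # some Bernoulli function with u(0) = 0, u(1) = 1
pi = np.array([0.25, 0.35, 0.40])    # the agent's (unobserved) subjective probabilities

def eu_of_bet_on(s):
    # Expected utility of the act paying 1 in state s and 0 elsewhere:
    # sum_t pi_t u(x_t) = pi_s * u(1) + (1 - pi_s) * u(0) = pi_s
    x = np.zeros_like(pi)
    x[s] = 1.0
    return pi @ u(x)

for s in range(3):
    print(f"EU of the bet on state {s}: {eu_of_bet_on(s):.2f}")   # equals pi_s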

