

emphasizes the use of parallel experimental procedures and mathematical modeling techniques to rigorously evaluate the case for similarities between temporal and probability discounting. We would emphasize that the discounting framework, through its use of parallel approaches, also is well suited to reveal theoretically meaningful differences between the two types of discounting. When different procedures and analytical techniques are used, as is typically the case, any apparent differences observed between temporal and probability discounting could be due to the procedural and analytic approaches rather than to true differences in the underlying processes.

The conceptual similarity of discounting involving delayed and probabilistic rewards may be seen by considering reversals in preference. Recall that it was argued previously that preference reversals with delayed rewards occur because the subjective value of the smaller, sooner reward increases more than the subjective value of the larger, later reward as the delays to both are decreased equally. Similarly, preference reversals with probabilistic rewards might be explained by assuming that the subjective value of the smaller, less risky reward decreases more than the subjective value of the larger, more risky reward if the probabilities of winning decrease. As was true for temporal discounting, however, preference reversals with risky rewards do not greatly constrain the mathematical form of the probability discounting function (although they do preclude a simple expected value model of risky choice).
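
To make this reversal pattern concrete, here is a small illustrative calculation (ours, not the authors') that assumes a hyperbolic probability discounting function of the form given below as Equation 4; the amounts, probabilities, and rate parameter are arbitrary choices made only to show how reducing both probabilities of winning can shift preference from the smaller, less risky reward to the larger, riskier one.

```python
# Illustrative preference reversal with probabilistic rewards.
# Assumes a hyperbolic discounting function, V = A / (1 + h*theta),
# where theta is the odds against winning, theta = (1 - p) / p.
# The amounts, probabilities, and rate h are arbitrary values chosen
# for illustration only.

def odds_against(p):
    """Odds against receipt of a reward with probability p."""
    return (1 - p) / p

def subjective_value(amount, p, h):
    """Hyperbolic probability discounting (see Equation 4 below)."""
    return amount / (1 + h * odds_against(p))

h = 2.0  # assumed discounting rate

# Smaller, less risky reward vs. larger, riskier reward.
print(subjective_value(50, 1.00, h))   # 50.0   -> smaller reward preferred
print(subjective_value(100, 0.60, h))  # ~42.9

# Reduce both probabilities of winning by the same factor (here, 4).
print(subjective_value(50, 0.25, h))   # ~7.1
print(subjective_value(100, 0.15, h))  # ~8.1   -> preference reverses to larger reward
```

In this illustration the smaller reward's subjective value falls proportionally more than the larger reward's as the odds against winning lengthen, which is the asymmetry the reversal argument requires.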

Given what is now known about the form of the temporal discounting function, the question becomes whether a similar hyperbola-like mathematical function also describes the discounting of probabilistic rewards. Recent evidence suggests that, indeed, this is the case. Moreover, the fact that the same mathematical function describes both temporal and probability discounting has generated suggestions that the same (or similar) underlying processes might account for both probability and temporal discounting (e.g., Green & Myerson, 1996; Prelec & Loewenstein, 1991; Rachlin et al., 1986, 1994; Stevenson, 1986). After presenting two proposed probability discounting functions that have the same mathematical forms as the temporal discounting functions discussed previously, we evaluate such single-process accounts.

Mathematical Descriptions of Probability Discounting

Rachlin et al. (1991) proposed that the value of probabilistic rewards may be described by a discounting function of the same form (i.e., a hyperbola) as that which they used to describe delayed rewards:

V = A / (1 + hΘ), (4)

where V represents the subjective value of a probabilistic reward of amount A, h is a parameter (analogous to k in Equation 2) that reflects the rate of decrease in subjective value, and Θ represents the odds against receipt of a probabilistic reward (i.e., Θ = [1 − p]/p, where p is the probability of receipt). When h is greater than 1.0, choice is always risk averse; when h is less than 1.0, choice is always risk seeking; and when h equals 1.0, the subjective value predicted by Equation 4 is equivalent to the expected value.
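
As an illustrative check on these properties, the following sketch compares the value predicted by Equation 4 with the expected value of an assumed reward ($100 at p = .25; these numbers are ours, not from the original study) for rates of h above, equal to, and below 1.0.

```python
# Equation 4: V = A / (1 + h*theta), with theta = (1 - p)/p (odds against).
# Compares the predicted subjective value with the reward's expected value
# for h above, equal to, and below 1.0; the amount and probability are
# arbitrary illustrative values.

amount, p = 100.0, 0.25          # $100 with a .25 chance of winning
theta = (1 - p) / p              # odds against = 3
expected_value = amount * p      # 25.0

for h in (2.0, 1.0, 0.5):
    v = amount / (1 + h * theta)
    attitude = ("risk averse" if v < expected_value
                else "risk seeking" if v > expected_value
                else "equal to expected value")
    print(f"h = {h}: V = {v:.2f} ({attitude})")

# h = 2.0: V = 14.29 (risk averse)
# h = 1.0: V = 25.00 (equal to expected value)
# h = 0.5: V = 40.00 (risk seeking)
```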

Alternatively, Ostaszewski et al. (1998) suggested that the discounting of probabilistic rewards may be better described by a hyperbola-like form analogous to that for delayed rewards (i.e., Equation 3):

V = A / (1 + hΘ)^s. (5)

The parameter s may represent the nonlinear scaling of amount and/or odds against and is usually less than 1.0 (Green et al., 1999a). This is analogous to the role of the exponent in the hyperbola-like temporal discounting function (Equation 3).
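
A minimal numerical sketch of Equation 5 follows, with h and s set to arbitrary illustrative values; with s less than 1.0, subjective value declines less steeply at long odds against than under the simple hyperbola of Equation 4.

```python
# Equation 5: V = A / (1 + h*theta)**s, the hyperbola-like form.
# h and s are arbitrary illustrative values; with s < 1.0, subjective
# value falls off less steeply at long odds than the simple hyperbola.

amount, h, s = 100.0, 1.0, 0.7

for p in (0.9, 0.5, 0.2, 0.05):
    theta = (1 - p) / p
    v_hyperbola = amount / (1 + h * theta)          # Equation 4
    v_hyperbola_like = amount / (1 + h * theta)**s  # Equation 5
    print(f"p = {p:.2f}: Eq4 = {v_hyperbola:.2f}, Eq5 = {v_hyperbola_like:.2f}")

# p = 0.90: Eq4 = 90.00, Eq5 = 92.89
# p = 0.50: Eq4 = 50.00, Eq5 = 61.56
# p = 0.20: Eq4 = 20.00, Eq5 = 32.41
# p = 0.05: Eq4 = 5.00, Eq5 = 12.28
```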

