
an exercise.

Example 2.5 Let us conclude this section with another similar example. You are to throw a die twice, and you will win 1 if you can guess the total number of eyes from these two throws. The optimal guess is 7 (if you did not know that already, check it out!), and that gives you a chance of winning of 1/6. So the expected win is also 1/6.
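To "check it out", one can simply enumerate the 36 equally likely outcomes of the two throws. A minimal Python sketch (not part of the original text) that does so:

```python
from collections import Counter
from fractions import Fraction

# Enumerate the 36 equally likely outcomes of two die throws
# and count how often each total occurs.
totals = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))

# The best guess is the most frequent total.
best_guess, ways = totals.most_common(1)[0]
print(best_guess, Fraction(ways, 36))  # prints: 7 1/6
```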

Now, you are offered the chance to pay for knowing the result of the first throw. How much will you pay (or alternatively, what is the EVPI for the first throw)? A close examination shows that knowing the result of the first throw does not help at all: whatever the first throw shows, the total is uniformly distributed over six consecutive values, each with probability 1/6, and 7 is always among them. So even if you knew the first throw, guessing a total of 7 is still optimal (although no longer the unique optimal solution), and the probability of winning is still 1/6. Hence, the EVPI for the first stage is zero.

Alternatively, you are offered the chance to pay for learning the value of both throws before "guessing". In that case you will of course make a correct guess, and be certain of winning one. Therefore the expected gain has increased from 1/6 to 1, so the EVPI for knowing the value of both random variables is 5/6. ✷
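The same enumeration can be pushed a little further to confirm both EVPI values in the example; again a small sketch, not part of the original text:

```python
from collections import Counter
from fractions import Fraction

# No information: best single guess over all 36 outcomes.
totals = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
win_none = Fraction(max(totals.values()), 36)             # 1/6

# First throw known: best guess conditional on d1, averaged over d1.
win_first = Fraction(0)
for d1 in range(1, 7):
    cond = Counter(d1 + d2 for d2 in range(1, 7))         # uniform over 6 totals
    win_first += Fraction(1, 6) * Fraction(max(cond.values()), 6)

# Both throws known: we always guess correctly.
win_both = Fraction(1)

print(win_first - win_none)  # EVPI for the first throw: 0
print(win_both - win_none)   # EVPI for both throws: 5/6
```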

As you see, EVPI is not one number for a stochastic program, but can be calculated for any combination of random variables. If only one number is given, it usually means the value of learning everything, in contrast to knowing nothing.

