

Chapter 13 Randomized Algorithms

13.3 Random Variables and Their Expectations

declaring this to have the value ∞ if the sum diverges. Thus, for example, if X takes each of the values in {1, 2, ..., n} with probability 1/n, then

E[X] = 1(1/n) + 2(1/n) + ··· + n(1/n) = (1/n) · n(n + 1)/2 = (n + 1)/2.
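
As a quick check, not part of the original text, the defining sum can be evaluated directly and compared against a simulated average; the value of n and the trial count below are arbitrary illustrative choices.

```python
import random

def expected_uniform(n):
    """Evaluate E[X] = sum over j of j * Pr[X = j] for X uniform on {1, ..., n}."""
    return sum(j * (1.0 / n) for j in range(1, n + 1))

n = 10
print(expected_uniform(n))   # 5.5, i.e. (n + 1) / 2
print((n + 1) / 2)           # closed form from the text

# Monte Carlo check: the average of many uniform draws approaches (n + 1) / 2.
trials = 100_000
print(sum(random.randint(1, n) for _ in range(trials)) / trials)
```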

Example: Waiting for a First Success

Here’s a more useful example, in which we see how an appropriate random variable lets us talk about something like the "running time" of a simple random process. Suppose we have a coin that comes up heads with probability p > 0, and tails with probability 1 − p. Different flips of the coin have independent outcomes. If we flip the coin until we first get a heads, what’s the expected number of flips we will perform? To answer this, we let X denote the random variable equal to the number of flips performed. For j > 0, we have Pr[X = j] = (1 − p)^{j−1} p: in order for the process to take exactly j steps, the first j − 1 flips must come up tails, and the jth must come up heads.

Now, applying the definition, we have

E[X] = Σ_{j≥0} j · Pr[X = j] = Σ_{j≥1} j(1 − p)^{j−1} p
     = (p/(1 − p)) Σ_{j≥1} j(1 − p)^j = (p/(1 − p)) · (1 − p)/p² = 1/p.

Thus we get the following intuitively sensible result.

(13.7) If we repeatedly perform independent trials of an experiment, each of which succeeds with probability p > 0, then the expected number of trials we need to perform until the first success is 1/p.
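
To make (13.7) concrete, here is a small Monte Carlo sketch, not from the text; the helper name flips_until_heads, the bias p = 0.2, and the trial count are arbitrary choices for the illustration.

```python
import random

def flips_until_heads(p):
    """Flip a p-biased coin until the first heads; return the number of flips."""
    flips = 1
    while random.random() >= p:   # tails occurs with probability 1 - p
        flips += 1
    return flips

p = 0.2
trials = 100_000
average = sum(flips_until_heads(p) for _ in range(trials)) / trials
print(average)   # should be close to 1 / p = 5
```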

Linearity of Expectation

In Sections 13.1 and 13.2, we broke events down into unions of much simpler events, and worked with the probabilities of these simpler events. This is a powerful technique when working with random variables as well, and it is based on the principle of linearity of expectation.

(13.8) Linearity of Expectation. Given two random variables X and Y defined over the same probability space, we can define X + Y to be the random variable equal to X(ω) + Y(ω) on a sample point ω. For any X and Y, we have

E[X + Y] = E[X] + E[Y].

We omit the proof, which is not difficult. Much of the power of (13.8) comes from the fact that it applies to the sum of any random variables; no restrictive assumptions are needed. As a result, if we need to compute the expectation of a complicated random variable X, we can first write it as a sum of simpler random variables X = X_1 + X_2 + ··· + X_n, compute each E[X_i], and then determine E[X] = Σ_i E[X_i]. We now look at some examples of this principle in action.
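
Before the card-guessing examples, here is a minimal numerical sketch of (13.8), not from the text. It pairs a die roll X with the fully dependent variable Y = 7 − X to emphasize that linearity requires no independence assumption; the seed and trial count are arbitrary.

```python
import random

random.seed(0)
trials = 200_000

# X is a fair die roll; Y = 7 - X is completely determined by X.
# Linearity of expectation still gives E[X + Y] = E[X] + E[Y].
sum_x = sum_y = sum_xy = 0.0
for _ in range(trials):
    x = random.randint(1, 6)
    y = 7 - x
    sum_x += x
    sum_y += y
    sum_xy += x + y

print(sum_x / trials, sum_y / trials)   # both close to 3.5
print(sum_xy / trials)                  # exactly 7.0 = 3.5 + 3.5
```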

Example: Guessing Cards

Memoryless Guessing To amaze your friends, you have them shuffle a deck of 52 cards and then turn over one card at a time. Before each card is turned over, you predict its identity. Unfortunately, you don’t have any particular psychic abilities--and you’re not so good at remembering what’s been turned over already--so your strategy is simply to guess a card uniformly at random from the full deck each time. On how many predictions do you expect to be correct?

Let’s work this out for the more general setting in which the deck has n distinct cards, using X to denote the random variable equal to the number of correct predictions. A surprisingly effortless way to compute X is to define the random variable X_i, for i = 1, 2, ..., n, to be equal to 1 if the ith prediction is correct, and 0 otherwise. Notice that X = X_1 + X_2 + ··· + X_n, and

E[X_i] = 0 · Pr[X_i = 0] + 1 · Pr[X_i = 1] = Pr[X_i = 1] = 1/n.

It’s worth pausing to note a useful fact that is implicitly demonstrated by the above calculation: if Z is any random variable that only takes the values 0 or 1, then E[Z] = Pr[Z = 1].

Since E[X_i] = 1/n for each i, we have

E[X] = Σ_{i=1}^{n} E[X_i] = n · (1/n) = 1.

Thus we have shown the following.

(13.9) The expected number of correct predictions under the memoryless guessing strategy is 1, independent of n.
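
A small simulation sketch, not part of the text, can be used to check (13.9); the helper name memoryless_correct, the deck sizes, and the trial count are illustrative choices.

```python
import random

def memoryless_correct(n, trials=20_000):
    """Estimate the expected number of correct memoryless guesses for an n-card deck."""
    total = 0
    for _ in range(trials):
        deck = list(range(n))
        random.shuffle(deck)
        # Guess uniformly at random from the full deck before each card is revealed.
        total += sum(1 for card in deck if random.randrange(n) == card)
    return total / trials

for n in (10, 52, 200):
    print(n, memoryless_correct(n))   # all close to 1, independent of n
```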

Trying to compute E[X] directly from the definition Σ_{j≥0} j · Pr[X = j] would be much more painful, since it would involve working out a much more elaborate summation. A significant amount of complexity is hidden away in the seemingly innocuous statement of (13.8).

Guessing with Memory Now let’s consider a second scenario. Your psychic abilities have not developed any further since last time, but you have become very good at remembering which cards have already been turned over. Thus, when you predict the next card now, you only guess uniformly from among the cards that have not yet been seen.
