

This is the probability of a miss on the request to s. Summing over all requests to unmarked items, we have

    $E[X_i] \le c_i + \sum_{j=1}^{k-c_i} \frac{c_i}{k-j+1} \le c_i H(k)$,

where c_i denotes the number of new items requested in phase i, X_i the number of misses incurred in phase i, and H(k) the k-th harmonic number.

Thus the total expected number of misses incurred by the Randomized Marking Algorithm is

(13.40)    $\sum_{i=1}^{r} E[X_i] \le \sum_{i=1}^{r} c_i H(k) = H(k) \sum_{i=1}^{r} c_i$.

Combining (13.39) and (13.40), we immediately get the following performance guarantee.

(13.41) The expected number of misses incurred by the Randomized Marking Algorithm is at most $2H(k) \cdot f(\sigma) = O(\log k) \cdot f(\sigma)$.
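To make the object of this guarantee concrete, here is a minimal Python sketch of the marking rule, assuming the standard formulation: mark each item when it is requested, on a miss evict an unmarked item chosen uniformly at random, and begin a new phase (clearing all marks) when every cached item is marked. The function and variable names (randomized_marking, requests, k) are ours, chosen for illustration.

    import random

    def randomized_marking(requests, k):
        """Count misses of the randomized marking rule with cache size k >= 1."""
        cache, marked, misses = set(), set(), 0
        for item in requests:
            if item not in cache:
                misses += 1
                if len(cache) == k:                      # cache full: must evict
                    if marked == cache:                  # all marked: new phase begins
                        marked.clear()
                    victim = random.choice(list(cache - marked))
                    cache.remove(victim)                 # evict a random unmarked item
            cache.add(item)
            marked.add(item)                             # mark every requested item
        return misses

For example, randomized_marking(list(range(k + 1)) * 100, k) exercises the classic adversarial pattern of cycling through k + 1 distinct items.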

13.9 Chernoff Bounds<br />

In Section 13.3, we defined the expectation of a random variable formally and have worked with this definition and its consequences ever since. Intuitively, we have a sense that the value of a random variable ought to be "near" its expectation with reasonably high probability, but we have not yet explored the extent to which this is true. We now turn to some results that allow us to reach conclusions like this, and see a sampling of the applications that follow.

We say that two random variables X and Y are independent if, for any values i and j, the events {X = i} and {Y = j} are independent. This definition extends naturally to larger sets of random variables. Now consider a random variable X that is a sum of several independent 0-1-valued random variables: X = X_1 + X_2 + ... + X_n, where X_i takes the value 1 with probability p_i, and the value 0 otherwise. By linearity of expectation, we have

    $E[X] = \sum_{i=1}^{n} p_i$.

Intuitively, the independence of the random variables X_1, X_2, ..., X_n suggests that their fluctuations are likely to "cancel out," and so their sum X will have a value close to its expectation with high probability. This is in fact true, and we state two concrete versions of this result: one bounding the probability that X deviates above E[X], the other bounding the probability that X deviates below E[X]. We call these results Chernoff bounds, after one of the probabilists who first established bounds of this form.
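As an informal illustration of this concentration claim (our own demo, not part of the text), the short Python sketch below samples a sum of independent 0-1 variables many times and reports how often the sum strays more than 10% from its expectation; the parameter choices (n = 1000, uniformly random p_i, 10,000 trials) are arbitrary assumptions for the demo.

    import random

    def concentration_demo(n=1000, trials=10_000, deviation=0.10, seed=1):
        """Estimate how often X = X_1 + ... + X_n strays far from E[X]."""
        rng = random.Random(seed)
        p = [rng.random() for _ in range(n)]        # success probability of each X_i
        mu = sum(p)                                 # E[X] by linearity of expectation
        far = 0
        for _ in range(trials):
            x = sum(rng.random() < pi for pi in p)  # one sample of X
            if abs(x - mu) > deviation * mu:
                far += 1
        return mu, far / trials

    if __name__ == "__main__":
        mu, freq = concentration_demo()
        print(f"E[X] = {mu:.1f}; fraction of samples more than 10% away: {freq:.4f}")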

(13.42) Let X, X_1, X_2, ..., X_n be defined as above, and assume that μ ≥ E[X]. Then, for any δ > 0, we have

    $\Pr[X > (1+\delta)\mu] < \left[ \frac{e^{\delta}}{(1+\delta)^{1+\delta}} \right]^{\mu}$.


Proof. To bound the probability that X exceeds (1 + δ)μ, we go through a sequence of simple transformations. First note that, for any t > 0, we have Pr[X > (1 + δ)μ] = Pr[e^{tX} > e^{t(1+δ)μ}], as the function f(x) = e^{tx} is monotone in x. We will use this observation with a t that we'll select later.

Next we use some simple properties of the expectation. For a random variable Y that takes nonnegative values, we have γ · Pr[Y > γ] ≤ E[Y], by the definition of the expectation. This allows us to bound the probability that Y exceeds γ in terms of E[Y]. Combining these two ideas, we get the following inequalities.

    $\Pr[X > (1+\delta)\mu] = \Pr[e^{tX} > e^{t(1+\delta)\mu}] \le e^{-t(1+\delta)\mu} \, E[e^{tX}]$.

Next we need to bound the expectation E[e^{tX}]. Writing X as X = Σ_i X_i, the expectation is E[e^{tX}] = E[e^{t Σ_i X_i}] = E[Π_i e^{tX_i}]. For independent variables Y and Z, the expectation of the product YZ is E[YZ] = E[Y] · E[Z]. The variables X_i are independent, so we get E[Π_i e^{tX_i}] = Π_i E[e^{tX_i}].

Now, e^{tX_i} is e^t with probability p_i and e^0 = 1 otherwise, so its expectation can be bounded as

    $E[e^{tX_i}] = p_i e^{t} + (1 - p_i) = 1 + p_i(e^{t} - 1) \le e^{p_i(e^{t}-1)}$,

where the last inequality follows from the fact that 1 + α ≤ e^α for any α ≥ 0. Combining the inequalities, we get the following bound.

    $\Pr[X > (1+\delta)\mu] \le e^{-t(1+\delta)\mu} \, E[e^{tX}] \le e^{-t(1+\delta)\mu} \prod_{i} e^{p_i(e^{t}-1)}$.
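To see what the resulting bound (13.42) buys in a concrete case, here is a small Python check (our own illustration, not part of the text) comparing it with the exact tail probability when every p_i equals a common value p, so that X is binomial and μ = E[X]. The specific numbers n = 100, p = 0.5, δ = 0.5 are arbitrary choices for the demo.

    import math

    def chernoff_upper_bound(mu, delta):
        """Upper-tail bound from (13.42): Pr[X > (1+delta)*mu] < (e^delta / (1+delta)^(1+delta))^mu."""
        return (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu

    def exact_binomial_tail(n, p, threshold):
        """Exact Pr[X > threshold] for X ~ Binomial(n, p)."""
        return sum(math.comb(n, k) * p**k * (1 - p) ** (n - k)
                   for k in range(math.floor(threshold) + 1, n + 1))

    if __name__ == "__main__":
        n, p, delta = 100, 0.5, 0.5
        mu = n * p                              # E[X] = sum of the p_i
        bound = chernoff_upper_bound(mu, delta)
        exact = exact_binomial_tail(n, p, (1 + delta) * mu)
        print(f"Chernoff bound on Pr[X > {(1 + delta) * mu:.0f}]: {bound:.6f}")
        print(f"Exact tail probability:              {exact:.2e}")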
