Algorithm Design

648 Chapter 11 Approximation Algorithms

The New Dynamic Programming Algorithm for the Knapsack Problem

To solve a problem by dynamic programming, we have to define a polynomial set of subproblems. The dynamic programming algorithm we defined when we studied the Knapsack Problem earlier uses subproblems of the form OPT(i, w): the subproblem of finding the maximum value of any solution using a subset of the items 1, ..., i and a knapsack of weight w. When the weights are large, this is a large set of problems. We need a set of subproblems that work well when the values are reasonably small; this suggests that we should use subproblems associated with values, not weights. We define our subproblems as follows.

The subproblem is defined by $i$ and a target value $V$: $\overline{OPT}(i, V)$ is the smallest knapsack weight $W$ so that one can obtain a solution using a subset of the items $\{1, \ldots, i\}$ with value at least $V$. We will have a subproblem for all $i = 0, \ldots, n$ and values $V = 0, \ldots, \sum_{j=1}^{i} v_j$. If $v^*$ denotes $\max_i v_i$, then we see that the largest $V$ can get in a subproblem is $\sum_{j=1}^{n} v_j \le n v^*$, so there are at most $O(n^2 v^*)$ subproblems. As before, the recurrence considers whether or not the last item $n$ belongs to the optimal solution. If $V > \sum_{i=1}^{n-1} v_i$, then item $n$ must be included, and $\overline{OPT}(n, V) = w_n + \overline{OPT}(n-1, \max(0, V - v_n))$. Otherwise

$$\overline{OPT}(n, V) = \min\bigl(\overline{OPT}(n-1, V),\; w_n + \overline{OPT}(n-1, \max(0, V - v_n))\bigr).$$

We can then write down an analogous dynamic programming algorithm.

Knapsack(n):
  Array M[0 ... n, 0 ... V]
  For i = 0, ..., n
    M[i, 0] = 0
  Endfor
  For i = 1, 2, ..., n
    For V = 1, ..., Σ_{j=1}^{i} v_j
      If V > Σ_{j=1}^{i-1} v_j then
        M[i, V] = w_i + M[i-1, max(0, V - v_i)]
      Else
        M[i, V] = min(M[i-1, V], w_i + M[i-1, max(0, V - v_i)])
      Endif
    Endfor
  Endfor
  Return the maximum value V such that M[n, V] ≤ W


(11.40) Knapsack(n) takes $O(n^2 v^*)$ time and correctly computes the optimal values of the subproblems.

As was done before, we can trace back through the table M containing the optimal values of the subproblems to find an optimal solution.
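To make the algorithm concrete, here is a minimal Python sketch of the value-indexed dynamic program above. The function name `knapsack_by_value` and the use of `math.inf` to mark unreachable values are illustrative choices, not from the text; for simplicity the table covers every value up to the total, rather than restricting row $i$ to $V \le \sum_{j \le i} v_j$ as the pseudocode does.

```python
import math

def knapsack_by_value(values, weights, W):
    """Maximum total value achievable with capacity W, via subproblems
    M[i][V] = smallest weight of a subset of items 1..i with value >= V."""
    n = len(values)
    total = sum(values)
    # math.inf marks targets V that no subset of items 1..i can reach.
    M = [[math.inf] * (total + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        M[i][0] = 0                  # value 0 needs no weight
    prefix = 0                       # sum of v_1 .. v_{i-1}
    for i in range(1, n + 1):
        vi, wi = values[i - 1], weights[i - 1]
        for V in range(1, total + 1):
            take = wi + M[i - 1][max(0, V - vi)]   # include item i
            if V > prefix:
                # Items 1..i-1 alone cannot reach V, so item i is forced.
                M[i][V] = take
            else:
                M[i][V] = min(M[i - 1][V], take)
        prefix += vi
    # Answer: the largest target value whose cheapest subset fits in W.
    return max(V for V in range(total + 1) if M[n][V] <= W)
```

A quick check by hand: with values [3, 4, 5], weights [2, 3, 4], and W = 5, the best feasible subset is the first two items, giving value 7.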

Solved Exercises<br />

Solved Exercise 1<br />

Recall the Shortest-First greedy algorithm for the Interval Scheduling Problem: Given a set of intervals, we repeatedly pick the shortest interval I, delete all the other intervals I' that intersect I, and iterate.
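The loop above can be sketched in a few lines of Python. This is a minimal illustration, not from the text: it assumes intervals are given as (start, finish) pairs, treats intervals that share only an endpoint as non-intersecting, and the name `shortest_first` is our own.

```python
def shortest_first(intervals):
    """Repeatedly pick the shortest remaining interval, then discard every
    remaining interval that intersects it."""
    remaining = sorted(intervals, key=lambda iv: iv[1] - iv[0])
    selected = []
    while remaining:
        s, f = remaining.pop(0)          # shortest interval left
        selected.append((s, f))
        # Keep only intervals disjoint from the chosen one (sharing an
        # endpoint counts as disjoint under this convention).
        remaining = [(s2, f2) for (s2, f2) in remaining
                     if f2 <= s or s2 >= f]
    return selected

# A short middle interval can knock out two longer flanking intervals,
# in the spirit of the example from Figure 4.1:
print(shortest_first([(0, 4), (3, 6), (5, 9)]))   # [(3, 6)], while {(0, 4), (5, 9)} has size 2
```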

In Chapter 4, we saw that this algorithm does not always produce a maximum-size set of nonoverlapping intervals. However, it turns out to have the following interesting approximation guarantee. If s* is the maximum size of a set of nonoverlapping intervals, and s is the size of the set produced by the Shortest-First Algorithm, then s ≥ s*/2 (that is, Shortest-First is a 2-approximation). Prove this fact.

Solution Let's first recall the example in Figure 4.1 from Chapter 4, which showed that Shortest-First does not necessarily find an optimal set of intervals. The difficulty is clear: we may select a short interval j while eliminating two longer flanking intervals i and i'. So we have done only half as well as the optimum.

The question is to show that Shortest-First could never do worse than this. The issues here are somewhat similar to what came up in the analysis of the

