Limits of Approximation Algorithms                                28 Jan, 2010 (TIFR)

Lec. 2: Approximation Algorithms for NP-hard Problems (Part II)

Lecturer: Prahladh Harsha                                    Scribe: S. Ajesh Babu

We will continue the survey of approximation algorithms in this lecture. First, we will
discuss a (1+ε)-approximation algorithm for Knapsack running in time poly(n, 1/ε). We
will then see applications of some heavy hammers, such as linear programming (LP) and
semi-definite programming (SDP), towards approximation algorithms. More specifically,
we will see LP-based approximations for MAXSAT and MAXCUT. In the following lecture, we
will see an SDP-based algorithm for MAXCUT.

The references for this lecture include Lecture 1 of the DIMACS tutorial on limits of
approximation [HC09], Lectures 2, 3 and 4 of Sudan's course on inapproximability at
MIT [Sud99], and Section 8 of Vazirani's book on Approximation Algorithms [Vaz04].

2.1 Knapsack

The Knapsack problem is defined as follows.

Input: • n items, where each item i ∈ [n] has weight w_i ∈ Z≥0 and value v_i ∈ Z≥0.
       • A knapsack with capacity c ∈ Z≥0.
Output: A subcollection of items S ⊆ [n] such that ∑_{i∈S} w_i ≤ c.
Objective: Maximize the total value of the subcollection, ∑_{i∈S} v_i.

2.1.1 Greedy approach

The following is a natural greedy approach for this problem (a code sketch follows the
steps below).

1. Sort the items by v_i/w_i in non-increasing order, i.e., arrange the items so that
   v_j/w_j ≥ v_{j+1}/w_{j+1}.
2. Find the smallest k such that ∑_{i≤k} w_i > c.
3. Output S = {1, ..., k−1}.

This algorithm performs arbitrarily badly with respect to approximation: for every M,
there is a Knapsack instance on which the greedy output is less than OPT/M. We can do
slightly better if we replace the final step as follows.

3′. If v_k > ∑_{i=1}^{k−1} v_i, then output {k}; else output S.

It can be shown that the above modification to the greedy approach yields a
2-approximation.
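The following is a minimal Python sketch of the modified greedy algorithm (steps 1, 2
and 3′); the function name and input encoding are our own choices, and weights are
assumed to be positive.

    def greedy_knapsack(values, weights, capacity):
        # Greedy 2-approximation: scan items by value density, fill the knapsack,
        # and take the better of the greedy prefix and the first overflowing item.
        order = sorted(range(len(values)),
                       key=lambda i: values[i] / weights[i], reverse=True)
        S, total_weight = [], 0
        for i in order:
            if total_weight + weights[i] > capacity:
                # Step 3': a single item may beat the whole prefix collected so far.
                if weights[i] <= capacity and values[i] > sum(values[j] for j in S):
                    return [i]
                return S
            S.append(i)
            total_weight += weights[i]
        return S  # every item fits

For example, with values (2, M), weights (1, M) and capacity M, steps 1-3 output only
the first item (value 2), while step 3′ recovers the single item of value M.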


2.1.2 FPTAS for Knapsack

We will now show that we can get arbitrarily good approximations for Knapsack. More
precisely, we will show that for every ε > 0, there exists a (1+ε)-approximation
algorithm for Knapsack running in time poly(n, 1/ε).

For this, we will first require the following claim, which shows that Knapsack is
optimally solvable if we allow the algorithm to run in time poly(V), where V is the
maximum value max_{i∈[n]} v_i, instead of poly(log V). Algorithms that run in time
polynomial in the value of the input integers, rather than polynomial in the
bit-complexity of the input integers, are called pseudo-polynomial algorithms.

Claim 2.1.1. There is a pseudo-polynomial time algorithm OPT-Knapsack that solves
Knapsack optimally, i.e., in time poly(n, V) = O(n²V).

This claim tells us that if the values of the items are small enough (more precisely,
polynomially bounded), then the problem is optimally solvable. We will assume this
claim for now and show how, using it, one can construct a (1+ε)-approximation
algorithm. The main idea is that since we only want a (1+ε)-approximation, we can round
the values v_i down to a smaller-sized set, so that the new set of values is only
polynomially large, and then apply OPT-Knapsack to it.

Let V = max_{i∈[n]} v_i and K = εV/n.

(1+ε)-approximation algorithm for Knapsack:
1. Construct v′_i = ⌊v_i/(εV/n)⌋ = ⌊v_i/K⌋ for all i ∈ [n].
2. Run OPT-Knapsack on the input {(v′_i, w_i) : i = 1, ..., n} to obtain a set S.
3. Output S.

Analysis: Let O ⊆ [n] be the optimal subcollection for the original input {(v_i, w_i)}.
For any subcollection A ⊆ [n], let val(A) = ∑_{i∈A} v_i and val′(A) = ∑_{i∈A} v′_i. We
want to compare OPT = val(O) with val(S). We know that K·v′_i ≤ v_i < K·v′_i + K and
V ≤ OPT. Hence,

    val(O) − K·val′(O) ≤ |O|·K ≤ nK = εV ≤ ε·OPT
    ⇒ OPT − K·val′(O) ≤ ε·OPT
    ⇒ K·val′(O) ≥ (1 − ε)·OPT.

S is an optimal set for the input {(v′_i, w_i)}. Therefore,

    val(S) ≥ K·val′(S) ≥ K·val′(O) ≥ (1 − ε)·OPT.

Hence, assuming Claim 2.1.1, there is a (1+ε)-approximation algorithm that runs in time
poly(n, n/ε) = poly(n, 1/ε), since the rounded values are at most n/ε. Such an
algorithm is called a fully polynomial time approximation scheme (FPTAS).
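A minimal Python sketch of this rounding scheme; it assumes an exact solver
opt_knapsack for the rounded instance (a sketch of that routine, the dynamic program of
Claim 2.1.1, appears at the end of Section 2.2 below).

    from math import floor

    def fptas_knapsack(values, weights, capacity, eps):
        # (1 + eps)-approximation via value rounding: round each value down to a
        # multiple of K = eps*V/n, then solve the rounded instance exactly.  The
        # rounded values are at most n/eps, so the exact pseudo-polynomial solver
        # runs in time poly(n, 1/eps).
        n, V = len(values), max(values)
        K = eps * V / n
        scaled = [floor(v / K) for v in values]   # v'_i = floor(v_i / K)
        return opt_knapsack(scaled, weights, capacity)

On the toy instance values = [60, 100, 120], weights = [10, 20, 30], capacity 50 and
eps = 0.1, this recovers the exact optimum, items 1 and 2 (0-indexed).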


Definition 2.1.2 (FPTAS). An optimization problem Π is said to have a fully polynomial
time approximation scheme (FPTAS) if there exists an algorithm A such that, on input an
instance I of Π and ε > 0, the algorithm A outputs a (1+ε)-approximate solution in time
poly(|I|, 1/ε).

We conclude by proving Claim 2.1.1.

Pseudo-polynomial algorithm for OPT-Knapsack: The claim is proved via dynamic
programming. (Typically, all pseudo-polynomial algorithms are obtained via dynamic
programming.) Define

    f_{k,v} = min{ ∑_{i∈A} w_i : A ⊆ [k], val(A) ≥ v }.

The table of f_{k,v}'s, for v ∈ {1, ..., nV}, can be computed by dynamic programming
using the following recurrence (with the convention that f_{k,v} = 0 for v ≤ 0):

    f_{1,v} = w_1 if v_1 ≥ v, and ∞ otherwise,   for all v ∈ {1, ..., nV}
    f_{k+1,v} = min{ f_{k,v}, f_{k, v−v_{k+1}} + w_{k+1} }.

The optimal value is then obtained from the f_{n,v}'s using the formula

    OPT = max{ v : f_{n,v} ≤ c }.

(A code sketch of this dynamic program appears at the end of Section 2.2 below.) We
have thus shown:

Theorem 2.1.3. Knapsack has an FPTAS.

2.2 Minimum Makespan

The Minimum-Makespan problem is defined as follows.

Input: • n jobs, where each job j ∈ [n] takes time t_j.
       • m processors.
Output: A partition of [n] into m sets (S_1, ..., S_m) minimizing the maximum time on
any processor, i.e., achieving min_{S_1,...,S_m} max_j ∑_{i∈S_j} t_i.

Like Knapsack, Minimum-Makespan has a (1+ε)-approximation algorithm for every ε > 0;
however, unlike Knapsack, this algorithm runs in time O(n^{1/ε}). We will assume this
algorithm without proof (for details see [Vaz04, Section 10]).

Thus, even though we have very good approximations for Minimum-Makespan, the dependence
of the running time on ε is not polynomial. Such approximation schemes are called
polynomial time approximation schemes (PTAS).

Definition 2.2.1 (PTAS). An optimization problem Π is said to have a polynomial time
approximation scheme (PTAS) if, for every ε > 0, there is an algorithm A_ε that on
input I outputs a (1+ε)-approximate solution in time poly(|I|).

Note the order of quantifiers: first ε, then A_ε. Thus, the running time of the
algorithm A_ε can have arbitrary dependence on ε (e.g., poly(n, 1/ε), n^{poly(1/ε)},
n^{2^{1/ε}}, etc.). Clearly, if Π has an FPTAS, then Π has a PTAS.
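Returning to Claim 2.1.1, here is a minimal Python sketch of the dynamic program above;
it is the opt_knapsack routine assumed by the earlier FPTAS sketch. The traceback step
that recovers the subset is our addition.

    def opt_knapsack(values, weights, capacity):
        # Exact pseudo-polynomial solver (Claim 2.1.1): f[k][v] is the minimum
        # weight of a subset of the first k items with total value >= v.
        n, V = len(values), max(values)
        top = n * V                    # no subset has value more than n*V
        INF = float("inf")
        f = [[0] + [INF] * top for _ in range(n + 1)]   # f[k][0] = 0 for all k
        for k in range(1, n + 1):
            vk, wk = values[k - 1], weights[k - 1]
            for v in range(1, top + 1):
                # either ignore item k, or take it and cover the remaining value
                f[k][v] = min(f[k - 1][v], f[k - 1][max(v - vk, 0)] + wk)
        opt_value = max(v for v in range(top + 1) if f[n][v] <= capacity)
        # traceback to recover the actual subset (0-indexed items)
        S, v = [], opt_value
        for k in range(n, 0, -1):
            if f[k][v] != f[k - 1][v]:   # item k was needed to achieve f[k][v]
                S.append(k - 1)
                v = max(v - values[k - 1], 0)
        return S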


2.3 MAXSAT and MAXCUT

The first of the problems is MAXSAT.

Input: A set of m clauses c_1, ..., c_m over n boolean variables,
       e.g., c_i = (x_{i_1} ∨ ¬x_{i_2} ∨ x_{i_3}).
Output: An assignment to the variables.
Goal: Maximize the number of satisfied clauses.

There is a trivial 2-approximation: try the all-true and the all-false assignments.
Every clause is satisfied by at least one of them; thus, the better of the two
satisfies at least half the total number of clauses, and this gives a 2-approximation.

The second of the problems is MAXCUT (which is actually a special case of MAXSAT).

Input: An undirected graph G = (V, E).
Output: A cut (S, S̄), S ⊆ V.
Goal: Maximize |E(S, S̄)|, the number of edges crossing the cut.

MAXCUT has an easy 2-approximation based on a greedy approach (see the sketch after
this list).

1. Put the first vertex in S.
2. Iterate through the remaining vertices and incrementally add each of them to one of
   S or S̄, based on which of these two options currently cuts more edges.

It is easy to see that this greedy approach ensures that at least half the edges are
cut, and thus gives a 2-approximation.
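A minimal Python sketch of the greedy cut procedure; the vertex and edge encoding are
our own choices.

    def greedy_maxcut(n, edges):
        # Greedy 2-approximation for MAXCUT: place each vertex on the side that
        # cuts more of its edges to already-placed vertices.
        adj = {v: [] for v in range(n)}
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        side = {0: True}                  # vertex 0 goes into S
        for v in range(1, n):
            in_S = sum(1 for u in adj[v] if side.get(u) is True)
            in_T = sum(1 for u in adj[v] if side.get(u) is False)
            side[v] = in_S <= in_T        # join the side that cuts more edges
        cut_size = sum(1 for u, v in edges if side[u] != side[v])
        return {v for v in range(n) if side[v]}, cut_size

Each vertex cuts at least half of its edges to previously placed vertices, so at least
half of all edges end up crossing the cut.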


2.3.1 Randomized Rounding

We will now demonstrate the randomized rounding technique, taking MAXSAT as the
example. This will give us an alternate 2-approximation for MAXSAT.

The rounding procedure is as follows: for each variable x_i, independently set it to
true with probability 1/2 and to false otherwise.

For any clause C_j, the probability that it is not satisfied is exactly 1/2^{k_j},
where k_j is the number of literals in C_j. Let I_j be the indicator variable such that

    I_j = 1 if C_j is satisfied, and 0 otherwise.

Therefore, Pr[I_j = 1] = 1 − 1/2^{k_j}. If C denotes the number of satisfied clauses,
then C = ∑_{j=1}^m I_j. We thus have E[C] = ∑_{j=1}^m (1 − 1/2^{k_j}) ≥ m/2. So, in
expectation, we satisfy at least half the clauses. Observe that this is a randomized
procedure, and the above analysis gives a guarantee only on the expected number of
satisfied clauses. Both of these issues can be overcome: we can repeat this randomized
algorithm several times to get an assignment that performs nearly as well as a random
assignment does in expectation, and furthermore, the randomized algorithm can be
derandomized using the method of conditional expectations. For the purposes of this
course, however, we will be satisfied with randomized algorithms whose guarantees hold
only for the expected value of the output.

The following observation about the above trivial randomized algorithm will come in
useful: the algorithm performs better for wider clauses. More precisely, if all the
clauses have at least k literals, then E[C] ≥ m(1 − 1/2^k), and we thus have a
1/(1 − 2^{−k})-approximation algorithm.
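A minimal Python sketch of this vanilla randomized algorithm, with repetition to keep
the best of several random assignments; the clause encoding (a clause is a list of
non-zero integers, +i for x_i and −i for ¬x_i) is our choice.

    import random

    def random_assignment_maxsat(clauses, n, trials=100):
        # Each variable is set to true with probability 1/2, independently.
        # Repeating and keeping the best assignment concentrates the outcome
        # near (or above) the m/2 expectation bound.
        def satisfied(assignment):
            return sum(any((lit > 0) == assignment[abs(lit)] for lit in clause)
                       for clause in clauses)
        best, best_count = None, -1
        for _ in range(trials):
            assignment = {i: random.random() < 0.5 for i in range(1, n + 1)}
            count = satisfied(assignment)
            if count > best_count:
                best, best_count = assignment, count
        return best, best_count

For instance, random_assignment_maxsat([[1, -2], [2, 3], [-1, -3]], n=3) almost always
finds an assignment satisfying all three clauses.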


2.4 Heavy hammers for approximation – Linear Programming

We will now see how to use heavy hammers such as LP-solvers towards approximation
algorithms.

2.4.1 Linear Programs (LPs) and LP-solvers

Recall a linear program from Lecture 1:

    maximize    cᵀx
    subject to  Ax ≤ b,

where c ∈ R^n, b ∈ R^m and A ∈ R^{m×n}. By now, we know that such linear programs can
be solved in polynomial time using either the interior-point method [Karmarkar '84] or
the ellipsoid method [Khachiyan '79]. More precisely, it is known that LPs can be
solved in time polynomial in n, m and ℓ, where ℓ is the number of bits required to
represent the entries of A, b and c. We will use these LP-solvers as black boxes while
designing approximation algorithms. In other words, we will frame the given problem, or
rather a relaxation of it, as an LP, solve the LP, and then perform some rounding to
obtain an integral solution.

2.4.2 LP-based approximation algorithm for MAXSAT

We first restate MAXSAT as an integer program (IP) and then relax it to an LP.

IP for MAXSAT:

• For each variable x_i we have a 0-1 variable z_i, which is supposed to indicate
  whether x_i is true, i.e., z_i = 1 iff x_i = true.
• For each clause C_j we have a 0-1 variable y_j, which is supposed to indicate whether
  C_j is satisfied, i.e., y_j = 1 iff C_j is satisfied.
• Constraints on the y_j's and z_i's check that y_j is 1 only if one of the literals in
  C_j is true. Instead of stating these in general, let us take a typical clause, say
  C_2 = (x_1 ∨ x_2 ∨ ¬x_3); it will be easy to see that this extends to all clauses.
  For C_2, we impose the constraint E_2: y_2 ≤ z_1 + z_2 + (1 − z_3). This ensures that
  y_2 is 1 only if one of z_1, z_2, (1 − z_3) is 1 (and, since we maximize ∑_j y_j, the
  optimum sets y_2 = 1 whenever the clause is satisfied).

Thus, the IP is as follows:

    max ∑_j y_j
    subject to  constraint E_j,   ∀j ∈ [m]
                z_i ∈ {0, 1},     ∀i ∈ [n]
                y_j ∈ {0, 1},     ∀j ∈ [m]

Notice that, except for the last two 0-1 constraints, all the constraints are linear.
We now relax the 0-1 constraints to linear ones:

    max ∑_j y_j
    subject to  constraint E_j,   ∀j ∈ [m]
                z_i ∈ [0, 1],     ∀i ∈ [n]
                y_j ∈ [0, 1],     ∀j ∈ [m]

Now that we have relaxed the original problem to an LP, we can solve it by running an
LP-solver to obtain the LP optimum OPT_LP.

Observe that the solution space has only increased with this relaxation: any feasible
integral solution to the original IP is a feasible solution to the LP. Hence, we call
this program an LP relaxation of the original IP. Furthermore, since it is a
relaxation, the LP optimum is at least the IP optimum (i.e., OPT_LP ≥ OPT). However,
the LP optimum need not be attained by an integral solution. The worst-case ratio
between the LP optimal fractional solution and the best integral solution is called the
integrality gap of the relaxation:

    integrality gap = max over instances I of (LP fractional optimum / integral optimum).

Recall that while designing an approximation algorithm, we compare the quality of the
algorithm against an easily computable upper bound on the actual optimum (a lower bound
in the case of minimization problems). The LP optimum is a natural candidate for such
an upper bound. In fact, in almost all LP-relaxation-based approximation algorithms, we
will compare the quality of the algorithm against the true optimum by comparing it
against the LP optimum.

Rounding: Though we have solved the LP, we have not yet solved the original problem, or
even an approximation of it, since the LP optimum may give a solution in which some of
the z_i, y_j are fractional rather than 0-1. We need to design a rounding mechanism
that obtains integral solutions from fractional ones. There are two natural rounding
schemes.

Deterministic rounding: For each variable x_i, set x_i to true if z_i > 0.5 and to
false otherwise.

Randomized rounding: For each variable x_i independently, set x_i to true with
probability z_i and to false with probability 1 − z_i.

We will use the latter rounding scheme for this problem (a code sketch follows). Let us
now analyse the quality of this rounding by comparing its expected output to the LP
optimum.
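As an illustration, here is a minimal end-to-end sketch of the LP relaxation followed
by randomized rounding, using SciPy's linprog as the black-box LP-solver; the clause
encoding matches the earlier sketches, and the choice of SciPy is our assumption, not
something prescribed by the notes.

    import random
    import numpy as np
    from scipy.optimize import linprog

    def lp_round_maxsat(clauses, n):
        # Variables: z_1..z_n, then y_1..y_m, all constrained to [0, 1].
        # For each clause C_j we impose
        #   y_j <= sum_{+i in C_j} z_i + sum_{-i in C_j} (1 - z_i),
        # rewritten as  y_j - sum_{+i} z_i + sum_{-i} z_i <= (# negated literals).
        m = len(clauses)
        c = np.concatenate([np.zeros(n), -np.ones(m)])   # minimize -sum_j y_j
        A, b = np.zeros((m, n + m)), np.zeros(m)
        for j, clause in enumerate(clauses):
            A[j, n + j] = 1.0                 # coefficient of y_j
            for lit in clause:
                i = abs(lit) - 1
                if lit > 0:
                    A[j, i] -= 1.0            # -z_i
                else:
                    A[j, i] += 1.0            # +z_i; the constant 1 moves
                    b[j] += 1.0               # from (1 - z_i) to the RHS
        res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 1)] * (n + m))
        assert res.success
        z = res.x[:n]
        # randomized rounding: x_i is set to true with probability z_i
        assignment = {i + 1: random.random() < z[i] for i in range(n)}
        return assignment, -res.fun           # rounded assignment, OPT_LP

The second return value, OPT_LP, is the upper bound against which the expected
performance of the rounded assignment is measured in the analysis below.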


Analysis: For ease of explanation, we will do the analysis only for monotone formulae,
that is, formulae in which the literals in each clause appear in uncomplemented form;
it is easy to see that the analysis extends to all formulae. Consider any clause, say
C_j = x_{i_1} ∨ x_{i_2} ∨ · · · ∨ x_{i_{k_j}}, with k_j literals. The probability that
C_j is not satisfied is exactly ∏_{l=1}^{k_j} (1 − z_{i_l}). Thus, the expected number
of satisfied clauses is

    E[number of satisfied clauses] = ∑_{j=1}^m Pr[C_j is satisfied]
                                   = ∑_{j=1}^m [ 1 − ∏_{l=1}^{k_j} (1 − z_{i_l}) ].

On the other hand, the LP optimum is ∑_{j=1}^m y_j = ∑_{j=1}^m min{∑_{l=1}^{k_j} z_{i_l}, 1}.
We will compare the rounded solution to the LP optimum by comparing each term in one
summation against the corresponding term in the other, i.e., we would like to determine
the worst possible ratio between

    1 − ∏_{l=1}^{k_j} (1 − z_{i_l})   and   y_j = min{ 1, ∑_{l=1}^{k_j} z_{i_l} }.

The ratio is minimized when all the z_{i_l} are equal, i.e., z_{i_l} = 1/k_j, in which
case the ratio is 1 − (1 − 1/k_j)^{k_j} ≥ 1 − e^{−1} (see Appendix A for the detailed
analysis). Thus, the worst-case ratio between the expected number of satisfied clauses
and the LP optimum is at least 1 − e^{−1}, and this gives us a
1/(1 − e^{−1}) ≈ 1.582-approximation algorithm.

Observe that the ratio 1 − e^{−1} is approached only for very large k. For smaller k,
the ratio 1 − (1 − 1/k)^k is much better. Thus, in contrast to the vanilla rounding
discussed in Section 2.3.1, the LP-based approximation performs well when the clauses
are small. Thus, if all the clauses are narrow, it is better to perform the LP-based
approximation, while if all the clauses are wide, it is better to perform the vanilla
rounding algorithm. What about when we have clauses of both types? In the next section,
we will see that the simple algorithm of choosing the better of these two algorithms
yields a 4/3 ≈ 1.333-approximation.
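A quick numerical look (our illustration) at the two per-clause guarantees as a
function of the clause width k; the last column previews the 3/4 bound that drives the
best-of-two algorithm of the next section.

    # vanilla rounding satisfies a k-literal clause with probability 1 - 2^{-k};
    # LP-based rounding earns at least (1 - (1 - 1/k)^k) * y_j per clause.
    # Their average is 3/4 at k = 1, 2 and strictly above 3/4 for larger k.
    for k in range(1, 8):
        vanilla = 1 - 2.0 ** (-k)
        lp_based = 1 - (1 - 1.0 / k) ** k
        print(f"k={k}: vanilla={vanilla:.3f}  lp={lp_based:.3f}  "
              f"avg={(vanilla + lp_based) / 2:.3f}")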


2.5 MAXSAT: Best of Vanilla Rounding and LP-based Rounding

Consider the algorithm which, given a MAXSAT instance, performs both the vanilla
rounding algorithm and the LP-based rounding algorithm, and chooses the better of the
two. We can analyse this algorithm as follows (the third step uses y_j ≤ 1, and the
per-clause bounds are those established above):

    E[max(vanilla, LP-based)]
      ≥ (1/2) (E[vanilla] + E[LP-based])
      ≥ ∑_{j∈[m]} [ (1/2)(1 − 1/2^{k_j}) + (1/2)(1 − (1 − 1/k_j)^{k_j}) y_j ]
      ≥ ∑_{j∈[m]} [ (1/2)(1 − 1/2^{k_j}) y_j + (1/2)(1 − (1 − 1/k_j)^{k_j}) y_j ]
      = ∑_{j∈[m]} [ 1 − 1/2^{k_j+1} − (1/2)(1 − 1/k_j)^{k_j} ] y_j
      ≥ (3/4) ∑_j y_j = (3/4) OPT_LP.

The last inequality follows since 1 − 1/2^{k_j+1} − (1/2)(1 − 1/k_j)^{k_j} is equal to
3/4 for k_j = 1, 2 and strictly greater than 3/4 for all other k_j. We thus have a
4/3-approximation for MAXSAT. Such an approximation was first obtained by Yannakakis,
but the above algorithm is due to Goemans and Williamson.

References

[HC09] Prahladh Harsha and Moses Charikar. Limits of approximation algorithms: PCPs
and unique games, 2009. DIMACS Tutorial, July 20-21, 2009. arXiv:1002.3864.

[Sud99] Madhu Sudan. 6.893: Approximability of optimization problems, 1999. A course
on approximability of optimization problems at MIT, Fall 1999.

[Vaz04] Vijay V. Vazirani. Approximation Algorithms. Springer, 2004.

A Appendix

We show that

    (1 − ∏_{l=1}^k (1 − z_{i_l})) / y_j ≥ 1 − (1 − 1/k)^k,

where y_j = min{1, ∑_l z_{i_l}}. The two cases to consider are as follows.


• Suppose y_j = 1. Then ∑_l z_{i_l} ≥ 1, and therefore

      ∑_l z_{i_l} / k ≥ 1/k
    ⇒ (1 − (∑_l z_{i_l})/k)^k ≤ (1 − 1/k)^k
    ⇒ 1 − (1 − (∑_l z_{i_l})/k)^k ≥ 1 − (1 − 1/k)^k
    ⇒ 1 − ∏_{l=1}^k (1 − z_{i_l}) ≥ 1 − (1 − 1/k)^k,

  where the last step uses the AM-GM inequality, ∏ a_l ≤ ((∑ a_l)/k)^k, with
  a_l = 1 − z_{i_l}, giving ∏_l (1 − z_{i_l}) ≤ ((∑_l (1 − z_{i_l}))/k)^k
  = (1 − (∑_l z_{i_l})/k)^k.

• Suppose y_j = ∑_l z_{i_l}. Then ∑_l z_{i_l} < 1. The function f(x) = 1 − (1 − x/k)^k
  is concave in the range 0 ≤ x ≤ 1 (reason: f′′(x) = −((k−1)/k)(1 − x/k)^{k−2} is
  negative in this range). The line g(x) = x(1 − (1 − 1/k)^k) has the same value as f
  at x = 0 and x = 1 (i.e., it is a secant line). Hence, by the definition of concave
  functions, 1 − (1 − x/k)^k ≥ x(1 − (1 − 1/k)^k) on this range. Setting
  x = ∑_l z_{i_l}, we get 1 − (1 − (∑_l z_{i_l})/k)^k ≥ (∑_l z_{i_l})(1 − (1 − 1/k)^k).
  Applying the AM-GM inequality as above, we get
  (1 − ∏_l (1 − z_{i_l})) / ∑_l z_{i_l} ≥ 1 − (1 − 1/k)^k.
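As a quick numerical sanity check (ours, not part of the original notes), the
inequality can be tested on random inputs:

    import random

    # check  (1 - prod_l (1 - z_l)) / min(1, sum_l z_l)  >=  1 - (1 - 1/k)^k
    for _ in range(100000):
        k = random.randint(1, 6)
        z = [random.random() for _ in range(k)]
        if sum(z) == 0:
            continue
        prod = 1.0
        for z_l in z:
            prod *= 1.0 - z_l
        lhs = (1.0 - prod) / min(1.0, sum(z))
        rhs = 1.0 - (1.0 - 1.0 / k) ** k
        assert lhs >= rhs - 1e-9, (k, z)
    print("inequality holds on all sampled points")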
