
9.2 Approximation algorithms

In an optimization problem we are given an instance I and are asked to find the optimum solution—the one with the maximum gain if we have a maximization problem like INDEPENDENT SET, or the minimum cost if we are dealing with a minimization problem such as the TSP. For every instance I, let us denote by OPT(I) the value (benefit or cost) of the optimum solution. It makes the math a little simpler (and is not too far from the truth) to assume that OPT(I) is always a positive integer.

We have already seen an example of a (famous) approximation algorithm in Section 5.4: the greedy scheme for SET COVER. For any instance I of size n, we showed that this greedy algorithm is guaranteed to quickly find a set cover of cardinality at most OPT(I) · log n. This log n factor is known as the approximation guarantee of the algorithm.

More generally, consider any minimization problem. Suppose now that we have an algorithm A for our problem which, given an instance I, returns a solution with value A(I). The approximation ratio of algorithm A is defined to be

    α_A = max_I  A(I) / OPT(I).

In other words, α_A measures the factor by which the output of algorithm A exceeds the optimal solution, on the worst-case input. The approximation ratio can also be defined for maximization problems, such as INDEPENDENT SET, in the same way—except that to get a number larger than 1 we take the reciprocal.
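To make the definition concrete, here is a small illustrative computation (the instance values are hypothetical; the true α_A is a worst case over all instances, not just a sample):

```python
# Hypothetical (A(I), OPT(I)) pairs for a few minimization instances.
# The empirical ratio below only lower-bounds the true worst-case alpha.
instances = [(12, 10), (7, 7), (30, 20)]

alpha = max(a / opt for a, opt in instances)
print(alpha)  # 1.5 -- the worst factor among these sample instances
```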

So, when faced with an NP-complete optimization problem, a reasonable goal is to look for an approximation algorithm A whose α_A is as small as possible. But this kind of guarantee might seem a little puzzling: How can we come close to the optimum if we cannot determine the optimum? Let’s look at a simple example.

9.2.1 Vertex cover

We already know the VERTEX COVER problem is NP-hard.

VERTEX COVER
Input: An undirected graph G = (V, E).
Output: A subset of the vertices S ⊆ V that touches every edge.
Goal: Minimize |S|.

See Figure 9.3 for an example.

Since VERTEX COVER is a special case of SET COVER, we know from Chapter 5 that it can be approximated within a factor of O(log n) by the greedy algorithm: repeatedly delete the vertex of highest degree and include it in the vertex cover. And there are graphs on which the greedy algorithm returns a vertex cover that is indeed log n times the optimum.
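The greedy rule can be sketched as follows (an illustrative Python sketch, not from the text; the function name and the edge-list input format are my own choices):

```python
def greedy_vertex_cover(edges):
    """Greedy SET COVER specialized to VERTEX COVER: repeatedly take the
    vertex touching the most still-uncovered edges (O(log n) approximation)."""
    uncovered = set(edges)
    cover = []
    while uncovered:
        # Count, for each vertex, how many uncovered edges it touches.
        degree = {}
        for u, v in uncovered:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        best = max(degree, key=degree.get)   # highest-degree vertex
        cover.append(best)
        uncovered = {e for e in uncovered if best not in e}
    return cover
```

On a star graph, for instance, the center is picked first and covers everything at once.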

A better approximation algorithm for VERTEX COVER is based on the notion of a matching, a subset of edges that have no vertices in common (Figure 9.4). A matching is maximal if no more edges can be added to it. Maximal matchings will help us find good vertex covers, and moreover, they are easy to generate: repeatedly pick edges that are disjoint from the ones chosen already, until this is no longer possible.
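This edge-picking procedure can be sketched as follows (an illustrative Python sketch; the function name and edge-list representation are assumptions). Taking both endpoints of every matched edge yields a vertex cover, and a standard argument bounds its size by twice the optimum, since any cover must use at least one endpoint of each matching edge:

```python
def matching_vertex_cover(edges):
    """Grow a maximal matching greedily, then return both endpoints of
    every matched edge -- a vertex cover of size at most 2 * OPT."""
    matched = set()  # vertices already used by the matching
    for u, v in edges:
        if u not in matched and v not in matched:  # edge disjoint from matching
            matched.add(u)
            matched.add(v)
    return matched   # every edge touches some matched vertex, by maximality
```

Any edge missed by the output would have both endpoints unmatched, so it could have been added to the matching—contradicting maximality.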

