
Chapter 2. Definitions

For most of the NP-complete optimization problems there is no hope of finding a solution with constant absolute error. Therefore this measure is seldom used. Still, there are a few NPO problems which have approximation algorithms with a constant absolute error guarantee; see Section 8.3.

Orponen and Mannila have generalized these measures and defined a general measure of approximation quality.

Definition 2.6 [83] A general measure of approximation quality of a feasible solution of an NPO problem F is a function E_F which, for each x ∈ I_F, satisfies E_F(x, y) ≥ 0 if and only if y ∈ S_F(x), and E_F(x, y) = 0 if and only if m_F(x, y) = opt_F(x). We say that the measure is cost-respecting if

m_F(x, y_1) ≤ m_F(x, y_2) ⇒ E_F(x, y_1) ≥ E_F(x, y_2)   if opt_F = max,
m_F(x, y_1) ≤ m_F(x, y_2) ⇒ E_F(x, y_1) ≤ E_F(x, y_2)   if opt_F = min.

All the measures defined above are cost-respecting.
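As a quick check, assume (as earlier in this chapter) that the relative error of a feasible solution is defined as E^r_F(x, y) = |opt_F(x) − m_F(x, y)| / opt_F(x); the definition is only recalled here for the illustration. For a maximization problem every feasible y has m_F(x, y) ≤ opt_F(x), so

m_F(x, y_1) ≤ m_F(x, y_2) ⇒ (opt_F(x) − m_F(x, y_1)) / opt_F(x) ≥ (opt_F(x) − m_F(x, y_2)) / opt_F(x) ⇒ E^r_F(x, y_1) ≥ E^r_F(x, y_2),

so the relative error is indeed cost-respecting; the minimization case is symmetric.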

When analyzing an approximation algorithm the most common way to state the approximability is to give the performance ratio.

Definition 2.7 The performance ratio of a feasible solution with respect to the optimum of an NPO problem F is defined as

R_F(x, y) = opt_F(x) / m_F(x, y)   if opt_F = max,
R_F(x, y) = m_F(x, y) / opt_F(x)   if opt_F = min,

where x ∈ I_F and y ∈ S_F(x).

The performance ratio and the relative error are obviously related in the following way:

R_F(x, y) = 1 / (1 − E^r_F(x, y)) = Σ_{i=0}^∞ (E^r_F(x, y))^i   if opt_F = max,
R_F(x, y) = 1 + E^r_F(x, y)                                     if opt_F = min.
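A small numerical check (purely illustrative, not taken from the text): for a maximization instance with opt_F(x) = 100 and a feasible solution y with m_F(x, y) = 80,

E^r_F(x, y) = (100 − 80) / 100 = 0.2   and   R_F(x, y) = 100 / 80 = 1.25 = 1 / (1 − 0.2),

in agreement with the relation above. For a minimization instance with opt_F(x) = 80 and m_F(x, y) = 100 one gets E^r_F(x, y) = 20 / 80 = 0.25 and R_F(x, y) = 100 / 80 = 1.25 = 1 + 0.25.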

Definition 2.8 We say that an optimization problem F can be approximated within p for a constant p if there exists a polynomial time algorithm A such that for all instances x ∈ I_F, A(x) ∈ S_F(x) and R_F(x, A(x)) ≤ p.
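A classical illustration (standard material, not taken from this section): Minimum Vertex Cover can be approximated within 2. Compute any maximal matching M in polynomial time and output the set y of all matched vertices. Every edge has an endpoint in y, so y is a feasible cover, and any cover must contain at least one endpoint of each of the |M| pairwise disjoint edges of M, so

m_F(x, y) = 2|M| ≤ 2 · opt_F(x),   hence   R_F(x, y) ≤ 2.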

For a few problems it is more interesting to consider the performance ratio just for input instances with large optimal value. This asymptotic behaviour is covered by the following definition.

Definition 2.9 We say that an optimization problem F can be approximated asymptotically within p for a constant p if there exists a polynomial time algorithm A and a positive constant N such that for all instances x ∈ I_F with opt_F(x) ≥ N, A(x) ∈ S_F(x) and R_F(x, A(x)) ≤ p.
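For example, an approximation algorithm with constant absolute error, as mentioned at the beginning of this section, immediately gives good asymptotic behaviour. Assuming a minimization problem and an algorithm A with m_F(x, A(x)) ≤ opt_F(x) + c for a constant c (a generic sketch, not a specific result from this section),

R_F(x, A(x)) = m_F(x, A(x)) / opt_F(x) ≤ 1 + c / opt_F(x) ≤ 1 + c / N   whenever opt_F(x) ≥ N,

so such a problem can be approximated asymptotically within 1 + ε for every ε > 0 by choosing N ≥ c/ε.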

Definition 2.10 The best performance ratio R[F] for an optimization problem F is defined as

R[F] = inf {p : F can be approximated within p}.
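For instance, the matching sketch after Definition 2.8 gives the bound (an illustration, not a statement from this section)

R[Minimum Vertex Cover] ≤ 2.

Note that the infimum need not be attained by any single polynomial time algorithm, which is presumably why the definition uses an infimum rather than a minimum.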
