
Note that in this example the number of feasible solutions in F is uncountable. So why does this problem qualify as a discrete optimization problem? The answer is that F defines a feasible set that corresponds to the convex hull of a finite number of vertices. It is not hard to see that if we optimize a linear function over a convex hull, then there always exists an optimal solution that is a vertex. We can thus equivalently formulate the problem as finding a vertex x of the convex hull defined by F that minimizes c(x).
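One way to see this: every point of the convex hull is a convex combination of its vertices v_1, ..., v_k, and linearity of c then bounds c(x) from below by the best vertex value:

```latex
x = \sum_{i=1}^{k} \lambda_i v_i, \quad \lambda_i \ge 0, \quad \sum_{i=1}^{k} \lambda_i = 1
\;\Longrightarrow\;
c(x) = \sum_{i=1}^{k} \lambda_i \, c(v_i) \;\ge\; \min_{1 \le i \le k} c(v_i).
```

Hence the minimum of c over the convex hull is attained at some vertex.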

1.2 Algorithms and Efficiency

Intuitively, an algorithm for an optimization problem Π is a sequence of instructions specifying a computational procedure that solves every given instance I of Π. Formally, the computational model underlying all our considerations is that of a Turing machine (which we will not define formally here).

A main focus of this course is on efficient algorithms. Here, efficiency refers to the overall running time of the algorithm. We do not care about the actual running time (in terms of minutes, seconds, etc.), but rather about the number of basic operations. Certainly, there are different ways to represent the overall running time of an algorithm. The one that we will use here (and which is widely used in the algorithms community) is the so-called worst-case running time. Informally, the worst-case running time of an algorithm measures its running time on the worst possible input instance (of a given size).
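As a small illustration (not from the notes; linear search serves as a stand-in algorithm, with element comparisons as the basic operation): on inputs of size n, the worst-case running time of linear search is n comparisons, even though many instances are resolved much faster.

```python
def linear_search(items, target):
    """Return (index, comparison count); index is -1 if target is absent."""
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1          # count one basic operation per element inspected
        if x == target:
            return i, comparisons
    return -1, comparisons

n = 1000
instance = list(range(n))
print(linear_search(instance, 0))    # easiest instance of size n: (0, 1)
print(linear_search(instance, -1))   # worst instance of size n: (-1, 1000)
```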

There are at least two advantages to assessing an algorithm's performance by means of its worst-case running time. First, it is usually rather easy to estimate. Second, it provides a very strong performance guarantee: the algorithm is guaranteed to compute a solution to every instance (of a given size) using no more than the stated number of basic operations. On the downside, the worst-case running time of an algorithm might be an overly pessimistic estimate of its actual running time. In that case, assessing the performance of an algorithm by its average-case running time or its smoothed running time might be a suitable alternative.

Usually, the running time of an algorithm is expressed as a function of the size of the input instance I. Note that a priori it is not clear what is meant by the size of I, because there are different ways to represent (or encode) an instance.

Example 1.1. Many optimization problems have a graph as input. Suppose we are given an undirected graph G = (V, E) with n nodes and m edges. One way of representing G is by its n × n adjacency matrix A = (a_ij) with a_ij = 1 if (i, j) ∈ E and a_ij = 0 otherwise. The size needed to represent G by its adjacency matrix is thus n². Another way to represent G is by its adjacency lists: for every node i ∈ V, we maintain the set L_i ⊆ V of nodes that are adjacent to i in a list. Note that each edge occurs on two adjacency lists. The size to represent G by adjacency lists is n + 2m.
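The two representations and their sizes can be sketched in a few lines of Python (a minimal sketch; the graph below is an arbitrary example, not one from the notes):

```python
n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]        # m = 4 undirected edges

# Adjacency matrix: n * n entries, independent of the number of edges.
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1
matrix_size = n * n                              # n^2 = 16

# Adjacency lists: each edge {i, j} appears on the lists of both i and j.
L = [[] for _ in range(n)]
for i, j in edges:
    L[i].append(j)
    L[j].append(i)
list_size = n + sum(len(Li) for Li in L)         # n + 2m = 4 + 8 = 12
print(matrix_size, list_size)                    # 16 12
```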

The above example illustrates that the size of an instance depends on the underlying data structure that is used to represent the instance. Depending on the kind of operations that an algorithm uses, one representation might be more efficient than the other. For example, checking whether two given nodes are adjacent takes constant time with the adjacency matrix, whereas with adjacency lists it may take time proportional to the degree of a node.
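To make this trade-off concrete, here is a minimal, self-contained Python sketch of the adjacency check under both representations (an illustration, not code from the notes):

```python
n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

# Build both representations of the same undirected graph.
A = [[0] * n for _ in range(n)]                  # adjacency matrix
L = [[] for _ in range(n)]                       # adjacency lists
for i, j in edges:
    A[i][j] = A[j][i] = 1
    L[i].append(j)
    L[j].append(i)

# Adjacency check: one array access with the matrix (constant time) ...
print(A[1][3] == 1)                              # False
# ... versus a scan of node 1's list (time proportional to its degree).
print(3 in L[1])                                 # False
```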

