

On the other hand, the number of operations we need is roughly $O(n^3)$: for each of the $O(n^2)$ pairs of rows, we need $n$ additions (or subtractions) and $n$ multiplications. Therefore, the number of operations is a polynomial function of the input size.
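To make the count concrete, here is a minimal Python sketch (not from the notes; the instance data is made up) that performs forward elimination and tallies the operations:

```python
def forward_elimination_op_count(A):
    """Forward elimination on a dense n x n matrix, counting operations.

    A sketch for illustration: eliminates below each pivot and tallies
    the multiplications (including divisions) and additions/subtractions.
    """
    n = len(A)
    mults = adds = 0
    for k in range(n):                      # pivot column
        for i in range(k + 1, n):           # rows below the pivot
            factor = A[i][k] / A[k][k]      # assumes a nonzero pivot
            mults += 1
            for j in range(k, n):           # one row update: ~n mults, ~n subs
                A[i][j] -= factor * A[k][j]
                mults += 1
                adds += 1
    return mults, adds

# roughly n^3/3 multiplications for an n x n system
print(forward_elimination_op_count([[2.0, 1.0, 1.0],
                                    [4.0, 3.0, 3.0],
                                    [8.0, 7.0, 9.0]]))
```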

However, there is an important issue we need to clarify here. When we do complexity analysis, we must be careful if multiplications and divisions are used. While additions, subtractions, and comparisons do not cause this problem, multiplications and divisions may increase the size of the numbers. For example, after multiplying two numbers $a$ and $b$, we need $\lceil \log_2 ab \rceil \le \lceil \log_2 a \rceil + \lceil \log_2 b \rceil$ bits to represent the single number $ab$. If we further multiply $ab$ by another number, we may need even more bits to represent the result. This potentially exponential growth in the length of the numbers may cause two problems:

• The length of the number may go beyond the storage limit of a computer.

• It actually takes more time to do operations on “long” numbers.

In short, whenever we have multiplications and divisions in our algorithms, we must make sure that the length of the numbers does not grow exponentially, so that the algorithm remains polynomial-time, as we desire.
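A quick way to see this growth (an illustrative sketch, not from the notes): repeated squaring roughly doubles the bit length at every step, so after $k$ multiplications the number occupies on the order of $2^k$ bits.

```python
x = 3
for step in range(6):
    x = x * x                 # one multiplication per step
    # bit_length() reports how many bits the current value occupies
    print(f"after squaring {step + 1} time(s): {x.bit_length()} bits")
```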

For Gaussian elimination, the book by Schrijver presents Edmonds' proof (Theorem 3.3) that, at each iteration of Gaussian elimination, the numbers grow only polynomially (by a factor of at most 4 with each operation). With this in mind, we can conclude that Gaussian elimination is a polynomial-time algorithm.
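One way to observe this behavior empirically (a sketch for illustration, not Edmonds' proof): run elimination over exact rationals, which Python's Fraction keeps in lowest terms, and track the largest bit length appearing after each pivot step.

```python
from fractions import Fraction

def eliminate_and_track(A):
    """Gaussian elimination over exact rationals, tracking entry sizes.

    An illustrative sketch: Fraction keeps every entry in lowest terms,
    and the reported bit lengths stay polynomially bounded, in line with
    Edmonds' theorem cited above.
    """
    A = [[Fraction(v) for v in row] for row in A]
    n = len(A)
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]      # assumes a nonzero pivot
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
        bits = max(abs(e.numerator).bit_length() + e.denominator.bit_length()
                   for row in A for e in row)
        print(f"after pivot {k + 1}: largest entry uses {bits} bits")

eliminate_and_track([[2, 1, 1], [4, 3, 3], [8, 7, 9]])
```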

7.4 Linear Programming

Given an $m \times n$ matrix $A$, an $m \times 1$ vector $b$, and an $n \times 1$ vector $c$, the linear program to solve is

\[
\begin{aligned}
\max \quad & c^T x \\
\text{s.t.} \quad & Ax = b, \\
& x \ge 0.
\end{aligned}
\]
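As an illustration (a sketch; SciPy is not mentioned in the notes, and the instance data is invented), this standard form maps directly onto scipy.optimize.linprog. Since linprog minimizes, we maximize $c^T x$ by minimizing $(-c)^T x$:

```python
import numpy as np
from scipy.optimize import linprog

# A small made-up instance of  max c^T x  s.t.  Ax = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])
b = np.array([4.0, 5.0])
c = np.array([3.0, 2.0, 1.0])

# linprog minimizes, so negate c; the default variable bounds
# (0, None) already enforce x >= 0.
res = linprog(-c, A_eq=A, b_eq=b)
print("optimal x:", res.x, "optimal value:", -res.fun)
```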

With the parameters $A$, $b$, and $c$, the input size is bounded below by

\[
mn + \sum_{i=1}^{m} \sum_{j=1}^{n} \lceil \log_2 a_{ij} \rceil
   + \sum_{i=1}^{m} \lceil \log_2 b_i \rceil
   + \sum_{j=1}^{n} \lceil \log_2 c_j \rceil.
\]
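For concreteness, a small sketch (the function name and data are illustrative only) that evaluates this lower bound, assuming all entries are positive integers so that $\lceil \log_2 x \rceil$ is well defined:

```python
import math

def encoding_size_lower_bound(A, b, c):
    """Evaluate mn + the sums of ceil(log2 .) terms from the bound above."""
    m, n = len(A), len(A[0])
    bits = lambda x: math.ceil(math.log2(x))  # assumes x is a positive integer
    return (m * n
            + sum(bits(A[i][j]) for i in range(m) for j in range(n))
            + sum(bits(b[i]) for i in range(m))
            + sum(bits(c[j]) for j in range(n)))

print(encoding_size_lower_bound([[2, 3], [5, 7]], [4, 9], [6, 8]))
```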

Now let's consider the complexity of some algorithms for linear programming. As we already know, the simplex method may need to go through almost all basic feasible solutions on some instances. This makes the simplex method an exponential-time algorithm in the worst case. On the other hand, the ellipsoid method has been proved to be a polynomial-time algorithm for linear programming.

Let $n$ be the number of variables and

\[
L = \log_2 \Big( \max\{\det B \mid B \text{ is a submatrix of } A\} \Big);
\]

it has been shown that the complexity of the ellipsoid method is $O(n^6 L^2)$.
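As a rough illustration (a brute-force sketch that is only feasible for tiny integer matrices, and that takes the absolute value of the determinant, a detail the notes leave implicit), $L$ can be computed by enumerating square submatrices:

```python
import itertools
import math

def det(M):
    # Determinant by cofactor expansion along the first row
    # (fine for the tiny matrices this sketch is meant for).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def ellipsoid_L(A):
    # L = log2 of the largest |det B| over all square submatrices B of A.
    # Exponentially many submatrices, so this is purely illustrative;
    # assumes A is an integer matrix with at least one nonzero entry.
    m, n = len(A), len(A[0])
    best = 0
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                B = [[A[i][j] for j in cols] for i in rows]
                best = max(best, abs(det(B)))
    return math.log2(best)

print(ellipsoid_L([[2, 1], [1, 3]]))  # largest |det| is 5, so L = log2(5)
```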

It is worth mentioning that in practice we still prefer the simplex method to the ellipsoid method, even though the former is theoretically inefficient and the latter is theoretically efficient: in practice, the simplex method is usually faster than the ellipsoid method. Also note that the complexity of the ellipsoid method depends on the values in $A$. In other words, if we keep the numbers of variables and constraints the same but change the coefficients, we may end up with a different running time. This does not happen with the simplex method.

