
4 An Introduction to Integer and Large-Scale Linear Optimization

\[
\max_{\pi} \; z_{IP}(\pi). \tag{4.31}
\]

Observe that because $z_{IP}(\pi)$ is determined by means of the inner minimization problem (4.30), problem (4.31) is a maximin problem (maximization of a minimum objective). For linear programming problems, such as (4.25), there will exist a $\pi = \pi^{\star}$ such that $z_{IP}(\pi^{\star}) = z_P$ (assuming again that the primal problem attains its optimal value $z_P$). We say that there is no duality gap for these problems. For integer and/or nonlinear programming problems, duality gaps may indeed exist.
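As a small illustration of how such a gap can arise, consider the following instance (constructed here for concreteness; it is not drawn from (4.25)):
\[
\min \{\, -x \;:\; x \le \tfrac{1}{2},\; x \in \{0,1\} \,\},
\]
whose optimal value is $0$, since $x = 0$ is the only feasible point. Relaxing the constraint $x \le \tfrac{1}{2}$ with a multiplier $\lambda \ge 0$ gives
\[
z(\lambda) = \min_{x \in \{0,1\}} \Bigl[ -x + \lambda \bigl(x - \tfrac{1}{2}\bigr) \Bigr] = \min \Bigl\{ -\tfrac{\lambda}{2},\; -1 + \tfrac{\lambda}{2} \Bigr\},
\]
and maximizing over $\lambda \ge 0$ yields the value $-\tfrac{1}{2}$, attained at $\lambda = 1$. The best Lagrangian bound, $-\tfrac{1}{2}$, lies strictly below the integer optimal value $0$, so a duality gap of $\tfrac{1}{2}$ remains.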

In some situations, (4.31) can be solved directly. When direct solution of this Lagrangian dual problem is not possible, a very common technique for solving (4.31) is subgradient optimization, which exploits the fact that $z_{IP}(\pi)$ is a piecewise-linear, concave function of $\pi$. (In fact, when maximizing a concave function, one correctly refers to “supergradients” rather than subgradients, although both terms are used in the literature in this case.) We refer the reader to [6, 8] for excellent discussions of this technique, along with the classical text [28]. However, we provide a sketch of the subgradient optimization method here for completeness.
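To see why this structure holds, write the inner problem (4.30) generically (with $c^k$ denoting the objective coefficients of block $k$, and assuming each block has a finite feasible set, as in a bounded integer program):
\[
z_{IP}(\pi) \;=\; \min_{x^1, \ldots, x^{|K|}} \; \sum_{k \in K} c^{k\top} x^k \;+\; \pi^{\top} \Bigl( b - \sum_{k \in K} A^k x^k \Bigr).
\]
Each candidate solution $(x^1, \ldots, x^{|K|})$ contributes a function that is affine in $\pi$, and $z_{IP}(\pi)$ is the pointwise minimum of these finitely many affine functions; such a minimum is always piecewise-linear and concave.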

Step 0. Set the iteration counter $i = 0$, and choose an initial estimate, $\pi^0$, of the dual values $\pi$. (One often starts with $\pi^0 \equiv 0$.) Select a positive integer $R$ to be a maximum number of iterations, and $\varepsilon > 0$ to be an acceptable subgradient norm for algorithm termination. Initialize the best lower bound computed thus far as $LB = -\infty$. Continue to Step 1.

Step 1. Solve IP$(\pi^i)$, and obtain an optimal solution $(\hat{x}^{1,i}, \ldots, \hat{x}^{|K|,i})$. (Note that in (4.30), the $x^k$-vectors can be optimized in $|K|$ separable problems.) If $z_{IP}(\pi^i) > LB$, then set $LB = z_{IP}(\pi^i)$. (Note that because subgradients are being used in this procedure, we are not guaranteed that $z_{IP}(\pi^0), z_{IP}(\pi^1), \ldots$ will be a nondecreasing sequence of values.) Continue to Step 2.

Step 2. One subgradient of $z_{IP}(\pi^i)$ is given by $s^i = \bigl(b - \sum_{k \in K} A^k \hat{x}^{k,i}\bigr)$. If the (Euclidean) norm of $s^i$ is sufficiently small, then the current dual estimate is sufficiently close to optimal, and we stop. Thus, we terminate if $\|s^i\| < \varepsilon$. To control the maximum computational effort that we exert, we also terminate if $i = R$. Otherwise, continue to Step 3.

Step 3. Based on the subgradient computed in Step 2, we adjust our dual estimate by moving in the (normalized) direction of the subgradient, multiplied by a step-length parameter $\alpha^i$ that depends on the iteration. That is,
\[
\pi^{i+1} = \pi^i + \alpha^i \, \frac{s^i}{\|s^i\|}. \tag{4.32}
\]

Note that the step-length parameter $\alpha^i$ is a critical consideration in guaranteeing key convergence properties of the subgradient optimization algorithm; see [6, 8, 28]. Set $i = i + 1$, and return to Step 1.
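The following sketch assembles Steps 0 through 3 in code. It is a minimal illustration only: it assumes a primal of the generic block form $\min \sum_{k} c^{k\top} x^k$ subject to $\sum_k A^k x^k = b$ with each $x^k$ restricted to a small finite candidate set that is simply enumerated, it uses the diminishing step length $\alpha^i = 1/(i+1)$ as one admissible choice, and all function and variable names are illustrative rather than taken from the text.

```python
import numpy as np

def subgradient_method(c, A, b, X, R=100, eps=1e-6):
    """Lagrangian dual via subgradient optimization (Steps 0-3).

    c : list of cost vectors c^k (numpy arrays)
    A : list of coupling-constraint matrices A^k (numpy arrays)
    b : right-hand side of the relaxed constraint sum_k A^k x^k = b
    X : list of finite candidate sets; X[k] is a list of feasible x^k vectors
    """
    pi = np.zeros(len(b))        # Step 0: initial dual estimate pi^0 = 0
    LB = -np.inf                 # best Lagrangian lower bound found so far

    for i in range(R):           # stop after at most R iterations
        # Step 1: solve the inner problem at pi^i; it separates over k,
        # so each block is minimized independently (here by enumeration).
        x_hat = [min(Xk, key=lambda x: ck @ x - pi @ (Ak @ x))
                 for ck, Ak, Xk in zip(c, A, X)]
        z = (sum(ck @ xk for ck, xk in zip(c, x_hat))
             + pi @ (b - sum(Ak @ xk for Ak, xk in zip(A, x_hat))))
        LB = max(LB, z)          # the z-values need not be monotone

        # Step 2: one subgradient of z_IP at pi^i.
        s = b - sum(Ak @ xk for Ak, xk in zip(A, x_hat))
        if np.linalg.norm(s) < eps:
            break                # dual estimate is (near) optimal

        # Step 3: step along the normalized subgradient with a
        # diminishing step length (one simple admissible choice).
        pi = pi + (1.0 / (i + 1)) * s / np.linalg.norm(s)

    return pi, LB
```

A diminishing, nonsummable step length such as $1/(i+1)$ guarantees that the best dual value found converges to the optimum; practical implementations often use more aggressive rules, such as Polyak-type steps based on a target objective value.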

Remark 4.6. There are many computational and theoretical issues that arise in subgradient optimization that are beyond the scope of an introductory chapter such as this. Some of the finer points that are particularly important regard the dual updat-
