
where $\lambda_i$ is the Lagrange multiplier of the $i$-th equality constraint.

If $f$ is pseudo-convex, $h$ is quasi-convex, and $g$ is quasi-convex, then $x^{(0)}$ is also the solution of the following NLP:

$$
\begin{array}{ll}
\text{minimize}   & c^{T} y^{(0)} + f(x) \\
\text{subject to} & T^{(0)}\left(A y^{(0)} + h(x)\right) \le 0 \\
                  & B y^{(0)} + g(x) \le 0 \\
                  & \ell \le x \le u
\end{array}
\tag{11.5}
$$

In colloquial terms, under certain assumptions concerning the convexity of the nonlinear functions, an equality constraint can be “relaxed” to be an inequality constraint. This property is used in the MIP master problem to accumulate linear approximations.

Augmented Penalty refers to the introduction of (non-negative) slack variables on the right-hand sides of the inequality constraints just described, and to the corresponding modification of the objective function, when the assumptions concerning convexity do not hold.
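As a schematic illustration (the precise weights and indexing are given in the references cited at the end of this section), a linearization of the relaxed equality constraints at the point $(x^{(0)}, y^{(0)})$ enters the master problem with a non-negative slack $s$,

$$
T^{(0)}\left(A y + h(x^{(0)}) + \nabla h(x^{(0)})^{T}\,(x - x^{(0)})\right) \le s, \qquad s \ge 0,
$$

and a weighted penalty term $w\,s$ is added to the master objective, so that a linearization that is invalid because of non-convexity can be violated at a cost instead of cutting off feasible points.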

The algorithm underlying DICOPT starts by solving the NLP in which the 0-1 conditions on the binary variables are relaxed. If the solution to this problem yields an integer solution, the search stops. Otherwise, it continues with an alternating sequence of nonlinear programs (NLP), called subproblems, and mixed-integer linear programs (MIP), called master problems. The NLP subproblems are solved for fixed 0-1 variables that are predicted by the MIP master problem at each (major) iteration. For convex problems, the master problem also provides a lower bound on the objective function. This lower bound (in the case of minimization) increases monotonically as iterations proceed, due to the accumulation of linear approximations. Note that in the case of maximization this bound is an upper bound. This bound can be used as a stopping criterion through the DICOPT option stop 1 (see section 8). Another stopping criterion that tends to work very well in practice for non-convex problems (and even for convex problems) is based on a heuristic: stop as soon as the NLP subproblems start worsening, i.e. as soon as the current NLP subproblem has an optimal objective function value that is worse than that of the previous NLP subproblem. This stopping criterion relies on the use of the augmented penalty and is used in the description of the algorithm below. It is also the default stopping criterion in the implementation of DICOPT.
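For example, the bound-based criterion can be requested through a DICOPT option file. The fragment below is a minimal sketch, assuming a model named m with objective variable z that is declared elsewhere; stop 1 is the option named above, and the option file is attached in the usual GAMS way:

* write the DICOPT option file and select the bound-based stopping criterion
$onecho > dicopt.opt
stop 1
$offecho

m.optfile = 1;
option minlp = dicopt;
solve m using minlp minimizing z;

Without an option file, DICOPT uses the worsening heuristic described above as its default stopping rule.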

The algorithm can be stated briefly as follows:

1. Solve the NLP relaxation of the MINLP program. If $y^{(0)} = y$ is integer, stop (“integer optimum found”). Else continue with step 2.

2. Find an integer point $y^{(1)}$ with an MIP master problem that features an augmented penalty function to find the minimum over the convex hull determined by the half-spaces at the solution $(x^{(0)}, y^{(0)})$.

3. Fix the binary variables $y = y^{(1)}$ and solve the resulting NLP. Let $(x^{(1)}, y^{(1)})$ be the corresponding solution.

4. Find an integer solution $y^{(2)}$ with a MIP master problem that corresponds to the minimization over the intersection of the convex hulls described by the half-spaces of the KKT points at $y^{(0)}$ and $y^{(1)}$.

5. Repeat steps 3 and 4 until there is an increase in the value of the NLP objective function. (Repeating step 4 means augmenting the set over which the minimization is performed with additional linearizations, i.e. half-spaces, at the new KKT point.)

In the MIP problems, integer cuts are added to the model to exclude previously determined integer vectors $y^{(1)}, y^{(2)}, \ldots, y^{(K)}$.
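A common way to write such a cut, and the form used in the outer-approximation literature cited below, excludes a single 0-1 vector $y^{(k)}$: with $B^{(k)} = \{ j : y_j^{(k)} = 1 \}$ and $N^{(k)} = \{ j : y_j^{(k)} = 0 \}$, the constraint

$$
\sum_{j \in B^{(k)}} y_j \;-\; \sum_{j \in N^{(k)}} y_j \;\le\; \left|B^{(k)}\right| - 1
$$

is violated by $y = y^{(k)}$ and by no other 0-1 point.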

For a detailed description of the theory and references to earlier work, see [5, 3, 1].

The algorithm has been extended to handle general integer variables and integer variables appearing nonlinearly in the model.
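To make the extension concrete, the following fragment is a minimal, hypothetical GAMS model (names and data are illustrative, not taken from this manual) with a binary variable and a general integer variable that appears nonlinearly, solved with DICOPT:

* hypothetical tiny MINLP: binary y, general integer n (appearing nonlinearly)
variable          z  'objective value';
positive variable x  'continuous decision';
binary variable   y  'on/off decision';
integer variable  n  'number of units';

x.up = 10;
n.up = 5;

equations defobj 'objective definition', capcon 'capacity link';

defobj..  z =e= sqr(x - 3) + 2*y + sqr(n - 1);
capcon..  x =l= 4*y + 2*n;

model tiny /all/;
option minlp = dicopt;
solve tiny using minlp minimizing z;

DICOPT then alternates between NLP subproblems, in which y and n are fixed, and MIP master problems, as described above.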
