

to find an approximate solution to the one-dimensional problem

$$\min_{\alpha}\; F(x + \alpha p) \quad \text{subject to} \quad 0 < \alpha < \beta,$$

where $\beta$ is determined by the bounds on the variables. An important part of GAMS/MINOS is the step-length procedure used in the linesearch to determine the step length $\alpha$ (see [6]). The number of nonlinear function evaluations required may be influenced by setting the Linesearch tolerance, as discussed in Section 9.
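The procedure of [6] is more elaborate than this, but a simple backtracking (Armijo) rule conveys the idea: start from the bound-imposed maximum step $\beta$ and shrink until a sufficient-decrease test passes. The sketch below is illustrative only, not the MINOS linesearch; all names are hypothetical, and the constant `c1` loosely plays the role of the Linesearch tolerance.

```python
import numpy as np

def backtracking_linesearch(F, gradF, x, p, beta, c1=1e-4, shrink=0.5):
    """Approximately solve min over alpha of F(x + alpha*p), 0 < alpha < beta."""
    f0 = F(x)
    slope = gradF(x) @ p             # directional derivative; < 0 for descent p
    alpha = beta                     # largest step the variable bounds allow
    while F(x + alpha * p) > f0 + c1 * alpha * slope and alpha > 1e-12:
        alpha *= shrink              # back off until sufficient decrease holds
    return alpha
```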

As in the linear case, an equation $B^T\pi = g_B$ is solved to obtain the dual variables or shadow prices $\pi$, where $g_B$ is the gradient of the objective function associated with the basic variables. It follows that $g_B - B^T\pi = 0$. The analogous quantity for superbasic variables is the reduced-gradient vector $Z^Tg = g_S - S^T\pi$; this should also be zero at an optimal solution. (In practice its components will be of order $r\|\pi\|$, where $r$ is the optimality tolerance, typically $10^{-6}$, and $\|\pi\|$ is a measure of the size of the elements of $\pi$.)
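As a concrete (if simplified) picture of this computation, the following NumPy sketch solves for $\pi$ and forms the reduced gradient with dense linear algebra, where MINOS would use sparse LU factors of the basis $B$; the helper name and the convergence test are illustrative assumptions.

```python
import numpy as np

def prices_and_reduced_gradient(B, S, g_B, g_S, r=1e-6):
    """Compute pi from B^T pi = g_B and the reduced gradient g_S - S^T pi."""
    pi = np.linalg.solve(B.T, g_B)   # dual variables / shadow prices
    zg = g_S - S.T @ pi              # Z^T g for the superbasic variables
    # At an optimum the components of zg are of order r * ||pi||:
    optimal = np.abs(zg).max() <= r * (1.0 + np.abs(pi).max())
    return pi, zg, optimal
```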

3.3 Problems with Nonlinear Constraints

If any of the constraints are nonlinear, GAMS/MINOS employs a projected Lagrangian algorithm based on a method due to [13]; see [9]. This involves a sequence of major iterations, each of which requires the solution of a linearly constrained subproblem. Each subproblem contains linearized versions of the nonlinear constraints, as well as the original linear constraints and bounds.

At the start of the $k$-th major iteration, let $x_k$ be an estimate of the nonlinear variables, and let $\lambda_k$ be an estimate of the Lagrange multipliers (or dual variables) associated with the nonlinear constraints. The constraints are linearized by changing $f(x)$ in equation (2) to its linear approximation

$$f'(x, x_k) = f(x_k) + J(x_k)(x - x_k),$$

or more briefly

$$f' = f_k + J_k(x - x_k),$$

where $J(x_k)$ is the Jacobian matrix evaluated at $x_k$. (The $i$-th row of the Jacobian is the gradient vector of the $i$-th nonlinear constraint function. As for the objective gradient, GAMS calculates the Jacobian using symbolic differentiation.)
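A minimal sketch of this linearization step, assuming callables `f` and `J` that evaluate the constraint vector and its Jacobian (supplied by symbolic differentiation in GAMS); the helper itself is hypothetical:

```python
import numpy as np

def linearize(f, J, x_k):
    """Return f'(., x_k) = f(x_k) + J(x_k)(x - x_k) as a callable."""
    f_k = f(x_k)                     # constraint values at the current point
    J_k = J(x_k)                     # Jacobian at the current point
    return lambda x: f_k + J_k @ (x - x_k)
```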

The subproblem to be solved during the $k$-th major iteration is then

$$
\begin{alignedat}{2}
&\min_{x,\,y} \quad && F(x) + c^Tx + d^Ty - \lambda_k^T(f - f') + \tfrac{1}{2}\rho\,(f - f')^T(f - f') \qquad (5)\\
&\text{subject to} \quad && f' + A_1y \sim b_1 \qquad (6)\\
& && A_2x + A_3y \sim b_2 \qquad (7)\\
& && l \le \begin{pmatrix} x \\ y \end{pmatrix} \le u \qquad (8)
\end{alignedat}
$$

The objective function (5) is called an augmented Lagrangian. The scalar $\rho$ is a penalty parameter, and the term involving $\rho$ is a modified quadratic penalty function.
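To make the structure of (5) concrete, here is a small sketch that evaluates the augmented Lagrangian at a trial point $(x, y)$, assuming NumPy arrays and callables for $F$, $f$, and the linearization $f'$; the names and calling convention are hypothetical:

```python
def augmented_lagrangian(F, c, d, f, f_lin, lam_k, rho, x, y):
    """Evaluate objective (5) at a trial point (x, y); arguments are NumPy arrays/callables."""
    viol = f(x) - f_lin(x)                  # departure of f from its linearization
    return (F(x) + c @ x + d @ y
            - lam_k @ viol                  # Lagrangian term
            + 0.5 * rho * (viol @ viol))    # modified quadratic penalty
```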

GAMS/MINOS uses the reduced-gradient algorithm to minimize (5) subject to (6) – (8). As before, slack variables are introduced and $b_1$ and $b_2$ are incorporated into the bounds on the slacks. The linearized constraints take the form

$$
\begin{pmatrix} J_k & A_1 \\ A_2 & A_3 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
+
\begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix}
\begin{pmatrix} s_1 \\ s_2 \end{pmatrix}
=
\begin{pmatrix} J_kx_k - f_k \\ 0 \end{pmatrix}
$$

This system will be referred to as $Ax + Is = 0$ as in the linear case. The Jacobian $J_k$ is treated as a sparse matrix, the same as the matrices $A_1$, $A_2$, and $A_3$.
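The block structure above can be assembled explicitly; the sketch below does so with SciPy sparse matrices, purely as an illustration of the sparsity handling rather than a depiction of MINOS's actual data structures (the helper name is hypothetical):

```python
import numpy as np
import scipy.sparse as sp

def linearized_system(J_k, A1, A2, A3, f_k, x_k):
    """Assemble [J_k A1; A2 A3][x;y] + I s = [J_k x_k - f_k; 0] in sparse form."""
    m1, m2 = J_k.shape[0], A2.shape[0]
    A = sp.bmat([[J_k, A1], [A2, A3]], format="csr")   # sparse constraint blocks
    rhs = np.concatenate([J_k @ x_k - f_k, np.zeros(m2)])
    return A, sp.eye(m1 + m2, format="csr"), rhs       # A, slack identity, rhs
```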

In the output from GAMS/MINOS, the term Feasible subproblem indicates that the linearized constraints have been satisfied. In general, the nonlinear constraints are satisfied only in the limit, so that feasibility and optimality occur at essentially the same time. The nonlinear constraint violation is printed every major iteration. Even if it is zero early on (say at the initial point), it may increase and perhaps fluctuate before tending to zero. On well-behaved problems, the constraint violation will decrease quadratically (i.e., very quickly) during the final few major iterations.
