
X.SCALE(I,J,K)$IJK(I,J,K) = expression;

The statement

X.SCALE(I,J,K) = expression;

will generate records for X in the GAMS database for all combinations of I, J, and K for which the expression is different from 1, i.e. up to 1.e6 records, and apart from spending a lot of time you will very likely run out of memory. Note that this warning also applies to non-default lower and upper bounds.
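A minimal sketch of the difference, assuming illustrative set sizes of 100 elements per index and a sparse subset IJK of the tuples that actually occur in the model (all names and numbers here are hypothetical):

* Illustrative dimensions: the full domain I x J x K has 1.e6 combinations.
Set I / i1*i100 /, J / j1*j100 /, K / k1*k100 /;
Set IJK(I,J,K);
IJK(I,J,K)$(ord(I) = ord(J) and ord(J) = ord(K)) = yes;

Variable X(I,J,K);

* Restricted assignment: creates scale records only for the 100 tuples in IJK.
X.SCALE(I,J,K)$IJK(I,J,K) = 1000;

* Unrestricted assignment: would create up to 1.e6 records and can exhaust memory.
* X.SCALE(I,J,K) = 1000;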

7 NLP and DNLP Models

GAMS has two classes of nonlinear model, NLP and DNLP. NLP models are defined as models in which all functions that appear with endogenous arguments, i.e. arguments that depend on model variables, are smooth with smooth derivatives. DNLP models can in addition use functions that are smooth but have discontinuous derivatives. The usual arithmetic operators (+, -, *, /, and **) can appear in both model classes.

The functions that can be used with endogenous arguments in a DNLP model but not in an NLP model are ABS, MIN, and MAX, and as a consequence the indexed operators SMIN and SMAX.

Note that the offending functions can still be applied, even in an NLP model, to expressions that involve only constants, such as parameters, var.l, and eq.m. Fixed variables are in principle constants, but GAMS makes its tests based on the functional form of a model, ignoring numerical parameter values and numerical bound values, and terms involving fixed variables can therefore not be used with ABS, MIN, or MAX in an NLP model.
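A brief sketch of the distinction (the parameter and variable names are made up for illustration):

Parameter p / -2.5 /, q;
Variable x, y;
Equation e1;

* Exogenous use: abs of a parameter is evaluated when the assignment
* is executed, so it never enters the model and is harmless.
q = abs(p);

* Endogenous use: abs of a model variable forces the DNLP class.
e1 .. y =e= abs(x);

* Fixing x does not change this: GAMS classifies by functional form
* and ignores bound values, so e1 still requires a DNLP model.
x.fx = 2;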

The NLP solvers used by GAMS can also be applied to DNLP models. However, it is important to know that the NLP solvers attempt to solve the DNLP model as if it were an NLP model. The solver uses the derivatives of the constraints with respect to the variables to guide the search, and it ignores the fact that some of the derivatives may change discontinuously. There are at the moment no GAMS solvers designed specifically for DNLP models and no solvers that take into account the discontinuous nature of the derivatives in a DNLP model.

7.1 DNLP Models: What Can Go Wrong

Solvers for NLP models are all based on making marginal improvements to some initial solution until some optimality conditions ensure that no direction with marginal improvements exists. A point with no marginally improving direction is called a Local Optimum.

The theory about marginal improvements is based on the assumption that the derivatives of the constraints with respect to the variables are good approximations to the marginal changes in some neighborhood around the current point.

Consider the simple NLP model, min SQR(x), where x is a free variable. The marginal change in the objective is the derivative of SQR(x) with respect to x, which is 2*x. At x = 0, the marginal change in all directions is zero, and x = 0 is therefore a Local Optimum.
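A sketch of this model in GAMS (the identifiers are illustrative):

Variable x, z;
Equation obj;
obj .. z =e= sqr(x);

Model sqrmin / obj /;
* The derivative 2*x vanishes at x = 0, so an NLP solver can verify
* that x = 0 is a local optimum.
Solve sqrmin using nlp minimizing z;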

Next consider the simple DNLP model, min ABS(x), where x again is a free variable. The marginal change in the objective is still the derivative, which is +1 if x > 0 and -1 if x < 0. When x = 0, the derivative depends on whether we are going to increase or decrease x. Internally in the DNLP solver, we cannot be sure whether the derivative at 0 will be -1 or +1; it can depend on rounding tolerances. An NLP solver will start at some initial point, say x = 1, and look at the derivative, here +1. Since the derivative is positive, x is reduced to reduce the objective. After some iterations, x will be zero or very close to zero. The derivative will be +1 or -1, so the solver will try to change x. However, even small changes will not lead to a better objective function. The point x = 0 does not look like a Local Optimum, even though it is a Local Optimum. The result is that the NLP solver will muddle around for some time and then stop with a message saying something like: "The solution cannot be improved, but it does not appear to be optimal."
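The corresponding GAMS sketch, started from x = 1 as in the description above (identifiers again illustrative):

Variable x, z;
Equation obj;
obj .. z =e= abs(x);

Model absmin / obj /;
x.l = 1;
* The derivative jumps from -1 to +1 at x = 0, so the solver may stop
* at or near x = 0 with a "solution cannot be improved" message even
* though x = 0 is in fact the optimum.
Solve absmin using dnlp minimizing z;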

In this first case we got the optimal solution, so we can just ignore the message. However, consider the following simple two-dimensional DNLP model: min ABS(x1+x2) + 5*ABS(x1-x2) with x1 and x2 free variables. Start
