GAMS — The Solver Manuals - Available Software


out to constrain the optimal solution the algorithm will make it a binding constraint, and it will be satisfied as an equality. If you know from the economics or physics of the problem that the constraint must be binding in the optimal solution, then you have the choice of defining the constraint as an equality from the beginning. The inequality formulation gives a larger feasible space, which can make it easier to find a first feasible solution; the feasible space may even be convex. On the other hand, the solution algorithm will have to spend time determining which constraints are binding and which are not. The trade-off will therefore depend on the speed of the algorithm component that finds a feasible solution relative to the speed of the component that determines binding constraints.

In the case of CONOPT, the logic that determines binding constraints is slow compared to other parts of the system, so you should in general turn all constraints you know must be binding into equalities. You can switch back to inequalities if CONOPT has trouble finding a first feasible solution.

6.5 Scaling

Nonlinear as well as linear programming algorithms use the derivatives of the objective function and the constraints to determine good search directions, and they use function values to determine if constraints are satisfied or not. The scaling of the variables and constraints, i.e. the units of measurement used for the variables and constraints, determines the relative size of the derivatives and of the function values, and thereby also the search path taken by the algorithm.

Assume for example that two goods of equal importance both cost $1 per kg. The first is measured in grams, the second in tons. The coefficients in the cost function will be $0.001/g and $1000/ton, respectively. If cost is measured in units of $1000, the coefficients become 1.e-6 and 1, and the smaller may be ignored by the algorithm since it is comparable to some of the zero tolerances.
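The unit arithmetic in this example can be checked directly. A minimal sketch (the $1/kg price and the $1000 cost unit are taken from the example above):

```python
# Both goods cost $1 per kg; the first variable is measured in grams,
# the second in tons (the example from the text).
COST_PER_KG = 1.0

coeff_gram = COST_PER_KG / 1000.0   # $/g:   0.001
coeff_ton = COST_PER_KG * 1000.0    # $/ton: 1000.0

# Re-express cost in units of $1000: the gram coefficient shrinks to
# about 1e-6, small enough to be comparable to typical zero tolerances,
# while the ton coefficient becomes exactly 1.
coeff_gram_k = coeff_gram / 1000.0  # ~1e-6
coeff_ton_k = coeff_ton / 1000.0    # 1.0

print(coeff_gram_k, coeff_ton_k)
```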

CONOPT assumes implicitly that the model to be solved is well scaled. In this context, well scaled means:

• Basic and superbasic solution values are expected to be around 1, e.g. from 0.01 to 100. Nonbasic variables will be at a bound, and the bound values should not be larger than, say, 100.

• Dual variables (or marginals) on active constraints are expected to be around 1, e.g. from 0.01 to 100. Dual variables on non-binding constraints will of course be zero.

• Derivatives (or Jacobian elements) are expected to be around 1, e.g. from 0.01 to 100.

Variables become well scaled if they are measured in appropriate units. In most cases you should select the unit of measurement for the variables so that their expected value is around unity. Of course there will always be some variation. Assume X(I) is the production at location I. In most cases you should select the same unit of measurement for all components of X, for example a value around the average capacity.

Equations become well scaled if the individual terms are measured in appropriate units. After you have selected units for the variables, you should select the unit of measurement for the equations so that the expected values of the individual terms are around one. If you follow these rules, material balance equations will usually have coefficients of plus and minus one.
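The two rules above can be sketched numerically. All numbers here (capacities, the $/kg price, the $1000 cost unit) are illustrative assumptions, not values from the text:

```python
# Assumed data: production X(I) at three locations, initially in kg.
avg_capacity_kg = 50_000.0
x_kg = [42_000.0, 55_000.0, 48_000.0]

# Rule 1: measure all components of X in the same unit, chosen so the
# expected solution values are around 1 -- here, units of average capacity.
x_scaled = [x / avg_capacity_kg for x in x_kg]   # about 0.84, 1.10, 0.96

# Rule 2: after rescaling the variables, choose units for the equations so
# each term is around one.  A cost term c*x with c = 0.02 $/kg becomes
# 1000 $ per capacity-unit; measuring cost in $1000 units brings the
# coefficient (and hence the derivative) back to about 1.
c_per_kg = 0.02
c_scaled = c_per_kg * avg_capacity_kg / 1000.0   # ~1.0

print(x_scaled, c_scaled)
```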

Derivatives will usually be well scaled whenever the variables and equations are well scaled. To see if the derivatives are well scaled, run your model with a positive OPTION LIMROW and look for very large or very small coefficients in the equation listing in the GAMS output file.

CONOPT computes a measure of the scaling of the Jacobian, both in the initial and in the final point, and if it seems large it will be printed. The message looks like:

    ** WARNING ** The variance of the derivatives in the initial
                  point is large (= 4.1 ). A better initial
                  point, a better scaling, or better bounds on the
                  variables will probably help the optimization.

The variance is computed as SQRT(SUM(LOG(ABS(Jac(i)))**2)/NZ), where Jac(i) represents the NZ nonzero derivatives (Jacobian elements) in the model. A variance of 4.1 corresponds to an average value of LOG(JAC)**2 of 4.1**2 = 16.81.
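The variance measure can be reproduced in a few lines. A sketch of the formula just given; the sample Jacobian values are made up for illustration:

```python
import math

def scaling_variance(jac_values):
    """SQRT(SUM(LOG(ABS(Jac(i)))**2)/NZ) over the NZ nonzero Jacobian elements."""
    nz = [abs(v) for v in jac_values if v != 0.0]
    return math.sqrt(sum(math.log(v) ** 2 for v in nz) / len(nz))

# Well-scaled derivatives (all within 0.01..100) give a small value:
print(round(scaling_variance([0.5, 1.0, 2.0]), 2))       # 0.57
# Badly scaled derivatives give warning-sized values:
print(round(scaling_variance([0.001, 1.0, 1000.0]), 2))  # 5.64
```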
