
GAMS - The <strong>Solver</strong> Manuals<br />

• <strong>Using</strong> <strong>Solver</strong> <strong>Specific</strong> <strong>Options</strong><br />

• BDMLP<br />

• CONOPT<br />

• CPLEX<br />

• DECIS<br />

• DICOPT<br />

• GAMSBAS<br />

• GAMSCHK<br />

• MILES<br />

• MINOS<br />

• MPSWRITE<br />

• OSL<br />

• OSL Stochastic Extensions<br />

• PATH<br />

• SBB<br />

• SNOPT<br />

• XA<br />

• XPRESS<br />

GAMS Development Corporation<br />

1217 Potomac Street, N.W.<br />

Washington, DC 20007, USA<br />

Tel: +1-202-342-0180<br />

Fax: +1-202-342-0181<br />

E-mail: sales@gams.com<br />

http://www.gams.com/<br />

March 2001<br />

© GAMS Development Corporation, 2001<br />

GAMS Software GmbH<br />

Eupener Str. 135-137<br />

50933 Cologne, Germany<br />

Tel.: +49-221-949-9170<br />

Fax: +49-221-949-9171<br />

E-mail: info@gams.de<br />

http://www.gams.de/


<strong>Using</strong> solver specific options 1<br />

USING SOLVER SPECIFIC OPTIONS<br />

INTRODUCTION<br />

In addition to the general options available in the GAMS language, most solvers allow the<br />

user to adjust certain algorithmic details and thereby affect their effectiveness. The<br />

default values for all the solver specific options are usually appropriate for most problems.<br />

However, in some cases, it is possible to improve the performance of a solver by<br />

specifying non-standard values for some of the options.<br />

Please read the relevant section of the solver manual very carefully before using<br />

any of the solver options. Only advanced users who understand the implications<br />

of changing algorithmic parameters should attempt to use these options.<br />

THE SOLVER OPTION FILE<br />

If you want to have a solver use an option file, it is necessary to inform the solver in the<br />

GAMS model. Setting the optfile model suffix to a positive value will perform that<br />

task. For example,<br />

model mymodel /all/ ;<br />

mymodel.optfile = 1 ;<br />

solve mymodel using nlp maximizing dollars ;<br />

The name of the option file is specific to each solver. The option file read after this<br />

assignment is solvername.XXX, where ’solvername’ is the name of the solver being used and<br />

the suffix XXX depends on the value to which the model suffix optfile has been set. If its<br />

value is 1, the suffix is opt. For example, the option file for CONOPT is called conopt.opt;<br />

for DICOPT, it is dicopt.opt.<br />

If you do not set the .optfile suffix to 1, no option file will be used even if it<br />

exists.<br />

To allow different option file names for the same solver, the .optfile model suffix can<br />

take other values as well. Formally, the rule is: a file named solvername.opt is read if<br />

the suffix is set to 1; solvername.opX is read for values 2 to 9; solvername.oXX for values<br />

10 to 99; and solvername.XXX for values 100 to 999, where the X’s are the digits of the<br />

value. No option file is read if the suffix is set to 0. This scheme implies an upper<br />

bound of 999 on the suffix value. For example,



optfile model suffix value    Name of option file<br />
  0                           No option file used<br />
  1                           solvername.opt<br />
  2                           solvername.op2<br />
  3                           solvername.op3<br />
 10                           solvername.o10<br />
 91                           solvername.o91<br />
100                           solvername.100<br />
999                           solvername.999<br />

For example, setting mymodel.optfile to 23 will result in the option file conopt.o23<br />

being used for CONOPT, and dicopt.o23 being used for DICOPT.<br />
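The naming rule above can be expressed compactly. The following Python sketch (the function name is ours; GAMS performs this mapping internally) maps a solver name and an optfile suffix value to the option file name:

```python
def option_file_name(solver, optfile):
    """Map a solver name and an optfile suffix value to the option
    file name, following the scheme in the table above (sketch)."""
    if optfile < 1 or optfile > 999:
        return None                # 0 (or out of range): no option file is read
    if optfile == 1:
        return solver + ".opt"     # the common case: solvername.opt
    digits = str(optfile)
    # 2-9 -> .opX, 10-99 -> .oXX, 100-999 -> .XXX
    return solver + "." + "op"[:3 - len(digits)] + digits

# For example, with mymodel.optfile = 23:
option_file_name("conopt", 23)   # -> "conopt.o23"
option_file_name("dicopt", 23)   # -> "dicopt.o23"
```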

FORMAT OF THE SOLVER OPTION FILE<br />

The format of the options file is not completely standard and changes marginally<br />

from solver to solver. This section illustrates some of the common features of the<br />

option file format. Please check the specific section of the solver manual before<br />

using the option file.<br />

Each line of the option file falls into one of two categories<br />

- a comment line<br />

- an option specification line<br />

A comment line begins with an asterisk (*) in the first column; it is not interpreted by<br />

GAMS or the solver and serves purely as documentation. Each option<br />

specification line can contain only one option. The format for specifying options is as<br />

follows,<br />

keyword(s) [modifier] [value]<br />

The keyword may consist of one or more words and is not case sensitive. Numerical<br />

values of the keywords may be either an integer or a real constant. Real numbers may be<br />

expressed in F, E, or D formats. Note that not all options require modifiers or values.<br />

Any errors in the spelling of the keyword(s) or modifiers will result in that option<br />

not being recognized and therefore not being used by the solver.



Any errors in the values of the options will result in unpredictable behavior. The<br />

option may be ignored, or set to a default or limiting value. Check the option values<br />

very carefully before use.<br />

Consider the following CPLEX options file,<br />

* CPLEX options file<br />

barrier<br />

crossover 2<br />

The first line begins with an asterisk and is therefore a comment. The first option<br />

specifies the use of the barrier algorithm to solve the linear programming problem, while<br />

the second option specifies that crossover option 2 is to be used. Details of these<br />

options can be found in the CPLEX section of this manual.<br />

Consider the following MINOS options file,<br />

*MINOS options file<br />

scale option 2<br />

completion partial<br />

The first option sets the scale option to a value of 2. In this case, the key word ’scale<br />

option’ consists of two words. In the second line, the completion option is set to<br />

partial. Details of these options can be found in the MINOS section of this manual.
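The two example files above follow the same two-category line format. As an illustration only (this is not GAMS's actual parser, and each solver applies its own keyword-specific rules on top of this), the structure can be read with a few lines of Python:

```python
def read_option_lines(text):
    """Split an option file into option specification lines (sketch).
    Comment lines start with '*' in the first column; keywords are
    not case sensitive, so tokens are lowercased here."""
    options = []
    for line in text.splitlines():
        if not line.strip() or line.startswith("*"):
            continue                          # skip blank and comment lines
        options.append(line.lower().split())  # keyword(s) [modifier] [value]
    return options

cplex_opt = """* CPLEX options file
barrier
crossover 2
"""
read_option_lines(cplex_opt)   # -> [['barrier'], ['crossover', '2']]
```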


GAMS/BDMLP User Notes<br />

GAMS/BDMLP is a free LP and MIP solver that comes with any GAMS system. It is intended for small to medium<br />

sized models. GAMS/BDMLP was originally developed at the World Bank by T. Brooke, A. Drud, and A. Meeraus<br />

and is now maintained by GAMS Development. The MIP part was added by M. Bussieck and A. Drud.<br />

GAMS/BDMLP runs on all platforms for which GAMS is available.<br />

GAMS/BDMLP can solve reasonably sized LP models, as long as the models are not very degenerate and are well<br />

scaled. The Branch-and-Bound algorithm for solving MIP is not in the same league as other commercial MIP codes<br />

that are hooked up to GAMS. Nevertheless, the MIP part of GAMS/BDMLP provides free access to a MIP solver that<br />

supports all types of discrete variables supported by GAMS: Binary, Integer, Semicont, Semiint, Sos1, Sos2.<br />

How to run a Model with BDMLP<br />

GAMS/BDMLP can solve models of the following types: LP, RMIP, and MIP. If you did not specify BDMLP as the<br />

default LP, RMIP, or MIP solver, use the following statement in your GAMS model before the solve statement:<br />

option lp=bdmlp; { or RMIP or MIP}<br />

GAMS/BDMLP <strong>Options</strong><br />

<strong>Options</strong> can be specified through the option statement (iterlim, reslim, optca, and optcr) and through a model suffix.<br />

The option statement sets the GAMS option globally:<br />

option iterlim = 100 ;<br />

while the model suffix sets the GAMS option for individual models:<br />

mymodel.iterlim = 10;<br />

The following GAMS options are used by GAMS/BDMLP:<br />

Option Description<br />

iterlim Sets the iteration limit. BDMLP will terminate and pass on the current solution to GAMS.<br />

reslim Sets the time limit in seconds. BDMLP will terminate and pass on the current solution to GAMS.<br />

nodlim Sets the branch and bound node limit. BDMLP will terminate and pass on the current solution to<br />

GAMS.<br />

bratio Determines whether or not to use an advanced basis.<br />

sysout Will echo the BDMLP messages to the GAMS listing file.<br />

optca Absolute optimality criterion. The optca option asks BDMLP to stop when<br />

|BP-BF| < optca, where BF is the objective function value of the current best integer<br />

solution while BP is the best possible integer solution.<br />

optcr Relative optimality criterion. The optcr option asks BDMLP to stop when<br />

|BP-BF| / |BP| < optcr.<br />

prioropt Instructs BDMLP to use priority branching information passed by GAMS through the<br />

variable.prior parameters.<br />

cheat Cheat value: Each new integer solution must be at least the cheat value better than the previous one.<br />

Can speed up the search, but you may miss the optimal solution. The cheat parameter is specified<br />

in absolute terms (like the optca option).<br />

cutoff Cutoff value: When the branch and bound search starts, the parts of the tree with an objective<br />

worse than the cutoff value are deleted. This can sometimes speed up the initial phase of the branch and<br />

bound algorithm.<br />

tryint Causes BDMLP to make use of current variable values. If a variable value is within the tryint value of a<br />

bound, it will be moved to the bound and the preferred branching direction for that variable will<br />

be set toward the bound.<br />

GAMS/BDMLP does not support an option file and it has no additional user-settable options.
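The optca and optcr stopping criteria described in the table can be written out explicitly. A minimal Python sketch (the function name and the guard against division by zero are ours, not part of BDMLP):

```python
def mip_gap_reached(bp, bf, optca=0.0, optcr=0.10):
    """Return True when the branch-and-bound search may stop.
    BP is the best possible integer solution; BF is the objective
    function value of the current best integer solution (sketch)."""
    gap = abs(bp - bf)
    if gap < optca:                  # absolute criterion: |BP-BF| < optca
        return True
    if abs(bp) > 0.0 and gap / abs(bp) < optcr:
        return True                  # relative criterion: |BP-BF| / |BP| < optcr
    return False

mip_gap_reached(100.0, 99.5, optca=1.0)   # -> True  (absolute gap 0.5 < 1.0)
mip_gap_reached(100.0, 80.0, optcr=0.10)  # -> False (relative gap 0.20)
```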


GAMS/CONOPT CONOPT 1<br />

GAMS/CONOPT<br />

Table of Contents:<br />

ARKI Consulting and Development<br />

Bagsvaerdvej 246A, DK-2880 Bagsvaerd, Denmark<br />

Tel:+4544490323,Fax:+4544490333,Email:info@arki.dk<br />

1. INTRODUCTION ......................................................................................................................2<br />

2. ITERATION OUTPUT...............................................................................................................4<br />

3. GAMS/CONOPT TERMINATION MESSAGES......................................................................5<br />

4. FUNCTION EVALUATION ERRORS.....................................................................................9<br />

5. THE CONOPT OPTIONS FILE...............................................................................................11<br />

6. HINTS ON GOOD MODEL FORMULATION ......................................................................12<br />

6.1 Initial Values.......................................................................................................................................12<br />

6.2 Bounds ................................................................................................................................................13<br />

6.3 Simple Expressions .............................................................................................................................14<br />

6.4 Equalities vs. Inequalities....................................................................................................................16<br />

6.5 Scaling.................................................................................................................................................16<br />

7. NLP AND DNLP MODELS.....................................................................................................20<br />

7.1 DNLP models: what can go wrong?....................................................................................................21<br />

7.2 Reformulation from DNLP to NLP.....................................................................................................22<br />

7.3 Smooth Approximations .....................................................................................................................23<br />

7.4 Are DNLP models always non-smooth? .............................................................................................24<br />

7.5 Are NLP models always smooth? .......................................................................................................25<br />

APPENDIX A: ALGORITHMIC INFORMATION ....................................................................26<br />

A1. Overview of GAMS/CONOPT ..........................................................................................................26<br />

A2. The CONOPT Algorithm ...................................................................................................................27<br />

A3. Iteration 0: The Initial Point ...............................................................................................................28<br />

A4. Iteration 1: Preprocessing...................................................................................................................29<br />

A4.1 Preprocessing: Pre-triangular Variables and Constraints .................................................................30<br />

A4.2 Preprocessing: Post-triangular Variables and Constraints................................................................34<br />

A4.3 Preprocessing: The Optional Crash Procedure .................................................................................36<br />

A5. Iteration 2: Scaling .............................................................................................................................37<br />

A6. Finding a Feasible Solution: Phase 0..................................................................................................38<br />

A7. Finding a Feasible Solution: Phase 1 and 2........................................................................................40<br />

A8. Linear and Nonlinear Mode: Phase 1 to 4 ..........................................................................................41



A9. Linear Mode: The SLP Procedure......................................................................................................42<br />

A10. Linear Mode: The Steepest Edge Procedure ....................................................................................43<br />

A11. How to Select Non-default <strong>Options</strong>..................................................................................................43<br />

A12. Miscellaneous Topics.......................................................................................45<br />

APPENDIX B - CR-CELLS .........................................................................................................51<br />

APPENDIX C: REFERENCES ....................................................................................................55<br />

1. INTRODUCTION<br />

Nonlinear models created with GAMS must be solved with a nonlinear programming<br />

(NLP) algorithm. Currently, there are three standard NLP algorithms available, CONOPT,<br />

MINOS and SNOPT, and CONOPT is available in two versions, the old CONOPT and the<br />

new CONOPT2. All algorithms attempt to find a local optimum. The algorithms in<br />

CONOPT, MINOS, and SNOPT are all based on fairly different mathematical algorithms,<br />

and they behave differently on most models. This means that while CONOPT is superior<br />

for some models, MINOS or SNOPT will be superior for others. To offer modelers with a<br />

large portfolio of NLP models the best of all worlds, GAMS offers various NLP package<br />

deals consisting of two or three NLP solvers for a reduced price if purchased together.<br />

Even CONOPT and CONOPT2 behave differently; the new CONOPT2 is best for most<br />

models, but there are a small number of models that are best solved with the older<br />

CONOPT and the old version is therefore still distributed together with CONOPT2 under<br />

the same license. Note, however, that the old CONOPT is no longer being<br />

developed, so if you encounter problems with CONOPT, please try to use CONOPT2<br />

instead.<br />

It is almost impossible to predict how difficult it is to solve a particular model with a<br />

particular algorithm, especially for NLP models, so GAMS cannot select the best algorithm<br />

for you automatically. One of the nonlinear programming algorithms must be selected as<br />

the default when GAMS is installed. If you want to choose a different algorithm or if you<br />

want to switch between algorithms you may add the statement "OPTION NLP =<br />

CONOPT2;", "OPTION NLP = CONOPT;", "OPTION NLP = MINOS;" or<br />

"OPTION NLP = SNOPT;" in your GAMS source file before the SOLVE statement, or<br />

you may rerun the installation program. So far, the only reliable way to find which solver<br />

to use for a particular class of models is to experiment. However, there are a few rules of<br />

thumb:<br />

GAMS/CONOPT2 is well suited for models with very nonlinear constraints. If you<br />

experience that MINOS has problems maintaining feasibility during the optimization you<br />

should try CONOPT2. On the other hand, if you have a model with few nonlinearities<br />

outside the objective function then either MINOS or SNOPT is probably the best solver.<br />

GAMS/CONOPT2 has a fast method for finding a first feasible solution that is particularly



well suited for models with few degrees of freedom. If you have a model with roughly the<br />

same number of constraints as variables you should try CONOPT2. CONOPT2 can also be<br />

used to solve square systems of equations without an objective function corresponding to<br />

the GAMS model class CNS - Constrained Nonlinear System. If the number of variables is<br />

much larger than the number of constraints MINOS or SNOPT may be better.<br />

GAMS/CONOPT2 has a preprocessing step in which recursive equations and variables are<br />

solved and removed from the model. If you have a model where many equations can be<br />

solved one by one then CONOPT2 will take advantage of this property. Similarly,<br />

intermediate variables only used to define objective terms are eliminated from the model<br />

and the constraints are moved into the objective function.<br />

GAMS/CONOPT2 has many built-in tests and messages, for example for poor scaling, and<br />

many models that can and should be improved by the modeler are rejected with a<br />

constructive message. CONOPT2 is therefore also a helpful debugging tool during model<br />

development. The best solver for the final, debugged model may or may not be CONOPT2.<br />

GAMS/CONOPT2 has been designed for large and sparse models. This means that both<br />

the number of variables and equations can be large. Indeed, NLP models with over 20000<br />

equations and variables have been solved successfully, and CNS models with over 500000<br />

equations and variables have also been solved. The components used to build CONOPT2<br />

have been selected under the assumptions that the model is sparse, i.e. that most functions<br />

only depend on a small number of variables. CONOPT2 can also be used for denser<br />

models, but the performance will suffer significantly.<br />

GAMS/CONOPT2 is designed for models with smooth functions, but it can also be applied<br />

to models that do not have differentiable functions, in GAMS called DNLP models.<br />

However, there are no guarantees whatsoever for this class of models and you will often get<br />

termination messages like "Convergence too slow" or "No change in objective although the<br />

reduced gradient is greater than the tolerance" that indicate unsuccessful termination. If<br />

possible, you should try to reformulate a DNLP model to an equivalent or approximately<br />

equivalent form.<br />

Most modelers should not be concerned with algorithmic details such as choice of<br />

algorithmic sub-components or tolerances. CONOPT2 has considerable built-in logic that<br />

selects a solution approach that seems to be best suited for the type of model at hand, and<br />

the approach is adjusted dynamically as information about the behavior of the model is<br />

collected and updated. The description of the CONOPT2 algorithm has therefore been<br />

moved to Appendix A and most modelers can skip it. However, if you are solving very<br />

large or complex models or if you are experiencing solution difficulties you may benefit<br />

from using non-standard tolerances or options, in which case you will need some<br />

understanding of what CONOPT2 is doing to your model. Some guidelines for selecting<br />

options can be found at the end of Appendix A and a list of all options and tolerances is<br />

shown in Appendix B.<br />

The main text of this User’s Guide will give a short overview over the iteration output you<br />

will see on the screen (section 2), and explain the termination messages (section 3). We<br />

will then discuss function evaluation errors (section 4), the use of options (section 5), and



give a CONOPT2 perspective on good model formulation including topics such as initial<br />

values and bounds, simplification of expressions, and scaling (section 6). Finally, we will<br />

discuss the difference between NLP and DNLP models (section 7). The text is mainly<br />

concerned with the new CONOPT2 but most of it will also cover the older CONOPT and<br />

we will use the generic name CONOPT when referring to the solver. Some features are only<br />

available in the new CONOPT2 in which case we will mention it explicitly. Messages in<br />

the old CONOPT may have a format that is slightly different from the one shown here.<br />

2. ITERATION OUTPUT<br />

On most machines you will by default get a log line on your screen or terminal at regular<br />

intervals. The iteration log may look something like this:<br />

Iter Phase Ninf Infeasibility RGmax NSB Step SlpIt MX OK<br />

0 0 8.3247946065E+02 (Input point)<br />

Pre-triangular equations: 0<br />

Post-triangular equations: 5<br />

1 0 9.1593000000E+01 (After pre-processing)<br />

2 0 9.1593000000E+01 (After scaling)<br />

10 0 14 9.1593000000E+01 0.0E+00 F F<br />

21 1 17 4.8613463018E+01 1.3E+01 28 9.9E-03 T T<br />

26 1 15 4.0564945133E+01 4.0E+00 22 1.4E-01 7 T T<br />

31 1 12 3.0991342925E+01 3.1E+00 24 1.3E-01 5 F F<br />

36 1 11 2.6438208207E+01 2.0E+00 20 5.0E-01 3 F F<br />

41 1 8 1.9451475242E+01 1.6E+00 18 1.0E+00 1 F T<br />

46 1 8 1.6816400505E+01 1.3E+00 16 3.2E-01 1 F F<br />

Iter Phase Ninf Infeasibility RGmax NSB Step SlpIt MX OK<br />

51 1 7 1.2558236729E+01 1.0E+00 15 3.6E-01 4 F F<br />

56 1 5 8.7085621669E+00 1.1E+00 13 6.0E-02 6 F F<br />

61 1 3 6.3837868042E+00 1.0E+00 11 2.2E-01 1 F F<br />

66 1 2 4.6815820465E+00 1.0E+00 8 2.2E-02 1 F F<br />

71 1 2 3.0484549833E+00 1.0E+00 8 1.6E-01 F F<br />

81 1 1 2.0098693785E-01 1.0E+00 7 1.6E-01 F F<br />

** Feasible solution. Value of objective = 1571.04999262<br />

Iter Phase Ninf Objective RGmax NSB Step SlpIt MX OK<br />

91 3 1.4598200364E+03 8.7E+01 6 1.2E-05 F F<br />

101 3 1.3173744754E+03 7.1E+01 5 1.2E-03 F F<br />

111 4 1.0406602405E+03 1.2E+02 6 3.0E-03 F F<br />

121 4 9.3135586320E+02 1.0E+02 5 3.1E+00 F F<br />

131 4 9.2501180675E+02 1.5E+00 4 2.1E+00 F T<br />

141 4 9.2496527723E+02 2.7E-04 4 1.0E+00 F T<br />

144 4 9.2496527723E+02 1.4E-08 4<br />

** Optimal solution. Reduced gradient less than tolerance.<br />

The first few iterations have a special interpretation: iteration 0 represents the initial point<br />

exactly as received from GAMS, iteration 1 represents the initial point after CONOPT’s<br />

pre-processing, and iteration 2 represents the same point after scaling (even if scaling is<br />

turned off, which is the default).



The remaining iterations are characterized by the "Phase" in column 2. The model is<br />

infeasible during Phase 0, 1, and 2 and the Sum of Infeasibilities in column 4 is<br />

minimized; the model is feasible during Phase 3 and 4 and the actual objective function,<br />

also shown in column 4, is minimized or maximized. Phase 0 iterations are cheap<br />

Newton-like iterations. During Phase 1 and 3 the model behaves almost linearly and special linear<br />

iterations that take advantage of the linearity are performed, sometimes augmented with<br />

some inner "Sequential Linear Programming" iterations, indicated by a number in the SlpIt<br />

column. During Phase 2 and 4 the model behaves more nonlinearly and most aspects of the<br />

iterations are therefore changed: the line search is more elaborate, and CONOPT estimates<br />

second order information to improve the convergence.<br />

The column NSB for Number of SuperBasics defines the degrees of freedom or the<br />

dimension of the current search space, and Rgmax measures the largest gradient of the<br />

non-optimal variables. Rgmax should eventually converge towards zero. The last two<br />

columns labeled MX and OK give information about the line search: MX = T means that<br />

the line search was terminated by a variable reaching a bound, and MX = F means that the<br />

optimal step length was determined by nonlinearities. OK = T means that the line search<br />

was well-behaved, and OK = F means that the line search was terminated because it was<br />

not possible to find a feasible solution for large step lengths.<br />

3. GAMS/CONOPT TERMINATION MESSAGES<br />

GAMS/CONOPT may terminate in a number of ways. This section will show most of the<br />

termination messages and explain their meaning. It will also show the Model Status<br />

returned to GAMS in the model suffix .Modelstat. The <strong>Solver</strong> Status<br />

returned in the model suffix .Solvestat will be given if<br />

it is different from 1 (Normal Completion). We will in all cases first show the message<br />

from CONOPT followed by a short explanation. The first 4 messages are used for optimal<br />

solutions and CONOPT will return Modelstat = 2 (Locally Optimal), except as noted<br />

below:<br />

** Optimal solution. There are no superbasic variables.<br />

The solution is a locally optimal corner solution. The solution is determined by constraints<br />

only, and it is usually very accurate. In some cases CONOPT can determine that the<br />

solution is globally optimal and it will return Modelstat = 1 (Optimal).<br />

** Optimal solution. Reduced gradient less than tolerance.<br />

The solution is a locally optimal interior solution. The largest component of the reduced<br />

gradient is less than the tolerance rtredg with default value around 1.d-7. The value of<br />

the objective function is very accurate while the values of the variables are less accurate<br />

due to a flat objective function in the interior of the feasible area.<br />

** Optimal solution. The error on the optimal objective function



value estimated from the reduced gradient and the estimated<br />

Hessian is less than the minimal tolerance on the objective.<br />

The solution is a locally optimal interior solution. The largest component of the reduced<br />

gradient is larger than the tolerance rtredg. However, when the reduced gradient is<br />

scaled with information from the estimated Hessian of the reduced objective function the<br />

solution seems optimal. The objective must be large or the reduced objective must have<br />

large second derivatives so it is advisable to scale the model. See the sections on "Scaling"<br />

and "<strong>Using</strong> the Scale Option in GAMS" for details on how to scale a model.<br />

** Optimal solution. Convergence too slow. The change in<br />

objective has been less than xx.xx for xx consecutive<br />

iterations.<br />

CONOPT stops with a solution that seems optimal. The solution process is stopped<br />

because of slow progress. The largest component of the reduced gradient is greater than the<br />

optimality tolerance rtredg, but less than rtredg multiplied by the largest Jacobian<br />

element divided by 100. The model must have large derivatives so it is advisable to scale it.<br />

The four messages above all exist in versions where "Optimal" is replaced by "Infeasible"<br />

and Modelstat will be 5 (Locally Infeasible) or 4 (Infeasible). The infeasible messages<br />

indicate that a Sum of Infeasibility objective function is locally minimal, but positive. If<br />

the model is convex then it does not have a feasible solution; if the model is non-convex it<br />

may have a feasible solution in a different region. See the section on "Initial Values" for<br />

hints on what to do.<br />

** Feasible solution. Convergence too slow. The change in<br />

objective has been less than xx.xx for xx consecutive<br />

iterations.<br />

** Feasible solution. The tolerances are minimal and<br />

there is no change in objective although the reduced<br />

gradient is greater than the tolerance.<br />

The two messages above tell that CONOPT stops with a feasible solution. In the first case<br />

the solution process is very slow and in the second there is no progress at all. However, the<br />

optimality criteria have not been satisfied. These messages are accompanied by Modelstat<br />

= 7 (Intermediate Nonoptimal) and Solvestat = 4 (Terminated by <strong>Solver</strong>). The problem can<br />

be caused by discontinuities if the model is of type DNLP; in this case you should consider<br />

alternative, smooth formulations as discussed in section 7. The problem can also be caused<br />

by a poorly scaled model. See section 6.5 for hints on model scaling. Finally, it can be<br />

caused by stalling as described in section A12.4 in Appendix A. The two messages also<br />

exist in a version where "Feasible" is replaced by "Infeasible". Modelstat is in this case 6<br />

(Intermediate Infeasible) and Solvestat is still 4 (Terminated by <strong>Solver</strong>); these versions tell<br />

that CONOPT cannot make progress towards feasibility, but the Sum of Infeasibility<br />

objective function does not have a well defined local minimum.<br />

varname: The variable has reached ‘infinity’



** Unbounded solution. A variable has reached ‘infinity’.<br />

Largest legal value (Rtmaxv) is xx.xx<br />

CONOPT considers a solution to be unbounded if a variable exceeds the indicated value<br />

and it returns with Modelstat = 3 (Unbounded). Check whether the solution appears<br />

unbounded or the problem is caused by the scaling of the unbounded variable <br />

mentioned in the first line of the message. If the model seems correct you are advised to<br />

scale it. There is also a lazy solution: you can increase the largest legal value, rtmaxv, as<br />

mentioned in the section on options. However, you will pay through reduced reliability or<br />

increased solution times. Unlike LP models, where an unbounded model is recognized by<br />

an unbounded ray and the iterations are stopped far from "infinity", CONOPT will actually<br />

return a feasible solution with large values for the variables.<br />

The message above exists in a version where "Unbounded" is replaced by "Infeasible" and<br />

Modelstat is 5 (Locally Infeasible). You may also see a message like<br />

varname: Free variable becomes too large<br />

** Infeasible solution. A free variable exceeds the allowable<br />

range. Current value is 4.20E+07 and current upper bound<br />

(Rtmaxv) is 3.16E+07<br />

These two messages indicate that some variables become very large before a feasible<br />

solution has been found. You should again check whether the problem is caused by the<br />

scaling of the unbounded variable mentioned in the first line of the message. If the<br />

model seems correct you should scale it.<br />

** The time limit has been reached.<br />

The time or resource limit defined in GAMS, either by default (usually 1000 seconds) or by "OPTION RESLIM = xx;" or "&lt;model&gt;.RESLIM = xx;" statements, has been reached. CONOPT will return with Solvestat = 3 (Resource Interrupt) and Modelstat either 6 (Intermediate Infeasible) or 7 (Intermediate Nonoptimal).<br />

** The iteration limit has been reached.<br />

The iteration limit defined in GAMS, either by default (usually 1000 iterations) or by "OPTION ITERLIM = xx;" or "&lt;model&gt;.ITERLIM = xx;" statements, has been reached. CONOPT will return with Solvestat = 2 (Iteration Interrupt) and Modelstat either 6 (Intermediate Infeasible) or 7 (Intermediate Nonoptimal).<br />

** Domain errors in nonlinear functions.<br />

Check bounds on variables.<br />

The number of function evaluation errors has reached the limit defined in GAMS by "OPTION DOMLIM = xx;" or "&lt;model&gt;.DOMLIM = xx;" statements or the default limit of 0 function evaluation errors. CONOPT will return with Solvestat = 5



(Evaluation Error Limit) and Modelstat either 6 (Intermediate Infeasible) or 7 (Intermediate Nonoptimal). See section 4 for more details on "Function Evaluation Errors".<br />

** An initial derivative is too large (larger than Rtmaxj= xx.xx)<br />

Scale the variables and/or equations or add bounds.<br />

appearing in<br />

: Initial Jacobian element too large = xx.xx<br />

and<br />

** A derivative is too large (larger than Rtmaxj= xx.xx).<br />

Scale the variables and/or equations or add bounds.<br />

appearing in<br />

: Jacobian element too large = xx.xx<br />

These two messages appear if a derivative or Jacobian element is very large, either in the<br />

initial point or in a later intermediate point. The relevant variable and equation pair(s) will<br />

show you where to look. A large derivative means that the function changes very rapidly<br />

with changes in the variable and it will most likely create numerical problems for many<br />

parts of the optimization algorithm. Instead of attempting to solve a model that most likely<br />

will fail, CONOPT will stop and you are advised to adjust the model if at all possible.<br />

If the offending derivative is associated with a LOG(X) or 1/X term you may try to<br />

increase the lower bound on X. If the offending derivative is associated with an EXP(X)<br />

term you must decrease the upper bound on X. You may also try to scale the model, either<br />

manually or using the variable.SCALE and/or equation.SCALE option in GAMS as<br />

described in section 6.5. There is also in this case a lazy solution: increase the limit on<br />

Jacobian elements, rtmaxj; however, you will pay through reduced reliability or longer<br />

solution times.<br />

In addition to the messages shown above you may see messages like<br />

** An equation in the pre-triangular part of the model cannot be<br />

solved because the critical variable is at a bound.<br />

** An equation in the pre-triangular part of the model cannot be<br />

solved because of too small pivot.<br />

or<br />

** An equation is inconsistent with other equations in the<br />

pre-triangular part of the model.<br />

These messages containing the word "Pre-triangular" are all related to infeasibilities<br />

identified by CONOPT’s pre-processing stage and they are explained in detail in section<br />

A4 in Appendix A.<br />

Usually, CONOPT will be able to estimate the amount of memory needed for the model<br />

based on statistics provided by GAMS. However, in some cases with unusual models, e.g.



very dense models or very large models, the estimate will be too small and you must request more memory yourself using a statement like "&lt;model&gt;.WORKSPACE = xx;" in GAMS. The message you will see is similar to the following:<br />

** FATAL ERROR ** Insufficient memory to continue the<br />

optimization.<br />

You must request more memory.<br />

Current CONOPT space = 0.29 Mbytes<br />

Estimated CONOPT space = 0.64 Mbytes<br />

Minimum CONOPT space = 0.33 Mbytes<br />

CONOPT time Total 0.109 seconds<br />

of which: Function evaluations 0.000 = 0.0%<br />

Derivative evaluations 0.000 = 0.0%<br />

Work length = 45875 double words = 0.35 Mbytes<br />

Estimate = 45875 double words = 0.35 Mbytes<br />

Max used = 45627 double words = 0.35 Mbytes<br />

The text after "Insufficient memory to" may be different; it says something about where<br />

CONOPT ran out of memory. If the memory problem appears during model setup the<br />

message will be accompanied by Solvestat = 9 (Error Setup Failure) and Modelstat = 13<br />

(Error No Solution) and CONOPT will not return any values. If the memory problem<br />

appears later during the optimization Solvestat will be 10 (Error Internal Solver Failure)<br />

and Modelstat will be either 6 (Intermediate Infeasible) or 7 (Intermediate Nonoptimal) and<br />

CONOPT will return primal solution values. The marginals of both equations and variables<br />

will be zero or EPS.<br />

The first set of statistics in the message text shows you how much memory is available for<br />

CONOPT, and the last set shows how much is available for GAMS and CONOPT<br />

combined (GAMS needs space to store the nonlinear functions). The GAMS<br />

WORKSPACE option corresponds to the combined memory, measured in Mbytes, so you<br />

should concentrate on the last numbers.<br />

4. FUNCTION EVALUATION ERRORS<br />

Many of the nonlinear functions available with GAMS are not defined for all values of<br />

their arguments. LOG is not defined for negative arguments, EXP overflows for large<br />

arguments, and division by zero is illegal. To avoid evaluating functions outside their<br />

domain of definition you should add reasonable bounds on your variables. CONOPT will in return guarantee that the nonlinear functions are never evaluated with variables outside their bounds.<br />

In some cases bounds are not sufficient, e.g. in the expression LOG( SUM(I, X(I) ) ): in<br />

some models each individual X should be allowed to become zero, but the SUM should<br />

not. In this case you should introduce an intermediate variable and an extra equation, e.g.<br />

XSUMDEF .. XSUM =E= SUM(I,X(I)); add a lower bound on XSUM; and use XSUM as



the argument to the LOG function. See section 6.3 on "Simple Expressions" for additional comments on this topic.<br />

Whenever a nonlinear function is called outside its domain of definition, GAMS' function evaluator will intercept the function evaluation error and prevent the system from crashing.<br />

GAMS will replace the undefined result by some appropriate real number, and it will make<br />

sure the error is reported to the modeler as part of the standard solution output in the<br />

GAMS listing file. GAMS will also report the error to CONOPT, so CONOPT can try to<br />

correct the problem by backtracking to a safe point. Finally, CONOPT will be instructed to<br />

stop after DOMLIM errors.<br />

During Phase 0, 1, and 3 CONOPT will often use large steps as the initial step in a line<br />

search and functions will very likely be called with some of the variables at their lower or<br />

upper bound. You are therefore likely to get a division-by-zero error if your model<br />

contains a division by X and X has a lower bound of zero. And you are likely to get an<br />

exponentiation overflow error if your model contains EXP(X) and X has no upper bound.<br />

However, CONOPT will usually not get trapped in a point outside the domain of definition<br />

for the model. When GAMS’ function evaluator reports that a point is "bad", CONOPT<br />

will decrease the step length, and it will for most models be able to recover and continue to<br />

an optimal solution. It is therefore safe to use a large value for DOMLIM instead of GAMS<br />

default value of 0.<br />

CONOPT may get stuck in some cases, for example because there is no previous point to<br />

backtrack to, because "bad" points are very close to "reasonable" feasible points, or<br />

because the derivatives are not defined in a feasible point. The more common messages<br />

are:<br />

** Fatal Error ** Function error in initial point in Phase 0<br />

procedure.<br />

** Fatal Error ** Function error after small step in Phase 0<br />

procedure.<br />

** Fatal Error ** Function error very close to a feasible point.<br />

** Fatal Error ** Function error while reducing tolerances.<br />

** Fatal Error ** Function error in Pre-triangular equations.<br />

** Fatal Error ** Function error after solving Pre-triangular<br />

equations.<br />

** Fatal Error ** Function error in Post-triangular equation.<br />

In the first four cases you must either add better bounds or define better initial values. If<br />

the problem is related to a pre- or post-triangular equation as shown by the last three<br />

messages then you can turn part of the pre-processing off as described in section A4 in<br />

Appendix A. However, this may make the model harder to solve, so it is usually better to<br />

add bounds and/or initial values.



5. THE CONOPT OPTIONS FILE<br />

CONOPT has been designed to be self-tuning. Most tolerances are dynamic. As an<br />

example: The feasibility of a constraint is always judged relative to the dual variable on the<br />

constraint and relative to the expected change in objective in the coming iteration. If the<br />

dual variable is large then the constraint must be satisfied with a small tolerance, and if the<br />

dual variable is small then the tolerance is larger. When the expected change in objective in<br />

the first iterations is large then the feasibility tolerances are also large. And when we<br />

approach the optimum and the expected change in objective becomes smaller then the<br />

feasibility tolerances become smaller.<br />

Because of the self-tuning nature of CONOPT you should in most cases be well off with<br />

default tolerances. If you do need to change some tolerances, possibly following the advice<br />

in Appendix A, it can be done in the CONOPT Options file. The name of the CONOPT Options file is on most systems "conopt2.opt" for the new CONOPT2 and<br />

"conopt.opt" for the old CONOPT. You must tell the solver that you want to use an options file with the statement "&lt;model&gt;.OPTFILE = 1;" in your GAMS source file before the SOLVE statement.<br />

The format of the CONOPT Options file is different from the format of the options file used by MINOS and ZOOM. It consists in its simplest form of a number of lines like these:<br />

rtmaxv := 1.e8;<br />

lfnsup := 500;<br />

An optional "set" verb can be added in front of the assignment statements, and the separators ":", "=", and ";" are silently ignored, so the first line could also be written as "set rtmaxv 1.e8" or simply "rtmaxv 1.e8". Upper case letters are converted to lower<br />

case so the second line could also be written as "LFNSUP := 500;". The assignment or<br />

set statement is used to assign a new value to internal CONOPT variables, so-called CR-<br />

Cells. The optional set verb, the name of the CR-Cell, and the value must be separated by<br />

blanks, tabs, commas, colons, and/or equal signs. The value must be written using legal<br />

Fortran format with a maximum of 10 characters, i.e. a real number may contain an<br />

optional E and D exponent, but a number may not contain blanks. The value must have the<br />

same type as the CR-Cell, i.e. real CR-Cells must be assigned real values, integer CR-Cells<br />

must be assigned integer values, and logical CR-Cells must be assigned logical values.<br />

Values can also be the name of a CR-Cell of the same type.<br />

The crprint statement with syntax "crprint crcell1 crcell2 ..;" is used<br />

to print the current value of one or more CR-Cells. The output of CRPRINT can only be<br />

seen in the Solver Status File that GAMS echoes back when you use "OPTION SYSOUT<br />

= ON".<br />

The step statement with syntax "step crcell value;" can be used to increment a<br />

CR-Cell by a certain amount. "Value" can be a number or a CR-Cell of the same type. For<br />

example, the statement "step rtredg rtredg;" will double the value of rtredg.



The CONOPT Options file can be used for many other purposes not needed with GAMS,<br />

e.g. for certain iterative constructs and for SAVE and RESTART. These constructs are<br />

mainly used with the Subroutine Library version of CONOPT and for debugging and they<br />

are not covered here.<br />

6. HINTS ON GOOD MODEL FORMULATION<br />

This section will contain some comments on how to formulate a nonlinear model so it<br />

becomes easier to solve with CONOPT. Most of the recommendations will be useful for<br />

any nonlinear solver, but not all. We will try to mention when a recommendation is<br />

CONOPT specific.<br />

6.1 Initial Values<br />

Good initial values are important for many reasons. Initial values that satisfy or closely<br />

satisfy many of the constraints reduces the work involved in finding a first feasible<br />

solution. Initial values that in addition are close to the optimal ones also reduce the<br />

distance to the final point and therefore indirectly the computational effort. The progress of<br />

the optimization algorithm is based on good directional information and therefore on good<br />

derivatives. The derivatives in a nonlinear model depend on the current point, and the<br />

initial point in which the initial derivatives are computed is therefore again important.<br />

Finally, non-convex models may have multiple solutions, but the modeler is looking for<br />

one in a particular part of the search space; an initial point in the right neighborhood is<br />

more likely to return the desired solution.<br />

The initial values used by CONOPT all come from GAMS. The initial values used by<br />

GAMS are by default the value zero projected on the bounds. I.e. if a variable is free or has<br />

a lower bound of zero, then its default initial value is zero. Unfortunately, zero is in many<br />

cases a bad initial value for a nonlinear variable. An initial value of zero is especially bad if<br />

the variable appears in a product term since the initial derivative becomes zero, and it<br />

appears as if the function does not depend on the variable. CONOPT will warn you and ask<br />

you to supply better initial values if the number of derivatives equal to zero is larger than<br />

20 percent.<br />

If a variable has a small positive lower bound, for example because it appears as an<br />

argument to the LOG function or as a denominator, then the default initial value is this<br />

small lower bound and it is also bad since this point will have very large first and second<br />

derivatives.<br />

You should therefore supply as many sensible initial values as possible by making<br />

assignment to the level value, var.L, in GAMS. An easy possibility is to initialize all<br />

variables to 1, or to the scale factor if you use GAMS' scaling option. A better possibility is<br />

to select reasonable values for some variables that from the context are known to be



important, and then use some of the equations of the model to derive values for other<br />

variables. A model may contain the following equation:<br />

PMDEF(IT) .. PM(IT) =E= PWM(IT)*ER*(1 + TM(IT)) ;<br />

where PM, PWM, and ER are variables and TM is a parameter. The following assignment<br />

statements use the equation to derive consistent initial values for PM from sensible initial<br />

values for PWM and ER:<br />

ER.L = 1; PWM.L(IT) = 1;<br />

PM.L(IT) = PWM.L(IT)*ER.L*(1 + TM(IT)) ;<br />

With these assignments equation PMDEF will be feasible in the initial point, and since<br />

CONOPT uses a feasible path method it will remain feasible throughout the optimization<br />

(unless the pre-processor destroys it, see section A4 in Appendix A).<br />

If CONOPT has difficulties finding a feasible solution for your model you should try to<br />

use this technique to create an initial point in which as many equations as possible are<br />

satisfied. If you use CONOPT2 you may also try the optional Crash procedure described in<br />

section A4.3 in Appendix A by adding the line "lstcrs=t" to the CONOPT2 options<br />

file. The crash procedure tries to identify equations with a mixture of un-initialized<br />

variables and variables with initial values, and it solves the equations with respect to the<br />

un-initialized variables; the effect is similar to the manual procedure shown above.<br />

6.2 Bounds<br />

Bounds have two purposes in nonlinear models. Some bounds represent constraints on the<br />

reality that is being modeled, e.g. a variable must be positive. These bounds are called<br />

model bounds. Other bounds help the algorithm by preventing it from moving far away<br />

from any optimal solution and into regions with singularities in the nonlinear functions or<br />

unreasonably large function or derivative values. These bounds are called algorithmic<br />

bounds.<br />

Model bounds have natural roots and do not cause any problems. Algorithmic bounds<br />

require a closer look at the functional form of the model. The content of a LOG should be<br />

greater than say 1.e-3, the content of an EXP should be less than 5 to 8, and a denominator<br />

should be greater than say 1.e-2. These recommended lower bounds of 1.e-3 and 1.e-2 may<br />

appear to be unreasonably large. However, both LOG(X) and 1/X are extremely nonlinear<br />

for small arguments. The first and second derivatives of LOG(X) at X=1.e-3 are 1.e+3 and<br />

-1.e6, respectively, and the first and second derivatives of 1/X at X=1.e-2 are -1.e+4 and<br />

2.e+6, respectively.<br />
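The quoted derivative magnitudes follow directly from the calculus: LOG(X) has first and second derivatives 1/X and -1/X**2, while 1/X has -1/X**2 and 2/X**3. A quick numeric check (in Python, outside GAMS, purely for illustration):

```python
# Derivatives of LOG(X) at the recommended lower bound X = 1.e-3:
x = 1e-3
print(1 / x, -1 / x**2)        # approximately 1.e+3 and -1.e+6

# Derivatives of 1/X at the recommended lower bound X = 1.e-2:
x = 1e-2
print(-1 / x**2, 2 / x**3)     # approximately -1.e+4 and 2.e+6
```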

If the content of a LOG or EXP function or a denominator is an expression then it may be<br />

advantageous to introduce a bounded intermediate variable as discussed in the next section.<br />

Note that bounds in some cases can slow the solution process down. Too many bounds<br />

may for example introduce degeneracy. If you have constraints of the following type



VUB(I) .. X(I) =L= Y;<br />

or<br />

YSUM .. Y =E= SUM( I, X(I) );<br />

and X is a POSITIVE VARIABLE then you should in general not declare Y a<br />

POSITIVE VARIABLE or add a lower bound of zero on Y. If Y appears in a nonlinear<br />

function you may need a strictly positive bound. Otherwise, you should declare Y a free<br />

variable; CONOPT will then make Y basic in the initial point and Y will remain basic<br />

throughout the optimization. New logic in CONOPT2 tries to remove this problem by<br />

detecting when a harmful bound is redundant so it can be removed, but it is not yet a foolproof procedure.<br />

Section A4 in Appendix A gives another example of bounds that can be counterproductive.<br />

6.3 Simple Expressions<br />

The following model component<br />

PARAMETER MU(I);<br />

VARIABLE X(I), S(I), OBJ;<br />

EQUATION OBJDEF;<br />

OBJDEF .. OBJ =E= EXP( SUM( I, SQR( X(I) - MU(I) ) / S(I) ) );<br />

can be re-written in the slightly longer but simpler form<br />

PARAMETER MU(I);<br />

VARIABLE X(I), S(I), OBJ, INTERM;<br />

EQUATION INTDEF, OBJDEF;<br />

INTDEF .. INTERM =E= SUM( I, SQR( X(I) - MU(I) ) / S(I) );<br />

OBJDEF .. OBJ =E= EXP( INTERM );<br />

The first formulation has very complex derivatives because EXP is taken of a long<br />

expression. The second formulation has much simpler derivatives; EXP is taken of a single<br />

variable, and the variables in INTDEF appear in a sum of simple independent terms.<br />

In general, try to avoid nonlinear functions of expressions, divisions by expressions, and<br />

products of expressions, especially if the expressions depend on many variables. Define<br />

intermediate variables that are equal to the expressions and apply the nonlinear function,<br />

division, or product to the intermediate variable. The model will become larger, but the<br />

increased size is taken care of by CONOPT's sparse matrix routines, and it is compensated<br />

by the reduced complexity.<br />

The reduction in complexity can be significant if an intermediate expression is linear. The<br />

following model fragment:



VARIABLE X(I), Y;<br />

EQUATION YDEF;<br />

YDEF .. Y =E= 1 / SUM(I, X(I) );<br />

should be written as<br />

VARIABLE X(I), XSUM, Y;<br />

EQUATION XSUMDEF, YDEF;<br />

XSUMDEF .. XSUM =E= SUM(I, X(I) );<br />

YDEF .. Y =E= 1 / XSUM;<br />

XSUM.LO = 1.E-2;<br />

for two reasons. First, because the number of nonlinear derivatives is reduced in number<br />

and complexity. And second, because the lower bound on the intermediate result will<br />

bound the search away from the singularity at XSUM = 0.<br />

The last example shows an added potential saving by expanding functions of linear<br />

expressions. A constraint depends in a nonlinear fashion on the accumulated investments,<br />

INV, like<br />

CON(I) .. f( SUM( J$(ORD(J) LE ORD(I)), INV(J) ) ) =L= B(I);<br />

A new intermediate variable, CAP(I), that is equal to the content of the SUM can be<br />

defined recursively with the constraints<br />

CDEF(I) .. CAP(I) =E= INV(I) + CAP(I-1);<br />

and the original constraints become<br />

CON(I) .. f( CAP(I) ) =L= B(I);<br />

The reformulated model has N additional variables and N additional linear constraints. In<br />

return, the original N complex nonlinear constraints have been changed into N simpler<br />

nonlinear constraints. And the number of Jacobian elements, which has a direct influence on much of the computational work both in GAMS and in CONOPT, has been reduced from N*(N+1)/2 nonlinear elements to 3*N-1 linear elements and only N nonlinear elements. If f is an invertible increasing function you may even rewrite the last constraint as a simple<br />

bound:<br />

CAP.LO(I) = f⁻¹(B(I));<br />
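The bookkeeping behind the claimed reduction in Jacobian elements can be sketched outside GAMS. The small Python script below is an illustration only (not part of the manual's code); it counts the nonzero elements in both formulations of the accumulated-investment example:

```python
# Count Jacobian elements in the two formulations of the accumulated-
# investment example, for a given number of periods N.
def original_count(N):
    # CON(I) depends on INV(J) for all J <= I: N*(N+1)/2 nonlinear elements.
    return N * (N + 1) // 2

def reformulated_count(N):
    # CDEF(I) involves CAP(I), INV(I), and CAP(I-1): 3*N - 1 linear elements
    # (CAP(I-1) does not exist for the first I, so CDEF(1) has only two).
    # CON(I) involves only CAP(I): N nonlinear elements.
    linear = 3 * N - 1
    nonlinear = N
    return linear, nonlinear

N = 10
print(original_count(N))        # 55 nonlinear elements
print(reformulated_count(N))    # (29, 10): 29 linear, 10 nonlinear
```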

Some NLP solvers encourage you to move as many nonlinearities as possible into the<br />

objective which may make the objective very complex. This is neither recommended nor<br />

necessary with CONOPT. A special pre-processing step (discussed in section A4 in<br />

Appendix A) will aggregate parts of the model if it is useful for CONOPT without<br />

increasing the complexity in GAMS.



6.4 Equalities vs. Inequalities<br />

A resource constraint or a production function is often modeled as an inequality constraint<br />

in an optimization model; the optimization algorithm will search over the space of feasible<br />

solutions, and if the constraint turns out to constrain the optimal solution the algorithm will<br />

make it a binding constraint, and it will be satisfied as an equality. If you know from the<br />

economics or physics of the problem that the constraint must be binding in the optimal<br />

solution then you have the choice of defining the constraint as an equality from the<br />

beginning. The inequality formulation gives a larger feasible space which can make it<br />

easier to find a first feasible solution. The feasible space may even be convex. On the other<br />

hand, the solution algorithm will have to spend time determining which constraints are<br />

binding and which are not. The trade off will therefore depend on the speed of the<br />

algorithm component that finds a feasible solution relative to the speed of the algorithm<br />

component that determines binding constraints.<br />

In the case of CONOPT, the logic of determining binding constraints is slow compared to<br />

other parts of the system, and you should in general make equalities out of all constraints<br />

you know must be binding. You can switch to inequalities if CONOPT has trouble finding<br />

a first feasible solution.<br />

6.5 Scaling<br />

Nonlinear as well as Linear Programming Algorithms use the derivatives of the objective<br />

function and the constraints to determine good search directions, and they use function<br />

values to determine if constraints are satisfied or not. The scaling of the variables and<br />

constraints, i.e. the units of measurement used for the variables and constraints, determine<br />

the relative size of the derivatives and of the function values and thereby also the search<br />

path taken by the algorithm.<br />

Assume for example that two goods of equal importance both cost $1 per kg. The first is measured in grams, the second in tons. The coefficients in the cost function will be $0.001/g and $1000/ton, respectively. If cost is measured in $1000 units then the coefficients will be 1.e-6 and 1, and the smaller may be ignored by the algorithm since it is comparable to some of the zero tolerances.<br />
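The unit arithmetic is easy to reproduce. The following Python sketch (illustrative only; the names are made up) converts the $1/kg price into the coefficients implied by each choice of units:

```python
# Two goods, both costing $1 per kg, measured in different units.
price_per_kg = 1.0

coef_g   = price_per_kg / 1000.0    # variable in grams: ~0.001 $/g
coef_ton = price_per_kg * 1000.0    # variable in tons:  1000 $/ton

# Measuring cost in $1000 units divides both coefficients by 1000:
print(coef_g / 1000.0, coef_ton / 1000.0)    # approximately 1.e-6 and 1
```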

CONOPT assumes implicitly that the model to be solved is well scaled. In this context well<br />

scaled means:<br />

• Basic and superbasic solution values are expected to be around 1, e.g. from 0.01 to 100.<br />

Nonbasic variables will be at a bound, and the bound values should not be larger than<br />

say 100.<br />

• Dual variables (or marginals) on active constraints are expected to be around 1, e.g.<br />

from 0.01 to 100. Dual variables on non-binding constraints will of course be zero.<br />

• Derivatives (or Jacobian elements) are expected to be around 1, e.g. from 0.01 to 100.



Variables become well scaled if they are measured in appropriate units. In most cases you<br />

should select the unit of measurement for the variables so their expected value is around<br />

unity. Of course there will always be some variation. Assume X(I) is the production at<br />

location I. In most cases you should select the same unit of measurement for all<br />

components of X, for example a value around the average capacity.<br />

Equations become well scaled if the individual terms are measured in appropriate units.<br />

After you have selected units for the variables you should select the unit of measurement<br />

for the equations so the expected values of the individual terms are around one. If you<br />

follow these rules, material balance equations will usually have coefficients of plus and<br />

minus one.<br />

Derivatives will usually be well scaled whenever the variables and equations are well<br />

scaled. To see if the derivatives are well scaled, run your model with a positive OPTION<br />

LIMROW and look for very large or very small coefficients in the equation listing in the<br />

GAMS output file.<br />

CONOPT computes a measure of the scaling of the Jacobian, both in the initial and in the<br />

final point, and if it seems large it will be printed. The message looks like:<br />

** WARNING ** The variance of the derivatives in the initial<br />

point is large (= 4.1 ). A better initial<br />

point, a better scaling, or better bounds on the<br />

variables will probably help the optimization.<br />

The variance is computed as SQRT(SUM(LOG(ABS(Jac(i)))**2)/NZ) where Jac(i)<br />

represents the NZ nonzero derivatives (Jacobian elements) in the model. A variance of 4.1<br />

corresponds to an average value of LOG(JAC)**2 of 4.1**2, which means that Jacobian<br />

values outside the range EXP(-4.1)=0.017 to EXP(+4.1)=60.4 are about as common as values inside. This range is for most models acceptable, while a variance of 5,<br />

corresponding to about half the derivatives outside the range EXP(-5)=0.0067 to<br />

EXP(+5)=148, can be dangerous.<br />
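The quoted measure can be sketched as a small Python helper (illustrative only, not CONOPT's actual code; the sample Jacobian values are made up):

```python
import math

def jacobian_scaling_measure(jac):
    """Root-mean-square of LOG(ABS(Jac(i))) over the NZ nonzero elements,
    mirroring the formula SQRT(SUM(LOG(ABS(Jac(i)))**2)/NZ) quoted above."""
    nz = [j for j in jac if j != 0.0]
    return math.sqrt(sum(math.log(abs(j)) ** 2 for j in nz) / len(nz))

# Well scaled derivatives, all within roughly 0.01 to 100:
print(jacobian_scaling_measure([0.5, 2.0, 10.0, 0.1]))   # ~ 1.70

# Badly scaled derivatives spanning many orders of magnitude:
print(jacobian_scaling_measure([1e-5, 1.0, 1e6]))        # ~ 10.38
```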

6.5.1 Scaling of Intermediate Variables<br />

Many models have a set of variables with a real economic or physical interpretation plus a<br />

set of intermediate or helping variables that are used to simplify the model. We have seen<br />

some of these in section 6.3 on Simple Expressions. It is usually rather easy to select good<br />

scaling units for the real variables since we know their order of magnitude from economic<br />

or physical considerations. However, the intermediate variables and their defining<br />

equations should preferably also be well scaled, even if they do not have an immediate<br />

interpretation. Consider the following model fragment where X, Y, and Z are variables and<br />

Y is the intermediate variable:



SET P / P0*P4 /;<br />

PARAMETER A(P) / P0 211, P1 103, P2 42, P3 31, P4 6 /;<br />

YDEF .. Y =E= SUM(P, A(P)*POWER(X,ORD(P)-1));<br />

ZDEF .. Z =E= LOG(Y);<br />

X lies in the interval 1 to 10 which means that Y will be between 393 and 96441 and Z will be between 5.97 and 11.48. Both X and Z are reasonably scaled while Y and the terms and<br />

derivatives in YDEF are about a factor 1.e4 too large. Scaling Y by 1.e4 and renaming it YS<br />

gives the following scaled version of the model fragment:<br />

YDEFS1 .. YS =E= SUM(P, A(P)*POWER(X,ORD(P)-1))*1.E-4;<br />

ZDEFS1 .. Z =E= LOG(YS*1.E4);<br />

The Z equation can also be written as<br />

ZDEFS2 .. Z =E= LOG(YS) + LOG(1.E4);<br />

Note that the scale factor 1.e-4 in the YDEFS1 equation has been placed on the right hand<br />

side. The mathematically equivalent equation<br />

YDEFS2 .. YS*1.E4 =E= SUM(P, A(P)*POWER(X,ORD(P)-1));<br />

will give a well scaled YS, but the right hand side terms of the equation and their<br />

derivatives have not changed from the original equation YDEF and they are still far too<br />

large.<br />
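Recomputing the fragment's magnitudes confirms the effect of the scale factor (a Python sketch, not GAMS code):

```python
import math

# Coefficients of X**0 .. X**4 from the YDEF equation above.
A = [211, 103, 42, 31, 6]

def Y(x):
    return sum(a * x**p for p, a in enumerate(A))

for x in (1, 10):
    y = Y(x)
    # x, Y, Z = LOG(Y), and the scaled value YS = Y * 1.e-4
    print(x, y, round(math.log(y), 2), y * 1e-4)
```

At the ends of the interval Y is 393 and 96441, so YS ranges from about 0.04 to about 9.6, which is reasonably scaled.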

6.5.2 Using the Scale Option in GAMS<br />

The rules for good scaling mentioned above are exclusively based on algorithmic needs.<br />

GAMS has been developed to improve the effectiveness of modelers, and one of the best<br />

ways seems to be to encourage modelers to write their models using a notation that is as<br />

"natural" as possible. The units of measurement is one part of this natural notation, and<br />

there is unfortunately often a conflict between what the modeler thinks is a good unit and<br />

what constitutes a well scaled model.<br />

To facilitate the translation between a natural model and a well scaled model GAMS has<br />

introduced the concept of a scale factor, both for variables and equations. The notation and<br />

the definitions are quite simple. First of all, scaling is by default turned off. To turn it on, enter the statement "&lt;model&gt;.SCALEOPT = 1;" in your GAMS program somewhere after the MODEL statement and before the SOLVE statement. "&lt;model&gt;" is the name of the model to be solved. If you want to turn scaling off again, enter the statement "&lt;model&gt;.SCALEOPT = 0;" somewhere before the next SOLVE.<br />

The scale factor of a variable or an equation is referenced with the suffix ".SCALE", i.e.<br />

the scale factor of variable X(I) is referenced as X.SCALE(I). Note that there is one scale<br />

value for each individual component of a multidimensional variable or equation. Scale<br />

factors can be defined in assignment statements with X.SCALE(I) on the left hand side,



and scale factors, both from variables and equations, can be used on the right hand side, for<br />

example to define other scale factors. The default scale factor is always 1, and a scale<br />

factor must be positive; GAMS will generate an execution time error if the scale factor is<br />

less than 1.e-20.<br />

The mathematical definition of scale factors is as follows: The scale factor on a variable, V_s,<br />

is used to relate the variable as seen by the modeler, V_m, to the variable as seen by the<br />

algorithm, V_a, as follows:<br />

V_m = V_a * V_s<br />

This means that if the variable scale, V_s, is chosen to represent the order of magnitude of<br />

the modeler's variable, V_m, then the variable seen by the algorithm, V_a, will be around 1.<br />

The scale factor on an equation, G_s, is used to relate the equation as seen by the modeler,<br />

G_m, to the equation as seen by the algorithm, G_a, as follows:<br />

G_m = G_a * G_s<br />

This means that if the equation scale, G_s, is chosen to represent the order of magnitude of<br />

the individual terms in the modeler's version of the equation, G_m, then the terms seen by the<br />

algorithm, G_a, will be around 1.<br />

The derivatives in the scaled model seen by the algorithm, i.e. dG_a/dV_a, are related to the<br />

derivatives in the modeler's model, dG_m/dV_m, through the formula:<br />

dG_a/dV_a = dG_m/dV_m * V_s / G_s<br />

i.e. the modeler's derivative is multiplied by the scale factor of the variable and divided by<br />

the scale factor of the equation. Note that the derivative is unchanged if V_s = G_s.<br />

Therefore, if you have a GAMS equation like<br />

G .. V =E= expression;<br />

and you select G_s = V_s then the derivative of V will remain 1.<br />

If we apply these rules to the example above with an intermediate variable we can get the<br />

following automatic scale calculation, based on an "average" reference value for X:<br />

SCALAR XREF; XREF = 6;<br />

Y.SCALE = SUM(P, A(P)*POWER(XREF,ORD(P)-1));<br />

YDEF.SCALE = Y.SCALE;<br />

or we could scale Y using values at the end of the X interval and add safeguards as<br />

follows:<br />

Y.SCALE = MAX( ABS(SUM(P, A(P)*POWER(X.LO,ORD(P)-1))),<br />

ABS(SUM(P, A(P)*POWER(X.UP,ORD(P)-1))),



0.01 );<br />

Lower and upper bounds on variables are automatically scaled in the same way as the<br />

variable itself. Integer and binary variables cannot be scaled.<br />

GAMS' scaling is in most respects hidden from the modeler. The solution values reported<br />

back from a solution algorithm, both primal and dual, are always reported in the user's<br />

notation. The algorithm's versions of the equations and variables are only reflected in the<br />

derivatives in the equation and column listings in the GAMS output if OPTION LIMROW<br />

and/or LIMCOL are positive, and in debugging output from the solution algorithm,<br />

generated with OPTION SYSOUT = ON. In addition, the numbers in the algorithm's<br />

iteration log will represent the scaled model: the infeasibilities and reduced gradients will<br />

correspond to the scaled model, and if the objective variable is scaled, the value of the<br />

objective function will be the scaled value.<br />

A final warning about scaling of multidimensional variables is appropriate. Assume<br />

variable X(I,J,K) only appears in the model when the parameter IJK(I,J,K) is nonzero, and<br />

assume that CARD(I) = CARD(J) = CARD(K) = 100 while CARD(IJK) is much smaller<br />

than 100**3 = 1.e6. Then you should only scale the variables that appear in the model, i.e. use<br />

the conditional statement<br />

X.SCALE(I,J,K)$IJK(I,J,K) = expression;<br />

The unconditional statement<br />

X.SCALE(I,J,K) = expression;<br />

will generate records for X in the GAMS database for all combinations of I, J, and K for<br />

which the expression is different from 1, i.e. up to 1.e6 records, and apart from spending a<br />

lot of time you will very likely run out of memory. Note that this warning also applies to<br />

non-default lower and upper bounds.<br />
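The safe pattern can be sketched as follows, with hypothetical sets I, J, and K and a sparsity parameter IJK:<br />

```gams
SET I / I1*I100 /, J / J1*J100 /, K / K1*K100 /;
PARAMETER IJK(I,J,K);
* IJK would be given nonzeros only for the sparse subset of
* combinations that actually appear in the model.
VARIABLE X(I,J,K);
* Assign scale factors (and non-default bounds) only to the
* components that appear in the model:
X.SCALE(I,J,K)$IJK(I,J,K) = 1.E3;
X.LO(I,J,K)$IJK(I,J,K) = 0;
```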

7. NLP AND DNLP MODELS<br />

GAMS has two classes of nonlinear model, NLP and DNLP. NLP models are defined as<br />

models in which all functions that appear with endogenous arguments, i.e. arguments that<br />

depend on model variables, are smooth with smooth derivatives. DNLP models can in<br />

addition use functions that are smooth but have discontinuous derivatives. The usual<br />

arithmetic operators (+, -, *, /, and **) can appear in both model classes.<br />

The functions that can be used with endogenous arguments in a DNLP model and not in an<br />

NLP model are ABS, MIN, and MAX and as a consequence the indexed operators SMIN<br />

and SMAX.<br />

Note that the offending functions can be applied to expressions that only involve constants<br />

such as parameters, var.l, and eq.m. Fixed variables are in principle constants, but GAMS<br />

makes its tests based on the functional form of a model, ignoring numerical parameter



values and numerical bound values, and terms involving fixed variables can therefore not<br />

be used with ABS, MIN, or MAX in an NLP model.<br />

The NLP solvers used by GAMS can also be applied to DNLP models. However, it is<br />

important to know that the NLP solvers attempt to solve the DNLP model as if it was an<br />

NLP model. The solver uses the derivatives of the constraints with respect to the variables<br />

to guide the search, and it ignores the fact that some of the derivatives may change<br />

discontinuously. There are at the moment no GAMS solvers designed specifically for<br />

DNLP models and no solvers that take into account the discontinuous nature of the<br />

derivatives in a DNLP model.<br />

7.1 DNLP models: what can go wrong?<br />

<strong>Solver</strong>s for NLP models are all based on making marginal improvements to some initial<br />

solution until some optimality conditions ensure no direction with marginal improvements<br />

exist. A point with no marginally improving direction is called a Local Optimum.<br />

The theory about marginal improvements is based on the assumption that the derivatives of<br />

the constraints with respect to the variables are a good approximation to the marginal<br />

changes in some neighborhood around the current point.<br />

Consider the simple NLP model, min SQR(x), where x is a free variable. The marginal<br />

change in the objective is the derivative of SQR(x) with respect to x, which is 2*x. At x =<br />

0, the marginal change in all directions is zero and x = 0 is therefore a Local Optimum.<br />

Next consider the simple DNLP model, min ABS(x), where x again is a free variable. The<br />

marginal change in the objective is still the derivative, which is +1 if x > 0 and -1 if x < 0.<br />

When x = 0, the derivative depends on whether we are going to increase or decrease x.<br />

Internally in the DNLP solver, we cannot be sure whether the derivative at 0 will be -1 or<br />

+1; it can depend on rounding tolerances. An NLP solver will start in some initial point,<br />

say x = 1, and look at the derivative, here +1. Since the derivative is positive, x is reduced<br />

to reduce the objective. After some iterations, x will be zero or very close to zero. The<br />

derivative will be +1 or -1, so the solver will try to change x. However, even small changes<br />

will not lead to a better objective function. The point x = 0 does not look like a Local<br />

Optimum, even though it is a Local Optimum. The result is that the NLP solver will<br />

muddle around for some time and then stop with a message saying something like: “The<br />

solution cannot be improved, but it does not appear to be optimal.”<br />

In this first case we got the optimal solution so we can just ignore the message. However,<br />

consider the following simple two-dimensional DNLP model: min ABS(x1+x2) +<br />

5*ABS(x1-x2) with x1 and x2 free variables. Start the optimization from x1 = x2 = 1.<br />

Small increases in x1 will increase both terms and small decreases in x1 (by dx) will<br />

decrease the first term by dx but it will increase the second term by 5*dx. Any change in<br />

x1 only is therefore bad, and it is easy to see that any change in x2 only also is bad. An<br />

NLP solver may therefore be stuck in the point x1 = x2 = 1, even though it is not a local<br />

solution: the direction (dx1,dx2) = (-1,-1) will lead to the optimum in x1 = x2 = 0.<br />

However, the NLP solver cannot distinguish what happens with this model from what



happened in the previous model; the message will be of the same type: “The solution<br />

cannot be improved, but it does not appear to be optimal.”<br />

7.2 Reformulation from DNLP to NLP<br />

The only reliable way to solve a DNLP model is to reformulate it as an equivalent smooth<br />

NLP model. Unfortunately, it may not always be possible. In this section we will give<br />

some examples of reformulations.<br />

The standard reformulation approach for the ABS function is to introduce positive and<br />

negative deviations as extra variables: The term z = ABS(f(x)) is replaced by z = fplus +<br />

fminus, where fplus and fminus are declared as positive variables and are defined with the<br />

identity: f(x) =E= fplus - fminus. The discontinuous derivative from the ABS function has<br />

disappeared and the part of the model shown here is smooth. The discontinuity has been<br />

converted into lower bounds on the new variables, but bounds are handled routinely by any<br />

NLP solver. The feasible space is larger than before; f(x) = 5 can be obtained both with<br />

fplus = 5, fminus = 0, and z = 5, and with fplus = 1000, fminus = 995, and z = 1995.<br />

Provided the objective function has some term that tries to minimize z, either fplus or<br />

fminus will become zero and z will end with its proper value.<br />
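As a sketch, with the hypothetical expression X - 5 standing in for f(x), the reformulation could look as follows:<br />

```gams
VARIABLE X, Z;
POSITIVE VARIABLE FPLUS, FMINUS;
EQUATION ZDEF, SPLIT;
* Z replaces ABS(X - 5):
ZDEF ..  Z =E= FPLUS + FMINUS;
SPLIT .. X - 5 =E= FPLUS - FMINUS;
* Some term in the objective must drive Z down so that one of
* FPLUS and FMINUS becomes zero.
```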

You may think that adding the smooth constraint fplus * fminus =e= 0 would ensure that<br />

either fplus or fminus is zero. However, this type of so-called complementarity constraint<br />

is "bad" in any NLP model. The feasible space consists of the two half lines: (fplus = 0 and<br />

fminus >= 0) and (fplus >= 0 and fminus=0). Unfortunately, the marginal change methods<br />

used by most NLP solvers cannot move from one half line to the other, and the solution is<br />

stuck at the half line it happens to reach first.<br />

There is also a standard reformulation approach for the MAX function. The equation z =E=<br />

MAX(f(x),g(y)) is replaced by the two inequalities, z =G= f(x) and z =G= g(y). Provided the<br />

objective function has some term that tries to minimize z, one of the constraints will<br />

become binding as equality and z will indeed be the maximum of the two terms.<br />

The reformulation for the MIN function is similar. The equation z =E= MIN(f(x),g(y)) is<br />

replaced by the two inequalities, z =L= f(x) and z =L= g(y). Provided the objective<br />

function has some term that tries to maximize z, one of the constraints will become binding<br />

as equality and z is indeed the minimum of the two terms.<br />
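As a sketch, with the hypothetical terms SQR(X) and Y + 1 in place of f(x) and g(y):<br />

```gams
VARIABLE X, Y, ZMAX, ZMIN;
EQUATION MAX1, MAX2, MIN1, MIN2;
* ZMAX replaces MAX(SQR(X), Y+1); the objective must push ZMAX down:
MAX1 .. ZMAX =G= SQR(X);
MAX2 .. ZMAX =G= Y + 1;
* ZMIN replaces MIN(SQR(X), Y+1); the objective must push ZMIN up:
MIN1 .. ZMIN =L= SQR(X);
MIN2 .. ZMIN =L= Y + 1;
```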

MAX and MIN can have more than two arguments and the extension should be obvious.<br />

The non-smooth indexed operators, SMAX and SMIN can be handled using a similar<br />

technique: for example, z =E= SMAX(I, f(x,I) ) is replaced by the indexed inequality:<br />

Ineq(I) .. z =G= f(x,I);<br />

The reformulations that are suggested here all enlarge the feasible space. They require the<br />

objective function to move the final solution to the intersection of this larger space with the<br />

original feasible space. Unfortunately, the objective function is not always so helpful. If it<br />

is not, you may try using one of the smooth approximations described next. However, you<br />

should realize, that if the objective function cannot help the "good" approximations



described here, then your overall model is definitely non-convex and it is likely to have<br />

multiple local optima.<br />

7.3 Smooth Approximations<br />

Smooth approximations to the non-smooth functions ABS, MAX, and MIN are<br />

approximations that have function values close to the original functions, but have smooth<br />

derivatives.<br />

A smooth GAMS approximation for ABS(f(x)) is<br />

SQRT( SQR(f(x)) + SQR(delta) )<br />

where delta is a small scalar. The value of delta can be used to control the accuracy of the<br />

approximation and the curvature around f(x) = 0. The approximation error is largest when<br />

f(x) is zero, in which case the error is delta. The error is reduced to approximately<br />

SQR(delta)/2 for f(x) = 1. The second derivative is 1/delta at f(x) = 0 (excluding terms<br />

related to the second derivative of f(x)). A delta value between 1.e-3 and 1.e-4 should in<br />

most cases be appropriate. It is possible to use a larger value in an initial optimization,<br />

reduce it and solve the model again. You should note that if you reduce delta below 1.e-4<br />

then large second order terms might lead to slow convergence or even prevent<br />

convergence.<br />
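A sketch of the approximation in GAMS, with the hypothetical expression X - 5 in place of f(x):<br />

```gams
SCALAR DELTA / 1.E-3 /;
VARIABLE X, Z;
EQUATION ZDEF;
* Z approximates ABS(X - 5); the error is DELTA at X = 5 and
* shrinks as X moves away from 5.
ZDEF .. Z =E= SQRT( SQR(X - 5) + SQR(DELTA) );
```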

The approximation shown above has its largest error when f(x) = 0 and smaller errors when<br />

f(x) is far from zero. If it is important to get accurate values of ABS exactly when f(x) = 0,<br />

then you may use the alternative approximation<br />

SQRT( SQR(f(x)) + SQR(delta) ) – delta<br />

instead. The only difference is the constant term. The error is zero when f(x) is zero and the<br />

error grows to -delta when f(x) is far from zero.<br />

Some theoretical work uses the Huber, H(*), function as an approximation for ABS. The<br />

Huber function is defined as<br />

H(x) = x for x > delta,<br />

H(x) = -x for x < -delta and<br />

H(x) = SQR(x)/2/delta + delta/2 for –delta < x < delta.<br />

Although the Huber function has some nice properties (it is, for example, exact when<br />

ABS(x) > delta), it is not so useful for GAMS work because it is defined with different<br />

formulae for the three pieces.<br />

A smooth GAMS approximation for MAX(f(x),g(y)) is<br />

( f(x) + g(y) + SQRT( SQR(f(x)-g(y)) + SQR(delta) ) )/2



where delta again is a small scalar. The approximation error is delta/2 when f(x) = g(y) and<br />

decreases with the difference between the two terms. As before, you may subtract a<br />

constant term to shift the approximation error from the area f(x) = g(y) to areas where the<br />

difference is large. The resulting approximation becomes<br />

( f(x) + g(y) + SQRT( SQR(f(x)-g(y)) + SQR(delta) ) – delta )/2<br />

Similar smooth GAMS approximations for MIN(f(x),g(y)) are<br />

( f(x) + g(y) - SQRT( SQR(f(x)-g(y)) + SQR(delta) ) )/2<br />

and<br />

( f(x) + g(y) - SQRT( SQR(f(x)-g(y)) + SQR(delta) ) + delta )/2<br />

Appropriate delta values are the same as for the ABS approximation: in the range from<br />

1.e-2 to 1.e-4.<br />
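As a sketch, with hypothetical terms SQR(X) and Y + 1:<br />

```gams
SCALAR DELTA / 1.E-3 /;
VARIABLE X, Y, ZMAX;
EQUATION ZDEF;
* Smooth approximation of MAX(SQR(X), Y+1); change the sign in
* front of SQRT to minus to approximate MIN instead.
ZDEF .. ZMAX =E= ( SQR(X) + Y + 1
                 + SQRT( SQR(SQR(X) - (Y+1)) + SQR(DELTA) ) )/2;
```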

It appears that there are no simple symmetric extensions for MAX and MIN of three or<br />

more arguments or for indexed SMAX and SMIN.<br />

7.4 Are DNLP models always non-smooth?<br />

A DNLP model is defined as a model that has an equation with an ABS, MAX, or MIN<br />

function with endogenous arguments. The non-smooth properties of DNLP models are<br />

derived from the non-smooth properties of these functions through the use of the chain<br />

rule. However, composite expressions involving ABS, MAX, or MIN can in some cases<br />

have smooth derivatives and the model can therefore in some cases be smooth.<br />

One example of a smooth expression involving an ABS function is common in water<br />

systems modeling. The pressure loss over a pipe, dH, is proportional to the flow, Q, to<br />

some power, P. P is usually around +2. The sign of the loss depends on the direction of the<br />

flow so dH is positive if Q is positive and negative if Q is negative. Although GAMS has a<br />

SIGN function, it cannot be used in a model because of its discontinuous nature. Instead,<br />

the pressure loss can be modeled with the equation dH =E= const * Q * ABS(Q)**(P-1),<br />

where the sign of the Q-term takes care of the sign of dH, and the ABS function guarantees<br />

that the real power ** is applied to a non-negative number. Although the expression<br />

involves the ABS function, the derivatives are smooth as long as P is greater than 1. The<br />

derivative with respect to Q is const * P * ABS(Q)**(P-1), both for Q > 0 and for<br />

Q < 0. The limit for Q going to zero from both right and left is 0, so<br />

the derivative is smooth in the critical point Q = 0 and the overall model is therefore<br />

smooth.<br />
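A sketch of the corresponding GAMS fragment, with hypothetical values for the constant and the power:<br />

```gams
SCALAR CONST / 0.5 /, P / 2 /;
VARIABLE Q, DH;
EQUATION HLOSS;
* The sign of Q carries the sign of DH; ABS keeps the base of the
* real power non-negative. Smooth for P > 1, but GAMS still
* requires the model to be declared as DNLP.
HLOSS .. DH =E= CONST * Q * ABS(Q)**(P-1);
```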

Another example of a smooth expression is the following terrible-looking Sigmoid<br />

expression:



Sigmoid(x) = exp( min(x,0) ) / (1+exp(-abs(x)))<br />

The standard definition of the sigmoid function is<br />

Sigmoid(x) = exp(x) / ( 1+exp(x) )<br />

This definition is well behaved for negative and small positive x, but it is not well behaved<br />

for large positive x since exp overflows. The alternative definition:<br />

Sigmoid(x) = 1 / ( 1+exp(-x) )<br />

is well behaved for positive and slightly negative x, but it overflows for very negative x.<br />

Ideally, we would like to select the first expression when x is negative and the second<br />

when x is positive, i.e.<br />

Sigmoid(x) = (exp(x)/(1+exp(x)))$(x lt 0) + (1/(1+exp(-x)))$(x gt 0)<br />

but a $-control that depends on an endogenous variable is illegal. The first expression<br />

above solves this problem. When x is negative, the numerator becomes exp(x) and the<br />

denominator becomes 1+exp(x). And when x is positive, the numerator becomes exp(0) =<br />

1 and the denominator becomes 1+exp(-x). Since the two expressions are mathematically<br />

identical, the combined expression is of course smooth, and the exp function is never<br />

evaluated for a positive argument.<br />
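The overflow-safe form could be sketched in GAMS as follows, for a free variable X:<br />

```gams
VARIABLE X, S;
EQUATION SDEF;
* EXP is never evaluated for a positive argument, so the
* expression cannot overflow for large ABS(X).
SDEF .. S =E= EXP( MIN(X,0) ) / ( 1 + EXP( -ABS(X) ) );
* MIN and ABS force the model class to DNLP even though the
* combined expression is smooth.
```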

Unfortunately, GAMS cannot recognize this and similar special cases so you must always<br />

solve models with endogenous ABS, MAX, or MIN as DNLP models, even in the cases<br />

where the model is smooth.<br />

7.5 Are NLP models always smooth?<br />

NLP models are defined as models in which all operators and functions are smooth. The<br />

derivatives of composite functions, which can be derived using the chain rule, will therefore<br />

in general be smooth. However, this is not always the case. The following simple composite<br />

function is not smooth: y = SQRT( SQR(x) ). The composite function is equivalent to y =<br />

ABS(x), one of the non-smooth DNLP functions.<br />

What went wrong? The chain rule for computing derivatives of a composite function<br />

assumes that all intermediate expressions are well defined. However, the derivative of<br />

SQRT grows without bound when the argument approaches zero, violating the assumption.<br />

There are not many cases that can lead to non-smooth composite functions, and they are all<br />

related to the case above: The real power, x**y, for 0 < y < 1 and x approaching zero. The<br />

SQRT function is a special case since it is equivalent to x**y for y = 0.5.



If you have expressions involving a real power with an exponent between 0 and 1 or a<br />

SQRT, you should in most cases add bounds to your variables to ensure that neither the derivative<br />

nor any intermediate terms used in its calculation become undefined. In the example<br />

above, SQRT( SQR(x) ), a bound on x is not possible since x should be allowed to be both<br />

positive and negative. Instead, changing the expression to SQRT( SQR(x) + SQR(delta) )<br />

may lead to an appropriate smooth formulation.<br />

Again, GAMS cannot recognize the potential danger in an expression involving a real<br />

power, and the presence of a real power operator is not considered enough to flag a model<br />

as a DNLP model. During the solution process, the NLP solver will compute constraint<br />

values and derivatives in various points within the bounds defined by the modeler. If these<br />

calculations result in undefined intermediate or final values, a function evaluation error is<br />

reported, an error counter is incremented, and the point is flagged as a bad point. The<br />

following action will then depend on the solver. The solver may try to continue, but only if<br />

the modeler has allowed it with an “Option Domlim = xxx”. The problem of detecting<br />

discontinuities is changed from a structural test at the GAMS model generation stage to a<br />

dynamic test during the solution process.<br />

You may have a perfectly nice model in which intermediate terms become undefined. The<br />

composite function SQRT( POWER(x,3) ) is mathematically well defined around x = 0,<br />

but the computation will involve the derivative of SQRT at zero, which is undefined. It is the<br />

modeler’s responsibility to write expressions in a way that avoids undefined intermediate<br />

terms in the function and derivatives computations. In this case, you may either add a small<br />

strictly positive lower bound on x or rewrite the function as x**1.5.<br />
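Both remedies can be sketched as follows (with hypothetical variables Y1 and Y2):<br />

```gams
VARIABLE X, Y1, Y2;
EQUATION YDEF1, YDEF2;
* Remedy 1: keep X strictly positive so the derivative of SQRT
* is always defined.
X.LO = 1.E-6;
YDEF1 .. Y1 =E= SQRT( POWER(X,3) );
* Remedy 2: rewrite so no intermediate derivative is undefined
* (the real power also requires X to stay non-negative).
YDEF2 .. Y2 =E= X**1.5;
```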

APPENDIX A: ALGORITHMIC INFORMATION<br />

The objective of this Appendix is to give technically oriented users some understanding of<br />

what CONOPT is doing so they can get more information out of the iteration log. This<br />

information can be used to prevent or circumvent algorithmic difficulties or to make<br />

informed guesses about which options to experiment with to improve CONOPT’s<br />

performance on particular model classes.<br />

A1. Overview of GAMS/CONOPT<br />

GAMS/CONOPT is a GRG-based algorithm specifically designed for large nonlinear<br />

programming problems expressed in the following form<br />

min or max f(x) (1)<br />

subject to g(x) = b (2)<br />

lo < x < up (3)<br />

where x is the vector of optimization variables, lo and up are vectors of lower and upper<br />

bounds, some of which may be minus or plus infinity, b is a vector of right hand sides, and<br />

f and g are differentiable nonlinear functions that define the model. n will in the following



denote the number of variables and m the number of equations. (2) will be referred to as<br />

the (general) constraints and (3) as the bounds.<br />

The relationship between the mathematical model in (1)-(3) above and the GAMS model is<br />

simple: The inequalities defined in GAMS with =L= or =G= are converted into equalities<br />

by addition of properly bounded slacks. Slacks with lower and upper bound of zero are<br />

added to all GAMS equalities to ensure that the Jacobian matrix, i.e. the matrix of<br />

derivatives of the functions g with respect to the variables x, has full row rank. All these<br />

slacks are, together with the normal GAMS variables, included in x. lo represents the lower<br />

bounds defined in GAMS, either implicitly with the POSITIVE VARIABLE declaration,<br />

or explicitly with the VAR.LO notation, as well as any bounds on the slacks. Similarly, up<br />

represents upper bounds defined in GAMS, e.g. with the VAR.UP notation, as well as any<br />

bounds on the slacks. g represents the non-constant terms of the GAMS equations<br />

themselves; non-constant terms appearing on the right hand side are by GAMS moved to<br />

the left hand side and constant terms on the left hand side are moved to the right. The<br />

objective function f is simply the GAMS variable to be minimized or maximized.<br />

Additional comments on assumptions and design criteria can be found in the Introduction<br />

to the main text.<br />

A2. The CONOPT Algorithm<br />

The algorithm used in GAMS/CONOPT is based on the GRG algorithm first suggested by<br />

Abadie and Carpentier (1969). The actual implementation has many modifications to make<br />

it efficient for large models and for models written in the GAMS language. Details on the<br />

algorithm can be found in Drud (1985 and 1992). Here we will just give a short verbal<br />

description of the major steps in a generic GRG algorithm. The later sections in this<br />

Appendix will discuss some of the enhancements in CONOPT that make it possible to<br />

solve large models.<br />

The key steps in any GRG algorithm are:<br />

1. Initialize and Find a feasible solution.<br />

2. Compute the Jacobian of the constraints, J.<br />

3. Select a set of n basic variables, xb, such that B, the sub-matrix<br />

of basic columns from J, is nonsingular. Factorize B. The<br />

remaining variables, xn, are called nonbasic.<br />

4. Solve B^T u = df/dxb for the multipliers u.<br />

5. Compute the reduced gradient, r = df/dx - J^T u. r will by<br />

definition be zero for the basic variables.<br />

6. If r projected on the bounds is small, then stop. The current<br />

point is close to optimal.<br />

7. Select the set of superbasic variables, xs, as a subset of the<br />

nonbasic variables that profitably can be changed, and find a<br />

search direction, ds, for the superbasic variables based on rs and<br />

possibly on some second order information.<br />

8. Perform a line search along the direction d. For each step, xs is<br />

changed in the direction ds and xb is subsequently adjusted to



satisfy g(xb,xs) = b in a pseudo-Newton process using the factored<br />

B from step 3.<br />

9. Go to 2.<br />

The individual steps are of course much more detailed in a practical implementation like<br />

CONOPT. Step 1 consists of several pre-processing steps as well as a special Phase 0 and a<br />

scaling procedure as described in the following sections A3 to A6. The optimizing steps<br />

are specialized in several versions according to whether the model appears to be almost<br />

linear or not. For "almost" linear models some of the linear algebra work involving the<br />

matrices J and B can be avoided or done using cheap LP-type updating techniques, second<br />

order information is not relevant in step 7, and the line search in step 8 can be improved by<br />

observing that the optimal step as in LP almost always will be determined by the first<br />

variable that reaches a bound. Similarly, when the model appears to be fairly nonlinear<br />

other aspects can be optimized: the set of basic variables will often remain constant over<br />

several iterations, and other parts of the sparse matrix algebra will take advantage of this<br />

(section A7 and A8). If the model is "very" linear an improved search direction (step 7) can<br />

be computed using some specialized inner LP-like iterations (section A9), and a steepest<br />

edge procedure can be useful for certain models that need very many iterations (section<br />

A10).<br />

The remaining two sections give some short guidelines for selecting non-default options<br />

(section A11), and discuss miscellaneous topics (section A12) such as CONOPT’s facilities<br />

for strictly triangular models (A12.1) and for square systems of equations, in GAMS<br />

represented by the model class called CNS or Constrained Nonlinear Systems (A12.2), as<br />

well as numerical difficulties due to loss of feasibility (A12.3) and slow or no progress due<br />

to stalling (A12.4).<br />

A3. Iteration 0: The Initial Point<br />

The first few "iterations" in the iteration log (see section 2 in the main text for an example)<br />

are special initialization iterations, but they have been counted as real iterations to allow<br />

the user to interrupt at various stages during initialization. Iteration 0 corresponds to the<br />

input point exactly as it was received from GAMS. The sum of infeasibilities in the column<br />

labeled "Infeasibility" includes all residuals, also from the objective constraint where "Z<br />

=E= expression" will give rise to the term abs( Z - expression ) that may be nonzero if Z<br />

has not been initialized. You may stop CONOPT after iteration 0 with "OPTION<br />

ITERLIM = 0;" in GAMS. The solution returned to GAMS will contain the input point<br />

and the values of the constraints in this point. The marginals of both variables and<br />

equations have not yet been computed and they will be returned as EPS.<br />

This possibility can be used for debugging when you have a reference point that should be<br />

feasible, but is infeasible for unknown reasons. Initialize all variables to their reference<br />

values, also all intermediated variables, and call CONOPT with ITERLIM = 0. Then<br />

compute and display the following measures of infeasibility for each block of constraints,<br />

represented by the generic name EQ:



=E= constraints: ROUND(ABS(EQ.L - EQ.LO),3)<br />

=L= constraints: ROUND(MIN(0,EQ.L - EQ.UP),3)<br />

=G= constraints: ROUND(MIN(0,EQ.LO - EQ.L),3)<br />

The ROUND function rounds to 3 decimal places so GAMS will only display the<br />

infeasibilities that are larger than 5.e-4.<br />
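For a hypothetical model M with an =E= equation block EQ(I), the check could be sketched as:<br />

```gams
* Initialize all variables to their reference values first, then:
OPTION ITERLIM = 0;
SOLVE M USING NLP MINIMIZING OBJ;
PARAMETER INFEAS(I) 'infeasibility of =E= block EQ';
INFEAS(I) = ROUND( ABS( EQ.L(I) - EQ.LO(I) ), 3 );
DISPLAY INFEAS;
```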

Similar information can be derived from inspection of the equation listing generated by<br />

GAMS with "OPTION LIMROW = nn;", but although the method of going via<br />

CONOPT requires a little more work during implementation it can be convenient in many<br />

cases, for example for large models and for automated model checking.<br />

A4. Iteration 1: Preprocessing<br />

Iteration 1 corresponds to a pre-processing step. Constraint-variable pairs that can be<br />

solved a priori (so-called pre-triangular equations and variables) are solved and the<br />

corresponding variables are assigned their final values. Constraints that always can be<br />

made feasible because they contain a free variable with a constant coefficient (so-called<br />

post-triangular equation-variable pairs) are excluded from the search for a feasible solution,<br />

and from the Infeasibility measure in the iteration log. Implicitly, the equations and<br />

variables are ordered as shown in Fig. 1.<br />

[Figure: the Jacobian reordered into equation blocks A, B, and C and variable blocks I, III, and II, with a block of zeros in the upper right corner.]<br />

Figure 1: The ordered Jacobian after Preprocessing.



A4.1 Preprocessing: Pre-triangular Variables and Constraints<br />

The pre-triangular equations are those labeled A in Fig. 1. They are solved one by one<br />

along the "diagonal" with respect to the pre-triangular variables labeled I. In practice,<br />

GAMS/CONOPT looks for equations with only one non-fixed variable. If such an equation<br />

exists, GAMS/CONOPT tries to solve it with respect to this non-fixed variable. If this is<br />

not possible the overall model is infeasible, and the exact reason for the infeasibility is easy<br />

to identify as shown in the examples below. Otherwise, the final value of the variable has<br />

been determined, the variable can for the rest of the optimization be considered fixed, and<br />

the equation can be removed from further consideration. The result is that the model has<br />

one equation and one non-fixed variable less. As variables are fixed new equations with<br />

only one non-fixed variable may emerge, and CONOPT repeats the process until no more<br />

equations with one non-fixed variable can be found.<br />

This pre-processing step will often reduce the effective size of the model to be solved. Although the pre-triangular variables and equations are removed from the model during the optimization, CONOPT keeps them around until the final solution is found. The dual variables for the pre-triangular equations are then computed so they become available in GAMS.

CONOPT has a special option for analyzing and solving completely triangular models. This option is described in section A12.1.

The following small GAMS model shows an example of a model with pre-triangular variables and equations:

   VARIABLE X1, X2, X3, OBJ;
   EQUATION E1, E2, E3;
   E1 .. LOG(X1) + X2 =E= 1.6;
   E2 .. 5 * X2 =E= 3;
   E3 .. OBJ =E= SQR(X1) + 2 * SQR(X2) + 3 * SQR(X3);
   X1.LO = 0.1;
   MODEL DEMO / ALL /; SOLVE DEMO USING NLP MINIMIZING OBJ;

Equation E2 is first solved with respect to X2 (result 3/5 = 0.6). It is easy to solve the equation since X2 appears linearly, and the result will be unique. X2 is then fixed and the equation is removed. Equation E1 is now a candidate since X1 is the only remaining non-fixed variable in the equation. Here X1 appears nonlinearly, and the value of X1 is found using an iterative scheme based on Newton's method. The iterations are started from the value provided by the modeler or from the default initial value. In this case X1 is started from the default initial value, i.e. the lower bound of 0.1, and the result after some iterations is X1 = 2.718 = EXP(1).
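The Newton iteration for E1 can be reproduced in a few lines (a sketch, not CONOPT's code): with X2 fixed at 0.6, E1 reduces to LOG(X1) = 1.0, whose root is EXP(1).

```python
import math

x = 0.1                            # default initial value: the lower bound
for _ in range(25):
    r = math.log(x) + 0.6 - 1.6    # residual of E1 with X2 fixed at 0.6
    x -= r / (1.0 / x)             # Newton step; d/dx LOG(x) = 1/x
print(round(x, 3))                 # → 2.718
```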

During the recursive solution process it may not be possible to solve one of the equations. If the lower bound on X1 in the model above is changed to 3.0 you will get the following output:

   ** An equation in the pre-triangular part of the model cannot
      be solved because the critical variable is at a bound.

      Residual= 9.86122887E-02
      Tolerance (RTNWTR)= 6.34931126E-07

   E1: Infeasibility in pre-triangular part of model.
   X1: Infeasibility in pre-triangular part of model.

   The solution order of the critical equations and
   variables is:

   E2 is solved with respect to
   X2. Solution value = 6.0000000000E-01

   E1 could not be solved with respect to
   X1. Final solution value = 3.0000000000E+00
   E1 remains infeasible with residual = 9.8612288668E-02

The problem is, as indicated, that the variable to be solved for is at a bound, and the value suggested by Newton's method is on the infeasible side of the bound. The critical variable is X1 and the critical equation is E1, i.e. X1 tries to exceed its bound when CONOPT solves equation E1 with respect to X1. To help you analyze the problem, especially for larger models, CONOPT reports the solution sequence that led to the infeasibility: in this case equation E2 was first solved with respect to variable X2, then equation E1 was attempted to be solved with respect to X1, at which stage the problem appeared. To make the analysis easier CONOPT will always report the minimal set of equations and variables that caused the infeasibility.
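The reported residual can be checked by hand, independently of CONOPT: with X2 = 0.6 and X1 projected onto its new lower bound of 3.0, the residual of E1 is LOG(3) + 0.6 - 1.6, exactly the value in the output above.

```python
import math

residual = math.log(3.0) + 0.6 - 1.6   # E1 evaluated at X1 = 3.0, X2 = 0.6
print(f"{residual:.8E}")               # → 9.86122887E-02
```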

Another type of infeasibility is shown by the following model:

   VARIABLE X1, X2, X3, OBJ;
   EQUATION E1, E2, E3;
   E1 .. SQR(X1) + X2 =E= 1.6;
   E2 .. 5 * X2 =E= 3;
   E3 .. OBJ =E= SQR(X1) + 2 * SQR(X2) + 3 * SQR(X3);
   MODEL DEMO / ALL /; SOLVE DEMO USING NLP MINIMIZING OBJ;

where LOG(X1) has been replaced by SQR(X1) and the lower bound on X1 has been removed. This model gives the message:

   ** An equation in the pre-triangular part of the model cannot
      be solved because of too small pivot.
      Adding a bound or initial value may help.

      Residual= 4.0000000
      Tolerance (RTNWTR)= 6.34931126E-07

   E1: Infeasibility in pre-triangular part of model.
   X1: Infeasibility in pre-triangular part of model.

   The solution order of the critical equations and
   variables is:

   E2 is solved with respect to
   X2. Solution value = 6.0000000000E-01

   E1 could not be solved with respect to
   X1. Final solution value = 0.0000000000E+00
   E1 remains infeasible with residual = -4.0000000000E+00

After equation E2 has been solved with respect to X2, equation E1, which contains the term SQR(X1), should be solved with respect to X1. The initial value of X1 is the default value zero. The derivative of E1 with respect to X1 is therefore zero, and it is not possible for CONOPT to determine whether to increase or decrease X1. If X1 is given a nonzero initial value the model will solve. If X1 is given a positive initial value the equation will give X1 = 1, and if X1 is given a negative initial value the equation will give X1 = -1.
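The zero-pivot situation is easy to reproduce numerically (an illustration only, not CONOPT's code): with X2 = 0.6, E1 reduces to SQR(X1) = 1.0. The derivative 2*X1 vanishes at the default initial value zero, while any nonzero start leads Newton to the root with the same sign.

```python
def newton_sqr(x0, iters=50):
    """Newton on SQR(X1) - 1 = 0; the derivative is 2*X1."""
    x = x0
    for _ in range(iters):
        d = 2.0 * x
        if d == 0.0:
            raise ZeroDivisionError("zero pivot at X1 = 0")
        x -= (x * x - 1.0) / d
    return x

# Positive and negative starts converge to +1 and -1; a zero start fails.
```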

The last type of infeasibility that can be detected during the solution of the pre-triangular or recursive equations is shown by the following example:

   VARIABLE X1, X2, X3, OBJ;
   EQUATION E1, E2, E3, E4;
   E1 .. LOG(X1) + X2 =E= 1.6;
   E2 .. 5 * X2 =E= 3;
   E3 .. OBJ =E= SQR(X1) + 2 * SQR(X2) + 3 * SQR(X3);
   E4 .. X1 + X2 =E= 3.318;
   X1.LO = 0.1;
   MODEL DEMO / ALL /; SOLVE DEMO USING NLP MINIMIZING OBJ;

that is derived from the first model by the addition of equation E4. This model produces the following output:

   ** An equation is inconsistent with other equations in the
      pre-triangular part of the model.

      Residual= 2.81828458E-04
      Tolerance (RTNWTR)= 6.34931126E-07

   The pre-triangular feasibility tolerance may be relaxed with
   a line:

   SET RTNWTR X.XX

   in the CONOPT control program.

   E4: Inconsistency in pre-triangular part of model.

   The solution order of the critical equations and
   variables is:

   E2 is solved with respect to
   X2. Solution value = 6.0000000000E-01

   E1 is solved with respect to
   X1. Solution value = 2.7182818285E+00

   All variables in equation E4 are now fixed
   and the equation is infeasible. Residual = 2.8182845830E-04

First E2 is solved with respect to X2, then E1 is solved with respect to X1, as indicated by the last part of the output. At this point all variables that appear in equation E4, namely X1 and X2, are fixed, but the equation is not feasible. E4 is therefore inconsistent with E1 and E2, as indicated by the first part of the output. In this case the inconsistency is fairly small, 2.8E-04, so it could be a tolerance problem. CONOPT will always report the tolerance that was used, rtnwtr, the triangular Newton tolerance, and if the infeasibility is small it will also tell how the tolerance can be relaxed. Section 5 in the main text on "The CONOPT Options File" gives further details on how to change tolerances, and a complete list of options is given in Appendix B.
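Again the reported inconsistency can be verified by hand: with X2 = 0.6 and X1 = EXP(1), the residual of E4 is EXP(1) + 0.6 - 3.318. The last digits differ slightly from the output above because CONOPT's Newton iteration for X1 only solves E1 to within the tolerance RTNWTR.

```python
import math

residual = math.exp(1.0) + 0.6 - 3.318   # E4 at the pre-triangular point
print(f"{residual:.4E}")                 # → 2.8183E-04
```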

You can turn the identification and solution of pre-triangular variables and equations off by adding the line "lspret = f" in the CONOPT control program. This can be useful in some special cases where the point defined by the pre-triangular equations gives a function evaluation error in the remaining equations. The following example shows this:

   VARIABLE X1, X2, X3, X4, OBJ;
   EQUATION E1, E2, E3, E4;
   E1 .. LOG(1+X1) + X2 =E= 0;
   E2 .. 5 * X2 =E= -3;
   E3 .. OBJ =E= 1*SQR(X1) + 2*SQRT(0.01 + X2 - X4) + 3*SQR(X3);
   E4 .. X4 =L= X2;
   MODEL FER / ALL /; SOLVE FER MINIMIZING OBJ USING NLP;

All the nonlinear functions are defined in the initial point in which all variables have their default value of zero. The pre-processor will compute X2 = -0.6 from E2 and X1 = 0.822 from E1. When CONOPT continues and attempts to evaluate E3, the argument to the SQRT function is negative when these new triangular values are used together with the initial X4 = 0, and CONOPT cannot backtrack to some safe point since the function evaluation error appears the first time E3 is evaluated. When the pre-triangular preprocessor is turned off, X2 and X4 are changed at the same time and the argument to the SQRT function remains positive throughout the computations. Note that although the purpose of the E4 inequality is to guarantee that the argument of the SQRT function is positive in all points, and although E4 is satisfied in the initial point, it is not satisfied after the pre-triangular constraints have been solved. Only simple bounds are strictly enforced at all times. Also note that if the option "lspret = f" is used then feasible linear constraints will in fact remain feasible.

An alternative (and preferable) way of avoiding the function evaluation error is to define an intermediate variable equal to 0.01+X2-X4 and add a lower bound of 0.01 on this variable. The inequality E4 could then be removed and the overall model would have the same number of constraints.
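The failure mode can be reproduced with plain arithmetic (an illustration only): the pre-triangular solve gives X2 = -0.6 from E2 and X1 = EXP(0.6) - 1 ≈ 0.822 from E1, and at that point the SQRT argument in E3 is already negative while X4 still has its initial value zero.

```python
import math

x2 = -3.0 / 5.0              # from E2: 5*X2 = -3
x1 = math.exp(-x2) - 1.0     # from E1: LOG(1+X1) + X2 = 0
x4 = 0.0                     # still at its default initial value
arg = 0.01 + x2 - x4         # argument of SQRT in E3
print(round(x1, 3), arg < 0) # → 0.822 True
```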



A4.2 Preprocessing: Post-triangular Variables and Constraints

Consider the following fragment of a larger GAMS model:

   VARIABLE UTIL(T)      Utility in period T
            TOTUTIL      Total Utility;
   EQUATION UTILDEF(T)   Definition of Utility
            TUTILDEF     Definition of Total Utility;
   UTILDEF(T).. UTIL(T) =E= nonlinear function of other variables;
   TUTILDEF  .. TOTUTIL =E= SUM( T , UTIL(T) / (1+R)**ORD(T) );
   MODEL DEMO / ALL /; SOLVE DEMO MAXIMIZING TOTUTIL USING NLP;

The part of the model shown here is easy to read and from a modeling point of view it should be considered well written. However, it could be more difficult to solve than a model in which the variable UTIL(T) was substituted out, because all the UTILDEF equations are nonlinear constraints that the algorithm must ensure are satisfied.

To make well written models like this easy to solve, CONOPT will move as many nonlinearities as possible from the constraints to the objective function. This automatically changes the model from the form that is preferable for the modeler to the form that is preferable for the algorithm. In this process CONOPT looks for free variables that only appear in one equation outside the objective function. If such a variable exists and it appears linearly in the equation, like UTIL(T) appears with coefficient 1 in equation UTILDEF(T), then the equation can always be solved with respect to the variable. This means that the variable logically can be substituted out of the model and the equation can be removed. The result is a model that has one variable and one equation less, and a more complex objective function. As variables and equations are substituted out, new candidates for elimination may emerge, so CONOPT repeats the process until no more candidates exist.
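The effect of the substitution can be sketched with ordinary function composition (hypothetical numbers and functions, not the GAMS model above): instead of keeping one intermediate value and one defining equality per period, the defining expression is composed directly into the objective.

```python
# A tiny discounted-utility example in the spirit of UTILDEF/TUTILDEF.
R = 0.05
periods = range(1, 6)

def util(t, x):
    """Stands in for the 'nonlinear function of other variables'."""
    return (x + t) ** 0.5

def totutil_with_intermediates(x):
    # Formulation with explicit UTIL(T) values (one equality per period).
    util_values = {t: util(t, x) for t in periods}              # UTILDEF(T)
    return sum(util_values[t] / (1 + R) ** t for t in periods)  # TUTILDEF

def totutil_substituted(x):
    # Post-triangular form: UTIL(T) substituted into the objective.
    return sum(util(t, x) / (1 + R) ** t for t in periods)

# Both formulations define the same objective function of x.
```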

This so-called post-triangular preprocessing step will often move several nonlinear constraints into the objective function where they are much easier to handle, and the effective size of the model will decrease. In some cases the result can even be a model without any general constraints. The name post-triangular is derived from the way the equations and variables appear in the permuted Jacobian in Fig. 1. The post-triangular equations and variables are the ones in the lower right hand corner labeled B and II, respectively.

In the example above, the UTIL variables will be substituted out of the model together with the nonlinear UTILDEF equations provided the UTIL variables are free and do not appear elsewhere in the model. The resulting model will have fewer nonlinear constraints, but more nonlinear terms in the objective function.

Although you may know that the nonlinear functions on the right hand side of UTILDEF always will produce positive UTIL values, you should in general not declare UTIL to be a POSITIVE VARIABLE. If you do, GAMS/CONOPT may not be able to eliminate UTIL(T), and the model will be harder to solve. It is of course unfortunate that a redundant bound changes the solution behavior, and to reduce this problem the new CONOPT2 will try to estimate the range of nonlinear expressions using interval arithmetic. If the computed range of the right hand side of the UTILDEF constraint is within the bounds of UTIL, then these bounds cannot be binding and UTIL is a so-called implied free variable that can be eliminated.

The following model fragment from a least squares model shows another case where the preprocessing step in GAMS/CONOPT is useful:

   VARIABLE RESIDUAL(CASE)   Residuals
            SSQ              Sum of Squared Residuals;
   EQUATION EQEST(CASE)      Equation to be estimated
            SSQDEF           Definition of objective;
   EQEST(CASE).. RESIDUAL(CASE) =E= expression in other variables;
   SSQDEF     .. SSQ =E= SUM( CASE, SQR( RESIDUAL(CASE) ) );
   MODEL LSQLARGE / ALL /; SOLVE LSQLARGE USING NLP MINIMIZING SSQ;

GAMS/CONOPT will substitute the RESIDUAL variables out of the model using the EQEST equations. The model solved by GAMS/CONOPT is therefore mathematically equivalent to the following GAMS model:

   VARIABLE SSQ    Sum of Squared Residuals;
   EQUATION SSQD   Definition of objective;
   SSQD .. SSQ =E= SUM( CASE, SQR(expression in other variables));
   MODEL LSQSMALL / ALL /;
   SOLVE LSQSMALL USING NLP MINIMIZING SSQ;

However, if the "expression in other variables" is a little complicated, e.g. if it depends on several variables, then the first model, LSQLARGE, will be much faster to generate with GAMS because its derivatives in equations EQEST and SSQDEF are much simpler than the derivatives in the combined SSQD equation in the second model, LSQSMALL. The larger model will therefore be faster to generate, and it will also be faster to solve because the derivative computations will be faster.

Note that the comments about what are good model formulations are dependent on the preprocessing capabilities in GAMS/CONOPT. Other algorithms may prefer models like LSQSMALL over LSQLARGE. Also note that the variables and equations that are substituted out are still indirectly part of the model. GAMS/CONOPT evaluates the equations and computes values for the variables each time the value of the objective function is needed, and their values are available in the GAMS solution.

The number of pre- and post-triangular equations and variables is printed in the log file between iteration 0 and 1 as shown in the iteration log in Section 2 of the main text. The sum of infeasibilities will usually decrease from iteration 0 to 1 because fewer constraints usually will be infeasible. However, it may increase, as shown by the following example:

   POSITIVE VARIABLE X, Y, Z;
   EQUATION E1, E2;
   E1.. X =E= 1;
   E2.. 10*X - Y + Z =E= 0;


started from the default values X.L = 0, Y.L = 0, and Z.L = 0. The initial sum of infeasibilities is 1 (from E1 only). During pre-processing X is selected as a pre-triangular variable in equation E1 and it is assigned its final value 1 so E1 becomes feasible. After this change the sum of infeasibilities increases to 10 (from E2 only).
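The two infeasibility sums quoted above are simple to verify (plain arithmetic, independent of CONOPT):

```python
def infeasibility(x, y, z):
    # Sum of absolute residuals of E1: X = 1 and E2: 10*X - Y + Z = 0.
    return abs(x - 1) + abs(10 * x - y + z)

before = infeasibility(0, 0, 0)   # iteration 0: X, Y, Z at their defaults
after = infeasibility(1, 0, 0)    # X fixed at 1 by the pre-processor
print(before, after)              # → 1 10
```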

You may stop CONOPT after iteration 1 with "OPTION ITERLIM = 1;" in GAMS. The solution returned to GAMS will contain the pre-processed values for the variables that can be assigned values from the pre-triangular equations, the computed values for the variables used to solve the post-triangular equations, and the input values for all other variables. The pre- and post-triangular constraints will be feasible, and the remaining constraints will have values that correspond to this point. The marginals of both variables and equations have not been computed yet and will be returned as EPS.

The crash procedure described in the following sub-section is an optional part of iteration 1.

A4.3 Preprocessing: The Optional Crash Procedure

In the initial point given to CONOPT the variables are usually split into a group with initial values provided by the modeler (in the following called the assigned variables) and a group of variables for which no initial value has been provided (in the following called the default variables). The objective of the optional crash procedure is to find a point in which as many of the constraints as possible are feasible, primarily by assigning values to the default variables and by keeping the assigned variables at their initial values. The implicit assumption in this procedure is that if the modeler has assigned an initial value to a variable then this value is "better" than a default initial value.

The crash procedure is an extension of the triangular pre-processing procedure described above and is based on a simple heuristic: as long as there is an equation with only one non-fixed variable (a singleton row), we assign a value to the variable so the equation is satisfied, or satisfied as closely as possible, and we then temporarily fix the variable. When variables are fixed, additional singleton rows may emerge and we repeat the process. When there are no singleton rows, we fix one or more variables at their initial values until a singleton row appears, or until all variables have been fixed. The variables to be fixed at their initial values are selected using a heuristic that both tries to create many row singletons and tries to select variables with "good values". Since the values of many variables will come to depend on the fixed variables, the procedure favors assigned variables and among these it favors variables that appear in many feasible constraints.
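The heuristic can be sketched as an extension of the pre-triangular loop (a simplified illustration; CONOPT's actual selection rules are more elaborate): solve singleton rows while they exist, and when none exist, fix a not-yet-fixed variable, preferring assigned ones.

```python
def crash(equation_vars, assigned, variables):
    """Order variables for the crash procedure.
    `equation_vars` is a list of variable-name sets, one per equation;
    `assigned` is the set of variables with modeler-supplied values.
    Returns a list of (variable, how) decisions, where `how` is
    'solved' for singleton-row solves and 'fixed' for heuristic fixes."""
    fixed, decisions = set(), []
    while len(fixed) < len(variables):
        singleton = next((vs - fixed for vs in equation_vars
                          if len(vs - fixed) == 1), None)
        if singleton:
            (v,) = singleton
            decisions.append((v, "solved"))   # solve the singleton row for v
        else:
            free = [v for v in variables if v not in fixed]
            # No singleton row: fix a variable, preferring assigned ones.
            v = min(free, key=lambda v: (v not in assigned, v))
            decisions.append((v, "fixed"))
        fixed.add(v)
    return decisions
```

With two equations over {a,b} and {b,c} and only b assigned, the sketch fixes b first, which creates singleton rows for a and then c.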


[Figure: the permuted Jacobian with row blocks A, C, D and B against column blocks I, III, IV and II; the part of the A rows outside the triangular block A is zeros.]

Figure 2: The ordered Jacobian after Preprocessing and Crashing.

Fig. 2 shows a reordered version of Fig. 1. The variables labeled IV are the variables that are kept at their initial values, primarily selected from the assigned variables. The equations labeled C are then solved with respect to the variables labeled III, called the crash-triangular variables. The crash-triangular variables will often be variables without initial values, e.g. intermediate variables. The number of crash-triangular variables is shown in the iteration output between iteration 0 and 1, but only if the crash procedure is turned on.

The result of the crash procedure is an updated initial point in which usually a large number of equations will be feasible, namely all equations labeled A, B, and C in Fig. 2. There is, as already shown with the small example in section A4.2 above, no guarantee that the sum of infeasibilities will be reduced, but it is often the case, and the point will often provide a good starting point for the following procedures that find an initial feasible solution.

The crash procedure is activated by adding the line "lstcrs = t" in the options file. The default value of lstcrs (Logical Switch for Triangular CRaSh) is f, or false, i.e. the crash procedure is not normally used.

The crash procedure is only available in CONOPT2.

A5. Iteration 2: Scaling

Iteration 2 is the last dummy iteration, during which the model is scaled if scaling is turned on. The Infeasibility column shows the scaled sum of infeasibilities. You may again stop CONOPT after iteration 2 with "OPTION ITERLIM = 2;" in GAMS, but the solution that is reported in GAMS will have been scaled back again, so there will be no change from iteration 1 to iteration 2.

The following description of the automatic scaling procedure is included for completeness. Experiments have so far not given the results we hoped for, and scaling is therefore by default turned off, corresponding to the CONOPT option "lsscal = f". Users are recommended to be cautious with the automatic scaling procedure. If scaling is a problem, try to use manual scaling or scaling in GAMS (see section 6.5 in the main text) based on an understanding of the model.

The scaling procedure multiplies all variables in group III and all constraints in group C (see Fig. 1) by scale factors. All derivatives (Jacobian elements) are scaled by a row and a column factor, and the objective of the scaling procedure is (1) to minimize the deviation of the absolute values of the scaled Jacobian elements from one, and (2) to scale very large variables towards one in absolute value. The scaling procedure is based on simple equilibration: the rows of the Jacobian are divided by the geometric mean of the largest and smallest Jacobian element, the columns are then divided by the geometric mean of the largest and smallest Jacobian element (adjusted if the scaled variable is very large), and the process is repeated until the changes are small.
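A bare-bones version of this equilibration, without the safeguards that the text describes and without any claim to match CONOPT's implementation, looks like:

```python
def equilibrate(matrix, sweeps=10):
    """Scale rows and columns of a dense Jacobian by the geometric mean
    of each row's/column's largest and smallest nonzero magnitude.
    Assumes every row and column has at least one nonzero element."""
    a = [row[:] for row in matrix]
    m, n = len(a), len(a[0])
    for _ in range(sweeps):
        for i in range(m):                      # row pass
            mags = [abs(x) for x in a[i] if x != 0.0]
            s = (max(mags) * min(mags)) ** 0.5
            a[i] = [x / s for x in a[i]]
        for j in range(n):                      # column pass
            mags = [abs(a[i][j]) for i in range(m) if a[i][j] != 0.0]
            s = (max(mags) * min(mags)) ** 0.5
            for i in range(m):
                a[i][j] /= s
    return a

J = [[1e4, 2.0], [0.5, 1e-3]]   # element magnitudes spread over 7 orders
S = equilibrate(J)
# After scaling, the element magnitudes cluster much closer to one.
```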

The main difficulties involved in automatic scaling for NLP models are how to take account of

1. very rapidly varying Jacobian elements, e.g. elements associated with LOG(X) or 1/X as X approaches a very small lower bound, and
2. very small Jacobian elements, e.g. elements associated with product terms in which one of the factors is close to zero.

CONOPT tries to overcome these difficulties by (1) rescaling at regular intervals (see lfscal) and (2) by rounding very small elements up (rtminj) and by disallowing very large and very small scale factors (rtmins and rtmaxs).

The CR-Cells that control scaling, lsscal, lfscal, rtminj, rtmins, and rtmaxs, are all described in Appendix B.

The scaling procedure is only available in CONOPT2.

A6. Finding a Feasible Solution: Phase 0

The GRG algorithm used by CONOPT is a feasible path algorithm. This means that once it has found a feasible point it tries to remain feasible and follow a path of improving feasible points until it reaches a local optimum. CONOPT starts with the point provided by GAMS. This point will always satisfy the bounds (3): GAMS will simply move a variable that is outside its bounds to the nearer bound before it is presented to the solver. If the general constraints (2) are also feasible then CONOPT will work with feasible solutions throughout the optimization. However, if the initial point does not satisfy the general constraints (2), GAMS/CONOPT must first find an initial feasible point. For some models this first step can be just as hard as finding an optimum. For some models feasibility is the only problem.

GAMS/CONOPT has two methods for finding an initial feasible point. The first method is not very reliable but it is fast when it works; the second method is reliable but slower. The fast method is called Phase 0 and it is described in this section. It is used first. The reliable method, called Phase 1 and 2, will be used if Phase 0 terminates without a feasible solution.

Phase 0 is based on the observation that Newton's method for solving a set of equations usually is very fast, but it may not always converge. Newton's method in its pure form is defined for a model with the same number of variables as equations, and no bounds on the variables. With our type of model there are usually too many variables, i.e. too many degrees of freedom, and there are bounds. To get around the problem of too many variables, GAMS/CONOPT selects a subset with exactly m "basic" variables to be changed. The rest of the variables will remain fixed at their current values, which are not necessarily at bounds. To accommodate the bounds, GAMS/CONOPT will try to select variables that are away from their bounds as basic, subject to the requirement that the basis matrix, consisting of the corresponding columns in the Jacobian, must have full rank and be well conditioned.

The Newton equations are solved to yield a vector of proposed changes for the basic variables. If the full proposed step can be applied we can hope for the fast convergence of Newton's method. However, several things may go wrong:

a) The infeasibilities, measured by the 1-norm of g (i.e. the sum of the absolute infeasibilities, excluding the pre- and post-triangular equations), may not decrease as expected due to nonlinearities.

b) The maximum step length may have to be reduced if a basic variable otherwise would exceed one of its bounds.

In case a) GAMS/CONOPT tries various heuristics to find a more appropriate set of basic variables. If this does not work, some "difficult" equations, i.e. equations with large infeasibilities and significant nonlinearities, are temporarily removed from the model, and Newton's method is applied to the remaining set of "easy" equations.

In case b) GAMS/CONOPT will remove the basic variable that first reaches one of its bounds from the basis and replace it by one of the nonbasic variables. Newton's method is then applied to the new set of basic variables. The logic is very close to that of the dual simplex method. In cases where some of the basic variables are exactly at a bound GAMS/CONOPT uses an anti-degeneracy procedure based on Ryan and Osborne (1988) to prevent cycling.
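The step-length restriction in case b) is the classical ratio test. A schematic version (the actual code also handles degeneracy, as noted above) might look like:

```python
def max_step(x, d, lower, upper):
    """Largest step t in [0, 1] such that x + t*d stays within bounds;
    also returns the index of the first blocking variable, if any."""
    t_max, blocking = 1.0, None          # 1.0 is the ideal Newton step
    for i, (xi, di) in enumerate(zip(x, d)):
        if di > 0:
            t = (upper[i] - xi) / di
        elif di < 0:
            t = (lower[i] - xi) / di
        else:
            continue
        if t < t_max:
            t_max, blocking = t, i       # variable i reaches its bound first
    return t_max, blocking

# x = (0.5, 0.5), full Newton step d = (1.0, -0.2), bounds [0, 1]:
t, i = max_step([0.5, 0.5], [1.0, -0.2], [0.0, 0.0], [1.0, 1.0])
print(t, i)   # → 0.5 0  (variable 0 hits its upper bound at t = 0.5)
```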

Phase 0 will end when all equations, except possibly some "difficult" equations, are feasible within some small tolerance. If there are no difficult equations, GAMS/CONOPT has found a feasible solution and it will proceed with Phase 3 and 4. Otherwise, Phase 1 and 2 are used to make the difficult equations feasible.


During Phase 0 the iteration log will have the following columns: Iter, Phase, Ninf, Infeasibility, Step, MX, and OK. The number in the Ninf column counts the number of "difficult" infeasible equations, and the number in the Infeasibility column shows the sum of the absolute infeasibilities in all the general constraints, both in the easy and in the difficult ones. There are three possible combinations of values in the MX and OK columns:

1. F in the MX column and T in the OK column, always combined with 1.0 in the Step column: this is an ideal Newton step. The infeasibilities in the easy equations should be reduced quickly, but the difficult equations may dominate the number in the Infeasibility column, so you may not observe it. However, a few of these iterations are usually enough to terminate Phase 0.

2. T in the MX column, indicating that a basic variable has reached its bound and is removed from the basis as in case b) above. This will always be combined with T in the OK column. The Step column will show a step length less than the ideal Newton step of 1.0.

3. F in both the MX and OK columns, always combined with a step of 0.0: this is the bad case, an iteration where nonlinearities are dominating and one of the heuristics from case a) must be used.

The success of the Phase 0 procedure depends on being able to choose a good basis that will allow a full Newton step. It is therefore important that as many variables as possible have been assigned reasonable initial values so GAMS/CONOPT has some variables away from their bounds to select from. This topic was discussed in more detail in section 6.1 on "Initial Values".

The start and the iterations of Phase 0 can, in addition to the crash option described in section A4.3, be controlled with the three CR-Cells lslack, lsmxbs, and lmmxsf described in Appendix B.

A7. Finding a Feasible Solution: Phase 1 and 2

Most of the equations will be feasible when Phase 0 stops. To remove the remaining infeasibilities CONOPT uses a procedure similar to the phase 1 procedure used in Linear Programming: artificial variables are added to the infeasible equations (the equations with Large Residuals), and the sum of these artificial variables is minimized subject to the feasible constraints remaining feasible. The artificial variables are already part of the model as slack variables; their bounds are simply relaxed temporarily.

This infeasibility minimization problem is similar to the overall optimization problem: minimize an objective function subject to equality constraints and bounds on the variables. The feasibility problem is therefore solved with the ordinary GRG optimization procedure. As the artificial variables gradually become zero, i.e. as the infeasible equations become feasible, they are taken out of the auxiliary objective function. The number of infeasibilities (shown in the Ninf column of the log file) and the sum of infeasibilities (in the Infeasibility column) will therefore both decrease monotonically.


The iteration output will label these iterations as phase 1 and/or phase 2. The distinction between phase 1 (linear mode) and 2 (nonlinear mode) is similar to the distinction between phase 3 and 4 that is described in the next sections.

A8. Linear and Nonlinear Mode: Phase 1 to 4

The optimization itself follows steps 2 to 9 of the GRG algorithm shown in A2 above. The factorization in step 3 is performed using an efficient sparse LU factorization similar to the one described by Suhl and Suhl (1990). The matrix operations in steps 4 and 5 are also performed using sparse data structures.

Step 7, the selection of the search direction, has several variants, depending on how nonlinear the model is locally. When the model appears to be fairly linear in the area in which the optimization is performed, i.e. when the function and constraint values are close to their linear approximation for the steps that are taken, then CONOPT takes advantage of the linearity: the derivatives (the Jacobian) are not computed in every iteration, the basis factorization is updated using cheap LP techniques as described by Reid (1982), the search direction is determined without use of second order information, i.e. similar to a steepest descent algorithm, and the initial step length is estimated as the step length where the first variable reaches a bound; very often, this is the only step length that has to be evaluated. These cheap, almost linear iterations are referred to as Linear Mode and they are labeled Phase 1 when the model is infeasible and the objective is the sum of infeasibilities, and Phase 3 when the model is feasible and the real objective function is optimized.

When the constraints and/or the objective appear to be more nonlinear CONOPT will still<br />

follow step 2 to 9 of the GRG algorithm. However, the detailed content of each step is<br />

different. In step 2, the Jacobian must be recomputed in each iteration since the<br />

nonlinearities imply that the derivatives change. On the other hand, the set of basic<br />

variables will often be the same and CONOPT will take advantage of this during the<br />

factorization of the basis. In step 7 CONOPT uses the BFGS algorithm to estimate second<br />

order information and determine search directions. And in step 8 it will often be necessary<br />

to perform more than one step in the line search. These nonlinear iterations are labeled<br />

Phase 2 in the output if the solution is still infeasible, and Phase 4 if it is feasible. The<br />

iterations in phase 2 and 4 are in general more expensive than the iterations in phase 1 and<br />

3.<br />

Some models will remain in phase 1 (linear mode) until a feasible solution is found and<br />

then continue in phase 3 until the optimum is found, even if the model is truly nonlinear.<br />

However, most nonlinear models will have some iterations in phase 2 and/or 4 (nonlinear<br />

mode). Phase 2 and 4 indicate that the model has significant nonlinear terms around the<br />

current point: the objective or the constraints deviate significantly from a linear model for<br />

the steps that are taken. To improve the rate of convergence CONOPT tries to estimate<br />

second order information in the form of an estimated reduced Hessian using the BFGS<br />

formula.
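
The BFGS formula mentioned here can be illustrated with the standard dense update on a small matrix; note that this is the textbook formula, for illustration only, and CONOPT maintains its estimated reduced Hessian in its own internal form:<br />

```python
def bfgs_update(B, s, y):
    """Standard BFGS update of an approximate Hessian B (list of lists),
    given step s and gradient change y:
        B <- B - (B s)(B s)^T / (s^T B s) + y y^T / (y^T s)
    The updated matrix satisfies the secant condition B_new s = y.
    """
    n = len(B)
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    sBs = sum(s[i] * Bs[i] for i in range(n))
    ys = sum(y[i] * s[i] for i in range(n))
    return [[B[i][j] - Bs[i] * Bs[j] / sBs + y[i] * y[j] / ys
             for j in range(n)] for i in range(n)]
```

Repeated updates of this kind accumulate curvature information from the observed steps and gradient changes.<br />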


42 CONOPT GAMS/CONOPT<br />

Each iteration is, in addition to the step length shown in column "Step", characterized by<br />

two logicals: MX and OK. MX = T means that the step was maximal, i.e. it was determined<br />

by a variable reaching a bound. This is the expected value in Phase 1 and 3. MX = F means<br />

that no variable reached a bound and the optimal step length will in general be determined<br />

by nonlinearities. OK = T means that the line search was well-behaved and an optimal step<br />

length was found; OK = F means that the line search was ill-behaved, which means that<br />

CONOPT would like to take a larger step, but the feasibility restoring Newton process used<br />

during the line search did not converge for large step lengths. Iterations marked with OK =<br />

F (and therefore also with MX = F) will usually be expensive, while iterations marked with<br />

MX = T and OK = T will be cheap.<br />

A9. Linear Mode: The SLP Procedure<br />

When the model continues to appear linear CONOPT will often take many small steps,<br />

each determined by a new variable reaching a bound. Although the line searches are fast in<br />

linear mode, each requires one or more evaluations of the nonlinear constraints, and the<br />

overall cost may become high relative to the progress. In order to avoid the many nonlinear<br />

constraint evaluations CONOPT may replace the steepest descent direction in step 7 of the<br />

GRG algorithm with a sequential linear programming (SLP) technique to find a search<br />

direction that anticipates the bounds on all variables and therefore gives a larger expected<br />

change in objective in each line search. The search direction and the last basis from the<br />

SLP procedure are used in an ordinary GRG-type line search in which the solution is made<br />

feasible at each step. The SLP procedure is only used to generate good directions; the usual<br />

feasibility preserving steps in CONOPT are maintained, so CONOPT is still a feasible path<br />

method with all its advantages, especially related to reliability.<br />

Iterations in this so-called SLP-mode are identified by numbers in the column labeled<br />

"SlpIt" in the iteration log. The number in the SlpIt column is the number of nondegenerate<br />

SLP iterations. This number is adjusted dynamically according to the success of<br />

the previous iterations and the perceived linearity of the model.<br />

The SLP procedure generates a scaled search direction and the expected step length in the<br />

following line search is therefore 1.0. The step length may be less than 1.0 for several<br />

reasons:<br />

• The line search is ill-behaved. This is indicated with OK = F and MX = F.<br />

• A basic variable reaches a bound before predicted by the linear model. This is indicated<br />

with MX = T and OK = T.<br />

• The objective is nonlinear along the search direction and the optimal step is less than<br />

one. This is indicated with OK = T and MX = F.<br />

CONOPT will by default determine if it should use the SLP procedure or not, based on<br />

progress information. You may turn it off completely with the line "lseslp = f" in the<br />

CONOPT options file (usually conopt2.opt). The default value of lseslp (lseslp<br />

= Logical Switch Enabling SLP mode) is t or true, i.e. the SLP procedure is enabled and



CONOPT may use it when considered appropriate. It is seldom necessary to define<br />

lseslp, but it can be useful if CONOPT repeatedly turns SLP on and off, i.e. if you see a<br />

mixture of lines in the iteration log with and without numbers in the SlpIt column.<br />

The SLP procedure is only available in CONOPT2.<br />

A10. Linear Mode: The Steepest Edge Procedure<br />

When optimizing in linear mode (Phase 1 or 3) CONOPT will by default use a steepest<br />

descent algorithm to determine the search direction. CONOPT allows you to use a<br />

Steepest Edge Algorithm as an alternative. The idea, borrowed from Linear Programming,<br />

is to scale the nonbasic variables according to the Euclidean norm of the "updated column"<br />

in a standard LP tableau, the so-called edge length. A unit step for a nonbasic variable will<br />

give rise to changes in the basic variables proportional to the edge length. A unit step for a<br />

nonbasic variable with a large edge length will therefore give large changes in the basic<br />

variables which has two adverse effects relative to a unit step for a nonbasic variable with a<br />

small edge length: a basic variable is more likely to reach a bound after a very short step<br />

length, and the large change in basic variables is more likely to give rise to larger nonlinear<br />

terms.<br />

The steepest edge algorithm has been very successful for linear programs, and our initial<br />

experience has also shown that it will give fewer iterations for most nonlinear models.<br />

However, the cost of maintaining the edge lengths can be more expensive in the nonlinear<br />

case and it depends on the model whether steepest edge results in faster overall solution<br />

times or not. CONOPT uses the updating methods for the edge lengths from LP, but it must<br />

re-initialize the edge lengths more frequently, e.g. when an inversion fails, which happens<br />

more frequently in nonlinear models than in linear models, especially in models with many<br />

product terms, e.g. blending models, where the rank of the Jacobian can change from point<br />

to point.<br />
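
The edge length of a nonbasic column a_j is the Euclidean norm of B^-1 a_j, where B is the current basis matrix. A small illustrative sketch (2x2 only, using Cramer's rule; CONOPT instead maintains these quantities with LP-style updating methods, so this is purely to show the definition):<br />

```python
def solve2(B, b):
    """Solve a 2x2 system B z = b by Cramer's rule (illustration only)."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [(b[0] * B[1][1] - b[1] * B[0][1]) / det,
            (B[0][0] * b[1] - B[1][0] * b[0]) / det]

def edge_length(B, a_j):
    """Steepest-edge length of nonbasic column a_j: the 2-norm of B^-1 a_j."""
    z = solve2(B, a_j)
    return sum(v * v for v in z) ** 0.5
```

A unit step for a nonbasic variable changes the basic variables by exactly the updated column, so scaling by its norm equalizes the effect of unit steps across nonbasic variables.<br />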

Steepest edge is turned on with the line, "lsanrm = t", in the CONOPT options file<br />

(usually conopt2.opt). The default value of lsanrm (lsanrm = Logical Switch for A-NoRM)<br />

is f or false, i.e. the steepest edge procedure is turned off.<br />

The steepest edge procedure is mainly useful during linear mode iterations. However, it has<br />

some influence in phase 2 and 4 also: The estimated reduced Hessian in the BFGS method<br />

is initialized to a diagonal matrix with elements on the diagonal computed from the edge<br />

lengths, instead of the usual scaled unit matrix.<br />

The Steepest Edge procedure is only available in CONOPT2.<br />

A11. How to Select Non-default <strong>Options</strong><br />

The non-default options have an influence on different phases of the optimization and you<br />

must therefore first observe whether most of the time is spent in Phase 0, Phase 1 and 3, or<br />

in Phase 2 and 4.



Phase 0: The quality of Phase 0 depends on the number of iterations and on the number and<br />

sum of infeasibilities after Phase 0. The iterations in Phase 0 are much faster than the other<br />

iterations, but the overall time spent in Phase 0 may still be rather large. If this is the case,<br />

or if the infeasibilities after Phase 0 are large, you may try to use the triangular crash<br />

options:<br />

lstcrs = t<br />

Observe if the initial sum of infeasibility after iteration 1 has been reduced, and if the<br />

number of phase 0 iterations and the number of infeasibilities at the start of phase 1 have<br />

been reduced. If lstcrs reduces the initial sum of infeasibilities but the number of<br />

iterations is still large, you may try:<br />

lslack = t<br />

CONOPT will then, immediately after the preprocessor, add artificial variables to all infeasible<br />

constraints so Phase 0 will be eliminated, but the sum and number of infeasibilities at the<br />

start of Phase 1 will be larger. You are in reality trading Phase 0 iterations for Phase 1<br />

iterations.<br />

You may also try the experimental bending line search with<br />

lmmxsf = 1<br />

With this option the line search in Phase 0 will be different, and the infeasibilities may be<br />

reduced faster than with the default "lmmxsf = 0". It is likely to be better if the number<br />

of iterations with both MX = F and OK = F is large. This option may be combined with<br />

"lstcrs = t". Usually, linear constraints that are feasible will remain feasible.<br />

However, you should note that with the bending line search feasible linear constraints could<br />

become infeasible.<br />
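
The bending idea can be sketched as follows: all variables move the same step along the direction, each variable is projected back onto its bounds, and the step is chosen where the sum of infeasibilities of a linear approximation starts to increase. A Python sketch with hypothetical names, assuming a caller-supplied residual function for the linearized constraints; not CONOPT's implementation:<br />

```python
def bend(x, d, lo, up, s):
    """Move a step s along d, then project each variable back into its bounds."""
    return [min(max(xj + s * dj, l), u) for xj, dj, l, u in zip(x, d, lo, up)]

def sum_infeas(x, residual):
    """Sum of absolute residuals of the (linearized) constraints at x."""
    return sum(abs(r) for r in residual(x))

def bending_step(x, d, lo, up, residual, steps):
    """Scan increasing trial steps; return the last step before the
    infeasibility sum starts to increase again."""
    best_s, best_v = 0.0, sum_infeas(x, residual)
    for s in steps:
        v = sum_infeas(bend(x, d, lo, up, s), residual)
        if v > best_v:
            break
        best_s, best_v = s, v
    return best_s
```

Because variables that hit a bound simply stay there while the others keep moving, the procedure can take longer steps than a ratio test and is less sensitive to degeneracy, as described above.<br />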

Phase 1 and 3: The number of iterations in Phase 1 and Phase 3 will probably be reduced if<br />

you use steepest edge, "lsanrm = t", but the overall time may increase. Steepest edge<br />

seems to be best for models with less than 5000 constraints, but work in progress tries to<br />

push this limit upwards. Try it when the number of iterations is very large, or when many<br />

iterations are poorly behaved, identified with OK = F in the iteration log.<br />

The default SLP mode is usually an advantage, but it is too expensive for a few models. If<br />

you observe frequent changes between SLP mode and non-SLP mode, or if many line<br />

searches in the SLP iterations are ill-behaved with OK = F, then it may be better to turn<br />

SLP off with "lseslp = f".<br />

Phase 2 and 4: There are currently not many options available if most of the time is spent<br />

in Phase 2 and Phase 4. If the change in objective during the last iterations is very small,<br />

you may reduce computer time in return for a slightly worse objective by increasing the<br />

optimality tolerance, rtredg. If the number of superbasic variables is larger than 500<br />

then it may be worthwhile to increase the maximum size of the estimated Hessian matrix,



lfnsup. Note that in this case you may need more memory than CONOPT allocates by<br />

default.<br />

A12: Miscellaneous Topics<br />

A12.1 Triangular Models<br />

A triangular model is one in which the non-fixed variables and the equations can be sorted<br />

such that the first equation only depends on the first variable, the second equation only<br />

depends on the first two variables, and the p-th equation only depends on the first p<br />

variables. Provided there are no difficulties with bounds or small pivots, triangular models<br />

can be solved one equation at a time using the method described in section "A4.1<br />

Preprocessing: Pre-triangular Variables and Constraints" and the solution process will be<br />

very fast and reliable.<br />
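
The one-equation-at-a-time idea can be sketched as follows, with each equation solved for its new variable by a one-dimensional Newton iteration. This is a hedged Python illustration with hypothetical names (and a finite-difference derivative for simplicity), not the actual pre-triangular preprocessor:<br />

```python
def solve_triangular(equations, x0, tol=1e-10, h=1e-7):
    """Solve a triangular system one equation at a time.

    equations[p] is a function f(x) depending only on x[0..p];
    equation p is solved for x[p] by a 1-D Newton iteration
    (derivative by finite differences, illustration only).
    """
    x = list(x0)
    for p, f in enumerate(equations):
        for _ in range(50):
            r = f(x)
            if abs(r) < tol:
                break
            xp = x[p]
            x[p] = xp + h
            df = (f(x) - r) / h          # numerical derivative w.r.t. x[p]
            x[p] = xp - r / df           # Newton step
    return x
```

For instance, the triangular pair x1 = 2 and x1*x2 = 6 is solved first for x1 and then for x2, giving (2, 3).<br />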

Triangular models can in many cases be useful for finding a good initial feasible solution:<br />

Fix a subset of the variables so the remaining model is known to be triangular and solve<br />

this triangular simulation model. Then reset the bounds on the fixed variables to their<br />

original values and solve the original model. The first solve will be very fast and if the<br />

fixed variables have been fixed at good values then the solution will also be good. The<br />

second solve will start from the good feasible solution generated by the first solve and it<br />

will usually optimize much more quickly than from a poor start.<br />

The modeler can instruct CONOPT that a model is supposed to be triangular with the<br />

option "lstria = t". CONOPT will then use a special version of the preprocessing<br />

routine (see section 4.1) that solves the model very efficiently. If the model is solved<br />

successfully then CONOPT terminates with the message:<br />

** Feasible solution to a recursive model.<br />

and the Model Status will be 2, Locally Optimal, or 1, Optimal, depending on whether<br />

there were any nonlinear pivots or not. All marginals on both variables and equations are<br />

returned as 0 (zero) or EPS.<br />

Two SOLVEs with different option files can be arranged by writing the option files as they<br />

are needed from within the GAMS program with PUT statements followed by a<br />

PUTCLOSE. You can also have two different option files, for example conopt2.opt<br />

and conopt2.op2, and select the second with the GAMS statement<br />

".optfile = 2;".<br />

The triangular facility handles a number of error situations:<br />

1. Non-triangular models: CONOPT will ensure that the model is indeed triangular. If it is<br />

not, CONOPT will return model status 5, Locally Infeasible, plus some information that<br />

allows the modeler to identify the mistake. The necessary information is related to the<br />

order of the variables and equations and number of occurrences of variables and equations,



and since GAMS does not have a natural place for this type of information CONOPT<br />

returns it in the marginals of the equations and variables. The solution order for the<br />

triangular equations and variables that have been solved successfully are defined with<br />

positive numbers in the marginals of the equations and variables. For the remaining non-triangular<br />

variables and equations CONOPT shows the number of places they appear as<br />

negative numbers, i.e. a negative marginal for an equation shows how many of the non-triangular<br />

variables appear in this equation. You must fix one or more variables until at<br />

least one of the non-triangular equations has only one non-fixed variable left.<br />

2. Infeasibilities due to bounds: If some of the triangular equations cannot be solved with<br />

respect to their variable because the variable would exceed its bounds, then CONOPT will<br />

flag the equation as infeasible, keep the variable at the bound, and continue the triangular<br />

solve. The solution to the triangular model will therefore satisfy all bounds and almost all<br />

equations. The termination message will be<br />

** Infeasible solution. xx artificial(s) have been<br />

introduced into the recursive equations.<br />

and the model status will be 5, Locally Infeasible.<br />

The modeler may in this case add explicit artificial variables with high costs to the<br />

infeasible constraints and the resulting point will be an initial feasible point to the overall<br />

optimization model. You will often know from the mathematics of the model that only<br />

some of the constraints can be infeasible, so you will only need to check whether to add<br />

artificials in these equations. Assume that a block of equations MATBAL(M,T) could<br />

become infeasible. Then the artificials that may be needed in this equation can be modeled<br />

and identified automatically with the following GAMS constructs:<br />

SET APOSART(M,T) Add a positive artificial in Matbal<br />

ANEGART(M,T) Add a negative artificial in Matbal;<br />

APOSART(M,T) = NO; ANEGART(M,T) = NO;<br />

POSITIVE VARIABLE<br />

VPOSART(M,T) Positive artificial variable in Matbal<br />

VNEGART(M,T) Negative artificial variable in Matbal;<br />

MATBAL(M,T).. Left hand side =E= right hand side<br />

+ VPOSART(M,T)$APOSART(M,T) - VNEGART(M,T)$ANEGART(M,T);<br />

OBJDEF.. OBJ =E= other_terms +<br />

WEIGHT * SUM((M,T), VPOSART(M,T)$APOSART(M,T)<br />

+VNEGART(M,T)$ANEGART(M,T) );<br />

Solve triangular model ...<br />

APOSART(M,T)$(MATBAL.L(M,T) GT MATBAL.UP(M,T)) = YES;<br />

ANEGART(M,T)$(MATBAL.L(M,T) LT MATBAL.LO(M,T)) = YES;<br />

Solve final model ...



3. Small pivots: The triangular facility requires the solution of each equation to be locally<br />

unique which also means that the pivots used to solve each equation must be nonzero. The<br />

model segment<br />

E1 .. X1 =E= 0;<br />

E2 .. X1 * X2 =E= 0;<br />

will give the message<br />

X2 appearing in<br />

E2: Pivot too small for triangular model. Value=0.000E+00<br />

** Infeasible solution. The equations were assumed to be<br />

recursive but they are not. A pivot element is too small.<br />

However, the uniqueness of X2 may not be relevant if the solution is just going to be used<br />

as an initial point for a second model. The option "lsismp = t" (for Logical Switch:<br />

Ignore SMall Pivots) will allow zero pivots as long as the corresponding equation is<br />

feasible for the given initial values.<br />

A12.2 Constrained Nonlinear System or Square Systems of Equations<br />

There is a special model class in GAMS called CNS - Constrained Nonlinear System. A<br />

constrained nonlinear system is a square system of equations, i.e. a model in which the<br />

number of non-fixed variables is equal to the number of constraints. Currently, CONOPT2<br />

and PATHCNS are the only solvers for this model class. A CNS model can be solved with<br />

a solve statement like<br />

SOLVE model USING CNS;<br />

without an objective term. In some cases it may be convenient to solve a CNS model with a<br />

standard solve statement combined with an options file that has the statement "lssqrs =<br />

t". In the latter case, CONOPT2 will check that the number of non-fixed variables is equal<br />

to the number of constraints. In either case, CONOPT2 will attempt to solve the constraints<br />

with respect to the non-fixed variables using Newton’s method. The solution process will<br />

stop with an error message and the current intermediate infeasible solution will be returned<br />

if the Jacobian to be inverted is singular, or if one of the non-fixed variables tries to move<br />

outside its bounds.<br />
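
Newton's method for a square system, together with the two failure checks just described (a singular Jacobian and a variable leaving its bounds), can be sketched as follows for a 2x2 system. This is an illustration with hypothetical names, not CONOPT2's actual code:<br />

```python
def newton_square_2x2(F, J, x, lo, up, tol=1e-10, maxit=25):
    """Newton's method for a 2x2 square system F(x) = 0.

    Returns (status, x).  Fails if the Jacobian is (near) singular or a
    Newton step would push a variable outside its bounds, mirroring the
    two CNS error terminations described in this section.
    """
    for _ in range(maxit):
        f0, f1 = F(x)
        if max(abs(f0), abs(f1)) < tol:
            return "solved", x
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        if abs(det) < 1e-12:
            return "pivot too small", x
        # Solve J * dx = -f by Cramer's rule
        dx0 = (-f0 * d + f1 * b) / det
        dx1 = (f0 * c - f1 * a) / det
        trial = [x[0] + dx0, x[1] + dx1]
        if not all(l <= t <= u for t, l, u in zip(trial, lo, up)):
            return "variable tries to exceed bound", x
        x = trial
    return "iteration limit", x
```

Applied to the two error examples later in this section, the linearly dependent pair of constraints triggers the small-pivot return and the bounded example triggers the bound return.<br />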

Slacks in inequalities are counted as non-fixed variables which effectively means that<br />

inequalities should not be binding. Bounds on the variables are allowed, especially to<br />

prevent function evaluation errors for functions that are only defined for some arguments,<br />

but the bounds should not be binding in the final solution.<br />

The solution returned to GAMS will in all cases have marginal values equal to 0 or EPS,<br />

both for the variables and the constraints.



The termination messages for CNS models are different from the termination messages for<br />

optimization models. The message you hope for is<br />

** Feasible solution to a square system.<br />

that usually will be combined with model status 16 – Solved. If CONOPT2 in special cases<br />

can guarantee that the solution is unique, for example if the model is linear, then the model<br />

status will be 15 – Solved Unique.<br />

There are two potential error termination messages related to CNS models. A model with<br />

the following two constraints<br />

e1 .. x1 + x2 =e= 1;<br />

e2 .. 2*x1 + 2*x2 =e= 2;<br />

will result in the message<br />

** Error in Square System: Pivot too small.<br />

e2: Pivot too small.<br />

x1: Pivot too small.<br />

“Pivot too small” means that the set of constraints is linearly dependent and there cannot be<br />

a unique solution to the model. The message points to one variable and one constraint.<br />

However, this just indicates that the linearly dependent set of constraints and variables<br />

includes the constraint and variable mentioned. The offending constraint and variable will<br />

also be labeled ‘DEPND’ for linearly dependent in the equation listing. The error will<br />

usually be combined with model status 5 – Locally Infeasible. In the cases where<br />

CONOPT2 can guarantee that the infeasibility is not caused by nonlinearities, the model<br />

status will be 4 – Infeasible. If the constraints are linearly dependent but the current point<br />

satisfies the constraints, then the solution status will be 17 – Solved Singular, indicating that<br />

the point is feasible, but there is probably a whole ray of feasible solutions through the<br />

current point.<br />

A model with these two constraints and the bound<br />

e1 .. x1 + x2 =e= 2;<br />

e2 .. x1 - x2 =e= 0;<br />

x1.lo = 1.5;<br />

will result in the message<br />

** Error in Square System: A variable tries to exceed its bound.<br />

x1: The variable tries to exceed its bound.<br />

because the solution, (x1,x2) = (1,1), violates the bound on x1. This error can also be<br />

combined with model status 5 – Locally Infeasible. In the cases where CONOPT2 can<br />

guarantee that the infeasibility is not caused by nonlinearities, the model status will be 4 –<br />

Infeasible. If you encounter problems with active bounds but you think it is caused by



nonlinearities and that there is a solution, then you may try to use the bending line search<br />

with the option "lmmxsf = 1".<br />

The CNS facility can be used to generate an initial feasible solution in almost the same way<br />

as the triangular model facility: Fix a subset of the variables so the remaining model is<br />

uniquely solvable, solve this model with the CNS solver or with lssqrs = t, reset the<br />

bounds on the fixed variables, and solve the original model. The CNS facility can be used<br />

on a larger class of models that include simultaneous sets of equations. However, the<br />

square system must be non-singular and feasible; CONOPT cannot, like in the triangular<br />

case, add artificial variables to some of the constraints and solve the remaining system<br />

when a variable reaches one of its bounds.<br />

Additional information on CNS models can be found on the GAMS web site at the address<br />

http://www.gams.com/docs/pdf/cns.pdf<br />

A12.3 Loss of Feasibility<br />

During the optimization you may sometimes see a phase 0 iteration and in rare cases you<br />

will see the message "Loss of Feasibility - Return to Phase 0". The background for this is<br />

as follows:<br />

To work efficiently, CONOPT uses dynamic tolerances for feasibility and during the initial<br />

part of the optimization, where the objective changes rapidly, fairly large infeasibilities may<br />

be acceptable. As the change in objective in each iteration becomes smaller it will be<br />

necessary to solve the constraints more accurately so the "noise" in objective value from<br />

the inaccurate constraints will remain smaller than the real change. The noise is measured<br />

as the scalar product of the constraint residuals with the constraint marginals.<br />

Sometimes it is necessary to revise the accuracy of the solution, for example because the<br />

algorithmic progress has slowed down or because the marginal of an inaccurate constraint<br />

has grown significantly after a basis change, e.g. when an inequality becomes binding. In<br />

these cases CONOPT will tighten the feasibility tolerance and perform one or more<br />

Newton iterations on the basic variables. This will usually be very quick and it happens<br />

silently. However, Newton’s method may fail, for example in cases where the model is<br />

degenerate and Newton tries to move a basic variable outside a bound. In this case<br />

CONOPT uses some special iterations similar to those discussed in section "A6. Finding a<br />

Feasible Solution: Phase 0" and they are labeled Phase 0.<br />

These Phase 0 iterations may not converge, for example if the degeneracy is significant, if<br />

the model is very nonlinear locally, if the model has many product terms involving<br />

variables at zero, or if the model is poorly scaled and some constraints contain very large<br />

terms. If the iterations do not converge, CONOPT will issue the "Loss of feasibility ..."<br />

message, return to the real Phase 0 procedure, find a feasible solution with the smaller<br />

tolerance, and resume the optimization.<br />

In rare cases you will see that CONOPT cannot find a feasible solution after the tolerances<br />

have been reduced, even though it has declared the model feasible at an earlier stage. We



are working on reducing this problem. Until a final solution has been implemented you are<br />

encouraged to (1) consider if bounds on some degenerate variables can be removed, (2)<br />

look at scaling of constraints with large terms, and (3) experiment with the two feasibility<br />

tolerances, rtnwma and rtnwmi (see Appendix B), if this happens with your model.<br />

A12.4 Stalling<br />

CONOPT will usually make steady progress towards the final solution. A degeneracy<br />

breaking strategy and the monotonicity of the objective function in other iterations should<br />

ensure that CONOPT cannot cycle. Unfortunately, there are a few places in the code where<br />

the objective function may move in the wrong direction and CONOPT may in fact cycle or<br />

move very slowly.<br />

The objective value used to compare two points, in the following called the adjusted<br />

objective value, is computed as the true objective plus a noise adjustment term equal to the<br />

scalar product of the residuals with the marginals (see section A12.3 where this noise term<br />

also is used). The noise adjustment term is very useful in allowing CONOPT to work<br />

smoothly with fairly inaccurate intermediate solutions. However, there is a disadvantage:<br />

the noise adjustment term can change even though the point itself does not change, namely<br />

when the marginals change in connection with a basis change. The adjusted objective is<br />

therefore not always monotone. When CONOPT loses feasibility and returns to Phase 0<br />

there is an even larger chance of non-monotone behavior.<br />

To avoid infinite loops and to allow the modeler to stop in cases with very slow progress<br />

CONOPT has an anti-stalling option. An iteration is counted as a stalled iteration if it is not<br />

degenerate and (1) the adjusted objective is worse than the best adjusted objective seen so<br />

far, or (2) the step length was zero without being degenerate (see OK = F in section A8).<br />

CONOPT will stop if the number of consecutive stalled iterations (again not counting<br />

degenerate iterations) exceeds lfstal and lfstal is positive. The default value of<br />

lfstal is 100. The message will be:<br />

** Feasible solution. The tolerances are minimal and<br />

there is no change in objective although the reduced<br />

gradient is greater than the tolerance.<br />

Large models with very flat optima can sometimes be stopped prematurely due to stalling.<br />

If it is important to find a local optimum fairly accurately then you may have to increase<br />

the value of lfstal.<br />
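
The stalling test described above can be sketched as follows, in Python with hypothetical names; CONOPT's actual bookkeeping is internal, so this only illustrates the definitions of the adjusted objective and the consecutive-stall counter:<br />

```python
def adjusted_objective(obj, residuals, marginals):
    """True objective plus the noise term: the scalar product of the
    constraint residuals with the constraint marginals (see section A12.3)."""
    return obj + sum(r * m for r, m in zip(residuals, marginals))

class StallMonitor:
    """Count consecutive stalled iterations, a sketch of the lfstal check."""
    def __init__(self, lfstal=100):
        self.lfstal = lfstal
        self.best = float("inf")     # best adjusted objective so far (minimization)
        self.count = 0               # consecutive stalled, non-degenerate iterations

    def record(self, adj_obj, degenerate):
        """Return True when the optimization should stop due to stalling."""
        if degenerate:               # degenerate iterations are not counted
            return False
        if adj_obj < self.best:
            self.best = adj_obj
            self.count = 0
        else:
            self.count += 1
        return self.lfstal > 0 and self.count > self.lfstal
```

With the default lfstal of 100, the stop is only triggered after 100 consecutive non-degenerate iterations without improvement in the adjusted objective.<br />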

A12.5 External Functions<br />

Both CONOPT and CONOPT2 can be used with external functions written in a<br />

programming language such as Fortran or C. Additional information is available at<br />

GAMS’s web site at http://www.gams.com/docs/extfunc.htm.



APPENDIX B - CR-CELLS<br />

The CR-Cells that ordinary GAMS users can access are listed below. CR-Cells starting on<br />

R assume real values, CR-Cells starting on LS assume logical values (TRUE, T, FALSE,<br />

or F), and all other CR-Cells starting on L assume integer values. Several CR-Cells are<br />

only used in CONOPT2 in which case it will be mentioned below. However, these CR-<br />

Cells can still be defined in an options file for the old CONOPT, but they will silently be<br />

ignored:<br />

lfilog Iteration Log frequency. A log line is printed to the screen every lfilog<br />

iterations (see also lfilos). The default value depends on the size of the<br />

model: it is 10 for models with less than 500 constraints, 5 for models between<br />

501 and 2000 constraints and 1 for larger models. The log itself can be turned<br />

on and off with the Logoption (LO) parameter on the GAMS call.<br />

lfilos Iteration Log frequency for SLP iterations. A log line is printed to the screen<br />

every lfilos iterations while using the SLP mode. The default value<br />

depends on the size of the model: it is 5 for models with less than 500<br />

constraints, and 1 for larger models. Only CONOPT2.<br />

lfmxns Limit on new superbasics. When there has been a sufficient reduction in the<br />

reduced gradient in one subspace, CONOPT tests if any nonbasic variables<br />

should be made superbasic. The ones with largest reduced gradient of proper<br />

sign are selected, up to a limit of lfmxns. The default value of lfmxns is<br />

5. The limit is replaced by the square root of the number of structural variables<br />

if lfmxns is set to zero.<br />

lfnicr Limit for slow progress / no increase. The optimization is stopped with a<br />

"Slow Progress" message if the change in objective is less than 10 * rtobjr<br />

* max(1,abs(FOBJ)) for lfnicr consecutive iterations where FOBJ is the<br />

value of the current objective function. The default value of lfnicr is 12.<br />

lfnsup Maximum Hessian dimension. If the number of superbasics exceeds lfnsup<br />

CONOPT will no longer use the BFGS algorithm but will switch to a steepest<br />

descent approach, independent of the degree of nonlinearity of the model. The<br />

default value of lfnsup is 500. If lfnsup is increased beyond its default<br />

value the default memory allocation may not be sufficient, and you may have<br />

to include a "model.WORKSPACE = xx.x;" statement in your GAMS<br />

source file, where "model" represents the GAMS name of the model. You<br />

should try to increase the value of lfnsup if CONOPT performs many<br />

iterations in Phase 4 with the number of superbasics (NSB) larger than<br />

lfnsup and without much progress. The new value should in this case be<br />

larger than the number of superbasics.<br />

lfscal Frequency for scaling. The scale factors are recomputed after lfscal<br />

recomputations of the Jacobian. The default value is 20. Only CONOPT2.



lfstal Maximum number of stalled iterations. If lfstal is positive then CONOPT<br />

will stop with a "No change in objective" message when the number of stalled<br />

iterations as defined in section A12.4 exceeds lfstal and lfstal is<br />

positive. The default value of lfstal is 100. Only CONOPT2.<br />

lmmxsf Method for finding the maximal step while searching for a feasible solution.<br />

The step in the Newton direction is usually either the ideal step of 1.0 or the<br />

step that will bring the first basic variable exactly to its bound. An alternative<br />

procedure uses "bending": All variables are moved a step s and the variables<br />

that are outside their bounds after this step are projected back to the bound.<br />

The step length is determined as the step where the sum of infeasibilities,<br />

computed from a linear approximation model, starts to increase again. The<br />

advantage of this method is that it often can make larger steps and therefore<br />

better reductions in the sum of infeasibilities, and it is not very sensitive to<br />

degeneracies. The alternative method is turned on by setting lmmxsf to 1,<br />

and it is turned off by setting lmmxsf to 0. Until the method has received<br />

additional testing it is by default turned off. Only CONOPT2.<br />

lsismp Logical switch for Ignoring Small Pivots. Lsismp is only used when<br />

lstria = t. If lsismp = t (default is f or false) then a triangular
equation is accepted even if the pivot is almost zero (less than rtpivt for

nonlinear elements and less than rtpiva for linear elements), provided the<br />

equation is feasible, i.e. with residual less than rtnwtr. Only CONOPT2.<br />

lslack Logical switch for slack basis. If lslack = t then the first basis after
preprocessing will have slacks in all infeasible constraints and Phase 0 will
usually be bypassed. This is sometimes useful together with lstcrs = t if
the number of infeasible constraints after the crash procedure is small. This is
especially true if the SLP procedure described in section A9 can quickly
remove these remaining infeasibilities. It is necessary to experiment with the
model to determine if this option is useful.

lsmxbs Logical switch for Maximal Basis. lsmxbs determines whether CONOPT
should try to improve the condition number of the initial basis (t or true)
before starting the Phase 0 iterations or just use the initial basis immediately
(f or false). The default value is t, i.e. CONOPT tries to improve the basis.
There is a computational cost associated with the procedure, but it will usually
be saved because the better conditioning will give rise to fewer Phase 0
iterations and often also to fewer large residuals at the end of Phase 0. The
option is ignored if lslack is true. Only CONOPT2.
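As a sketch, logical switches like these are set in the CONOPT option file; the combination below is purely illustrative and should only be used after experimenting with the model:

```
* conopt2.opt - illustrative values only
lstcrs = t
lslack = t
lsmxbs = f
```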

lspost Logical switch for the Post-triangular preprocessor. If lspost = f (default<br />

is t or true) then the post-triangular preprocessor discussed in section A4.2<br />

is turned off.<br />

lspret Logical switch for the Pre-triangular preprocessor. If lspret = f (default
is t or true) then the pre-triangular preprocessor discussed in section A4.1 is
turned off.

lsscal Logical switch for scaling. A logical switch that turns scaling on (with the
value t or true) or off (with the value f or false). The default value is f,
i.e. no scaling. Only CONOPT2.

lssqrs Logical switch for Square Systems. If lssqrs = t (default is f or false),
then the model must be a square system as discussed in section A12.2. Users
are recommended to use the CNS model class in GAMS. Only CONOPT2.

lstria Logical switch for triangular models. If lstria = t (default is f or
false) then the model must be triangular as discussed in section A12.1.
Only CONOPT2.

rtmaxj Maximum Jacobian element. The optimization is stopped if a Jacobian
element exceeds this value. rtmaxj is initialized to a value that depends on
the machine precision. It is on most machines around 2.d5. The actual value is
shown by CONOPT in connection with "Too large Jacobian element"
messages. If you need a larger value then your model is poorly scaled and
CONOPT may find it difficult to solve.

rtmaxv Internal value of infinity. The model is considered unbounded if a variable<br />

exceeds rtmaxv in absolute value. rtmaxv is initialized to a value that<br />

depends on the machine precision. It is on most machines around 6.d7. The<br />

actual value is shown by CONOPT in connection with "Unbounded"<br />

messages. If you need a larger value then your model is poorly scaled and<br />

CONOPT may find it difficult to solve.<br />

rtmaxs Scale factors larger than rtmaxs are rounded down to rtmaxs. The default<br />

value is 1024. Only CONOPT2.<br />

rtminj All Jacobian elements with a value less than rtminj are rounded up to the<br />

value rtminj before scaling is started to avoid problems with zero and very<br />

small Jacobian elements. The default value is 1.E-5. Only CONOPT2.<br />

rtmins Scale factors smaller than rtmins are rounded up to rtmins. The default<br />

value is 1/1024. (All scale factors are powers of 2 to avoid round-off errors<br />

from the scaling procedure). Only CONOPT2.<br />

rtnwma Maximum feasibility tolerance. A constraint will only be considered feasible<br />

if the residual is less than rtnwma times MaxJac, independent of the dual

variable. MaxJac is an overall scaling measure for the constraints computed as<br />

max(1,maximal Jacobian element/100). The default value of rtnwma is 1.e-3.<br />

rtnwmi Minimum feasibility tolerance. A constraint will always be considered feasible<br />

if the residual is less than rtnwmi times MaxJac (see above), independent of<br />

the dual variable. The default value depends on the machine precision. It is on<br />

most machines around 4.d-10. You should only increase this number if you<br />

have inaccurate function values and you get an infeasible solution with a very<br />

small sum of infeasibility, or if you have very large terms in some of your
constraints (in which case scaling may be more appropriate). Square systems
(see lssqrs and section A12.2) are always solved to the tolerance rtnwmi.

rtnwtr Triangular feasibility tolerance. If you solve a model, fix some of the variables
at their optimal value and solve again, and the model is then reported infeasible
in the pre-triangular part, then you should increase rtnwtr. The
infeasibilities in some unimportant constraints in the "Optimal" solution have
been larger than rtnwtr. The default value depends on the machine
precision. It is on most machines around 6.d-7.

rtobjr Relative objective tolerance. CONOPT assumes that the reduced objective
function can be computed to an accuracy of rtobjr * max(1,abs(FOBJ))
where FOBJ is the value of the current objective function. The default value
of rtobjr is machine specific. It is on most machines around 3.d-13. The
value is used in tests for "Slow Progress", see lfnicr.

rtoned Relative accuracy of one-dimensional search. The one-dimensional search is<br />

stopped if the expected further decrease in objective estimated from a<br />

quadratic approximation is less than rtoned times the decrease obtained so<br />

far. The default value is 0.2. A smaller value will result in more accurate but<br />

more expensive line searches and this may result in an overall decrease in the<br />

number of iterations. Values above 0.7 or below 0.01 should not be used.<br />

rtpiva Absolute pivot tolerance. A pivot element is only considered acceptable if its<br />

absolute value is larger than rtpiva. The default value is 1.d-10. You may<br />

have to decrease this value towards 1.d-11 or 1.d-12 on poorly scaled models.<br />

rtpivr Relative pivot tolerance. A pivot element is only considered acceptable<br />

relative to other elements in the column if its absolute value is at least<br />

rtpivr * the largest absolute value in the column. The default value is 0.05.<br />

You may have to increase this value towards one on poorly scaled models.<br />

Increasing rtpivr will result in denser L and U factors of the basis.<br />

rtpivt Triangular pivot tolerance. A nonlinear triangular pivot element is considered<br />

acceptable if its absolute value is larger than rtpivt. The default value is<br />

1.e-7. Linear triangular pivots must be larger than rtpiva.

rtredg Optimality tolerance. The reduced gradient is considered zero and the solution<br />

optimal if the largest superbasic component is less than rtredg. The default<br />

value depends on the machine, but is usually around 9.d-8. If you have<br />

problems with slow progress or stalling you may increase rtredg. This is

especially relevant for very large models.<br />

rvspac A space allocation factor that sometimes can speed up the solution of square
systems. CONOPT will tell you if it is worthwhile to set this parameter to a

non-default value for your class of model.<br />

rvstlm Step length multiplier. The step length in the one-dimensional line search is<br />

not allowed to increase by a factor of more than rvstlm between steps for

models with nonlinear constraints and a factor of 100 * rvstlm for models



with linear constraints. The default value is 4.<br />

APPENDIX C: REFERENCES<br />

J. Abadie and J. Carpentier, Generalization of the Wolfe Reduced Gradient Method to the<br />

case of Nonlinear Constraints, in Optimization, R. Fletcher (ed.), Academic Press, New<br />

York, 37-47 (1969).<br />

A. Drud, A GRG Code for Large Sparse Dynamic Nonlinear Optimization Problems,<br />

Mathematical Programming 31, 153-191 (1985).<br />

A.S. Drud, CONOPT - A Large-Scale GRG Code, ORSA Journal on Computing 6, 207-216 (1992).

A.S. Drud, CONOPT: A System for Large Scale Nonlinear Optimization, Tutorial for<br />

CONOPT Subroutine Library, 16p, ARKI Consulting and Development A/S, Bagsvaerd,<br />

Denmark, 1995.<br />

A.S. Drud, CONOPT: A System for Large Scale Nonlinear Optimization, Reference<br />

Manual for CONOPT Subroutine Library, 69p, ARKI Consulting and Development A/S,<br />

Bagsvaerd, Denmark, 1996.<br />

J.K. Reid, A Sparsity Exploiting Variant of Bartels-Golub Decomposition for Linear<br />

Programming Bases, Mathematical Programming 24, 55-69 (1982).<br />

D.M. Ryan and M.R. Osborne, On the Solution of Highly Degenerate Linear Programmes,<br />

Mathematical Programming 41, 385-392 (1988).<br />

U.H. Suhl and L.M. Suhl, Computing Sparse LU Factorizations for Large-Scale Linear<br />

Programming Bases, ORSA Journal on Computing 2, 325-335 (1990).


GAMS/Cplex 7.5 User Notes<br />

Table of Contents

● Introduction
● How to Run a Model with Cplex
● Overview of Cplex
  ❍ Linear Programming
  ❍ Mixed-Integer Programming
● GAMS Options
● Summary of Cplex Options
  ❍ Preprocessing and General Options
  ❍ Simplex Algorithmic Options
  ❍ Simplex Limit Options
  ❍ Simplex Tolerance Options
  ❍ Barrier Specific Options
  ❍ MIP Algorithmic Options
  ❍ MIP Limit Options
  ❍ MIP Tolerance Options
  ❍ Output Options
  ❍ Example of a GAMS/Cplex Options File
● Special Notes
  ❍ Physical Memory Limitations
  ❍ Using Special Ordered Sets
  ❍ Using Semi-Continuous and Semi-Integer Variables
  ❍ Running Out of Memory for MIP problems
  ❍ Failing to Prove Integer Optimality
● GAMS/Cplex Log file
● Detailed description of Cplex Options

1 Introduction<br />

GAMS/Cplex is a GAMS solver that allows users to combine the high level modeling capabilities of GAMS with the<br />

power of Cplex optimizers. Cplex optimizers are designed to solve large, difficult problems quickly and with minimal<br />

user intervention. Access is provided (subject to proper licensing) to Cplex solution algorithms including the Linear<br />

Optimizer and the Barrier and Mixed Integer <strong>Solver</strong>s. While numerous solving options are available, GAMS/Cplex<br />

automatically calculates and sets most options at the best values for specific problems.<br />

All Cplex options available through GAMS/Cplex are summarized at the end of this document.


2 How to run a model with Cplex<br />

The following statement can be used inside your GAMS program to specify that Cplex should be used:

Option LP = Cplex ; { or RMIP or MIP }<br />

The above statement should appear before the Solve statement. The MIP capability is separately licensed, so you may<br />

not be able to use Cplex for MIP problems on your system. If Cplex was specified as the default solver during GAMS<br />

installation, the above statement is not necessary.<br />
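A minimal sketch of the placement (the model name transport and the objective variable cost are hypothetical):

```gams
Option LP = Cplex ;
Solve transport using LP minimizing cost ;
```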

If you have purchased the parallel option, it can be used by specifying<br />

Option LP = CplexPar ; { or RMIP or MIP }<br />

or by specifying CplexPar as the default solver during GAMS installation. Parallel GAMS/Cplex also requires<br />

installation of an ILOG license file before it will work. Contact support@gams.com for instructions.<br />

3 Overview of Cplex<br />

3.1 Linear Programming<br />

Cplex solves LP problems using several alternative algorithms. The majority of LP problems solve best using Cplex's<br />

state of the art modified primal simplex algorithm. Certain types of problems benefit from using the alternative dual<br />

simplex algorithm, the network optimizer, or the barrier algorithm.<br />

Solving linear programming problems is memory intensive. Even though Cplex manages memory very efficiently,<br />

insufficient physical memory is one of the most common problems when running large LPs. When memory is limited,<br />

Cplex will automatically make adjustments which may negatively impact performance. If you are working with large<br />

models, study the section entitled Physical Memory Limitations carefully.<br />

Cplex is designed to solve the majority of LP problems using default option settings. These settings usually provide<br />

the best overall problem optimization speed and reliability. However, there are occasionally reasons for changing<br />

option settings to improve performance, avoid numerical difficulties, control optimization run duration, or control<br />

output options.<br />

Some problems solve faster with the dual simplex algorithm rather than the default primal simplex algorithm. In<br />

particular, highly degenerate problems with little variability in the right-hand-side coefficients but significant<br />

variability in the cost coefficients often solve much faster using dual simplex. Also, very few problems exhibit poor<br />

numerical performance in both the primal and the dual. Therefore, consider trying dual simplex if numerical problems<br />

occur while using primal simplex.<br />

Cplex has a very efficient algorithm for network models. Network constraints have the following property:<br />

● each non-zero coefficient is either a +1 or a -1<br />

● each column appearing in these constraints has exactly 2 nonzero entries, one with a +1 coefficient and one<br />

with a -1 coefficient<br />

Cplex can also automatically extract networks that do not adhere to the above conventions as long as they can be<br />

transformed to have those properties.<br />
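The flow-balance constraints of a minimum-cost flow model have exactly this +1/-1 structure; the following sketch uses hypothetical set and variable names:

```gams
Set n 'nodes' / n1*n4 /;
Alias (n,i,j);
Set arc(i,j) 'network arcs' / n1.n2, n1.n3, n2.n4, n3.n4 /;
Parameter b(n) 'supply (+) or demand (-)' / n1 10, n4 -10 /;
Positive Variable x(i,j) 'flow on arc';
Variable z;
Equation balance(n), obj;
* each x(i,j) appears with +1 in the row of node i and -1 in the row of
* node j, so Cplex's network optimizer applies
balance(n).. sum(arc(n,j), x(n,j)) - sum(arc(i,n), x(i,n)) =e= b(n);
obj..        z =e= sum(arc(i,j), x(i,j));
```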

The barrier algorithm is an alternative to the simplex method for solving linear programs. It employs a primal-dual<br />

logarithmic barrier algorithm which generates a sequence of strictly positive primal and dual solutions. Specifying the<br />

barrier algorithm may be advantageous for large, sparse problems.


GAMS/Cplex also provides access to the Cplex Infeasibility Finder. The Infeasibility finder takes an infeasible linear<br />

program and produces an irreducibly inconsistent set of constraints (IIS). An IIS is a set of constraints and variable<br />

bounds which is infeasible but becomes feasible if any one member of the set is dropped. GAMS/Cplex reports the<br />

IIS in terms of GAMS equation and variable names and includes the IIS report as part of the normal solution listing.<br />

3.2 Mixed-Integer Programming<br />

The methods used to solve pure integer and mixed integer programming problems require dramatically more<br />

mathematical computation than those for similarly sized pure linear programs. Many relatively small integer<br />

programming models take enormous amounts of time to solve.<br />

For problems with integer variables, Cplex uses a branch and bound algorithm (with cuts) which solves a series of LP<br />

subproblems. Because a single mixed integer problem generates many LP subproblems, even small mixed integer<br />

problems can be very compute intensive and require significant amounts of physical memory.<br />

GAMS and GAMS/Cplex support Special Order Sets of type 1 and type 2 as well as semi-continuous and<br />

semi-integer variables.<br />

4 GAMS <strong>Options</strong><br />

The following GAMS options are used by GAMS/Cplex:<br />

OPTION Bratio = x; Determines whether or not to use an advanced basis. A value of 1.0 causes GAMS<br />

to instruct Cplex not to use an advanced basis. A value of 0.0 causes GAMS to<br />

construct a basis from whatever information is available. The default value of 0.25<br />

will nearly always cause GAMS to pass along an advanced basis if a solve<br />

statement has previously been executed.<br />

Option IterLim = n; Sets the iteration limit. The algorithm will terminate and pass on the current<br />

solution to GAMS. In case a pre-solve is done, the post-solve routine will be<br />

invoked before reporting the solution.<br />

Cplex handles the iteration limit for MIP problems differently than some other<br />

GAMS solvers. For MIP problems, controlling the length of the solution run by<br />

limiting the execution time (ResLim) is preferable.<br />

IterLim is not used at all when solving an LP with the barrier algorithm. By<br />

default, Cplex will use an iteration limit based on problem characteristics. To<br />

specify a barrier iteration limit, use Cplex parameter baritlim.<br />

OPTION ResLim = x; Sets the time limit in seconds. The algorithm will terminate and pass on the<br />

current solution to GAMS. In case a pre-solve is done, the post-solve routine will<br />

be invoked before reporting the solution.<br />

Option SysOut = On; Will echo Cplex messages to the GAMS listing file. This option may be useful in<br />

case of a solver failure.<br />

ModelName.Cheat = x; Cheat value: each new integer solution must be at least x better than the previous<br />

one. Can speed up the search, but you may miss the optimal solution. The cheat<br />

parameter is specified in absolute terms (like the OptCA option). The Cplex<br />

option objdif overrides the GAMS cheat parameter.<br />

ModelName.Cutoff = x; Cutoff value. When the branch and bound search starts, the parts of the tree with<br />

an objective worse than x are deleted. This can sometimes speed up the initial<br />

phase of the branch and bound algorithm.<br />

ModelName.NodLim = x; Maximum number of nodes to process for a MIP problem.<br />

ModelName.OptCA = x; Absolute optimality criterion for a MIP problem.


ModelName.OptCR = x; Relative optimality criterion for a MIP problem. Notice that Cplex uses a different<br />

definition than GAMS normally uses. The OptCR option asks Cplex to stop when<br />

(|BP - BF|)/(1.0e-10 + |BF|) < OptCR<br />

where BF is the objective function value of the current best integer solution while<br />

BP is the best possible integer solution. The GAMS definition is:<br />

(|BP - BF|)/(|BP|) < OptCR<br />
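As a worked sketch of the difference (the model name is hypothetical): with a best integer solution BF = 102 and best bound BP = 100, the Cplex gap is 2/102, about 0.0196, while the GAMS definition gives 2/100 = 0.02.

```gams
mymodel.OptCR = 0.02;   * Cplex stops once |BP-BF|/(1.0e-10+|BF|) < 0.02
```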

ModelName.OptFile = 1; Instructs Cplex to read the option file. The name of the option file is cplex.opt.<br />

ModelName.PriorOpt = 1; Instructs Cplex to use priority branching information passed by GAMS through<br />

the variable.prior parameters.<br />

ModelName.TryInt = x; Causes GAMS/Cplex to make use of current variable values when solving a MIP<br />

problem. If a variable value is within x of a bound, it will be moved to the bound<br />

and the preferred branching direction for that variable will be set toward the<br />

bound. Moving the value to the bound allows the variable to be part of an initial<br />

integer solution when using parameter mipstart. The preferred branching direction<br />

will only be effective when priorities are used.<br />
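A sketch of combining TryInt with the Cplex mipstart parameter (the model name and the tolerance value are hypothetical):

```gams
mymodel.TryInt  = 0.05;   * snap levels within 0.05 of a bound to the bound
mymodel.OptFile = 1;      * read cplex.opt, which would contain: mipstart 1
Solve mymodel using MIP minimizing cost;
```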

5 Summary of Cplex <strong>Options</strong><br />

The various Cplex options are listed here by category, with a few words about each to indicate its function. The<br />

options are listed again, in alphabetical order and with detailed descriptions, in the last section of this document.<br />

5.1 Preprocessing and General <strong>Options</strong><br />

advind advanced basis use<br />

aggfill aggregator fill parameter<br />

aggind aggregator on/off<br />

coeredind coefficient reduction on/off<br />

depind dependency checker on/off<br />

interactive allow interactive option setting after a Control-C<br />

names load GAMS names into Cplex<br />

objrng do objective ranging<br />

precompress presolve compression<br />

predual give dual problem to the optimizer<br />

preind turn presolver on/off<br />

prepass number of presolve applications to perform<br />

printoptions write values of all options to the GAMS listing file<br />

reduce primal and dual reduction type<br />

relaxpreind presolve for initial relaxation on/off<br />

rerun rerun problem if presolve infeasible or unbounded<br />

rhsrng do right-hand-side ranging<br />

rngrestart write GAMS readable ranging information file<br />

scaind matrix scaling on/off<br />

tilim overrides the GAMS ResLim option<br />

workdir directory for working files<br />

workmem memory available for working storage


5.2 Simplex Algorithmic <strong>Options</strong><br />

craind crash strategy (used to obtain starting basis)<br />

dpriind dual simplex pricing<br />

epper perturbation constant<br />

iis run the IIS finder if the problem is infeasible<br />

iisind IIS finder method to use<br />

lpmethod algorithm to be used for LP problems<br />

netfind attempt network extraction<br />

netppriind network simplex pricing<br />

perind force initial perturbation<br />

perlim number of stalled iterations before perturbation<br />

ppriind primal simplex pricing<br />

pricelim pricing candidate list<br />

reinv refactorization frequency<br />

simthreads number of threads for parallel dual simplex algorithm<br />

5.3 Simplex Limit <strong>Options</strong><br />

itlim iteration limit<br />

netitlim iteration limit for network simplex<br />

objllim objective function lower limit<br />

objulim objective function upper limit<br />

singlim limit on singularity repairs<br />

5.4 Simplex Tolerance <strong>Options</strong><br />

epmrk Markowitz pivot tolerance<br />

epopt optimality tolerance<br />

eprhs feasibility tolerance<br />

netepopt optimality tolerance for the network simplex method<br />

neteprhs feasibility tolerance for the network simplex method<br />

5.5 Barrier <strong>Specific</strong> <strong>Options</strong><br />

baralg algorithm selection<br />

barcolnz dense column handling<br />

barcrossalg barrier crossover method<br />

barepcomp convergence tolerance<br />

bargrowth unbounded face detection<br />

baritlim iteration limit<br />

barmaxcor maximum correction limit<br />

barobjrng maximum objective function


barooc out-of-core barrier

barorder row ordering algorithm selection<br />

barthreads number of threads for parallel barrier algorithm<br />

barvarup variable upper limit<br />

5.6 MIP Algorithmic <strong>Options</strong><br />

bbinterval best bound interval<br />

bndstrenind bound strengthening<br />

brdir set branching direction<br />

bttol backtracking limit<br />

cliques clique cut generation<br />

covers cover cut generation<br />

cutlo lower cutoff for tree search<br />

cuts default cut generation<br />

cutsfactor cut limit<br />

cutup upper cutoff for tree search<br />

disjcuts disjunctive cuts generation<br />

flowcovers flow cover cut generation<br />

flowpaths flow path cut generation<br />

fraccuts Gomory fractional cut generation<br />

gubcovers GUB cover cut generation<br />

heurfreq heuristic frequency<br />

implbd implied bound cut generation<br />

mipemphasis MIP solution tactics<br />

mipordind priority list on/off<br />

mipordtype priority order generation<br />

mipstart use mip starting values<br />

mipthreads number of threads for parallel mip algorithm<br />

mircuts mixed integer rounding cut generation<br />

nodefileind node storage file indicator<br />

nodelim maximum number of nodes to process<br />

nodesel node selection strategy<br />

preslvnd node presolve selector<br />

probe perform probing before solving a MIP<br />

startalg algorithm for initial LP<br />

strongcandlim size of the candidates list for strong branching<br />

strongitlim limit on iterations per branch for strong branching<br />

strongthreadlim number of threads for strong branching<br />

subalg algorithm for subproblems<br />

varsel variable selection strategy at each node


5.7 MIP Limit <strong>Options</strong><br />

aggcutlim aggregation limit for cut generation

cutpass maximum number of cutting plane passes<br />

fraccand candidate limit for generating Gomory fractional cuts<br />

fracpass maximum number of passes for generating Gomory fractional cuts<br />

intsollim maximum number of integer solutions<br />

nodelim maximum number of nodes to solve<br />

trelim maximum space in memory for tree<br />

5.8 MIP Tolerance <strong>Options</strong><br />

epagap absolute stopping tolerance<br />

epgap relative stopping tolerance<br />

epint integrality tolerance<br />

objdif overrides the GAMS Cheat parameter

relobjdif relative cheat parameter<br />

5.9 Output <strong>Options</strong><br />

bardisplay progress display level<br />

mipdisplay progress display level<br />

mipinterval progress display interval<br />

netdisplay network display level<br />

quality write solution quality statistics<br />

simdisplay simplex display level<br />

writebas produce a Cplex basis file<br />

writelp produce a Cplex LP file<br />

writemps produce a Cplex MPS file<br />

writeord produce a Cplex ord file<br />

writesav produce a Cplex binary problem file<br />

writesos produce a Cplex sos file<br />

5.10 The GAMS/Cplex <strong>Options</strong> File<br />

The GAMS/Cplex options file consists of one option or comment per line. An asterisk (*) at the beginning of a line<br />

causes the entire line to be ignored. Otherwise, the line will be interpreted as an option name and value separated by<br />

any amount of white space (blanks or tabs). Anything after the value will be ignored.<br />

Following is an example options file cplex.opt.<br />

scaind 1<br />

simdisplay 2<br />

It will cause Cplex to use a more aggressive scaling method than the default. The iteration log will have an entry for<br />

each iteration instead of an entry for each refactorization.


6 Special Notes<br />

6.1 Physical Memory Limitations<br />

For the sake of computational speed, Cplex should use only available physical memory rather than virtual or paged<br />

memory. When Cplex recognizes that a limited amount of memory is available it automatically makes algorithmic<br />

adjustments to compensate. These adjustments almost always reduce optimization speed. Learning to recognize when<br />

these automatic adjustments occur can help to determine when additional memory should be added to the computer.<br />

On virtual memory systems, if memory paging to disk is observed, a considerable performance penalty is incurred.<br />

Increasing available memory will speed the solution process dramatically.<br />

Cplex performs an operation called refactorization at a frequency determined by the reinv option setting. The longer<br />

Cplex works between refactorizations, the greater the amount of memory required to complete each iteration.<br />

Therefore, one means for conserving memory is to increase the refactorization frequency. Since refactorizing is an<br />

expensive operation, increasing the refactorization frequency by reducing the reinv option setting generally will slow<br />

performance. Cplex will automatically increase the refactorization frequency if it encounters low memory availability.<br />

This can be seen by watching the iteration log. The default log reports problem status at every refactorization. If the<br />

number of iterations between iteration log entries is decreasing, Cplex is increasing the refactorization frequency.<br />

Since Cplex might increase the frequency to once per iteration, the impact on performance can be dramatic. Providing<br />

additional memory should be beneficial.<br />

6.2 <strong>Using</strong> Special Ordered Sets<br />

For some models a special structure can be exploited. GAMS allows you to declare SOS1 and SOS2 variables<br />

(Special Ordered Sets of type 1 and 2).<br />

In Cplex the definition for SOS1 variables is:<br />

A set of variables for which at most one variable may be non-zero.<br />

The definition for SOS2 variables is:<br />

A set of variables for which at most two variables may be non-zero. If two variables are non-zero, they<br />

must be adjacent in the set.<br />
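In GAMS such sets are declared directly as variable types; a sketch with hypothetical names:

```gams
Set i / 1*5 /;
SOS1 Variable s1(i);   * at most one member of s1 may be non-zero
SOS2 Variable s2(i);   * at most two members, and they must be adjacent
```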

6.3 <strong>Using</strong> Semi-Continuous and Semi-Integer Variables<br />

GAMS allows the declaration of semi-continuous and semi-integer variables. These variable types are directly

supported by GAMS/Cplex. For example:<br />

SemiCont Variable x;<br />

x.lo = 3.2;<br />

x.up = 8.7;<br />

SemiInt Variable y;<br />

y.lo = 5;<br />

y.up = 10;<br />

Variable x will be allowed to take on a value of 0.0 or any value between 3.2 and 8.7. Variable y will be allowed to<br />

take on a value of 0 or any integral value between 5 and 10.<br />

Note that Cplex requires a finite upper bound for semi-continuous and semi-integer variables.


6.4 Running Out of Memory for MIP problems<br />

The most common difficulty when solving MIP problems is running out of memory. This problem arises when the<br />

branch and bound tree becomes so large that insufficient memory is available to solve an LP subproblem. As memory<br />

gets tight, you may observe frequent warning messages while Cplex attempts to navigate through various operations<br />

within limited memory. If a solution is not found shortly, the solution process will be terminated with an

unrecoverable integer failure message.<br />

The tree information saved in memory can be substantial. Cplex saves a basis for every unexplored node. When<br />

utilizing the best bound method of node selection, the list of such nodes can become very long for large or difficult<br />

problems. How large the unexplored node list can become is entirely dependent on the actual amount of physical<br />

memory available and the actual size of the problem. Certainly increasing the amount of memory available extends<br />

the problem solving capability. Unfortunately, once a problem has failed because of insufficient memory, you can<br />

neither project how much further the process needed to go nor how much memory would be required to ultimately<br />

solve it.<br />

Memory requirements can be limited by using the workmem option together with the nodefileind option. Setting nodefileind
to 2 or 3 will cause Cplex to store portions of the branch and bound tree on disk whenever it grows larger than the

size specified by option workmem. That size should be set to something less than the amount of physical memory<br />

available.<br />
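A sketch of such an option file (the workmem value, in megabytes, is illustrative and should be set below the physical memory actually available):

```
* cplex.opt
nodefileind 3
workmem     128
```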

Another approach is to modify the solution process to utilize less memory.<br />

● Set option bttol to a number closer to 1.0 to cause more of a depth-first-search of the tree.<br />

● Set option nodesel to use a best estimate strategy or, more drastically, a depth-first search. Depth-first search

rarely generates a large unexplored node list since Cplex will be diving deep into the branch and bound tree<br />

rather than jumping around within it.<br />

● Set option varsel to use strong branching. Strong branching spends extra computation time at each node to<br />

choose a better branching variable. As a result it generates a smaller tree. It is often faster overall, as well.<br />

● On some problems, a large number of cuts will be generated without a correspondingly large benefit in solution<br />

speed. Cut generation can be turned off using option cuts.<br />
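Taken together, these tactics might appear in a cplex.opt file such as the following sketch (every value here is illustrative; see the option descriptions below for the meaning of each setting):<br />

```
* bias backtracking toward depth-first search
bttol 0.9
* node selection: depth-first search
nodesel 0
* variable selection: strong branching
varsel 3
* turn off cuts whose generation would otherwise be automatic
cuts no
```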

6.5 Failing to Prove Integer Optimality<br />

One frustrating aspect of the branch and bound technique for solving MIP problems is that the solution process can<br />

continue long after the best solution has been found. Remember that the branch and bound tree may be as large as 2^n<br />

nodes, where n equals the number of binary variables. A problem containing only 30 binary variables could produce a<br />

tree having over one billion nodes! If no other stopping criteria have been set, the process might continue<br />

until the search is complete or your computer's memory is exhausted.<br />

In general you should set at least one limit on the optimization process before beginning an optimization. Setting<br />

limits ensures that an exhaustive tree search will terminate in reasonable time. Once terminated, you can rerun the<br />

problem using some different option settings. Consider some of the shortcuts described previously for improving<br />

performance, including setting the options for mip gap, objective value difference, upper cutoff, or lower cutoff.<br />
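In the GAMS model itself, such limits can be imposed with the usual GAMS options before the solve statement; the fragment below is a sketch in which the model name mymodel and the objective variable z are hypothetical:<br />

```
* stop after 600 seconds, cap simplex iterations, accept a 1% relative gap
option reslim = 600, iterlim = 100000, optcr = 0.01 ;
solve mymodel using mip minimizing z ;
```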

7 GAMS/Cplex Log file<br />

Cplex reports its progress by writing to the GAMS log file as the problem solves. Normally the GAMS log file is<br />

directed to the computer screen.<br />

The log file shows statistics about the presolve and continues with an iteration log.<br />

For the primal simplex algorithm, the iteration log starts with the iteration number followed by the scaled infeasibility


value. Once feasibility has been attained, the objective function value is listed instead. At the default value for option<br />

simdisplay there is a log line for each refactorization. The screen log has the following appearance:<br />

Tried aggregator 1 time.<br />

LP Presolve eliminated 2 rows and 39 columns.<br />

Aggregator did 30 substitutions.<br />

Reduced LP has 243 rows, 335 columns, and 3912 nonzeros.<br />

Presolve time = 0.01 sec.<br />

Using conservative initial basis.<br />

Iteration log . . .<br />

Iteration: 1 Scaled infeas = 193998.067174<br />

Iteration: 29 Objective = -3484.286415<br />

Switched to devex.<br />

Iteration: 98 Objective = -1852.931117<br />

Iteration: 166 Objective = -349.706562<br />

Optimal solution found.<br />

Objective : 901.161538<br />

The iteration log for the dual simplex algorithm is similar, but the dual infeasibility and dual objective are reported<br />

instead of the corresponding primal values:<br />

Tried aggregator 1 time.<br />

LP Presolve eliminated 2 rows and 39 columns.<br />

Aggregator did 30 substitutions.<br />

Reduced LP has 243 rows, 335 columns, and 3912 nonzeros.<br />

Presolve time = 0.01 sec.<br />

Iteration log . . .<br />

Iteration: 1 Scaled dual infeas = 3.890823<br />

Iteration: 53 Dual objective = 4844.392441<br />

Iteration: 114 Dual objective = 1794.360714<br />

Iteration: 176 Dual objective = 1120.183325<br />

Iteration: 238 Dual objective = 915.143030<br />

Removing shift (1).<br />

Optimal solution found.<br />

Objective : 901.161538<br />

The log for the network algorithm adds statistics about the extracted network and a log of the network iterations. The<br />

optimization is finished by one of the simplex algorithms and an iteration log for that is produced as well.<br />

Tried aggregator 1 time.<br />

LP Presolve eliminated 2 rows and 39 columns.<br />

Aggregator did 30 substitutions.<br />

Reduced LP has 243 rows, 335 columns, and 3912 nonzeros.


Presolve time = 0.01 sec.<br />

Extracted network with 25 nodes and 116 arcs.<br />

Extraction time = -0.00 sec.<br />

Iteration log . . .<br />

Iteration: 0 Infeasibility = 1232.378800 (-1.32326e+12)<br />

Network - Optimal: Objective = 1.5716820779e+03<br />

Network time = 0.01 sec. Iterations = 26 (24)<br />

Iteration log . . .<br />

Iteration: 1 Scaled infeas = 212696.154729<br />

Iteration: 62 Scaled infeas = 10020.401232<br />

Iteration: 142 Scaled infeas = 4985.200129<br />

Switched to devex.<br />

Iteration: 217 Objective = -3883.782587<br />

Iteration: 291 Objective = -1423.126582<br />

Optimal solution found.<br />

Objective : 901.161538<br />

The log for the barrier algorithm adds various algorithm specific statistics about the problem before starting the<br />

iteration log. The iteration log includes columns for primal and dual objective values and infeasibility values. A<br />

special log follows for the crossover to a basic solution.<br />

Tried aggregator 1 time.<br />

LP Presolve eliminated 2 rows and 39 columns.<br />

Aggregator did 30 substitutions.<br />

Reduced LP has 243 rows, 335 columns, and 3912 nonzeros.<br />

Presolve time = 0.02 sec.<br />

Number of nonzeros in lower triangle of A*A' = 6545<br />

Using Approximate Minimum Degree ordering<br />

Total time for automatic ordering = 0.01 sec.<br />

Summary statistics for Cholesky factor:<br />

Rows in Factor = 243<br />

Integer space required = 578<br />

Total non-zeros in factor = 8491<br />

Total FP ops to factor = 410889<br />

Itn Primal Obj Dual Obj Prim Inf Upper Inf Dual Inf<br />

0 -1.2826603e+06 7.4700787e+08 2.25e+10 6.13e+06 4.00e+05<br />

1 -2.6426195e+05 6.3552653e+08 4.58e+09 1.25e+06 1.35e+05<br />

2 -9.9117854e+04 4.1669756e+08 1.66e+09 4.52e+05 3.93e+04<br />

3 -2.6624468e+04 2.1507018e+08 3.80e+08 1.04e+05 1.20e+04<br />

4 -1.2104334e+04 7.8532364e+07 9.69e+07 2.65e+04 2.52e+03<br />

5 -9.5217661e+03 4.2663811e+07 2.81e+07 7.67e+03 9.92e+02<br />

6 -8.6929410e+03 1.4134077e+07 4.94e+06 1.35e+03 2.16e+02<br />

7 -8.3726267e+03 3.1619431e+06 3.13e-07 6.84e-12 3.72e+01<br />

8 -8.2962559e+03 3.3985844e+03 1.43e-08 5.60e-12 3.98e-02<br />

9 -3.8181279e+03 2.6166059e+03 1.58e-08 9.37e-12 2.50e-02<br />

10 -5.1366439e+03 2.8102021e+03 3.90e-06 7.34e-12 1.78e-02<br />

11 -1.9771576e+03 1.5960442e+03 3.43e-06 7.02e-12 3.81e-03<br />

12 -4.3346261e+02 8.3443795e+02 4.99e-07 1.22e-11 7.93e-04


13 1.2882968e+02 5.2138155e+02 2.22e-07 1.45e-11 8.72e-04<br />

14 5.0418542e+02 5.3676806e+02 1.45e-07 1.26e-11 7.93e-04<br />

15 2.4951043e+02 6.5911879e+02 1.73e-07 1.43e-11 5.33e-04<br />

16 2.4666057e+02 7.6179064e+02 7.83e-06 2.17e-11 3.15e-04<br />

17 4.6820025e+02 8.1319322e+02 4.75e-06 1.78e-11 2.57e-04<br />

18 5.6081604e+02 7.9608915e+02 3.09e-06 1.98e-11 2.89e-04<br />

19 6.4517294e+02 7.7729659e+02 1.61e-06 1.27e-11 3.29e-04<br />

20 7.9603053e+02 7.8584631e+02 5.91e-07 1.91e-11 3.00e-04<br />

21 8.5871436e+02 8.0198336e+02 1.32e-07 1.46e-11 2.57e-04<br />

22 8.8146686e+02 8.1244367e+02 1.46e-07 1.84e-11 2.29e-04<br />

23 8.8327998e+02 8.3544569e+02 1.44e-07 1.96e-11 1.71e-04<br />

24 8.8595062e+02 8.4926550e+02 1.30e-07 2.85e-11 1.35e-04<br />

25 8.9780584e+02 8.6318712e+02 1.60e-07 1.08e-11 9.89e-05<br />

26 8.9940069e+02 8.9108502e+02 1.78e-07 1.07e-11 2.62e-05<br />

27 8.9979049e+02 8.9138752e+02 5.14e-07 1.88e-11 2.54e-05<br />

28 8.9979401e+02 8.9139850e+02 5.13e-07 2.18e-11 2.54e-05<br />

29 9.0067378e+02 8.9385969e+02 2.45e-07 1.46e-11 1.90e-05<br />

30 9.0112149e+02 8.9746581e+02 2.12e-07 1.71e-11 9.61e-06<br />

31 9.0113610e+02 8.9837069e+02 2.11e-07 1.31e-11 7.40e-06<br />

32 9.0113661e+02 8.9982723e+02 1.90e-07 2.12e-11 3.53e-06<br />

33 9.0115644e+02 9.0088083e+02 2.92e-07 1.27e-11 7.35e-07<br />

34 9.0116131e+02 9.0116262e+02 3.07e-07 1.81e-11 3.13e-09<br />

35 9.0116154e+02 9.0116154e+02 4.85e-07 1.69e-11 9.72e-13<br />

Barrier time = 0.39 sec.<br />

Primal crossover.<br />

Primal: Fixing 13 variables.<br />

12 PMoves: Infeasibility 1.97677059e-06 Objective 9.01161542e+02<br />

0 PMoves: Infeasibility 0.00000000e+00 Objective 9.01161540e+02<br />

Primal: Pushed 1, exchanged 12.<br />

Dual: Fixing 3 variables.<br />

2 DMoves: Infeasibility 1.28422758e-36 Objective 9.01161540e+02<br />

0 DMoves: Infeasibility 1.28422758e-36 Objective 9.01161540e+02<br />

Dual: Pushed 3, exchanged 0.<br />

Using devex.<br />

Total crossover time = 0.02 sec.<br />

Optimal solution found.<br />

Objective : 901.161540<br />

For MIP problems, during the branch and bound search, Cplex reports the node number, the number of nodes left, the<br />

value of the Objective function, the number of integer variables that have fractional values, the current best integer<br />

solution, the best relaxed solution at a node and an iteration count. The last column shows the current optimality gap as<br />

a percentage.<br />

Tried aggregator 1 time.<br />

MIP Presolve eliminated 1 rows and 1 columns.<br />

Reduced MIP has 99 rows, 76 columns, and 419 nonzeros.<br />

Presolve time = 0.00 sec.<br />

Iteration log . . .


Iteration: 1 Dual objective = 0.000000<br />

Root relaxation solution time = 0.01 sec.<br />

Nodes Cuts/<br />

Node Left Objective IInf Best Integer Best Node ItCnt Gap<br />

0 0 0.0000 24 0.0000 40<br />

* 0+ 0 6.0000 0 6.0000 0.0000 40 100.00%<br />

* 50+ 50 4.0000 0 4.0000 0.0000 691 100.00%<br />

100 99 2.0000 15 4.0000 0.4000 1448 90.00%<br />

Fixing integer variables, and solving final LP..<br />

Tried aggregator 1 time.<br />

LP Presolve eliminated 100 rows and 77 columns.<br />

All rows and columns eliminated.<br />

Presolve time = 0.00 sec.<br />

Solution satisfies tolerances.<br />

MIP Solution : 4.000000 (2650 iterations, 185 nodes)<br />

Final LP : 4.000000 (0 iterations)<br />

Best integer solution possible : 1.000000<br />

Absolute gap : 3<br />

Relative gap : 1.5<br />

8 Detailed Descriptions of Cplex <strong>Options</strong><br />

These options should be entered in the options file after setting the GAMS ModelName.OptFile parameter to 1. The<br />

name of the options file is 'cplex.opt'. The options file is case insensitive and the keywords should be given in full.<br />
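As a minimal sketch (the model name mymodel and objective variable z are hypothetical), the model file would contain:<br />

```
* tell GAMS/Cplex to read cplex.opt for this solve
mymodel.optfile = 1 ;
solve mymodel using mip minimizing z ;
```

and cplex.opt would then list one keyword and value per line, for example 'lpmethod 4' to request the barrier algorithm.<br />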

advind (integer)<br />

Use an Advanced Basis. GAMS/Cplex will automatically use an advanced basis from a previous solve statement. The<br />

GAMS Bratio option can be used to specify when not to use an advanced basis. The Cplex option advind can be used<br />

to ignore a basis passed on by GAMS (it overrides Bratio). Note that using an advanced basis will disable the Cplex<br />

presolve.<br />

(default is determined by GAMS Bratio)<br />

0 do not use advanced basis<br />

1 use advanced basis if available<br />

aggcutlim (integer)<br />

Limits the number of constraints that can be aggregated for generating flow cover and mixed integer rounding cuts.<br />

For most purposes, the default will be satisfactory.<br />

Range: non-negative<br />

(default = 3)<br />

aggfill (integer)<br />

Aggregator fill limit. If the net result of a single substitution is more non-zeros than the setting of the AGGFILL<br />

parameter, the substitution will not be made.


(default = 10)<br />

aggind (integer)<br />

This option, when set to a nonzero value, will cause the Cplex aggregator to use substitution where possible to reduce<br />

the number of rows and columns in the problem. If set to a positive value, the aggregator will be applied the specified<br />

number of times, or until no more reductions are possible. At the default value of -1, the aggregator is applied once<br />

for linear programs and an unlimited number of times for mixed integer problems.<br />

(default = -1)<br />

-1 once for LP, unlimited for MIP<br />

0 do not use<br />

baralg (integer)<br />

Selects which barrier algorithm to use. The default setting of 0 uses the infeasibility-estimate start algorithm for MIP<br />

subproblems and the standard barrier algorithm, option 3, for other cases. The standard barrier algorithm is almost<br />

always fastest. The alternative algorithms, options 1 and 2, may eliminate numerical difficulties related to<br />

infeasibility, but will generally be slower.<br />

(default = 0)<br />

0 Same as 1 for MIP subproblems, 3 otherwise<br />

1 Infeasibility-estimate start<br />

2 Infeasibility-constant start<br />

3 standard barrier algorithm<br />

barcolnz (integer)<br />

Determines whether or not columns are considered dense for special barrier algorithm handling. At the default setting<br />

of 0, this parameter is determined dynamically. Values above 0 specify the number of entries in columns to be<br />

considered as dense.<br />

(default = 0)<br />

barcrossalg (integer)<br />

Selects which, if any, crossover method is used at the end of a barrier optimization.<br />

(default = 0)<br />

-1 No crossover<br />

0 Automatic<br />

1 Primal crossover<br />

2 Dual crossover<br />

bardisplay (integer)<br />

Determines the level of progress information to be displayed while the barrier method is running.<br />

(default = 1)<br />

0 No progress information<br />

1 Display normal information<br />

2 Display diagnostic information<br />

barepcomp (real)<br />

Determines the tolerance on complementarity for convergence of the barrier algorithm. The algorithm will terminate


with an optimal solution if the relative complementarity is smaller than this value.<br />

(default = 1.0e-8)<br />

bargrowth (real)<br />

Used by the barrier algorithm to detect unbounded optimal faces. At higher values, the barrier algorithm will be less<br />

likely to conclude that the problem has an unbounded optimal face, but more likely to have numerical difficulties if<br />

the problem does have an unbounded face.<br />

(default = 1.0e6)<br />

baritlim (integer)<br />

Determines the maximum number of iterations for the barrier algorithm. The default value of -1 allows Cplex to<br />

automatically determine the limit based on problem characteristics.<br />

(default = -1)<br />

barmaxcor (integer)<br />

Specifies the maximum number of centering corrections that should be done on each iteration. Larger values may<br />

improve the numerical performance of the barrier algorithm at the expense of computation time. The default of -1<br />

means the number is automatically determined.<br />

(default = -1)<br />

barobjrng (real)<br />

Determines the maximum absolute value of the objective function. The barrier algorithm looks at this limit to detect<br />

unbounded problems.<br />

(default = 1.0e20)<br />

barooc (integer)<br />

Specifies whether or not Cplex should use out-of-core storage (i.e., disk) for the Barrier Cholesky factorization. This<br />

factorization usually accounts for most of the memory usage on large models. Use of this option can therefore extend<br />

the range of models that can be solved on a computer with a given amount of RAM. Disk usage with this option is<br />

controlled by options workdir and workmem.<br />

(default = 0)<br />

0 off<br />

1 on<br />

barorder (integer)<br />

Determines the ordering algorithm to be used by the barrier method. By default, Cplex attempts to choose the most<br />

effective of the available alternatives. Higher numbers tend to favor better orderings at the expense of longer ordering<br />

runtimes.<br />

(default = 0)<br />

0 Automatic<br />

1 Approximate Minimum Degree (AMD)<br />

2 Approximate Minimum Fill (AMF)<br />

3 Nested Dissection (ND)<br />

barstartalg (integer)<br />

This option sets the algorithm to be used to compute the initial starting point for the barrier solver. The default


starting point is satisfactory for most problems. Since the default starting point is tuned for primal problems, using the<br />

other starting points may be worthwhile in conjunction with the predual parameter.<br />

(default = 1)<br />

1 default primal, dual is 0<br />

2 default primal, estimate dual<br />

3 primal average, dual is 0<br />

4 primal average, estimate dual<br />

barthreads (integer)<br />

This option sets a limit on the number of threads available to the parallel barrier algorithm. The actual number of<br />

processors used will be the minimum of this number and the number of available processors. The parallel option is<br />

separately licensed.<br />

(default = the number of threads licensed)<br />

barvarup (real)<br />

Determines an upper bound for all variables that have no finite upper bound. This number is used by the barrier<br />

algorithm to detect unbounded optimal faces.<br />

(default = 1.0e20)<br />

bbinterval (integer)<br />

Set interval for selecting a best bound node when doing a best estimate search. Active only when nodesel is 2 (best<br />

estimate). Decreasing this interval may be useful when best estimate is finding good solutions but making little<br />

progress in moving the bound. Increasing this interval may help when the best estimate node selection is not finding<br />

any good integer solutions. Setting the interval to 1 is equivalent to setting nodesel to 1.<br />

(default = 7)<br />

bndstrenind (integer)<br />

Use bound strengthening when solving mixed integer problems. Bound strengthening tightens the bounds on<br />

variables, perhaps to the point where the variable can be fixed and thus removed from consideration during the branch<br />

and bound algorithm. This reduction is usually beneficial, but occasionally, due to its iterative nature, takes a long<br />

time.<br />

(default = -1)<br />

-1 Determine automatically<br />

0 Don't use bound strengthening<br />

1 Use bound strengthening<br />

brdir (integer)<br />

Used to decide which branch (up or down) should be taken first at each node.<br />

(default = 0)<br />

-1 Down branch selected first<br />

0 Algorithm decides<br />

1 Up branch selected first<br />

bttol (real)<br />

This option controls how often backtracking is done during the branching process. At each node, Cplex compares the<br />

objective function value or estimated integer objective value to these values at parent nodes; the value of the bttol


parameter dictates how much relative degradation is tolerated before backtracking. Lower values tend to increase the<br />

amount of backtracking, making the search more of a pure best-bound search. Higher values tend to decrease the<br />

amount of backtracking, making the search more of a depth-first search. This parameter is used only once a first<br />

integer solution is found or when a cutoff has been specified.<br />

Range: [0.0, 1.0]<br />

(default = 0.01)<br />

cliques (integer)<br />

Determines whether or not clique cuts should be generated during optimization.<br />

(default = 0)<br />

-1 Do not generate clique cuts<br />

0 Determined automatically<br />

1 Generate clique cuts moderately<br />

2 Generate clique cuts aggressively<br />

coeredind (integer)<br />

Coefficient reduction is a technique used when presolving mixed integer programs. The benefit is to improve the<br />

objective value of the initial (and subsequent) linear programming relaxations by reducing the number of non-integral<br />

vertices. However, the linear programs generated at each node may become more difficult to solve.<br />

(default = 2)<br />

0 Do not use coefficient reduction<br />

1 Reduce only to integral coefficients<br />

2 Reduce all potential coefficients<br />

covers (integer)<br />

Determines whether or not cover cuts should be generated during optimization.<br />

(default = 0)<br />

-1 Do not generate cover cuts<br />

0 Determined automatically<br />

1 Generate cover cuts moderately<br />

2 Generate cover cuts aggressively<br />

craind (integer)<br />

The crash option biases the way Cplex orders variables relative to the objective function when selecting an initial<br />

basis.<br />

(default = 1)<br />

Primal:<br />

0 ignore objective coefficients during crash<br />

-1, 1 alternate ways of using objective coefficients<br />

Dual:<br />

0, -1 aggressive starting basis<br />

1 default starting basis<br />

cutlo (real)


Lower cutoff value for tree search in maximization problems. This is used to cut off any nodes that have an objective<br />

value below the lower cutoff value. This option overrides the GAMS Cutoff setting. Too restrictive a value may result<br />

in no integer solutions being found.<br />

(default = -1.0e+75)<br />

cutpass (integer)<br />

Sets the upper limit on the number of passes that will be performed when generating cutting planes on a mixed integer<br />

model.<br />

(default = 0)<br />

-1 None<br />

0 Automatically determined<br />

>0 Maximum passes to perform<br />

cuts (string)<br />

Allows generation of all optional cuts to be turned off at once. This is done by changing the meaning of the default<br />

value (0: automatic) for the various Cplex cut generation options. At the default value of yes, cuts whose generation is<br />

to be determined automatically are unaffected. If option cuts is set to no, cuts whose generation is to be determined<br />

automatically will not be generated. This allows cuts to be turned off in general while specific cuts are explicitly<br />

turned on. The options affected are cliques, covers, disjcuts, flowcovers, flowpaths, fraccuts, gubcovers, implbd, and<br />

mircuts.<br />

(default = yes)<br />

cutsfactor (real)<br />

This option limits the number of cuts that can be added. The number of rows in the problem with cuts added is limited<br />

to cutsfactor times the original (after presolve) number of rows.<br />

(default = 4.0)<br />

cutup (real)<br />

Upper Cutoff value for tree search in minimization problems. This is used to cut off any nodes that have an objective<br />

value above the upper cutoff value. This option overrides the GAMS Cutoff setting. Too restrictive a value may result<br />

in no integer solutions being found.<br />

(default = 1.0e+75)<br />

depind (integer)<br />

This option determines if the dependency checker will be used.<br />

(default = 0)<br />

0 Do not use the dependency checker<br />

1 Use the dependency checker<br />

disjcuts (integer)<br />

Determines whether or not to generate disjunctive cuts during optimization. At the default of 0, generation is<br />

continued only if it seems to be helping.<br />

(default = 0)<br />

-1 Do not generate disjunctive cuts<br />

0 Determined automatically<br />

1 Generate disjunctive cuts moderately


2 Generate disjunctive cuts aggressively<br />

3 Generate disjunctive cuts very aggressively<br />

dpriind (integer)<br />

Pricing strategy for dual simplex method. Consider using dual steepest-edge pricing. Dual steepest-edge is<br />

particularly efficient and does not carry as much computational burden as the primal steepest-edge pricing.<br />

(default = 0)<br />

0 Determined automatically<br />

1 Standard dual pricing<br />

2 Steepest-edge pricing<br />

3 Steepest-edge pricing in slack space<br />

4 Steepest-edge pricing, unit initial norms<br />

epagap (real)<br />

Absolute tolerance on the gap between the best integer objective and the objective of the best node remaining. When<br />

the value falls below the value of the epagap setting, the optimization is stopped. This option overrides GAMS OptCA<br />

which provides its initial value.<br />

(default = GAMS OptCA)<br />

epgap (real)<br />

Relative tolerance on the gap between the best integer objective and the objective of the best node remaining. When<br />

the value falls below the value of the epgap setting, the mixed integer optimization is stopped. Note the difference in<br />

the Cplex definition of the relative tolerance with the GAMS definition. This option overrides GAMS OptCR which<br />

provides its initial value.<br />

Range: [0.0, 1.0]<br />

(default = GAMS OptCR)<br />

epint (real)<br />

Integrality Tolerance. This specifies the amount by which an integer variable can differ from an integer value and still<br />

be considered feasible.<br />

Range: [1.0e-9, 1.0]<br />

(default = 1.0e-5)<br />

epmrk (real)<br />

The Markowitz tolerance influences pivot selection during basis factorization. Increasing the Markowitz threshold<br />

may improve the numerical properties of the solution.<br />

Range: [0.0001, 0.99999]<br />

(default = 0.01)<br />

epopt (real)<br />

The optimality tolerance influences the reduced-cost tolerance for optimality. This option setting governs how closely<br />

Cplex must approach the theoretically optimal solution.<br />

Range: [1.0e-9, 1.0e-4]<br />

(default = 1.0e-6)<br />

epper (real)<br />


Perturbation setting. Highly degenerate problems tend to stall optimization progress. Cplex automatically perturbs the<br />

variable bounds when this occurs. Perturbation expands the bounds on every variable by a small amount, thereby<br />

creating a different but closely related problem. Generally, the solution to the less constrained problem is easier to<br />

solve. Once the solution to the perturbed problem has advanced as far as it can go, Cplex removes the perturbation by<br />

resetting the bounds to their original values.<br />

If the problem is perturbed more than once, the perturbation constant is probably too large. Reduce the epper option<br />

to a level where only one perturbation is required. Any value greater than or equal to 1.0e-8 is valid.<br />

(default = 1.0e-6)<br />

eprhs (real)<br />

Feasibility tolerance. This specifies the degree to which a problem's basic variables may violate their bounds. This<br />

tolerance influences the selection of an optimal basis and can be reset to a higher value when a problem is having<br />

difficulty maintaining feasibility during optimization. You may also wish to lower this tolerance after finding an<br />

optimal solution if there is any doubt that the solution is truly optimal. If the feasibility tolerance is set too low, Cplex<br />

may falsely conclude that a problem is infeasible.<br />

Range: [1.0e-9, 1.0e-4]<br />

(default = 1.0e-6)<br />

flowcovers (integer)<br />

Determines whether or not flow cover cuts should be generated during optimization.<br />

(default = 0)<br />

-1 Do not generate flow cover cuts<br />

0 Determined automatically<br />

1 Generate flow cover cuts moderately<br />

2 Generate flow cover cuts aggressively<br />

flowpaths (integer)<br />

Determines whether or not flow path cuts should be generated during optimization. At the default of 0, generation is<br />

continued only if it seems to be helping.<br />

(default = 0)<br />

-1 Do not generate flow path cuts<br />

0 Determined automatically<br />

1 Generate flow path cuts moderately<br />

2 Generate flow path cuts aggressively<br />

fraccand (integer)<br />

Limits the number of candidate variables for generating Gomory fractional cuts.<br />

(default = 200)<br />

fraccuts (integer)<br />

Determines whether or not Gomory fractional cuts should be generated during optimization.<br />

(default = 0)<br />

-1 Do not generate Gomory fractional cuts<br />

0 Determined automatically<br />

1 Generate Gomory fractional cuts moderately<br />

2 Generate Gomory fractional cuts aggressively


fracpass (integer)<br />

Sets the upper limit on the number of passes that will be performed when generating Gomory fractional cuts on a<br />

mixed integer model. Ignored if parameter fraccuts is set to a nonzero value.<br />

(default = 0)<br />

0 Automatically determined<br />

>0 Maximum passes to perform<br />

gubcovers (integer)<br />

Determines whether or not GUB (Generalized Upper Bound) cover cuts should be generated during optimization. The<br />

default of 0 indicates that the attempt to generate GUB cuts should continue only if it seems to be helping.<br />

(default = 0)<br />

-1 Do not generate GUB cover cuts<br />

0 Determined automatically<br />

1 Generate GUB cover cuts moderately<br />

2 Generate GUB cover cuts aggressively<br />

heurfreq (integer)<br />

This option specifies how often to apply the node heuristic. Setting to a positive number applies the heuristic at the<br />

requested node interval.<br />

(default = 0)<br />

-1 Do not use the node heuristic<br />

0 Determined automatically<br />

iis (string)<br />

Find an IIS (Irreducibly Inconsistent Set of constraints) and write an IIS report to the GAMS solution listing if the<br />

model is found to be infeasible. Legal option values are yes or no.<br />

(default = no)<br />

iisind (integer)<br />

This option specifies which method to use for finding an IIS (Irreducibly Inconsistent Set of constraints). Note that<br />

the iis option must be set to cause an IIS computation. This option just specifies the method.<br />

(default = 0)<br />

0 Method with minimum computation time<br />

1 Method generating minimum size for the IIS<br />

implbd (integer)<br />

Determines whether or not implied bound cuts should be generated during optimization.<br />

(default = 0)<br />

-1 Do not generate implied bound cuts<br />

0 Determined automatically<br />

1 Generate implied bound cuts moderately<br />

2 Generate implied bound cuts aggressively


interactive (string)<br />

When set to yes, options can be set interactively after interrupting Cplex with a Control-C. <strong>Options</strong> are entered just as<br />

if they were being entered in the cplex.opt file. Control is returned to Cplex by entering "continue". The optimization<br />

can be aborted by entering "abort". This option can only be used when running from the command line. Legal option<br />

values are yes or no.<br />

(default = no)<br />

intsollim (integer)<br />

This option limits the MIP optimization to finding only this number of mixed integer solutions before stopping.<br />

(default = large -- varies by computer)<br />

itlim (integer)<br />

The iteration limit option sets the maximum number of iterations before the algorithm terminates, without reaching<br />

optimality. This Cplex option overrides the GAMS IterLim option. Any non-negative integer value is valid.<br />

(default = GAMS IterLim)<br />

lpmethod (integer)<br />

Specifies which LP algorithm to use. As of Cplex 7.0, the default of Automatic is always Dual Simplex. In future<br />

versions, the algorithm may be picked based on problem characteristics.<br />

(default = 0)<br />

0 Automatic<br />

1 Primal Simplex<br />

2 Dual Simplex<br />

3 Network Simplex<br />

4 Barrier<br />

mipdisplay (integer)<br />

The amount of information displayed during MIP solution increases with increasing values of this option.<br />

(default = 4)<br />

0 No display<br />

1 Display integer feasible solutions.<br />

2 Displays nodes under mipinterval control.<br />

3 Same as 2 but adds information on cuts.<br />

4 Same as 3 but adds LP display for the root node.<br />

5 Same as 3 but adds LP display for all nodes.<br />

mipemphasis (integer)<br />

This option controls whether the tactics for solving a mixed integer programming problem should emphasize<br />

feasibility or optimality.<br />

(default = 0)<br />

0 Emphasize finding a proven optimal solution as quickly as possible.<br />

1 Emphasize finding feasible solutions -- likely at the expense of prolonging the time required to find a proven<br />

optimal solution.<br />

mipinterval (integer)


The MIP interval option value determines the frequency of the node logging when the mipdisplay option is set higher<br />

than 1. If the value of the MIP interval option setting is n, then only every nth node, plus all integer feasible nodes,<br />

will be logged. This option is useful for selectively limiting the volume of information generated for a problem that<br />

requires many nodes to solve. Any non-negative integer value is valid.<br />

(default = 100)<br />

mipordind (integer)<br />

Use priorities. Priorities should be assigned based on your knowledge of the problem. Variables with higher priorities<br />

will be branched on before variables with lower priorities. Directing the tree search in this way can often dramatically<br />

reduce the number of nodes searched. For example, consider a problem with a binary variable representing a yes/no<br />

decision to build a factory, and other binary variables representing equipment selections within that factory. You<br />

would naturally want to explore whether or not the factory should be built before considering what specific equipment<br />

to purchase within the factory. By assigning a higher priority to the build/nobuild decision variable, you can force<br />

this logic into the tree search and eliminate wasted computation time exploring uninteresting portions of the tree.<br />

When set to 0, the mipordind option instructs Cplex not to use priorities for branching. When set to 1 (the<br />

default), priority orders are utilized.<br />

Note: Priorities are assigned to discrete variables using the .prior suffix in the GAMS model. Lower .prior values<br />

mean higher priority. The .prioropt model suffix has to be used to signal GAMS to export the priorities to the solver.<br />

(default = 1)<br />

0 Do not use priorities for branching<br />

1 Priority orders are utilized<br />
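The note above can be sketched in GAMS as follows; the variable and model names (build, equip, factory) are hypothetical, and mipordind must be at its default of 1 for the priorities to take effect:<br />

```gams
* Branch on the build/nobuild decision before equipment choices
build.prior      = 1 ;   * lower .prior value = higher priority
equip.prior(e)   = 2 ;
factory.prioropt = 1 ;   * signal GAMS to export priorities to Cplex
```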

mipordtype (integer)<br />

This option is used to select the type of generic priority order to generate when no priority order is present.<br />

(default = 0)<br />

0 None<br />

1 decreasing cost magnitude<br />

2 increasing bound range<br />

3 increasing cost per coefficient count<br />

mipstart (integer)<br />

This option controls the use of advanced starting values for mixed integer programs. A setting of 1 indicates that the<br />

values should be checked to see if they provide an integer feasible solution before starting optimization.<br />

(default = 0)<br />

0 do not use the values<br />

1 use the values<br />

mipthreads (integer)<br />

This option sets a limit on the number of threads available to the parallel mip algorithm. The actual number of<br />

processors used will be the minimum of this number and the number of available processors. The parallel option is<br />

separately licensed.<br />

(default = the number of threads licensed)<br />

mircuts (integer)<br />

Determines whether or not to generate mixed integer rounding (MIR) cuts during optimization. At the default of 0,


generation is continued only if it seems to be helping.<br />

(default = 0)<br />

-1 Do not generate MIR cuts<br />

0 Determined automatically<br />

1 Generate MIR cuts moderately<br />

2 Generate MIR cuts aggressively<br />

names (string)<br />

This option causes GAMS names for the variables and equations to be loaded into Cplex. These names will then be<br />

used for error messages, log entries, and so forth. Setting names to no may help if memory is very tight.<br />

(default = yes)<br />

no Do not load GAMS names into Cplex<br />

yes Load GAMS names into Cplex<br />

netdisplay (integer)<br />

This option controls the log for network iterations.<br />

(default = 2)<br />

0 No network log.<br />

1 Displays true objective values.<br />

2 Displays penalized objective values.<br />

netepopt (real)<br />

This optimality tolerance influences the reduced-cost tolerance for optimality when using the network simplex<br />

method. This option setting governs how closely Cplex must approach the theoretically optimal solution.<br />

Range - [1.0e-11, 1.0e-4]<br />

(default = 1.0e-6)<br />

neteprhs (real)<br />

This feasibility tolerance determines the degree to which the network simplex algorithm will allow a flow value to<br />

violate its bounds.<br />

Range - [1.0e-11, 1.0e-4]<br />

(default = 1.0e-6)<br />

netfind (integer)<br />

Specifies the level of network extraction to be done.<br />

(default = 2)<br />

1 Extract pure network only<br />

2 Try reflection scaling<br />

3 Try general scaling<br />

netitlim (integer)<br />

Iteration limit for the network simplex method.<br />

(default = large -- varies by computer)<br />

netppriind (integer)


Network simplex pricing algorithm. The default of 0 (currently equivalent to 3) shows best performance for most<br />

problems.<br />

(default = 0)<br />

0 Automatic<br />

1 Partial pricing<br />

2 Multiple partial pricing<br />

3 Multiple partial pricing with sorting<br />

nodefileind (integer)<br />

Specifies how node files are handled during MIP processing. Used when parameter workmem has been exceeded by<br />

the size of the branch and cut tree. If set to 0 when the tree memory limit is reached, optimization is terminated.<br />

Otherwise a group of nodes is removed from the in-memory set as needed. By default, Cplex transfers nodes to node<br />

files when the in-memory set is larger than 128 MBytes, and it keeps the resulting node "files" in compressed form in<br />

memory. At settings 2 and 3, the node files are transferred to disk. They are stored under a directory specified by<br />

parameter workdir and Cplex actively manages which nodes remain in memory for processing.<br />

(default = 1)<br />

0 No node files<br />

1 Node files in memory and compressed<br />

2 Node files on disk<br />

3 Node files on disk and compressed<br />
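For a MIP whose branch and cut tree outgrows memory, workmem, nodefileind, and workdir are typically set together. A sketch of such a cplex.opt, with illustrative values:<br />

```
* cplex.opt -- spill the tree to compressed disk files beyond 256 MB
workmem     256
nodefileind 3
workdir     /tmp
```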

nodelim (integer)<br />

The maximum number of nodes solved before the algorithm terminates without reaching optimality. This option<br />

overrides the GAMS NodLim model suffix.<br />

nodesel (integer)<br />

This option is used to set the rule for selecting the next node to process when backtracking.<br />

(default = 1)<br />

0 Depth-first search. This chooses the most recently created node.<br />

1 Best-bound search. This chooses the unprocessed node with the best objective function for the associated LP<br />

relaxation.<br />

2 Best-estimate search. This chooses the node with the best estimate of the integer objective value that would be<br />

obtained once all integer infeasibilities are removed.<br />

3 Alternate best-estimate search.<br />

objdif (real)<br />

A means for automatically updating the cutoff to more restrictive values. Normally the most recently found integer<br />

feasible solution objective value is used as the cutoff for subsequent nodes. When this option is set to a positive value,<br />

the value will be subtracted from (added to) the newly found integer objective value when minimizing (maximizing).<br />

This forces the MIP optimization to ignore integer solutions that are not at least this amount better than the one found<br />

so far. The option can be adjusted to improve problem solving efficiency by limiting the number of nodes; however,<br />

setting this option at a value other than zero (the default) can cause some integer solutions, including the true integer<br />

optimum, to be missed. Negative values for this option will result in some integer solutions that are worse than or the<br />

same as those previously generated, but will not necessarily result in the generation of all possible integer solutions.<br />

This option overrides the GAMS Cheat parameter.<br />

(default = 0.0)
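As a small worked example (with illustrative values): when minimizing with an incumbent objective of 5000 and the setting below, the cutoff for subsequent nodes becomes 5000 − 100 = 4900, so any solution with objective between 4900 and 5000 is skipped:<br />

```
objdif 100
```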


objllim (real)<br />

Setting a lower objective function limit will cause Cplex to halt the optimization process once the objective<br />

function value falls below this limit.<br />

(default = -1.0e+75)<br />

objrng (string)<br />

Calculate sensitivity ranges for the specified GAMS variables. Unlike most options, objrng can be repeated multiple<br />

times in the options file. Sensitivity range information will be produced for each GAMS variable named. Specifying<br />

"all" will cause range information to be produced for all variables. Range information will be printed to the beginning<br />

of the solution listing in the GAMS listing file unless option rngrestart is specified.<br />

(default = no objective ranging is done)<br />

objulim (real)<br />

Setting an upper objective function limit will cause Cplex to halt the optimization process once the<br />

objective function value exceeds this limit.<br />

(default = 1.0e+75)<br />

perind (integer)<br />

Perturbation Indicator. If a problem automatically perturbs early in the solution process, consider starting the solution<br />

process with a perturbation by setting perind to 1. Manually perturbing the problem will save the time of first<br />

allowing the optimization to stall before activating the perturbation mechanism, but is useful only rarely, for<br />

extremely degenerate problems.<br />

(default = 0)<br />

0 not automatically perturbed<br />

1 automatically perturbed<br />

perlim (integer)<br />

Perturbation limit. The number of stalled iterations before perturbation is invoked. The default value of 0 means the<br />

number is determined automatically.<br />

(default = 0)<br />

ppriind (integer)<br />

Pricing algorithm. Likely to show the biggest impact on performance. Look at overall solution time and the number of<br />

Phase I and total iterations as a guide in selecting alternate pricing algorithms. If you are using the dual Simplex<br />

method use dpriind to select a pricing algorithm. If the number of iterations required to solve your problem is<br />

approximately the same as the number of rows in your problem, then you are doing well. Iteration counts more than<br />

three times greater than the number of rows suggest that improvements might be possible.<br />

(default = 0)<br />

-1 Reduced-cost pricing. This is less compute intensive and may be preferred if the problem is small or easy. This<br />

option may also be advantageous for dense problems (say 20 to 30 nonzeros per column).<br />

0 Hybrid reduced-cost and Devex pricing.<br />

1 Devex pricing. This may be useful for more difficult problems which take many iterations to complete Phase I.<br />

Each iteration may consume more time, but the reduced number of total iterations may lead to an overall reduction<br />

in time. Tenfold iteration count reductions leading to threefold speed improvements have been observed. Do not<br />

use devex pricing if the problem has many columns and relatively few rows. The number of calculations required<br />

per iteration will usually be disadvantageous.


2 Steepest edge pricing. If devex pricing helps, this option may be beneficial. Steepest-edge pricing is<br />

computationally expensive, but may produce the best results on exceptionally difficult problems.<br />

3 Steepest edge pricing with slack initial norms. This reduces the computationally intensive nature of steepest edge<br />

pricing.<br />

4 Full pricing<br />

precompress (integer)<br />

Compress the original model after presolve is performed. This can save a considerable amount of memory for large<br />

models. Under the default setting, Cplex will decide whether or not to perform the compression based on model<br />

characteristics.<br />

(default = 0)<br />

-1 off<br />

0 automatic<br />

1 on<br />

predual (integer)<br />

Solve the dual. Some linear programs with many more rows than columns may be solved faster by explicitly solving<br />

the dual. The predual option will cause Cplex to solve the dual while returning the solution in the context of the<br />

original problem. This option is ignored if presolve is turned off.<br />

(default = 0)<br />

-1 do not give dual to optimizer<br />

0 automatic<br />

1 give dual to optimizer<br />

preind (integer)<br />

Perform Presolve. This helps most problems by simplifying, reducing and eliminating redundancies. However, if<br />

there are no redundancies or opportunities for simplification in the model, it may be faster to turn presolve off to<br />

avoid this step. On rare occasions, the presolved model, although smaller, may be more difficult than the original<br />

problem. In this case turning the presolve off leads to better performance. Specifying 0 turns the aggregator off as<br />

well.<br />

(default = 1)<br />

0 Do not presolve the problem<br />

1 Perform the presolve<br />

prepass (integer)<br />

Number of MIP presolve applications to perform. By default, Cplex determines this automatically. Specifying 0 turns<br />

off the presolve but not the aggregator. Set preind to 0 to turn both off.<br />

(default = -1)<br />

-1 Determined automatically<br />

0 No presolve<br />

preslvnd (integer)<br />

Indicates whether node presolve should be performed at the nodes of a mixed integer programming solution. Node<br />

presolve can significantly reduce solution time for some models. The default setting is generally effective.<br />

(default = 0)


-1 No node presolve<br />

0 Automatic<br />

1 Force node presolve<br />

pricelim (integer)<br />

Size for the pricing candidate list. Cplex dynamically determines a good value based on problem dimensions. Only<br />

very rarely will setting this option manually improve performance. Any non-negative integer values are valid.<br />

(default = 0, in which case it is determined automatically)<br />

printoptions (string)<br />

Write the values of all options to the GAMS listing file. Valid values are no or yes.<br />

(default = no)<br />

probe (integer)<br />

Determines the amount of probing performed on a MIP. Probing can be both very powerful and very time consuming.<br />

Setting the value to 1 can result in dramatic reductions or dramatic increases in solution time depending on the<br />

particular model.<br />

(default = 0)<br />

-1 no probing<br />

0 automatic<br />

1 limited probing<br />

2 more probing<br />

3 full probing<br />

quality (string)<br />

Write solution quality statistics to the listing file. If set to yes, the statistics appear after the Solve Summary and<br />

before the Solution Listing.<br />

(default = no)<br />

reduce (integer)<br />

Determines whether primal reductions, dual reductions, or both, are performed during preprocessing. It is<br />

occasionally advisable to do only one or the other when diagnosing infeasible or unbounded models.<br />

(default = 3)<br />

0 No primal or dual reductions<br />

1 Only primal reductions<br />

2 Only dual reductions<br />

3 Both primal and dual reductions<br />

reinv (integer)<br />

Refactorization Frequency. This option determines the number of iterations between refactorizations of the basis<br />

matrix. The default should be optimal for most problems. Cplex's performance is relatively insensitive to changes in<br />

refactorization frequency. Only for extremely large, difficult problems should reducing the number of iterations<br />

between refactorizations be considered. Any non-negative integer value is valid.<br />

(default = 0, in which case it is determined automatically)<br />

relaxpreind (integer)


This option will cause the Cplex presolve to be invoked for the initial relaxation of a mixed integer program<br />

(according to the other presolve option settings). Sometimes, additional reductions can be made beyond any MIP<br />

presolve reductions that may already have been done.<br />

(default = 0)<br />

0 do not presolve initial relaxation<br />

1 use presolve on initial relaxation<br />

relobjdif (real)<br />

The relative version of the objdif option. Ignored if objdif is non-zero.<br />

(default = 0)<br />

rerun (string)<br />

The Cplex presolve can sometimes diagnose a problem as being infeasible or unbounded. When this happens,<br />

GAMS/Cplex can, in order to get better diagnostic information, rerun the problem with presolve turned off. The<br />

GAMS solution listing will then mark variables and equations as infeasible or unbounded according to the final<br />

solution returned by the simplex algorithm. The iis option can be used to get even more diagnostic information. The<br />

rerun option controls this behavior. Valid values are auto, yes or no. The default value of auto is equivalent to no if<br />

names are successfully loaded into Cplex. In that case the Cplex messages from presolve identify the cause of<br />

infeasibility or unboundedness in terms of GAMS variable and equation names. If names are not successfully loaded,<br />

rerun defaults to yes. Loading of GAMS names into Cplex is controlled by option names.<br />

(default = auto)<br />

rhsrng (string)<br />

Calculate sensitivity ranges for the specified GAMS equations. Unlike most options, rhsrng can be repeated multiple<br />

times in the options file. Sensitivity range information will be produced for each GAMS equation named. Specifying<br />

"all" will cause range information to be produced for all equations. Range information will be printed to the beginning<br />

of the solution listing in the GAMS listing file unless option rngrestart is specified.<br />

(default = no right-hand-side ranging is done)<br />

rngrestart (string)<br />

Write ranging information, in GAMS readable format, to the file named. Options objrng and rhsrng are used to<br />

specify which GAMS variables or equations are included.<br />

(default = ranging information is printed to the listing file)<br />
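A cplex.opt combining the three ranging options might look as follows; the file name ranges.gms is hypothetical:<br />

```
* cplex.opt -- full ranging, written in GAMS readable form
objrng all
rhsrng all
rngrestart ranges.gms
```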

scaind (integer)<br />

This option influences the scaling of the problem matrix.<br />

(default = 0)<br />

-1 No scaling<br />

0 Standard scaling - An equilibration scaling method is implemented which is generally very effective.<br />

1 Modified, more aggressive scaling method that can produce improvements on some problems. This scaling should<br />

be used if the problem is observed to have difficulty staying feasible during the solution process.<br />

simdisplay (integer)<br />

This option controls what Cplex reports (normally to the screen) during optimization. The amount of information<br />

displayed increases as the setting value increases.<br />

(default = 1)


0 No iteration messages are issued until the optimal solution is reported.<br />

1 An iteration log message will be issued after each refactorization. Each entry will contain the iteration count and<br />

scaled infeasibility or objective value.<br />

2 An iteration log message will be issued after each iteration. The variables, slacks and artificials entering and<br />

leaving the basis will also be reported.<br />

simthreads (integer)<br />

This option sets a limit on the number of threads available to the parallel simplex algorithm. The actual number of<br />

processors used will be the minimum of this number and the number of available processors. The parallel option is<br />

separately licensed.<br />

(default = the number of threads licensed)<br />

singlim (integer)<br />

The singularity limit setting restricts the number of times Cplex will attempt to repair the basis when singularities are<br />

encountered. Once the limit is exceeded, Cplex replaces the current basis with the best factorizable basis that has been<br />

found. Any non-negative integer value is valid.<br />

(default = 10)<br />

startalg (integer)<br />

Selects the algorithm to use for the initial relaxation of a MIP.<br />

(default = 2)<br />

1 primal simplex<br />

2 dual simplex<br />

3 network followed by dual simplex<br />

4 barrier with crossover<br />

5 dual simplex to iteration limit, then barrier<br />

6 barrier without crossover<br />

strongcandlim (integer)<br />

Limit on the length of the candidate list for strong branching (varsel = 3).<br />

(default = 10)<br />

strongitlim (integer)<br />

Limit on the number of iterations per branch in strong branching (varsel = 3). The default value of 0 causes the limit<br />

to be chosen automatically which is normally satisfactory. Try reducing this value if the time per node seems<br />

excessive. Try increasing this value if the time per node is reasonable but Cplex is making little progress.<br />

(default = 0)<br />

strongthreadlim (integer)<br />

Controls the number of parallel threads available for strong branching (varsel = 3). This option does nothing if option<br />

mipthreads is greater than 1.<br />

(default = 1)<br />

subalg (integer)<br />

Strategy for solving linear sub-problems at each node.


(default = 2)<br />

1 primal simplex<br />

2 dual simplex<br />

3 network optimizer followed by dual simplex<br />

4 barrier with crossover<br />

5 dual simplex to iteration limit, then barrier<br />

6 barrier without crossover<br />

tilim (real)<br />

The time limit setting determines the amount of time in seconds that Cplex will continue to solve a problem. This<br />

Cplex option overrides the GAMS ResLim option. Any non-negative value is valid.<br />

(default = GAMS ResLim)<br />

trelim (real)<br />

Sets an absolute upper limit on the size (in megabytes) of the branch and cut tree. If this limit is exceeded, Cplex<br />

terminates optimization.<br />

(default = 1.0e+75)<br />

varsel (integer)<br />

This option is used to set the rule for selecting the branching variable at the node which has been selected for<br />

branching. The default value of 0 allows Cplex to select the best rule based on the problem and its progress.<br />

(default = 0)<br />

-1 Branch on variable with minimum infeasibility. This rule may lead more quickly to a first integer feasible solution,<br />

but will usually be slower overall to reach the optimal integer solution.<br />

0 Branch variable automatically selected.<br />

1 Branch on variable with maximum infeasibility. This rule forces larger changes earlier in the tree, which tends to<br />

produce faster overall times to reach the optimal integer solution.<br />

2 Branch based on pseudo costs. Generally, the pseudo-cost setting is more effective when the problem contains<br />

complex trade-offs and the dual values have an economic interpretation.<br />

3 Strong Branching. This setting causes variable selection based on partially solving a number of subproblems with<br />

tentative branches to see which branch is most promising. This is often effective on large, difficult problems.<br />

4 Branch based on pseudo reduced costs.<br />
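Strong branching (varsel 3) is typically tuned together with strongcandlim and strongitlim. An illustrative cplex.opt (the values shown are examples, not recommendations):<br />

```
varsel        3
strongcandlim 20
strongitlim   0
```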

workdir (character string)<br />

The name of an existing directory into which Cplex may store temporary working files. Used for MIP node files and<br />

by out-of-core Barrier.<br />

workmem (real)<br />

Upper limit on the amount of memory, in megabytes, that Cplex is permitted to use for working files. See parameter<br />

workdir.<br />

writebas (character string)<br />

Write a basis file.<br />

writelp (character string)


Write a file in Cplex LP format.<br />

writemps (character string)<br />

Write an MPS problem file.<br />

writeord (character string)<br />

Write a Cplex ORD file (containing priority and branch direction information).<br />

writesav (character string)<br />

Write a binary problem file.<br />

writesos (character string)<br />

Write a file containing the SOS structure. For models with SOS variables only.<br />

Last modified November 16, 2001 by PES


GAMS/DECIS ∗ USER’S GUIDE<br />

Gerd Infanger †<br />

1999<br />

Abstract<br />

This User’s Guide discusses how to run DECIS directly from GAMS. It describes<br />

how to set up stochastic problems using the GAMS modeling language and DECIS as<br />

the optimizer for their solution. DECIS is a system for solving large-scale stochastic<br />

programs. It includes a variety of solution strategies and can solve problems with<br />

numerous stochastic parameters. DECIS interfaces either with MINOS or CPLEX for<br />

solving subproblems. GAMS stands for General Algebraic Modeling System, and is<br />

one of the most widely used modeling languages. Using DECIS directly from GAMS<br />

makes the formulation of stochastic problems easy and gives you all the power and<br />

flexibility of an integrated modeling and solver environment.<br />

∗ Copyright © 1989–1999 by Gerd Infanger. All rights reserved. The GAMS/DECIS User’s Guide is<br />

copyrighted and all rights are reserved. Information in this document is subject to change without notice<br />

and does not represent a commitment on the part of Gerd Infanger. The DECIS software described in this<br />

document is furnished under a license agreement and may be used only in accordance with the terms of this<br />

agreement. The DECIS software can be licensed through Infanger Investment Technology, LLC or through<br />

GAMS Development Corporation.<br />

† Dr. Gerd Infanger is an Associate Professor at Vienna University of Technology and a Consulting<br />

Professor at Stanford University.<br />



Contents<br />

1. DECIS<br />

1.1 Introduction<br />

1.2 What DECIS can do<br />

1.3 Representing uncertainty<br />

1.4 Solving the universe problem<br />

1.5 Solving the expected value problem<br />

1.6 Using Monte Carlo sampling<br />

1.7 Monte Carlo pre-sampling<br />

1.8 Regularized decomposition<br />

2. GAMS/DECIS<br />

2.1 Setting up a stochastic program using GAMS/DECIS<br />

2.2 Starting with the deterministic model<br />

2.3 Setting the decision stages<br />

2.4 Specifying the stochastic model<br />

2.4.1 Specifying independent random parameters<br />

2.4.2 Defining the distributions of the uncertain parameters in the model<br />

2.5 Setting DECIS as the optimizer<br />

2.5.1 Setting parameter options in the GAMS model<br />

2.5.2 Setting parameters in the DECIS options file<br />

2.5.3 Setting MINOS parameters in the MINOS specification file<br />

2.5.4 Setting CPLEX parameters using system environment variables<br />

2.6 GAMS/DECIS output<br />

2.6.1 The screen output<br />

2.6.2 The solution output file<br />

2.6.3 The debug output file<br />

2.6.4 The optimizer output files<br />

A. GAMS/DECIS illustrative Examples<br />

A.1 Example APL1P<br />

A.2 Example APL1PCA<br />

B. Error messages<br />

References<br />

License and Warranty<br />



1. DECIS<br />

1.1. Introduction<br />

DECIS is a system for solving large-scale stochastic programs, i.e., programs which include<br />

parameters (coefficients and right-hand sides) that are not known with certainty, but are<br />

assumed to be known by their probability distribution. It employs Benders decomposition<br />

and allows using advanced Monte Carlo sampling techniques. DECIS includes a variety<br />

of solution strategies, such as solving the universe problem, the expected value problem,<br />

Monte Carlo sampling within the Benders decomposition algorithm, and Monte Carlo pre-sampling.<br />

When using Monte Carlo sampling the user has the option of employing crude<br />

Monte Carlo without variance reduction techniques, or using importance sampling or control<br />

variates as variance reduction techniques, based on either an additive or a multiplicative<br />

approximation function. Pre-sampling is limited to using crude Monte Carlo only.<br />

For solving linear and nonlinear programs (master and subproblems arising from the decomposition)<br />

DECIS interfaces with MINOS or CPLEX. MINOS, see Murtagh and Saunders<br />

(1983) [5], is a state-of-the-art solver for large-scale linear and nonlinear programs,<br />

and CPLEX, see CPLEX Optimization, Inc. (1989–1997) [2], is one of the fastest linear<br />

programming solvers available.<br />

For details about the DECIS system consult the DECIS User’s Guide, see Infanger (1997)<br />

[4]. It includes a comprehensive mathematical description of the methods used by DECIS.<br />

In this Guide we concentrate on how to use DECIS directly from GAMS, see Brooke, A.,<br />

Kendrick, D. and Meeraus, A. (1988) [1], and especially on how to model stochastic programs<br />

using the GAMS/DECIS interface. First, however, in section 1.2 we give a brief description<br />

of what DECIS can do and what solution strategies it uses. This description has been<br />

adapted from the DECIS User’s Guide. In section 2 we discuss in detail how to set up a<br />

stochastic problem using GAMS/DECIS and give a description of the parameter setting<br />

and outputs obtained. In Appendix A we show the GAMS/DECIS formulation of two<br />

illustrative examples (APL1P and APL1PCA) discussed in the DECIS User’s Guide. A list<br />

of DECIS error messages is presented in Appendix B.<br />

1.2. What DECIS can do<br />

DECIS solves two-stage stochastic linear programs with recourse:<br />

min z = cx + E f ω y ω<br />

s/t Ax = b<br />

−B ω x + D ω y ω = d ω<br />

x, y ω ≥ 0, ω ∈ Ω.<br />

where x denotes the first-stage, y ω the second-stage decision variables, c represents the<br />

first-stage and f ω the second-stage objective coefficients, A, b represent the coefficients and<br />

right hand sides of the first-stage constraints, and B ω , D ω , d ω represent the parameters of<br />

the second-stage constraints, where the transition matrix B ω couples the two stages. In<br />

the literature D ω is often referred to as the technology matrix or recourse matrix. The<br />

first stage parameters are known with certainty. The second stage parameters are random<br />

parameters that assume outcomes labeled ω with probability p(ω), where Ω denotes the set<br />

of all possible outcome labels.<br />



At the time the first-stage decision x has to be made, the second-stage parameters are<br />

only known by their probability distribution of possible outcomes. Later, after x is already<br />

determined, an actual outcome of the second-stage parameters will become known, and<br />

the second-stage decision y ω is made based on knowledge of the actual outcome ω. The<br />

objective is to find a feasible decision x that minimizes the total expected costs, the sum of<br />

first-stage costs and expected second-stage costs.<br />

For discrete distributions of the random parameters, the stochastic linear program can<br />

be represented by the corresponding equivalent deterministic linear program:<br />

min z = cx + p^1 f y^1 + p^2 f y^2 + · · · + p^W f y^W<br />

s/t Ax = b<br />

−B^1 x + D y^1 = d^1<br />

−B^2 x + D y^2 = d^2<br />

. . .<br />

−B^W x + D y^W = d^W<br />

x, y^1, y^2, . . . , y^W ≥ 0,<br />

which contains all possible outcomes ω ∈ Ω. Note that for practical problems W is very<br />

large, e.g., a typical number could be 10^20, and the resulting equivalent deterministic linear<br />

problem is too large to be solved directly.<br />

In order to see the two-stage nature of the underlying decision-making process the<br />

following representation is also often used:<br />

min cx + E z^ω(x)<br />

s/t Ax = b<br />

x ≥ 0,<br />

where<br />

z^ω(x) = min f^ω y^ω<br />

s/t D^ω y^ω = d^ω + B^ω x<br />

y^ω ≥ 0, ω ∈ Ω = {1, 2, . . . , W }.<br />

DECIS employs different strategies to solve two-stage stochastic linear programs. It<br />

computes an exact optimal solution to the problem or approximates the true optimal solution<br />

very closely and gives a confidence interval within which the true optimal objective<br />

lies with, say, 95% confidence.<br />

1.3. Representing uncertainty<br />

It is favorable to represent the uncertain second-stage parameters in a structured form. Using<br />
V = (V1, . . . , Vh), an h-dimensional random vector with independent components that assumes<br />
outcomes v^ω = (v1, . . . , vh)^ω with probability p^ω = p(v^ω ), we represent the uncertain second-stage<br />
parameters of the problem as functions of the independent random parameter V :<br />

f ω = f(v ω ), B ω = B(v ω ), D ω = D(v ω ), d ω = d(v ω ).<br />

Each component Vi has outcomes v_i^ωi , ωi ∈ Ωi, where ωi labels a possible outcome of<br />
component i, and Ωi represents the set of all possible outcomes of component i. An outcome<br />
of the random vector<br />
v^ω = (v_1^ω1 , . . . , v_h^ωh )<br />

consists of h independent component outcomes. The set<br />

Ω = Ω1 × Ω2 × . . . × Ωh<br />

represents the crossing of sets Ωi. Assuming each set Ωi contains Wi possible outcomes,<br />
|Ωi| = Wi, the set Ω contains W = W1 W2 · · · Wh elements, where |Ω| = W represents the number of<br />
all possible outcomes of the random vector V . Based on independence, the joint probability<br />
is the product<br />
p^ω = p_1^ω1 p_2^ω2 · · · p_h^ωh .<br />
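The crossing Ω = Ω1 × · · · × Ωh and the product form of the joint probability can be sketched as follows (the three toy marginal distributions are hypothetical, not data from the manual):<br />

```python
# Sketch of the crossing of outcome sets and the product-form joint
# probabilities for independent components (toy data, h = 3).
from itertools import product

# marginal distributions p_i(omega_i), one dict per independent component
marginals = [
    {"a": 0.2, "b": 0.8},
    {"c": 0.5, "d": 0.5},
    {"e": 0.1, "f": 0.9},
]

# joint outcome omega = (omega1, omega2, omega3) with
# p(omega) = p1(omega1) * p2(omega2) * p3(omega3)
joint = {}
for combo in product(*[m.items() for m in marginals]):
    labels = tuple(lbl for lbl, _ in combo)
    p = 1.0
    for _, pi in combo:
        p *= pi
    joint[labels] = p

assert len(joint) == 2 * 2 * 2              # |Omega| = W1 W2 W3
assert abs(sum(joint.values()) - 1.0) < 1e-12
```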

Let η denote the vector of all second-stage random parameters, e.g., η = vec(f, B, D, d).<br />

The outcomes of η may be represented by the following general linear dependency model:<br />

η ω = vec(f ω , B ω , D ω , d ω ) = Hv ω , ω ∈ Ω<br />

where H is a matrix of suitable dimensions. DECIS can solve problems with such general<br />

linear dependency models.<br />
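A minimal numeric sketch of the dependency η ω = Hv ω; the matrix H below is hypothetical, built from the demand coefficients that reappear in the APL1PCA example later in this section:<br />

```python
# Sketch of the general linear dependency model: every dependent second-stage
# parameter is a linear combination (row of H) of the independent outcomes v.

def dependent_outcomes(H, v):
    """Compute eta = H v, one entry per dependent random parameter."""
    return [sum(h_ij * v_j for h_ij, v_j in zip(row, v)) for row in H]

# two independent parameters feeding three dependent demand parameters
H = [[300.0, 100.0],
     [400.0, 150.0],
     [200.0, 300.0]]
v = [2.1, 0.0]  # hypothetical outcome of the independent random vector

eta = dependent_outcomes(H, v)
print(eta)  # approximately [630.0, 840.0, 420.0]
```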

1.4. Solving the universe problem<br />

We refer to the universe problem if we consider all possible outcomes ω ∈ Ω and solve<br />

the corresponding problem exactly. This is not always possible, because there may be<br />

too many possible realizations ω ∈ Ω. For solving the problem DECIS employs Benders<br />

decomposition, splitting the problem into a master problem, corresponding to the first-stage<br />

decision, and into subproblems, one for each ω ∈ Ω, corresponding to the second-stage<br />

decision. The details of the algorithm and techniques used for solving the universe problem<br />

are discussed in The DECIS User’s Manual.<br />

Solving the universe problem is referred to as strategy 4. Use this strategy only if the<br />

number of universe scenarios is reasonably small. There is a maximum number of universe<br />

scenarios DECIS can handle, which depends on your particular resources.<br />

1.5. Solving the expected value problem<br />

The expected value problem results from replacing the stochastic parameters by their expectation.<br />

It is a linear program that can also easily be solved by employing a solver directly.<br />

Solving the expected value problem may be useful by itself (for example as a benchmark to<br />

compare the solution obtained from solving the stochastic problem), and it also may yield<br />

a good starting solution for solving the stochastic problem. DECIS solves the expected<br />

value problem using Benders decomposition. The details of generating the expected value<br />

problem and the algorithm used for solving it are discussed in the DECIS User’s Manual.<br />

To solve the expected value problem choose strategy 1.<br />



1.6. Using Monte Carlo sampling<br />

As noted above, for many practical problems it is impossible to obtain the universe solution,<br />

because the number of possible realizations |Ω| is way too large. The power of DECIS lies in<br />

its ability to compute excellent approximate solutions by employing Monte Carlo sampling<br />

techniques. Instead of computing the expected cost and the coefficients and the right-hand<br />
sides of the Benders cuts exactly (as is done when solving the universe problem),<br />

DECIS, when using Monte Carlo sampling, estimates the quantities in each iteration using<br />

an independent sample drawn from the distribution of the random parameters. In addition<br />

to using crude Monte Carlo, DECIS uses importance sampling or control variates as variance<br />

reduction techniques.<br />

The details of the algorithm and the different techniques used are described in the<br />

DECIS User’s Manual. You can choose crude Monte Carlo, referred to as strategy 6, Monte<br />

Carlo importance sampling, referred to as strategy 2, or control variates, referred to as<br />

strategy 10. Both Monte Carlo importance sampling and control variates have been shown<br />

for many problems to give a better approximation compared to employing crude Monte<br />

Carlo sampling.<br />

When using Monte Carlo sampling DECIS computes a close approximation to the true<br />

solution of the problem, and estimates a close approximation of the true optimal objective<br />

value. It also computes a confidence interval within which the true optimal objective of<br />

the problem lies, say with 95% confidence. The confidence interval is based on rigorous<br />

statistical theory. An outline of how the confidence interval is computed is given in the<br />

DECIS User’s Manual. The size of the confidence interval depends on the variance of the<br />

second-stage cost of the stochastic problem and on the sample size used for the estimation.<br />

You can expect the confidence interval to be very small, especially when you employ<br />

importance sampling or control variates as a variance reduction technique.<br />

When employing Monte Carlo sampling techniques you have to choose a sample size (set<br />

in the parameter file). Clearly, the larger the sample size the better will be the approximate<br />

solution DECIS computes, and the smaller will be the confidence interval for the true<br />

optimal objective value. The default value for the sample size is 100. Setting the sample<br />
size too small may lead to bias in the estimation of the confidence interval; therefore, the<br />
sample size should be at least 30.<br />
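What one sampling-based iteration estimates can be sketched as follows; the cost distribution is a hypothetical stand-in for the second-stage cost, not DECIS's actual subproblem:<br />

```python
import math
import random

# Crude Monte Carlo estimate of the expected second-stage cost, with the
# 95% confidence half-width that a sample of size n supports.

def estimate_with_ci(sample, z=1.96):
    """Return (sample mean, 95% confidence half-width)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return mean, z * math.sqrt(var / n)

random.seed(42)
costs = [random.gauss(1000.0, 50.0) for _ in range(100)]  # default sample size 100

mean, half = estimate_with_ci(costs)
print(f"E[cost] = {mean:.1f} +/- {half:.1f} with 95% confidence")
# The interval shrinks at rate 1/sqrt(n); importance sampling and control
# variates shrink it further by reducing the variance itself.
```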

1.7. Monte Carlo pre-sampling<br />

We refer to pre-sampling when we first take a random sample from the distribution of the<br />

random parameters and then generate the approximate stochastic problem defined by the<br />

sample. The obtained approximate problem is then solved exactly using decomposition.<br />

This is in contrast to the way we used Monte Carlo sampling in the previous section, where<br />

we used Monte Carlo sampling in each iteration of the decomposition.<br />

The details of the techniques used for pre-sampling are discussed in the DECIS User’s<br />

Manual. DECIS computes the exact solution of the sampled problem using decomposition.<br />

This solution is an approximate solution of the original stochastic problem. Besides this<br />

approximate solution, DECIS computes an estimate of the expected cost corresponding to<br />

this approximate solution and a confidence interval within which the true optimal objective<br />

of the original stochastic problem lies with, say, 95% confidence. The confidence interval is<br />

based on statistical theory; its size depends on the variance of the second-stage cost of the<br />

stochastic problem and on the sample size used for generating the approximate problem. In<br />

conjunction with pre-sampling no variance reduction techniques are currently implemented.<br />

Using Monte Carlo pre-sampling you have to choose a sample size. Clearly, the larger<br />

the sample size you choose, the better will be the solution DECIS computes, and the smaller<br />

will be the confidence interval for the true optimal objective value. The default value for<br />

the sample size is 100. Again, setting the sample size too small may lead to a bias in the<br />
estimation of the confidence interval; therefore, the sample size should be at least 30.<br />

For using Monte Carlo pre-sampling choose strategy 8.<br />
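Pre-sampling can be sketched as follows; the outcome distribution reuses the demand data of APL1P, while the sampling itself stands in for what DECIS does internally (the decomposition solve is out of scope here):<br />

```python
import random

# Monte Carlo pre-sampling: draw the sample FIRST, then build a fixed
# approximate problem whose scenarios are the sampled outcomes, each with
# equal weight 1/n, and solve that problem exactly by decomposition.

def presample(outcomes, probs, n, seed=0):
    """Return n sampled scenarios and their equal weights 1/n."""
    rng = random.Random(seed)
    scenarios = rng.choices(outcomes, weights=probs, k=n)
    return scenarios, [1.0 / n] * n

scenarios, weights = presample([900, 1000, 1100, 1200],
                               [0.15, 0.45, 0.25, 0.15], n=100)
# the approximate problem treats these 100 scenarios as its "universe"
assert len(scenarios) == 100
assert abs(sum(weights) - 1.0) < 1e-12
```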

1.8. Regularized decomposition<br />

When solving practical problems, the number of Benders iterations can be quite large.<br />

In order to control the decomposition, in the hope of reducing the iteration count and<br />

the solution time, DECIS makes use of regularization. When employing regularization, an<br />

additional quadratic term is added to the objective of the master problem, representing<br />

the square of the distance between the best solution found so far (the incumbent solution)<br />

and the variable x. Using this term, DECIS controls the distance of solutions in different<br />

decomposition iterations.<br />

For enabling regularization you have to set the corresponding parameter. You also<br />

have to choose the value of the constant rho in the regularization term. The default is<br />

regularization disabled. Details of how DECIS carries out regularization are described in<br />

the DECIS User’s Manual.<br />

Regularization is only implemented when using MINOS as the optimizer for solving<br />

subproblems. Regularization has proven to be helpful for problems that need a large number<br />

of Benders iterations when solved without regularization. Problems that need only a small<br />

number of Benders iterations without regularization are not expected to improve much with<br />

regularization, and may need even more iterations with regularization than without.<br />
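The regularized master objective can be sketched numerically; the cost vector reuses the investment costs of the APL1P example, while the incumbent, the trial point, theta, and rho are hypothetical:<br />

```python
# Sketch of the regularized master objective: the usual master objective
# plus a quadratic proximal term penalizing distance from the incumbent
# solution x_inc. rho is the user-chosen regularization constant.

def regularized_objective(c, x, theta, x_inc, rho):
    """c.x + theta + (rho/2)*||x - x_inc||^2; theta approximates E[second-stage cost]."""
    linear = sum(ci * xi for ci, xi in zip(c, x))
    dist2 = sum((xi - xo) ** 2 for xi, xo in zip(x, x_inc))
    return linear + theta + 0.5 * rho * dist2

# at the incumbent the proximal term vanishes ...
base = regularized_objective([4.0, 2.5], [1000.0, 1000.0], 500.0,
                             [1000.0, 1000.0], rho=1.0)
# ... and moving away from it is penalized quadratically
moved = regularized_objective([4.0, 2.5], [1001.0, 1000.0], 500.0,
                              [1000.0, 1000.0], rho=1.0)
print(moved - base)  # 4.0 (cost coefficient) + 0.5 (proximal penalty) = 4.5
```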

2. GAMS/DECIS<br />

GAMS stands for General Algebraic Modeling System, and is one of the most widely used<br />

modeling languages. Using DECIS directly from GAMS spares you from worrying about<br />

all the details of the input formats. It makes the problem formulation much easier but still<br />

gives you almost all the flexibility of using DECIS directly.<br />

The link from GAMS to DECIS has been designed in such a way that almost no extensions<br />

to the GAMS modeling language were necessary for carrying out the formulation<br />

and solution of stochastic programs. In a next release of GAMS, however, additions to the<br />

language are planned that will allow you to model stochastic programs in an even more<br />

elegant way.<br />

2.1. Setting up a stochastic program using GAMS/DECIS<br />

The interface from GAMS to DECIS supports the formulation and solution of stochastic<br />

linear programs. DECIS solves them using two-stage decomposition. The GAMS/DECIS<br />



interface closely resembles the structure of the SMPS (stochastic mathematical programming<br />
interface) format discussed in the DECIS User’s Manual. The specification of a stochastic<br />

problem using GAMS/DECIS uses the following components:<br />

• the deterministic (core) model,<br />

• the specification of the decision stages,<br />

• the specification of the random parameters, and<br />

• setting DECIS to be the optimizer to be used.<br />

2.2. Starting with the deterministic model<br />

The core model is the deterministic linear program where all random parameters are replaced<br />

by their mean or by a particular realization. One could also see it as a GAMS model<br />
without any randomness. It could be a deterministic model that you have, which<br />

you intend to expand to a stochastic one. Using DECIS with GAMS allows you to easily<br />

extend a deterministic linear programming model to a stochastic one. For example, the following<br />

GAMS model represents a deterministic version of the electric power expansion<br />

planning illustrative example discussed in Infanger (1994).<br />

* APL1P test model<br />

* Dr. Gerd Infanger, November 1997<br />

* Deterministic Program<br />

set g generators / g1, g2/;<br />

set dl demand levels /h, m, l/;<br />

parameter alpha(g) availability / g1 0.68, g2 0.64 /;<br />

parameter ccmin(g) min capacity / g1 1000, g2 1000 /;<br />

parameter ccmax(g) max capacity / g1 10000, g2 10000 /;<br />

parameter c(g) investment / g1 4.0, g2 2.5 /;<br />

table f(g,dl) operating cost<br />

h m l<br />

g1 4.3 2.0 0.5<br />

g2 8.7 4.0 1.0;<br />

parameter d(dl) demand / h 1040, m 1040, l 1040 /;<br />

parameter us(dl) cost of unserved demand / h 10, m 10, l 10 /;<br />

free variable tcost total cost;<br />

positive variable x(g) capacity of generators;<br />

positive variable y(g, dl) operating level;<br />

positive variable s(dl) unserved demand;<br />

equations<br />

cost total cost<br />

cmin(g) minimum capacity<br />

cmax(g) maximum capacity<br />

omax(g) maximum operating level<br />

demand(dl) satisfy demand;<br />



cost .. tcost =e= sum(g, c(g)*x(g))<br />

+ sum(g, sum(dl, f(g,dl)*y(g,dl)))<br />

+ sum(dl,us(dl)*s(dl));<br />

cmin(g) .. x(g) =g= ccmin(g);<br />

cmax(g) .. x(g) =l= ccmax(g);<br />

omax(g) .. sum(dl, y(g,dl)) =l= alpha(g)*x(g);<br />

demand(dl) .. sum(g, y(g,dl)) + s(dl) =g= d(dl);<br />

model apl1p /all/;<br />

option lp=minos5;<br />

solve apl1p using lp minimizing tcost;<br />

scalar ccost capital cost;<br />

scalar ocost operating cost;<br />

ccost = sum(g, c(g) * x.l(g));<br />

ocost = tcost.l - ccost;<br />

display x.l, tcost.l, ccost, ocost, y.l, s.l;<br />

2.3. Setting the decision stages<br />

Next, in order to extend a deterministic model to a stochastic one, you must specify the decision<br />

stages. DECIS solves stochastic programs by two-stage decomposition. Accordingly,<br />

you must specify which variables belong to the first stage and which to the second stage, as<br />

well as which constraints are first-stage constraints and which are second-stage constraints.<br />

First-stage constraints involve only first-stage variables; second-stage constraints involve<br />
both first- and second-stage variables. You must specify the stage of a variable or a constraint<br />
by setting the stage suffix “.STAGE” to either one or two depending on whether it is a<br />
first-stage or second-stage variable or constraint. For example, expanding the illustrative model<br />

above by<br />

* setting decision stages<br />

x.stage(g) = 1;<br />

y.stage(g, dl) = 2;<br />

s.stage(dl) = 2;<br />

cmin.stage(g) = 1;<br />

cmax.stage(g) = 1;<br />

omax.stage(g) = 2;<br />

demand.stage(dl) = 2;<br />

would make x(g) first-stage variables, y(g, dl) and s(dl) second-stage variables, cmin(g)<br />
and cmax(g) first-stage constraints, and omax(g) and demand(dl) second-stage constraints.<br />
The objective is treated separately; you don’t need to set the stage suffix for the objective<br />
variable and objective equation.<br />

Note that using the .stage variable and equation suffixes makes the GAMS<br />
scaling facility (via the .scale suffix) unavailable. Stochastic models have to be<br />
scaled manually.<br />



2.4. Specifying the stochastic model<br />

DECIS supports any linear dependency model, i.e., the outcomes of an uncertain parameter<br />

in the linear program are a linear function of a number of independent random parameter<br />

outcomes. DECIS considers only discrete distributions; you must approximate any continuous<br />

distributions by discrete ones. The number of possible realizations of the discrete<br />

random parameters determines the accuracy of the approximation. A special case of a linear<br />

dependency model arises when you have only independent random parameters in your<br />

model. In this case the independent random parameters are mapped one to one into the<br />

random parameters of the stochastic program. We will present the independent case first<br />

and then expand to the case with linear dependency. To set up a linear dependency<br />
model, we present the formulation in GAMS by first defining independent random<br />
parameters and then defining the distributions of the uncertain parameters in your model.<br />

2.4.1. Specifying independent random parameters<br />

There are of course many different ways you can set up independent random parameters in<br />

GAMS. In the following we show one possible way that is generic and thus can be adapted<br />

for different models. The set-up uses the set stoch for labeling the outcome (“out”) and<br />
the probability (“pro”) of each independent random parameter. In the following we show<br />

how to define an independent random parameter, say, v1. The formulation uses the set<br />

omega1 as driving set, where the set contains one element for each possible realization the<br />

random parameter can assume. For example, the set omega1 has four elements according to<br />

a discrete distribution of four possible outcomes. The distribution of the random parameter<br />

is defined as the parameter v1, a two-dimensional array of outcomes “out” and corresponding<br />

probability “pro” for each of the possible realizations of the set omega1, “o11”, “o12”, “o13”,<br />

and “o14”. For example, the random parameter v1 has outcomes of −1.0, −0.9, −0.5, −0.1<br />

with probabilities 0.2, 0.3, 0.4, 0.1, respectively. Instead of using the table statement<br />
shown below, you could also input the different realizations and corresponding probabilities<br />
with assignment statements. Always make sure<br />
that the sum of the probabilities of each independent random parameter adds up to one.<br />

* defining independent stochastic parameters<br />

set stoch /out, pro /;<br />

set omega1 / o11, o12, o13, o14 /;<br />

table v1(stoch, omega1)<br />

o11 o12 o13 o14<br />

out -1.0 -0.9 -0.5 -0.1<br />

pro 0.2 0.3 0.4 0.1<br />

;<br />
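A quick sanity check of the v1 data above: the probabilities sum to one, and the mean outcome is consistent with the availability alpha of g1 in the deterministic core model (the outcomes are negated because the parameter enters the omax equation with a minus sign):<br />

```python
# Sanity check of the v1 distribution defined above.
v1_out = [-1.0, -0.9, -0.5, -0.1]
v1_pro = [0.2, 0.3, 0.4, 0.1]

assert abs(sum(v1_pro) - 1.0) < 1e-12  # probabilities must add up to one

mean_v1 = sum(o * p for o, p in zip(v1_out, v1_pro))
print(round(mean_v1, 2))  # -0.68: matches alpha("g1") = 0.68 up to the sign
```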

Random parameter v1 is the first out of five independent random parameters of the<br />

illustrative model APL1P, where the first two represent the independent availabilities of<br />

the generators g1 and g2 and the latter three represent the independent demands of the<br />

demand levels h, m, and l. We also represent the definitions of the remaining four independent<br />

random parameters. Note that random parameters v3, v4, and v5 are identically<br />

distributed.<br />



set omega2 / o21, o22, o23, o24, o25 /;<br />

table v2(stoch, omega2)<br />

o21 o22 o23 o24 o25<br />

out -1.0 -0.9 -0.7 -0.1 -0.0<br />

pro 0.1 0.2 0.5 0.1 0.1<br />

;<br />

set omega3 / o31, o32, o33, o34 /;<br />

table v3(stoch, omega3)<br />

o31 o32 o33 o34<br />

out 900 1000 1100 1200<br />

pro 0.15 0.45 0.25 0.15<br />

;<br />

set omega4 / o41, o42, o43, o44 /;<br />

table v4(stoch, omega4)<br />

o41 o42 o43 o44<br />

out 900 1000 1100 1200<br />

pro 0.15 0.45 0.25 0.15<br />

;<br />

set omega5 / o51, o52, o53, o54 /;<br />

table v5(stoch, omega5)<br />

o51 o52 o53 o54<br />

out 900 1000 1100 1200<br />

pro 0.15 0.45 0.25 0.15<br />

;<br />
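A quick check also explains the demand value d(dl) = 1040 used in the deterministic core model: it is exactly the mean of the demand distributions just defined:<br />

```python
# Expected demand implied by the distributions of v3, v4, and v5 above.
out = [900, 1000, 1100, 1200]
pro = [0.15, 0.45, 0.25, 0.15]

expected_demand = sum(o * p for o, p in zip(out, pro))
print(round(expected_demand, 6))  # 1040.0, the value of d(dl) in the core model
```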

2.4.2. Defining the distributions of the uncertain parameters in the model<br />

Having defined the independent stochastic parameters (you may copy the setup above and<br />

adapt it for your model), we next define the stochastic parameters in the GAMS model. The<br />

stochastic parameters of the model are defined by writing a file, the GAMS stochastic file,<br />

using the put facility of GAMS. The GAMS stochastic file resembles closely the stochastic<br />

file of the SMPS input format. The main difference is that we use the row, column, bounds,<br />

and right hand side names of the GAMS model and that we can write it in free format.<br />

Independent stochastic parameters<br />

First we describe the case where all stochastic parameters in the model are independent; see<br />

below the representation of the stochastic parameters for the illustrative example APL1P,<br />

which has five independent stochastic parameters.<br />

First define the GAMS stochastic file “MODEL.STG” (only the exact name in uppercase<br />

letters is supported) and set up GAMS to write to it. This is done by the first two<br />

statements. You may want to consult the GAMS manual for how to use put for writing<br />

files. The next statement “INDEP DISCRETE” indicates that a section of independent<br />

stochastic parameters follows. Then we write all possible outcomes and corresponding<br />
probabilities for each stochastic parameter, most conveniently by using a loop statement. Of course one<br />
could also write each line separately, but this would be less readable. Writing a “*” between<br />
the definitions of the independent stochastic parameters is purely cosmetic and<br />
can be omitted.<br />

* defining distributions (writing file MODEL.STG)<br />

file stg /MODEL.STG/;<br />

put stg;<br />

put "INDEP DISCRETE" /;<br />

loop(omega1,<br />

put "x g1 omax g1 ", v1("out", omega1), " period2 ", v1("pro", omega1) /;<br />

);<br />

put "*" /;<br />

loop(omega2,<br />

put "x g2 omax g2 ", v2("out", omega2), " period2 ", v2("pro", omega2) /;<br />

);<br />

put "*" /;<br />

loop(omega3,<br />

put "RHS demand h ", v3("out", omega3), " period2 ", v3("pro", omega3) /;<br />

);<br />

put "*" /;<br />

loop(omega4,<br />

put "RHS demand m ", v4("out", omega4), " period2 ", v4("pro", omega4) /;<br />

);<br />

put "*" /;<br />

loop(omega5,<br />

put "RHS demand l ", v5("out", omega5), " period2 ", v5("pro", omega5) /;<br />

);<br />

putclose stg;<br />

In the example APL1P the first stochastic parameter is the availability of generator<br />

g1. In the model the parameter appears as the coefficient of variable x(g1) in equation<br />

omax(g1). The definition using the put statement first gives the stochastic parameter as<br />

the intersection of variable x(g1) with equation omax(g1), but without having to type<br />

the parentheses, thus x g1 omax g1, then the outcome v1(”out”, omega1) and the probability<br />

v1(”pro”, omega1) separated by “period2”. The different elements of the statement must be<br />

separated by blanks. Since the outcomes and probabilities of the first stochastic parameters<br />

are driven by the set omega1 we loop over all elements of the set omega1. We continue and<br />

define all possible outcomes for each of the five independent stochastic parameters.<br />

In the example of independent stochastic parameters, the specification of the distribution<br />

of the stochastic parameters using the put facility creates the following file “MODEL.STG”,<br />

which then is processed by the GAMS/DECIS interface:<br />

INDEP DISCRETE<br />

x g1 omax g1 -1.00 period2 0.20<br />

x g1 omax g1 -0.90 period2 0.30<br />

x g1 omax g1 -0.50 period2 0.40<br />

x g1 omax g1 -0.10 period2 0.10<br />

*<br />

x g2 omax g2 -1.00 period2 0.10<br />

x g2 omax g2 -0.90 period2 0.20<br />

x g2 omax g2 -0.70 period2 0.50<br />



x g2 omax g2 -0.10 period2 0.10<br />

x g2 omax g2 0.00 period2 0.10<br />

*<br />

RHS demand h 900.00 period2 0.15<br />

RHS demand h 1000.00 period2 0.45<br />

RHS demand h 1100.00 period2 0.25<br />

RHS demand h 1200.00 period2 0.15<br />

*<br />

RHS demand m 900.00 period2 0.15<br />

RHS demand m 1000.00 period2 0.45<br />

RHS demand m 1100.00 period2 0.25<br />

RHS demand m 1200.00 period2 0.15<br />

*<br />

RHS demand l 900.00 period2 0.15<br />

RHS demand l 1000.00 period2 0.45<br />

RHS demand l 1100.00 period2 0.25<br />

RHS demand l 1200.00 period2 0.15<br />
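A tiny illustrative reader for the listing above, checking that the probabilities of each independent parameter sum to one. This is a sketch, not the actual GAMS/DECIS parser, and only two of the five parameters are included for brevity:<br />

```python
# Check an INDEP DISCRETE section: per parameter, probabilities sum to one.
stg = """INDEP DISCRETE
x g1 omax g1 -1.00 period2 0.20
x g1 omax g1 -0.90 period2 0.30
x g1 omax g1 -0.50 period2 0.40
x g1 omax g1 -0.10 period2 0.10
*
RHS demand h 900.00 period2 0.15
RHS demand h 1000.00 period2 0.45
RHS demand h 1100.00 period2 0.25
RHS demand h 1200.00 period2 0.15
"""

totals = {}
for line in stg.splitlines()[1:]:
    if not line or line.startswith("*"):
        continue                       # "*" merely separates parameters
    fields = line.split()
    key = " ".join(fields[:-3])        # column/row names identify the parameter
    totals[key] = totals.get(key, 0.0) + float(fields[-1])

for key, total in totals.items():
    assert abs(total - 1.0) < 1e-9, key
print(totals)
```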

For defining stochastic parameters in the right-hand side of the model use the keyword<br />
RHS as the column name, and the name of the equation whose right-hand side is<br />
uncertain; see for example the specification of the uncertain demands RHS demand h, RHS<br />
demand m, and RHS demand l. For defining uncertain bound parameters you would use<br />
the keywords UP, LO, or FX, the string bnd, and the name of the variable whose<br />
upper, lower, or fixed bound is uncertain.<br />

Note that all the keywords for the definitions are in capital letters, i.e., “INDEP DISCRETE”<br />
and “RHS”, as well as “UP”, “LO”, and “FX” (not shown in the example).<br />

It is noted that in GAMS equations, variables may appear on the right-hand side, e.g.<br />

“EQ.. X+1 =L= 2*Y”. When the coefficient 2 is a random variable, we need to be aware<br />

that GAMS will generate the following LP row X - 2*Y =L= -1. Suppose the probability<br />

distribution of this random variable is given by:<br />

set s scenario /pessimistic, average, optimistic/;<br />

parameter outcome(s) / pessimistic 1.5<br />

average 2.0<br />

optimistic 2.3 /;<br />

parameter prob(s) / pessimistic 0.2<br />

average 0.6<br />

optimistic 0.2 /;<br />

then the correct way of generating the entries in the stochastic file would be:<br />

loop(s,<br />

put "Y EQ ",(-outcome(s))," PERIOD2 ",prob(s)/;<br />

);<br />

Note the negation of the outcome parameter. Also note that expressions in a PUT statement<br />

have to be surrounded by parentheses. GAMS reports in the row listing section of the<br />

listing file how equations are generated. You are encouraged to inspect the row listing to see how<br />
coefficients appear in a generated LP row.<br />
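The sign rule can be verified numerically with the scenario data just defined:<br />

```python
# "X + 1 =L= 2*Y" becomes the LP row "X - 2*Y =L= -1", so the stochastic-file
# entry for Y must carry the NEGATED outcome, as in the put loop above.
outcomes = {"pessimistic": 1.5, "average": 2.0, "optimistic": 2.3}

# coefficient of Y in the generated LP row, per scenario
row_coeff = {s: -c for s, c in outcomes.items()}
print(row_coeff["average"])  # -2.0

# check: for any X, Y the two forms of the constraint are equivalent
X, Y, c = 3.0, 4.0, 2.0
assert (X + 1 <= c * Y) == (X - c * Y <= -1)
```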



Dependent stochastic parameters<br />

Next we describe the case of general linear dependency of the stochastic parameters in the<br />

model; see below the representation of the stochastic parameters for the illustrative example<br />

APL1PCA, which has three dependent stochastic demands driven by two independent stochastic<br />

random parameters. First we give the definition of the two independent stochastic<br />

parameters, which in the example happen to have two outcomes each.<br />

* defining independent stochastic parameters<br />

set stoch /out, pro/;<br />

set omega1 / o11, o12 /;<br />

table v1(stoch,omega1)<br />

o11 o12<br />

out 2.1 1.0<br />

pro 0.5 0.5 ;<br />

set omega2 / o21, o22 /;<br />

table v2(stoch, omega2)<br />

o21 o22<br />

out 2.0 1.0<br />

pro 0.2 0.8 ;<br />

We next define the parameters of the transition matrix from the independent stochastic<br />

parameters to the dependent stochastic parameters of the model. We do this by defining two<br />

parameter vectors, where the vector hm1 gives the coefficients of the independent random<br />

parameter v1 in each of the three demand levels and the vector hm2 gives the coefficients<br />

of the independent random parameter v2 in each of the three demand levels.<br />

parameter hm1(dl) / h 300., m 400., l 200. /;<br />

parameter hm2(dl) / h 100., m 150., l 300. /;<br />

Again first define the GAMS stochastic file “MODEL.STG” and set GAMS to write<br />

to it. The statement BLOCKS DISCRETE indicates that a section of linearly dependent<br />

stochastic parameters follows.<br />

* defining distributions (writing file MODEL.STG)<br />

file stg / MODEL.STG /;<br />

put stg;<br />

put "BLOCKS DISCRETE" /;<br />

scalar h1;<br />

loop(omega1,<br />

put "BL v1 period2 ", v1("pro", omega1)/;<br />

loop(dl,<br />

h1 = hm1(dl) * v1("out", omega1);<br />

put "RHS demand ", dl.tl:1, " ", h1/;<br />

);<br />

);<br />

loop(omega2,<br />

put "BL v2 period2 ", v2("pro", omega2) /;<br />

loop(dl,<br />

h1 = hm2(dl) * v2("out", omega2);<br />



put "RHS demand ", dl.tl:1, " ", h1/;<br />

);<br />

);<br />

putclose stg;<br />

Dependent stochastic parameters are defined as functions of independent random parameters.<br />

The keyword BL labels a possible realization of an independent random parameter.<br />

The name beside the BL keyword is used to distinguish between different outcomes of the<br />

same independent random parameter or a different one. While you could use any unique<br />

names for the independent random parameters, it appears natural to use the names you<br />

have already defined above, e.g., v1 and v2. For each realization of each independent random<br />

parameter define the outcome of every dependent random parameter (as a function<br />

of the independent one). If a dependent random parameter in the GAMS model depends<br />

on two or more different independent random parameters, the contributions of each of the<br />

independent parameters are added. We are therefore in the position to model any linear<br />

dependency model. (Note that the class of models that can be accommodated here is more<br />

general than linear. The functions through which an independent random variable contributes<br />
to the dependent random variables can be arbitrary functions of a single argument. As a general rule, any<br />

stochastic model that can be estimated by linear regression is supported by GAMS/DECIS.)<br />

Define each independent random parameter outcome and the probability associated with<br />

it. For example, the statement starting with BL v1 period2 indicates that an outcome of<br />

(independent random parameter) v1 is being defined. The name period2 indicates that it is<br />

a second-stage random parameter, and v1(”pro”, omega1) gives the probability associated<br />

with this outcome. Next list all random parameters dependent on the independent random<br />

parameter outcome just defined. Define the dependent stochastic parameter coefficients<br />

by the GAMS variable name and equation name, or “RHS” and variable name, together<br />

with the value of the parameter associated with this realization. In the example, we have<br />

three dependent demands. Using the scalar h1 to store intermediate results of<br />

the calculation, looping over the different demand levels dl we calculate h1 = hm1(dl) *<br />

v1(”out”, omega1) and define the dependent random parameters as the right-hand sides of<br />

equation demand(dl).<br />

When defining an independent random parameter outcome, if the block name is the same<br />

as the previous one (e.g., when BL v1 appears the second time), a different outcome of the<br />

same independent random parameter is being defined, while a different block name (e.g.,<br />

when BL v2 appears the first time) indicates that the first outcome of a different independent<br />

random parameter is being defined. You must ensure that the probabilities of the different<br />

outcomes of each of the independent random parameters add up to one. The loop over all<br />

elements of omega1 defines all realizations of the independent random parameter v1 and<br />

the loop over all elements of omega2 defines all realizations of the independent random<br />

parameter v2.<br />

Note for the first realization of an independent random parameter, you must define all<br />

dependent parameters and their realizations. The values entered serve as a base case. For<br />

any other realization of an independent random parameter you only need to define the<br />

dependent parameters that have different coefficients than have been defined in the base<br />

case. For those not defined in a particular realization, their values of the base case are<br />

automatically added.<br />



In the example of dependent stochastic parameters above, the specification of the distribution<br />

of the stochastic parameters using the put facility creates the following file “MODEL.STG”,<br />
which then is processed by the GAMS/DECIS interface:<br />

BLOCKS DISCRETE<br />

BL v1 period2 0.50<br />

RHS demand h 630.00<br />

RHS demand m 840.00<br />

RHS demand l 420.00<br />

BL v1 period2 0.50<br />

RHS demand h 300.00<br />

RHS demand m 400.00<br />

RHS demand l 200.00<br />

BL v2 period2 0.20<br />

RHS demand h 200.00<br />

RHS demand m 300.00<br />

RHS demand l 600.00<br />

BL v2 period2 0.80<br />

RHS demand h 100.00<br />

RHS demand m 150.00<br />

RHS demand l 300.00<br />
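The numbers in this listing can be reproduced from hm1, hm2 and the outcomes of v1 and v2 (an illustrative check in Python):<br />

```python
# Each dependent demand in a BL block is the transition coefficient (hm1 or
# hm2) times the outcome of the corresponding independent parameter.
hm1 = {"h": 300.0, "m": 400.0, "l": 200.0}
hm2 = {"h": 100.0, "m": 150.0, "l": 300.0}

v1_out = [2.1, 1.0]  # outcomes of v1 (probability 0.5 each)
v2_out = [2.0, 1.0]  # outcomes of v2 (probabilities 0.2 and 0.8)

block_v1 = [{dl: hm1[dl] * o for dl in hm1} for o in v1_out]
block_v2 = [{dl: hm2[dl] * o for dl in hm2} for o in v2_out]

def close(d, ref):
    return all(abs(d[k] - ref[k]) < 1e-9 for k in ref)

assert close(block_v1[0], {"h": 630.0, "m": 840.0, "l": 420.0})
assert close(block_v2[0], {"h": 200.0, "m": 300.0, "l": 600.0})
# For a joint outcome (w1, w2) the realized demand is the SUM of the two
# blocks' contributions, which is how DECIS combines dependent parameters.
```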

Again, all the keywords for the definitions are in capital letters, i.e., “BLOCKS DISCRETE”,<br />
“BL”, and “RHS”, as well as “UP”, “LO”, and “FX” (not shown in the example).<br />

Note that you can only define random parameter coefficients that are nonzero in your<br />

GAMS model. When setting up the deterministic core model put a nonzero entry as a<br />

placeholder for any coefficient that you wish to specify as a stochastic parameter. Specifying<br />

a random parameter at the location of a zero coefficient in the GAMS model causes DECIS<br />

to terminate with an error message.<br />

2.5. Setting DECIS as the optimizer

After finishing the stochastic definitions, you must set DECIS as the optimizer. This is done by issuing the following statements:

* setting DECIS as optimizer
* DECISM uses MINOS, DECISC uses CPLEX
option lp=decism;
apl1p.optfile = 1;

The statement option lp = decism sets DECIS (here the variant using MINOS) as the optimizer to be used for solving the stochastic problem. Note that if you do not use DECIS but any other linear programming optimizer instead, your GAMS model will still run and optimize the deterministic core model that you have specified. The statement apl1p.optfile = 1 forces GAMS to process the file DECIS.OPT, in which you may define any DECIS parameters.

2.5.1. Setting parameter options in the GAMS model

The iteration limit and the resource limit can be set directly in your GAMS model file. For example, the statements

option iterlim = 1000;
option reslim = 6000;

constrain the number of decomposition iterations to at most 1000 and the elapsed time for running DECIS to at most 6000 seconds (100 minutes).

2.5.2. Setting parameters in the DECIS options file

In the DECIS options file DECIS.OPT you can specify parameters controlling the solution algorithm and the output of the DECIS program. There is one record for each parameter you want to specify. Each record consists of the value of the parameter and the keyword identifying the parameter, separated by a blank character or a comma. You may specify parameters with the following keywords, in any order: "istrat", "nsamples", "nzrows", "iwrite", "ibug", "iscratch", "ireg", "rho", "tolben", and "tolw". Each keyword may be given in lower case or upper case in the format (A10). Since DECIS reads the records in free format you do not have to worry about the exact layout, but some systems require the text to be entered in quotes. Parameters that are not specified in the options file assume their default values.
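Because DECIS.OPT is a plain text file, it can also be generated from within the GAMS model with the put facility, in the same way the stochastic file is written. A minimal sketch (the file handle name and the parameter values are chosen for illustration only):

```
* Hedged sketch: writing DECIS.OPT from the GAMS model via the put facility
file decisopt / DECIS.OPT /;
put decisopt;
put '3 "ISTRAT"' /;
put '100 "NSAMPLES"' /;
put '100 "NZROWS"' /;
putclose decisopt;
```

Writing the options file this way keeps the solver settings under version control together with the model itself.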

istrat — Defines the solution strategy used. The default value is istrat = 3.

istrat = 1 Solves the expected value problem. All stochastic parameters are replaced by their expected values and the corresponding deterministic problem is solved using decomposition.

istrat = 2 Solves the stochastic problem using Monte Carlo importance sampling. You additionally have to specify the approximation function you wish to use and the sample size used for the estimation, see below.

istrat = 3 Refers to istrat = 1 plus istrat = 2. First solves the expected value problem using decomposition, then continues and solves the stochastic problem using importance sampling.

istrat = 4 Solves the stochastic universe problem by enumerating all possible combinations of realizations of the second-stage random parameters. It gives you the exact solution of the stochastic program. This strategy may be impractical, because there may be far too many possible realizations of the random parameters.

istrat = 5 Refers to istrat = 1 plus istrat = 4. First solves the expected value problem using decomposition, then continues and solves the stochastic universe problem by enumerating all possible combinations of realizations of second-stage random parameters.

istrat = 6 Solves the stochastic problem using crude Monte Carlo sampling. No variance reduction technique is applied. This strategy is especially useful if you want to test a solution obtained with the evaluation mode of DECIS. You have to specify the sample size used for the estimation. There is a maximum sample size DECIS can handle; however, this maximum does not apply when using crude Monte Carlo, so in this mode you can specify very large sample sizes, which is useful when evaluating a particular solution.

istrat = 7 Refers to istrat = 1 plus istrat = 6. First solves the expected value problem using decomposition, then continues and solves the stochastic problem using crude Monte Carlo sampling.

istrat = 8 Solves the stochastic problem using Monte Carlo pre-sampling. A Monte Carlo sample of all possible universe scenarios, drawn from the original probability distribution, is taken, and the corresponding "sample problem" is solved using decomposition.

istrat = 9 Refers to istrat = 1 plus istrat = 8. First solves the expected value problem using decomposition, then continues and solves the stochastic problem using Monte Carlo pre-sampling.

istrat = 10 Solves the stochastic problem using control variates. You also have to specify the approximation function and the sample size to be used for the estimation.

istrat = 11 Refers to istrat = 1 plus istrat = 10. First solves the expected value problem using decomposition, then continues and solves the stochastic problem using control variates.

nsamples — Sample size used for the estimation. It should be set greater than or equal to 30 in order to fulfill the large-sample assumption used in the derivation of the probabilistic bounds. The default value is nsamples = 100.

nzrows — Number of rows reserved for cuts in the master problem. It specifies the maximum number of different cuts DECIS maintains during the course of the decomposition algorithm. DECIS adds one cut during each iteration. If the iteration count exceeds nzrows, each new cut replaces a previously generated cut, namely the one with the maximum slack in the solution of the (pseudo) master. If nzrows is too small, DECIS may not be able to compute a solution and stops with an error message; if nzrows is too large, the solution time increases. As an approximate rule, set nzrows greater than or equal to the number of first-stage variables of the problem. The default value is nzrows = 100.

iwrite — Specifies whether the optimizer invoked for solving subproblems writes output or not. The default value is iwrite = 0.

iwrite = 0 No optimizer output is written.

iwrite = 1 Optimizer output is written to the file "MODEL.MO" if MINOS is used for solving subproblems, or to the file "MODEL.CPX" if CPLEX is used. The level of detail of the output can be controlled via the optimizer options. This setting is intended as a debugging device: with iwrite = 1, the solution output is written for every master problem and every subproblem solved. For large problems and large sample sizes the files "MODEL.MO" or "MODEL.CPX" may become very large, and the performance of DECIS may slow down.


ibug — Specifies the detail of debug output written by DECIS. The output is written to the file "MODEL.SCR", but can also be redirected to the screen by a separate parameter. The higher you set ibug, the more output DECIS writes. The parameter is intended to help in debugging a problem and should be set to ibug = 0 for normal operation. For large problems and large sample sizes the file "MODEL.SCR" may become very large, and the performance of DECIS may slow down. The default value is ibug = 0.

ibug = 0 DECIS does not write any debug output.

ibug = 1 In addition to the standard output, DECIS writes the solution of the master problem on each iteration of the Benders decomposition algorithm. Only nonzero variable values are written; a threshold tolerance for writing solution values can be specified, see below.

ibug = 2 In addition to the output of ibug = 1, DECIS writes the scenario index and the optimal objective value for each subproblem solved. In the case of solving the universe problem, DECIS also writes the probability of the corresponding scenario.

ibug = 3 In addition to the output of ibug = 2, DECIS writes information regarding importance sampling. In the case of the additive approximation function, it reports the expected value of each component Γ̄_i, the individual sample sizes N_i, and results from the estimation process. In the case of the multiplicative approximation function, it writes the expected value of the approximation function Γ̄ and results from the estimation process.

ibug = 4 In addition to the output of ibug = 3, DECIS writes the optimal dual variables of the cuts on each iteration of the master problem.

ibug = 5 In addition to the output of ibug = 4, DECIS writes the coefficients and the right-hand side of the cuts on each iteration of the decomposition algorithm. It also checks whether the computed cut is a support of the recourse function (or estimated recourse function) at the solution x̂^k at which it was generated. If the cut turns out not to be a support, DECIS writes out the value of the (estimated) cut and the value of the (estimated) second-stage cost at x̂^k.

ibug = 6 In addition to the output of ibug = 5, DECIS writes a dump of the master problem and the subproblem in MPS format after having decomposed the problem specified in the core file. The dump of the master problem is written to the file "MODEL.P01" and the dump of the subproblem to the file "MODEL.P02". DECIS also writes a dump of the subproblem after the first iteration to the file "MODEL.S02".

iscratch — Specifies the internal unit number to which the standard and debug output is written. The default value is iscratch = 17, which writes the standard and debug output to the file "MODEL.SCR". Setting iscratch = 6 redirects the output to the screen. Other internal unit numbers could be used, e.g., the internal unit number of the printer, but this is not recommended.


ireg — Specifies whether or not DECIS uses regularized decomposition for solving the problem. This option applies only if MINOS is used as the master and subproblem solver; it is ignored when using CPLEX, since regularized decomposition uses a nonlinear term in the objective. The default value is ireg = 0.

rho — Specifies the value of the ρ parameter of the regularization term in the objective function. You will have to experiment to find the value of rho that works best for the problem you want to solve; there is no rule of thumb as to what value should be chosen. In many cases regularized decomposition has turned out to reduce the iteration count when standard decomposition needs a large number of iterations. The default value is rho = 1000.

tolben — Specifies the tolerance for stopping the decomposition algorithm. This parameter is especially important for deterministic solution strategies, i.e., 1, 4, 5, 8, and 9. Choosing a very small value of tolben may significantly increase the number of iterations when solving the problem. The default value is 10⁻⁷.

tolw — Specifies the nonzero tolerance when writing debug solution output. DECIS writes only variables whose values are nonzero, i.e., whose absolute optimal value is greater than or equal to tolw. The default value is 10⁻⁹.

Example

In the following example the parameters istrat = 7, nsamples = 200, and nzrows = 50 are specified; all other parameters keep their default values. DECIS first solves the expected value problem and then the stochastic problem using crude Monte Carlo sampling with a sample size of nsamples = 200, reserving space for a maximum of nzrows = 50 cuts.

7 "ISTRAT"
200 "NSAMPLES"
50 "NZROWS"

2.5.3. Setting MINOS parameters in the MINOS specification file

When you use MINOS as the optimizer for solving the master and the subproblems, you must specify optimization parameters in the MINOS specification file "MINOS.SPC". Each record of the file corresponds to the specification of one parameter and consists of a keyword and the value of the parameter in free format. Records having a "*" as their first character are treated as comment lines and are not processed further. For a detailed description of these parameters, see the MINOS User's Guide (Murtagh and Saunders (1983) [5]). The following parameters should be specified with some consideration:

AIJ TOLERANCE — Specifies the nonzero tolerance for constraint matrix elements of the problem. Matrix elements aij for which |aij| is less than "AIJ TOLERANCE" are considered by MINOS as zero and are automatically eliminated from the problem. It is wise to specify "AIJ TOLERANCE 0.0".


SCALE — Specifies whether MINOS scales the problem ("SCALE YES") or not ("SCALE NO"). It is wise to specify "SCALE NO".

ROWS — Specifies the number of rows, so that MINOS can reserve the appropriate space in its data structures when reading the problem. "ROWS" should be specified as the number of constraints in the core problem or greater.

COLUMNS — Specifies the number of columns, so that MINOS can reserve the appropriate space in its data structures when reading the problem. "COLUMNS" should be specified as the number of variables in the core problem or greater.

ELEMENTS — Specifies the number of nonzero matrix coefficients, so that MINOS can reserve the appropriate space in its data structures when reading the problem. "ELEMENTS" should be specified as the number of nonzero matrix coefficients in the core problem or greater.

Example

The following example shows typical specifications for running DECIS with MINOS as the optimizer.

BEGIN SPECS
PRINT LEVEL               1
LOG FREQUENCY            10
SUMMARY FREQUENCY        10
MPS FILE                 12
ROWS                  20000
COLUMNS               50000
ELEMENTS             100000
ITERATIONS LIMIT      30000
*
FACTORIZATION FREQUENCY 100
AIJ TOLERANCE           0.0
*
SCALE NO
END OF SPECS

2.5.4. Setting CPLEX parameters using system environment variables

When you use CPLEX as the optimizer for solving the master and the subproblems, optimization parameters must be specified through system environment variables. You can specify the parameters "CPLEXLICDIR", "SCALELP", "NOPRESOLVE", "ITERLOG", "OPTIMALITYTOL", "FEASIBILITYTOL", and "DUALSIMPLEX".

CPLEXLICDIR — Contains the path to the CPLEX license directory. For example, on a Unix system with the CPLEX license directory in /usr/users/cplex/cplexlicdir you issue the command setenv CPLEXLICDIR /usr/users/cplex/cplexlicdir.

SCALELP — Directs CPLEX to scale the master and subproblems before solving them. If the environment variable is not set, no scaling is used. Setting the environment variable, e.g., by issuing the command setenv SCALELP yes, switches scaling on.


NOPRESOLVE — Allows you to switch off CPLEX's presolver. If the environment variable is not set, presolve is used. If the environment variable is set, e.g., by issuing setenv NOPRESOLVE yes, no presolve is used.

ITERLOG — Directs the iteration log of the CPLEX iterations to be printed to the file "MODEL.CPX". If you do not set the environment variable, no iteration log is printed. If the environment variable is set, e.g., by issuing setenv ITERLOG yes, the CPLEX iteration log is printed.

OPTIMALITYTOL — Specifies the optimality tolerance for the CPLEX optimizer. If you do not set the environment variable, the CPLEX default value is used. For example, issuing setenv OPTIMALITYTOL 1.0E-7 sets the CPLEX optimality tolerance to 0.0000001.

FEASIBILITYTOL — Specifies the feasibility tolerance for the CPLEX optimizer. If you do not set the environment variable, the CPLEX default value is used. For example, issuing setenv FEASIBILITYTOL 1.0E-7 sets the CPLEX feasibility tolerance to 0.0000001.

DUALSIMPLEX — Directs CPLEX to use the dual simplex algorithm. If the environment variable is not set, the primal simplex algorithm is used; this is the default and works well for most problems. If the environment variable is set, e.g., by issuing setenv DUALSIMPLEX yes, CPLEX uses the dual simplex algorithm for solving both master and subproblems.
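The setenv commands shown in this section are csh syntax. On a POSIX shell (sh, bash, ksh) the equivalent uses export; the following sketch sets a typical combination before starting GAMS (the license path is illustrative only):

```shell
# POSIX-shell equivalents of the manual's csh setenv examples
# (CPLEXLICDIR path is illustrative, not a real installation path)
export CPLEXLICDIR=/usr/users/cplex/cplexlicdir
export SCALELP=yes
export ITERLOG=yes
export OPTIMALITYTOL=1.0E-7
echo "$SCALELP"
```

The variables must be set in the same shell session (or login scripts) from which GAMS is launched, so that the DECIS process inherits them.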

2.6. GAMS/DECIS output

After successfully solving a problem, DECIS returns the objective, the optimal primal and optimal dual solution, the status of variables (basic or not), and the status of equations (binding or not) to GAMS. For first-stage variables and equations you have all information available in GAMS as with any other solver, except that instead of the optimal values of the deterministic core problem you obtain the optimal values of the stochastic problem. For second-stage variables and constraints, however, the expected values of the optimal primal and optimal dual solution are reported. This saves space and is useful for the calculation of risk measures. The optimal primal and dual solutions in the individual scenarios of the stochastic program are not reported back to GAMS. In a future release of the GAMS/DECIS interface, the GAMS language is planned to be extended to handle the scenario-wise second-stage optimal primal and dual values, at least for selected variables and equations.

While running, DECIS outputs important information about the progress of the execution to your computer screen. After successfully solving a problem, DECIS also writes its optimal solution to the solution output file "MODEL.SOL". The debug output file "MODEL.SCR" contains important information about the optimization run, and the optimizer output files "MODEL.MO" (when using DECIS with MINOS) or "MODEL.CPX" (when using DECIS with CPLEX) contain solution output from the optimizer used. The DECIS User's Guide gives a detailed discussion of how to interpret the screen output, the solution report, and the information in the output files.


2.6.1. The screen output

The output to the screen allows you to observe the progress of a DECIS run. After the program logo and the copyright statement, four columns of output are written to the screen as the program proceeds. The first column (from left to right) shows the iteration count, the second column the lower bound (the optimal objective of the master problem), the third column the best upper bound (exact value or estimate of the total expected cost of the best solution found so far), and the fourth column the current upper bound (exact value or estimate of the total expected cost of the current solution). After successful completion, DECIS quits with "Normal Exit"; otherwise, if an error has been encountered, the program stops with the message "Error Exit".

Example

When solving the illustrative example APL1P using strategy 5, we obtain the following report on the screen:

T H E   D E C I S   S Y S T E M
Copyright (c) 1989 -- 1999 by Dr. Gerd Infanger
All rights reserved.

 iter        lower    best upper  current upper
    0  -0.9935E+06
    1  -0.4626E+06    0.2590E+05    0.2590E+05
    2   0.2111E+05    0.2590E+05    0.5487E+06
    3   0.2170E+05    0.2590E+05    0.2697E+05
    4   0.2368E+05    0.2384E+05    0.2384E+05
    5   0.2370E+05    0.2384E+05    0.2401E+05
    6   0.2370E+05    0.2370E+05    0.2370E+05
 iter        lower    best upper  current upper
    6   0.2370E+05
    7   0.2403E+05    0.2470E+05    0.2470E+05
    8   0.2433E+05    0.2470E+05    0.2694E+05
    9   0.2441E+05    0.2470E+05    0.2602E+05
   10   0.2453E+05    0.2470E+05    0.2499E+05
   11   0.2455E+05    0.2470E+05    0.2483E+05
   12   0.2461E+05    0.2467E+05    0.2467E+05
   13   0.2461E+05    0.2467E+05    0.2469E+05
   14   0.2461E+05    0.2465E+05    0.2465E+05
   15   0.2463E+05    0.2465E+05    0.2467E+05
   16   0.2463E+05    0.2465E+05    0.2465E+05
   17   0.2464E+05    0.2465E+05    0.2465E+05
   18   0.2464E+05    0.2464E+05    0.2464E+05
   19   0.2464E+05    0.2464E+05    0.2464E+05
   20   0.2464E+05    0.2464E+05    0.2464E+05
   21   0.2464E+05    0.2464E+05    0.2464E+05
   22   0.2464E+05    0.2464E+05    0.2464E+05
Normal Exit


2.6.2. The solution output file

The solution output file, "MODEL.SOL", contains the solution report from the DECIS run. The file contains the best objective function value found, the corresponding values of the first-stage variables, the corresponding optimal second-stage cost, and a lower and an upper bound on the optimal objective of the problem. In addition, the number of universe scenarios and the settings for the stopping tolerance are reported. When using a deterministic strategy for solving the problem, exact values are reported. When using Monte Carlo sampling, estimated values, their variances, and the sample size used for the estimation are reported; instead of exact upper and lower bounds, probabilistic upper and lower bounds and a 95% confidence interval, within which the true optimal solution lies with 95% confidence, are reported. A detailed description of the solution output file can be found in the DECIS User's Guide.

2.6.3. The debug output file

The debug output file contains the standard output of a DECIS run, with important information about the problem, its parameters, and its solution. It also contains any error messages that occur during a run. If DECIS does not complete a run successfully, the cause of the trouble can usually be located using the information in the debug output file. If the standard output does not give enough information, you can set the debug parameter ibug in the parameter input file to a higher value to obtain additional debug output. A detailed description of the debug output file can be found in the DECIS User's Guide.

2.6.4. The optimizer output files

The optimizer output file "MODEL.MO" contains all the output from MINOS when called as a subroutine by DECIS. You can specify the degree of detail to be output by setting the appropriate "PRINT LEVEL" in the MINOS specification file. The optimizer output file "MODEL.CPX" reports messages and the iteration log (if switched on using the environment variable) from CPLEX when solving master and subproblems.


A. GAMS/DECIS Illustrative Examples

A.1. Example APL1P

* APL1P test model
* Dr. Gerd Infanger, November 1997
set g generators /g1, g2/;
set dl demand levels /h, m, l/;
parameter alpha(g) availability / g1 0.68, g2 0.64 /;
parameter ccmin(g) min capacity / g1 1000, g2 1000 /;
parameter ccmax(g) max capacity / g1 10000, g2 10000 /;
parameter c(g) investment / g1 4.0, g2 2.5 /;
table f(g,dl) operating cost
        h    m    l
   g1  4.3  2.0  0.5
   g2  8.7  4.0  1.0;
parameter d(dl) demand / h 1040, m 1040, l 1040 /;
parameter us(dl) cost of unserved demand / h 10, m 10, l 10 /;
free variable tcost total cost;
positive variable x(g) capacity of generators;
positive variable y(g, dl) operating level;
positive variable s(dl) unserved demand;
equations
   cost       total cost
   cmin(g)    minimum capacity
   cmax(g)    maximum capacity
   omax(g)    maximum operating level
   demand(dl) satisfy demand;
cost ..       tcost =e= sum(g, c(g)*x(g))
                      + sum(g, sum(dl, f(g,dl)*y(g,dl)))
                      + sum(dl, us(dl)*s(dl));
cmin(g) ..    x(g) =g= ccmin(g);
cmax(g) ..    x(g) =l= ccmax(g);
omax(g) ..    sum(dl, y(g,dl)) =l= alpha(g)*x(g);
demand(dl) .. sum(g, y(g,dl)) + s(dl) =g= d(dl);
model apl1p /all/;
* setting decision stages
x.stage(g) = 1;
y.stage(g, dl) = 2;
s.stage(dl) = 2;
cmin.stage(g) = 1;
cmax.stage(g) = 1;
omax.stage(g) = 2;
demand.stage(dl) = 2;
* defining independent stochastic parameters


set stoch / out, pro /;
set omega1 / o11, o12, o13, o14 /;
table v1(stoch, omega1)
         o11   o12   o13   o14
   out  -1.0  -0.9  -0.5  -0.1
   pro   0.2   0.3   0.4   0.1
;
set omega2 / o21, o22, o23, o24, o25 /;
table v2(stoch, omega2)
         o21   o22   o23   o24   o25
   out  -1.0  -0.9  -0.7  -0.1  -0.0
   pro   0.1   0.2   0.5   0.1   0.1
;
set omega3 / o31, o32, o33, o34 /;
table v3(stoch, omega3)
         o31   o32   o33   o34
   out   900  1000  1100  1200
   pro  0.15  0.45  0.25  0.15
;
set omega4 / o41, o42, o43, o44 /;
table v4(stoch, omega4)
         o41   o42   o43   o44
   out   900  1000  1100  1200
   pro  0.15  0.45  0.25  0.15
;
set omega5 / o51, o52, o53, o54 /;
table v5(stoch, omega5)
         o51   o52   o53   o54
   out   900  1000  1100  1200
   pro  0.15  0.45  0.25  0.15
;

* defining distributions
file stg /MODEL.STG/;
put stg;
put "INDEP DISCRETE" /;
loop(omega1,
   put "x g1 omax g1 ", v1("out", omega1), " period2 ", v1("pro", omega1) /;
);
put "*" /;
loop(omega2,
   put "x g2 omax g2 ", v2("out", omega2), " period2 ", v2("pro", omega2) /;
);
put "*" /;
loop(omega3,
   put "RHS demand h ", v3("out", omega3), " period2 ", v3("pro", omega3) /;
);
put "*" /;


loop(omega4,
   put "RHS demand m ", v4("out", omega4), " period2 ", v4("pro", omega4) /;
);
put "*" /;
loop(omega5,
   put "RHS demand l ", v5("out", omega5), " period2 ", v5("pro", omega5) /;
);
putclose stg;

* setting DECIS as optimizer
* DECISM uses MINOS, DECISC uses CPLEX
option lp=decism;
apl1p.optfile = 1;
solve apl1p using lp minimizing tcost;
scalar ccost capital cost;
scalar ocost operating cost;
ccost = sum(g, c(g) * x.l(g));
ocost = tcost.l - ccost;
display x.l, tcost.l, ccost, ocost, y.l, s.l;


A.2. Example APL1PCA

* APL1PCA test model
* Dr. Gerd Infanger, November 1997
set g generators /g1, g2/;
set dl demand levels /h, m, l/;
parameter alpha(g) availability / g1 0.68, g2 0.64 /;
parameter ccmin(g) min capacity / g1 1000, g2 1000 /;
parameter ccmax(g) max capacity / g1 10000, g2 10000 /;
parameter c(g) investment / g1 4.0, g2 2.5 /;
table f(g,dl) operating cost
        h    m    l
   g1  4.3  2.0  0.5
   g2  8.7  4.0  1.0;
parameter d(dl) demand / h 1040, m 1040, l 1040 /;
parameter us(dl) cost of unserved demand / h 10, m 10, l 10 /;
free variable tcost total cost;
positive variable x(g) capacity of generators;
positive variable y(g, dl) operating level;
positive variable s(dl) unserved demand;
equations
   cost       total cost
   cmin(g)    minimum capacity
   cmax(g)    maximum capacity
   omax(g)    maximum operating level
   demand(dl) satisfy demand;
cost ..       tcost =e= sum(g, c(g)*x(g))
                      + sum(g, sum(dl, f(g,dl)*y(g,dl)))
                      + sum(dl, us(dl)*s(dl));
cmin(g) ..    x(g) =g= ccmin(g);
cmax(g) ..    x(g) =l= ccmax(g);
omax(g) ..    sum(dl, y(g,dl)) =l= alpha(g)*x(g);
demand(dl) .. sum(g, y(g,dl)) + s(dl) =g= d(dl);
model apl1p /all/;
* setting decision stages
x.stage(g) = 1;
y.stage(g, dl) = 2;
s.stage(dl) = 2;
cmin.stage(g) = 1;
cmax.stage(g) = 1;
omax.stage(g) = 2;
demand.stage(dl) = 2;
* defining independent stochastic parameters
set stoch /out, pro/;


set omega1 / o11, o12 /;
table v1(stoch, omega1)
        o11  o12
   out  2.1  1.0
   pro  0.5  0.5 ;
set omega2 / o21, o22 /;
table v2(stoch, omega2)
        o21  o22
   out  2.0  1.0
   pro  0.2  0.8 ;
parameter hm1(dl) / h 300., m 400., l 200. /;
parameter hm2(dl) / h 100., m 150., l 300. /;
* defining distributions (writing file MODEL.STG)
file stg / MODEL.STG /;
put stg;
put "BLOCKS DISCRETE" /;
scalar h1;
loop(omega1,
   put "BL v1 period2 ", v1("pro", omega1) /;
   loop(dl,
      h1 = hm1(dl) * v1("out", omega1);
      put "RHS demand ", dl.tl:1, " ", h1 /;
   );
);
loop(omega2,
   put "BL v2 period2 ", v2("pro", omega2) /;
   loop(dl,
      h1 = hm2(dl) * v2("out", omega2);
      put "RHS demand ", dl.tl:1, " ", h1 /;
   );
);
putclose stg;
* setting DECIS as optimizer
* DECISM uses MINOS, DECISC uses CPLEX
option lp=decism;
apl1p.optfile = 1;
solve apl1p using lp minimizing tcost;
scalar ccost capital cost;
scalar ocost operating cost;
ccost = sum(g, c(g) * x.l(g));
ocost = tcost.l - ccost;
display x.l, tcost.l, ccost, ocost, y.l, s.l;


B. Error messages<br />

1. ERROR in MODEL.STO: kwd, word1, word2 was not matched in first realization of<br />

block<br />

The specification of the stochastic parameters is incorrect. The stochastic parameter<br />

has not been specified in the specification of the first outcome of the block. When<br />

specifying the first outcome of a block always include all stochastic parameters corresponding<br />

to the block.<br />

2. Option word1 word2 not supported<br />

You specified an input distribution in the stochastic file that is not supported. Check<br />

the DECIS manual for supported distributions.<br />

3. Error in time file<br />

The time file is not correct. Check the file MODEL.TIM. Check the DECIS manual<br />

for the form of the time file.<br />

4. ERROR in MODEL.STO: stochastic RHS for objective, row name2<br />

The specification in the stochastic file is incorrect. You attempted to specify a stochastic<br />

right-hand side for the objective row (row name2). Check file MODEL.STO.<br />

5. ERROR in MODEL.STO: stochastic RHS in master, row name2<br />

The specification in the stochastic file is incorrect. You attempted to specify a stochastic<br />

right-hand side for the master problem (row name2). Check file MODEL.STO.<br />

6. ERROR in MODEL.STO: col not found, name1<br />

The specification in the stochastic file is incorrect. The entry in the stochastic file,<br />

name1, is not found in the core file. Check file MODEL.STO.<br />

7. ERROR in MODEL.STO: invalid col/row combination, (name1/name2)<br />

The stochastic file (MODEL.STO) contains an incorrect specification.<br />

8. ERROR in MODEL.STO: no nonzero found (in B or D matrix) for col/row (name1,<br />

name2)<br />

There is no nonzero entry for the combination of name1 (col) and name2 (row) in the<br />

B-matrix or in the D-matrix. Check the corresponding entry in the stochastic file<br />

(MODEL.STO). You may want to include a nonzero coefficient for (col/row) in the<br />

core file (MODEL.COR).<br />

9. ERROR in MODEL.STO: col not found, name2<br />

The column name you specified in the stochastic file (MODEL.STO) does not exist<br />

in the core file (MODEL.COR). Check the file MODEL.STO.<br />

10. ERROR in MODEL.STO: stochastic bound in master, col name2<br />

You specified a stochastic bound on first-stage variable name2. Check file MODEL.STO.<br />

11. ERROR in MODEL.STO: invalid bound type (kwd) for col name2<br />

The bound type, kwd, you specified is invalid. Check file MODEL.STO.<br />



12. ERROR in MODEL.STO: row not found, name2<br />

The specification in the stochastic file is incorrect. The row name, name2, does not<br />

exist in the core file. Check file MODEL.STO.<br />

13. ERROR: problem infeasible<br />

The problem solved (master- or subproblem) turned out to be infeasible. If a subproblem<br />

is infeasible, you did not specify the problem as having the property of “complete<br />

recourse”. Complete recourse means that whatever first-stage decision is passed to<br />

a subproblem, the subproblem will have a feasible solution. It is the best way to<br />

specify a problem, especially if you use a sampling-based solution strategy. If DECIS<br />

encounters an infeasible subproblem, it adds a feasibility cut and continues the execution.<br />

If DECIS encounters an infeasible master problem, the problem you specified is<br />

infeasible, and DECIS terminates. Check the problem formulation.<br />

14. ERROR: problem unbounded<br />

The problem solved (master- or subproblem) turned out to be unbounded. Check the<br />

problem formulation.<br />

15. ERROR: error code: inform<br />

The solver returned with an error code from solving the problem (master- or subproblem).<br />

Consult the users’ manual of the solver (MINOS or CPLEX) for the meaning<br />

of the error code, inform. Check the problem formulation.<br />

16. ERROR: while reading SPECS file<br />

The MINOS specification file (MINOS.SPC) contains an error. Check the specification<br />

file. Consult the MINOS user’s manual.<br />

17. ERROR: reading mps file, mpsfile<br />

The core file mpsfile (i.e., MODEL.COR) is incorrect. Consult the DECIS manual for<br />

instructions regarding the MPS format.<br />

18. ERROR: row 1 of problem ip is not a free row<br />

The first row of the problem is not a free row (i.e., is not the objective row). In order<br />

to make the first row a free row, set the row type to be ’N’. Consult the DECIS manual<br />

for the MPS specification of the problem.<br />

19. ERROR: name not found = nam1, nam2<br />

There is an error in the core file (MODEL.COR). The problem cannot be decomposed<br />

correctly. Check the core file and check the model formulation.<br />

20. ERROR: matrix not in staircase form<br />

The constraint matrix of the problem as specified in core file (MODEL.COR) is not<br />

in staircase form. The first-stage rows and columns and the second-stage rows and<br />

columns are mixed within each other. Check the DECIS manual as to how to specify<br />

the core file. Check the core file and change the order of rows and columns.<br />



References<br />

[1] Brooke, A., Kendrick, D. and Meeraus, A. (1988): GAMS: A User’s Guide, The Scientific Press, South<br />

San Francisco, California.<br />

[2] CPLEX Optimization, Inc. (1989–1997): <strong>Using</strong> the CPLEX Callable Library, 930 Tahoe Blvd. Bldg.<br />

802, Suite 279, Incline Village, NV 89451, USA.<br />

[3] Infanger, G. (1994): Planning Under Uncertainty – Solving Large-Scale Stochastic Linear Programs,<br />

The Scientific Press Series, Boyd and Fraser.<br />

[4] Infanger, G. (1997): DECIS User’s Guide, Dr. Gerd Infanger, 1590 Escondido Way, Belmont, CA<br />

94002.<br />

[5] Murtagh, B.A. and Saunders, M.A. (1983): MINOS User’s Guide, SOL 83-20, Department of Operations<br />

Research, Stanford University, Stanford CA 94305.<br />



DECIS License and Warranty<br />

The software, which accompanies this license (the “Software”) is the property of Gerd Infanger and is protected by<br />

copyright law. While Gerd Infanger continues to own the Software, you will have certain rights to use the Software<br />

after your acceptance of this license. Except as may be modified by a license addendum, which accompanies this<br />

license, your rights and obligations with respect to the use of this Software are as follows:<br />

• You may<br />

1. Use one copy of the Software on a single computer,<br />

2. Make one copy of the Software for archival purposes, or copy the software onto the hard disk of your<br />

computer and retain the original for archival purposes,<br />

3. Use the Software on a network, provided that you have a licensed copy of the Software for each computer<br />

that can access the Software over that network,<br />

4. After a written notice to Gerd Infanger, transfer the Software on a permanent basis to another person<br />

or entity, provided that you retain no copies of the Software and the transferee agrees to the terms of<br />

this agreement.<br />

• You may not<br />

1. Copy the documentation, which accompanies the Software,<br />

2. Sublicense, rent or lease any portion of the Software,<br />

3. Reverse engineer, de-compile, disassemble, modify, translate, make any attempt to discover the source<br />

code of the Software, or create derivative works from the Software.<br />

Limited Warranty:<br />

Gerd Infanger warrants that the media on which the Software is distributed will be free from defects for a period<br />

of thirty (30) days from the date of delivery of the Software to you. Your sole remedy in the event of a breach of<br />

the warranty will be that Gerd Infanger will, at his option, replace any defective media returned to Gerd Infanger<br />

within the warranty period or refund the money you paid for the Software. Gerd Infanger does not warrant that the<br />

Software will meet your requirements or that operation of the Software will be uninterrupted or that the Software will<br />

be error-free.<br />

THE ABOVE WARRANTY IS EXCLUSIVE AND IN LIEU OF ALL OTHER WARRANTIES, WHETHER<br />

EXPRESS OR IMPLIED, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR<br />

A PARTICULAR PURPOSE AND NONINFRINGEMENT.<br />

Disclaimer of Damages:<br />

REGARDLESS OF WHETHER ANY REMEDY SET FORTH HEREIN FAILS OF ITS ESSENTIAL PURPOSE,<br />

IN NO EVENT WILL GERD INFANGER BE LIABLE TO YOU FOR ANY SPECIAL, CONSEQUENTIAL, INDIRECT<br />

OR SIMILAR DAMAGES, INCLUDING ANY LOST PROFITS OR LOST DATA ARISING OUT OF THE<br />

USE OR INABILITY TO USE THE SOFTWARE EVEN IF GERD INFANGER HAS BEEN ADVISED OF THE<br />

POSSIBILITY OF SUCH DAMAGES.<br />

IN NO CASE SHALL GERD INFANGER’S LIABILITY EXCEED THE PURCHASE PRICE FOR THE SOFTWARE.<br />

The disclaimers and limitations set forth above will apply regardless of whether you accept the Software.<br />

General:<br />

This Agreement will be governed by the laws of the State of California. This Agreement may only be modified by a<br />

license addendum, which accompanies this license or by a written document, which has been signed by both you and<br />

Gerd Infanger. Should you have any questions concerning this Agreement, or if you desire to contact Gerd Infanger<br />

for any reason, please write:<br />

Gerd Infanger, 1590 Escondido Way, Belmont, CA 94002, USA.<br />



GAMS/DICOPT: A Discrete Continuous<br />

Optimization Package<br />

IGNACIO E. GROSSMANN ∗<br />

JAGADISAN VISWANATHAN ∗ ALDO VECCHIETTI ∗<br />

RAMESH RAMAN † ERWIN KALVELAGEN †<br />

March 22, 2002<br />

1 Introduction<br />

DICOPT is a program for solving mixed-integer nonlinear programming (MINLP)<br />

problems that involve linear binary or integer variables and linear and nonlinear<br />

continuous variables. While the modeling and solution of these MINLP optimization<br />

problems has not yet reached the same stage of maturity and reliability as<br />

linear, integer or non-linear programming modeling, these problems have a rich<br />

area of applications. For example, they often arise in engineering design, management<br />

sciences, and finance. DICOPT (DIscrete and Continuous OPTimizer)<br />

was developed by J. Viswanathan and Ignacio E. Grossmann at the Engineering<br />

Design Research Center (EDRC) at Carnegie Mellon University. The program<br />

is based on the extensions of the outer-approximation algorithm for the equality<br />

relaxation strategy. The MINLP algorithm inside DICOPT solves a series<br />

of NLP and MIP sub-problems. These sub-problems can be solved using any<br />

NLP (Nonlinear Programming) or MIP (Mixed-Integer Programming) solver<br />

that runs under GAMS.<br />

Although the algorithm has provisions to handle non-convexities, it does not<br />

necessarily obtain the global optimum.<br />

The GAMS/DICOPT system has been designed with two main goals in<br />

mind:<br />

• to build on existing modeling concepts and to introduce a minimum of<br />

extensions to the existing modeling language and provide upward compatibility<br />

to ensure easy transition from existing modeling applications to<br />

nonlinear mixed-integer formulations<br />

∗ Engineering Research Design Center, Carnegie Mellon University, Pittsburgh, PA<br />

† GAMS Development Corporation, Washington D.C.<br />



• to use existing optimizers to solve the DICOPT sub-problems. This allows<br />

one to match the best algorithms to the problem at hand and guarantees<br />

that any new development and enhancements in the NLP and MIP solvers<br />

become automatically and immediately available to DICOPT.<br />

2 Requirements<br />

In order to use DICOPT you will need to have access to a licensed GAMS BASE<br />

system as well as at least one licensed MIP solver and one licensed NLP solver.<br />

For difficult models it is advised to have access to multiple solvers. Free student/demo<br />

systems are available from GAMS Development Corporation. These<br />

systems are restricted in the size of models that can be solved.<br />

3 How to run a model with GAMS/DICOPT<br />

DICOPT is capable of solving only MINLP models. If you did not specify<br />

DICOPT as the default solver, then you can use the following statement in<br />

your GAMS model:<br />

option minlp = dicopt;<br />

It should appear before the solve statement. DICOPT automatically uses<br />

the default MIP and NLP solver to solve its sub-problems. One can override<br />

this with the GAMS statements like:<br />

option nlp = conopt2; { or any other nlp solver }<br />

option mip = cplex; { or any other mip solver }<br />

These options can also be specified on the command line, like:<br />

> gams mymodel minlp=dicopt nlp=conopt2 mip=cplex<br />

In the IDE (Integrated Development Environment) the command line option can<br />

be specified in the edit line in the right upper corner of the main window.<br />

Possible NLP solvers include minos5, minos, conopt, conopt2, and snopt.<br />

Possible MIP solvers are cplex, osl, osl2, osl3, xpress, and xa.<br />

With an option file it is even possible to use alternate solvers in different<br />

cycles. Section 8 explains this in detail.<br />

4 Overview of DICOPT<br />

DICOPT solves models of the form:<br />



MINLP min or max f(x, y)<br />

subject to g(x, y) ∼ b<br />

ℓx ≤ x ≤ ux<br />

y ∈ ⌈ℓy⌉, ..., ⌊uy⌋<br />

where x are the continuous variables and y are the discrete variables. The<br />

symbol ∼ is used to denote a vector of relational operators {≤, =, ≥}. The<br />

constraints can be either linear or non-linear. Bounds ℓ and u on the variables<br />

are handled directly. ⌈x⌉ indicates the smallest integer greater than or equal<br />

to x. Similarly, ⌊x⌋ indicates the largest integer less than or equal to x. The<br />

discrete variables can be either integer variables or binary variables.<br />
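In GAMS terms, a model of this shape simply mixes continuous and discrete variables. The following self-contained sketch is illustrative only (all identifiers and data are invented, not taken from this manual):<br />

```gams
* illustrative MINLP: nonlinear in x, linear in the integer variable y
variable z;
positive variable x1, x2;
integer variable y;
x1.up = 10;  x2.up = 10;
y.lo  = 0;   y.up  = 5;

equations obj, con;
obj..  z =e= sqr(x1 - 3) + x2 + 2*y;
con..  x1 + x2 + y =g= 4;

model tiny /all/;
option minlp = dicopt;
solve tiny using minlp minimizing z;
```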

5 The algorithm<br />

The algorithm in DICOPT is based on three key ideas:<br />

• Outer Approximation<br />

• Equality Relaxation<br />

• Augmented Penalty<br />

Outer Approximation refers to the fact that the surface described by a convex<br />

function lies above the tangent hyper-plane at any interior point of the<br />

surface. (In 1-dimension, the analogous geometrical result is that the tangent<br />

to a convex function at an interior point lies below the curve). In the algorithm<br />

outer-approximations are attained by generating linearizations at each iteration<br />

and accumulating them in order to provide successively improved linear<br />

approximations of nonlinear convex functions that underestimate the objective<br />

function and overestimate the feasible region.<br />

Equality Relaxation is based on the following result from non-linear programming.<br />

Suppose the MINLP problem is formulated in the form:<br />

minimize or maximize f(x) + cᵀy<br />

subject to G(x) + Hy ∼ b<br />

ℓ ≤ x ≤ u<br />

y ∈ {0, 1}<br />

i.e. the discrete variables are binary variables and they appear linearly in the<br />

model.<br />

If we reorder the equations into equality and inequality equations, and convert<br />

the problem into a minimization problem, we can write:<br />



minimize cᵀy + f(x)<br />

subject to Ay + h(x) = 0<br />

By + g(x) ≤ 0<br />

ℓ ≤ x ≤ u<br />

y ∈ {0, 1}<br />

Let y (0) be any fixed binary vector and let x (0) be the solution of the corresponding<br />

NLP subproblem:<br />

minimize cᵀy (0) + f(x)<br />

subject to Ay (0) + h(x) = 0<br />

By (0) + g(x) ≤ 0<br />

ℓ ≤ x ≤ u<br />

Further let<br />

T (0) = diag(ti,i), ti,i = sign(λi)<br />

where λi is the Lagrange multiplier of the i-th equality constraint.<br />

If f is pseudo-convex, h is quasi-convex, and g is quasi-convex, then x (0) is<br />

also the solution of the following NLP:<br />

minimize cᵀy (0) + f(x)<br />

subject to T (0) (Ay (0) + h(x)) ≤ 0<br />

By (0) + g(x) ≤ 0<br />

ℓ ≤ x ≤ u<br />

In colloquial terms, under certain assumptions concerning the convexity of<br />

the nonlinear functions, an equality constraint can be “relaxed” to be an inequality<br />

constraint. This property is used in the MIP master problem to accumulate<br />

linear approximations.<br />

Augmented Penalty refers to the introduction of (non-negative) slack variables<br />

on the right hand sides of the just described inequality constraints and the<br />

modification of the objective function when assumptions concerning convexity<br />

do not hold.<br />

The algorithm underlying DICOPT starts by solving the NLP in which the<br />

0-1 conditions on the binary variables are relaxed. If the solution to this problem<br />

yields an integer solution the search stops. Otherwise, it continues with<br />

an alternating sequence of nonlinear programs (NLP) called subproblems and<br />



mixed-integer linear programs (MIP) called master problems. The NLP subproblems<br />

are solved for fixed 0-1 variables that are predicted by the MIP master<br />

problem at each (major) iteration. For the case of convex problems, the master<br />

problem also provides a lower bound on the objective function. This lower<br />

bound (in the case of minimization) increases monotonically as iterations proceed<br />

due to the accumulation of linear approximations. Note that in the case<br />

of maximization this bound is an upper bound. This bound can be used as a<br />

stopping criterion through a DICOPT option stop 1 (see section 8). Another<br />

stopping criterion that tends to work very well in practice for non-convex problems<br />

(and even on convex problems) is based on the heuristic: stop as soon as<br />

the NLP subproblems start worsening (i.e. the current NLP subproblem has an<br />

optimal objective function that is worse than the previous NLP subproblem).<br />

This stopping criterion relies on the use of the augmented penalty and is used<br />

in the description of the algorithm below. This is also the default stopping criterion<br />

in the implementation of DICOPT. The algorithm can be stated briefly<br />

as follows:<br />

1. Solve the NLP relaxation of the MINLP program. If the relaxed solution y (0) is integer,<br />

stop (“integer optimum found”). Else continue with step 2.<br />

2. Find an integer point y (1) with an MIP master problem that features an<br />

augmented penalty function to find the minimum over the convex hull<br />

determined by the half-spaces at the solution (x (0) , y (0) ).<br />

3. Fix the binary variables y = y (1) and solve the resulting NLP. Let (x (1) , y (1) )<br />

be the corresponding solution.<br />

4. Find an integer solution y (2) with a MIP master problem that corresponds<br />

to the minimization over the intersection of the convex hulls described by<br />

the half-spaces of the KKT points at y (0) and y (1) .<br />

5. Repeat steps 3 and 4 until there is an increase in the value of the NLP<br />

objective function. (Repeating step 4 means augmenting the set over<br />

which the minimization is performed with additional linearizations - i.e.<br />

half-spaces - at the new KKT point).<br />

In the MIP problems integer cuts are added to the model to exclude previously<br />

determined integer vectors y (1) , y (2) , ..., y (K) .<br />

For a detailed description of the theory and references to earlier work, see<br />

[5, 3, 1].<br />

The algorithm has been extended to handle general integer variables and<br />

integer variables appearing nonlinearly in the model.<br />



6 Modeling<br />

6.1 Relaxed model<br />

Before solving a model with DICOPT, it is strongly advised to experiment with<br />

the relaxed model where the integer restrictions are ignored. This is the RMINLP<br />

model. As DICOPT starts by solving the relaxed problem and can use an<br />

existing relaxed optimal solution, it is a good idea always to solve the RMINLP model<br />

before attempting to solve the MINLP model. I.e. the following fragment costs<br />

little or nothing in performance:<br />

model m /all/;<br />

option nlp=conopt2;<br />

option mip=cplex;<br />

option rminlp=conopt2;<br />

option minlp=dicopt;<br />

*<br />

* solve relaxed model<br />

*<br />

solve m using rminlp minimizing z;<br />

abort$(m.modelstat > 2.5) "Relaxed model could not be solved";<br />

*<br />

* solve minlp model<br />

*<br />

solve m using minlp minimizing z;<br />

The second SOLVE statement will only be executed if the first SOLVE was<br />

successful, i.e. if the model status was one (optimal) or two (locally optimal).<br />

In general it is not a good idea to try to solve an MINLP model if the<br />

relaxed model can not be solved reliably. As the RMINLP model is a normal<br />

NLP model, some obvious points of attention are:<br />

• Scaling. If a model is poorly scaled, an NLP solver may not be able to find<br />

the optimal or even a feasible solution. Some NLP solvers have automatic<br />

scaling algorithms, but often it is better to attack this problem on the<br />

modeling level. The GAMS scaling facility can help in this respect.<br />

• Starting point. If a poor starting point is used, the NLP solver may not<br />

be able to find a feasible or optimal solution. A starting point can be set<br />

by setting level values, e.g. X.L = 1;. The GAMS default levels are zero,<br />

which is often not a good choice.<br />

• Adding bounds. Add bounds so that all functions can be properly evaluated.<br />

If you have a function √ x or log(x) in the model, you may want to<br />

add a bound X.LO=0.001;. If a function like log(f(x)) is used, you may<br />

want to introduce an auxiliary variable and equation y = f(x) with an<br />

appropriate bound Y.LO=0.001;.<br />
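The auxiliary-variable device from the last bullet can be sketched as follows (a fragment, not a complete model: z and defz are hypothetical names, and the expression x1 + x2 stands in for f(x); variables x1, x2, and w are assumed to exist):<br />

```gams
positive variable z;
z.lo = 0.001;

equations defz, obj;
* bound the argument of log via an auxiliary variable
defz..  z =e= x1 + x2;
obj..   w =e= log(z);
```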

In some cases the relaxed problem is the most difficult model. If you have<br />

more than one NLP solver available, you may want to try a sequence of them:<br />



model m /all/;<br />

option nlp=conopt2;<br />

option mip=cplex;<br />

option rminlp=conopt2;<br />

option minlp=dicopt;<br />

*<br />

* solve relaxed model<br />

*<br />

solve m using rminlp minimizing z;<br />

if (m.modelstat > 2.5,<br />

option rminlp=minos;<br />

solve m using rminlp minimizing z;<br />

);<br />

if (m.modelstat > 2.5,<br />

option rminlp=snopt;<br />

solve m using rminlp minimizing z;<br />

);<br />

*<br />

* solve minlp model<br />

*<br />

solve m using minlp minimizing z;<br />

In this fragment, we first try to solve the relaxed model using CONOPT2. If<br />

that fails we try MINOS, and if that solve also fails, we try SNOPT.<br />

It is worthwhile to spend some time in getting the relaxed model to solve<br />

reliably and speedily. In most cases, modeling improvements in the relaxed<br />

model, such as scaling, will also benefit the subsequent NLP sub-problems. In<br />

general these modeling improvements turn out to be rather solver independent:<br />

changes that improve the performance with CONOPT will also help solving the<br />

model with MINOS.<br />

6.2 OPTCR and OPTCA<br />

The DICOPT algorithm assumes that the integer sub-problems are solved to<br />

optimality. The GAMS options for OPTCR and OPTCA are therefore ignored:<br />

subproblems are solved with both tolerances set to zero. If you really want to<br />

solve a MIP sub-problem with an optimality tolerance, you can use the DICOPT<br />

option file to set OPTCR or OPTCA there. For more information see section 8.<br />

For models with many discrete variables, it may be necessary to introduce<br />

an OPTCR or OPTCA option in order to solve the model in acceptable time. For<br />

models with a limited number of integer variables the default to solve MIP<br />

sub-models to optimality may be acceptable.<br />
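For example, a DICOPT option file entry that allows a relative gap in the MIP master problems might read as follows (a sketch; the option file mechanics are described in section 8):<br />

```gams
* dicopt.opt: accept MIP master solutions within a 5% relative gap
optcr 0.05
```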

6.3 Integer formulations<br />

A number of MIP formulations are not very obvious and demand knowledge<br />

and experience from the modeler. A good overview of integer<br />

programming modeling is given in [6].<br />

Many integer formulations use a so-called big-M construct. It is important<br />

to choose small values for those big-M numbers. As an example consider the<br />



fixed charge problem where yi ∈ {0, 1} indicate if facility i is open or closed, and<br />

where xi is the production at facility i. Then the cost function can be modeled<br />

as:<br />

Ci = fiyi + vixi<br />

xi ≤ Miyi<br />

yi ∈ {0, 1}<br />

0 ≤ xi ≤ capi<br />

where fi is the fixed cost and vi the variable cost of operating facility i. In<br />

this case Mi should be chosen large enough that xi is not restricted if yi = 1.<br />

On the other hand, we want it as small as possible. This leads to the choice to<br />

have Mi equal to the (tight) upper bound of variable xi (i.e. the capacity capi<br />

of facility i).<br />
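A GAMS sketch of this fixed-charge structure, with illustrative set and parameter names, could look like:<br />

```gams
set i plants /plant1*plant3/;
parameters f(i) fixed cost, v(i) variable cost, cap(i) capacity;

binary variable   y(i)  open indicator;
positive variable x(i)  production;
variable cost;

equations defcost, bigM(i);
defcost..  cost =e= sum(i, f(i)*y(i) + v(i)*x(i));
* big-M set to the tight upper bound cap(i)
bigM(i)..  x(i) =l= cap(i)*y(i);
x.up(i) = cap(i);
```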

6.4 Non-smooth functions<br />

GAMS warns NLP modelers against the use of non-smooth functions<br />

such as min(), max(), smin(), smax() and abs(). In order to use these functions,<br />

a non-linear program needs to be declared as a DNLP model instead of a<br />

regular NLP model:<br />

option dnlp=conopt2;<br />

model m /all/;<br />

solve m minimizing z using dnlp;<br />

This construct is to warn the user that problems may arise due to the use<br />

of non-smooth functions.<br />

A possible solution is to use a smooth approximation. For instance, the function<br />

f(x) = |x| can be approximated by g(x) = √(x² + ε) for some ε > 0. This<br />

approximation does not contain the point (0, 0). An alternative approximation<br />

can be devised that has this property:<br />

f(x) ≈ 2x/(1 + e^(−x/h)) − x<br />

For more information see [2].<br />
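As an illustrative numerical check (the values are made up), the second approximation is already close to |x| for a modest smoothing parameter h, and gives exactly 0 at x = 0:<br />

```gams
scalars h /0.01/, xval /0.5/, approx;
approx = 2*xval/(1 + exp(-xval/h)) - xval;
* approx is close to abs(xval) = 0.5 for this h
display approx;
```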

For MINLP models, there is no such protection against non-smooth functions,<br />

although their use is just as problematic here. However,<br />

with MINLP models we have the possibility of using discrete variables to<br />

model if-then-else situations. For the case of the absolute value, for instance,<br />

we can replace x by x⁺ − x⁻ and |x| by x⁺ + x⁻ by using:<br />

x = x⁺ − x⁻<br />

|x| = x⁺ + x⁻<br />

x⁺ ≤ δM<br />

x⁻ ≤ (1 − δ)M<br />

x⁺, x⁻ ≥ 0<br />

δ ∈ {0, 1}<br />

where δ is a binary variable.<br />

7 GAMS <strong>Options</strong><br />

GAMS options are specified in the GAMS model source, either using the option<br />

statement or using a model suffix.<br />

7.1 The OPTION statement<br />

An option statement sets a global parameter. An option statement should<br />

appear before the solve statement, as in:<br />

model m /all/;<br />

option iterlim=100;<br />

solve m using minlp minimizing z;<br />

Here follows a list of option statements that affect the behavior of DICOPT:<br />

option domlim = n;<br />

This option sets a limit on the total accumulated number of non-linear<br />

function evaluation errors that are allowed while solving the NLP subproblems<br />

or inside DICOPT itself. An example of a function evaluation<br />

error or domain error is taking the square root of a negative number. Such<br />

situations can be prevented by adding proper bounds. The default is zero,<br />

i.e. no function evaluation errors are allowed.<br />

In case a domain error occurs, the listing file will contain an appropriate<br />

message, including the equation that is causing the problem, for instance:<br />

**** ERRORS(S) IN EQUATION loss(cc,sw)<br />

2 instance(s) of - UNDEFINED REAL POWER (RETURNED 0.0E+00)<br />

If such errors appear you can increase the DOMLIM limit, but often it is<br />

better to prevent the errors from occurring. In many cases this can be accomplished<br />

by adding appropriate bounds. Sometimes you will need to add<br />

extra variables and equations to accomplish this. For instance with an<br />

expression like log(x − y), you may want to introduce a variable z > ε and<br />

an equation z = x − y, so that the expression can be rewritten as log(z).<br />
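A sketch of this rewriting, assuming existing variables x and y (z and defz are hypothetical names):<br />

```gams
variable z;
z.lo = 0.001;

equation defz;
defz..  z =e= x - y;
* now use log(z) wherever log(x - y) appeared
```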



option iterlim = n;<br />

This option sets a limit on the total accumulated (minor) iterations performed<br />

in the MIP and NLP subproblems. The default is 1000.<br />

option minlp = dicopt;<br />

Selects DICOPT to solve MINLP problems.<br />

option mip = s;<br />

This option sets the MIP solver to be used for the MIP master problems.<br />

Note that changing from one MIP solver to another can lead to different<br />

results, and may cause DICOPT to follow a different path.<br />

option nlp = s;<br />

This option sets the NLP solver to be used for the NLP sub-problems.<br />

Note that changing from one NLP solver to another can lead to different<br />

results, and may cause DICOPT to follow a different path.<br />

option optca = x;<br />

This option is ignored. MIP master problems are solved to optimality<br />

unless specified differently in the DICOPT option file.<br />

option optcr = x;<br />

This option is ignored. MIP master problems are solved to optimality<br />

unless specified differently in the DICOPT option file.<br />

option reslim = x;<br />

This option sets a limit on the total accumulated time (in seconds) spent<br />

inside DICOPT and the subsolvers. The default is 1000 seconds.<br />

option sysout = on;<br />

This option will print extra information to the listing file.<br />

In the list above (and in the following) n indicates an integer number. GAMS<br />

will also accept fractional values: they will be rounded. <strong>Options</strong> marked with<br />

an x parameter expect a real number. <strong>Options</strong> with an s parameter expect a<br />

string argument.<br />

7.2 The model suffix<br />

Some options are set by assigning a value to a model suffix, as in:<br />

model m /all/;<br />

m.optfile=1;<br />

solve m using minlp minimizing z;<br />

Here follows a list of model suffixes that affect the behavior of DICOPT:<br />

m.dictfile = 1;<br />

This option tells GAMS to write a dictionary file containing information<br />

about GAMS identifiers (equation and variables names). This information<br />

is needed when the DICOPT option nlptracelevel is used. Otherwise<br />

this option can be ignored.<br />



m.iterlim = n;<br />

Sets the total accumulated (minor) iteration limit. This option overrides<br />

the global iteration limit set by an option statement. E.g.,<br />

model m /all/;<br />

m.iterlim = 100;<br />

option iterlim = 1000;<br />

solve m using minlp minimizing z;<br />

will cause DICOPT to use an iteration limit of 100.<br />

m.optfile = 1;<br />

This option instructs DICOPT to read an option file dicopt.opt. This file<br />

should be located in the current directory (or the project directory when<br />

using the GAMS IDE). The contents of the option file will be echoed to<br />

the listing file and to the screen (the log file):<br />

--- DICOPT: Reading option file D:\MODELS\SUPPORT\DICOPT.OPT<br />

> maxcycles 10<br />

--- DICOPT: Starting major iteration 1<br />

If the option file does not exist, the algorithm will proceed using its default<br />

settings. An appropriate message will be displayed in the listing file and<br />

in the log file:<br />

--- DICOPT: Reading option file D:\MODELS\SUPPORT\DICOPT.OPT<br />

--- DICOPT: File does not exist, using defaults...<br />

--- DICOPT: Starting major iteration 1<br />

m.optfile = n;<br />

If n > 1 then the option file that is read is called dicopt.opn (for n =<br />

2, ..., 9) or dicopt.on (for n = 10, ..., 99). E.g. m.optfile=2; will cause<br />

DICOPT to read dicopt.op2.<br />

m.prioropt = 1;<br />

This option will turn on the use of priorities on the discrete variables.<br />

Priorities influence the branching order chosen by the MIP solver during<br />

solution of the MIP master problems. The use of priorities can greatly<br />

impact the performance of the MIP solver. The priorities themselves have<br />

to be specified using the .prior variable suffix, e.g. x.prior(i,j) =<br />

ord(i);. Contrary to intuition, variables with a lower value for their<br />

priority are branched on before variables with a higher priority. I.e. the<br />

most important variables should get lower priority values.<br />
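A minimal sketch of this (assuming a model m with a discrete variable x(i) and objective z already declared):<br />

```gams
* lower .prior values are branched on first
x.prior(i) = ord(i);
m.prioropt = 1;
solve m using minlp minimizing z;
```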

m.reslim = x;<br />

Sets the total accumulated time limit. This option overrides the global<br />

time limit set by an option statement.<br />



8 DICOPT options<br />

This sections describes the options that can be specified in the DICOPT option<br />

file. This file is usually called dicopt.opt. In order to tell DICOPT to read this<br />

file, you will need to set the optfile model suffix, as in:<br />

model m /all/;<br />

m.optfile=1;<br />

solve m using minlp minimizing z;<br />

The option file is searched for in the current directory, or in case the IDE<br />

(Integrated Development Environment) is used, in the project directory.<br />

The option file is a standard text file, with a single option on each line. All<br />

options are case-insensitive. A line is a comment line if it starts with an asterisk,<br />

*, in column one. A valid option file can look like:<br />

* stop only on infeasible MIP or hitting a limit<br />

stop 0<br />

* use minos to solve first NLP sub problem<br />

* and conopt2 for all subsequent ones<br />

nlpsolver minos conopt2<br />

Here follows a list of available DICOPT options:<br />

continue n<br />

This option can be used to let DICOPT continue in case of NLP solver<br />

failures. The preferred approach is to fix the model, such that NLP subproblems<br />

solve without problems. However, in some cases we can ignore (partial)<br />

failures of an NLP solver in solving the NLP subproblems, as DICOPT<br />

may recover later on. During model debugging, you may want to add the<br />

option continue 0, so that DICOPT stops as soon as an NLP subproblem<br />

fails.<br />

continue 0<br />

Stop on solver failure. DICOPT will terminate when an NLP subproblem<br />

can not be solved to optimality. Some NLP solvers terminate<br />

with a status other than optimal if not all of the termination<br />

criteria are met. For instance, the change in the objective function<br />

is negligible (indicating convergence) but the reduced gradients are<br />

not within the required tolerance. Such a solution may or may not<br />

be close to the (local) optimum. Using continue 0 will cause DICOPT<br />

not to accept such a solution.<br />

continue 1<br />

NLP subproblem failures resulting in non-optimal but feasible solutions<br />

are accepted. Sometimes an NLP solver can not make further<br />

progress towards meeting all optimality conditions, although the<br />

current solution is feasible. Such a solution can be accepted by this<br />

option.<br />



continue 2<br />

NLP subproblem failures resulting in a non-optimal but feasible solution<br />

are accepted (as in option continue 1). NLP subproblem<br />

failures resulting in an infeasible solution are ignored. The corresponding<br />

configuration of discrete variables is forbidden to be used<br />

again. An integer cut to accomplish this is added to subsequent<br />

MIP master problems. Note that the relaxed NLP solution should<br />

be feasible. This setting is the default.<br />

domlim i1 i2 . . . in<br />

Sets a limit on the number of function and derivative evaluation errors for<br />

a particular cycle. A number of −1 means that the global GAMS option<br />

domlim is used. The last number in sets a domain error limit for all cycles<br />

n, n + 1, . . . .<br />

Example: domlim 0 100 0<br />

The NLP solver in the second cycle is allowed to make up to 100<br />

evaluation errors, while all other cycles must be solved without evaluation<br />

errors.<br />

The default is to use the global GAMS domlim option.<br />

epsmip x<br />

This option can be used to relax the test on MIP objective functions.<br />

The objective function values of the MIP master problems should form<br />

a monotonic worsening curve. This is not the case if the MIP master<br />

problems are not solved to optimality. Thus, if the options OPTCR or<br />

OPTCA are set to a nonzero value, this test is bypassed. If the test fails,<br />

DICOPT will fail with a message:<br />

The MIP solution became better after adding integer cuts.<br />

Something is wrong. Please check if your model is properly<br />

scaled. Also check your big M formulations -- the value<br />

of M should be relatively small.<br />

This error can also occur if you used a MIP solver option<br />

file with a nonzero OPTCR or OPTCA setting. In that case<br />

you may want to increase the EPSMIP setting using a<br />

DICOPT option file.<br />

The value of<br />

(PreviousObj − CurrentObj) / (1 + |PreviousObj|) (9)<br />

is compared against epsmip. In case the test fails, but you want DI-<br />

COPT to continue anyway, you may want to increase the value of epsmip.<br />

The current values used in the test (previous and current MIP objective,<br />

epsmip) are printed along with the message above, so you will have information<br />

about how much you should increase epsmip to pass the test. Normally,<br />

you should not have to change this value. The default is x = 1.0e−6.<br />



epsx x<br />

This tolerance is used to distinguish between integer variables that are set<br />

to an integer value by the user and integer variables that are fractional. See<br />

the option relaxed. Default: x = 1.0e-3.<br />

keepfiles<br />

This option is for debugging purposes. By default DICOPT will remove<br />

any scratch files that are written. With this option, you can tell DICOPT<br />

not to delete those temporary files. This option is especially useful in<br />

combination with the GAMS main driver program gamskeep, which will<br />

not delete the GAMS scratch directory after the job has finished.<br />

maxcycles n<br />

The maximum number of cycles or major iterations performed by DI-<br />

COPT. The default is n = 20.<br />

mipiterlim i1 i2 . . . in<br />

Sets an iteration limit on individual MIP master problems. The last number<br />

in is valid for all subsequent cycles n, n + 1, . . . . A number of −1 indicates<br />

that there is no (individual) limit on the corresponding MIP master<br />

problem. A global iteration limit is maintained through the GAMS option<br />

iterlim.<br />

Example: mipiterlim 10000 -1<br />

The first MIP master problem can not use more than 10000 iterations,<br />

while subsequent MIP master problems are not individually<br />

restricted.<br />

Example: mipiterlim 10000<br />

Sets an iteration limit of 10000 on all MIP master problems.<br />

When this option is used it is advised to have the option continue set<br />

to its default of 2. The default for this option is not to restrict iteration<br />

counts on individual solves of MIP master problems.<br />

mipoptfile s1 s2 . . . sn<br />

Specifies the option file to be used for the MIP master problems. Several<br />

option files can be specified, separated by a blank. If a digit 1 is entered,<br />

the default option file for the MIP solver in question is used. The<br />

digit 0 indicates that no option file is to be used. The last option file is also<br />

used for subsequent MIP master problems.<br />

Example: mipoptfile mip.opt mip2.opt 0<br />

This option will cause the first MIP master problem solver to read the<br />

option file mip.opt, the second one to read the option file mip2.opt<br />

and subsequent MIP master problem solvers will not use any option<br />

file.<br />



Example: mipoptfile 1<br />

This will cause the MIP solver for all MIP subproblems to read a<br />

default option file (e.g. cplex.opt, xpress.opt, osl2.opt etc.).<br />

Option files are located in the current directory (or the project directory<br />

when using the IDE). The default is not to use an option file.<br />

mipreslim x1 x2 . . . xn<br />

Sets a resource (time) limit on individual MIP master problems. The last<br />

number xn is valid for all subsequent cycles n, n + 1, . . . . A number −1.0<br />

means that the corresponding MIP master problem is not individually<br />

time restricted. A global time limit is maintained through the GAMS<br />

option reslim.<br />

Example: mipreslim -1 10000 -1<br />

The MIP master problem in cycle 2 can not use more than 10000 seconds,<br />

while subsequent MIP master problems are not individually<br />

restricted.<br />

Example: mipreslim 1000<br />

Sets a time limit on all MIP master problems of 1000 seconds.<br />

When this option is used it is advised to have the option continue set to<br />

its default of 2. The default for this option is not to restrict individually<br />

the time a solver can spend on the MIP master problem.<br />

mipsolver s1 s2 . . . sn<br />

This option specifies which MIP solver to use for the MIP master problems.<br />

Example: mipsolver cplex osl2<br />

This instructs DICOPT to use Cplex for the first MIP and OSL2 for<br />

the second and subsequent MIP problems. The last entry may be<br />

used for more than one problem.<br />

The names to be used for the solvers are the same as one uses in the<br />

GAMS statement OPTION MIP=....;. The default is to use the default<br />

MIP solver.<br />

Note that changing from one MIP solver to another can lead to different<br />

results, and may cause DICOPT to follow a different path.<br />

nlpiterlim i1 i2 . . . in<br />

Sets an iteration limit on individual NLP subproblems. The last number<br />

in is valid for all subsequent cycles n, n + 1, . . . . A number of −1 indicates<br />

that there is no (individual) limit on the corresponding NLP subproblem.<br />

A global iteration limit is maintained through the GAMS option iterlim.<br />

Example: nlpiterlim 1000 -1<br />

The first (relaxed) NLP subproblem can not use more than 1000<br />

iterations, while subsequent NLP subproblems are not individually<br />

restricted.<br />



Example: nlpiterlim 1000<br />

Sets an iteration limit of 1000 on all NLP subproblems.<br />

When this option is used it is advised to have the option continue set<br />

to its default of 2. The default is not to restrict the number of iterations<br />

an NLP solver can spend on an NLP subproblem, other than the global<br />

iteration limit.<br />

nlpoptfile s1 s2 . . . sn<br />

Specifies the option file to be used for the NLP subproblems. Several<br />

option files can be specified, separated by a blank. If a digit 1 is entered,<br />

the default option file for the NLP solver in question is used. The<br />

digit 0 indicates that no option file is to be used. The last option file is also<br />

used for subsequent NLP subproblems.<br />

Example: nlpoptfile nlp.opt nlp2.opt 0<br />

This option will cause the first NLP subproblem solver to read the<br />

option file nlp.opt, the second one to read the option file nlp2.opt<br />

and subsequent NLP subproblem solvers will not use any option file.<br />

Example: nlpoptfile 1<br />

This will cause the NLP solver for all NLP subproblems to read a<br />

default option file (e.g. conopt2.opt, minos.opt, snopt.opt etc.).<br />

Option files are located in the current directory (or the project directory<br />

when using the IDE). The default is not to use an option file.<br />

nlpreslim x1 x2 . . . xn<br />

Sets a resource (time) limit on individual NLP subproblems. The last<br />

number xn is valid for all subsequent cycles n, n + 1, . . . . A number −1.0<br />

means that the corresponding NLP subproblem is not individually time<br />

restricted. A global time limit is maintained through the GAMS option<br />

reslim.<br />

Example: nlpreslim 100 -1<br />

The first (relaxed) NLP subproblem can not use more than 100 seconds,<br />

while subsequent NLP subproblems are not individually restricted.<br />

Example: nlpreslim 1000<br />

Sets a time limit of 1000 seconds on all NLP subproblems.<br />

When this option is used it is advised to have the option continue set to<br />

its default of 2. The default for this option is not to restrict individually<br />

the time an NLP solver can spend on an NLP subproblem (other than the<br />

global resource limit).<br />

nlpsolver s1 s2 . . . sn<br />

This option specifies which NLP solver to use for the NLP subproblems.<br />



Example: nlpsolver conopt2 minos snopt<br />

tells DICOPT to use CONOPT2 for the relaxed NLP, MINOS for the<br />

second NLP subproblem and SNOPT for the third and subsequent<br />

ones. The last entry is used for more than one subproblem: for all<br />

subsequent ones DICOPT will use the last specified solver.<br />

The names to be used for the solvers are the same as one uses in the<br />

GAMS statement OPTION NLP=....;. The default is to use the default<br />

NLP solver. Note that changing from one NLP solver to another can lead<br />

to different results, and may cause DICOPT to follow a different path.<br />

nlptracefile s<br />

Name of the files written if the option nlptracelevel is set. Only the<br />

stem is needed: if the name is specified as nlptracefile nlptrace, then<br />

files of the form nlptrace.001, nlptrace.002, etc. are written. These<br />

files contain the settings of the integer variables so that NLP subproblems<br />

can be investigated independently of DICOPT. Default: nlptrace.<br />

nlptracelevel n<br />

This sets the level for NLP tracing, which writes a file for each NLP<br />

sub-problem, so that NLP sub-problems can be investigated outside the<br />

DICOPT environment. See also the option nlptracefile.<br />

nlptracelevel 0<br />

No trace files are written. This is the default.<br />

nlptracelevel 1<br />

A GAMS file for each NLP subproblem is written which fixes the<br />

discrete variables.<br />

nlptracelevel 2<br />

As nlptracelevel 1, but in addition level values of the continuous<br />

variables are written.<br />

nlptracelevel 3<br />

As nlptracelevel 2, but in addition marginal values for the equations<br />

and variables are written.<br />

By including a trace file in your original problem, and changing it from an<br />

MINLP into an NLP problem, the subproblem will be solved directly by an NLP solver.<br />

This option only works if the names in the model (names of variables and<br />

equations) are exported by GAMS. This can be accomplished by using<br />

the m.dictfile model suffix, as in m.dictfile=1;. In general it is more<br />

convenient to use the CONVERT solver to generate isolated NLP models (see<br />

section 10.4).<br />
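A sketch of enabling NLP tracing (the option file dicopt.opt is assumed to contain a line such as nlptracelevel 1):<br />

```gams
model m /all/;
m.optfile  = 1;   * read dicopt.opt (with an nlptracelevel setting)
m.dictfile = 1;   * export variable and equation names
solve m using minlp minimizing z;
```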

optca x1 x2 . . . xn<br />

The absolute optimality criterion for the MIP master problems. The<br />

GAMS option optca is ignored, as by default DICOPT wants to solve<br />

MIP master problems to optimality. To allow large problems to be solved, it is<br />



possible to stop the MIP solver earlier, by specifying a value for optca or<br />

optcr in a DICOPT option file. By setting a value for optca, the MIP<br />

solver is instructed to stop as soon as the gap between the best possible<br />

integer solution and the best found integer solution is less than x, i.e. stop<br />

as soon as<br />

|BestFound − BestPossible| ≤ x (10)<br />

It is possible to specify a different optca value for each cycle. The last<br />

number xn is valid for all subsequent cycles n, n + 1, . . . .<br />

Example: optca 10<br />

Stop the search in all MIP problems as soon as the absolute gap is<br />

less than 10.<br />

Example: optca 0 10 0<br />

Sets a nonzero optca value of 10 for cycle 2, while all other MIP<br />

master problems are solved to optimality.<br />

The default is zero.<br />

optcr x1 x2 . . . xn<br />

The relative optimality criterion for the MIP master problems. The GAMS<br />

option optcr is ignored, as by default DICOPT wants to solve MIP master<br />

problems to optimality. To allow large problems to be solved, it is possible to<br />

stop the MIP solver earlier, by specifying a value for optca or optcr in<br />

a DICOPT option file. By setting a value for optcr, the MIP solver is<br />

instructed to stop as soon as the relative gap between the best possible<br />

integer solution and the best found integer solution is less than x, i.e. stop<br />

as soon as<br />

|BestFound − BestPossible| / |BestPossible| ≤ x (11)<br />

Note that the relative gap can not be evaluated if the best possible integer<br />

solution is zero. In those cases the absolute optimality criterion optca can<br />

be used. It is possible to specify a different optcr value for each cycle.<br />

The last number xn is valid for all subsequent cycles n, n + 1, . . . .<br />

Example: optcr 0.1<br />

Stop the search in all the MIP problems as soon as the relative gap<br />

is smaller than 10%.<br />

Example: optcr 0 0.01 0<br />

Sets a nonzero optcr value of 1% for cycle 2, while all other MIP<br />

master problems are solved to optimality.<br />

The default is zero.<br />

relaxed n<br />

In some cases it may be possible to use a known configuration of the<br />

discrete variables. Some users have very difficult problems, where the<br />

relaxed problem can not be solved, but where NLP sub-problems with the<br />



integer variables fixed are much easier. In such a case, if a reasonable<br />

integer configuration is known in advance, we can bypass the relaxed NLP<br />

and tell DICOPT to directly start with this integer configuration. The<br />

integer variables need to be specified by the user before the solve statement<br />

by assigning values to the levels, as in Y.L(I) = INITVAL(I);.<br />

relaxed 0<br />

The first NLP sub-problem will be executed with all integer variables<br />

fixed to the values specified by the user. If you don’t assign a value<br />

to an integer variable, it will retain its current value, which is zero<br />

by default.<br />

relaxed 1<br />

The first NLP problem is the relaxed NLP problem: all integer variables<br />

are relaxed between their bounds. This is the default.<br />

relaxed 2<br />

The first NLP subproblem will be executed with some variables fixed<br />

and some relaxed. The program distinguishes the fixed from the<br />

relaxed variables by comparing the initial values against the bounds<br />

and the tolerance allowed EPSX. EPSX has a default value of 1.e-3.<br />

This can be changed through the option file.<br />
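A sketch of starting from a known integer configuration (the set I, binary variable Y and parameter INITVAL are assumed, as in the text above; the option file dicopt.opt is assumed to contain the line relaxed 0):<br />

```gams
* supply the known configuration before the solve
Y.L(I) = INITVAL(I);
model m /all/;
m.optfile = 1;    * dicopt.opt contains the line:  relaxed 0
solve m using minlp minimizing z;
```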

stop n<br />

This option defines the stopping criterion to be used. The search is always<br />

stopped when the (minor) iteration limit (the iterlim option), the<br />

resource limit (the reslim option), or the major iteration limit (see maxcycles)<br />

is hit or when the MIP master problem becomes infeasible.<br />

stop 0<br />

Do not stop unless an iteration limit, resource limit, or major iteration<br />

limit is hit or the MIP master problem becomes<br />

infeasible. This option can be used to verify that DICOPT does not<br />

stop too early when using one of the other stopping rules. In general<br />

it should not be used on production runs, as DICOPT<br />

will often find the optimal solution using one of the more optimistic<br />

stopping rules.<br />

stop 1<br />

Stop as soon as the bound defined by the objective of the last MIP<br />

master problem is worse than the best NLP solution found (a “crossover”<br />

occurred). For convex problems this gives a global solution, provided<br />

the weights are large enough. This stopping criterion should only be<br />

used if it is known or it is very likely that the nonlinear functions are<br />

convex. In the case of non-convex problems the bounds of the MIP<br />

master problem are not rigorous. Therefore, the global optimum can<br />

be cut-off with the setting stop 1.<br />

stop 2<br />

Stop as soon as the NLP subproblems stop improving. This “worsening”<br />



criterion is a heuristic. For non-convex problems in which valid<br />

bounds can not be obtained, the heuristic often works very well. Even<br />

on convex problems, in many cases it terminates the search very early<br />

while providing an optimal or a very good integer solution. The criterion<br />

is not checked before major iteration three.<br />

stop 3<br />

Stop as soon as a crossover occurs or when the NLP subproblems<br />

start to worsen. (This is a combination of 1 and 2).<br />

Note: In general a higher number stops earlier, although in some cases<br />

stopping rule 2 may terminate the search earlier than rule 1. Section VI<br />

shows some experiments with these stopping criteria.<br />

weight x<br />

The value of the penalty coefficients. Default x = 1000.0.<br />

9 DICOPT output<br />

DICOPT generates lots of output on the screen. Not only does DICOPT itself<br />

write messages to the screen, but so do the NLP and MIP solvers that handle the<br />

sub-problems. The most important part is the last part of the screen output.<br />

In this section we will discuss the output that DICOPT writes to the screen<br />

and the listing file, using the model procsel.gms (this model is part of the GAMS<br />

model library). The log below shows the progress of DICOPT and the reason<br />

why it terminated.<br />

--- DICOPT: Checking convergence<br />

--- DICOPT: Search stopped on worsening of NLP subproblems<br />

--- DICOPT: Log File:<br />

Major Major Objective CPU time Itera- Evaluation Solver<br />

Step Iter Function (Sec) tions Errors<br />

NLP 1 5.35021 0.05 8 0 conopt2<br />

MIP 1 2.48869 0.28 7 0 cplex<br />

NLP 2 1.72097< 0.00 3 0 conopt2<br />

MIP 2 2.17864 0.22 10 0 cplex<br />

NLP 3 1.92310< 0.00 3 0 conopt2<br />

MIP 3 1.42129 0.22 12 0 cplex<br />

NLP 4 1.41100 0.00 8 0 conopt2<br />

--- DICOPT: Terminating...<br />

--- DICOPT: Stopped on NLP worsening<br />

The search was stopped because the objective function<br />

of the NLP subproblems started to deteriorate.<br />

--- DICOPT: Best integer solution found: 1.923099<br />

--- Restarting execution<br />

--- PROCSEL.GMS(98) 0 Mb<br />

--- Reading solution for model process<br />

*** Status: Normal completion<br />



Notice that the integer solutions are provided by the NLP’s except for major<br />

iteration one (the first NLP is the relaxed NLP). For all NLP’s except the<br />

relaxed one, the binary variables are fixed, according to a pattern determined<br />

by the previous MIP which operates on a linearized model. The integer solutions<br />

marked with a ’<’ indicate that a new best integer solution has been found.<br />


MIP 3 1.42129 0.22 12 0 cplex<br />

NLP 4 1.41100 0.00 8 0 conopt2<br />

------------------------------------------------------------------<br />

Total solver times : NLP = 0.05 MIP = 0.72<br />

Perc. of total : NLP = 6.59 MIP = 93.41<br />

------------------------------------------------------------------<br />

In case the DICOPT run was not successful, or if one of the subproblems<br />

could not be solved, the listing file will contain all the status information provided<br />

by the solvers of the subproblems. Also for each iteration the configuration<br />

of the binary variables will be printed. This extra information can also be<br />

requested via the GAMS option:<br />

option sysout = on ;<br />

10 Special notes<br />

This section covers some special topics of interest to users of DICOPT.<br />

10.1 Stopping rule<br />

Although the default stopping rule behaves quite well in practice, there are some<br />

cases where it terminates too early. In this section we discuss the use of the<br />

stopping criteria.<br />

When we run the example procsel.gms with stopping criterion 0, we see<br />

the following DICOPT log:<br />

--- DICOPT: Starting major iteration 10<br />

--- DICOPT: Search terminated: infeasible MIP master problem<br />

--- DICOPT: Log File:<br />

Major Major Objective CPU time Itera- Evaluation Solver<br />

Step Iter Function (Sec) tions Errors<br />

NLP 1 5.35021 0.06 8 0 conopt2<br />

MIP 1 2.48869 0.16 7 0 cplex<br />

NLP 2 1.72097< 0.00 3 0 conopt2<br />

MIP 2 2.17864 0.10 10 0 cplex<br />

NLP 3 1.92310< 0.00 3 0 conopt2<br />

MIP 3 1.42129 0.11 12 0 cplex<br />

NLP 4 1.41100 0.00 8 0 conopt2<br />

MIP 4 0.00000 0.22 23 0 cplex<br />

NLP 5 0.00000 0.00 3 0 conopt2<br />

MIP 5 -0.27778 0.16 22 0 cplex<br />

NLP 6 -0.27778 0.00 3 0 conopt2<br />

MIP 6 -1.00000 0.16 21 0 cplex<br />

NLP 7 -1.00000 0.00 3 0 conopt2<br />

MIP 7 -1.50000 0.22 16 0 cplex<br />

NLP 8 -1.50000 0.00 3 0 conopt2<br />

MIP 8 -2.50000 0.11 16 0 cplex<br />

NLP 9 -2.50000 0.00 3 0 conopt2<br />

MIP 9 *Infeas* 0.11 0 0 cplex<br />

--- DICOPT: Terminating...<br />

--- DICOPT: Stopped on infeasible MIP<br />



The search was stopped because the last MIP problem<br />

was infeasible. DICOPT will not be able to find<br />

a better integer solution.<br />

--- DICOPT: Best integer solution found: 1.923099<br />

--- Restarting execution<br />

--- PROCSEL.GMS(98) 0 Mb<br />

--- Reading solution for model process<br />

*** Status: Normal completion<br />

This example shows some behavioral features that are not uncommon for<br />

other MINLP models. First, DICOPT often finds the best integer solution in<br />

the first few major iterations. Second, in many cases, as soon as the NLP’s start<br />

to give worse integer solutions, no better integer solution will be found anymore.<br />

This observation is the motivation for making stopping option 2, where DICOPT<br />

stops as soon as the NLP’s start to deteriorate, the default stopping rule. In this<br />

example DICOPT would have stopped in major iteration 4 (you can verify this<br />

in the previous section). In many cases this will indeed give the best integer<br />

solution. For this problem, DICOPT has indeed found the global optimum.<br />

Based on experience with other models we find that the default stopping rule<br />

(stop when the NLP becomes worse) performs well in practice. In many cases<br />

it finds the global optimum solution, for both convex and non-convex problems.<br />

In some cases however, it may provide a sub-optimal solution. In case you want<br />

more reassurance that no good integer solutions are missed you can use one of<br />

the other stopping rules.<br />

Changing the MIP or NLP solver can change the path that DICOPT follows<br />

since the sub-problems may have non-unique solutions. The optimum stopping<br />

rule for a particular problem depends on the MIP and NLP solvers used.<br />

In the case of non-convex problems the bounds of the MIP master problem<br />

are not rigorous. Therefore, the global optimum can be cut-off with stop 1.<br />

This option is however the best stopping criterion for convex problems.<br />

10.2 Solving the NLP problems<br />

In case the relaxed NLP and/or the other NLP sub-problems are very difficult,<br />

using a combination of NLP solvers has been found to be effective. For example,<br />

MINOS has much more difficulty establishing whether a model is infeasible, so one<br />

would like to use CONOPT for NLP subproblems that are either infeasible or<br />

barely feasible. The nlpsolver option can be used to specify the NLP solver<br />

to be used for each iteration.<br />

Infeasible NLP sub-problems can be problematic for DICOPT. Those subproblems<br />

can not be used to form a new linearization. Effectively only the<br />

current integer configuration is excluded from further consideration by adding<br />

appropriate integer cuts, but otherwise an infeasible NLP sub-problem provides<br />

no useful information to be used by the DICOPT algorithm. If your model shows<br />

many infeasible NLP sub-problems you can try to add slack variables and add<br />

them with a penalty to the objective function.<br />



Assume your model is of the form:<br />

min f(x, y)<br />

g(x, y) ∼ b<br />

ℓ ≤ x ≤ u<br />

y ∈ {0, 1} (12)<br />

where ∼ is a vector of relational operators {≤, =, ≥}. x are continuous variables<br />

and y are the binary variables. If many of the NLP subproblems are infeasible,<br />

we can try the following “elastic” formulation:<br />

min f(x, y) + M Σi (si+ + si−)<br />

y = yB + s+ − s−<br />

g(x, y) ∼ b<br />

ℓ ≤ x ≤ u<br />

0 ≤ y ≤ 1<br />

0 ≤ s+, s− ≤ 1<br />

yB ∈ {0, 1} (13)<br />

I.e. the variables y are relaxed to be continuous with bounds [0, 1], and binary<br />

variables yB are introduced that are related to the variables y through the<br />

slack variables s+ and s−. The slack variables are added to the objective with<br />

a penalty parameter M. The choice of a value for M depends on the size of<br />

f(x, y), on the behavior of the model, etc. Typical values are 100 or 1000.<br />
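The elastic formulation above can be sketched in GAMS as follows (the names yb, splus, sminus, the linking equation ydef and the value M = 1000 are illustrative; the original objective f and constraints g are not shown):<br />

```gams
set j 'index of the original binary variables';
binary variable    yb(j)                 'new binary variables';
positive variables splus(j), sminus(j)   'elastic slack variables';
variable           y(j)                  'original y, now continuous';
y.lo(j) = 0;      y.up(j) = 1;
splus.up(j) = 1;  sminus.up(j) = 1;

scalar M 'penalty parameter (illustrative)' / 1000 /;

equation ydef(j) 'links the relaxed y to the binary yb';
ydef(j)..  y(j) =e= yb(j) + splus(j) - sminus(j);

* the objective becomes:  f(x,y) + M*sum(j, splus(j) + sminus(j))
```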

10.3 Solving the MIP master problems<br />

When there are many discrete variables, the MIP master problems may become<br />

expensive to solve. One of the first things to try is to see if a different MIP solver<br />

can solve your particular problems more efficiently.<br />

Different formulations can have dramatic impact on the performance of MIP<br />

solvers. Therefore it is advised to try out several alternative formulations. The<br />

use of priorities can have a big impact on some models. It is possible to specify a<br />

nonzero value for OPTCA and OPTCR in order to prevent the MIP solver from<br />

spending an unacceptably long time proving optimality of MIP master problems.<br />

If the MIP master problem is infeasible, the DICOPT solver will terminate.<br />

In this case you may want to try the same reformulation as discussed in the<br />

previous paragraph.<br />

10.4 Model debugging<br />

In this paragraph we discuss a few techniques that can be helpful in debugging<br />

your MINLP model.<br />



• Start with solving the model as an RMINLP model. Make sure this model<br />

solves reliably before solving it as a proper MINLP model. If you have access<br />

to different NLP solvers, make sure the RMINLP model solves smoothly with<br />

all NLP solvers. Especially CONOPT can generate useful diagnostics such<br />

as Jacobian elements (i.e. matrix elements) that become too large.<br />

• Try different NLP and MIP solvers on the subproblems. Example: use the<br />

GAMS statement “OPTION NLP=CONOPT3;” to solve all NLP subproblems<br />

using the solver CONOPT version 3.<br />

• The GAMS option statement “OPTION SYSOUT = ON;” can generate extra<br />

solver information that can be helpful to diagnose problems.<br />

• If many of the NLP subproblems are infeasible, add slacks as described in<br />

section 10.2.<br />

• Run DICOPT in pedantic mode by using the DICOPT option: “CONTINUE<br />

0.” Make sure all NLP subproblems solve to optimality.<br />

• Don’t allow any nonlinear function evaluation errors, i.e. keep the DOMLIM<br />

limit at zero. See the discussion on DOMLIM in section 7.1.<br />

• If you have access to another MINLP solver such as SBB, try to use a<br />

different solver on your model. To select SBB use the following GAMS<br />

option statement: “OPTION MINLP=SBB;.”<br />

• Individual NLP or MIP subproblems can be extracted from the MINLP by<br />

using the CONVERT solver. It will write a model in scalar GAMS notation,<br />

which can then be solved using any GAMS NLP or MIP solver. E.g. to<br />

generate the second NLP subproblem, you can use the following DICOPT<br />

option: “NLPSOLVER CONOPT2 CONVERT.” The model will be written to<br />

the file GAMS.GMS. A disadvantage of this technique is that some precision<br />

is lost due to the fact that files are being written in plain ASCII. The<br />

advantage is that you can visually inspect these files and look for possible<br />

problems such as poor scaling.<br />



GAMSBAS: A Program for Saving an Advanced Basis for GAMS<br />

Version 1.0<br />

by<br />

Bruce McCarl<br />

Professor<br />

Texas A&M University<br />

College Station, TX 77843-2124<br />

McCarl@Tamu.edu<br />

© Bruce A. McCarl<br />

June 25, 1996


GAMSBAS: A Program for Saving an Advanced Basis for GAMS<br />

A program (GAMSBAS) has been written which saves information providing an advanced<br />

basis for a GAMS model. This information contains the shadow prices, variable levels, and<br />

reduced costs, and is saved in a GAMS readable file. The file can, in turn, be included in<br />

subsequent GAMS models providing an advanced basis.<br />

General Notes<br />

Use of this procedure is relevant in cases where the data are altered before a<br />

SOLVE statement in a large, already solved model. The model should usually have saved restart<br />

files, and the alterations should require rerunning the model from scratch. The general use of the<br />

procedure requires restarting GAMS after a solve and executing with a procedure which saves the<br />

basis information. The resultant data will be written to the file *.BAS, where "*" is the name of<br />

the GAMS input file. This basis then can be included in subsequent runs. An example of this<br />

procedure is given below.<br />

The basis file should never be used for one time solution of a problem and rarely for<br />

solution of a file without use of restart files. One should only use this procedure with large<br />

models when one has to manipulate some of the original data, sets, equations, or variables before<br />

the first solve statement such that the model has to be restarted from scratch. One might also<br />

wish to preserve a basis from an alternative run.<br />

The implementation of GAMSBAS causes it to go through several steps. When the<br />

procedure is first called, GAMS generates the model and passes it to GAMSBAS. In turn,<br />

GAMSBAS examines the problem and selects the solver to be used. Ordinarily, the default solver<br />

for the problem type (whether linear, nonlinear, or integer) is used. Users may exercise control<br />

over this process by using the options file (GAMSBAS.OPT) as described below. In turn, the<br />

solver is invoked and then GAMSBAS writes the basis.<br />

During execution the program includes the line $OFFLISTING as the sixth line in the<br />

*.BAS file. This suppresses the listing of all but the first five lines of the basis in the file that<br />

includes it. Users wishing the full listing should delete this entry.<br />

GAMS constructs a basis using information from the optimal solution. This ordinarily<br />

involves the level and marginal value of all variables plus an indicator of whether or not an<br />

equation has a shadow price. Degeneracies and alternative optima complicate this process.<br />

GAMSBAS tries to overcome this by inserting EPS to indicate when a variable is basic or<br />

nonbasic.<br />

Once the GAMSBAS information has been placed into GAMS the basis may not always<br />

be adequate. For example, a model which took over 100,000 iterations to get an initial solution<br />



required 1200 iterations to reach optimality when restarted from its GAMSBAS basis. However,<br />

this reflected a considerable time saving.<br />

Program Usage<br />

There are three steps involved in using GAMSBAS. The first step involves changing the<br />

solver name in the GAMS file. This is done using the command:<br />

OPTION LP=GAMSBAS;<br />

or<br />

OPTION NLP=GAMSBAS;<br />

or<br />

OPTION MIP=GAMSBAS;<br />

The solver in this case is named GAMSBAS.<br />

Second, restart the model and generate the basis file. Let's assume that the model name is<br />

BLOCKDIA. One would then execute the command GAMS BLOCKDIA with the OPTION<br />

statement inserted before the SOLVE command, as is done in Table 1, line 147. In turn, the file<br />

BLOCKDIA.BAS is generated. This file is listed in Table 2. Note that this file is just a set of GAMS<br />

assignment statements which insert marginal values for the equations and marginal and level<br />

values for the variables (see the chapter on basis formation in McCarl et al. for an explanation of<br />

GAMS basis formation).<br />

Third, an include command is entered right before the solve in the model to be restarted<br />

and the option selecting GAMSBAS as the solving program is normally eliminated. This is done<br />

in Table 3 in lines 147-8 (note the OPTION LP=GAMSBAS is commented out). Use of this<br />

procedure results in the model in Table 1 solving in 0 iterations after inclusion of the basis file as<br />

opposed to 23 iterations before inclusion of the basis file.<br />
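A minimal sketch of the whole cycle, assuming the GAMS input file is BLOCKDIA.GMS (the command lines are illustrative; only the OPTION statement and the .BAS file name come from this text):

```
rem first run: OPTION LP=GAMSBAS appears before the SOLVE,
rem so the run writes the basis to BLOCKDIA.BAS
gams blockdia

rem second run: OPTION LP=GAMSBAS commented out and
rem $INCLUDE "blockdia.bas" placed before the SOLVE
gams blockdia
```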

When a basis from one model is included in another model, the<br />

compiler may detect domain errors because the variables are defined over sets with different<br />

structures across the two models. One can suppress the domain errors by using the GAMS<br />

command $OFFUNI just before inclusion of the basis file and $ONUNI just after.<br />
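Sketched in GAMS, using the BLOCKDIA example from the text, the include section would then read:

```
* suppress domain checking while the basis saved from the other model is loaded
$OFFUNI
$INCLUDE "blockdia.bas"
$ONUNI
SOLVE Furn USING LP MAXIMIZING NETINCOME;
```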

The OPTION FILE<br />

GAMSBAS internally selects the solver to use. Users may override this choice by the use<br />

of the options file. Five keywords are allowed in the options file. These are as follows:<br />



OPTION Name Purpose<br />

LP Gives name of solver for LP problem<br />

NLP Gives name of solver for NLP problem<br />

MIP Gives name of solver for MIP problem<br />

DNLP Gives name of solver for DNLP problem<br />

SOLVERNAME Gives name of solver for problem to be used<br />

In each case the option name is followed by the name of a licensed solver, which must match<br />

the name of a solver GAMS knows about. If the options file is empty, then the default solvers<br />

will be used.<br />

The GAMSBAS option file is called GAMSBAS.OPT. An example of a file could look<br />

like the following two lines:<br />

LP OSL<br />

MIP LAMPS<br />

One other important point regarding the option file involves the name of the active solver<br />

options file. As seen above, the GAMSBAS.OPT file does not include option commands<br />

such as those which should be submitted to MINOS, for example. In all cases the program uses<br />

the default option filename for the particular solver. Thus, if MINOS5 is being used, the program<br />

looks for the solver option file in MINOS5.OPT.<br />



References<br />

Brooke, A., D. Kendrick, and A. Meeraus. GAMS: A User's Guide. The Scientific Press, South<br />

San Francisco, CA, 1988.<br />

McCarl, B.A. “So Your GAMS Model Didn’t Work Right: A Guide to Model Repair.” Texas<br />

A&M University, College Station, TX, 1994.<br />

McCarl, B.A., and T.H. Spreen. "Applied Mathematical Programming Using Algebraic Systems."<br />

Draft Book, Department of Agricultural Economics, Texas A&M University, College<br />

Station, TX, 1996.<br />



Table 1. Example File<br />

16 * SECTION A SET DEFINITION<br />

17<br />

18 SET PRODUCT TABLES CHAIRSSETS /TABLES, CHAIRS, DINSETS/<br />

19 TYPE TYPES OF PRODUCT /FUNCTIONAL ,FANCY/<br />

20 RESOURCE TYPES OF RESOURCES /SMLLATHE,LRGLATHE,CARVER,LABOR, TOP/<br />

21 METHOD PRODUCTION METHODS /NORMAL,MAXSML,MAXLRG/<br />

22 PLANT DIFFERENT PLANTS /PLANT1, PLANT2, PLANT3/<br />

23 SUBPRODUCT(PRODUCT) /TABLES, CHAIRS/;<br />

24<br />

25 * SECTION B DATA DEFINITION<br />

26<br />

27 PARAMETER SETCHAIR(TYPE) CHAIRS CONTAINED IN EACH SET<br />

28 / FUNCTIONAL 4, FANCY 6/<br />

29 TABLECOST(TYPE) TABLECOST /FUNCTIONAL 80,FANCY 100/;<br />

30<br />

31 TABLE CHAIRCOST(METHOD,TYPE) CHAIR COST FOR DIFFERENT METHOD<br />

32 FUNCTIONAL FANCY<br />

33 NORMAL 15 25<br />

34 MAXSML 16 25.7<br />

35 MAXLRG 16.5 26.6 ;<br />

36<br />

37 TABLE TB1(RESOURCE,TYPE,METHOD) USE OF RESOURCES IN CHAIR PRODUCTION<br />

38<br />

39 FUNCTIONAL.NORMAL FUNCTIONAL.MAXSML FUNCTIONAL.MAXLRG<br />

40 SMLLATHE 0.8 1.30 0.20<br />

41 LRGLATHE 0.5 0.20 1.30<br />

42 CARVER 0.4 0.40 0.40<br />

43 LABOR 1.0 1.05 1.10<br />

44 +<br />

45<br />

46 FANCY.NORMAL FANCY.MAXSML FANCY.MAXLRG<br />

47 SMLLATHE 1.2 1.7 0.50<br />

48 LRGLATHE 0.7 0.30 1.50<br />

49 CARVER 1.0 1.00 1.00<br />

50 LABOR 0.8 0.82 0.84;<br />

51<br />

52 TABLE TB2(RESOURCE,TYPE) USE OF RESOURCES IN TABLE PRODUCTION<br />

53<br />

54 FUNCTIONAL FANCY<br />

55 LABOR 3 5<br />

56 TOP 1 1;<br />

57<br />

58 TABLE TRANSCOST(SUBPRODUCT, PLANT,TYPE) TRANSPORT COST TO PLANT1<br />

59<br />

60 PLANT1.FUNCTIONAL PLANT2.FUNCTIONAL PLANT3.FUNCTIONAL<br />

61 CHAIRS 5 7<br />

62 TABLES 20<br />

63 +<br />

64 PLANT1.FANCY PLANT2.FANCY PLANT3.FANCY<br />

65 CHAIRS 5 7<br />

66 TABLES 20;<br />

67<br />

68 TABLE PRICE(PRODUCT,TYPE) PRICE OF CHAIRS<br />

69 FUNCTIONAL FANCY<br />

70 CHAIRS 82 105<br />

71 TABLES 200 300<br />

72 DINSETS 600 1100;<br />

73<br />

74 TABLE RESORAVAIL(RESOURCE,PLANT) RESOURCES AVAILABLE<br />

75 PLANT1 PLANT2 PLANT3<br />

76 TOP 50 40<br />

77 SMLLATHE 140 130<br />

78 LRGLATHE 90 100<br />

79 CARVER 120 110<br />

80 LABOR 175 125 210;<br />



Table 1. Example File (Continued)<br />

82 TABLE ACTIVITY(PRODUCT,PLANT) TELLS IF A PLANT SELLS A PRODUCT<br />

83 PLANT1 PLANT2 PLANT3<br />

84 TABLES 1 1<br />

85 CHAIRS 1 1<br />

86 DINSETS 1 ;<br />

87<br />

88 * SECTION C MODEL DEFINITION<br />

89<br />

90 POSITIVE VARIABLES<br />

91 MAKECHAIR(PLANT, TYPE,METHOD) NUMBER OF CHAIRS MADE<br />

92 MAKETABLE(PLANT, TYPE) NUMBER OF TABLES MADE<br />

93 TRNSPORT(PLANT,SUBPRODUCT,TYPE) NUMBER OF ITEMS TRANSPORTED<br />

94 SELL(PLANT,PRODUCT,TYPE) NUMBER OF ITEMS SOLD;<br />

95<br />

96 VARIABLES<br />

97 NETINCOME NET REVENUE (PROFIT);<br />

98 EQUATIONS<br />

99 OBJT OBJECTIVE FUNCTION ( NET REVENUE )<br />

100 RESOUREQ(PLANT,RESOURCE)<br />

101 LINKTABLE(TYPE) OVERALL FIRM TABLE LINKAGE CONSTRAINTS<br />

102 LINKCHAIR(TYPE) OVERALL FIRM CHAIR LINKAGE CONSTRAINTS<br />

103 TRNCHAIREQ(PLANT,TYPE) CHAIRS BALANCE FOR A PLANT<br />

104 TRNTABLEEQ(PLANT,TYPE) TABLES BALANCE FOR A PLANT;<br />

105<br />

106 OBJT.. NETINCOME =E=<br />

107 SUM((TYPE, PRODUCT, PLANT)$ACTIVITY(PRODUCT,PLANT),<br />

108 PRICE(PRODUCT,TYPE) * SELL(PLANT,PRODUCT,TYPE))<br />

109 - SUM((PLANT,TYPE)$ACTIVITY("TABLES",PLANT),<br />

110 MAKETABLE(PLANT,TYPE)*TABLECOST(TYPE))<br />

111 - SUM((PLANT,TYPE,METHOD)$ACTIVITY("CHAIRS",PLANT),<br />

112 MAKECHAIR(PLANT,TYPE,METHOD)* CHAIRCOST(METHOD,TYPE))<br />

113 - SUM((PLANT,TYPE,SUBPRODUCT)$TRANSCOST(SUBPRODUCT,PLANT,TYPE),<br />

114 TRANSCOST(SUBPRODUCT,PLANT,TYPE) * TRNSPORT(PLANT,SUBPRODUCT, TYPE));<br />

115<br />

116 RESOUREQ(PLANT,RESOURCE)..<br />

117 SUM((TYPE,METHOD)$ACTIVITY("CHAIRS",PLANT), TB1(RESOURCE, TYPE,METHOD)<br />

118 * MAKECHAIR(PLANT,TYPE,METHOD)) + SUM(TYPE$TB2(RESOURCE, TYPE),<br />

119 TB2(RESOURCE,TYPE) * MAKETABLE(PLANT,TYPE))<br />

120 =L= RESORAVAIL(RESOURCE,PLANT) ;<br />

121<br />

122 LINKTABLE(TYPE)..<br />

123 SUM(PRODUCT$ACTIVITY(PRODUCT,"PLANT1"), SELL("PLANT1", PRODUCT,TYPE))<br />

124 =L= MAKETABLE("PLANT1",TYPE) +<br />

125 SUM(PLANT$TRANSCOST("TABLES",PLANT,TYPE),<br />

126 TRNSPORT(PLANT,"TABLES",TYPE));<br />

127<br />

128 LINKCHAIR(TYPE)..<br />

129 SELL("PLANT1","DINSETS",TYPE) * SETCHAIR(TYPE)<br />

130 =L= SUM(PLANT$TRANSCOST("CHAIRS",PLANT,TYPE),<br />

131 TRNSPORT(PLANT,"CHAIRS",TYPE));<br />

132<br />

133 TRNCHAIREQ(PLANT,TYPE)..<br />

134 (TRNSPORT(PLANT,"CHAIRS",TYPE) + SELL(PLANT,"CHAIRS",TYPE))<br />

135 $TRANSCOST("CHAIRS",PLANT,TYPE)<br />

136 =L= SUM(METHOD$ACTIVITY("CHAIRS",PLANT),<br />

137 MAKECHAIR(PLANT,TYPE,METHOD));<br />

138<br />

139 TRNTABLEEQ(PLANT,TYPE)..<br />

140 (TRNSPORT(PLANT,"TABLES",TYPE) + SELL(PLANT,"TABLES",TYPE)<br />

141 - MAKETABLE(PLANT,TYPE))$TRANSCOST("TABLES",PLANT,TYPE)<br />

142 =L= 0 ;<br />

143<br />

144 MODEL Furn /ALL/;<br />

145<br />

146 * SECTION D SOLVE THE PROBLEM<br />

147 option lp=gamsbas<br />

148 SOLVE Furn USING LP MAXIMIZING NETINCOME;<br />



Table 2. Basis File<br />

OBJT.m = 1.00000000000 ;<br />

RESOUREQ.m ("PLANT1","LABOR") = 44.0000000000 ;<br />

RESOUREQ.m ("PLANT2","SMLLATHE") = 47.7696526508 ;<br />

RESOUREQ.m ("PLANT2","LRGLATHE") = 38.8299817185 ;<br />

RESOUREQ.m ("PLANT2","LABOR") = 19.3692870201 ;<br />

$ OFFLISTING;<br />

RESOUREQ.m ("PLANT3","SMLLATHE") = 18.4975609756 ;<br />

RESOUREQ.m ("PLANT3","LRGLATHE") = 12.1853658537 ;<br />

RESOUREQ.m ("PLANT3","CARVER") = 35.2731707317 ;<br />

RESOUREQ.m ("PLANT3","LABOR") = 40.0000000000 ;<br />

LINKTABLE.m ("FUNCTIONAL") = 212.000000000 ;<br />

LINKTABLE.m ("FANCY") = 320.000000000 ;<br />

LINKCHAIR.m ("FUNCTIONAL") = 97.0000000000 ;<br />

LINKCHAIR.m ("FANCY") = 130.000000000 ;<br />

TRNCHAIREQ.m ("PLANT2","FUNCTIONAL") = 92.0000000000 ;<br />

TRNCHAIREQ.m ("PLANT2","FANCY") = 125.000000000 ;<br />

TRNCHAIREQ.m ("PLANT3","FUNCTIONAL") = 90.0000000000 ;<br />

TRNCHAIREQ.m ("PLANT3","FANCY") = 123.000000000 ;<br />

TRNTABLEEQ.m ("PLANT3","FUNCTIONAL") = 200.000000000 ;<br />

TRNTABLEEQ.m ("PLANT3","FANCY") = 300.000000000 ;<br />

MAKECHAIR.l ("PLANT2","FUNCTIONAL","NORMAL") = 62.2333942718 ;<br />

MAKECHAIR.m ("PLANT2","FUNCTIONAL","MAXSML") = -14.2042961609 ;<br />

MAKECHAIR.m ("PLANT2","FUNCTIONAL","MAXLRG") = -5.83912248629 ;<br />

MAKECHAIR.l ("PLANT2","FANCY","NORMAL") = 73.0195003047 ;<br />

MAKECHAIR.m ("PLANT2","FANCY","MAXSML") = -9.44021937843 ;<br />

MAKECHAIR.l ("PLANT2","FANCY","MAXLRG") = 5.17976843388 ;<br />

MAKECHAIR.l ("PLANT3","FUNCTIONAL","NORMAL") = 35.3658536585 ;<br />

MAKECHAIR.m ("PLANT3","FUNCTIONAL","MAXSML") = -8.59317073171 ;<br />

MAKECHAIR.m ("PLANT3","FUNCTIONAL","MAXLRG") = -4.14975609756 ;<br />

MAKECHAIR.l ("PLANT3","FANCY","NORMAL") = 76.8292682927 ;<br />

MAKECHAIR.m ("PLANT3","FANCY","MAXSML") = -5.87463414634 ;<br />

MAKECHAIR.l ("PLANT3","FANCY","MAXLRG") = 19.0243902439 ;<br />

MAKETABLE.l ("PLANT1","FUNCTIONAL") = 24.3998119826 ;<br />

MAKETABLE.l ("PLANT1","FANCY") = 20.3601128105 ;<br />

MAKETABLE.m ("PLANT2","FUNCTIONAL") = -58.1078610603 ;<br />

MAKETABLE.m ("PLANT2","FANCY") = -96.8464351005 ;<br />

MAKETABLE.l ("PLANT3","FANCY") = 19.4380487805 ;<br />

MAKETABLE.m ("PLANT3", "FUNCTIONAL") = EPS;<br />

TRNSPORT.l ("PLANT2","CHAIRS","FUNCTIONAL") = 62.2333942718 ;<br />

TRNSPORT.l ("PLANT2","CHAIRS","FANCY") = 78.1992687386 ;<br />

TRNSPORT.m ("PLANT3","TABLES","FUNCTIONAL") = -8.00000000000 ;<br />

TRNSPORT.l ("PLANT3","TABLES","FANCY") = 8.64870840207 ;<br />

TRNSPORT.l ("PLANT3","CHAIRS","FUNCTIONAL") = 35.3658536585 ;<br />

TRNSPORT.l ("PLANT3","CHAIRS","FANCY") = 95.8536585366 ;<br />

SELL.m ("PLANT1","TABLES","FUNCTIONAL") = -12.0000000000 ;<br />

SELL.m ("PLANT1","TABLES","FANCY") = -20.0000000000 ;<br />

SELL.l ("PLANT1","DINSETS","FUNCTIONAL") = 24.3998119826 ;<br />

SELL.l ("PLANT1","DINSETS","FANCY") = 29.0088212125 ;<br />

SELL.m ("PLANT2","CHAIRS","FUNCTIONAL") = -10.00000000000 ;<br />

SELL.m ("PLANT2","CHAIRS","FANCY") = -20.0000000000 ;<br />

SELL.l ("PLANT3","TABLES","FANCY") = 10.7893403784 ;<br />

SELL.m ("PLANT3","CHAIRS","FUNCTIONAL") = -8.00000000000 ;<br />

SELL.m ("PLANT3","CHAIRS","FANCY") = -18.0000000000 ;<br />

NETINCOME.l = 36206.8788960 ;<br />



Table 3. Example with Basis File Included<br />

16 * SECTION A SET DEFINITION<br />

17<br />

18 SET PRODUCT TABLES CHAIRSSETS /TABLES, CHAIRS, DINSETS/<br />

19 TYPE TYPES OF PRODUCT /FUNCTIONAL ,FANCY/<br />

20 RESOURCE TYPES OF RESOURCES /SMLLATHE,LRGLATHE,CARVER,LABOR, TOP/<br />

21 METHOD PRODUCTION METHODS /NORMAL,MAXSML,MAXLRG/<br />

22 PLANT DIFFERENT PLANTS /PLANT1, PLANT2, PLANT3/<br />

23 SUBPRODUCT(PRODUCT) /TABLES, CHAIRS/;<br />

24<br />

25 * SECTION B DATA DEFINITION<br />

26<br />

27 PARAMETER SETCHAIR(TYPE) CHAIRS CONTAINED IN EACH SET<br />

28 / FUNCTIONAL 4, FANCY 6/<br />

29 TABLECOST(TYPE) TABLECOST /FUNCTIONAL 80,FANCY 100/;<br />

30<br />

31 TABLE CHAIRCOST(METHOD,TYPE) CHAIR COST FOR DIFFERENT METHOD<br />

32 FUNCTIONAL FANCY<br />

33 NORMAL 15 25<br />

34 MAXSML 16 25.7<br />

35 MAXLRG 16.5 26.6 ;<br />

36<br />

37 TABLE TB1(RESOURCE,TYPE,METHOD) USE OF RESOURCES IN CHAIR PRODUCTION<br />

38<br />

39 FUNCTIONAL.NORMAL FUNCTIONAL.MAXSML FUNCTIONAL.MAXLRG<br />

40 SMLLATHE 0.8 1.30 0.20<br />

41 LRGLATHE 0.5 0.20 1.30<br />

42 CARVER 0.4 0.40 0.40<br />

43 LABOR 1.0 1.05 1.10<br />

44 +<br />

45<br />

46 FANCY.NORMAL FANCY.MAXSML FANCY.MAXLRG<br />

47 SMLLATHE 1.2 1.7 0.50<br />

48 LRGLATHE 0.7 0.30 1.50<br />

49 CARVER 1.0 1.00 1.00<br />

50 LABOR 0.8 0.82 0.84;<br />

51<br />

52 TABLE TB2(RESOURCE,TYPE) USE OF RESOURCES IN TABLE PRODUCTION<br />

53<br />

54 FUNCTIONAL FANCY<br />

55 LABOR 3 5<br />

56 TOP 1 1;<br />

57<br />

58 TABLE TRANSCOST(SUBPRODUCT, PLANT,TYPE) TRANSPORT COST TO PLANT1<br />

60 PLANT1.FUNCTIONAL PLANT2.FUNCTIONAL PLANT3.FUNCTIONAL<br />

61 CHAIRS 5 7<br />

62 TABLES 20<br />

63 +<br />

64 PLANT1.FANCY PLANT2.FANCY PLANT3.FANCY<br />

65 CHAIRS 5 7<br />

66 TABLES 20;<br />

67<br />

68 TABLE PRICE(PRODUCT,TYPE) PRICE OF CHAIRS<br />

69 FUNCTIONAL FANCY<br />

70 CHAIRS 82 105<br />

71 TABLES 200 300<br />

72 DINSETS 600 1100;<br />

73<br />

74 TABLE RESORAVAIL(RESOURCE,PLANT) RESOURCES AVAILABLE<br />

75 PLANT1 PLANT2 PLANT3<br />

76 TOP 50 40<br />

77 SMLLATHE 140 130<br />

78 LRGLATHE 90 100<br />

79 CARVER 120 110<br />

80 LABOR 175 125 210;<br />

81<br />

82 TABLE ACTIVITY(PRODUCT,PLANT) TELLS IF A PLANT SELLS A PRODUCT<br />

83 PLANT1 PLANT2 PLANT3<br />



Table 3. Example with Basis File Included (Continued)<br />

84 TABLES 1 1<br />

85 CHAIRS 1 1<br />

86 DINSETS 1 ;<br />

87<br />

88 * SECTION C MODEL DEFINITION<br />

89<br />

90 POSITIVE VARIABLES<br />

91 MAKECHAIR(PLANT, TYPE,METHOD) NUMBER OF CHAIRS MADE<br />

92 MAKETABLE(PLANT, TYPE) NUMBER OF TABLES MADE<br />

93 TRNSPORT(PLANT,SUBPRODUCT,TYPE) NUMBER OF ITEMS TRANSPORTED<br />

94 SELL(PLANT,PRODUCT,TYPE) NUMBER OF ITEMS SOLD;<br />

95<br />

96 VARIABLES<br />

97 NETINCOME NET REVENUE (PROFIT);<br />

98 EQUATIONS<br />

99 OBJT OBJECTIVE FUNCTION ( NET REVENUE )<br />

100 RESOUREQ(PLANT,RESOURCE)<br />

101 LINKTABLE(TYPE) OVERALL FIRM TABLE LINKAGE CONSTRAINTS<br />

102 LINKCHAIR(TYPE) OVERALL FIRM CHAIR LINKAGE CONSTRAINTS<br />

103 TRNCHAIREQ(PLANT,TYPE) CHAIRS BALANCE FOR A PLANT<br />

104 TRNTABLEEQ(PLANT,TYPE) TABLES BALANCE FOR A PLANT;<br />

105<br />

106 OBJT.. NETINCOME =E=<br />

107 SUM((TYPE, PRODUCT, PLANT)$ACTIVITY(PRODUCT,PLANT),<br />

108 PRICE(PRODUCT,TYPE) * SELL(PLANT,PRODUCT,TYPE))<br />

109 - SUM((PLANT,TYPE)$ACTIVITY("TABLES",PLANT),<br />

110 MAKETABLE(PLANT,TYPE)*TABLECOST(TYPE))<br />

111 - SUM((PLANT,TYPE,METHOD)$ACTIVITY("CHAIRS",PLANT),<br />

112 MAKECHAIR(PLANT,TYPE,METHOD)* CHAIRCOST(METHOD,TYPE))<br />

113 - SUM((PLANT,TYPE,SUBPRODUCT)$TRANSCOST(SUBPRODUCT,PLANT,TYPE),<br />

114 TRANSCOST(SUBPRODUCT,PLANT,TYPE) * TRNSPORT(PLANT,SUBPRODUCT, TYPE));<br />

115<br />

116 RESOUREQ(PLANT,RESOURCE)..<br />

117 SUM((TYPE,METHOD)$ACTIVITY("CHAIRS",PLANT), TB1(RESOURCE,TYPE,METHOD)<br />

118 * MAKECHAIR(PLANT,TYPE,METHOD)) + SUM(TYPE$TB2(RESOURCE, TYPE),<br />

119 TB2(RESOURCE,TYPE) * MAKETABLE(PLANT,TYPE))<br />

120 =L= RESORAVAIL(RESOURCE,PLANT) ;<br />

121<br />

122 LINKTABLE(TYPE)..<br />

123 SUM(PRODUCT$ACTIVITY(PRODUCT,"PLANT1"), SELL("PLANT1",PRODUCT,TYPE))<br />

124 =L= MAKETABLE("PLANT1",TYPE) +<br />

125 SUM(PLANT$TRANSCOST("TABLES",PLANT,TYPE),<br />

126 TRNSPORT(PLANT,"TABLES",TYPE));<br />

127<br />

128 LINKCHAIR(TYPE)..<br />

129 SELL("PLANT1","DINSETS",TYPE) * SETCHAIR(TYPE)<br />

130 =L= SUM(PLANT$TRANSCOST("CHAIRS",PLANT,TYPE),<br />

131 TRNSPORT(PLANT,"CHAIRS",TYPE));<br />

132<br />

133 TRNCHAIREQ(PLANT,TYPE)..<br />

134 (TRNSPORT(PLANT,"CHAIRS",TYPE) + SELL(PLANT,"CHAIRS",TYPE))<br />

135 $TRANSCOST("CHAIRS",PLANT,TYPE)<br />

136 =L= SUM(METHOD$ACTIVITY("CHAIRS",PLANT),<br />

137 MAKECHAIR(PLANT,TYPE,METHOD));<br />

138<br />

139 TRNTABLEEQ(PLANT,TYPE)..<br />

140 (TRNSPORT(PLANT,"TABLES",TYPE) + SELL(PLANT,"TABLES",TYPE)<br />

141 - MAKETABLE(PLANT,TYPE))$TRANSCOST("TABLES",PLANT,TYPE)<br />

142 =L= 0 ;<br />

143<br />

144 MODEL Furn /ALL/;<br />

145<br />

146 * SECTION D SOLVE THE PROBLEM<br />

147 * option lp=gamsbas<br />

148 $INCLUDE "blockdia.bas"<br />

149 SOLVE Furn USING LP MAXIMIZING NETINCOME;<br />



Abridged GAMSCHK USER DOCUMENTATION<br />

Version 1.1<br />

A System for Examining the Structure and Solution<br />

Properties of Linear Programming Problems<br />

Solved using GAMS<br />

by<br />

Bruce A. McCarl<br />

Professor<br />

Department of Agricultural Economics<br />

Texas A&M University<br />

(409) 845-7504 (fax)<br />

mccarl@tamu.edu<br />

© Bruce A. McCarl<br />

June 25, 1998


GAMSCHK USER DOCUMENTATION<br />

General Notes on Package Usage<br />

Selecting a Procedure and Providing Input -- the *.GCK File<br />

The *.GCK file: General Notes on Item Selection<br />

Procedure Output<br />

Nonlinear Terms<br />

Entering Comments in the *.GCK File<br />

Controlling Page Width in the *.GCK File<br />

Running Multiple Procedures<br />

Use of the Procedures<br />

DISPLAYCR<br />

MATCHIT<br />

ANALYSIS<br />

BLOCKLIST<br />

BLOCKPIC<br />

PICTURE<br />

POSTOPT<br />

ADVISORY<br />

NONOPT<br />

Options File<br />

Solver Choice Options<br />

When Should I Use SOLVE or NOSOLVE<br />

Control of Number of Variable and Row Selections Allowed<br />

Scaling<br />

NONOPT Filters<br />

Example Options File<br />

Solver Options File<br />

Known Bugs<br />

References<br />

Appendix A: Reserved Names<br />

Appendix B: GAMSCHK One Page Summary


LIST OF TABLES<br />

Table 1. Conditions under which a modeler should be advised of potential difficulty for<br />

equations without nonlinear terms.<br />

Table 2. Conditions under which a modeler should be warned about variables in a<br />

maximization problem.<br />

Table 3. Conditions When Model Elements Could be Unbounded or Infeasible<br />

Table 4. Conditions for Potential Infeasibility or Redundancy in Equations Based on<br />

Bounds on Variables


GAMSCHK USER DOCUMENTATION<br />

This document describes procedures designed to aid users who wish to examine<br />

empirical GAMS models for possible flaws. The conceptual basis for many of the routines<br />

herein is supplied in McCarl and Spreen, and McCarl et al.<br />

This package of routines is designed for use on any GAMS platform, but for now is<br />

implemented on the HP, PC, DEC Alpha, IBM RS6000, and SUN workstations. The<br />

functions of the specific components of GAMSCHK are to:<br />

(a) List coefficients for user selected equations and/or variables using the<br />

DISPLAYCR procedure.<br />

(b) List the characteristics of selected groups of variables and/or equations using<br />

MATCHIT.<br />

(c) List the characteristics of equation and variable blocks using BLOCKLIST.<br />

(d) Examine a GAMS model to see whether any variables and equations contain<br />

specification errors using ANALYSIS.<br />

(e) Generate schematics depicting the characteristics of coefficients by variable<br />

and equation blocks using BLOCKPIC.<br />

(f) Generate a schematic for small GAMS models or portions of larger models<br />

depicting the location of coefficients by sign and magnitude using PICTURE.<br />

(g) Reconstruct the reduced cost of variables and the activity within equations<br />

after a model solution using POSTOPT.<br />

(h) Help resolve problems with unbounded or infeasible models using NONOPT<br />

and ADVISORY.<br />

General Notes on Package Usage<br />

GAMSCHK must replace a solver. This is done using a GAMS option statement of<br />

the form:<br />

OPTION LP=GAMSCHK;<br />

or<br />

OPTION NLP=GAMSCHK;<br />

or<br />

OPTION MIP=GAMSCHK;<br />


which replaces either the NLP, LP, or MIP solver with GAMSCHK.¹ In turn, the user will<br />

invoke the solver using the statement:<br />

SOLVE MODELNAME USING LP MINIMIZING OBJNAME;<br />

where MODELNAME is the name used in the GAMS MODEL statement; OBJNAME is<br />

the objective function name for the model; and the problem type named in the SOLVE<br />

statement (LP, NLP, or MIP) must be the type whose solver GAMSCHK has<br />

replaced.<br />

The following are examples of GAMS sequences which can be added to the GAMS<br />

file:<br />

OPTION NLP=GAMSCHK;<br />

SOLVE TRANSPORT USING NLP MINIMIZING Z;<br />

or<br />

OPTION LP=GAMSCHK;<br />

SOLVE FEED USING LP MINIMIZING COST;<br />

or<br />

OPTION MIP=GAMSCHK;<br />

SOLVE RESOURCE USING MIP MAXIMIZING PROFIT;<br />

Selecting a Procedure and Providing Input -- the *.GCK File<br />

GAMSCHK requires that the user indicate which procedures are to be employed.<br />

This is specified through the use of the *.GCK file, where the * refers to the filename from<br />

the GAMS execution instruction.² The general form of that file is:<br />

FIRST PROCEDURE NAME<br />

ITEM SELECTION INPUT<br />

SECOND PROCEDURE NAME<br />

ITEM SELECTION INPUT<br />

Spaces and capitalization are ignored in this input. For example, a *.GCK file could look like<br />

DISPLAYCR<br />

¹ In all cases, users will be able to replace the LP solver. Replacement of the other solvers<br />

depends on the solver licenses owned by the user.<br />

² Thus, if the GAMS instructions are in the file called MYMODEL, and GAMS is invoked<br />

using the DOS command GAMS MYMODEL, then the GCK file would be called<br />

MYMODEL.GCK. If the GAMS instructions are in a filename with a period in it, then the<br />

name up to the period will be used; i.e., the GCK file associated with MYMODEL.IT<br />

would be MYMODEL.GCK.<br />



variables<br />

SELL(*,*,FANCY)<br />

maketable<br />

Invariables<br />

transport(plant2,*,fancy)<br />

Equations<br />

objT<br />

notthere<br />

inequations<br />

resourceq(plant1)<br />

PICTURE<br />

The first procedure name in this case is DISPLAYCR and the following 10 lines indicate the<br />

items to be selected. Then, we also request PICTURE. Selection entries are treated using<br />

several assumptions. In particular:<br />

1) If the *.GCK file is empty then it is assumed that the BLOCKPIC procedure is<br />

selected.<br />

2) Spaces may be freely used in the GCK input file.<br />

3) Upper, lower, or mixed case input is accepted.<br />

4) GAMSCHK recognizes certain words. These words are listed in Appendix A<br />

and cannot be used as variable or equation names.<br />

The *.GCK file: General Notes on Item Selection<br />

Some of the procedures permit selection of variables, equations or functions.<br />

<strong>Specific</strong>ally, the DISPLAYCR, PICTURE, POSTOPT, and MATCHIT procedures accept<br />

input identifying the variables and equations to be utilized. Also NONOPT accepts limited<br />

input controlling its function. General observations about the selection requests are<br />

1) Variables can be chosen by entering the word VARIABLE or VARIABLES<br />

possibly with a modifier, followed by variable selection statements.<br />

2) Variables can also be selected using the INEQUATION or INEQUATIONS<br />

syntax followed by names of equations. Use of this syntax results in selection<br />

of variables with coefficients in the named equations.<br />

3) Equations are selected by entering the keyword EQUATION or EQUATIONS,<br />

possibly with a modifier, followed by equation selection statements.<br />

4) Equations can also be selected using the INVARIABLE or INVARIABLES<br />

syntax followed by names of variables. Use of this syntax results in selection<br />

of equations in which the named variables have coefficients.<br />



5) Certain item selection modifier keywords can be used depending on<br />

procedure. The INTERSECT keyword works with procedures DISPLAYCR<br />

and POSTOPT. The INEQUATION and INVARIABLE keywords work with<br />

procedures DISPLAYCR, PICTURE and POSTOPT. LISTEQUATION and<br />

LISTVARIABLE keywords work with the MATCHIT procedure.<br />

INSOLUTION, NOTINSOLUTION, BINDING, and NOTBINDING keywords<br />

work with POSTOPT. The keywords VERBOSE and IDENTIFY work with<br />

NONOPT.<br />

6) If variable or equation names do not follow the keyword, then usually all<br />

variables or equations are assumed selected.<br />

When variables or equations are to be selected after an item selection keyword, a<br />

number of input conventions apply. These conventions are:<br />

1) If a variable or equation name is entered without any following parentheses,<br />

then all cases for that variable or equation are selected.<br />

2) The selection entries identify specific elements from among the sets over<br />

which the variables and equations are defined. In specifying these elements<br />

one can use various wild card entries as discussed below or an element name.<br />

Note GAMS set or subset names cannot be used. Set membership<br />

information is not available to the GAMSCHK routines.<br />

3) Wild cards can be used to select items. An "*" will select any item. For<br />

example, "B*" will select anything starting with a B. "A?B" will select<br />

anything beginning with A, ending with B, with one intervening alphanumeric<br />

character.<br />

4) When individual elements are specified, you need not enclose them in quotes<br />

(").<br />

5) Quotes must be specified to include set item names with spaces, and special<br />

characters. In that case wild cards do not work and all input up to the next<br />

quote is simply copied.<br />

6) When the selected item has more dimensions than specified, then all later<br />

dimensions are handled as if a wild card were specified. For example, when a<br />

variable X is defined with reference to 4 sets in the GAMS instructions, but<br />

only 3 parameters are specified in the GAMSCHK input, then the request is<br />

handled as if all elements of the 4th are desired.<br />

7) When the selected item has fewer dimensions in GAMS than in the item<br />



selection input, then all additional dimensions are ignored. Thus, when a<br />

variable X is defined with reference to 3 sets in GAMS, but 4 parameters are<br />

specified in the item selection file, then the 4th specification is ignored.<br />

8) Multiple selection statements can appear on successive lines of the *.GCK file.<br />

Output is ordered according to the way items are found in the GAMS file<br />

which is determined by the ordering of variables, equations, and set elements<br />

in the original GAMS input.<br />

9) Error messages will be generated when an entry cannot be matched to a<br />

GAMS element.<br />

10) Examples include<br />

Procedure Output<br />

X(*,CLEVELAND) which indicates that X will be selected for any<br />

element of the first set where the element in the<br />

second set equals CLEVELAND<br />

X(SEATTLE) when X is two dimensional selects all cases<br />

where the first set element is SEATTLE<br />

X(SEATTLE,CHICAGO,Z) when X is two dimensional selects the case<br />

where the first set element equals SEATTLE,<br />

and the second element equals CHICAGO. The<br />

third is ignored.<br />

X all X's will be selected<br />

X(S*, C?O, Z) when X is three dimensional selects all<br />
X's with first element starting with S, second<br />
element beginning with C and ending with O,<br />
and third element Z.<br />

* all variables or equations will be selected<br />

{empty selection set} all variables or equations will be selected<br />

In all cases the output generated by the procedure will be written to the *.LST file<br />

associated with the GAMS call. Thus, if the file is called MODEL with the *.GCK file<br />

(MODEL.GCK), then all output will be on MODEL.LST.<br />



Nonlinear Terms<br />

GAMS models examined with GAMSCHK may involve nonlinear terms. In such<br />

cases, GAMSCHK uses the value of the nonlinear term sent forth from GAMS, which is a<br />
marginal, not a total, value. GAMS develops this value based on the current level<br />

value of the variable. This will either be: a) the starting point selected by GAMS, if the<br />

model has not been solved, or b) the current solution value, if the model has been solved.<br />

The most accurate portrayals of the coefficients will be generated after the model has been<br />

solved through a GAMS SOLVE command before invoking GAMSCHK. Some cases may<br />

require a solution and/or the specification of a good starting point before using GAMSCHK.<br />

Also, nonlinear terms potentially cause misleading coefficients as those values are local<br />

marginal, not global, values determined by the current levels of the variables. Nonlinear<br />

terms are marked with *** in the DISPLAYCR, POSTOPT, and NONOPT output.<br />
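As a hedged illustration of this point (names and values hypothetical), suppose an equation contains the nonlinear term X*Y:

```text
* hypothetical GAMS fragment
BAL..  X*Y + 2*Z =L= 10 ;
* With levels X.L = 2 and Y.L = 3, the coefficient displayed
* under X is the marginal d(X*Y)/dX = Y.L = 3 (marked ***),
* not the total contribution X.L*Y.L = 6.
```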

Entering Comments in the *.GCK File<br />

The *.GCK file has been programmed so that users can enter comments. These<br />

comments can take one of two forms. Comments that begin with a hash mark are copied to<br />

the output when the program runs. Comments which begin with a question mark are simply<br />

overlooked. Thus, one can temporarily deactivate GAMSCHK selection statements by<br />
prefixing them with question marks. If multiple procedures are being run, or if some sort<br />
of labeling is desired in the output, then hash marks can be entered.<br />
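For example, a .GCK file using both comment forms might look as follows (procedure and item names hypothetical):

```text
# This comment is copied to the output when the program runs
PICTURE
VARIABLE
X
? The next two lines are temporarily deactivated
? EQUATION
? SUPPLY
```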

Controlling Page Width in the *.GCK File<br />

When running multiple procedures, in particular the pictures with other procedures, it<br />

is often desirable to have some procedures run with wide page widths, but the rest with a<br />

narrower page width. The GCK file provides the option to narrow the page width using a<br />

PW= command. In particular, what one can do is run GAMS with a large page width, i.e.<br />

run GAMS BLOCK pw=200, then insert in the GCK file instructions which narrow that page<br />

width for selected procedures. Users should note that the page width can never be made any<br />
wider than the page width in effect when GAMS was run. Information in excess of the<br />
page width will be ignored. Thus, if the model is run under the default status, which has a<br />
page width of 75 characters, then GAMSCHK will reduce any larger PW= request down to<br />
that maximum. Consequently, the PW= command can only be used to narrow the page<br />
width, not increase it.<br />
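As a sketch, assuming GAMS was started with a wide page (say GAMS MODEL PW=150), a .GCK file along the following lines would run PICTURE at that width and then narrow the width for a later procedure (procedure choices hypothetical):

```text
PICTURE
PW=80
BLOCKLIST
```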

Running Multiple Procedures<br />

GAMSCHK can run multiple procedures during one job. This is done by simply<br />

stacking the sequence of the commands in the .GCK file.<br />
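For instance, a .GCK file stacking three procedures (selection names hypothetical) could read:

```text
BLOCKPIC
ANALYSIS
DISPLAYCR
VARIABLE
X
```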



Use of the Procedures<br />

The following section describes the procedures available in GAMSCHK and their<br />

input requirements.<br />

DISPLAYCR<br />

Brief Purpose: DISPLAYCR displays all coefficients from the empirical model<br />

for a set of user selected equations and variables. All nonzero<br />

coefficients under each selected variable or in each selected<br />

equation are displayed with the associated variable or equation<br />

name and coefficient value. The selection entries may refer to<br />

all terms in equations/under variables, or only those coefficients<br />

at the intersection of the selected variables and equations.<br />

Usage Notes: This option mirrors the GAMS LIMCOL and LIMROW<br />

options, but allows the user to select the specific items to be<br />

displayed. Partial displays within a variable or equation are also<br />

allowed using INTERSECT. Use of VARIABLE and<br />

EQUATION keywords followed by selection statements allows<br />

one to select variables and equations. Use of the<br />

INVARIABLE command allows users to select the equations<br />

which are associated with a particular variable. For example, if<br />

one is having trouble with a particular variable and wants to<br />

look at competition in the equations in which it appears, then<br />

selecting the variable under the INVARIABLE command will<br />

display the complete contents of all the equations in which the<br />

selected variables have coefficients. Similarly, the<br />

INEQUATION command will display the complete contents of<br />

all variables which fall in a particular equation. Nonlinear<br />

terms are marked with ***.<br />

When the keyword INTERSECT is found then only the<br />

coefficients at the intersection of the specified equations and<br />

variables are selected. Use of INTERSECT with the<br />

INVARIABLE syntax results in the named variables and the<br />

equations in which they fall being selected. Similarly, use of<br />

INTERSECT with the INEQUATION syntax results in selection<br />

of the named equations and the variables which fall in those<br />

equations.<br />

Note that when GAMS internal scaling features are employed<br />



the default option is that the scaled output is displayed. This<br />

can be altered using the DESCALE feature of the solver<br />

options file.<br />
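As a sketch (variable and equation names hypothetical), a .GCK file requesting only the intersecting coefficients could read:

```text
DISPLAYCR
INTERSECT
VARIABLE
X(SEATTLE,*)
EQUATION
DEMAND
```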

Input File : The keyword DISPLAYCR is entered followed by optional<br />

lines of item selection input identifying the variables and<br />

equations to be displayed. This file can contain the keywords<br />

VARIABLE, INVARIABLE, EQUATION, and INEQUATION,<br />

with each followed by a specification of the items to be<br />

selected using the procedure input specification conventions<br />

that were described above. The keyword INTERSECT can<br />

also be used. Several special cases are relevant:<br />

a) If none of the above keywords are found after<br />

DISPLAYCR and another procedure name does not<br />

follow, then the input is assumed to identify variables.<br />

b) If input is found but the VARIABLE or INEQUATION<br />

keyword cannot be found then no variables are<br />

assumed selected.<br />

c) If the VARIABLE keyword is entered, but is followed<br />

by the end of file or an Appendix A reserved word and<br />

INEQUATION does not appear, then all variables are<br />

assumed selected.<br />

d) If the EQUATION or INVARIABLE keyword cannot be<br />

found, then no equations are assumed selected.<br />

e) If the EQUATION keyword is entered, but is followed<br />

by the end of the file or a reserved word and the<br />

INVARIABLE command does not occur, then all<br />

equations are assumed selected.<br />

f) The keyword INVARIABLE is allowed. It should be<br />

followed by variable selection statements. In turn,<br />

DISPLAYCR selects all equations which have nonzero<br />

entries under the INVARIABLE selections.<br />

g) The keyword INEQUATION may be used. It should be<br />

followed by equation selection statements. In turn,<br />

DISPLAYCR selects all variables which have nonzero<br />

entries in the INEQUATION selections.<br />



h) The keyword INTERSECT causes only coefficients at<br />
the intersection of the specified equations and variables<br />
to be displayed. This occurs for all specifications in this<br />
run of DISPLAYCR. One should use DISPLAYCR<br />
again if some intersecting and some non-intersecting<br />
displays are desired.<br />
i) When INTERSECT appears along with INVARIABLE,<br />
the named variable is selected along with all the<br />
equations in which it falls. Similarly, when<br />
INTERSECT and INEQUATION appear then all the<br />
named equations and the variables appearing in them<br />
are selected.<br />
MATCHIT<br />

Brief Purpose: MATCHIT retrieves the names and characteristics of selected<br />

variables and equations. The characteristics reported tell whether the<br />

items are nonlinear as well as reporting scaling characteristics and<br />

counts of the coefficients. MATCHIT will summarize the items which<br />

match a request or list all the items individually.<br />

Usage Notes: The input to MATCHIT can include the keywords VARIABLE<br />

and EQUATION along with those keywords with the prefix<br />

LIST attached. When the LIST prefix is not used, the<br />

procedure summarizes the characteristics of all items which<br />

match the item requests, counting the number of matching<br />
items, the number of those items which are nonlinear, the total<br />
number of coefficients under or in those items, and the number of<br />
positive, negative, and nonlinear coefficients that fall under or in<br />
those items. This does not list the names of the individual items<br />

which match. If the LIST prefix is used (entering<br />

LISTVARIABLE or LISTEQUATION) then the individual<br />

matching items are printed in the order in which they are<br />

encountered. For each matching item the information tells<br />

whether it is nonlinear, how many total coefficients it has, the<br />

count of positive, negative, and nonlinear coefficients falling<br />

under it, and the minimum and maximum absolute values of<br />

coefficients under it (excluding the objective function<br />

coefficient).<br />

Note that when GAMS internal scaling features are employed<br />

then by default scaled output is displayed. This can be altered<br />

using the DESCALE feature of the solver options file.<br />
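A hypothetical MATCHIT request (item names assumed) that summarizes one variable block and individually lists matching equations might read:

```text
MATCHIT
VARIABLE
X
LISTEQUATION
DEMAND*
```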



Input File : This file contains the keyword MATCHIT, followed by<br />

optional item selection input data. The optional input identifies<br />

the variables and equations to be displayed. This input can<br />

contain the keywords VARIABLE or LISTVARIABLE followed<br />

by a specification of the variables to be selected using the<br />

procedure input specification conventions that were described<br />

above. This can be followed by the keyword EQUATION or<br />

LISTEQUATION and the specified entries.<br />

Several special cases are relevant:<br />

a) If the procedure name is not followed by any<br />

selection input, then a count of all variables and<br />

equations appears.<br />

b) If the input is found, but the input does not<br />

begin with VARIABLE, EQUATION,<br />

LISTVARIABLE, or LISTEQUATION<br />

keywords, then the input is assumed to contain<br />

variable names.<br />

c) If the VARIABLE keyword is entered, but is not<br />

followed by variable selection statements, and<br />

LISTVARIABLE does not appear, then all<br />

variables are assumed selected.<br />

d) If the EQUATION or LISTEQUATION<br />

keyword cannot be found, then no equations are<br />

assumed selected.<br />

e) If the EQUATION keyword is entered, but is<br />

not followed by equation selection statements<br />

or a LISTEQUATION entry, then all equations<br />

are assumed selected.<br />

f) The keyword LISTVARIABLE is allowed. It<br />

should be followed by variable selection<br />

statements. In turn, MATCHIT lists all<br />

variables which fall under the request.<br />

g) The keyword LISTEQUATION may also be<br />

used. It should be followed by equation<br />

selection statements. In turn, MATCHIT lists<br />

all equations which fall under the request.<br />



h) If none of the above keywords are found, the<br />
input is assumed to identify variables.<br />
ANALYSIS<br />
Brief Purpose: Analyzes the structure of all variables and equations. Information is<br />
given on errors involving obvious model misspecifications causing<br />
redundancy, zero variable values, infeasibility, unboundedness, or<br />
obvious constraint relaxations in linear programs. The checks are<br />
those identified in Tables 1, 2 and 3.<br />
Usage Notes: The analysis tests given in Tables 1 and 2 are utilized to<br />
determine whether individual variables or equations in the model<br />
possess obvious specification errors. One test, for example,<br />
considers whether or not in a maximization problem a variable<br />
appears which has a positive return in the objective function,<br />
but no coefficients in the constraints, indicating an obviously<br />
unbounded model. Similarly, information is provided on<br />
whether certain equations can never be satisfied. For example,<br />
tests examine whether an equality equation appears with a<br />
negative right hand side and all positives on the left hand side.<br />
Tests also examine whether the bounds on variables preclude<br />
equation satisfaction or make equations redundant (Table 3).<br />
In ANALYSIS these tests are applied to each and every<br />
variable and equation. The BLOCKPIC and BLOCKLIST<br />
routines utilize the tests on a block by block basis. Thus, the<br />
messages will be triggered only if every variable or equation in<br />
that block has the same problem. Also, interactions between<br />
variables and equations are not checked, so ANALYSIS only<br />
finds flaws contained in individual variables/equations.<br />
Input File: The keyword ANALYSIS is all that is accepted.<br />
BLOCKLIST<br />
Brief Purpose : The BLOCKLIST procedure displays the number and<br />
characteristics of the items in each GAMS variable and<br />
equation block.<br />

Usage Notes: The characteristic information gives:<br />

1) The variable sign restriction or equation inequality type.<br />

2) The number of variables or equations in this block;<br />



3) The number of variables or equations with at least one<br />
nonlinear term in this block.<br />
4) The number of positive coefficients under the variables<br />
or in the equations.<br />
5) The number of negative coefficients under the variables<br />
or in the equations.<br />
6) The number of nonlinear coefficients under the<br />
variables or in the equations.<br />
7) The largest coefficient in absolute value in this block.<br />
8) The smallest coefficient in absolute value in this block.<br />
Analysis tests are also performed as discussed under the<br />
ANALYSIS procedure.<br />
Note that when GAMS internal scaling features are employed,<br />
the default option is that the scaled output is displayed. This<br />
can be altered using the DESCALE feature of the solver<br />
options file.<br />
Input File: No input other than the procedure name is needed.<br />
BLOCKPIC<br />

Brief Purpose: Generates model schematics and scaling information. The<br />

schematics depict coefficient signs, total and average number<br />

of coefficients within each GAMS equation and variable block.<br />

Usage Notes: These schematics are designed to aid users in identifying flaws<br />

in coefficient placement and sign. The summary information<br />

on problem scaling characteristics is designed to help users in<br />

scaling data. The scaling information is usually reported after<br />

any GAMS scaling (using the variablename.scale and<br />

equationname.scale features) but before solver scaling. (The<br />

user can change whether descaling is done - see the options<br />

file). Analysis tests are done using the procedures in Tables 1<br />

and 2.<br />

Note that when GAMS internal scaling features are employed<br />

the default option is that the scaled output is displayed. This<br />

can be altered using the DESCALE feature of the solver<br />



options file.<br />
Input File : The keyword BLOCKPIC is all that is recognized.<br />
PICTURE<br />

Brief Purpose: Generates a schematic depicting the location, sign and magnitude of<br />

coefficients for selected variables and equations. Users can use this<br />

schematic to help identify flaws in coefficient placement, magnitude, or<br />

sign. Reports are also generated on the number of individual elements<br />

in the pictured portions of each variable and equation.<br />

Usage Notes: This output can be quite large, so PICTURE should only be<br />

used for small models or model components.<br />

Note that when GAMS internal scaling features are employed,<br />

the default option is that the scaled output is displayed. This<br />

can be altered using the DESCALE feature of the solver<br />

options file.<br />
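A sketch of a PICTURE request restricted to a portion of the model (names hypothetical) could read:

```text
PICTURE
VARIABLE
X(S*)
EQUATION
SUPPLY
```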

Input File: Optional input instructions may appear after the PICTURE<br />

keyword. This input selects the variables and equations to be<br />

included. Only coefficients at the intersection of the selected<br />

variables and equations are portrayed. The selected item in the<br />

.GCK file can contain the keywords VARIABLE, or<br />

INVARIABLE followed by a specification of the selected<br />

variables using the procedure input specification conventions<br />

above. This can be followed by the keywords EQUATION or<br />

INEQUATION and the specified entries. Several special cases<br />

are also relevant:<br />

a) If the VARIABLE or INEQUATION keywords cannot<br />

be found, then all variables are assumed selected.<br />

b) If the EQUATION or INVARIABLE keywords cannot<br />

be found, then all equations are assumed selected.<br />
c) If none of the VARIABLE, INVARIABLE,<br />

EQUATION, or INEQUATION keywords are found,<br />

everything is pictured and all other input is ignored.<br />

d) When the INVARIABLE keyword is used, then all<br />

equations in which those variables have coefficients are<br />

selected along with the named variables.<br />



e) When the INEQUATION keyword is used, then all<br />
variables which have coefficients in the named<br />
equations are selected along with the named equations.<br />
POSTOPT<br />

Brief Purpose: Does post optimality computations. In that capacity<br />

POSTOPT either:<br />

a) Reconstructs the reduced cost of variables after<br />

a GAMS model solution. Modelers can use this<br />

information to discover why certain variables<br />

are nonbasic or why certain shadow prices take<br />

on particular values, or<br />

b) Reconstructs the usage and supply across an<br />

equation after a GAMS model solution.<br />

Modelers can use this information to discover<br />

why certain variables or slacks take on<br />

particular values, as well as to find out where<br />

items within equations are produced and/or<br />

used.<br />

Usage Notes: POSTOPT uses essentially the same input conventions as does<br />

DISPLAYCR. Thus, the usage notes in that selection are also<br />

relevant here. In addition:<br />

1) POSTOPT requires that a solution has been obtained.<br />
GAMSCHK will automatically cause a solver to be<br />
invoked unless this is suppressed by the options file;<br />

2) Nonlinear terms may not be accurate in the row sums<br />

as their marginal value, not their total value, is used, but<br />

GAMS will have adjusted the right-hand sides for their<br />

presence; and<br />

3) Attention can be restricted to only certain types of<br />

variables or equations. Variables that are<br />
INSOLUTION (nonzero or with zero marginals) or<br />
NOTINSOLUTION (zero with a nonzero marginal) can<br />
be requested, and BINDING or NONBINDING equations<br />
can be focused on.<br />
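As a sketch (names hypothetical, keyword placement following the conventions described above), a .GCK file budgeting only in-solution variables and row summing only binding equations could read:

```text
POSTOPT
INSOLUTION
VARIABLE
X
BINDING
EQUATION
DEMAND
```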



Note that when GAMS internal scaling features are employed,<br />
the default option is that the unscaled output is displayed. This<br />
can be altered using the DESCALE feature of the solver<br />
options file.<br />
Input File : An optional input file is read in, indicating the specific variables<br />
desired using the conventions explained under DISPLAYCR<br />
above. In addition:<br />
1) One can enter INSOLUTION to restrict attention to<br />
variables which are nonzero or have zero marginals.<br />
2) One can enter NOTINSOLUTION to restrict attention<br />
to zero variables.<br />
3) The above entries restrict attention in all VARIABLE<br />
or INEQUATION selection statements in a POSTOPT<br />
run.<br />
4) One can enter BINDING to only consider equations<br />
with zero slack. Similarly, NONBINDING considers<br />
equations with nonzero slack.<br />
5) The above equation specifications restrict all selections<br />
by all EQUATION or INVARIABLE items in a<br />
POSTOPT run.<br />
ADVISORY<br />

Brief Purpose: To identify variables which could be unbounded or equations and<br />

variable bounds which could cause a model to be infeasible.<br />

Usage Notes: The ADVISORY procedure causes a presolution report on the<br />

set of all: a) variables which could be unbounded and/or b)<br />

equations and variable bounds which could cause infeasibility.<br />

The tests used are summarized in Table 3. This procedure<br />

identifies all variables which would need to be bounded as well<br />

as all constraints which need artificial variables if one wishes to<br />

diagnose problems in a model. The same output is also<br />

generated by NONOPT but the ADVISORY version does not<br />

require a solution.<br />

Input file: Just the word ADVISORY<br />



NONOPT<br />

Brief Purpose: To help diagnose unbounded and infeasible models.<br />

Usage Notes: The NONOPT procedure can be used in either an informative<br />

mode or with models which terminate as unbounded or<br />

infeasible. NONOPT will look through an optimal model<br />

reporting all variables which may be potentially unbounded or<br />

infeasible and all equations which may be infeasible using the<br />

checks explained under the ADVISORY section. Also in an<br />

unbounded model NONOPT can report the names of<br />

unbounded or infeasible variables or equations as well as either<br />

budgeting or row summing them. NONOPT runs after a<br />

solution and causes a solve to occur.<br />

Input File: NONOPT may be followed by optional keywords IDENTIFY<br />

or VERBOSE. The IDENTIFY keyword causes GAMSCHK<br />

to report potential unbounded variables and/or infeasible<br />

equations. VERBOSE causes full budgets and row summing as<br />

done by the POSTOPT procedure on infeasible equations,<br />

and/or variables as well as unbounded variables and/or<br />

equations. Only the last encountered of the VERBOSE or<br />

IDENTIFY keywords will be obeyed. The details on these<br />

options are as follows:<br />

1) If the IDENTIFY keyword is used, then the rules in Table 3 are<br />

applied to the model. IDENTIFY also anticipates that large upper<br />

bounds and/or artificial variables may be present. In an optimal<br />

condition all variable and equation levels that have exponents<br />

greater than the user supplied level filter in the options file (or<br />

6 by default) are identified as items which could be involved<br />

with an unbounded model. Similarly, all variables or equations<br />

with marginals greater in exponent than the user supplied<br />

marginal exponent filter will be identified as items potentially<br />

involved with an infeasible model.<br />

2) When the VERBOSE keyword is read then all variables and<br />

equations which are listed as nonoptimal or infeasible are<br />

treated using the budgeting and row summing aspects of<br />

POSTOPT.<br />

3) When no keyword is found and the model solution is not<br />



optimal then the nonoptimal equations, infeasible equations<br />

and/or nonoptimal variables are automatically listed.<br />
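For example, the two-line .GCK file below would request full budgets and row sums for the nonoptimal, unbounded, or infeasible items found:

```text
NONOPT
VERBOSE
```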

Options File<br />

GAMSCHK accepts an option file controlling solver choice (when needed),<br />
descaling, the size of the nonoptimal filters, and the number of variable and equation block<br />
selection entries allowed. The file is called GAMSCHK.OPT.<br />

Solver Choice Options<br />

GAMSCHK calls for the solution of the problem when the POSTOPT or NONOPT<br />

procedures are used. In doing this, GAMSCHK internally selects the default GAMS solver<br />

for a problem class. Users may override this choice using the solver options file. Users may<br />

also force or suppress the solution process.<br />

There are 9 solver related keywords allowed in the options file. These are as follows:<br />

OPTION Purpose<br />

LP Gives name of solver for LP problems<br />

NLP Gives name of solver for NLP problems<br />

MIP Gives name of solver for MIP problems<br />

DNLP Gives name of solver for DNLP problems<br />

SOLVERNAME Gives name of solver to be used regardless of<br />

problem type<br />

NOSOLVE Suppresses solution of the problem<br />

SOLVE Forces solution of the problem<br />

DESCALE Controls treatment of scaling<br />

OPTFILE Solver options file number<br />

In the first five cases, the option name is followed by the name of one of the licensed solvers.<br />

If the options file is empty, then the default solver will be used. If a solver name is given,<br />

then that solver will be used provided it matches the name of a solver GAMS recognizes.<br />

When Should I Use SOLVE or NOSOLVE?<br />

Ordinarily GAMSCHK will cause a solver to be used if either the POSTOPT or the<br />

NONOPT options are used. However, users can force solutions under other cases or<br />

suppress solutions if desired.<br />

One should only force a solution (using the SOLVE option) when one wishes to use<br />

the solution information after GAMSCHK is done either to examine the solution output or do<br />

post optimality calculations. Forcing a solution will not cause GAMSCHK to have improved<br />
representations of nonlinear terms. That will only occur when a SOLVE statement is<br />

executed before the SOLVE statement involving GAMSCHK.<br />

Control of Number of Variable and Row Selections Allowed<br />

The GAMSCHK program uses an upper estimate on the number of variable or<br />

equation blocks. In rare circumstances users may wish to override this estimate. The options<br />

for this are<br />

OPTION Purpose<br />

VARBLOCK Maximum number of variable blocks allowed<br />

EQUBLOCK Maximum number of equation blocks allowed<br />

These options are followed by a number, but should not be routinely used.<br />

Scaling<br />

GAMS users may be utilizing internal features which involve scaling through the<br />

Modelname.SCALEOPT=1, VariableName.SCALE, and EquationName.SCALE options.<br />

GAMSCHK can work with these options to create output which reflects scaled, unscaled or<br />

partially unscaled output. In particular, the command DESCALE can be entered with one of<br />

three options: NEVER, ALL, or PART. If you enter NEVER, then none of the model output<br />

will be descaled. If you enter ALL, then all of the model output will be descaled. The third<br />

option is to use PART. In that case the NONOPT and POSTOPT output will be descaled<br />

whereas scaled information will be displayed for PICTURE, BLOCKPIC, BLOCKLIST,<br />

MATCHIT and DISPLAYCR. The PART option allows investigation of scaling. If you do<br />

not enter a DESCALE option then all information will be reported as if the PART option was<br />

chosen.<br />

NONOPT Filters<br />

The NONOPT procedure in IDENTIFY mode checks through a model solution to<br />

identify large marginals and/or large variable values. The limits on these checks are provided<br />

by two options<br />

OPTION Purpose<br />

LEVELFILT Numerical value of exponent on “unbounded levels”<br />

MARGFILT Numerical value of exponent on “infeasible marginals”<br />

These options provide upper bounds on the exponents of the absolute values for the levels<br />

and marginals. They are followed by an integer which gives the exponent. Thus, entries like<br />



LEVELFILT 7<br />

MARGFILT 7<br />

will cause the reporting of all marginals and levels which are greater in absolute value than<br />
10^7.<br />

Example Options File<br />

The GAMSCHK option file is called GAMSCHK.OPT. An example of a file could<br />

look like the following 6 lines<br />

Solver Options File<br />

LP OSL<br />

MIP LAMPS<br />

VARBLOCK 50<br />

SOLVE<br />

DESCALE PART<br />

LEVELFILT 4<br />

One other important aspect regarding the options file involves the use of a problem<br />

solver options file when a solver such as MINOS5, OSL, LAMPS etc. is also being used. As<br />

seen above the GAMSCHK.OPT does not recognize option commands such as those which<br />

would be submitted to the programming model solvers - OSL for example. In all cases<br />

GAMSCHK will cause the default option file for the solver to be used when invoking the<br />

solver. Thus, if MINOS5 is being used and an options file is invoked, MINOS5 options are<br />
controlled by the option file MINOS5.OPT, while GAMSCHK.OPT controls GAMSCHK<br />

operation.<br />

Users can change the number of the solver options file being used by using the<br />

OPTFILE parameter in the options file. OPTFILE 2 would cause use of solver options file<br />

.OP2.<br />

Known Bugs<br />

There are a few bugs that can cause GAMSCHK to report improper outputs or results. A list<br />

of the known bugs, their symptoms and a remedy is given below.<br />

Symptom: Zero shadow prices in POSTOPT.<br />
Cause: Old GAMS version or no prior solve.<br />
Remedy: 1) Make sure the model was solved; 2) if it was, do not suppress the solve in the option file; or 3) update to the most recent GAMS version.<br />
<br />
Symptom: Descaling does not work.<br />
Cause: Old version of GAMS.<br />
Remedy: Update.<br />
<br />
Symptom: GAMS blows up after GAMSCHK runs.<br />
Cause: Old GAMS version.<br />
Remedy: Ignore; the *.LST file results are fine. This can be fixed by updating to the most recent version of GAMS.<br />
<br />
Symptom: POSTOPT has an error in budgets equal to twice the objective function coefficient for nonlinear maximizations.<br />
Cause: Old GAMS/MINOS version.<br />
Remedy: Switch to a minimization formulation or update GAMS/MINOS.<br />
<br />
Symptom: ROWSUM does not fully account for the value of nonlinear terms in POSTOPT.<br />
Cause: The values of nonlinear terms sent from GAMS are only marginal values.<br />
Remedy: None planned; GAMSCHK would need reprogramming.<br />
<br />
Symptom: Error message about the size of VARBLOCK or EQUBLOCK.<br />
Cause: The maximum number of blocks was exceeded.<br />
Remedy: Modify the option file, enlarging or eliminating the parameters.<br />
<br />
Symptom: GAMSCHK won't run.<br />
Cause: Files are not properly installed.<br />
Remedy: Recheck the installation. If it still doesn't work, report to the author.<br />
<br />
Symptom: Zero shadow prices when using NOSOLVE.<br />
Cause: Old version of GAMS solvers, or shadow prices suppressed.<br />
Remedy: Try changing GAMSCOMP.TXT lines 2 or 0 to 12 or 10; if that doesn't work, update GAMS.


References<br />

Brooke, A., D. Kendrick, and A. Meeraus. GAMS: A User's Guide. The Scientific Press,<br />

South San Francisco, CA, 1988.<br />

McCarl, B.A. “So Your GAMS Model Didn’t Work Right: A Guide to Model Repair.”<br />

Texas A&M University, College Station, TX, 1994.<br />

McCarl, B.A., and T.H. Spreen. "Applied Mathematical Programming Using Algebraic<br />

Systems." Draft Book, Department of Agricultural Economics, Texas A&M<br />

University, College Station, TX, 1996. On web page http://agrinet.tamu.edu/mccarl<br />



Table 1. Conditions under which a modeler should be advised of potential difficulty for equations without nonlinear terms.<br />
Columns give the count of coefficients with a particular sign (+, −) under nonnegative, nonpositive, and unrestricted variables, then the sign of the RHS, the type of PS a/, and examples b/.<br />
Constraint ≤: ≥0 c/ | 0 | 0 | ≥0 | 0 | 0 | RHS 0 | Zero variables (Case 1) | 3x ≤ 0 d/, −3y ≤ 0, 3x − 3y ≤ 0<br />
Constraint ≤: ≥0 | 0 | 0 | ≥0 | 0 | 0 | RHS − | Infeasible (Case 2) | 3x ≤ −k, −3y ≤ −k, 3x − 3y ≤ −k<br />
Constraint ≤: 0 | ≥0 | ≥0 | 0 | 0 | 0 | RHS + or 0 | Redundant (Case 3) | −3x ≤ k, 3y ≤ k, −3x + 3y ≤ k<br />
Constraint =: ≥0 | 0 | 0 | ≥0 | 0 | 0 | RHS 0 | Zero variables (Case 1) | 3x = 0, −3y = 0, 3x − 3y = 0<br />
Constraint =: 0 | ≥0 | ≥0 | 0 | 0 | 0 | RHS 0 | Zero variables (Case 1) | −3x = 0, 3y = 0, −3x + 3y = 0<br />
Constraint =: ≥0 | 0 | 0 | ≥0 | 0 | 0 | RHS − | Infeasible (Case 2) | 3x − 3y = −k<br />
Constraint =: 0 | ≥0 | ≥0 | 0 | 0 | 0 | RHS + | Infeasible (Case 2) | −3x + 3y = k<br />
Constraint =: 0 | 0 | 0 | 0 | ≥0 e/ | ≥0 e/ | RHS 0 | Zero variable (Case 1) | z = 0, −z = 0<br />
Constraint ≥: 0 | ≥0 | ≥0 | 0 | 0 | 0 | RHS 0 | Zero variables (Case 1) | −3x ≥ 0, 3y ≥ 0, −3x + 3y ≥ 0<br />
Constraint ≥: 0 | ≥0 | ≥0 | 0 | 0 | 0 | RHS + | Infeasible (Case 2) | −3x ≥ k, 3y ≥ k, −3x + 3y ≥ k<br />
Constraint ≥: ≥0 | 0 | 0 | ≥0 | 0 | 0 | RHS − or 0 | Redundant (Case 3) | 3x ≥ −k, −3y ≥ −k, 3x − 3y ≥ −k<br />
a/ The PS cases indicate, because the variables in this equation follow this pattern, that:<br />
1. The variables appearing with nonzeros in this equation are forced to equal zero.<br />
2. This equation can never be satisfied and is obviously infeasible.<br />
3. This equation is redundant; the nonnegativity conditions are a stronger restriction.<br />
b/ In the examples x denotes indexed non-negative variables, y indexed non-positive variables, and z a single unrestricted variable.<br />
c/ Here and in the cases below at least one nonzero must occur.<br />
d/ These entries give examples of the problem covered by each warning. Namely, in the first case, examining only the nonnegative variables, suppose all those variables have signs ≥ 0 but the right-hand side is zero. Thus, we have X ≥ 0 and X ≤ 0, which implies X = 0. A warning is generated in that case.<br />
e/ Only one coefficient is allowed.<br />
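The Case 1-3 screens for a single ≤ constraint can be expressed as sign-pattern tests. A hedged sketch (illustrative only; `screen_le_constraint` is not part of GAMSCHK, and unrestricted variables are assumed absent from the row):

```python
# Hedged sketch of the Table 1 screening logic for a single <= constraint.
# Inputs are counts of + and - coefficients on nonnegative (nn) and
# nonpositive (np) variables, plus the sign of the RHS ('+', '-', '0').
# Assumes no unrestricted variables appear in the row.

def screen_le_constraint(nn_pos, nn_neg, np_pos, np_neg, rhs_sign):
    """Classify a <= row per Table 1 Cases 1-3; return None if no warning."""
    # Only entries that tighten the row: nonnegative variables with +
    # coefficients and nonpositive variables with - coefficients.
    tightening_only = (nn_pos + np_neg) > 0 and nn_neg == 0 and np_pos == 0
    # Only entries that relax the row.
    relaxing_only = (nn_neg + np_pos) > 0 and nn_pos == 0 and np_neg == 0
    if tightening_only and rhs_sign == '0':
        return "zero variables (Case 1)"    # e.g. 3x <= 0 forces x = 0
    if tightening_only and rhs_sign == '-':
        return "infeasible (Case 2)"        # e.g. 3x <= -k can never hold
    if relaxing_only and rhs_sign in ('+', '0'):
        return "redundant (Case 3)"         # e.g. -3x <= k never binds
    return None
```

The = and ≥ rows of the table follow the same pattern with the roles of the signs adjusted.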

Table 2. Conditions under which a modeler should be warned about variables in a maximization problem.<br />
Columns give the type of variable, the objective function coefficient sign, the number of a_ij's of a given sign (+, −) in ≥ rows, = rows, and ≤ rows, the type of PS a/, and examples.<br />
Nonnegative, obj +: ≥ rows (≥0, 0), = rows (0, 0), ≤ rows (0, ≥0) | Unbounded variable (Case 1) | max x b/, x + DQ ≥ a, −x + EQ ≤ b<br />
Nonnegative, obj −: ≥ rows (0, ≥0), = rows (0, 0), ≤ rows (≥0, 0) | Zero optimal solution (Case 2) | max −x, −x + DQ ≥ a, x + EQ ≤ b<br />
Nonnegative, obj 0: ≥ rows (≥0, 0), = rows (0, 0), ≤ rows (0, ≥0) | Variable relaxes constraint (Case 3) | max 0x, x + DQ ≥ a, −x + DQ ≤ b<br />
Nonnegative, obj 0: ≥ rows (≥0, 0), = rows (≥0 c/, ≥0 c/), ≤ rows (0, ≥0) | Variable relaxes constraint (Case 4) | max 0x, x + DQ ≥ a, x + FQ = g, −x + EQ ≤ b<br />
Nonpositive, obj −: ≥ rows (0, ≥0), = rows (0, 0), ≤ rows (≥0, 0) | Unbounded variable (Case 1) | max −y b/, −y + DQ ≥ a, y + EQ ≤ b<br />
Nonpositive, obj +: ≥ rows (≥0, 0), = rows (0, 0), ≤ rows (0, ≥0) | Zero optimal solution (Case 2) | max y, y + DQ ≥ a, −y + EQ ≤ b<br />
Nonpositive, obj 0: ≥ rows (0, ≥0), = rows (0, 0), ≤ rows (≥0, 0) | Variable relaxes constraint (Case 3) | max 0y, −y + DQ ≥ a, y + EQ ≤ b<br />
Nonpositive, obj 0: ≥ rows (0, ≥0), = rows (≥0 c/, ≥0 c/), ≤ rows (≥0, 0) | Variable relaxes constraint (Case 4) | max 0y, −y + DQ ≥ a, y + FQ = g, y + EQ ≤ b<br />
Unrestricted, obj +/−: all counts 0 | Unbounded variable (Case 1) | max ±z<br />
a/ PS cases are: The variables which satisfy this condition are:<br />
1) Unbounded, as they contribute to the objective function while satisfying the constraints.<br />
2) Obviously zero, since they consume constraint resources and have a cost in the objective function.<br />
3) Warning: this variable relaxes all constraints in which it appears.<br />
4) Warning: this variable relaxes all the equality constraints in which it appears in one direction.<br />
b/ Here x (y) has a favorable objective term and can be changed without ever violating any constraints, so x (y) is unbounded.<br />
c/ Only one coefficient can be present in the equality rows.<br />


Table 3. Conditions When Model Elements Could Be Unbounded or Infeasible<br />
Conditions for potential unbounded variables -- presence of bounds:<br />
Type of variable | Sign of objective in max problem | Upper bound | Lower bound<br />
≥ 0 a/ | + | None | --- c/<br />
≤ 0 | − | --- | None<br />
Unrestricted | + | None | ---<br />
Unrestricted | − | --- | None<br />
Conditions for potential infeasibility caused by bounds on variables:<br />
Type of variable | Lower bound | Upper bound<br />
≥ 0 b/ | + | ---<br />
≤ 0 | --- | −<br />
Unrestricted | + | ---<br />
Unrestricted | --- | −<br />
Conditions for potential infeasibility in equations:<br />
Type of equation | RHS<br />
≤ d/ | −<br />
≥ | +<br />
= | + or −<br />
a/ If a nonnegative variable has a positive objective function coefficient without an upper bound, then the variable could be unbounded.<br />
b/ If a nonnegative variable has a positive lower bound then it could cause infeasibility.<br />
c/ Any reasonable value can exist for this item.<br />
d/ When a less-than-or-equal equation is present, it may not be able to be satisfied if it has a negative RHS.<br />


Table 4. Conditions for Potential Infeasibility or Redundancy in Equations Based on Bounds on Variables<br />
Sum of the smallest values a/: > b under a ≤ b constraint gives INFEASIBLE; > b under a ≥ b constraint gives REDUNDANT<br />
Sum of the largest values b/: < b under a ≥ b constraint gives INFEASIBLE; < b under a ≤ b constraint gives REDUNDANT<br />
Notes:<br />
a/ Suppose X_j is bounded as follows: LB_j (lower bound) ≤ X_j ≤ UB_j (upper bound), and let S be the smallest value the left-hand side can take on given these bounds. If the constraint is ≤ b and S > b, we know that this constraint will never be satisfied. If the constraint is ≥ b and S > b, we know that this constraint will not limit any possible X value; hence, it is redundant.<br />
b/ Suppose X_j is bounded as above, and let S be the largest value the left-hand side can take on. If the constraint is ≥ b and S < b, the constraint can never be satisfied. If the constraint is ≤ b and S < b, the constraint is redundant.<br />
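The Table 4 test amounts to computing the smallest and largest values the left-hand side can attain under the variable bounds. A hedged sketch (illustrative; `classify_by_bounds` is not part of GAMSCHK, and finite bounds are assumed):

```python
def classify_by_bounds(coeffs, lower, upper, rhs, sense):
    """Flag a constraint as infeasible/redundant from variable bounds alone.

    coeffs, lower, upper: per-variable coefficient and finite bounds.
    sense: '<=' or '>='.  Returns 'infeasible', 'redundant', or None.
    """
    # Smallest and largest achievable values of the left-hand side:
    smallest = sum(c * (lo if c > 0 else up)
                   for c, lo, up in zip(coeffs, lower, upper))
    largest = sum(c * (up if c > 0 else lo)
                  for c, lo, up in zip(coeffs, lower, upper))
    if sense == '<=':
        if smallest > rhs:
            return 'infeasible'   # even the smallest LHS exceeds b
        if largest < rhs:
            return 'redundant'    # constraint can never bind
    else:  # '>='
        if largest < rhs:
            return 'infeasible'   # even the largest LHS falls short of b
        if smallest > rhs:
            return 'redundant'    # constraint is satisfied automatically
    return None
```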


VARIABLE<br />

VARIABLES<br />

EQUATION<br />

EQUATIONS<br />

INVARIABLE<br />

INVARIABLES<br />

INEQUATION<br />

INEQUATIONS<br />

LISTVARIABLE<br />

LISTVARIABLES<br />

LISTEQUATION<br />

LISTEQUATIONS<br />

POSTOPT<br />

DISPLAYCR<br />

PICTURE<br />

BLOCKPIC<br />

ANALYSIS<br />

MATCHIT<br />

BLOCKLIST<br />

NONOPT<br />

INSOLUTION<br />

NOTINSOLUTION<br />

NONINSOLUTION<br />

VERBOSE<br />

ADVISORY<br />

BINDING<br />

NONBINDING<br />

NOTBINDING<br />

INTERSECT<br />

IDENTIFY<br />

PW=<br />

Appendix A: Reserved Names<br />



Appendix B: GAMSCHK One Page Summary<br />

Invoking GAMSCHK OPTION LP=GAMSCHK<br />

Keywords allowed in GCK file<br />

Keyword (allowed subkeywords) - Brief description<br />
DISPLAYCR - Displays coefficients of selected variables and equations<br />
  VARIABLE* - Indicates variable selections follow<br />
  INVARIABLE* - Indicates equations are wanted in which selected variables fall<br />
  EQUATION* - Indicates equation selections follow<br />
  INEQUATION* - Indicates variables are wanted that fall in selected equations<br />
  INTERSECT++ - Shows coefficients which appear at intersections of selected var/eqn<br />
MATCHIT - Lists variable and equation names and summarizes characteristics<br />
  VARIABLE* - Summarizes all variables matching selection statements<br />
  LISTVARIABLE* - Lists each variable matching a selection statement<br />
  EQUATION* - Summarizes all equations matching a selection statement<br />
  LISTEQUATION* - Lists each equation matching a selection statement<br />
ANALYSIS - Checks for obvious structural defects<br />
BLOCKLIST - Summarizes characteristics of variable and equation blocks<br />
BLOCKPIC - Generates block level schematics<br />
PICTURE - Generates tableau schematics<br />
  VARIABLE* - Indicates variable selections follow<br />
  INVARIABLE* - Indicates equations are wanted in which selected variables fall<br />
  EQUATION* - Indicates equation selections follow<br />
  INEQUATION* - Indicates variables are wanted that fall in selected equations<br />
POSTOPT - Reconstructs reduced cost and equation activity<br />
  VARIABLE* - Indicates variable selections follow<br />
  INVARIABLE* - Indicates equations are wanted in which selected variables fall<br />
  EQUATION* - Indicates equation selections follow<br />
  INEQUATION* - Indicates variables are wanted that fall in selected equations<br />
  INTERSECT++ - Shows coefficients which appear at intersections of selected var/eqn<br />
  INSOLUTION++ - Only nonzero vars or those with zero reduced cost will be selected<br />
  NOTINSOLUTION++ - Only zero vars will be selected<br />
  BINDING++ - Only eqns with zero slack will be computed<br />
  NONBINDING++ - Only eqns with nonzero slack will be computed<br />
ADVISORY - Lists potential infeasible and unbounded items<br />
NONOPT - Lists potential or actual nonoptimal items<br />
  IDENTIFY - Same as ADVISORY but after solution<br />
  VERBOSE - Does POSTOPT computations on nonoptimals<br />

Other Notes<br />

1) Items marked above with an * are followed by item selection statements.<br />

2) Items marked with ++ modify the types of variables, equations and coefficients selected.<br />

3) In item selection an * is a wild card for multiple characters while a . is a wildcard for one character.<br />

4) Spaces and capitalization don’t matter in any of the input.<br />

5) Options file controls scaling, solver choice, nonopt filters and maximum allowed selections.<br />

6) Page width is controlled by a PW= keyword but cannot exceed GAMS page width.<br />

7) Lines beginning with a ? or a # are treated as comments.<br />



MILES: A Mixed Inequality and nonLinear Equation Solver<br />

August, 1993<br />

(Revised, March 1997)<br />

Thomas F. Rutherford<br />

Department of Economics<br />

University of Colorado<br />

Boulder, Colorado 80309-0256<br />

rutherford@colorado.edu<br />

Abstract<br />

MILES is a solver for nonlinear complementarity problems and<br />

nonlinear systems of equations. This solver can be accessed<br />

indirectly through GAMS/MPSGE or GAMS/MCP. This paper documents<br />

the solution algorithm, user options, and program output. The<br />

purpose of the paper is to provide users of GAMS/MPSGE and<br />

GAMS/MCP an overview of how the MCP solver works so that they can<br />

use the program effectively.


1. Introduction<br />

MILES is a Fortran program for solving nonlinear<br />

complementarity problems and nonlinear systems of equations. The<br />

solution procedure is a generalized Newton method with a<br />

backtracking line search. This code is based on an algorithm<br />

investigated by Mathiesen (1985) who proposed a modeling format<br />

and sequential method for solving economic equilibrium models.<br />

The method is closely related to algorithms proposed by Robinson<br />

(1975), Hogan (1977), Eaves (1978) and Josephy (1979). In this<br />

implementation, subproblems are solved as linear complementarity<br />

problems (LCPs), using an extension of Lemke’s<br />

almost-complementary pivoting scheme in which upper and lower<br />

bounds are represented implicitly. The linear solver employs the<br />

basis factorization package LUSOL, developed by Gill et al.<br />

(1991).<br />

The class of problems for which MILES may be applied are<br />

referred to as "generalized" or "mixed" complementarity problems,<br />

which is defined as follows:<br />

Given: F: R^n → R^n, and ℓ, u ∈ R^n<br />
Find: z, w, v ∈ R^n<br />
such that<br />
F(z) = w − v<br />
ℓ ≤ z ≤ u, w ≥ 0, v ≥ 0<br />
w^T (z − ℓ) = 0, v^T (u − z) = 0<br />



When ℓ = −∞ and u = +∞, the MCP reduces to a nonlinear system of equations.<br />
When ℓ = 0 and u = +∞, the MCP is a nonlinear complementarity<br />
problem. Finite-dimensional variational inequalities are also<br />
MCPs. MCP includes inequality-constrained linear, quadratic and<br />

nonlinear programs as special cases, although for these problems<br />

standard optimization methods may be preferred. MCP models which<br />

are not optimization problems encompass a large class of<br />

interesting mathematical programs. Specific examples of MCP<br />

formulations are not provided here. See Rutherford (1992a) for<br />

MCP formulations arising in economics. Other examples are<br />

provided by Harker and Pang (1990) and Dirkse (1993).<br />
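The MCP conditions above are mechanical to verify for a candidate (z, w, v). A minimal sketch (illustrative only; `is_mcp_solution` is not part of MILES, and finite bounds are assumed):

```python
def is_mcp_solution(F, z, w, v, lb, ub, tol=1e-9):
    """Check the MCP conditions F(z) = w - v, lb <= z <= ub, w, v >= 0,
    w'(z - lb) = 0 and v'(ub - z) = 0 for finite-dimensional vectors."""
    n = len(z)
    Fz = F(z)
    # F(z) = w - v componentwise
    ok = all(abs(Fz[i] - (w[i] - v[i])) <= tol for i in range(n))
    # bounds and sign conditions
    ok &= all(lb[i] - tol <= z[i] <= ub[i] + tol for i in range(n))
    ok &= all(w[i] >= -tol and v[i] >= -tol for i in range(n))
    # complementarity: each nonnegative product must vanish
    comp = sum(w[i] * (z[i] - lb[i]) + v[i] * (ub[i] - z[i]) for i in range(n))
    return bool(ok and abs(comp) <= tol)
```

For example, with F(z) = z − 1 on [0, 2], the point z = 1 with w = v = 0 satisfies all conditions.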

There are two ways in which a problem may be presented to<br />
MILES:<br />
(i) MILES may be used to solve computable general<br />

equilibrium models generated by MPSGE as a GAMS subsystem. In<br />

the MPSGE language, a model-builder specifies classes of<br />

nonlinear functions using a specialized tabular input format<br />

embedded within a GAMS program. <strong>Using</strong> benchmark quantities and<br />

prices, MPSGE automatically calibrates function coefficients and<br />

generates nonlinear equations and Jacobian matrices. Large,<br />

complicated systems of nonlinear equations may be implemented and<br />

analyzed very easily using this interface to MILES. An<br />

introduction to general equilibrium modeling with GAMS/MPSGE is<br />

provided by Rutherford (1992a).<br />

(ii) MILES may be accessed as a GAMS subsystem using<br />



variables and equations written in standard GAMS algebra and the<br />

syntax for "mixed complementarity problems" (MCP). If more than<br />

one MCP solver is available 1 , the statement "OPTION MCP=MILES;"<br />

tells GAMS to use MILES as the MCP solution system. When<br />

problems are presented to MILES using the MCP format, the user<br />

specifies nonlinear functions using GAMS matrix algebra and the<br />

GAMS compiler automatically generates the Jacobian functions. An<br />

introduction to the GAMS/MCP modeling format is provided by<br />

Rutherford (1992b).<br />

The purpose of this document is to provide users of MILES<br />

with an overview of how the solver works so that they can use the<br />

program more effectively. Section 2 introduces the Newton<br />

algorithm. Section 3 describes the implementation of Lemke’s<br />

algorithm which is used to solve linear subproblems. Section 4<br />

defines switches and tolerances which may be specified using the<br />

options file. Section 5 interprets the run-time log file which<br />

is normally directed to the screen. Section 6 interprets the<br />

status file and the detailed iteration reports which may be<br />

generated. Section 7 lists and suggests remedies for abnormal<br />

termination conditions.<br />

2. The Newton Algorithm<br />

The iterative procedure applied within MILES to solve<br />

1 There is one other MCP solver available through GAMS: PATH<br />

(Ferris and Dirkse, 1992).<br />



nonlinear complementarity problems is closely related to the<br />

classical Newton algorithm for nonlinear equations. This first<br />

part of this section reviews the classical procedure. A thorough<br />

introduction to these ideas is provided by Dennis and Schnabel<br />

(1983). For a practical perspective, see Press et al. (1986).<br />

Newton algorithms for nonlinear equations begin with a local<br />

(Taylor series) approximation of the system of nonlinear<br />

equations. For a point z in the neighborhood of z̄, the system<br />
of nonlinear functions is linearized:<br />
LF(z) = F(z̄) + ∇F(z̄)(z − z̄)<br />
Solving the linear system LF(z) = 0 provides the Newton direction<br />
from z̄, which is given by d = −∇F(z̄)⁻¹ F(z̄).<br />

Newton iteration k begins at point z^k. First, the linear<br />
model formed at z^k is solved to determine the associated "Newton<br />
direction", d^k. Second, a line search in direction d^k determines<br />
the scalar steplength λ and the subsequent iterate: z^{k+1} = z^k + λ d^k.<br />
An Armijo or "back-tracking" line search initially considers<br />
λ = 1. If ‖F(z^k + λ d^k)‖ ≤ ‖F(z^k)‖, the step size λ is adopted;<br />
otherwise λ is multiplied by a positive factor α, α < 1, and the<br />
convergence test is reapplied. This procedure is repeated until<br />
either an improvement results or λ < λ_min. When λ < λ_min, a<br />
positive step is taken provided that: 2<br />
(d/dλ) ‖F(z^k + λ d^k)‖ < 0<br />
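The damped Newton iteration just described can be sketched in a few lines. This is an illustrative scalar version, not MILES itself; `newton_backtrack` and its defaults are hypothetical, with `alpha` and `lam_min` playing the roles of the damping factor and minimum step length (DMPFAC and MINSTP in the options described later):

```python
def newton_backtrack(F, dF, z, alpha=0.5, lam_min=0.03, tol=1e-10, itmax=25):
    """Damped Newton for a scalar equation F(z) = 0 (sketch of Section 2)."""
    for _ in range(itmax):
        if abs(F(z)) < tol:
            return z
        d = -F(z) / dF(z)          # Newton direction from the linear model
        lam = 1.0
        # Armijo / back-tracking line search: shrink lam until improvement
        # (this sketch simply stops shrinking once lam drops below lam_min)
        while abs(F(z + lam * d)) > abs(F(z)) and lam >= lam_min:
            lam *= alpha
        z = z + lam * d
    return z

# Solve z^2 - 4 = 0 starting from z = 3; converges to the root z = 2.
root = newton_backtrack(lambda z: z * z - 4.0, lambda z: 2.0 * z, 3.0)
```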


When applied to the MCP, the Newton subproblem formed at z^k is the<br />
linearized problem:<br />
F(z^k) − ∇F(z^k) z^k + ∇F(z^k) z = w − v<br />
ℓ ≤ z ≤ u, w ≥ 0, v ≥ 0<br />
w^T (z − ℓ) = 0, v^T (u − z) = 0<br />
After computing the solution z, MILES sets d^k = z − z^k.<br />
The linear subproblem incorporates upper and lower bounds on<br />
any or all of the variables, assuring that the iterative sequence<br />
always remains within the bounds (ℓ ≤ z^k ≤ u). This can be<br />
helpful when, as is often the case, F() is undefined for some<br />
z ∈ R^n.<br />

Convergence of the Newton algorithm applied to MCP hinges on<br />

three questions: (i) Does the linearized problem always have a<br />

solution? (ii) If the linearized problem has a solution, does<br />

Lemke’s algorithm find it? and (iii) Is it possible to show that<br />

the computed direction d k will provide an "improvement" in the<br />

solution? Only for a limited class of functions F() can all<br />

three questions be answered in the affirmative. For a much<br />

larger class of functions, the algorithm converges in practice<br />

but convergence is not "provable". 3<br />

The answer to question (iii) depends on the choice of a norm<br />

by which an improvement is measured. The introduction of bounds<br />

and complementarity conditions makes the calculation of an error<br />
index more complicated. In MILES, the deviation<br />
associated with a candidate solution z, ε(z), is based on a<br />

3 Kaneko (1978) provides some convergence theory for the<br />

linearized subproblem.<br />



measure of the extent to which z, w and v violate applicable<br />

upper and lower bounds and complementarity conditions.<br />

Evaluating Convergence<br />
Let δ^L_i and δ^U_i be indicator variables for whether z_i is off<br />
its lower or upper bound. These are defined as: 4<br />
δ^L_i = min(1, (z_i − ℓ_i)⁺) and δ^U_i = min(1, (u_i − z_i)⁺).<br />
Given z, MILES uses the value of F(z) to implicitly define the<br />
slack variables w and v:<br />
w_i = F_i(z)⁺, v_i = (−F_i(z))⁺.<br />
There are two components to the error term associated with index<br />
i, one corresponding to z_i's violation of upper and lower bounds:<br />
ε^B_i = (z_i − u_i)⁺ + (ℓ_i − z_i)⁺,<br />
and another corresponding to violations of complementarity<br />
conditions:<br />
ε^C_i = δ^L_i w_i + δ^U_i v_i.<br />
The error assigned to point z is then taken as:<br />
ε(z) = ‖ε^B(z) + ε^C(z)‖_p<br />
for a pre-specified value of p = 1, 2 or +∞. 5<br />
4 In the following, x⁺ = max(x, 0).<br />
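Under these definitions, the deviation ε(z) is straightforward to compute. A hedged sketch (illustrative only; `deviation` is not a MILES routine, and finite bounds are assumed):

```python
def deviation(Fz, z, lb, ub, p=float('inf')):
    """Sketch of the MILES error measure eps(z) under the stated definitions."""
    plus = lambda x: max(x, 0.0)               # x+ = max(x, 0)
    err = []
    for Fi, zi, li, ui in zip(Fz, z, lb, ub):
        w, v = plus(Fi), plus(-Fi)             # implicit slacks
        dL = min(1.0, plus(zi - li))           # off-lower-bound indicator
        dU = min(1.0, plus(ui - zi))           # off-upper-bound indicator
        eB = plus(zi - ui) + plus(li - zi)     # bound violation
        eC = dL * w + dU * v                   # complementarity violation
        err.append(eB + eC)
    if p == float('inf'):
        return max(err)
    return sum(e ** p for e in err) ** (1.0 / p)
```

With F(z) = 0 at an interior point the deviation is zero; any nonzero F at an interior point, or a bound violation, contributes to the error.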

3. Lemke’s Method with Implicit Bounds<br />

A mixed linear complementarity problem has the form:<br />
Given: M ∈ R^{n×n}, q, ℓ, u ∈ R^n<br />
Find: z, w, v ∈ R^n<br />
such that<br />
Mz + q = w − v<br />
ℓ ≤ z ≤ u, w ≥ 0, v ≥ 0<br />
w^T (z − ℓ) = 0, v^T (u − z) = 0<br />
In the Newton subproblem at iteration k, the LCP data are given<br />
by q = F(z^k) − ∇F(z^k) z^k and M = ∇F(z^k).<br />

The Working Tableau<br />

In MILES, the pivoting scheme for solving the linear problem<br />

works with a re-labeled linear system of the form:<br />
B x_B + N x_N = q<br />
where x_B ∈ R^n, x_N ∈ R^{2n}, and the tableau [B | N] is a conformal<br />
"complementary permutation" of [−M | I | −I]. That is, every<br />
5 Parameter p may be selected with input parameter NORM.<br />
The default value for p is +∞.<br />



column i in B must be either the ith column of −M, I or −I, while<br />

the corresponding columns i and i+n in N must be the two columns<br />

which were not selected for B.<br />

To move from the problem defined in terms of z, w and v to<br />
the problem defined in terms of x_B and x_N, we assign upper and<br />
lower bounds for the x_B variables as follows: the lower bound of<br />
x^B_i is ℓ_i if x^B_i = z_i, and 0 if x^B_i = w_i or v_i; the upper<br />
bound of x^B_i is u_i if x^B_i = z_i, and +∞ if x^B_i = w_i or v_i.<br />
The values of the non-basic variables x^N_i and x^N_{i+n} are determined<br />
by the assignment of x^B_i:<br />
x^B_i = z_i implies x^N_i = w_i = 0 and x^N_{i+n} = v_i = 0;<br />
x^B_i = w_i implies x^N_i = z_i = ℓ_i and x^N_{i+n} = v_i = 0;<br />
x^B_i = v_i implies x^N_i = w_i = 0 and x^N_{i+n} = z_i = u_i.<br />


In words: if z i is basic then both w i and v i equal zero. If z i is<br />

non-basic at its lower bound, then w i is possibly non-zero and v i<br />

is non-basic at zero. If z i is non-basic at its upper bound,<br />

then v i is possibly non-zero and w i is non-basic at zero.<br />

Conceptually, we could solve the LCP by evaluating the 3^n linear<br />
systems of the form:<br />
x_B = B⁻¹ (q − N x_N)<br />
Lemke's pivoting algorithm provides a procedure for finding a<br />
solution by sequentially evaluating some (hopefully small) subset<br />
of these 3^n alternative linear systems.<br />
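For intuition, the 3^n enumeration can be carried out literally when n = 1: the basic variable is z, w, or v, and each choice yields a candidate triple to test against the bounds and sign conditions. A toy sketch (illustrative, not part of MILES; finite bounds and M ≠ 0 for the z case are assumed):

```python
def solve_lcp_1d(M, q, lb, ub):
    """Brute-force the 1-D mixed LCP  M z + q = w - v,  lb <= z <= ub,
    by trying each of the three possible basic variables in turn."""
    for basic in ('z', 'w', 'v'):
        if basic == 'z' and M != 0.0:
            z, w, v = -q / M, 0.0, 0.0          # nonbasic w = v = 0
        elif basic == 'w':
            z, w, v = lb, M * lb + q, 0.0       # z nonbasic at lower bound
        elif basic == 'v':
            z, w, v = ub, 0.0, -(M * ub + q)    # z nonbasic at upper bound
        else:
            continue
        # keep the candidate only if it satisfies bounds and sign conditions
        if lb <= z <= ub and w >= 0.0 and v >= 0.0:
            return z, w, v
    return None
```

Lemke's algorithm reaches a valid assignment like this without enumerating all 3^n cases.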

Initialization<br />

Let B⁰ denote the initial basis matrix. 6 The initial values<br />
for basic variables are then:<br />
x̂_B = (B⁰)⁻¹ (q − N x̂_N)<br />
If every component of x̂_B lies within its bounds, then the initial<br />
basis is feasible and the complementarity problem is solved. 7<br />
Otherwise, MILES introduces<br />
6 In MILES, B⁰ is chosen using the initially assigned<br />
values for z. When z_i ≤ ℓ_i, then x^B_i = w_i; when z_i ≥ u_i, then<br />
x^B_i = v_i; otherwise x^B_i = z_i.<br />
7 The present version of the code simply sets B⁰ = −I and x_B<br />
= w when the user-specified basis is singular. A subsequent<br />
version of the code will incorporate the algorithm described by<br />
Anstreicher, Lee and Rutherford [1992] for coping with<br />
singularity.<br />
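Footnote 6's rule for choosing B⁰ amounts to a per-index comparison of z against its bounds. A minimal sketch (illustrative; `initial_basis` is not a MILES routine):

```python
def initial_basis(z, lb, ub):
    """Choose the initial basic variable for each index i (footnote 6):
    w_i if z_i <= lb_i, v_i if z_i >= ub_i, otherwise z_i."""
    basis = []
    for zi, li, ui in zip(z, lb, ub):
        if zi <= li:
            basis.append('w')      # z starts at/below its lower bound
        elif zi >= ui:
            basis.append('v')      # z starts at/above its upper bound
        else:
            basis.append('z')      # z starts strictly between its bounds
    return basis
```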



an artificial variable z₀ and an artificial column h. Basic<br />
variables are then expressed as follows:<br />
x_B = x̂_B − h̃ z₀<br />
where h̃ is the "transformed artificial column" (the<br />
untransformed column is h = B⁰ h̃). The coefficients of h̃ are<br />
selected so that:<br />
(i) The values of "feasible" basic variables are unaffected by<br />
z₀: when x^B_i lies within its bounds, h̃_i = 0.<br />
(ii) The "most infeasible" basic variable (i = p) is driven to its<br />
upper or lower bound when z₀ = 1: h̃_p = x̂^B_p minus its upper bound<br />
when x̂^B_p exceeds the upper bound, and x̂^B_p minus its lower bound<br />
when it falls below the lower bound.<br />


Table 1. Pivot Sequence Rules for Lemke's Algorithm with<br />
Implicit Bounds<br />
Exiting variable | Entering variable | Change in non-basic values<br />
(i) z_i at lower bound | w_i increases from 0 | x^N_i = z_i = ℓ_i<br />
(ii) z_i at upper bound | v_i increases from 0 | x^N_{i+n} = z_i = u_i<br />
(iii) w_i at 0 | z_i increases from ℓ_i | x^N_i = x^N_{i+n} = 0<br />
(iv) v_i at 0 | z_i decreases from u_i | x^N_i = x^N_{i+n} = 0<br />
Pivoting Rules<br />

When z 0 enters the basis, it assumes a value of unity, and<br />

at this point (barring degeneracy), the subsequent pivot sequence<br />

is entirely determined. The entering variable in one iteration<br />

is determined by the exiting basic variable in the previous<br />

iteration. For example, if z i were in B 0 and introducing z 0<br />

caused z i to move onto its lower bound, then the subsequent<br />

iteration introduces w i. Conversely, if w i were in B 0 and z 0<br />

caused w i to fall to zero, the subsequent iteration increases z i<br />

from R i. Finally, if v i were in B 0 and z 0’s introduction caused v i<br />

to fall to zero, the subsequent iteration decreases z i from u i.<br />

The full set of pivoting rules is displayed in Table 1. One<br />

difference between this algorithm and the original Lemke (type<br />

III) pivoting scheme (see Lemke (1965), Garcia and Zangwill


(1981), or Cottle and Pang (1992)) is that structural variables<br />

(z i’s) may enter or exit the basis at their upper bound values.<br />

The algorithm, therefore, must distinguish between pivots in<br />

which the entering variable increases from a lower bound from<br />

those in which the entering variable decreases from an upper<br />

bound.<br />

Another difference with the "usual" Lemke pivot procedure is<br />

that an entering structural variable may move from one bound to<br />

another. When this occurs, the subsequent pivot introduces the<br />

corresponding slack variable. For example, if z i is increased<br />

from R i to u i without driving a basic variable infeasible, then z i<br />

becomes non-basic at u i, and the subsequent pivot introduces v i.<br />

This type of pivot may be interpreted as z i entering and exiting<br />

the basis in a single step. 8<br />

In theory it is convenient to ignore degeneracy, while in<br />

practice degeneracy is unavoidable. The present algorithm does<br />

not incorporate a lexicographic scheme to avoid cycling, but it<br />

does implement a ratio test procedure which assures that when<br />

there is more than one candidate, priority is given to the most<br />

stable pivot. The admissible set of pivots is determined by both<br />

an absolute pivot tolerance (ZTOLPV) and a relative pivot<br />

8 If all structural variables are subject to finite upper<br />

and lower bounds, then no z i variables may be part of a<br />

homogeneous solution adjacent to a secondary ray. This does not<br />

imply, however, that secondary rays are impossible when all z i<br />

variables are bounded, as a ray may then be comprised of w i and<br />

v i variables.<br />



tolerance (ZTOLRP). No pivot with absolute value smaller than<br />

min(ZTOLPV, ZTOLRP · ‖V‖) is considered, where ‖V‖ is the norm of<br />

the incoming column.<br />
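The pivot-admissibility test can be expressed directly. An illustrative sketch (the function name is hypothetical, and the infinity norm is assumed for ‖V‖):

```python
def admissible_pivots(column, ztolpv=3.644e-11, ztolrp=3.644e-11):
    """Return indices of entries large enough to serve as pivots.

    An entry is rejected when its magnitude is below
    min(ZTOLPV, ZTOLRP * ||V||), with ||V|| the norm of the column."""
    norm = max(abs(x) for x in column)          # infinity norm of the column
    cutoff = min(ztolpv, ztolrp * norm)
    return [i for i, x in enumerate(column) if abs(x) >= cutoff]
```

Among the admissible entries, the ratio test then gives priority to the most stable pivot.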

Termination on a Secondary Ray<br />

Lemke’s algorithm terminates normally when the introduction<br />

of a new variable drives z 0 to zero. This is the desired<br />

outcome, but it does not always happen. The algorithm may be<br />

interrupted prematurely when no basic variable "blocks" an<br />

incoming variable, a condition known as "termination on a<br />

secondary ray". In anticipation of such outcomes, MILES<br />

maintains a record of the value of z 0 for successive iterations,<br />

and it saves basis information associated with the smallest<br />
observed value, z₀*. (In Lemke's algorithm, the pivot sequence is<br />

determined without regard for the effect on z 0, and the value of<br />

the artificial variable may follow an erratic (non-monotone) path<br />

from its initial value of one to the final value of zero.)<br />

When MILES encounters a secondary ray, a restart procedure<br />
is invoked in which the set of basic variables associated with z₀*<br />
is reinstalled. This basis (augmented with one column from the<br />

non-basic triplet to replace z 0) serves as B 0 , and the algorithm<br />

is restarted. In some cases this procedure permits the pivot<br />

sequence to continue smoothly to a solution, while in other<br />
cases it may only lead to another secondary ray.<br />



4. The Options File<br />

MILES accepts the same format options file regardless of how<br />

the system is being accessed, through GAMS/MPSGE or GAMS/MCP.<br />

The options file is a standard text file which is normally named<br />

MILES.OPT. 9 The following is a typical options file:<br />

BEGIN SPECS<br />
ITLIMT = 50<br />
CONTOL = 1.0E-8<br />
LUSIZE = 16<br />
END SPECS<br />
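A reader for this "keyword = value" format between BEGIN SPECS and END SPECS is simple to sketch (illustrative only; this is not the actual MILES parser, and it ignores the keyword-length limit):

```python
def parse_options(text):
    """Read 'keyword = value' lines between BEGIN SPECS and END SPECS."""
    opts, active = {}, False
    for line in text.splitlines():
        line = line.strip()
        if line.upper() == 'BEGIN SPECS':
            active = True
        elif line.upper() == 'END SPECS':
            active = False
        elif active and '=' in line:
            key, _, val = line.partition('=')
            opts[key.strip().upper()] = val.strip()
    return opts

sample = """BEGIN SPECS
ITLIMT = 50
CONTOL = 1.0E-8
LUSIZE = 16
END SPECS"""
```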

All entries are of the form "keyword = value", where keywords<br />
have at most 6 characters. The following are recognized keywords<br />

which may appear in the options file, identified by keyword, type<br />

and default value, and grouped according to function:<br />

Termination control<br />

CONTOL r Default = 1.0E-6<br />

is the convergence tolerance. Whenever an iterate is<br />

encountered for which ε(z) < CONTOL, the algorithm<br />

terminates. This corresponds to the GAMS/MINOS parameter<br />

"Row tolerance".<br />

ITLIMT i Default = 25<br />

is an upper bound on the number of Newton iterations. This<br />

9 When invoking MILES from within GAMS it is possible to<br />

use one of several option file names. See the README<br />

documentation with GAMS 2.25 for details.<br />



corresponds to the GAMS/MINOS parameter "Major iterations".<br />

ITERLIM i Default = 1000<br />

is an upper bound on the cumulative number of Lemke<br />

iterations. When MILES is invoked from within a GAMS<br />

program (either with MPSGE or GAMS/MCP), the model suffix .ITERLIM can<br />

be used to set this value. This corresponds to the<br />

GAMS/MINOS parameter "Iterations limit".<br />

NORM r Default = 3<br />

defines the vector norm to be used for evaluating ,(z).<br />

Acceptable values are 1, 2 or 3 which correspond to p = 1, 2<br />

and +∞.<br />

NRFMAX i Default = 3<br />

establishes an upper bound on the number of factorizations<br />

in any LCP. This avoids wasting lots of CPU if a subproblem<br />

proves difficult to solve.<br />

NRSMAX i Default = 1<br />

sets an upper bound on the number of restarts which the<br />

linear subproblem solver will undertake before giving up.<br />

Basis factorization control<br />

FACTIM r Default = 120.0<br />

indicates the maximum number of CPU seconds between<br />

recalculation of the basis factors.<br />

INVFRQ i Default = 200<br />

determines the maximum number of Lemke iterations between<br />



recalculation of the basis factors. This corresponds to the<br />

GAMS/MINOS parameter "Factorization frequency".<br />

ITCH i Default = 30<br />

indicates the frequency with which the factorization is<br />

checked. The number refers to the number of basis<br />

replacements operations between refinements. This<br />

corresponds to the GAMS/MINOS parameter "Check frequency".<br />

Pivot Selection<br />

PLINFY r Default = 1.D20<br />

is the value assigned for "plus infinity" ("+INF" in GAMS<br />

notation).<br />

ZTOLPV r Default = 3.644E-11 (EPS**(2./3.)) 10<br />

is the absolute pivot tolerance. This corresponds, roughly,<br />

to the GAMS/MINOS parameter "Pivot tolerance" as it applies<br />

for nonlinear problems.<br />

ZTOLRP r Default = 3.644E-11 (EPS**(2./3.))<br />

is the relative pivot tolerance. This corresponds, roughly,<br />

to the GAMS/MINOS parameter "Pivot tolerance" as it applies<br />

for nonlinear problems.<br />

ZTOLZE r Default = 1.E-6<br />

is used in the subproblem solution to determine when any<br />

variable has exceeded an upper or lower bound. This<br />

corresponds to the GAMS/MINOS parameter "Feasibility tolerance".<br />

[10] A number of tolerances are set on the basis of the<br />

machine precision, EPS. For Lahey Fortran running on an 80486<br />

processor, EPS = 2.2D-16.<br />

Linearization and Data Control<br />

SCALE l Default = .TRUE.<br />

invokes row and column scaling of the LCP tableau in every<br />

iteration. This corresponds, roughly, to the GAMS/MINOS<br />

switch "scale all variables".<br />

ZTOLDA r Default = 1.483E-08 (EPS**(1/2))<br />

sets a lower bound on the magnitude of nonzeros recognized<br />

in the linearization. All coefficients with absolute value<br />

less than ZTOLDA are ignored.<br />

Newton Iteration Control<br />

DMPFAC r Default = 0.5<br />

is the Newton damping factor used in the algorithm described above.<br />

MINSTP r Default = 0.03<br />

is the minimum Newton step length used in the algorithm described above.<br />

Output Control<br />

INVLOG l Default = .TRUE.<br />

is a switch which requests LUSOL to generate a report with<br />

basis statistics following each refactorization.<br />

LCPDMP l Default = .FALSE.<br />

is a switch to generate a printout of the LCP data after<br />

- 18 -


scaling.<br />

LCPECH l Default = .FALSE.<br />

is a switch to generate a printout of the LCP data before<br />

scaling, as evaluated.<br />

LEVOUT i Default = 0<br />

sets the level of debug output written to the log and status<br />

files. The lowest meaningful value is -1 and the highest is<br />

3. This corresponds, roughly, to the GAMS/MINOS parameter<br />

"Print level"<br />

PIVLOG l Default = .FALSE.<br />

is a switch to generate a status file listing of the Lemke<br />

pivot sequence.<br />
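
The output switches above are set in the MILES option file using the BEGIN/END syntax echoed in Table 3. For example, to request maximum debugging output (the same settings used to generate Tables 3-6 below):<br />

```
BEGIN
* Maximum debug output to the log and status files:
  LEVOUT = 3
* Log the Lemke pivot sequence:
  PIVLOG = .TRUE.
* Echo the LCP data before scaling:
  LCPECH = .TRUE.
END
```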

LUSOL parameters (as with MINOS 5.4, except LUSIZE)<br />

LUSIZE i Default = 10<br />

is used to estimate the number of LU nonzeros which will be<br />

stored, as a multiple of the number of nonzeros in the<br />

Jacobian matrix.<br />

LPRINT i Default=0<br />

is the print level, < 0 suppresses output.<br />

= 0 gives error messages.<br />

= 1 gives debug output from some of the other<br />

routines in LUSOL.<br />

>= 2 gives the pivot row and column and the<br />

number of rows and columns involved at each<br />

elimination step in lu1fac.<br />

MAXCOL i Default=5<br />

is the maximum number of columns allowed to be searched<br />

in lu1fac in a Markowitz-type search for the next pivot element.<br />

For some of the factorization, the number of rows searched is<br />

maxrow = maxcol - 1.<br />

ELMAX1 r Default=10.0<br />

is the maximum multiplier allowed in L during factor.<br />

ELMAX2 r Default=10<br />

is the maximum multiplier allowed in L during updates.<br />

SMALL r Default=EPS**(4./5.)<br />

is the absolute tolerance for treating reals as zero. IBM<br />

double: 3.0d-13<br />

UTOL1 r Default=EPS**(2./3.)<br />

is the absolute tol for flagging small diagonals of U. IBM<br />

double: 3.7d-11<br />

UTOL2 r Default=EPS**(2./3.)<br />

is the relative tol for flagging small diagonals of U. IBM<br />

double: 3.7d-11<br />

USPACE r Default=3.0<br />

is a factor which limits waste space in U. In lu1fac, the<br />

row or column lists are compressed if their length exceeds<br />

uspace times the length of either file after the last<br />

compression.<br />

DENS1 r Default=0.3<br />

is the density at which the Markowitz strategy should search<br />

maxcol columns and no rows.<br />

DENS2 r Default=0.6<br />

is the density at which the Markowitz strategy should search<br />

only 1 column or (preferably) use a dense LU for all the<br />

remaining rows and columns.<br />

5. Log File Output<br />

The log file is intended for display on the screen in order<br />

to permit monitoring progress. Relatively little output is<br />

generated.<br />

A sample iteration log is displayed in Table 2. This output<br />

is from two cases solved in succession. This and subsequent<br />

output comes from program TRNSP.FOR which calls the MILES library<br />

directly. (When MILES is invoked from within GAMS, at most one<br />

case is processed at a time.)<br />

Table 2 Sample Iteration Log<br />

MILES (July 1993) Ver:225-386-02<br />

Thomas F. Rutherford<br />

Department of Economics<br />

University of Colorado<br />

Technical support available only by Email: TOM@GAMS.COM<br />

Work space allocated -- 0.01 Mb<br />

Initial deviation ........ 3.250E+02 P_01<br />

Convergence tolerance .... 1.000E-06<br />

0 3.25E+02 1.00E+00 (P_01 )<br />

1 1.14E-13 1.00E+00 (W_02 )<br />

Major iterations ........ 1<br />

Lemke pivots ............ 10<br />

Refactorizations ........ 2<br />

Deviation ............... 1.137E-13<br />

Solved.<br />

Work space allocated -- 0.01 Mb<br />

Initial deviation ........ 5.750E+02 W_02<br />

Convergence tolerance .... 1.000E-06<br />

0 5.75E+02 1.00E+00 (W_02 )<br />

1 2.51E+01 1.00E+00 (P_01 )<br />

2 4.53E+00 1.00E+00 (P_01 )<br />

3 1.16E+00 1.00E+00 (P_01 )<br />

4 3.05E-01 1.00E+00 (P_01 )<br />

5 8.08E-02 1.00E+00 (P_01 )<br />

6 2.14E-02 1.00E+00 (P_01 )<br />

7 5.68E-03 1.00E+00 (P_01 )<br />

8 1.51E-03 1.00E+00 (P_01 )<br />

9 4.00E-04 1.00E+00 (P_01 )<br />

10 1.06E-04 1.00E+00 (P_01 )<br />

11 2.82E-05 1.00E+00 (P_01 )<br />

12 7.47E-06 1.00E+00 (P_01 )<br />

13 1.98E-06 1.00E+00 (P_01 )<br />

14 5.26E-07 1.00E+00 (P_01 )<br />

Major iterations ........ 14<br />

Lemke pivots ............ 23<br />

Refactorizations ........ 15<br />

Deviation ............... 5.262E-07<br />

Solved.<br />

The first line of the log output gives the MILES program<br />

date and version information. This information is important for<br />

bug reports.<br />

The line beginning "Work space..." reports the amount of<br />

memory which has been allocated to solve the model - 10K for this<br />

example. Thereafter, the initial deviation is reported, together<br />

with the name of the variable associated with the largest<br />

imbalance.<br />

The next line reports the convergence<br />

tolerance.<br />

The lines beginning 0 and 1 are the major iteration reports<br />

for those iterations. The number following the iteration number<br />

is the current deviation, and the third number is the Armijo step<br />

length. The name of the variable complementary to the equation<br />

with the largest associated deviation is reported in parentheses<br />

at the end of the line.<br />

Following the final iteration is a summary of iterations,<br />

refactorizations, and final deviation. The final message reports<br />

the solution status. In this case, the model has been<br />

successfully processed ("Solved.").<br />

6. Status File Output<br />

The status file reports more details regarding the solution<br />

process than are provided in the log file. Typically, this file<br />

is written to disk and examined only if a problem arises. Within<br />

GAMS, the status file appears in the listing only following the<br />

GAMS statement "OPTION SYSOUT=ON;".<br />
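
For instance, a minimal sketch (the model name MYMODEL is hypothetical; substitute your own):<br />

```
OPTION MCP = MILES;
* Pass the MILES status file through to the listing:
OPTION SYSOUT = ON;
SOLVE MYMODEL USING MCP;
```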

The level of output to the status file is determined by the<br />

options passed to the solver. In the default configuration, the<br />

status file receives all information written to the log file<br />

together with a detailed listing of all switches and tolerances and a<br />

report of basis factorization statistics.<br />

When output levels are increased from their default values<br />

using the options file, the status file can receive considerably<br />

more output to assist in debugging. Tables 3-6 present a status<br />

file generated with LEVOUT=3 (maximum), PIVLOG=T, and LCPECH=T.<br />

The status file begins with the same header as the log file.<br />

Thereafter is a complete echo-print of the user-supplied option<br />

file when one is provided. Following the core allocation report<br />

is a full echo-print of control parameters, switches and<br />

tolerances as specified for the current run.<br />

Table 4 continues the status file. The iteration-by-<br />

iteration report of variable and function values is produced<br />

whenever LEVOUT >= 2. Table 4 also contains an LCP echo-print.<br />

This report has two sections: $ROWS and $COLUMNS. The four<br />

columns of numbers in the $ROWS section are the constant vector<br />

(q), the current estimate of level values for the associated<br />

variables (z), and the lower and upper bound vectors (l and u).<br />

The letters L and U which appear between the ROW and Z columns<br />

are used to identify variables which are non-basic at their lower<br />

and upper bounds, respectively. In this example, all upper<br />

bounds equal +INF, so no variables are non-basic at their upper<br />

bound.<br />

By convention, only variable names (and not equation names) appear<br />

in the status file. An equation is identified by the<br />

corresponding variable. We therefore see, in the $COLUMNS:<br />

section of the matrix echo-print, that the row names correspond<br />

to the names of z variables. The names assigned to variables<br />

z(i), w(i) and v(i) carry the prefixes Z-, W- and V-, as shown<br />

in the $COLUMNS section. The nonzeros for the W- and V- columns<br />

are not shown because they are assumed to be -I and +I.<br />

The status file output continues on Table 5 where the first<br />

half of the table reports output from the matrix scaling<br />

procedure, and the second half reports the messages associated<br />

with initiation of Lemke’s procedure.<br />

The "lu6chk warning" is a LUSOL report. Thereafter are two<br />

factorization reports. Two factorizations are undertaken here<br />

because the first basis was singular, so the program installs all<br />

the lower bound slacks in place of the matrix defined by the<br />

initial values, z.<br />

Following the second factorization report, at the bottom of<br />

Table 5 is a summary of the initial pivot. "Infeasible in 3 rows."<br />

indicates that ˜h contains 3 nonzero elements. "Maximum<br />

infeasibility" reports the largest amount by which a structural<br />

variable violates an upper or lower bound. "Artificial column<br />

with 3 elements." indicates that the vector B0^(-1) h contains 3<br />

elements (note that in this case B0 = -I because the initial<br />

basis was singular, hence the equivalence between the number of<br />

nonzeros in ˜h and h).<br />

Table 6 displays the final section of the status file. At<br />

the top of page 6 is the Lemke iteration log. The columns are<br />

interpreted as follows:<br />

ITER is the iteration index beginning with 0,<br />

STATUS is a statistic representing the efficiency of the<br />

Lemke path. Formally, status is the ratio of the<br />

minimum number of pivots from B0 to the current<br />

basis divided by the actual number of pivots.<br />

When the status is 1, Lemke’s algorithm is<br />

performing virtually as efficiently as a direct<br />

factorization (apart from the overhead of basis<br />

factor updates.)<br />

Z% indicates the percentage of columns in the basis<br />

that are "structural" (z’s).<br />

Z0 indicates the value of the artificial variable.<br />

Notice that in this example, the artificial<br />

variable declines monotonically from its initial<br />

value of unity.<br />

ERROR is a column in which the factorization error is<br />

reported, when it is computed. For this run,<br />

ITCH=30 and hence no factorization errors are<br />

computed.<br />

INFEAS. is a column in which the magnitude of the<br />

infeasibility introduced by the artificial column<br />

(defined using the box-norm) is reported. (In<br />

MILES the cover vector h contains many different<br />

nonzero values, not just 1’s; so there may be a<br />

large difference between the magnitude of the<br />

artificial variable and the magnitude of the<br />

induced infeasibility.)<br />

PIVOTS reports the pivot magnitude in both absolute terms<br />

(the first number) and relative terms (the second<br />

number). The relative pivot size is the ratio of<br />

the pivot element to the norm of the incoming<br />

column.<br />

IN/OUT report the indices (not names) of the incoming and<br />

outgoing columns for every iteration. Notice that<br />

Lemke’s iteration log concludes with variable z0<br />

exiting the basis.<br />

The convergence report for iteration 1 is no different from the<br />

report written to the log file, and following this is a second<br />

report of variable and function values. We see here that a<br />

solution has been obtained following a single subproblem. This<br />

is because the underlying problem is, in fact, linear.<br />

The status file (for this case) concludes with an iteration<br />

summary identical to the log file report and a summary of how<br />

much CPU time was employed overall and within various subtasks.<br />

(Don’t be alarmed if the sum of the last five numbers does not<br />

add up to the first number - some cycles are not monitored<br />

precisely.)<br />

Table 3 Status File with Debugging Output (page 1 of 4)<br />

MILES (July 1993) Ver:225-386-02<br />

Thomas F. Rutherford<br />

Department of Economics<br />

University of Colorado<br />

Technical support available only by Email: TOM@GAMS.COM<br />

User supplied option file:<br />

>BEGIN<br />

> PIVLOG = .TRUE.<br />

> LCPECH = .TRUE.<br />

> LEVOUT = 3<br />

>END<br />

Work space allocated -- 0.01 Mb<br />

NEWTON algorithm control parameters:<br />

Major iteration limit ..(ITLIMT). 25<br />

Damping factor .........(DMPFAC). 5.000E-01<br />

Minimum step length ....(MINSTP). 1.000E-02<br />

Norm for deviation .....(NORM)... 3<br />

Convergence tolerance ..(CONTOL). 1.000E-06<br />

LEMKE algorithm control parameters:<br />

Iteration limit .......(ITERLIM). 1000<br />

Factorization frequency (INVFRQ). 200<br />

Feasibility tolerance ..(ZTOLZE). 1.000E-06<br />

Coefficient tolerance ..(ZTOLDA). 1.483E-08<br />

Abs. pivot tolerance ...(ZTOLPV). 3.644E-11<br />

Rel. pivot tolerance ...(ZTOLRP). 3.644E-11<br />

Cover vector tolerance .(ZTOLZ0). 1.000E-06<br />

Scale every iteration ...(SCALE). T<br />

Restart limit ..........(NRSMAX). 1<br />

Output control switches:<br />

LCP echo print ...........(LCPECH). F<br />

LCP dump .................(LCPDMP). T<br />

Lemke inversion log ......(INVLOG). T<br />

Lemke pivot log ......... (PIVLOG). T<br />

Initial deviation ........ 3.250E+02 P_01<br />

Convergence tolerance .... 1.000E-06<br />

================================<br />

Convergence Report, Iteration 0<br />

===============================================================<br />

ITER DEVIATION STEP<br />

0 3.25E+02 1.00E+00 (P_01 )<br />

===============================================================<br />

Table 4 Status File with Debugging Output (page 2 of 4)<br />

Iteration 0 values.<br />

ROW Z F<br />

-------- ------------ ------------<br />

X_01_01 L 0.00000E+00 -7.75000E-01<br />

X_01_02 L 0.00000E+00 -8.47000E-01<br />

X_01_03 L 0.00000E+00 -8.38000E-01<br />

X_02_01 L 0.00000E+00 -7.75000E-01<br />

X_02_02 L 0.00000E+00 -8.38000E-01<br />

X_02_03 L 0.00000E+00 -8.74000E-01<br />

W_01 L 0.00000E+00 3.25000E+02<br />

W_02 L 0.00000E+00 5.75000E+02<br />

P_01 1.00000E+00 -3.25000E+02<br />

P_02 1.00000E+00 -3.00000E+02<br />

P_03 1.00000E+00 -2.75000E+02<br />

==================================<br />

Function Evaluation, Iteration: 0<br />

==================================<br />

$ROWS:<br />

X_01_01 -2.25000000E-01 0.00000000E+00 0.00000000E+00 1.00000000E+20<br />

X_01_02 -1.53000004E-01 0.00000000E+00 0.00000000E+00 1.00000000E+20<br />

X_01_03 -1.61999996E-01 0.00000000E+00 0.00000000E+00 1.00000000E+20<br />

X_02_01 -2.25000000E-01 0.00000000E+00 0.00000000E+00 1.00000000E+20<br />

X_02_02 -1.61999996E-01 0.00000000E+00 0.00000000E+00 1.00000000E+20<br />

X_02_03 -1.25999998E-01 0.00000000E+00 0.00000000E+00 1.00000000E+20<br />

W_01 -3.25000000E+02 0.00000000E+00 0.00000000E+00 1.00000000E+00<br />

W_02 -5.75000000E+02 0.00000000E+00 0.00000000E+00 1.00000000E+00<br />

P_01 3.25000000E+02 1.00000000E+00 0.00000000E+00 1.00000000E+20<br />

P_02 3.00000000E+02 1.00000000E+00 0.00000000E+00 1.00000000E+20<br />

P_03 2.75000000E+02 1.00000000E+00 0.00000000E+00 1.00000000E+20<br />

... 0.00000000E+00 0.00000000E+00 0.00000000E+00 0.00000000E+00<br />

$COLUMNS:<br />

Z-X_01_01 W_01 -1.00000000E+00<br />

P_01 1.00000000E+00<br />

Z-X_01_02 W_01 -1.00000000E+00<br />

P_02 1.00000000E+00<br />

Z-X_01_03 W_01 -1.00000000E+00<br />

P_03 1.00000000E+00<br />

Z-X_02_01 W_02 -1.00000000E+00<br />

P_01 1.00000000E+00<br />

Z-X_02_02 W_02 -1.00000000E+00<br />

P_02 1.00000000E+00<br />

Z-X_02_03 W_02 -1.00000000E+00<br />

P_03 1.00000000E+00<br />

Z-W_01 X_01_01 1.00000000E+00<br />

X_01_02 1.00000000E+00<br />

X_01_03 1.00000000E+00<br />

Z-W_02 X_02_01 1.00000000E+00<br />

X_02_02 1.00000000E+00<br />

X_02_03 1.00000000E+00<br />

Z-P_01 X_01_01 -1.00000000E+00<br />

X_02_01 -1.00000000E+00<br />

Z-P_02 X_01_02 -1.00000000E+00<br />

X_02_02 -1.00000000E+00<br />

Z-P_03 X_01_03 -1.00000000E+00<br />

X_02_03 -1.00000000E+00<br />

... ... 0.00000000E+00<br />

Table 5 Status File with Debugging Output (page 3 of 4)<br />

SCALING LCP DATA<br />

-------<br />

MIN ELEM MAX ELEM MAX COL RATIO<br />

AFTER 0 1.00E+00 1.00E+00 1.00<br />

AFTER 1 1.00E+00 1.00E+00 1.00<br />

AFTER 2 1.00E+00 1.00E+00 1.00<br />

AFTER 3 1.00E+00 1.00E+00 1.00<br />

SCALING RESULTS:<br />

A(I,J)


Table 6 Status File with Debugging Output (page 4 of 4)<br />

LEMKE PIVOT STEPS<br />

=================<br />

ITER STATUS Z% Z0 ERROR INFEAS. ---- PIVOTS ---- IN OUT<br />

1 1.00 0 1.000 3.E+02 1.E+00 1 Z0 W 9<br />

2 1.00 9 1.000 1.E+00 1.E+00 2 Z 9 W 1<br />

3 1.00 18 0.997 9.E-01 9.E-01 1 Z 1 W 10<br />

4 1.00 27 0.997 1.E+00 1.E+00 1 Z 10 W 2<br />

5 1.00 36 0.996 9.E-01 4.E-01 1 Z 2 W 11<br />

6 1.00 45 0.996 1.E+00 1.E+00 1 Z 11 W 6<br />

7 1.00 55 0.479 2.E+00 1.E+00 1 Z 6 W 7<br />

8 1.00 64 0.479 1.E+00 1.E+00 1 Z 7 W 4<br />

9 1.00 73 0.000 1.E+00 1.E+00 2 Z 4 W 8<br />

10 1.00 73 0.000 1.E-03 2.E-03 1 V 8 Z0<br />

================================<br />

Convergence Report, Iteration 1<br />

===============================================================<br />

ITER DEVIATION STEP<br />

0 3.25E+02 1.00E+00<br />

1 1.14E-13 1.00E+00 (W_02 )<br />

===============================================================<br />

Iteration 1 values.<br />

ROW Z F<br />

-------- ------------ ------------<br />

X_01_01 2.50000E+01 -8.32667E-17<br />

X_01_02 3.00000E+02 -5.55112E-17<br />

X_01_03 L 0.00000E+00 3.60000E-02<br />

X_02_01 3.00000E+02 -8.32667E-17<br />

X_02_02 L 0.00000E+00 8.99999E-03<br />

X_02_03 2.75000E+02 2.77556E-17<br />

W_01 1.00000E+00 -1.13687E-13<br />

W_02 1.00000E+00 1.13687E-13<br />

P_01 1.22500E+00 0.00000E+00<br />

P_02 1.15300E+00 0.00000E+00<br />

P_03 1.12600E+00 0.00000E+00<br />

Major iterations ........ 1<br />

Lemke pivots ............ 10<br />

Refactorizations ........ 2<br />

Deviation ............... 1.137E-13<br />

Solved.<br />

Total solution time .: 0.6 sec.<br />

Function & Jacobian..: 0.2 sec.<br />

LCP solution ........: 0.2 sec.<br />

Refactorizations ....: 0.1 sec.<br />

FTRAN ...............: 0.0 sec.<br />

Update ..............: 0.1 sec.<br />

7. Termination Messages<br />

Basis factorization error in INVERT.<br />

An unexpected error code returned by LUSOL. This should<br />

normally not occur. Examine the status file for a message<br />

from LUSOL. [11]<br />

Failure to converge.<br />

Two successive iterates are identical - the Newton search<br />

direction is not defined. This should normally not occur.<br />

Inconsistent parameters ZTOLZ0, ZTOLZE.<br />

ZTOLZ0 determines the smallest value loaded into the cover<br />

vector h, whereas ZTOLZE is the feasibility tolerance<br />

employed in the Harris pivot selection procedure. If ZTOLZ0<br />

< -ZTOLZE, Lemke’s algorithm cannot be executed because the<br />

initial basis is infeasible.<br />

Insufficient space for linearization.<br />

Available memory is inadequate for holding the nonzeros in<br />

the Jacobian. More memory needs to be allocated. On a PC,<br />

you probably will need to install more physical memory - if<br />

there is insufficient space for the Jacobi matrix, there is<br />

far too little memory for holding the LU factors of the same<br />

matrix.<br />

Insufficient space to invert.<br />

More memory needs to be allocated for basis factors.<br />

Increase the value of LUSIZE in the options file, or assign<br />

a larger value to .workspace if MILES is accessed<br />

through GAMS.<br />

[11] Within GAMS, insert the line "OPTION SYSOUT=ON;" prior<br />

to the solve statement and resubmit the program in order to pass<br />

the MILES solver status file through to the listing.<br />
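
Returning to the "Insufficient space" remedies above: as a sketch (the model name MYMODEL and both values are hypothetical), the options-file entry and the GAMS workspace assignment look like this:<br />

```
* In the MILES options file (between BEGIN and END):
BEGIN
* Allow 20 times the Jacobian nonzero count for the LU factors:
  LUSIZE = 20
END
```

or, in the GAMS program prior to the solve statement:<br />

```
* Request 4 Mb of solver workspace for this model:
MYMODEL.WORKSPACE = 4;
SOLVE MYMODEL USING MCP;
```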

Iteration limit exceeded.<br />

This can result from either exceeding the major (Newton) or<br />

minor (Lemke) iterations limit. When MILES is invoked from<br />

GAMS, the Lemke iteration limit can be set with the<br />

statement ".iterlim = xx;" (the default value is<br />

1000). The Newton iteration limit is 25 by default, and it<br />

can be modified only through the ITLIMT option.<br />

Resource interrupt.<br />

Elapsed CPU time exceeds options parameter RESLIM. To<br />

increase this limit, either add RESLIM = xxx in the options<br />

file or (if MILES is invoked from within GAMS), add a GAMS<br />

statement ".RESLIM = xxx;".<br />
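
For example, a sketch using the FIXEDQTY model from the transport example later in this chapter (the limit values are arbitrary):<br />

```
* Raise the Lemke iteration limit and the CPU time limit
* before the solve statement:
FIXEDQTY.ITERLIM = 5000;
FIXEDQTY.RESLIM  = 600;
SOLVE FIXEDQTY USING MCP;
```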

Singular matrix encountered.<br />

Lemke’s algorithm has been interrupted due to a singularity<br />

arising in the basis factorization, either during a column<br />

replacement or during a refactorization. For some reason, a<br />

restart is not possible.<br />

Termination on a secondary ray.<br />

Lemke’s algorithm terminated on a secondary ray. For some<br />

reason, a restart is not possible.<br />

Unknown termination status.<br />

The termination status flag has not been set, but the code<br />

has been interrupted. Look in the status file for a previous<br />

message. This termination code should not happen often.<br />

8. References<br />

K.J. Anstreicher, J. Lee and T.F. Rutherford "Crashing a Maximum<br />

Weight Complementary Basis", Mathematical Programming (1992).<br />

A. Brooke, D. Kendrick, and A. Meeraus GAMS: A User’s Guide,<br />

Scientific Press, (1987).<br />

R.W. Cottle and J.S. Pang The Linear Complementarity Problem,<br />

Academic Press, (1992).<br />

J.E. Dennis and R.B. Schnabel Numerical Methods for Unconstrained<br />

Optimization and Nonlinear Equations, Prentice-Hall (1983).<br />

S. Dirkse "Robust solution of mixed complementarity problems",<br />

Computer Sciences Department, University of Wisconsin (1992).<br />

B.C. Eaves, "A locally quadratically convergent algorithm for<br />

computing stationary points," Tech. Rep., Department of<br />

Operations Research, Stanford University, Stanford, CA (1978).<br />

P.E. Gill, W. Murray, M.A. Saunders and M.H. Wright "Maintaining<br />

LU factors of a general sparse matrix", Linear Algebra and its<br />

Applications 88/89, 239-270 (1991).<br />

C.B. Garcia and W.I. Zangwill Pathways to Solutions, Fixed Points<br />

and Equilibria, Prentice-Hall (1981).<br />

P. Harker and J.S. Pang "Finite-dimensional variational<br />

inequality and nonlinear complementarity problems: a survey of<br />

theory, algorithms and applications", Mathematical Programming<br />

48, pp. 161-220 (1990).<br />

W.W. Hogan, "Energy policy models for project independence,"<br />

Computation and Operations Research 2 (1975) 251-271.<br />

N.H. Josephy, "Newton’s method for generalized equations",<br />

Technical Summary Report #1965, Mathematics Research Center,<br />

University of Wisconsin - Madison (1979).<br />

I. Kaneko, "A linear complementarity problem with an n by 2n ’P’-matrix",<br />

Mathematical Programming Study 7, pp. 120-141, (1978).<br />

C.E. Lemke "Bimatrix equilibrium points and mathematical<br />

programming", Management Science 11, pp. 681-689, (1965).<br />

L. Mathiesen, "Computation of economic equilibria by a sequence<br />

of linear complementarity problems", Mathematical Programming<br />

Study 23 (1985).<br />

P.V. Preckel, "NCPLU Version 2.0 User’s Guide", Working Paper,<br />

Department of Agricultural Economics, Purdue University, (1987).<br />

W.H. Press, B.P.Flannery, S.A. Teukolsky, W.T. Vetterling<br />

Numerical Recipes: The Art of Scientific Computing, Cambridge<br />

University Press (1986).<br />

J.M. Ortega and W.C. Rheinboldt, Iterative Solution of Nonlinear<br />

Equations in Several Variables, Academic Press (1970).<br />

S.M. Robinson, "A quadratically-convergent algorithm for general<br />

nonlinear programming problems," Mathematical Programming Study 3<br />

(1975).<br />

T.F. Rutherford "Extensions of GAMS for variational and<br />

complementarity problems with applications in economic<br />

equilibrium analysis", Working Paper 92-7, Department of<br />

Economics, University of Colorado (1992a).<br />

T.F. Rutherford "Applied general equilibrium modeling using<br />

MPS/GE as a GAMS subsystem", Working Paper 92-15, Department of<br />

Economics, University of Colorado (1992b).<br />

Table 7 Transport Model in GAMS/MCP (page 1 of 2)<br />

*==>TRNSP.GMS<br />

option mcp=miles;<br />

$TITLE LP TRANSPORTATION PROBLEM FORMULATED AS AN ECONOMIC EQUILIBRIUM<br />

* =================================================================<br />

* In this file, Dantzig’s original transportation model is<br />

* reformulated as a linear complementarity problem. We first<br />

* solve the model with fixed demand and supply quantities, and<br />

* then we incorporate price-responsiveness on both sides of the<br />

* market.<br />

*<br />

* T.Rutherford 3/91 (revised 5/91)<br />

* =================================================================<br />

* This problem finds a least cost shipping schedule that meets<br />

* requirements at markets and supplies at factories<br />

*<br />

* References:<br />

* Dantzig, G B., Linear Programming and Extensions<br />

* Princeton University Press, Princeton, New Jersey, 1963,<br />

* Chapter 3-3.<br />

*<br />

* This formulation is described in detail in Chapter 2<br />

* (by Richard E. Rosenthal) of GAMS: A Users’ Guide.<br />

* (A Brooke, D Kendrick and A Meeraus, The Scientific Press,<br />

* Redwood City, California, 1988.)<br />

*<br />

SETS<br />

I canning plants / SEATTLE, SAN-DIEGO /<br />

J markets / NEW-YORK, CHICAGO, TOPEKA / ;<br />

PARAMETERS<br />

A(I) capacity of plant i in cases (when prices are unity)<br />

/ SEATTLE 325<br />

SAN-DIEGO 575 /,<br />

B(J) demand at market j in cases (when prices equal unity)<br />

/ NEW-YORK 325<br />

CHICAGO 300<br />

TOPEKA 275 /,<br />

ESUB(J) Price elasticity of demand (at prices equal to unity)<br />

/ NEW-YORK 1.5<br />

CHICAGO 1.2<br />

TOPEKA 2.0 /;<br />

TABLE D(I,J) distance in thousands of miles<br />

NEW-YORK CHICAGO TOPEKA<br />

SEATTLE 2.5 1.7 1.8<br />

SAN-DIEGO 2.5 1.8 1.4 ;<br />

SCALAR F freight in dollars per case per thousand miles /90/ ;<br />

PARAMETER C(I,J) transport cost in thousands of dollars per case ;<br />

C(I,J) = F * D(I,J) / 1000 ;<br />

PARAMETER PBAR(J) Reference price at demand node J;<br />

Table 8 Transport Model in GAMS/MCP (page 2 of 2)<br />

POSITIVE VARIABLES<br />

W(I) shadow price at supply node i,<br />

P(J) shadow price at demand node j,<br />

X(I,J) shipment quantities in cases;<br />

EQUATIONS<br />

SUPPLY(I) supply limit at plant i,<br />

FXDEMAND(J) fixed demand at market j,<br />

PRDEMAND(J) price-responsive demand at market j,<br />

PROFIT(I,J) zero profit conditions;<br />

PROFIT(I,J).. W(I) + C(I,J) =G= P(J);<br />

SUPPLY(I).. A(I) =G= SUM(J, X(I,J));<br />

FXDEMAND(J).. SUM(I, X(I,J)) =G= B(J);<br />

PRDEMAND(J).. SUM(I, X(I,J)) =G= B(J) * (PBAR(J)/P(J))**ESUB(J);<br />

* Declare models including specification of equation-variable association:<br />

MODEL FIXEDQTY / PROFIT.X, SUPPLY.W, FXDEMAND.P/ ;<br />

MODEL EQUILQTY / PROFIT.X, SUPPLY.W, PRDEMAND.P/ ;<br />

* Initial estimate:<br />

P.L(J) = 1; W.L(I) = 1;<br />

PARAMETER REPORT(*,*,*) Summary report;<br />

SOLVE FIXEDQTY USING MCP;<br />

REPORT("FIXED",I,J) = X.L(I,J); REPORT("FIXED","Price",J) = P.L(J);<br />

REPORT("FIXED",I,"Price") = W.L(I);<br />

* Calibrate the demand functions:<br />

PBAR(J) = P.L(J);<br />

* Replicate the fixed demand equilibrium:<br />

SOLVE EQUILQTY USING MCP;<br />

REPORT("EQUIL",I,J) = X.L(I,J); REPORT("EQUIL","Price",J) = P.L(J);<br />

REPORT("EQUIL",I,"Price") = W.L(I);<br />

DISPLAY "BENCHMARK CALIBRATION", REPORT;<br />

* Compute a counter-factual equilibrium:<br />

C("SEATTLE","CHICAGO") = 0.5 * C("SEATTLE","CHICAGO");<br />

SOLVE FIXEDQTY USING MCP;<br />

REPORT("FIXED",I,J) = X.L(I,J); REPORT("FIXED","Price",J) = P.L(J);<br />

REPORT("FIXED",I,"Price") = W.L(I);<br />

* Replicate the fixed demand equilibrium:<br />

SOLVE EQUILQTY USING MCP;<br />

REPORT("EQUIL",I,J) = X.L(I,J); REPORT("EQUIL","Price",J) = P.L(J);<br />

REPORT("EQUIL",I,"Price") = W.L(I);<br />

DISPLAY "Reduced Seattle-Chicago transport cost:", REPORT;<br />


GAMS/MINOS MINOS 1<br />

Table of Contents:<br />

GAMS/MINOS<br />

1. INTRODUCTION.............................................................................................. 2<br />

2. HOW TO RUN A MODEL WITH GAMS/MINOS.......................................... 2<br />

3. OVERVIEW OF GAMS/MINOS...................................................................... 2<br />

3.1. Linear programming .................................................................................... 3<br />

3.2. Problems with a Nonlinear Objective.......................................................... 4<br />

3.3. Problems with Nonlinear Constraints.......................................................... 6<br />

4. GAMS OPTIONS .............................................................................................. 7<br />

4.1. <strong>Options</strong> specified through the option statement........................................... 7<br />

4.2. <strong>Options</strong> specified through model suffixes ................................................... 8<br />

5. SUMMARY OF MINOS OPTIONS ................................................................. 8<br />

5.1. Output related options.................................................................................. 8<br />

5.2. <strong>Options</strong> affecting Tolerances ....................................................................... 9<br />

5.3. <strong>Options</strong> affecting Iteration limits ................................................................. 9<br />

5.4. Other algorithmic options ............................................................................ 9<br />

5.5. Examples of GAMS/MINOS option file ................................................... 10<br />

6. Special Notes.................................................................................................... 11<br />

6.1. Modeling Hints .......................................................................................... 11<br />

6.2. Storage ....................................................................................................... 11<br />

7. The GAMS/MINOS Log File........................................................................... 12<br />

7.1. Linear Programs......................................................................................... 12<br />

7.2. Nonlinear Objective................................................................................... 15<br />

7.3. Nonlinear constraints ................................................................................. 17<br />

8. Detailed Description of MINOS <strong>Options</strong>......................................................... 19<br />

9. Acknowledgements .......................................................................................... 39<br />

10. References ...................................................................................................... 40


2 MINOS GAMS/MINOS<br />

1. INTRODUCTION<br />

This section describes the GAMS interface to MINOS, which is a general-purpose<br />

nonlinear programming solver. GAMS/MINOS is a specially adapted version of the solver<br />

that is used for solving linear and nonlinear programming problems.<br />

GAMS/MINOS is designed to find solutions that are locally optimal. The nonlinear<br />

functions in a problem must be smooth (i.e., their first derivatives must exist). The<br />

functions need not be separable. Integer restrictions cannot be imposed directly.<br />

A certain region is defined by the linear constraints in a problem and by the bounds on the<br />

variables. If the nonlinear objective and constraint functions are convex within this region,<br />

any optimal solution obtained will be a global optimum. Otherwise there may be several<br />

local optima, and some of these may not be global. In such cases the chances of finding a<br />

global optimum are usually increased by choosing a starting point that is “sufficiently<br />

close,” but there is no general procedure for determining what “close” means, or for<br />

verifying that a given local optimum is indeed global.<br />

GAMS allows you to specify values for many parameters that control GAMS/MINOS, and<br />

with careful experimentation you may be able to influence the solution process in a helpful<br />

way. All MINOS options available through GAMS/MINOS are summarized at the end of<br />

this chapter.<br />

2. HOW TO RUN A MODEL WITH GAMS/MINOS<br />

MINOS is capable of solving models of the following types: LP, NLP and RMINLP. If<br />

MINOS is not specified as the default LP, NLP or RMINLP solver, then the following<br />

statement can be used in your GAMS model:<br />

option lp=minos5; { or nlp or rminlp }<br />

It should appear before the solve statement.<br />
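As an illustration, a minimal model fragment showing where the statement belongs (the names demo, x and z are purely illustrative):<br />

```gams
* Hypothetical one-variable NLP; demo, x and z are illustrative names.
variable z;
positive variable x;
equation obj;
obj ..  z =e= sqr(x - 2) + 1;

model demo / all /;
option nlp = minos5;
solve demo using nlp minimizing z;
```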

3. OVERVIEW OF GAMS/MINOS<br />

GAMS/MINOS is a system designed to solve large-scale optimization problems expressed<br />

in the following form:<br />

Minimize F(x) + c T x + d T y (1)<br />

subject to f(x) + A1 y <> b1 (2)<br />

A2 x + A3 y <> b2 (3)<br />

l ≤ (x,y) ≤ u (4)



where the vectors c, d, b1, b2, l, u and the matrices A1, A2, A3 are constant, F(x) is a smooth<br />

scalar function, and f(x) is a vector of smooth functions. The < > signs mean that<br />

individual constraints may be defined using ≤, = or ≥ corresponding to the GAMS<br />

constructs =L= , =E= and =G=.<br />

The components of x are called the nonlinear variables, and the components of y are the<br />

linear variables. Similarly, the equations in (2) are called the nonlinear constraints, and the<br />

equations in (3) are the linear constraints. Equations (2) and (3) together are called the<br />

general constraints.<br />

Let m1 and n1 denote the number of nonlinear constraints and variables, and let m and n<br />

denote the total number of (general) constraints and variables. Thus, A3 has m-m1 rows and<br />

n-n1 columns.<br />

The constraints (4) specify upper and lower bounds on all variables. These are<br />

fundamental to many problem formulations and are treated specially by the solution<br />

algorithms in GAMS/MINOS. Some of the components of l and u may be -∞ or +∞<br />

respectively, in accordance with the GAMS use of -INF and +INF.<br />

The vectors b1 and b2 are called the right-hand side, and together are denoted by b.<br />
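As a concrete (hypothetical) illustration of this partition, in the fragment below x enters a constraint nonlinearly while y appears only linearly, so MINOS would treat x as a nonlinear variable and y as a linear variable:<br />

```gams
* Hypothetical fragment: x is a nonlinear variable, y a linear one.
variables x, y, z;
equations obj, con1, con2;
obj  ..  z =e= sqr(x) + 2*y;       * F(x) + d'y
con1 ..  exp(x) + 3*y =l= 10;      * a nonlinear constraint, form (2)
con2 ..  x + y =g= 1;              * a linear constraint, form (3)
```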

3.1. Linear programming<br />

If the functions F(x) and f(x) are absent, the problem becomes a linear program. Since<br />

there is no need to distinguish between linear and nonlinear variables, we use x rather than<br />

y. GAMS/MINOS converts all general constraints into equalities, and the only remaining<br />

inequalities are simple bounds on the variables. Thus, we write linear programs in the form<br />

minimize c T x<br />

subject to Ax + Is = 0<br />

l ≤ x, s ≤ u<br />

where the elements of x are your own GAMS variables, and s is a set of slack variables:<br />

one for each general constraint. For computational reasons, the right-hand side b is<br />

incorporated into the bounds on s.<br />

In the expression Ax + Is = 0 we write the identity matrix explicitly if we are concerned<br />

with columns of the associated matrix (AI). Otherwise we will use the equivalent notation<br />

Ax + s = 0.<br />
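As a sketch of this conversion, with one consistent choice of signs (the manual does not spell the signs out), a single general constraint with slack s = -a<sup>T</sup>x becomes:<br />

```latex
% one general constraint, slack defined by s = -a^T x
a^T x \le b
\;\Longleftrightarrow\;
a^T x + s = 0, \qquad s \ge -b .
```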

GAMS/MINOS solves linear programs using a reliable implementation of the primal<br />

simplex method (Dantzig, 1963), in which the constraints Ax + Is = 0 are partitioned into<br />

the form<br />

BxB + NxN = 0,



where the basis matrix B is square and nonsingular. The elements of xB and xN are called<br />

the basic or nonbasic variables respectively. Together they are a permutation of the vector<br />

(x, s).<br />

Normally, each nonbasic variable is equal to one of its bounds, and the basic variables take<br />

on whatever values are needed to satisfy the general constraints. (The basic variables may<br />

be computed by solving the linear equation BxB = -NxN.) It can be shown that if an optimal<br />

solution to a linear program exists, then it has this form.<br />

The simplex method reaches such a solution by performing a sequence of iterations, in<br />

which one column of B is replaced by one column of N (and vice versa), until no such<br />

interchange can be found that will reduce the value of c T x.<br />

As indicated, nonbasic variables usually satisfy their upper and lower bounds. If any<br />

components of xB lie significantly outside their bounds, we say that the current point is<br />

infeasible. In this case, the simplex method uses a Phase 1 procedure to reduce the sum of<br />

infeasibilities to zero. This is similar to the subsequent Phase 2 procedure that optimizes<br />

the true objective function c T x.<br />

If the solution procedures are interrupted, some of the nonbasic variables may lie strictly<br />

between their bounds: lj < xj < uj. In addition, at a “feasible” or “optimal” solution, some<br />

of the basic variables may lie slightly outside their bounds: lj - δ < xj < lj or uj < xj < uj + δ,<br />

where δ is a feasibility tolerance (typically 10⁻⁶). In rare cases, even a few nonbasic<br />

variables might lie outside their bounds by as much as δ.<br />

GAMS/MINOS maintains a sparse LU factorization of the basis matrix B, using a<br />

Markowitz ordering scheme and Bartels-Golub updates, as implemented in the Fortran<br />

package LUSOL (Gill et al., 1987; see also Bartels and Golub, 1969; Bartels, 1971; Reid,<br />

1976 and 1982). The basis factorization is central to the efficient handling of sparse linear and<br />

nonlinear constraints.<br />

3.2. Problems with a Nonlinear Objective<br />

When nonlinearities are confined to the term F(x) in the objective function, the problem is<br />

a linearly constrained nonlinear program. GAMS/MINOS solves such problems using a<br />

reduced-gradient algorithm (Wolfe, 1962) combined with a quasi-Newton algorithm; the<br />

implementation follows that described in Murtagh and Saunders (1978).<br />

In the reduced-gradient method, the constraints Ax + Is = 0 are partitioned into the form<br />

BxB + SxS + NxN = 0<br />

where xs is a set of superbasic variables. At a solution, the basic and superbasic variables<br />

will lie somewhere between their bounds (to within the feasibility tolerance δ), while<br />

nonbasic variables will normally be equal to one of their bounds, as before. Let the<br />

number of superbasic variables be s, the number of columns in S. (The context will always<br />

distinguish s from the vector of slack variables.) At a solution, s will be no more than n1,<br />

the number of nonlinear variables. In many practical cases we have found that s remains<br />

reasonably small, say 200 or less, even if n1 is large.



In the reduced-gradient algorithm, xs is regarded as a set of “independent variables” or<br />

“free variables” that are allowed to move in any desirable direction, namely one that will<br />

improve the value of the objective function (or reduce the sum of infeasibilities). The basic<br />

variables can then be adjusted in order to continue satisfying the linear constraints.<br />

If it appears that no improvement can be made with the current definition of B, S and N,<br />

some of the nonbasic variables are selected to be added to S, and the process is repeated<br />

with an increased value of s. At all stages, if a basic or superbasic variable encounters one<br />

of its bounds, the variable is made nonbasic and the value of s is reduced by one.<br />

A step of the reduced-gradient method is called a minor iteration. For linear problems, we<br />

may interpret the simplex method as being the same as the reduced-gradient method, with<br />

the number of superbasic variable oscillating between 0 and 1.<br />

A certain matrix Z is needed now for descriptive purposes. It takes the form<br />

⎛ -B⁻¹S ⎞<br />

Z = ⎜ I ⎟<br />

⎝ 0 ⎠<br />

though it is never computed explicitly. Given an LU factorization of the basis matrix B, it is<br />

possible to compute products of the form Zq and Z T g by solving linear equations involving<br />

B or B T . This in turn allows optimization to be performed on the superbasic variables,<br />

while the basic variables are adjusted to satisfy the general linear constraints.<br />

An important feature of GAMS/MINOS is a stable implementation of a quasi-Newton<br />

algorithm for optimizing the superbasic variables. This can achieve superlinear<br />

convergence during any sequence of iterations for which the B, S, N partition remains<br />

constant. A search direction q for the superbasic variables is obtained by solving a system<br />

of the form<br />

R T Rq = -Z T g<br />

where g is the gradient of F(x), Z T g is the reduced gradient, and R is a dense upper<br />

triangular matrix. GAMS computes the gradient vector g analytically, using symbolic<br />

differentiation. The matrix R is updated in various ways in order to approximate the<br />

reduced Hessian according to R T R ≈ Z T HZ where H is the matrix of second derivatives of<br />

F(x) (the Hessian).<br />

Once q is available, the search direction for all variables is defined by p = Zq. A line<br />

search is then performed to find an approximate solution to the one-dimensional problem<br />

Minimize over α: F(x + αp)<br />

subject to 0 < α ≤ β<br />

where β is determined by the bounds on the variables. Another important feature of<br />

GAMS/MINOS is the step-length procedure used in the line search to determine the<br />

step-length α (see Gill et al., 1979). The number of nonlinear function evaluations required<br />

may be influenced by setting the Line search tolerance, as discussed in Section 8.<br />

As in linear programming, an equation B T π = gB is solved to obtain the dual variables or<br />

shadow prices π, where gB is the gradient of the objective function associated with the basic<br />

variables. It follows that gB - B T π = 0. The analogous quantity for the superbasic variables is<br />

the reduced-gradient vector Z T g = gS - S T π; this should also be zero at an optimal solution.<br />

(In practice its components will be of order r ||π||, where r is the optimality tolerance,<br />

typically 10⁻⁶, and ||π|| is a measure of the size of the elements of π.)<br />

3.3. Problems with Nonlinear Constraints<br />

If any of the constraints are nonlinear, GAMS/MINOS employs a projected Lagrangian<br />

algorithm, based on a method due to Robinson (1972); see Murtagh and Saunders (1982).<br />

This involves a sequence of major iterations, each of which requires the solution of a<br />

linearly constrained subproblem. Each subproblem contains linearized versions of the<br />

nonlinear constraints, as well as the original linear constraints and bounds.<br />

At the start of the k-th major iteration, let xk be an estimate of the nonlinear variables, and<br />

let λk be an estimate of the Lagrange multipliers (or dual variables) associated with the<br />

nonlinear constraints. The constraints are linearized by changing f(x) in equation (2) to its<br />

linear approximation:<br />

f’(x, xk) = f(xk) + J(xk)(x - xk)<br />

or more briefly<br />

f’ = fk + Jk(x - xk)<br />

where J(xk) is the Jacobian matrix evaluated at xk. (The i-th row of the Jacobian is the<br />

gradient vector of the i-th nonlinear constraint function. As for the objective gradient,<br />

GAMS calculates the Jacobian using symbolic differentiation).<br />
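As a small worked example (not taken from the manual), linearizing the single constraint function f(x) = x₁x₂ about the point xk = (1, 2):<br />

```latex
% f(x) = x_1 x_2, linearized about x_k = (1, 2)
f(x_k) = 2, \qquad
J(x_k) = \begin{pmatrix} x_2 & x_1 \end{pmatrix}\Big|_{x_k}
       = \begin{pmatrix} 2 & 1 \end{pmatrix},
```

```latex
\bar f(x, x_k) = f(x_k) + J(x_k)(x - x_k)
              = 2 + 2(x_1 - 1) + (x_2 - 2)
              = 2x_1 + x_2 - 2 .
```

Note that the linearization agrees with f at xk and replaces the product by a plane tangent to it there.<br />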

The subproblem to be solved during the k-th major iteration is then<br />

Minimize x,y F(x) + c T x + d T y - λk T (f - f’) + 0.5 ρ (f - f’) T (f - f’) (5)<br />

subject to f’ + A1y <> b1 (6)<br />

A2x + A3y <> b2 (7)<br />

l ≤ (x,y) ≤ u (8)<br />

The objective function (5) is called an augmented Lagrangian. The scalar ρ is a penalty<br />

parameter, and the term involving ρ is a modified quadratic penalty function.<br />

GAMS/MINOS uses the reduced-gradient algorithm to minimize (5) subject to (6) - (8).<br />

As before, slack variables are introduced and b1 and b2 are incorporated into the bounds on<br />

the slacks. The linearized constraints take the form



⎛ Jk  A1 ⎞ ⎛ x ⎞   ⎛ I  0 ⎞ ⎛ s1 ⎞   ⎛ Jk xk - fk ⎞<br />

⎜        ⎟ ⎜   ⎟ + ⎜      ⎟ ⎜    ⎟ = ⎜            ⎟<br />

⎝ A2  A3 ⎠ ⎝ y ⎠   ⎝ 0  I ⎠ ⎝ s2 ⎠   ⎝      0     ⎠<br />

This system will be referred to as Ax + Is = 0 as in the linear case. The Jacobian Jk is<br />

treated as a sparse matrix, the same as the matrices A1, A2, and A3.<br />

In the output from GAMS/MINOS, the term Feasible subproblem indicates that the<br />

linearized constraints have been satisfied. In general, the nonlinear constraints are<br />

satisfied only in the limit, so that feasibility and optimality occur at essentially the same<br />

time. The nonlinear constraint violation is printed every major iteration. Even if it is zero<br />

early on (say at the initial point), it may increase and perhaps fluctuate before tending to<br />

zero. On “well behaved” problems, the constraint violation will decrease quadratically<br />

(i.e., very quickly) during the final few major iterations.<br />

4. GAMS OPTIONS<br />

The following GAMS options are used by GAMS/MINOS:<br />

4.1. <strong>Options</strong> specified through the option statement<br />

The following options are specified through the option statement. For example,<br />

option iterlim = 100;<br />

sets the iterations limit to 100.<br />

Iterlim integer Sets the iteration limit. The algorithm will terminate and<br />

pass on the current solution to GAMS. In case a pre-solve is<br />

done, the post-solve routine will be invoked before reporting<br />

the solution.<br />

(default = 1000)<br />

Reslim real Sets the time limit in seconds. The algorithm will terminate<br />

and pass on the current solution to GAMS. In case a presolve<br />

is done, the post-solve routine will be invoked before<br />

reporting the solution.<br />

(default = 1000.0)<br />

Bratio real Determines whether or not to use an advanced basis.<br />

(default = 0.25)<br />

Sysout on/off Will echo the MINOS messages to the GAMS listing file.<br />

(default = off)
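Putting these together, a typical set of option statements might look like the following (the values and the model name mymodel are illustrative only; the defaults are usually adequate):<br />

```gams
* Illustrative settings placed before the solve statement.
option iterlim = 5000;   * minor iteration limit
option reslim  = 600;    * time limit in seconds
option sysout  = on;     * echo MINOS output to the listing file
solve mymodel using nlp minimizing z;
```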



4.2. <strong>Options</strong> specified through model suffixes<br />

The following options are specified through the use of the model suffix. For example,<br />

mymodel.workspace = 10 ;<br />

sets the amount of memory used to 10 MB. Mymodel is the name of the model as<br />

specified by the model statement. In order to be effective, the assignment of the model<br />

suffix should be made between the model and solve statements.<br />

workspace real Gives MINOS x MB of workspace. Overrides the memory<br />

estimation. (default is estimated by the solver)<br />

optfile integer Instructs MINOS to read the option file minos5.opt.<br />

(default = 0)<br />

scaleopt integer Instructs MINOS to use scaling information passed by<br />

GAMS through the variable.SCALE parameters. (default = 0)<br />
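For example, all three suffixes can be set between the model and solve statements (mymodel and z are illustrative names):<br />

```gams
* Suffix assignments must come between the model and solve statements.
model mymodel / all /;
mymodel.workspace = 10;   * request 10 MB of workspace
mymodel.optfile   = 1;    * read minos5.opt
mymodel.scaleopt  = 1;    * use the .scale values passed from GAMS
solve mymodel using nlp minimizing z;
```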

5. SUMMARY OF MINOS OPTIONS<br />

The performance of GAMS/MINOS is controlled by a number of parameters or “options.”<br />

Each option has a default value that should be appropriate for most problems. (The<br />

defaults are given in Section 8.) For special situations it is possible to specify nonstandard<br />

values for some or all of the options through the MINOS option file.<br />

All these options should be entered in the option file 'minos5.opt' after setting the<br />

model.OPTFILE parameter to 1. The option file is not case sensitive and the keywords<br />

must be given in full. Examples for using the option file can be found at the end of this<br />

section. The second column in the tables below contains the section where more detailed<br />

information can be obtained about the corresponding option in the first column.<br />
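A skeleton option file illustrating the required BEGIN/END framing (the two options and values shown are illustrative choices, assuming the model has set its OPTFILE parameter to 1):<br />

```gams
BEGIN GAMS/MINOS options
* keywords must be written in full; case does not matter
Iterations limit      2000
Optimality tolerance  1.0e-6
END GAMS/MINOS options
```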

5.1. Output related options<br />

Debug level Controls the amount of output information.<br />

Log Frequency Frequency of iteration log information.<br />

Print level Amount of output information.<br />

Scale, print Causes printing of the row and column-scales.<br />

Solution No/Yes Controls printing of final solution.<br />

Summary frequency Controls information in summary file.



5.2. <strong>Options</strong> affecting Tolerances<br />

Crash tolerance Tolerance used by the crash procedure when constructing the initial basis.<br />

Feasibility tolerance Variable feasibility tolerance for linear constraints.<br />

Line search tolerance Accuracy of step length location during line search.<br />

LU factor tolerance Tolerance during LU factorization.<br />

LU update tolerance Tolerance during LU updates.<br />

LU singularity tolerance Tolerance for detecting singularity during LU factorization.<br />

Optimality tolerance Optimality tolerance.<br />

Pivot Tolerance Prevents singularity.<br />

Row Tolerance Accuracy of nonlinear constraint satisfaction at optimum.<br />

Subspace tolerance Controls the extent to which optimization is confined to the<br />

current set of basic and superbasic variables.<br />

5.3. <strong>Options</strong> affecting Iteration limits<br />

Iterations limit Maximum number of minor iterations allowed<br />

Major iterations Maximum number of major iterations allowed.<br />

Minor iterations Maximum number of minor iterations allowed between<br />

successive linearizations of the nonlinear constraints.<br />

5.4. Other algorithmic options<br />

Check frequency Frequency of the linear constraint satisfaction test.<br />

Completion Accuracy level of sub-problem solutions.<br />

Crash option Controls the crash procedure used to construct the initial basis.<br />

Damping parameter See Major damping parameter.<br />

Expand frequency Part of the anti-cycling procedure.<br />

Factorization frequency Maximum number of basis changes between factorizations.<br />

Hessian dimension Dimension of the reduced Hessian matrix.<br />

Lagrangian Determines linearized sub-problem objective function.<br />

Major damping parameter Forces stability between subproblem solutions.<br />

Minor damping parameter Limits the change in x during a line search.<br />

Multiple price Pricing strategy.<br />

Partial Price Level of partial pricing.<br />

Penalty Parameter Value of ρ in the modified augmented Lagrangian.<br />

Radius of convergence Determines when ρ will be reduced.<br />

Scale option Level of scaling done on the model.<br />

Start assigned nonlinears Affects the starting strategy during cold start.<br />

Superbasics limit Limits storage allocated for superbasic variables.<br />

Unbounded objective value Detects unboundedness in nonlinear problems.



Unbounded step size Detects unboundedness in nonlinear problems.<br />

Verify option Finite-difference check on the gradients.<br />

Weight on linear objective Invokes the composite objective technique.<br />

5.5. Examples of GAMS/MINOS option file<br />

The following example illustrates the use of certain options that might be helpful for<br />

“difficult” models involving nonlinear constraints. Experimentation may be necessary with<br />

the values specified, particularly if the sequence of major iterations does not converge<br />

using default values.<br />

BEGIN GAMS/MINOS options<br />

* These options might be relevant for very nonlinear models.<br />

Major damping parameter 0.2 * may prevent divergence.<br />

Minor damping parameter 0.2 * if there are singularities<br />

* in the nonlinear functions.<br />

Penalty parameter 10.0 * or 100.0 perhaps, a value<br />

* higher than the default.<br />

Scale linear variables * (This is the default.)<br />

END GAMS/MINOS options<br />

Conversely, nonlinearly constrained models that are very nearly linear may optimize more<br />

efficiently if some of the “cautious” defaults are relaxed:<br />

BEGIN GAMS/MINOS options<br />

* Suggestions for models with MILDLY nonlinear constraints<br />

Completion Full<br />

Minor iterations limit 200<br />

Penalty parameter 0.0 * or 0.1 perhaps, a value<br />

* smaller than the default.<br />

* Scale one of the following<br />

Scale all variables * if starting point is VERY GOOD.<br />

Scale linear variables * if they need it.<br />

Scale No * otherwise.<br />

END GAMS/MINOS options<br />

Most of the options described in the next section should be left at their default values for<br />

any given model. If experimentation is necessary, we recommend changing just one option<br />

at a time.



6. Special Notes<br />

6.1. Modeling Hints<br />

Unfortunately, there is no guarantee that the algorithm just described will converge from<br />

an arbitrary starting point. The concerned modeler can influence the likelihood of<br />

convergence as follows:<br />

• Specify initial activity levels for the nonlinear variables as carefully as possible (using<br />

the GAMS suffix .L).<br />

• Include sensible upper and lower bounds on all variables.<br />

• Specify a Major damping parameter that is lower than the default value, if the<br />

problem is suspected of being highly nonlinear.<br />

• Specify a Penalty parameter ρ that is higher than the default value, again if the<br />

problem is highly nonlinear.<br />
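The first two hints can be sketched in GAMS as follows (the set i, variable x and the particular values are illustrative; bounds should reflect the model's own scaling):<br />

```gams
* Provide a good starting point and sensible bounds.
x.l(i)  = 1.0;     * initial activity levels for the nonlinear variables
x.lo(i) = 0.01;    * bounds keep x away from regions where, e.g.,
x.up(i) = 100;     * log(x) or 1/x would be undefined
```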

In rare cases it may be safe to request the values λk = 0 and ρ = 0 for all subproblems, by<br />

specifying Lagrangian=No. However, convergence is much more likely with the default<br />

setting, Lagrangian=Yes. The initial estimate of the Lagrange multipliers is then λ0 = 0,<br />

but for later subproblems λk is taken to be the Lagrange multipliers associated with the<br />

(linearized) nonlinear constraints at the end of the previous major iteration.<br />

For the first subproblem, the default value for the penalty parameter is ρ = 100.0/m1<br />

where m1 is the number of nonlinear constraints. For later subproblems, ρ is reduced in<br />

stages when it appears that the sequence {xk, λk} is converging. In many cases it is safe to<br />

specify ρ = 0, particularly if the problem is only mildly nonlinear. This may improve the<br />

overall efficiency.<br />

6.2. Storage<br />

GAMS/MINOS uses one large array of main storage for most of its workspace. The<br />

implementation places no fixed limit on the size of a problem or on its shape (many<br />

constraints and relatively few variables, or vice versa). In general, the limiting factor will<br />

be the amount of main storage available on a particular machine, and the amount of<br />

computation time that one's budget and/or patience can stand.<br />

Some detailed knowledge of a particular model will usually indicate whether the solution<br />

procedure is likely to be efficient. An important quantity is m, the total number of general<br />

constraints in (2) and (3). The amount of workspace required by GAMS/MINOS is<br />

roughly 100m words, where one “word” is the relevant storage unit for the floating-point<br />

arithmetic being used. This usually means about 800m bytes for workspace. A further



300K bytes, approximately, are needed for the program itself, along with buffer space for<br />

several files. Very roughly, then, a model with m general constraints requires about<br />

(m+300) K bytes of memory.<br />

Another important quantity is n, the total number of variables in x and y. The above<br />

comments assume that n is not much larger than m, the number of constraints. A typical<br />

ratio of n/m is 2 or 3.<br />

If there are many nonlinear variables (i.e., if n1 is large), much depends on whether the<br />

objective function or the constraints are highly nonlinear or not. The degree of<br />

nonlinearity affects s, the number of superbasic variables. Recall that s is zero for purely<br />

linear problems. We know that s need never be larger than n1+1. In practice, s is often<br />

very much less than this upper limit.<br />

In the quasi-Newton algorithm, the dense triangular matrix R has dimension s and requires<br />

about ½s² words of storage. If it seems likely that s will be very large, some aggregation<br />

or reformulation of the problem should be considered.<br />

7. The GAMS/MINOS Log File<br />

MINOS writes different logs for LPs, NLPs with linear constraints, and NLPs with nonlinear<br />

constraints. In this section, a sample log file is shown for each case, and the<br />

messages that appear are explained.<br />

7.1. Linear Programs<br />

MINOS uses a standard two-phase Simplex method for LPs. In the first phase, the sum of<br />

the infeasibilities at each iteration is minimized. Once feasibility is attained, MINOS<br />

switches to phase 2 where it minimizes (or maximizes) the original objective function. The<br />

different objective functions are called the phase 1 and phase 2 objectives. Notice that the<br />

marginals in phase 1 are with respect to the phase 1 objective. This means that if MINOS<br />

interrupts in phase 1, the marginals are "wrong" in the sense that they do not reflect the<br />

original objective.<br />

The log for the problem [MEXLS] is as follows:<br />

--- Starting compilation<br />

--- mexls.gms(1242)<br />

--- Starting execution<br />

--- mexls.gms(1241)<br />

--- Generating model ONE<br />

--- mexls.gms(1242)<br />

--- 353 rows, 578 columns, and 1873 non-zeroes.<br />

--- Executing MINOS5<br />

Work space allocated -- .21 Mb<br />

Reading data...<br />

Itn Nopt Ninf Sinf,Objective<br />

1 1 74 1.62609255E+01<br />

20 14 74 1.62609255E+01



40 4 67 1.40246211E+01<br />

60 14 55 1.10065392E+01<br />

80 12 42 8.35651175E+00<br />

Itn Nopt Ninf Sinf,Objective<br />

100 6 31 7.21650054E+00<br />

120 21 20 6.65998885E+00<br />

140 13 11 5.77113823E+00<br />

160 1 8 2.35537706E+00<br />

180 4 5 9.58810903E-01<br />

Itn Nopt Ninf Sinf,Objective<br />

200 9 2 3.16184199E-01<br />

Itn 208 -- Feasible solution. Objective = 4.283788540E+04<br />

220 2 0 4.11169434E+04<br />

240 25 0 3.88351313E+04<br />

260 1 0 3.77495457E+04<br />

280 11 0 3.66399964E+04<br />

Itn Nopt Ninf Sinf,Objective<br />

300 2 0 3.30955215E+04<br />

320 2 0 3.09919632E+04<br />

340 5 0 2.84521363E+04<br />

360 1 0 2.76671551E+04<br />

380 2 0 2.76055456E+04<br />

Itn 392 -- 80 nonbasics set on bound, basics recomputed<br />

Itn 392 -- Feasible solution. Objective = 2.756959890E+04<br />

EXIT -- OPTIMAL SOLUTION FOUND<br />

Major, Minor itns 1 392<br />

Objective function 2.7569598897150E+04<br />

Degenerate steps 83 21.17<br />

Norm X, Norm PI 2.10E+01 4.49E+04<br />

Norm X, Norm PI 8.05E+03 1.06E+02 (unscaled)<br />

--- Restarting execution<br />

--- mexls.gms(1242)<br />

--- Reading solution for model ONE<br />

--- All done<br />

Considering each section of the log file above in order:<br />

------------------------------------------------------------------<br />

Work space allocated -- .21 Mb<br />

Reading data...<br />

------------------------------------------------------------------<br />

When MINOS is loaded, the amount of memory needed is first estimated. This estimate is<br />

based on statistics like the number of rows, columns and non-zeros. This amount of<br />

memory is then allocated and the problem is then loaded into MINOS.<br />

Next, the MINOS iteration log begins. In the beginning, a feasible solution has not yet<br />

been found, and the screen log during this phase of the search looks like:



-------------------------------------- ---------------------------<br />

Itn Nopt Ninf Sinf,Objective<br />

1 1 74 1.62609255E+01<br />

------------------------------------------------------------------<br />

The first column gives the iteration count. A simplex iteration corresponds to a basis<br />

change. Nopt means "Number of non-optimalities" which refers to the number of<br />

marginals that do not have the correct sign (i.e. they do not satisfy the optimality<br />

conditions), and Ninf means "Number of infeasibilities" which refers to the number of<br />

infeasible constraints. The last column gives the "Sum of infeasibilities" in phase 1 (when<br />

Ninf > 0) or the Objective in phase 2.<br />

As soon as MINOS finds a feasible solution, it returns a message similar to the one below<br />

that appears for the example discussed:<br />

------------------------------------------------------------------<br />

Itn 208 -- Feasible solution. Objective = 4.283788540E+04<br />

------------------------------------------------------------------<br />

MINOS found a feasible solution and will switch from phase 1 to phase 2. During phase 2<br />

(or the search once feasibility has been reached), MINOS returns a message that looks<br />

like:<br />

------------------------------------------------------------------<br />

Itn 392 -- 80 nonbasics set on bound, basics recomputed<br />

------------------------------------------------------------------<br />

Even non-basic variables may not be exactly on their bounds during the optimization<br />

phase. MINOS uses a sliding tolerance scheme to fight stalling as a result of degeneracy.<br />

Before it terminates it wants to get the "real" solution. The non-basic variables are<br />

therefore reset to their bounds and the basic variables are recomputed to satisfy the<br />

constraints Ax = b. If the resulting solution is infeasible or non-optimal, MINOS will<br />

continue iterating. If necessary, this process is repeated one more time. Fortunately,<br />

optimality is usually confirmed very soon after the first "reset". In the example discussed,<br />

feasibility was maintained after this exercise, and the resulting log looks like:<br />

------------------------------------------------------------------<br />

Itn 392 -- Feasible solution. Objective = 2.756959890E+04<br />

------------------------------------------------------------------<br />

MINOS has found the optimal solution and exits with the following summary:<br />

------------------------------------------------------------------<br />

Major, Minor itns 1 392<br />

Objective function 2.7569598897150E+04<br />

Degenerate steps 83 21.17<br />

Norm X, Norm PI 2.10E+01 4.49E+04<br />

Norm X, Norm PI 8.05E+03 1.06E+02 (unscaled)<br />

------------------------------------------------------------------


GAMS/MINOS MINOS 15<br />

For an LP, the number of major iterations is always one. The number of minor iterations is<br />

the number of Simplex iterations.<br />

Degenerate steps are ones where a Simplex iteration did not do much other than a basis<br />

change: two variables changed from basic to non-basic and vice versa, but none of the<br />

activity levels changed. In this example there were 83 degenerate iterations, or 21.17% of<br />

the total 392 iterations.<br />

The norms ||x|| and ||pi|| are also printed both before and after the model is unscaled. Ideally<br />

the scaled ||x|| should be about 1 (say in the range 0.1 to 100), and ||pi|| should be greater<br />

than 1. If the scaled ||x|| is closer to 1 than the unscaled ||x||, then scaling was probably<br />

helpful. Otherwise, the model might solve more efficiently without scaling.<br />

If ||x|| or ||pi|| are several orders of magnitude different before and after scaling, you may<br />

be able to help MINOS by scaling certain sets of variables or constraints yourself (for<br />

example, by choosing different units for variables whose final values are very large.)<br />
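As a hedged illustration of rescaling by choice of units (all names here are hypothetical, not taken from the example above), a variable whose final value is around 1.0e6 can be restated in millions so that the scaled ||x|| stays near 1:<br />

```gams
* Hypothetical sketch: if a variable's final value is around 1.0e6,
* restating it in millions keeps its contribution to ||x|| near 1.
Variable revenue   "revenue in dollars (values near 1e6)";
Variable revenueM  "revenue in millions of dollars (values near 1)";
Equation defRevenueM "link the two units";

* revenue = 1e6 * revenueM, so other constraints can be written
* in terms of the well-scaled variable revenueM.
defRevenueM .. revenue =e= 1e6 * revenueM;
```

Writing the remaining constraints in terms of the well-scaled variable is the manual equivalent of the automatic scaling described above.<br />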

7.2. Nonlinear Objective<br />

Consider the WEAPONS model from the model library which has nonlinear objective and<br />

linear constraints. The screen output resulting from running this model is as follows:<br />

--- Starting compilation<br />

--- weapons.gms(59)<br />

--- Starting execution<br />

--- weapons.gms(57)<br />

--- Generating model WAR<br />

--- weapons.gms(59)<br />

--- 13 rows, 66 columns, and 156 non-zeroes.<br />

--- Executing MINOS5<br />

Work space allocated -- .06 Mb<br />

Reading data...<br />

Reading nonlinear code...<br />

Itn Nopt Ninf Sinf,Objective Nobj NSB<br />

1 -1 3 9.75000000E+01 3 64<br />

Itn 6 -- Feasible solution. Objective = 1.511313490E+03<br />

20 5 0 1.70385909E+03 23 45<br />

40 3 0 1.73088173E+03 63 30<br />

60 1 0 1.73381961E+03 109 25<br />

80 1 0 1.73518225E+03 155 20<br />

Itn Nopt Ninf Sinf,Objective Nobj NSB<br />

100 0 0 1.73555465E+03 201 17<br />

EXIT -- OPTIMAL SOLUTION FOUND<br />

Major, Minor itns 1 119<br />

Objective function 1.7355695798562E+03<br />

FUNOBJ, FUNCON calls 245 0



Superbasics, Norm RG 18 2.18E-08<br />

Degenerate steps 2 1.68<br />

Norm X, Norm PI 2.74E+02 1.00E+00<br />

Norm X, Norm PI 2.74E+02 1.00E+00 (unscaled)<br />

--- Restarting execution<br />

--- weapons.gms(59)<br />

--- Reading solution for model WAR<br />

--- All done<br />

The parts described in the LP example will not be repeated - only the new additions will<br />

be discussed.<br />

------------------------------------------------------------------<br />

Reading nonlinear code...<br />

------------------------------------------------------------------<br />

Besides reading the matrix, GAMS/MINOS also reads the instructions that allow the<br />

interpreter to calculate nonlinear function values and their derivatives.<br />

The iteration log looks slightly different from the earlier case, and has a couple of<br />

additional columns.<br />

------------------------------------------------------------------<br />

Itn Nopt Ninf Sinf,Objective Nobj NSB<br />

1 -1 3 9.75000000E+01 3 64<br />

------------------------------------------------------------------<br />

The column Nobj gives the number of times MINOS calculated the objective and its<br />

gradient. The column NSB contains the number of superbasics. In MINOS, variables are<br />

typed basic, non-basic or superbasic. Once the number of superbasics settles down<br />

(there may be rather more at the beginning of a run), it is a measure of the nonlinearity<br />

of the problem. The final number of superbasics will never exceed the number of<br />

nonlinear variables, and in practice will often be much less (in most cases less than a few<br />

hundred).<br />

There is not much that a modeler can do in this regard, but it is worth remembering that<br />

MINOS works with a dense triangular matrix R (which is used to approximate the reduced<br />

Hessian) of that dimension. Optimization time and memory requirements will be<br />

significant if there are more than two or three hundred superbasics.<br />

Once the optimum solution has been found, MINOS exits after printing a search summary<br />

that is slightly different from the earlier case.<br />

------------------------------------------------------------------<br />

Major, Minor itns 1 119<br />

------------------------------------------------------------------<br />

As in the earlier case, there is always only one major iteration. Each iteration of the<br />

reduced-gradient method is called a minor iteration.<br />

------------------------------------------------------------------<br />

FUNOBJ, FUNCON calls 245 0<br />

------------------------------------------------------------------



FUNOBJ is the number of times MINOS asked the interpreter to evaluate the objective<br />

function and its gradients. FUNCON means the same for the nonlinear constraints. As this<br />

model has only linear constraints, the number is zero.<br />

If you know that your model has linear constraints but FUNCON turns out to be positive,<br />

your nonlinear objective function is being regarded as a nonlinear constraint. You can<br />

improve MINOS’s efficiency by following some easy rules. Please see "Objective: special<br />

treatment of in NLP" in the index for the GAMS User’s Guide.<br />
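A hedged sketch of the usual GAMS formulation that keeps FUNCON at zero: confine all nonlinear terms to the equation that defines the objective variable, so that GAMS can give the objective special treatment (variable and equation names below are hypothetical):<br />

```gams
* Hypothetical sketch: keep the nonlinearity confined to the
* objective-defining row so FUNCON stays 0 when all other
* constraints are linear.
Variables x1, x2, z;
Equations objdef, lincon;

* Nonlinear terms appear only in the equation defining z ...
objdef .. z =e= sqr(x1) + sqr(x2);

* ... while the remaining constraints stay linear.
lincon .. x1 + 2*x2 =l= 10;

Model m / objdef, lincon /;
solve m using nlp minimizing z;
```

With this structure the model's constraint rows remain linear, and MINOS never needs to call FUNCON.<br />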

------------------------------------------------------------------<br />

Superbasics, Norm RG 18 2.18E-08<br />

------------------------------------------------------------------<br />

The number of superbasic variables in the final solution is reported and also the norm of<br />

the reduced gradient (which should be close to zero for an optimal solution).<br />

7.3. Nonlinear constraints<br />

For models with nonlinear constraints the log is more complicated. CAMCGE from the<br />

model library is such an example, and the screen output resulting from running it is shown<br />

below:<br />

--- Starting compilation<br />

--- camcge.gms(444)<br />

--- Starting execution<br />

--- camcge.gms(435)<br />

--- Generating model CAMCGE<br />

--- camcge.gms(444)<br />

--- 243 rows, 280 columns, and 1356 non-zeroes.<br />

--- Executing MINOS5<br />

Work space allocated -- .36 Mb<br />

Reading data...<br />

Reading nonlinear code...<br />

Major Minor Ninf Sinf,Objective Viol RG NSB Ncon Penalty Step<br />

1 0 1 0.000000E+00 1.3E+02 0.0E+00 184 3 6.8E-01 1.0E+00<br />

2 40T 30 0.00000000E+00 8.8E+01 7.3E+02 145 4 6.8E-01 1.0E+00<br />

3 40T 30 0.00000000E+00 8.8E+01 1.7E+01 106 5 6.8E-01 1.0E+00<br />

4 40T 30 0.00000000E+00 8.8E+01 2.1E+01 66 6 6.8E-01 1.0E+00<br />

5 40T 25 0.00000000E+00 8.8E+01 1.7E+01 26 7 6.8E-01 1.0E+00<br />

6 29 0 1.91734197E+02 2.6E-03 0.0E+00 0 9 6.8E-01 1.0E+00<br />

7 0 0 1.91734624E+02 2.3E-08 0.0E+00 0 10 1.4E+00 1.0E+00<br />

8 0 0 1.91734624E+02 4.5E-13 0.0E+00 0 11 1.4E+00 1.0E+00<br />

9 0 0 1.91734624E+02 6.8E-13 0.0E+00 0 12 0.0E+00 1.0E+00<br />

EXIT -- OPTIMAL SOLUTION FOUND<br />

Major, Minor itns 9 189<br />

Objective function 1.9173462423688E+02<br />

FUNOBJ, FUNCON calls 8 12<br />

Superbasics, Norm RG 0 0.00E+00<br />

Degenerate steps 0 .00



Norm X, Norm PI 1.37E+03 2.88E+01<br />

Norm X, Norm PI 8.89E+02 7.01E+01 (unscaled)<br />

Constraint violation 6.82E-13 7.66E-16<br />

--- Restarting execution<br />

--- camcge.gms(444)<br />

--- Reading solution for model CAMCGE<br />

--- All done<br />

Note that the iteration log is different from the two cases discussed above. A small<br />

section of this log is shown below.<br />

--------------------------------------------------------------------<br />

Major Minor Ninf Sinf,Objective Viol RG NSB Ncon Penalty Step<br />

1 0 1 0.00000000E+00 1.3E+02 0.0E+00 184 3 6.8E-01 1.0E+00<br />

2 40T 30 0.00000000E+00 8.8E+01 7.3E+02 145 4 6.8E-01 1.0E+00<br />

--------------------------------------------------------------------<br />

Two sets of iterations, Major and Minor, are now reported. A description of the various<br />

columns present in this log file follows:<br />

Major A major iteration involves linearizing the nonlinear constraints and<br />

performing a number of minor iterations on the resulting subproblem.<br />

The objective for the subproblem is an augmented Lagrangian, not the<br />

true objective function.<br />

Minor The number of minor iterations performed on the linearized subproblem.<br />

If it is a simple number like 29, then the subproblem was solved to<br />

optimality. Here, 40T means that the subproblem was terminated. (There<br />

is a limit on the number of minor iterations per major iteration, and by<br />

default this is 40.) In general the T is not something to worry about.<br />

Other possible flags are I and U, which mean that the subproblem was<br />

Infeasible or Unbounded. MINOS may have difficulty if these keep<br />

occurring.<br />

Ninf The number of infeasibilities at the end of solving the subproblem. It is<br />

0 if the linearized constraints were satisfied. It then means that all linear<br />

constraints in the original problem were satisfied, but only Viol (below)<br />

tells us about the nonlinear constraints.<br />

Objective The objective function for the original nonlinear program.<br />

Viol The maximum violation of the nonlinear constraints.<br />

RG The reduced gradient for the linearized subproblem. If Viol and RG are<br />

small and the subproblem was not terminated, the original problem has<br />

almost been solved.<br />

NSB The number of superbasics. If Ninf > 0 at the beginning, it will not<br />

matter too much if NSB is fairly large. It is only when Ninf reaches 0 that<br />

MINOS starts working with a dense matrix R of dimension NSB.



Ncon The number of times MINOS has evaluated the nonlinear constraints and<br />

their derivatives.<br />

Penalty The current value of the penalty parameter in the augmented Lagrangian<br />

(the objective for the subproblems). If the major iterations appear to be<br />

converging, MINOS will decrease the penalty parameter. If there appears<br />

to be difficulty, such as unbounded subproblems, the penalty parameter<br />

will be increased.<br />

Step The step size taken towards the solution suggested by the last major<br />

iteration. Ideally this should be 1.0, especially near an optimum. The<br />

quantity Viol should then decrease rapidly. If the subproblem solutions<br />

are widely different, MINOS may reduce the step size under control of<br />

the Major Damping parameter.<br />
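If the major iterations do not appear to be converging, the relevant options (Penalty parameter, Major damping parameter) can be adjusted in the solver option file. A hedged sketch with illustrative values only:<br />

```gams
* minos.opt -- illustrative values only.
* First try a larger penalty parameter (say 10-100 times the default):
penalty parameter 10.0
* If subproblem solutions still change violently, damp the major steps:
major damping parameter 0.2
```

Raising the penalty parameter is usually the first remedy; reducing the damping parameter is the second.<br />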

The final exit summary has an additional line in this case. For the problem being<br />

described, this line looks like:<br />

------------------------------------------------------------------<br />

Constraint violation 6.82E-13 7.66E-16<br />

------------------------------------------------------------------<br />

The first number is the largest violation of any of the nonlinear constraints (the final value<br />

of Viol). The second number is the relative violation, Viol/(1.0 + ||x||) . It may be more<br />

meaningful if the solution vector x is very large.<br />

For an optimal solution, these numbers must be small.<br />

Note: The CAMCGE model (like many CGE models or other almost square systems) is<br />

better solved with the MINOS option Start Assigned Nonlinears Basic. It can be seen<br />

in the log that the algorithm starts out with many superbasics (corresponding to the<br />

nonlinear variables). MINOS first removes these superbasics, and only then starts<br />

working towards an optimal solution.<br />
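Such an option is supplied through a solver option file. As a hedged sketch of the standard GAMS mechanics (minos.opt is the conventional file name for MINOS options, and the optfile model attribute activates it):<br />

```gams
* File minos.opt, placed in the working directory,
* one keyword per line:
start assigned nonlinears basic
```

The file is activated from the GAMS source with a statement such as `camcge.optfile = 1;` placed before the solve statement.<br />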

8. Detailed Description of MINOS <strong>Options</strong><br />

The following is an alphabetical list of the keywords that may appear in the<br />

GAMS/MINOS options file, and a description of their effect. The letters i and r denote<br />

integer and real values. The number ε denotes machine precision (typically 10^-15 or 10^-16).<br />

<strong>Options</strong> not specified will take the default values shown.<br />

Check frequency i Every i-th iteration after the most recent basis factorization,<br />

a numerical test is made to see if the current solution x<br />

satisfies the general linear constraints (including linearized<br />

nonlinear constraints, if any). The constraints are of the form<br />

Ax + s = 0 where s is the set of slack variables. To perform<br />

the numerical test, the residual vector r = Ax + s is


computed. If the largest component of r is judged to be too<br />

large, the current basis is refactorized and the basic variables<br />

are recomputed to satisfy the general constraints more<br />

accurately.<br />

(Default = 30)<br />

Completion Full/Partial<br />

When there are nonlinear constraints, this determines<br />

whether subproblems should be solved to moderate accuracy<br />

(partial completion), or to full accuracy (full completion).<br />

GAMS/MINOS implements the option by using two sets of<br />

convergence tolerances for the subproblems.<br />

Use of partial completion may reduce the work during early<br />

major iterations, unless the Minor iterations limit is active.<br />

The optimal set of basic and superbasic variables will<br />

probably be determined for any given subproblem, but the<br />

reduced gradient may be larger than it would have been with<br />

full completion.<br />

An automatic switch to full completion occurs when it<br />

appears that the sequence of major iterations is converging.<br />

The switch is made when the nonlinear constraint error is<br />

reduced below 100*(Row tolerance), the relative change in<br />

λk is 0.1 or less, and the previous subproblem was solved to<br />

optimality.<br />

Full completion tends to give better Lagrange-multiplier<br />

estimates. It may lead to fewer major iterations, but may<br />

result in more minor iterations.<br />

(Default = FULL)<br />

Crash option i If a restart is not being performed, an initial basis will be<br />

selected from certain columns of the constraint matrix (A I).<br />

The value of i determines which columns of A are eligible.<br />

Columns of I are used to fill “gaps” where necessary.<br />

If i > 0, three passes are made through the relevant columns<br />

of A, searching for a basis matrix that is essentially<br />

triangular. A column is assigned to “pivot” on a particular<br />

row if the column contains a suitably large element in a row<br />

that has not yet been assigned. (The pivot elements ultimately<br />

form the diagonals of the triangular basis).<br />

Pass 1 selects pivots from free columns (corresponding to<br />

variables with no upper and lower bounds). Pass 2 requires<br />

pivots to be in rows associated with equality (=E=)<br />

constraints. Pass 3 allows the pivots to be in inequality rows.<br />

For remaining (unassigned) rows, the associated slack<br />

variables are inserted to complete the basis.


(Default = 1)<br />

0 The initial basis will contain only slack variables: B = I.<br />

1 All columns of A are considered (except those excluded by<br />

the Start assigned nonlinears option).<br />

2 Only the columns of A corresponding to the linear variables<br />

y will be considered.<br />

3 Variables that appear nonlinearly in the objective will be<br />

excluded from the initial basis.<br />

4 Variables that appear nonlinearly in the constraints will be<br />

excluded from the initial basis.<br />

Crash tolerance r<br />

The Crash tolerance r allows the starting procedure CRASH<br />

to ignore certain “small” nonzeros in each column of A. If<br />

amax is the largest element in column j, other nonzeros aij in<br />

the column are ignored if |aij| < amax · r. To be meaningful, r<br />

should be in the range 0 ≤ r < 1.<br />

When r > 0.0 the basis obtained by CRASH may not be<br />

strictly triangular, but it is likely to be nonsingular and<br />

almost triangular. The intention is to obtain a starting basis<br />

containing more columns of A and fewer (arbitrary) slacks.<br />

A feasible solution may be reached sooner on some<br />

problems.<br />

For example, suppose the first m columns of A are the<br />

matrix shown under LU factor tolerance; i.e., a tridiagonal<br />

matrix with entries -1, 4, -1. To help CRASH choose all m<br />

columns for the initial basis, we could specify Crash<br />

tolerance r for some value of r > 0.25.<br />

(Default = 0.1)<br />

Damping parameter r<br />

See Major damping parameter.<br />

(Default = 2.0)<br />
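For the tridiagonal example above, the largest element in each column is 4 and the off-diagonal entries are -1, so CRASH ignores the off-diagonals whenever 1 < 4r, i.e. r > 0.25. A hedged option-file sketch with an illustrative value:<br />

```gams
* minos.opt -- illustrative: with amax = 4 and off-diagonals of -1,
* any r > 0.25 lets CRASH ignore the off-diagonals and pivot on
* the 4's, so all m columns enter the initial basis.
crash option 1
crash tolerance 0.3
```
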

Debug level i This causes various amounts of information to be output.<br />

Most debug levels will not be helpful to GAMS users, but<br />

they are listed here for completeness.<br />

(Default = 0)<br />

0 No debug output.<br />

2 (or more) Output from M5SETX showing the maximum residual after a<br />

row check.<br />

40 Output from LU8RPC (which updates the LU factors of the



basis matrix), showing the position of the last nonzero in the<br />

transformed incoming column.<br />

50 Output from LU1MAR (which updates the LU factors each<br />

refactorization), showing each pivot row and column and the<br />

dimensions of the dense matrix involved in the associated<br />

elimination.<br />

100 Output from M2BFAC and M5LOG listing the basic and<br />

superbasic variables and their values at every iteration.<br />

Expand frequency i This option is part of an anti-cycling procedure designed to<br />

guarantee progress even on highly degenerate problems.<br />

For linear models, the strategy is to force a positive step at<br />

every iteration, at the expense of violating the bounds on the<br />

variables by a small amount. Suppose the specified<br />

feasibility tolerance is δ.<br />

Over a period of i iterations, the tolerance actually used by<br />

GAMS/MINOS increases from 0.5δ to δ (in steps of 0.5δ/i).<br />

For nonlinear models, the same procedure is used for<br />

iterations in which there is only one superbasic variable.<br />

(Cycling can occur only when the current solution is at a<br />

vertex of the feasible region.) Thus, zero steps are allowed if<br />

there is more than one superbasic variable, but otherwise<br />

positive steps are enforced.<br />

Increasing i helps reduce the number of slightly infeasible<br />

nonbasic variables (most of which are eliminated<br />

during a resetting procedure). However, it also diminishes<br />

the freedom to choose a large pivot element (see Pivot<br />

tolerance).<br />

(Default = 10000)<br />
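For example, with the default Feasibility tolerance of 1.0E-6 and i = 10000, the working tolerance grows from 0.5E-6 to 1.0E-6 in increments of 0.5E-10 per iteration. A hedged option-file sketch with an illustrative value:<br />

```gams
* minos.opt -- illustrative: a larger expand frequency means the
* working feasibility tolerance grows in smaller steps (0.5*delta/i),
* at the cost of less freedom in choosing large pivot elements.
expand frequency 20000
```
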

Factorization frequency i At most i basis changes will occur between factorizations of<br />

the basis matrix.<br />

With linear programs, the basis factors are usually updated<br />

every iteration. The default i is reasonable for typical<br />

problems. Higher values up to i =100 (say) may be more<br />

efficient on problems that are extremely sparse and well<br />

scaled.<br />

When the objective function is nonlinear, fewer basis<br />

updates will occur as an optimum is approached. The<br />

number of iterations between basis factorizations will<br />

therefore increase. During these iterations a test is made



regularly (according to the Check frequency) to ensure that<br />

the general constraints are satisfied. If necessary the basis<br />

will be re-factorized before the limit of i updates is reached.<br />

When the constraints are nonlinear, the Minor iterations<br />

limit will probably preempt i.<br />

(Default = 50)<br />

Feasibility tolerance r When the constraints are linear, a feasible solution is one in<br />

which all variables, including slacks, satisfy their upper and<br />

lower bounds to within the absolute tolerance r. (Since<br />

slacks are included, this means that the general linear<br />

constraints are also satisfied to within r.)<br />

GAMS/MINOS attempts to find a feasible solution before<br />

optimizing the objective function. If the sum of<br />

infeasibilities cannot be reduced to zero, the problem is<br />

declared infeasible. Let SINF be the corresponding sum of<br />

infeasibilities, If SINF is quite small, it may be appropriate<br />

to raise r by a factor of 10 or 100. Otherwise, some error in<br />

the data should be suspected.<br />

If SINF is not small, there may be other points that have a<br />

significantly smaller sum of infeasibilities. GAMS/MINOS<br />

does not attempt to find a solution that minimizes the sum.<br />

If Scale option = 1 or 2, feasibility is defined in terms of the<br />

scaled problem (since it is then more likely to be<br />

meaningful).<br />

A nonlinear objective function F(x) will be evaluated only at<br />

feasible points. If there are regions where F(x) is undefined,<br />

every attempt should be made to eliminate these regions<br />

from the problem. For example, for a function F(x) =<br />

sqrt(x1) + log(x2), it is essential to place lower<br />

bounds on both variables. If Feasibility tolerance = 10^-6, the<br />

bounds x1 > 10^-5 and x2 > 10^-4 might be appropriate. (The log<br />

singularity is more serious; in general, keep variables as far<br />

away from singularities as possible.)<br />

If the constraints are nonlinear, the above comments apply to<br />

each major iteration. A “feasible solution” satisfies the<br />

current linearization of the constraints to within the<br />

tolerance r. The associated subproblem is said to be feasible.<br />

As for the objective function, bounds should be used to keep<br />

x more than r away from singularities in the<br />

constraint functions f(x).<br />

At the start of major iteration k, the constraint functions f(xk)



are evaluated at a certain point xk. This point always satisfies<br />

the relevant bounds (l < xk < u), but may not satisfy the<br />

general linear constraints.<br />

During the associated minor iterations, F(x) and f(x) will be<br />

evaluated only at points x that satisfy the bound and the<br />

general linear constraints (as well as the linearized nonlinear<br />

constraints).<br />

If a subproblem is infeasible, the bounds on the linearized<br />

constraints are relaxed temporarily, in several stages.<br />

Feasibility with respect to the nonlinear constraints<br />

themselves is measured against the Row tolerance (not<br />

against r). The relevant test is made at the start of a major<br />

iteration.<br />

(Default = 1.0E-6)<br />
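The sqrt/log example above can be sketched in GAMS as follows (a hedged fragment; variable and equation names are hypothetical):<br />

```gams
* Hypothetical sketch: keep F(x) = sqrt(x1) + log(x2) defined at
* every point MINOS may evaluate, given Feasibility tolerance 1.0e-6.
Variables x1, x2, z;
Equation objdef;

objdef .. z =e= sqrt(x1) + log(x2);

x1.lo = 1.0e-5;  * safely above the sqrt singularity at 0
x2.lo = 1.0e-4;  * well away from the log singularity
```

The bounds are chosen so that even a solution r below its bound (within the feasibility tolerance) stays inside the functions' domains.<br />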

Hessian dimension r This specifies that an r × r triangular matrix R is to be<br />

available for use by the quasi-Newton algorithm (to<br />

approximate the reduced Hessian matrix according to<br />

Z^T HZ ≈ R^T R). Suppose there are s superbasic variables at a<br />

particular iteration. Whenever possible, r should be greater<br />

than s.<br />

If r > s, the first s columns of R will be used to approximate<br />

the reduced Hessian in the normal manner. If there are no<br />

further changes to the set of superbasic variables, the rate of<br />

convergence will ultimately be superlinear.<br />

If r < s, a matrix of the form<br />

R = ⎛ Rr  0 ⎞<br />

&nbsp;&nbsp;&nbsp;&nbsp;⎝ 0&nbsp;&nbsp; D ⎠<br />

will be used to approximate the reduced Hessian, where Rr<br />

is an r × r upper triangular matrix and D is a diagonal<br />

matrix of order s - r. The rate of convergence will no longer<br />

be superlinear (and may be arbitrarily slow).<br />

The storage required is of the order ½r², which is substantial<br />

if r is as large as 200 (say). In general, r should be a slight<br />

over-estimate of the final number of superbasic variables,<br />

whenever storage permits. It need not be larger than n1+1,<br />

where n1 is the number of nonlinear variables. For many<br />

problems it can be much smaller than n1.<br />

If Superbasics limit s is specified, the default value of r is<br />

the same number, s (and conversely). This is a safeguard to


ensure superlinear convergence wherever possible. If<br />

neither r nor s is specified, GAMS chooses values for both,<br />

using certain characteristics of the problem.<br />

(Default = Superbasics limit)<br />

Iterations limit i<br />

This is the maximum number of minor iterations allowed (i.e.,<br />

iterations of the simplex method or the reduced-gradient<br />

method). This option, if set, overrides the GAMS ITERLIM<br />

specification. If i = 0, no minor iterations are performed, but<br />

the starting point is tested for both feasibility and optimality.<br />

Iters or Itns are alternative keywords.<br />

(Default = 1000)<br />
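A hedged option-file sketch combining this limit with the related Superbasics limit discussed under Hessian dimension (values are illustrative only):<br />

```gams
* minos.opt -- illustrative values.
* Allow more minor iterations than the default:
iterations limit 5000
* A slight over-estimate of the expected number of superbasics;
* Hessian dimension then defaults to the same value:
superbasics limit 300
```
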

Lagrangian Yes/No This determines the form of the objective function used for<br />

the linearized subproblems. The default value yes is highly<br />

recommended. The Penalty parameter value is then also<br />

relevant.<br />

If No is specified, the nonlinear constraint functions will be<br />

evaluated only twice per major iteration. Hence this option<br />

may be useful if the nonlinear constraints are very expensive<br />

to evaluate. However, in general there is a great risk that<br />

convergence may not occur.<br />

(Default = yes)<br />

Linesearch tolerance r<br />

For nonlinear problems, this controls the accuracy with<br />

which a step-length α is located in the one-dimensional<br />

problem<br />

Minimize (over α) F(x + αp)   subject to 0 < α ≤ β<br />

A linesearch occurs on most minor iterations for which x is<br />

feasible. [If the constraints are nonlinear, the function being<br />

minimized is the augmented Lagrangian in equation (5).]<br />

r must be a real value in the range 0.0 < r < 1.0.<br />

The default value r = 0.1 requests a moderately accurate<br />

search. It should be satisfactory in most cases.<br />

If the nonlinear functions are cheap to evaluate, a more<br />

accurate search may be appropriate: try r = 0.01 or r =<br />

0.001. The number of iterations should decrease, and this<br />

will reduce total run time if there are many linear or<br />

nonlinear constraints.


If the nonlinear functions are expensive to evaluate, a less<br />

accurate search may be appropriate; try r = 0.5 or perhaps r<br />

= 0.9. (The number of iterations will probably increase, but<br />

the total number of function evaluations may decrease<br />

enough to compensate.)<br />

(Default = 0.1)<br />

Log frequency i<br />

In general, one line of the iteration log is printed every i-th<br />

minor iteration. A heading labels the printed items, which<br />

include the current iteration number, the number and sum of<br />

infeasibilities (if any), the subproblem objective value (if<br />

feasible), and the number of evaluations of the nonlinear<br />

functions.<br />

A value such as i = 10, 100 or larger is suggested for those<br />

interested only in the final solution.<br />

Log frequency 0 may be used as shorthand for Log frequency<br />

99999.<br />

If Print level > 0, the default value of i is 1. If Print level =<br />

0, the default value of i is 100. If Print level = 0 and the<br />

constraints are nonlinear, the minor iteration log is not<br />

printed (and the Log frequency is ignored). Instead, one line<br />

is printed at the beginning of each major iteration.<br />

(Default = 1 or 100)<br />

LU factor tolerance r1<br />

LU update tolerance r2<br />

LU singularity tolerance r3<br />

The first two tolerances affect the stability and sparsity of<br />

the basis factorization B = LU during re-factorization and<br />

updates respectively. The values specified must satisfy ri ≥<br />

1.0. The matrix L is a product of matrices of the form<br />

⎛ 1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;⎞<br />

⎝ μ&nbsp;&nbsp; 1 ⎠<br />

where the multipliers μ will satisfy |μ| < ri.<br />

1. The default values ri = 10.0 usually strike a good<br />

compromise between stability and sparsity.<br />

2. For large and relatively dense problems, ri = 25.0 (say)<br />

may give a useful improvement in sparsity without impairing<br />

stability to a serious degree.<br />

3. For certain very regular structures (e.g., band matrices) it<br />

may be necessary to set and r1 and/or r2 to values smaller<br />

than the default in order to achieve stability. For example, if


the columns of A include a sub-matrix of the form<br />

⎛&nbsp; 4&nbsp; -1&nbsp;&nbsp;&nbsp;&nbsp; ⎞<br />

⎜ -1&nbsp;&nbsp; 4&nbsp; -1 ⎟<br />

⎝&nbsp;&nbsp;&nbsp;&nbsp; -1&nbsp;&nbsp; 4 ⎠<br />

it would be judicious to set both r1 and r2 to values in the<br />

range 1.0 < ri < 4.0.<br />

The singularity tolerance r3 helps guard<br />

against ill-conditioned basis matrices. When the basis is<br />

refactorized, the diagonal elements of U are tested as<br />

follows: if |Ujj| ≤ r3 or |Ujj| < r3 maxi |Uij|, the j-th column<br />

of the basis is replaced by the corresponding slack variable.<br />

(This is most likely to occur after a restart, or at the start of a<br />

major iteration.)<br />

In some cases, the Jacobian matrix may converge to values<br />

that make the basis very ill-conditioned, and the<br />

optimization could progress very slowly (if at all).<br />

Setting r3 = 1.0E-5, say, may help cause a judicious change<br />

of basis.<br />

(Default values: r1 = 10.0, r2 = 10.0, r3 = ε^(2/3) ≈ 10^-11)<br />

Major damping parameter r<br />
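A hedged option-file sketch for a large, relatively dense problem, following point 2 and the singularity advice above (values are illustrative only):<br />

```gams
* minos.opt -- illustrative: looser factor/update tolerances can
* improve sparsity on large, relatively dense problems
* (the default for both is 10.0):
lu factor tolerance 25.0
lu update tolerance 25.0
* A larger singularity tolerance can force a change of basis
* when the Jacobian makes the basis ill-conditioned:
lu singularity tolerance 1.0e-5
```
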

The parameter may assist convergence on problems that<br />

have highly nonlinear constraints. It is intended to prevent<br />

large relative changes between subproblem solutions (xk, λk)<br />

and (xk+1, λk+1). For example, the default value 2.0 prevents<br />

the relative change in either xk or λk from exceeding 200<br />

percent. It will not be active on well behaved problems.<br />

The parameter is used to interpolate between the solutions at<br />

the beginning and end of each major iteration. Thus xk+1 and<br />

λk+1 are changed to<br />

xk + σ (xk+1 - xk) and λk + σ (λk+1 - λk)<br />

for some step-length σ < 1. In the case of nonlinear equations<br />

(where the number of constraints is the same as the number<br />

of variables) this gives a damped Newton method.<br />

This is a very crude control. If the sequence of major<br />

iterations does not appear to be converging, one should first<br />

re-run the problem with a higher Penalty parameter (say 10<br />

or 100 times the default ρ). (Skip this re-run in the case of<br />

nonlinear equations: there are no degrees of freedom and the<br />

value of ρ is irrelevant.)<br />

If the subproblem solutions continue to change violently, try<br />

reducing r to 0.2 or 0.1 (say).


For implementation reasons, the shortened step σ applies<br />

to the nonlinear variables x, but not to the linear variables y<br />

or the slack variables s. This may reduce the efficiency of<br />

the control.<br />

(Default = 2.0)<br />

Major iterations i<br />

This is the maximum number of major iterations allowed. It is<br />

intended to guard against an excessive number of<br />

linearizations of the nonlinear constraints, since in some<br />

cases the sequence of major iterations may not converge. The<br />

progress of the major iterations can be best monitored using<br />

Print level 0 (the default).<br />

(Default = 50)<br />

Minor damping parameter r<br />

This parameter limits the change in x during a linesearch. It<br />

applies to all nonlinear problems, once a “feasible solution”<br />

or “feasible subproblem” has been found.<br />

A linesearch of the form minimize<sub>α</sub> F(x + αp) is performed over the range 0 < α ≤ β, where β is the step to the nearest upper or lower bound on x. Normally, the first step length tried is α<sub>1</sub> = min(1, β).<br />

In some cases, such as F(x) = ae<sup>bx</sup> or F(x) = ax<sup>b</sup>, even a moderate change in the components of x can lead to floating-point overflow. The parameter r is therefore used to define a limit<br />

β′ = r(1 + ||x||)/||p||<br />

and the first evaluation of F(x) is at the potentially smaller step length α<sub>1</sub> = min(1, β, β′).<br />

Wherever possible, upper and lower bounds on x should be<br />

used to prevent evaluation of nonlinear functions at<br />

meaningless points. The Minor damping parameter provides<br />

an additional safeguard. The default value r = 2.0 should not affect progress on well-behaved problems, but setting r = 0.1 or 0.01 may be helpful when rapidly varying functions are present. A “good” starting point may be required. An important application is to the class of nonlinear least-squares problems.<br />

In cases where several local optima exist, specifying a small value for r may help locate an optimum near the starting point.<br />

(Default = 2.0)
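As an illustration, the damping options discussed above can be set in a GAMS/MINOS option file (minos.opt, activated by setting modelname.optfile = 1 before the solve statement). The values below are illustrative only, not recommendations:

```
* minos.opt -- damp the major and minor steps
* (illustrative values for a badly behaved model)
major damping parameter 0.2
minor damping parameter 0.1
```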



Minor iterations i This is the maximum number of minor iterations allowed<br />

between successive linearizations of the nonlinear<br />

constraints.<br />

A moderate value (e.g., 20 ≤ i ≤ 50 ) prevents excessive<br />

efforts being expended on early major iterations, but allows<br />

later subproblems to be solved to completion.<br />

The limit applies to both infeasible and feasible iterations. In some cases, a large number of iterations (say K) might be required to obtain a feasible subproblem. If good starting values are supplied for variables appearing nonlinearly in the constraints, it may be sensible to specify i > K, to allow the first major iteration to terminate at a feasible (and perhaps optimal) subproblem solution. (If a “good” initial subproblem is arbitrarily interrupted by a small i, the subsequent linearization may be less favorable than the first.)<br />

In general it is unsafe to specify a value as small as i = 1 or 2. (Even when an optimal solution has been reached, a few minor iterations may be needed for the corresponding subproblem to be recognized as optimal.)<br />

The Iteration limit provides an independent limit on the total<br />

minor iterations (across all subproblems).<br />

If the constraints are linear, only the Iteration limit applies:<br />

the minor iterations value is ignored.<br />

(Default = 40)<br />
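For example, to allow each subproblem more minor iterations while capping the number of linearizations, the two limits described above might be combined in the option file (illustrative values, assuming a model with slow-to-converge subproblems):

```
* generous minor iterations per subproblem,
* fewer major iterations overall (illustrative)
minor iterations 80
major iterations 30
```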

Multiple price i<br />

“Pricing” refers to a scan of the current nonbasic variables to determine if any should be changed from their current values (by allowing them to become superbasic or basic).<br />

If multiple pricing is in effect, the i best nonbasic variables are selected for admission (those with the largest reduced gradients of appropriate sign). If partial pricing is also in effect, the i best variables are selected from the current partition of A and I.<br />

The default i = 1 is best for linear programs, since an<br />

optimal solution will have zero superbasic variables.<br />

Warning: If i > 1, GAMS/MINOS will use the reduced-gradient method (rather than the simplex method) even on purely linear problems. The subsequent iterations do not correspond to the efficient “minor iterations” carried out by commercial linear programming systems using multiple pricing. (In the latter systems, the classical simplex method is applied to a tableau involving i dense columns of dimension m, and i is therefore limited for storage reasons


typically to the range 2 ≤ i ≤ 7.)<br />

GAMS/MINOS varies all superbasic variables simultaneously. For linear problems its storage requirements are essentially independent of i. Larger values of i are therefore practical, but in general the iterations and time required when i > 1 are greater than when the simplex method is used (i = 1).<br />

On large nonlinear problems it may be important to set i > 1 if the starting point does not contain many superbasic variables. For example, if a problem has 3000 variables and 500 of them are nonlinear, the optimal solution may well have 200 variables superbasic. If the problem is solved in several runs, it may be beneficial to use i = 10 (say) for early runs, until it seems that the number of superbasics has leveled off.<br />

If Multiple price i is specified, it is also necessary to specify Superbasics limit s for some s > i.<br />

(Default = 1)<br />

Optimality tolerance r<br />

This is used to judge the size of the reduced gradients d<sub>j</sub> = g<sub>j</sub> − π<sup>T</sup>a<sub>j</sub>, where g<sub>j</sub> is the gradient of the objective function corresponding to the j-th variable, a<sub>j</sub> is the associated column of the constraint matrix (or Jacobian), and π is the set of dual variables.<br />

By construction, the reduced gradients for basic variables are always zero. Optimality will be declared if the reduced gradients for nonbasic variables at their lower or upper bounds satisfy<br />

d<sub>j</sub> / ||π|| ≥ −r or d<sub>j</sub> / ||π|| ≤ r<br />

respectively, and if |d<sub>j</sub>| / ||π|| ≤ r for superbasic variables.<br />

Here ||π|| is a measure of the size of the dual variables. It is included to make the tests independent of a scale factor on the objective function. The quantity actually used is defined by<br />

σ = ∑<sub>i=1</sub><sup>m</sup> |π<sub>i</sub>|, ||π|| = max{σ/√m, 1}<br />

so that only large scale factors are allowed for. If the objective is scaled down to be small, the optimality test effectively reduces to comparing d<sub>j</sub> against r.<br />



(Default = 1.0E-6)<br />
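A looser convergence test than the default can be requested in the option file; the value below is illustrative, for a model where the default tolerance proves too strict:

```
* loosen the optimality test (illustrative)
optimality tolerance 1.0e-5
```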

Partial Price i<br />

This parameter is recommended for large problems that have<br />

significantly more variables than constraints. It reduces the<br />

work required for each “pricing” operation (when a nonbasic<br />

variable is selected to become basic or superbasic).<br />

When i = 1, all columns of the constraints matrix (A I) are<br />

searched.<br />

Otherwise, A and I are partitioned to give i roughly equal segments A<sub>j</sub>, I<sub>j</sub> (j = 1 to i). If the previous search was successful on A<sub>j−1</sub>, I<sub>j−1</sub>, the next search begins on the segments A<sub>j</sub>, I<sub>j</sub>. (All subscripts here are modulo i.)<br />

If a reduced gradient is found that is larger than some dynamic tolerance, the variable with the largest such reduced gradient (of appropriate sign) is selected to become superbasic. (Several may be selected if multiple pricing has been specified.) If nothing is found, the search continues on the next segments A<sub>j+1</sub>, I<sub>j+1</sub>, and so on.<br />

Partial price t (or t/2 or t/3) may be appropriate for time-staged models having t time periods.<br />

(Default = 10 for LPs, or 1 for NLPs)<br />
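Following the time-staged advice above, a model with 12 time periods might try one pricing segment per period (an illustrative setting, not a recommendation):

```
* one pricing segment per time period (illustrative)
partial price 12
```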

Penalty Parameter r This specifies the value of ρ in the modified augmented<br />

Lagrangian. It is used only when Lagrangian = yes (the<br />

default setting).<br />

For early runs on a problem with unknown characteristics, the default value should be acceptable. If the problem is known to be highly nonlinear, specify a larger value, such as 10 times the default. In general, a positive value of ρ may be necessary to ensure convergence, even for convex programs.<br />

On the other hand, if ρ is too large, the rate of convergence may be unnecessarily slow. If the functions are not highly nonlinear or a good starting point is known, it will often be safe to specify Penalty parameter 0.0.<br />

Initially, use a moderate value for r (such as the default) and<br />

a reasonably low Iterations and/or major iterations limit.<br />

If successive major iterations appear to be terminating with<br />

radically different solutions, the penalty parameter should be<br />

increased. (See also the Major damping parameter. )<br />

If there appears to be little progress between major iterations, it may help to reduce the penalty parameter.<br />

(Default = 100.0/m<sub>1</sub>)<br />
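For a highly nonlinear model, the advice above suggests starting with a larger ρ and a modest limit on the major iterations; an illustrative option file sketch (the values are placeholders, not recommendations):

```
* raise the penalty for a highly nonlinear model,
* and cap the exploratory run (illustrative)
penalty parameter 1000.0
major iterations 20
```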



Pivot Tolerance r<br />

Broadly speaking, the pivot tolerance is used to prevent columns entering the basis if they would cause the basis to become almost singular. The default value of r should be satisfactory in most circumstances.<br />

When x changes to x + αp for some search direction p, a “ratio test” is used to determine which component of x reaches an upper or lower bound first. The corresponding element of p is called the pivot element.<br />

For linear problems, elements of p are ignored (and therefore cannot be pivot elements) if they are smaller than the pivot tolerance r. For nonlinear problems, elements smaller than r ||p|| are ignored.<br />

It is common (on “degenerate” problems) for two or more variables to reach a bound at essentially the same time. In such cases, the Feasibility tolerance (say t) provides some freedom to maximize the pivot element and thereby improve numerical stability. Excessively small values of t should not be specified.<br />

To a lesser extent, the Expand frequency (say f) also provides some freedom to maximize the pivot element. Excessively large values of f should therefore not be specified.<br />

(Default = ε<sup>2/3</sup> ≈ 10<sup>-11</sup>)<br />

Print level i<br />

This varies the amount of information that will be output during optimization.<br />

Print level 0 sets the default Log and summary frequencies to 100. It is then easy to monitor the progress of a run.<br />

Print level 1 (or more) sets the default Log and summary frequencies to 1, giving a line of output for every minor iteration. Print level 1 also produces basis statistics, i.e., information relating to LU factors of the basis matrix whenever the basis is re-factorized.<br />

For problems with nonlinear constraints, certain quantities<br />

are printed at the start of each major iteration. The value of<br />

i is best thought of as a binary number of the form<br />

Print level JFLXB<br />

where each letter stands for a digit that is either 0 or 1. The<br />

quantities referred to are:




B Basis statistics, as mentioned above.<br />

X x<sub>k</sub>, the nonlinear variables involved in the objective function or the constraints.<br />

L λ<sub>k</sub>, the Lagrange-multiplier estimates for the nonlinear constraints. (Suppressed if Lagrangian = No, since then λ<sub>k</sub> = 0.)<br />

F f(x<sub>k</sub>), the values of the nonlinear constraint functions.<br />

J J(x<sub>k</sub>), the Jacobian matrix.<br />

To obtain output of any item, set the corresponding digit to<br />

1, otherwise to 0. For example, Print level 10 sets X=1 and<br />

the other digits equal to zero; the nonlinear variables will be<br />

printed each major iteration.<br />

If J =1, the Jacobian matrix will be output column-wise at<br />

the start of each major iteration. Column j will be preceded by the value of the corresponding variable x<sub>j</sub> and a key to<br />

indicate whether the variable is basic, superbasic or<br />

nonbasic. (Hence if J = 1, there is no reason to specify X =<br />

1 unless the objective contains more nonlinear variables than<br />

the Jacobian.) A typical line of output is<br />

3 1.250000D+01 BS 1 1.00000D+00<br />

4 2.00000D+00<br />

which would mean that x<sub>3</sub> is basic at value 12.5, and the<br />

third column of the Jacobian has elements of 1.0 and 2.0 in<br />

rows 1 and 4. (Note: the GAMS/MINOS row numbers are<br />

usually different from the GAMS row numbers; see the<br />

Solution option.)<br />

(Default = 0)<br />

Radius of convergence r<br />

This determines when the penalty parameter ρ will be reduced (if initialized to a positive value). Both the nonlinear constraint violation (see ROWERR below) and the relative change in consecutive Lagrange multiplier estimates must be<br />

less than r at the start of a major iteration before ρ is reduced<br />

or set to zero.<br />

A few major iterations later, full completion will be<br />

requested if not already set, and the remaining sequence of<br />

major iterations should converge quadratically to an<br />

optimum.<br />

(Default = 0.01)



Row Tolerance r This specifies how accurately the nonlinear constraints<br />

should be satisfied at a solution. The default value is usually<br />

small enough, since model data is often specified to about that accuracy.<br />

Let ROWERR be the maximum component of the residual vector f(x) + A<sub>1</sub>y − b<sub>1</sub>, normalized by the size of the solution. Thus<br />

ROWERR = ||f(x) + A<sub>1</sub>y − b<sub>1</sub>||<sub>∞</sub> / (1 + XNORM)<br />

where XNORM is a measure of the size of the current solution (x, y). The solution is regarded as acceptably feasible if ROWERR ≤ r.<br />

If the problem functions involve data that is known to be of low accuracy, a larger Row tolerance may be appropriate.<br />

(Default = 1.0E-6)<br />
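If, say, the nonlinear constraint data is only accurate to about three digits, a correspondingly looser feasibility requirement can be stated (illustrative value):

```
* accept nonlinear residuals up to the data accuracy
* (illustrative)
row tolerance 1.0e-3
```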

Scale option i<br />

This option controls the scaling that is performed on the model.<br />

(Default = 2 for LPs, 1 for NLPs)<br />

0 No scaling. If storage is at a premium, this option should be used.<br />

1 Linear constraints and variables are scaled by an iterative<br />

procedure that attempts to make the matrix coefficients as<br />

close as possible to 1.0 (see Fourer, 1982). This will<br />

sometimes improve the performance of the solution<br />

procedures. Scale linear variables is an equivalent option.<br />

2 All constraints and variables are scaled by the iterative<br />

procedure. Also, a certain additional scaling is performed<br />

that may be helpful if the right-hand side b or the solution x<br />

is large. This takes into account columns of (AI) that are<br />

fixed or have positive lower bounds or negative upper<br />

bounds. Scale nonlinear variables or Scale all variables are<br />

equivalent options.<br />

Scale Yes sets the default. (Caution: If all variables are<br />

nonlinear, Scale Yes unexpectedly does nothing, because<br />

there are no linear variables to scale). Scale No suppresses<br />

scaling (equivalent to Scale Option 0).<br />

If nonlinear constraints are present, Scale option 1 or 0 should generally be tried first. Scale option 2 gives scales that depend on the initial Jacobian, and should therefore be used only if (a) a good starting point is provided, and (b) the problem is not highly nonlinear.
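Following that advice, a model with nonlinear constraints might start with linear-variable scaling only, and print the computed scales for inspection (illustrative):

```
* scale linear variables only, and report the scales
* (illustrative first attempt for an NLP)
scale option 1
scale print
```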



Scale, print This causes the row-scales r(i) and column-scales c(j) to be<br />

printed. The scaled matrix coefficients are a′<sub>ij</sub> = a<sub>ij</sub> c(j)/r(i), and the scaled bounds on the variables and slacks are l′<sub>j</sub> = l<sub>j</sub>/c(j), u′<sub>j</sub> = u<sub>j</sub>/c(j), where c(j) = r(j−n) if j > n.<br />

If a Scale option has not already been specified, Scale, print<br />

sets the default scaling.<br />

Scale Tolerance r<br />

Solution No/Yes<br />

Start assigned nonlinears<br />

All forms except Scale option may specify a tolerance r<br />

where 0 < r < 1.



excluded. Let K denote the number of such “assigned<br />

nonlinear variables.”<br />

Note that the first and fourth keywords are significant.<br />

(Default = superbasic)<br />

Superbasic Specify superbasic for highly nonlinear models, as long as K<br />

is not too large (say K < 100) and the initial values are<br />

“good”.<br />

Basic Specify basic for models that are essentially “square” (i.e., if<br />

there are about as many general constraints as variables).<br />

Nonbasic Specify nonbasic if K is large.<br />

Eligible for crash Specify eligible for Crash for linear or nearly linear models.<br />

The nonlinear variables will be treated in the manner<br />

described under Crash option.<br />

Subspace tolerance r<br />

This controls the extent to which optimization is confined to<br />

the current set of basic and superbasic variables (Phase 4<br />

iterations), before one or more nonbasic variables are added<br />

to the superbasic set (Phase 3).<br />

r must be a real number in the range 0 < r ≤ 1.<br />

When a nonbasic variable x<sub>j</sub> is made superbasic, the resulting norm of the reduced-gradient vector (for all superbasics) is recorded. Let this be ||Z<sup>T</sup>g<sub>0</sub>||. (In fact, the norm will be |d<sub>j</sub>|, the size of the reduced gradient for the new superbasic variable x<sub>j</sub>.)<br />

Subsequent Phase 4 iterations will continue at least until the norm of the reduced-gradient vector satisfies ||Z<sup>T</sup>g|| ≤ r ||Z<sup>T</sup>g<sub>0</sub>||. (||Z<sup>T</sup>g|| is the size of the largest reduced-gradient component among the superbasic variables.)<br />

A smaller value of r is likely to increase the total number of iterations, but may reduce the number of basis changes. A<br />

larger value such as r = 0.9 may sometimes lead to improved<br />

overall efficiency, if the number of superbasic variables has<br />

to increase substantially between the starting point and an<br />

optimal solution.<br />

Other convergence tests on the change in the function being<br />

minimized and the change in the variables may prolong<br />

Phase 4 iterations. This helps to make the overall<br />

performance insensitive to larger values of r.<br />

(Default = 0.5)<br />

Summary frequency i<br />

A brief form of the iteration log is output to the summary



file. In general, one line is output every i-th minor iteration.<br />

In an interactive environment, the output normally appears at<br />

the terminal and allows a run to be monitored. If something<br />

looks wrong, the run can be manually terminated.<br />

The Summary frequency controls summary output in the same way as the Log frequency controls output to the print file.<br />

A value such as i = 10 or 100 is often adequate to determine<br />

if the SOLVE is making progress. If Print level = 0, the<br />

default value of i is 100. If Print level > 0, the default value<br />

of i is 1. If Print level = 0 and the constraints are nonlinear,<br />

the Summary frequency is ignored. Instead, one line is<br />

printed at the beginning of each major iteration.<br />

(Default = 1 or 100)<br />

Superbasics limit i<br />

This places a limit on the storage allocated for superbasic<br />

variables. Ideally, i should be set slightly larger than the<br />

“number of degrees of freedom” expected at an optimal<br />

solution.<br />

For linear problems, an optimum is normally a basic solution with no degrees of freedom. (The number of variables lying strictly between their bounds is not more than m, the number of general constraints.) The default value of i is therefore 1.<br />

For nonlinear problems, the number of degrees of freedom is<br />

often called the “number of independent variables.”<br />

Normally, i need not be greater than n<sub>1</sub> + 1, where n<sub>1</sub> is the number of nonlinear variables. For many problems, i may be considerably smaller than n<sub>1</sub>. This will save storage if n<sub>1</sub> is very large.<br />

This parameter also sets the Hessian dimension, unless the<br />

latter is specified explicitly (and conversely). If neither<br />

parameter is specified, GAMS chooses values for both,<br />

using certain characteristics of the problem.<br />

(Default = Hessian dimension)<br />

Unbounded objective value r<br />

These parameters are intended to detect unboundedness in<br />

nonlinear problems. During a linesearch of the form minimize<sub>α</sub> F(x + αp), if |F| exceeds r or if α exceeds the Unbounded step size, iterations are terminated with the exit message PROBLEM IS UNBOUNDED (OR BADLY SCALED).<br />

If singularities are present, unboundedness in F(x) may be



manifested by a floating-point overflow (during the evaluation of F(x + αp)), before the test against r can be made.<br />

Unboundedness in x is best avoided by placing finite upper and lower bounds on the variables. See also the Minor damping parameter.<br />

(Default = 1.0E+20)<br />

Unbounded step size r<br />

This parameter is also intended to detect unboundedness in nonlinear problems. During a linesearch of the form minimize<sub>α</sub> F(x + αp), if α exceeds r, iterations are terminated with the exit message PROBLEM IS UNBOUNDED (OR BADLY SCALED).<br />

If singularities are present, unboundedness in F(x) may be manifested by a floating-point overflow (during the evaluation of F(x + αp)), before the test against r can be made.<br />

Unboundedness in x is best avoided by placing finite upper and lower bounds on the variables. See also the Minor damping parameter.<br />

(Default = 1.0E+10)<br />
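If a model stops with PROBLEM IS UNBOUNDED (OR BADLY SCALED) for reasons other than true unboundedness, the thresholds can be raised while the real safeguard (finite bounds on the variables) is put in place; the values below are illustrative:

```
* raise the unboundedness thresholds (illustrative;
* finite variable bounds remain the real safeguard)
unbounded objective value 1.0e+25
unbounded step size 1.0e+12
```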

Verify option i This option refers to a finite-difference check on the<br />

gradients (first derivatives) computed by GAMS for each<br />

nonlinear function. GAMS computes gradients analytically,<br />

and the values obtained should normally be taken as<br />

“correct”.<br />

Gradient verification occurs before the problem is scaled,<br />

and before the first basis is factorized. (Hence, it occurs<br />

before the basic variables are set to satisfy the general<br />

constraints Ax + s = 0.)<br />

(Default = 0)<br />

0 Only a “cheap” test is performed, requiring three evaluations<br />

of the nonlinear objective (if any) and two evaluations of the<br />

nonlinear constraints. Verify No is an equivalent option.<br />

1 A more reliable check is made on each component of the<br />

objective gradient. Verify objective gradients is an<br />

equivalent option.<br />

2 A check is made on each column of the Jacobian matrix



associated with the nonlinear constraints. Verify constraint gradients is an equivalent option.<br />

3 A detailed check is made on both the objective and the Jacobian. Verify, Verify gradients, and Verify Yes are equivalent options.<br />

-1 No checking is performed.<br />

Weight on linear objective r<br />

This keyword invokes the so-called composite objective technique, if the first solution obtained is infeasible, and if the objective function contains linear terms. While trying to reduce the sum of infeasibilities, the method also attempts to optimize the linear objective.<br />

At each infeasible iteration, the objective function is defined to be<br />

minimize<sub>x</sub> σw(c<sup>T</sup>x) + (sum of infeasibilities)<br />

where σ = 1 for minimization, σ = −1 for maximization, and c is the linear objective.<br />

If an “optimal” solution is reached while still infeasible, w is reduced by a factor of 10. This helps to allow for the possibility that the initial w is too large. It also provides dynamic allowance for the fact that the sum of infeasibilities is tending towards zero.<br />

The effect of w is disabled after five such reductions, or if a feasible solution is obtained.<br />

This option is intended mainly for linear programs. It is unlikely to be helpful if the objective function is nonlinear.<br />

(Default = 0.0)<br />

9. Acknowledgements<br />

This chapter is a minor revision to the earlier description of the GAMS/MINOS<br />

system found in the GAMS User’s Guide. Philip Gill, Walter Murray, Bruce A.<br />

Murtagh, Michael A. Saunders, and Margaret H. Wright were the authors of this<br />

earlier description. The section on the GAMS/MINOS log file was initially put<br />

together by Michael Saunders and Arne Drud.



10. References<br />

Bartels, R.H. (1971), A stabilization of the simplex method, Numerische<br />

Mathematik 16, 414-434.<br />

Bartels, R.H. and G. H. Golub (1969), The simplex method of linear programming<br />

using the LU decomposition, Communications of the ACM 12, 266-268.<br />

Dantzig, G.B. (1963), Linear Programming and Extensions, Princeton University<br />

Press, Princeton, New Jersey.<br />

Davidon, W.C. (1959), Variable metric methods for minimization, A.E.C. Res. and<br />

Develop. Report ANL-5990, Argonne National Laboratory, Argonne, IL.<br />

Fourer, R. (1982), Solving staircase linear programs by the simplex method,<br />

Mathematical Programming 23, 274-313.<br />

Gill, P.E., W. Murray, M. A. Saunders and M. H. Wright (1979), Two step-length algorithms for numerical optimization, Report SOL 79-25, Department of Operations Research, Stanford University, Stanford, California.

Gill, P.E., W. Murray, M. A. Saunders and M. H. Wright (1987), Maintaining LU factors of a general sparse matrix, Linear Algebra and its Applications 88/89, 239-270.<br />

Murtagh, B.A. and M. A. Saunders (1978), Large-scale linearly constrained<br />

optimization, Mathematical Programming 14, 41-72.<br />

Murtagh, B.A. and M. A. Saunders (1982), A projected Lagrangian algorithm and its implementation for sparse nonlinear constraints, Mathematical Programming Study 16, Algorithms for Constrained Minimization of Smooth Nonlinear Functions, 84-117.<br />

Murtagh, B.A. and M. A. Saunders (1983), MINOS 5.0 User‘s Guide, Report SOL<br />

83-20, Department of Operations Research, Stanford University. (Revised as MINOS 5.1 User‘s Guide, Report SOL 83-20R, 1987.)<br />

Reid, J.K. (1976), Fortran subroutines for handling sparse linear programming<br />

bases, Report R8269, Atomic Energy Research Establishment, Harwell, England.<br />

Reid, J.K. (1982), A sparsity-exploiting variant of the Bartels-Golub<br />

decomposition for linear programming bases, Mathematical Programming 24, 55-<br />

69.<br />

Robinson, S.M. (1972), A quadratically convergent algorithm for general nonlinear<br />

programming problems, Mathematical Programming 3, 145-156.



Wolfe, P. (1962), The reduced-gradient method, unpublished manuscript, RAND<br />

Corporation.


GAMS/MPSWRITE MPSWRITE 1<br />

GAMS / MPSWRITE 1.2<br />

1 INTRODUCTION 3<br />

2 AN EXAMPLE 4<br />

3 NAME GENERATION: A SHORT NOTE 6<br />

4 THE OPTION FILE 6<br />

5 ILLUSTRATIVE EXAMPLE REVISITED 6<br />

6 USING ANALYZE WITH MPSWRITE 7<br />

7 THE GAMS/MPSWRITE OPTIONS 10<br />

7.1 NAME GENERATION 10<br />

7.2 NAME DESIGN 10<br />

7.3 USING FILES 11<br />

7.4 WRITING FILES 11<br />

8 NAME GENERATION EXAMPLES 13<br />

9 INTEGRATING GAMS AND ANALYZE 21



Release Notes<br />

MPSWRITE 1.2 – February 22, 2000 – Differences from 1.1<br />

• Enabled the use of long identifier and label (set element) names. In earlier versions,<br />

the length of identifier and element names was limited to 10 characters.<br />

• Added the option OBJLAST (OBJective row LAST) to place the objective row as<br />

the last/first row in the MPS matrix. Most solution systems have MPS file readers<br />

which take the first free row (N) to be the objective row. Previous versions of<br />

MPSWRITE placed the objective row as the last row in the matrix.<br />

MPSWRITE 1.1 - April 15, 1999 – Differences from 1.0<br />

• Updated the ANALYZE link to be compatible with the book by Harvey J.<br />

Greenberg, A Computer-Assisted Analysis for Mathematical Programming Models<br />

and Solutions: A User’s Guide to ANALYZE, Kluwer Academic Publishers, Boston,<br />

1993.<br />

• Added the option MCASE (Mixed CASE names) to control the casing of MPS<br />

names.



1 Introduction<br />

The GAMS modeling language allows you to represent a mathematical programming<br />

model in a compact and easily understood form and protects the user from having to<br />

represent the model in the form that is required by any particular solver. This solver<br />

specific form is usually difficult to write and even more difficult to read and understand.<br />

However, there may be cases in which you require the model data in precisely such a form.<br />

You may need to use applications or solvers that are not interfaced to GAMS, or you may wish to test developmental algorithms in an academic or industrial setting. MPSWRITE is a<br />

subsystem of GAMS that addresses this problem. It is not a solver and can be used<br />

independently of them. It can generate the following files for a GAMS model:<br />

• MPS matrix file<br />

• MPS basis file<br />

• GAMS to MPS mapping file<br />

• ANALYZE syntax file<br />

• ANALYZE LP file<br />

Many solvers can read an optimization problem in the form of an MPS file. The MPS<br />

matrix file stores the problem matrix as well as the various bounds in a specified format. In<br />

this file, names with up to eight characters are used to specify the variables and<br />

constraints. In GAMS, however, a variable or constraint name can be many hundreds of characters long. Therefore, in order to create the MPS file, MPSWRITE must<br />

generate unique names with a maximum of eight characters for the variable and constraint<br />

names.<br />

The MPS basis file describes the status of all the variables and constraints at a point. It is<br />

also written in the MPS format. The mapping file relates the names generated for the MPS<br />

matrix and basis files to their corresponding GAMS name.<br />

Since the MPS file format can only handle linear models, nonlinear models are linearized<br />

around the current point. The matrix coefficients are then the partial derivatives of the<br />

nonlinear terms and the right-hand-sides are adjusted accordingly.<br />

MPSWRITE is capable of generating MPS files for models of the following types: lp, rmip, mip, nlp, dnlp, rminlp, and minlp. In the case of NLP models, the MPS file is generated at the current point.<br />

ANALYZE is a system developed by Harvey Greenberg and coworkers at the University of<br />

Colorado - Denver for the analysis of linear programming models and solutions. The LP file<br />

contains the matrix and solution information for the model while the syntax file is similar in<br />

role to the mapping file but in a form recognizable by ANALYZE.



2 AN EXAMPLE<br />

In order to illustrate the use of MPSWRITE add the following lines to [TRNSPORT] just<br />

before the solve statement<br />

option lp = mpswrite ;<br />

or simply change the default LP solver by using a GAMS command line parameter<br />

>gams trnsport lp=mpswrite<br />

Although we use the solve statement to execute MPSWRITE, no optimization will take<br />

place. The current solution values will remain unchanged. By default the MPSWRITE<br />

outputs the MPS file, the basis file and the mapping file. The generation of any of these<br />

files can be turned off using options in the option file as will be explained later.<br />
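As a sketch of what such an option file might contain, the two options named in the release notes above (OBJLAST and MCASE) could be set in mpswrite.opt, activated with modelname.optfile = 1. The values shown are assumptions for illustration; the actual syntax and meanings are described in the options section later in this chapter:

```
* mpswrite.opt -- illustrative settings (values assumed)
* objective row placement and mixed-case MPS names
objlast 0
mcase 1
```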

Apart from the listing file that reports model statistics, the following files are also<br />

generated by running the model: gams.mps, gams.bas, and gams.map.<br />

NAME GAMSMOD<br />

* OBJECTIVE ROW HAS TO BE MINIMIZED<br />

* ROWS : 7<br />

* COLUMNS : 7<br />

* NON-ZEROES : 20<br />

ROWS<br />

N R0000000<br />

E R0000001<br />

L R0000002<br />

L R0000003<br />

G R0000004<br />

G R0000005<br />

G R0000006<br />

COLUMNS<br />

C0000001 R0000001 -0.225000000<br />

C0000001 R0000002 1.000000000<br />

C0000001 R0000004 1.000000000<br />

C0000002 R0000001 -0.153000000<br />

C0000002 R0000002 1.000000000<br />

C0000002 R0000005 1.000000000<br />

C0000003 R0000001 -0.162000000<br />

C0000003 R0000002 1.000000000<br />

C0000003 R0000006 1.000000000<br />

C0000004 R0000001 -0.225000000<br />

C0000004 R0000003 1.000000000<br />

C0000004 R0000004 1.000000000<br />

C0000005 R0000001 -0.162000000<br />

C0000005 R0000003 1.000000000<br />

C0000005 R0000005 1.000000000<br />

C0000006 R0000001 -0.126000000<br />

C0000006 R0000003 1.000000000<br />

C0000006 R0000006 1.000000000<br />

C0000007 R0000001 1.000000000<br />

C0000007 R0000000 1.000000000



RHS<br />

RHS R0000002 350.0000000<br />

RHS R0000003 600.0000000<br />

RHS R0000004 325.0000000<br />

RHS R0000005 300.0000000<br />

RHS R0000006 275.0000000<br />

BOUNDS<br />

FR BOUND C0000007<br />

ENDATA<br />

Figure 1: MPS matrix file for trnsport.gms<br />

NAME GAMSMOD<br />

ENDATA<br />

Figure 2: MPS basis file for trnsport.gms<br />

6 7<br />

R0000001 COST<br />

R0000002 SUPPLY SEATTLE<br />

R0000003 SUPPLY SAN-DIEGO<br />

R0000004 DEMAND NEW-YORK<br />

R0000005 DEMAND CHICAGO<br />

R0000006 DEMAND TOPEKA<br />

C0000001 X SEATTLE NEW-YORK<br />

C0000002 X SEATTLE CHICAGO<br />

C0000003 X SEATTLE TOPEKA<br />

C0000004 X SAN-DIEGO NEW-YORK<br />

C0000005 X SAN-DIEGO CHICAGO<br />

C0000006 X SAN-DIEGO TOPEKA<br />

C0000007 Z<br />

Figure 3: GAMS to MPS mapping file for trnsport.gms<br />

By default, the row and column names in the generated files correspond to the sequence<br />

number of the row or column and are eight characters wide. The MPS matrix and basis<br />

files correspond to the standard format. Note that the basis file is empty since the model<br />

has not been solved and the default solution values result in a unit basis. A unit basis<br />

means that all rows are basic and all variables are at their bounds, which is the default<br />

representation in the MPS basis file. We could have solved the model first and then used a<br />

second solve statement to write the MPS files. The MPS basis file would then contain<br />

entries for non-basic rows and basic columns. In the mapping file, the first line gives the<br />

number of rows and columns in the model. In each of the following lines, the generated<br />

row or column name is followed by the corresponding GAMS name (first the row/column<br />

identifier, then the indices).<br />
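
The mapping-file layout just described can be read back with a short script. The following<br />

Python sketch assumes whitespace-separated fields, as in Figure 3; it is an illustration, not a<br />

GAMS utility:<br />

```python
# Illustrative reader for an MPSWRITE mapping file (format inferred from the
# description above and Figure 3; whitespace-separated fields are an assumption).

def read_map(lines):
    """Return (nrows, ncols, {generated_name: gams_name})."""
    nrows, ncols = (int(tok) for tok in lines[0].split())
    mapping = {}
    for line in lines[1:]:
        fields = line.split()
        generated = fields[0]              # MPSWRITE-generated name
        gams = fields[1]                   # GAMS identifier ...
        if len(fields) > 2:                # ... plus its index labels, if any
            gams += "(" + ",".join(fields[2:]) + ")"
        mapping[generated] = gams
    return nrows, ncols, mapping

sample = ["6 7",
          "R0000002 SUPPLY SEATTLE",
          "C0000001 X SEATTLE NEW-YORK",
          "C0000007 Z"]
nrows, ncols, name_of = read_map(sample)
```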

Since the model is 'solved' only with MPSWRITE, the listing file does not contain<br />

any solution values and only reports that the solution is unknown.<br />

Note also that the number of rows in the MPS file is one more than in the GAMS file. This<br />

is because GAMS creates an extra row containing only the objective variable. This<br />

transforms a problem in GAMS format, using an objective variable, into the MPS format,<br />

using an objective row. Finally, it is important to note that MPSWRITE always converts a<br />

problem into a minimization form when presenting the model in the MPS file format.



3 NAME GENERATION: A SHORT NOTE<br />

The process involves compressing a name that can potentially be hundreds of characters<br />

long into a unique one of up to sixteen characters for the ANALYZE files and the mapping<br />

file and up to eight characters for the MPS and basis files. MPSWRITE allows for two<br />

ways of generating these names. The default names correspond to the sequence number of<br />

the particular row or column. However, these names, like the ones shown in Fig. 1, are not<br />

always easy to identify with the corresponding GAMS entity without the aid of the<br />

mapping file. MPSWRITE, therefore, also allows for the more intelligent generation of<br />

names that correspond more closely with the original GAMS name and are therefore more<br />

easily identifiable. The procedure used for generating the names in this form is explained<br />

in detail in Section 8.<br />

4 The Option File<br />

The option file is called mpswrite.opt. This file, if it exists, is always read. This<br />

avoids the ambiguity that arises when another solver is added to the definition of<br />

MPSWRITE (as explained later in the chapter): if, say, BDMLP and MPSWRITE are called<br />

within the same solve step, the system would not know which option file to pick up.<br />

Therefore, the system always looks for mpswrite.opt when using MPSWRITE even if<br />

the name of the solver has been changed. The option file has the following format: each<br />

line contains either a comment (comments have a '*' as the first non-blank character) or an<br />

option. For example,<br />

* write MPS names with a maximum width of 8 characters<br />

namewid 8<br />

* design name syntax based on GAMS name<br />

nametyp 2<br />

The contents of the option file are echoed to the screen and copied to the listing file.<br />
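
A reader for this format is easy to sketch. The following Python fragment (illustrative only;<br />

the real parser is built into MPSWRITE) implements the comment and name-value<br />

convention just described:<br />

```python
# Illustrative parser for the option-file convention described above: lines whose
# first non-blank character is '*' are comments; other lines hold "name value"
# pairs. The real parser is built into MPSWRITE; this is only a sketch.

def parse_options(lines):
    opts = {}
    for line in lines:
        stripped = line.strip()
        if not stripped or stripped.startswith("*"):
            continue                           # skip blank lines and comments
        name, value = stripped.split(None, 1)  # first token is the option name
        opts[name.lower()] = value.strip()
    return opts

example = ["* write MPS names with a maximum width of 8 characters",
           "namewid 8",
           "* design name syntax based on GAMS name",
           "nametyp 2"]
opts = parse_options(example)
```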

5 ILLUSTRATIVE EXAMPLE REVISITED<br />

Create the option file 'mpswrite.opt' and add to it the following line:<br />

nametyp 2<br />

The option file is explained in greater detail in a following section. The MPS file<br />

generated on running the above problem is shown in Fig. 4.<br />

NAME GAMSMOD<br />

* OBJECTIVE ROW HAS TO BE MINIMIZED<br />

* ROWS : 7<br />

* COLUMNS : 7<br />

* NON-ZEROES : 20<br />

ROWS<br />

N +OBJ<br />

E C<br />

L SSE<br />

L SSA<br />

G DNE<br />

G DCH<br />

G DTO



COLUMNS<br />

XSENE C -0.225000000<br />

XSENE SSE 1.000000000<br />

XSENE DNE 1.000000000<br />

XSECH C -0.153000000<br />

XSECH SSE 1.000000000<br />

XSECH DCH 1.000000000<br />

XSETO C -0.162000000<br />

XSETO SSE 1.000000000<br />

XSETO DTO 1.000000000<br />

XSANE C -0.225000000<br />

XSANE SSA 1.000000000<br />

XSANE DNE 1.000000000<br />

XSACH C -0.162000000<br />

XSACH SSA 1.000000000<br />

XSACH DCH 1.000000000<br />

XSATO C -0.126000000<br />

XSATO SSA 1.000000000<br />

XSATO DTO 1.000000000<br />

Z C 1.000000000<br />

Z +OBJ 1.000000000<br />

RHS<br />

RHS SSE 350.0000000<br />

RHS SSA 600.0000000<br />

RHS DNE 325.0000000<br />

RHS DCH 300.0000000<br />

RHS DTO 275.0000000<br />

BOUNDS<br />

FR BOUND Z<br />

ENDATA<br />

Figure 4. MPS file for trnsport.gms with nametyp=2<br />

Note that the names of the rows and columns are now more easily identifiable with the<br />

corresponding GAMS name. For example, the MPS variable XSENE corresponds to<br />

x('Seattle','New-York') and is much more easily identifiable than C0000001<br />

as in Fig.1. Note that in the newly generated name, the first character X represents the<br />

variable name x, the next two characters SE represent the first index label Seattle and<br />

the last two characters, NE, represent the second index label New-York.<br />

6 USING ANALYZE WITH MPSWRITE<br />

MPSWRITE uses two files to communicate with ANALYZE: the LP file and the syntax<br />

file. The LP file contains matrix information and the values of the rows and columns.<br />

ANALYZE then studies the linear program and its solution and interactively assists in<br />

diagnostic analysis. The syntax file is similar to the mapping file and serves to provide<br />

additional information for ANALYZE. Here, names of up to 16 characters are permitted to<br />

represent the GAMS variable or constraint.<br />

In order to illustrate the generation of the ANALYZE LP and syntax files consider<br />

[TRNSPORT]. The options domps and domap are used to switch to ANALYZE format<br />

and are set to the value of 2. The new option file is shown below:<br />

nametyp 2<br />

domps 2<br />

domap 2



The explanation of these options is given in greater detail in a following section. On<br />

solving [TRNSPORT] in the same form as in the previous section, the LP and syntax files<br />

are generated as gams.lp and gams.syn respectively and are shown in Figures 5 and 6.<br />

* file created by gams<br />

* BASE<br />

OPTIMAL = MINIMIZE<br />

STATUS = UNKNOWN<br />

NAME = GAMS<br />

ROWS = 7<br />

COLUMNS = 7<br />

NONZEROS = 20<br />

LENGTH = 8<br />

OBJECTIV = +OBJ<br />

ENDBASE<br />

*** ROWS<br />

C 7 0. 0.<br />

B 0. 0.<br />

SSE 3 -1.E20 350.<br />

B 0. 0.<br />

SSA 3 -1.E20 600.<br />

B 0. 0.<br />

DNE 2 325. 1.E20<br />

B 0. 0.<br />

DCH 2 300. 1.E20<br />

B 0. 0.<br />

DTO 2 275. 1.E20<br />

B 0. 0.<br />

+OBJ 1 -1.E20 1.E20<br />

B 0. 0.<br />

*** COLUMNS<br />

XSENE 3 0. 1.E20<br />

L 0. 0.<br />

XSECH 3 0. 1.E20<br />

L 0. 0.<br />

XSETO 3 0. 1.E20<br />

L 0. 0.<br />

XSANE 3 0. 1.E20<br />

L 0. 0.<br />

XSACH 3 0. 1.E20<br />

L 0. 0.<br />

XSATO 3 0. 1.E20<br />

L 0. 0.<br />

Z 2 -1.E20 1.E20<br />

B 0. 0.



*** NONZEROS<br />

C XSENE -0.22500000E+00<br />

SSE XSENE 1.<br />

DNE XSENE 1.<br />

C XSECH -0.15300000E+00<br />

SSE XSECH 1.<br />

DCH XSECH 1.<br />

C XSETO -0.16200000E+00<br />

SSE XSETO 1.<br />

DTO XSETO 1.<br />

C XSANE -0.22500000E+00<br />

SSA XSANE 1.<br />

DNE XSANE 1.<br />

C XSACH -0.16200000E+00<br />

SSA XSACH 1.<br />

DCH XSACH 1.<br />

C XSATO -0.12600000E+00<br />

SSA XSATO 1.<br />

DTO XSATO 1.<br />

C Z 1.<br />

+OBJ Z 1.<br />

Figure 5: ANALYZE LP file for 'trnsport.gms'<br />

* SYNTAX FILE GENERATED BY GAMS/MPSWRITE<br />

00 Universal Set<br />

SE SEATTLE<br />

SA SAN-DIEGO<br />

NE NEW-YORK<br />

CH CHICAGO<br />

TO TOPEKA<br />

I. canning plants<br />

SE SEATTLE<br />

SA SAN-DIEGO<br />

J. markets<br />

NE NEW-YORK<br />

CH CHICAGO<br />

TO TOPEKA<br />

ROW SYNTAX<br />

C define objective function<br />

S observe supply limit at plant i &I.:2<br />

D satisfy demand at market j &J.:2<br />

COLUMN SYNTAX<br />

X shipment quantities in cases &I.:2 &J.:4<br />

Z total transportation costs in thousands of dollars<br />

ENDATA<br />

Figure 6: ANALYZE Syntax file for 'trnsport.gms'<br />

A further explanation on how to use these two files can be found in Section 10 and in greater<br />

detail in the ANALYZE manual.



7 The GAMS/MPSWRITE Options<br />

There are 18 options that can be used to alter the structure and content of the various output<br />

files. They are classified by purpose and presented below.<br />

7.1 Name Generation<br />

namewid integer Maximum width of the names generated. NAMEWID can have<br />

a maximum width of 16. If any of the standard MPS files is<br />

being generated the maximum width is reduced to 8<br />

(domps=1, dobas=1).<br />

(default = 8)<br />

nametyp integer Type of names created for variables and equations.<br />

(default = 1)<br />

1 The generated names correspond to the ordinality number of<br />

the corresponding row or column<br />

2 More intelligent name generation based on GAMS names<br />

mcase integer Mixed case name generation.<br />

(default=0)<br />

0 MPS names are all upper case<br />

1 MPS names use lower case<br />

objlast integer Objective row last in MPS file.<br />

(default=0)<br />

0 Objective row first<br />

1 Objective row last<br />

7.2 Name Design<br />

Used if nametyp=2. A more detailed exposition on name generation can be found in the<br />

next section.<br />

labminwid integer Minimum width for labels (set members) in new name<br />

(default = 1)<br />

labmaxwid integer Maximum width for labels (set members) in new name. The<br />

upper bound for this is 4<br />

(default = 4)<br />

labperc real 100x is the percentage of clashes that are permitted in label<br />

names before renaming. In the name generation algorithm, if<br />

the percentage of clashes is more than labperc, then the<br />

width of the newly generated code for the label is increased.<br />

(default = 0.10)



stemminwid integer Minimum width of generated row or column name.<br />

(default = 1)<br />

stemmaxwid integer Maximum width of generated row or column name. The upper<br />

bound is 4.<br />

(default = 4)<br />

stemperc real 100x is the percentage of clashes permitted in row or column<br />

names before renaming (similarly to labperc).<br />

(default = 0.10)<br />

textopt integer Option to process special @ symbol in GAMS symbol text<br />

during the generation of ANALYZE syntax files.<br />

(default = 0)<br />

0 Ignore @ in the GAMS symbol text.<br />

1 ANALYZE specific option related to the gams.syn file. The<br />

macro symbol @ in the GAMS symbol text is recognized and<br />

processed accordingly.<br />

fixed integer Option for fields in LP file. This is again an ANALYZE<br />

specific option.<br />

(default = 1)<br />

0 Fixed fields for writing LP file.<br />

1 The LP file is squeezed to remove extra spaces.<br />

7.3 Using Files<br />

usesol integer Option to control the use of primal and dual information.<br />

(default= 0)<br />

0 Use matrix file for obtaining dual information. This is then<br />

used for the generation of the basis file and the ANALYZE LP<br />

file. MPSWRITE will also generate an empty solution file on<br />

completion to pass return information required by GAMS.<br />

1 Use the GAMS solution file to obtain primal and dual solution<br />

values. If the model contains nonlinear terms, the partial<br />

derivatives (matrix coefficients) and the right-hand-side are<br />

also updated to reflect the linearization around the new point. If<br />

the matrix is written in ANALYZE LP format, the solution<br />

status will be the one from the solution file.<br />

7.4 Writing Files<br />

domps integer Option to control the output of the problem matrix.<br />

(default= 1)<br />

0 No matrix file is written<br />

1 MPS file written as gams.mps<br />

2 Matrix file in ANALYZE LP format is written as gams.lp



btol real Basis tolerance when generating the ANALYZE file<br />

gams.lp. Optimization involves calculations with floating<br />

point numbers and is inherently noisy. MPSWRITE recomputes<br />

the row activity levels for linear and nonlinear<br />

models and the basis tolerance btol is used to determine the<br />

final basis status. In order to avoid spurious warnings in<br />

subsequent steps like ANALYZE, activity levels are reset to the<br />

value of the appropriate bound when within the tolerance level.<br />

(default = 1.0d-8)<br />

pinf real The value to be used for plus infinity when writing an<br />

ANALYZE LP file.<br />

(default = 1.0d20)<br />

dobas integer Controls the output of a file containing basis information.<br />

(default = 1)<br />

0 No basis file is written<br />

1 Basis file is written as gams.bas. As explained earlier, the<br />

basis file can be written based on primal and dual information<br />

from the solution file if usesol=1 or else the information is<br />

obtained from the matrix file. Potential complications arise<br />

with superbasic variables in nonlinear problems. Although most<br />

modern LP codes can work with a similar concept, the<br />

standard format of the MPS basis file does not allow for this<br />

situation. The OSL specific 'BS' format is used to indicate<br />

superbasic activity.<br />

domap integer Controls the output of a file containing name mapping<br />

information.<br />

(default = 1)<br />

0 no mapping file is written<br />

1 mapping file containing generated name followed by GAMS<br />

name is written in gams.map.<br />

2 syntax file with ANALYZE format is written in gams.syn.<br />

Superbasic activities cannot be represented with the<br />

ANALYZE LP format. When such activities are encountered,<br />

one of the bounds is reset to the activity level, the activity is<br />

made non-basic, and a comment is inserted with an appropriate<br />

message and the value of the original bound. The lower or<br />

upper bound is selected to make the modified problem optimal<br />

(according to the sign of the marginal).<br />

Therefore, by default, the MPS, basis and mapping files are generated as gams.mps,<br />

gams.bas, and gams.map respectively. The dual information for the generation of the<br />

basis file is read from the GAMS matrix file. The generated names have a default width of<br />

8. The effect of various options on the output files is demonstrated through [TRNSPORT]<br />

in Section 8.<br />



8 Name Generation Examples<br />

In GAMS, the row and column names have an identifier followed by up to 10 indices<br />

which are also identifiers. An example of a name from [TRNSPORT] is<br />

x(Seattle,New-York). As a result of this, the total length of a single row or column<br />

name can be hundreds of characters long.<br />

In ANALYZE, the names can be up to 16 characters. Character positions determine the<br />

meaning, for instance XSANE may mean shipping (X) from San-Diego (SA) to New-York<br />

(NE). The ANALYZE command 'explain' shows you the meaning of a name in this way<br />

based on the information in the syntax file. In MPS files, the names are shorter and can<br />

have at most 8 characters.<br />

The way these names are generated from the GAMS name is as follows: Stems are<br />

generated for the equation/variable names. The width of these stems is within the limits<br />

provided by the user through the option file. stemminwid and stemmaxwid provide<br />

the minimum and maximum width for the rows and column names while labminwid and<br />

labmaxwid provide similar limits for the label names. The procedure for the generation<br />

of these stems is given below.<br />

Initially, the first few characters of the name are chosen to form the name stem or label<br />

name, the width corresponding to the minimum width permitted through the option file. If<br />

all the stems that are created in this manner are unique, then the procedure is complete and<br />

the stem names created are used. If there are repetitions or non-uniqueness in the stems<br />

created then the fraction of these repetitions (the number of repetitions divided by the<br />

number of names) is checked against the maximum permissible fraction set in the option<br />

file through labperc and stemperc. If the fraction is above<br />

the permissible limit, then the size of the stem or label is increased by one until the<br />

maximum width is reached or the fraction is less than the permissible limit. When this is<br />

achieved, the remaining clashes are resolved by leaving the first occurrence of the stem or<br />

label name as it is and altering the other occurrences of the stem or label name. These<br />

alterations are done by first changing the last character of the stem to a unique one. If this<br />

is not possible, the procedure attempts to change the second-to-last character to make<br />

the stem or label unique. If a unique name still cannot be found, progressively more<br />

significant positions of the stem or label are changed until a unique name is found. Only<br />

alphanumeric characters are considered during the generation process.<br />

In order to illustrate the procedure explained above, consider the following set of label names<br />

NEW-YORK, BOSTON, PITTSBURGH, NEWARK, WASHINGTON, BUFFALO



Let the values of labminwid, labmaxwid and labperc be 1,3 and 0.1 respectively.<br />

Initially stems of width 1 corresponding to labminwid are created: N, B, P, N, W, B. The<br />

result is that a fraction 0.33 of the names is repeated (2 repetitions out of a total of 6 names).<br />

This is greater than the permitted maximum of 0.1. The width of the label is therefore<br />

increased to two, creating the labels NE, BO, PI, NE, WA, BU. This now increases the<br />

number of unique label names but still leaves the fraction of repetitions at 0.17 and so the<br />

label width has to be further increased by one to 3. The resulting labels are NEW, BOS, PIT,<br />

NEW, WAS, BUF. The fraction of repetitions is unchanged, but a further increase in the<br />

label width is not possible because the maximum allowable width (labmaxwid) has been<br />

reached. The remaining non-unique label names have to be altered. The first occurrence of<br />

NEW is left unchanged while the second occurrence is altered to NEA, which is a unique<br />

name. This leads to the following set of unique label names that are finally obtained:<br />

NEW, BOS, PIT, NEA, WAS, BUF<br />
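
The widening-and-disambiguation procedure can be sketched in Python as follows. This is<br />

an illustrative reconstruction of the description above, not MPSWRITE's actual code; the<br />

clash-resolution step is simplified to trying alternative final characters:<br />

```python
import string

# Illustrative reconstruction of the stem-widening procedure described above
# (not MPSWRITE's actual code). Stems grow from minwid toward maxwid until the
# clash fraction drops to perc or below; remaining clashes are then resolved by
# keeping the first occurrence and altering the last character of later ones.

def make_stems(names, minwid, maxwid, perc):
    width = minwid
    while width < maxwid:
        stems = [n[:width] for n in names]
        clashes = len(stems) - len(set(stems))
        if clashes / len(names) <= perc:
            break
        width += 1
    seen, result = set(), []
    for n in names:
        stem = n[:width]
        if stem in seen:                       # clash: alter the last character
            for ch in string.ascii_uppercase:
                candidate = stem[:-1] + ch
                if candidate not in seen:
                    stem = candidate
                    break
        seen.add(stem)
        result.append(stem)
    return result

labels = ["NEW-YORK", "BOSTON", "PITTSBURGH", "NEWARK", "WASHINGTON", "BUFFALO"]
stems = make_stems(labels, 1, 3, 0.10)   # reproduces NEW, BOS, PIT, NEA, WAS, BUF
```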

In this manner, unique label names are generated for the label names and stem names for<br />

the equation/variable names separately. Once this is done, the name associated with any<br />

particular row or column is created by putting together the stem name of the<br />

equation/variable and the label of all the indices associated with it in order. For example in<br />

the variable x(Seattle, New-York) in [TRNSPORT], if the stem name of the<br />

variable is X and the stem names of the associated labels are SE and NE, the name of the<br />

corresponding column is XSENE.<br />

However, there is a limit of namewid characters that the row/column name can have. If<br />

the width of the labels or equation/variable stems is too large or if the row/column has too<br />

many indices, the size of the row or column created in the manner described above may<br />

lead to a width greater than namewid. For example, consider that namewid is 16 and<br />

that the width of the stems of the labels and equation/variable names created by the<br />

procedure described above is 3. Now consider the variable y(i,j,k,l,m) which has 5<br />

index positions. The width of any instance of the variable would be 3 + 5(3) = 18. This is<br />

clearly above the limit set by the option file. In such a situation, the latter indices are<br />

collapsed to form new index sets and labels so as to limit the name width to a maximum of<br />

namewid. In the example above, the index sets l and m are collapsed to create a new set<br />

and new unique labels for each occurring combination of labels of l and m in<br />

y(i,j,k,l,m). The new label names are generated by simply using the ordinality of<br />

the row or column.<br />
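
The width check that triggers collapsing can be sketched as follows (an illustrative Python<br />

reconstruction; the function name and interface are invented for this example). The<br />

collapsed trailing indices are assumed to occupy a single label slot of the same width:<br />

```python
# Illustrative reconstruction of the width check that triggers index collapsing
# (function name and interface invented for this sketch). The collapsed trailing
# indices are assumed to occupy a single label slot of the same width.

def collapse_plan(stem_width, label_width, n_indices, namewid):
    """Return (kept, collapsed): leading indices kept as-is, trailing indices
    merged into one new ordinal-labelled index."""
    if stem_width + n_indices * label_width <= namewid:
        return n_indices, 0                   # everything fits: no collapsing
    kept = n_indices - 2                      # collapsing merges at least two
    while kept > 0 and stem_width + (kept + 1) * label_width > namewid:
        kept -= 1
    return kept, n_indices - kept

# y(i,j,k,l,m) with stem and label widths of 3 and namewid = 16:
# 3 + 5*3 = 18 > 16, so l and m collapse, leaving 3 + 4*3 = 15 characters.
plan = collapse_plan(3, 3, 5, 16)
```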

The effect of these procedures is shown in the examples below.<br />

Example 3: Consider solving trnsport.gms with the following entries in the option file<br />

nametyp 2<br />

namewid 8<br />

labminwid 1<br />

labmaxwid 3<br />

labperc 0.25<br />

stemminwid 2<br />

stemmaxwid 3<br />

stemperc 0.25



The resulting GAMS to MPS mapping file that explains the new names is shown in Figure<br />

7. Notice here that due to the increase in the allowable percentage of clashes in the stems<br />

of the label names and the equation/variable names, stems with just one character were<br />

sufficient for the label names. A stemminwid of 2 forced the stems of the<br />

equation/variable names to have two characters. It must be pointed out that all extra<br />

positions are padded with the '.' character. So the variables X and Z being smaller than<br />

stemminwid were changed to X. and Z. . X.SN therefore refers to the GAMS column<br />

X(Seattle, New-York).<br />

6 7<br />

CO COST<br />

SUS SUPPLY SEATTLE<br />

SUA SUPPLY SAN-DIEGO<br />

DEN DEMAND NEW-YORK<br />

DEC DEMAND CHICAGO<br />

DET DEMAND TOPEKA<br />

X.SN X SEATTLE NEW-YORK<br />

X.SC X SEATTLE CHICAGO<br />

X.ST X SEATTLE TOPEKA<br />

X.AN X SAN-DIEGO NEW-YORK<br />

X.AC X SAN-DIEGO CHICAGO<br />

X.AT X SAN-DIEGO TOPEKA<br />

Z. Z<br />

Figure 7: GAMS to MPS Mapping file with LABPERC, STEMPERC = 0.25<br />

Example 4: In the option file described above, the width of the permissible name is set to<br />

3 by namewid 3. Also, to create ANALYZE files, add domap 2 and domps 2. Since<br />

the equation/variable name stems still had to have at least two characters because of<br />

stemminwid, this example is used to show the effect of the collapsing process on the<br />

names generated. The variable x(i,j) requires the collapsing of its indices since two<br />

indices would not be able to fit into the last remaining position. The resulting ANALYZE<br />

syntax file (gams.syn) that explains the new labels and set that were created due to the<br />

collapsing is shown in Figure 8. The new set (called 3) consists of the 6 members of<br />

x(i,j) and has labels whose names depend on the ordinality of the occurrence of label<br />

combinations in (I,J). The new row/column names that are generated can be seen in the<br />

ANALYZE LP file (gams.lp) shown in Figure 9. Notice now that a 3 character name<br />

X.1 represents x(Seattle, New-York). This example may seem rather forced due<br />

to the relatively small number of indices in the equations and variables; however, it should<br />

be realized that when the number of indices becomes larger, this process is inevitable.



* SYNTAX FILE GENERATED BY GAMS/MPSWRITE<br />

0 Universal Set<br />

S SEATTLE<br />

A SAN-DIEGO<br />

N NEW-YORK<br />

C CHICAGO<br />

T TOPEKA<br />

I canning plants<br />

S SEATTLE<br />

A SAN-DIEGO<br />

J markets<br />

N NEW-YORK<br />

C CHICAGO<br />

T TOPEKA<br />

3 NEW SET FOR X<br />

1 SEATTLE .NEW-YORK<br />

2 SEATTLE .CHICAGO<br />

3 SEATTLE .TOPEKA<br />

4 SAN-DIEGO .NEW-YORK<br />

5 SAN-DIEGO .CHICAGO<br />

6 SAN-DIEGO .TOPEKA<br />

ROW SYNTAX<br />

CO define objective function<br />

SU observe supply limit at plant i &I:3<br />

DE satisfy demand at market j &J:3<br />

COLUMN SYNTAX<br />

X. shipment quantities in cases &3:3<br />

Z. total transportation costs in thousands of dollars<br />

ENDATA<br />

Figure 8: ANALYZE Syntax file with NAMEWID=3<br />

* file created by gams<br />

* BASE<br />

OPTIMAL = MINIMIZE<br />

STATUS = UNKNOWN<br />

NAME = GAMS<br />

ROWS = 7<br />

COLUMNS = 7<br />

NONZEROS = 20<br />

LENGTH = 3<br />

OBJECTIV = +OBJ<br />

ENDBASE<br />

*** ROWS<br />

CO 7 0. 0.<br />

B 0. 0.<br />

SUS 3 -1.E20 350.<br />

B 0. 0.<br />

SUA 3 -1.E20 600.<br />

B 0. 0.<br />

DEN 2 325. 1.E20<br />

B 0. 0.<br />

DEC 2 300. 1.E20<br />

B 0. 0.<br />

DET 2 275. 1.E20<br />

B 0. 0.<br />

+OBJ 1 -1.E20 1.E20<br />

B 0. 0.



*** COLUMNS<br />

X.1 3 0. 1.E20<br />

L 0. 0.<br />

X.2 3 0. 1.E20<br />

L 0. 0.<br />

X.3 3 0. 1.E20<br />

L 0. 0.<br />

X.4 3 0. 1.E20<br />

L 0. 0.<br />

X.5 3 0. 1.E20<br />

L 0. 0.<br />

X.6 3 0. 1.E20<br />

L 0. 0.<br />

Z. 2 -1.E20 1.E20<br />

B 0. 0.<br />

*** NONZEROS<br />

CO X.1 -0.22500000E+00<br />

SUS X.1 1.<br />

DEN X.1 1.<br />

CO X.2 -0.15300000E+00<br />

SUS X.2 1.<br />

DEC X.2 1.<br />

CO X.3 -0.16200000E+00<br />

SUS X.3 1.<br />

DET X.3 1.<br />

CO X.4 -0.22500000E+00<br />

SUA X.4 1.<br />

DEN X.4 1.<br />

CO X.5 -0.16200000E+00<br />

SUA X.5 1.<br />

DEC X.5 1.<br />

CO X.6 -0.12600000E+00<br />

SUA X.6 1.<br />

DET X.6 1.<br />

CO Z. 1.<br />

+OBJ Z. 1.<br />

Figure 9: ANALYZE LP file with NAMEWID=3<br />

$TITLE A TRANSPORTATION PROBLEM (TRNSPORT,SEQ=1)<br />

$OFFUPPER<br />

SETS<br />

I canning plants / SEATTLE, SAN-DIEGO /<br />

J markets / NEW-YORK, CHICAGO, TOPEKA / ;<br />

PARAMETERS<br />

A(I) capacity of plant i in cases<br />

/ SEATTLE 350<br />

SAN-DIEGO 600 /<br />

B(J) demand at market j in cases<br />

/ NEW-YORK 325<br />

CHICAGO 300<br />

TOPEKA 275 / ;



TABLE D(I,J) distance in thousands of miles<br />

NEW-YORK CHICAGO TOPEKA<br />

SEATTLE 2.5 1.7 1.8<br />

SAN-DIEGO 2.5 1.8 1.4 ;<br />

SCALAR F freight in dollars per case per thousand miles /90/ ;<br />

PARAMETER C(I,J) transport cost in thousands of dollars per case ;<br />

C(I,J) = F * D(I,J) / 1000 ;<br />

VARIABLES<br />

X(I,J) shipment quantities in cases from @i to @j<br />

Z total transportation costs in thousands of dollars ;<br />

POSITIVE VARIABLE X ;<br />

EQUATIONS<br />

COST define objective function<br />

SUPPLY(I) observe supply limit at plant @i<br />

DEMAND(J) satisfy demand at market @j ;<br />

COST .. Z =E= SUM((I,J), C(I,J)*X(I,J)) ;<br />

SUPPLY(I) .. SUM(J, X(I,J)) =L= A(I) ;<br />

DEMAND(J) .. SUM(I, X(I,J)) =G= B(J) ;<br />

MODEL TRANSPORT /ALL/ ;<br />

OPTION LP = MPSWRITE ;<br />

SOLVE TRANSPORT USING LP MINIMIZING Z ;<br />

DISPLAY X.L, X.M ;<br />

Figure 10: Modified 'trnsport.gms' for Example (A.3)<br />

* SYNTAX FILE GENERATED BY GAMS/MPSWRITE<br />

00 Universal Set<br />

SE SEATTLE<br />

SA SAN-DIEGO<br />

NE NEW-YORK<br />

CH CHICAGO<br />

TO TOPEKA<br />

I. canning plants<br />

SE SEATTLE<br />

SA SAN-DIEGO<br />

J. markets<br />

NE NEW-YORK<br />

CH CHICAGO<br />

TO TOPEKA<br />

ROW SYNTAX<br />

C define objective function<br />

S observe supply limit at plant &I.:2<br />

D satisfy demand at market &J.:2<br />

COLUMN SYNTAX<br />

X shipment quantities in cases from &I.:2 to &J.:4<br />

Z total transportation costs in thousands of dollars<br />

ENDATA<br />

Figure 11: ANALYZE Syntax file with TEXTOPT=1



* file created by gams<br />

* BASE<br />

OPTIMAL = MINIMIZE<br />

STATUS = UNKNOWN<br />

NAME = GAMS<br />

ROWS = 7<br />

COLUMNS = 7<br />

NONZEROS = 20<br />

LENGTH = 8<br />

OBJECTIV = +OBJ<br />

ENDBASE<br />

*** ROWS<br />

C 7 0.00000000E+00 0.00000000E+00<br />

B 0.00000000E+00 0.00000000E+00<br />

SSE 3 -0.10000000E+21 0.35000000E+03<br />

B 0.00000000E+00 0.00000000E+00<br />

SSA 3 -0.10000000E+21 0.60000000E+03<br />

B 0.00000000E+00 0.00000000E+00<br />

DNE 2 0.32500000E+03 0.10000000E+21<br />

B 0.00000000E+00 0.00000000E+00<br />

DCH 2 0.30000000E+03 0.10000000E+21<br />

B 0.00000000E+00 0.00000000E+00<br />

DTO 2 0.27500000E+03 0.10000000E+21<br />

B 0.00000000E+00 0.00000000E+00<br />

+OBJ 1 -0.10000000E+21 0.10000000E+21<br />

B 0.00000000E+00 0.00000000E+00<br />

*** COLUMNS<br />

XSENE 3 0.00000000E+00 0.10000000E+21<br />

L 0.00000000E+00 0.00000000E+00<br />

XSECH 3 0.00000000E+00 0.10000000E+21<br />

L 0.00000000E+00 0.00000000E+00<br />

XSETO 3 0.00000000E+00 0.10000000E+21<br />

L 0.00000000E+00 0.00000000E+00<br />

XSANE 3 0.00000000E+00 0.10000000E+21<br />

L 0.00000000E+00 0.00000000E+00<br />

XSACH 3 0.00000000E+00 0.10000000E+21<br />

L 0.00000000E+00 0.00000000E+00<br />

XSATO 3 0.00000000E+00 0.10000000E+21<br />

L 0.00000000E+00 0.00000000E+00<br />

Z 2 -0.10000000E+21 0.10000000E+21<br />

B 0.00000000E+00 0.00000000E+00<br />

*** NONZEROS<br />

C XSENE -0.22500000E+00<br />

SSE XSENE 0.10000000E+01<br />

DNE XSENE 0.10000000E+01<br />

C XSECH -0.15300000E+00<br />

SSE XSECH 0.10000000E+01<br />

DCH XSECH 0.10000000E+01<br />

C XSETO -0.16200000E+00<br />

SSE XSETO 0.10000000E+01<br />

DTO XSETO 0.10000000E+01<br />

C XSANE -0.22500000E+00<br />

SSA XSANE 0.10000000E+01<br />

DNE XSANE 0.10000000E+01<br />

C XSACH -0.16200000E+00<br />

SSA XSACH 0.10000000E+01<br />

DCH XSACH 0.10000000E+01<br />

C XSATO -0.12600000E+00<br />

SSA XSATO 0.10000000E+01<br />

DTO XSATO 0.10000000E+01<br />

C Z 0.10000000E+01<br />

+OBJ Z 0.10000000E+01



Figure 12: ANALYZE LP file with FIXED=1<br />

Example 5: This example is used to demonstrate the effect of the options fixed and<br />

textopt. Consider the same gams input file trnsport.gms with the following<br />

modification: replace the text following the definition of the variable x(i,j) by<br />

'shipment quantities in cases from @i to @j' and that following the<br />

definitions of equations supply and demand by 'observe supply at plant<br />

@i' and 'satisfy demand at market @j' respectively. The resulting gams file is shown in<br />

Figure 10. The macro character @ is used as a flag for GAMS/MPSWRITE to recognize<br />

the following characters as a set name and replace them by a string recognizable by<br />

ANALYZE. Now include the option file shown below.<br />

nametyp 2<br />

dobas 0<br />

domap 2<br />

domps 2<br />

textopt 1<br />

fixed 1<br />

The options textopt 1 and fixed 1 have been added relative to the previous<br />

examples. The resulting ANALYZE syntax file is shown in Figure 11. Notice the string<br />

that replaces the macros @i and @j. In the string, ANALYZE recognizes the characters<br />

between '&' and ':' as representing the stem of the index name corresponding to the string<br />

following the '@' macro in the gams file, and the number immediately following the ':' as<br />

the position, counting from the left of the row/column name, at which the particular set label<br />

stem begins. As an illustration, in the row syntax for X, we see the strings &I.:2 and<br />

&J.:4. This means that in any row of X, say XSENE, the label corresponding to I begins<br />

at the second position (SE) while the label corresponding to J begins at the fourth position<br />

(NE). This is used by the ANALYZE command EXPLAIN. For further information on this<br />

topic, consult the ANALYZE manual.<br />
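To make the mapping concrete, here is a schematic sketch using the names from this example (the syntax-string line is a paraphrase of what GAMS/MPSWRITE writes, not verbatim output):

```gams
* GAMS declaration: @i and @j flag set names for GAMS/MPSWRITE
variables x(i,j) 'shipment quantities in cases from @i to @j';
* resulting ANALYZE syntax string (schematic):
*   shipment quantities in cases from &I.:2 to &J.:4
* so for column XSENE: the I label (SE) starts at position 2,
* and the J label (NE) starts at position 4
```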

The ANALYZE LP file shown in Figure 12 demonstrates the effect of the option fixed<br />

which uses a fixed field format for the LP file. This makes the file larger but at the<br />

same time easier to read.



9 INTEGRATING GAMS AND ANALYZE<br />

In this section a number of examples are given to demonstrate the use of ANALYZE with<br />

GAMS. This description is intended for users already familiar with ANALYZE. For more<br />

information on ANALYZE you should consult the User's Guide for ANALYZE: Harvey J.<br />

Greenberg, A Computer-Assisted Analysis System for Mathematical Programming<br />

Models and Solutions: A User's Guide for ANALYZE, Kluwer Academic Publishers,<br />

Boston, 1993.<br />

GAMS takes advantage of a special ANALYZE feature which allows 16 character long<br />

names instead of the usual 8 character MPS type names. In order to use this feature, the<br />

problem has to be presented in an ANALYZE-specific format, the ANALYZE LP file, which<br />

needs further conversion into an ANALYZE PCK file format before it can be read by<br />

ANALYZE. The ANALYZE utility LPRENAME is needed to rewrite the GAMS-generated<br />

LP file into an ANALYZE PCK file.<br />

The general steps to translate an LP file are:<br />

>LPRENAME<br />

< LPRENAME credits etc ><br />

LPRENAME... LOAD GAMS<br />

LPRENAME... WRITE GAMS<br />

LPRENAME... QUIT<br />

The command 'LOAD GAMS' instructs LPRENAME to read the input file GAMS.LP. The<br />

command write will generate a file called gams.pck which is a binary (packed)<br />

version of gams.lp. For the purposes of this document, this is all you need to know<br />

about the LPRENAME function; further details can be found in Chapter 8 of the<br />

ANALYZE User's Guide.<br />

There are various ways to generate the appropriate input files for ANALYZE<br />

and different methods of invoking the analysis software. The choice will depend on the<br />

kind of analysis required and how frequently the GAMS-ANALYZE link will be used.<br />

Several examples will be used to illustrate different GAMS/ANALYZE set-ups for DOS<br />

or Windows type environments.<br />

Example 6: This example shows how to generate the ANALYZE LP and SYN files and<br />

save them for further processing. An MPSWRITE option file will be used to specify the<br />

types of files required and the name design option selected. The 'mpswrite.opt' file will<br />

contain:<br />

* design meaningful names<br />

nametyp 2<br />

* analyze can use wide names<br />

namewid 16<br />

* write an analyze syntax file<br />

domap 2<br />

* write an analyze matrix file<br />

domps 2<br />

* don't write basis file<br />

dobas 0



In the GAMS file a solve statement is used to call the standard MPSWRITE subsystem.<br />

Note that MPSWRITE is invoked after a solve statement in order to pass correct dual<br />

information. Once again [TRNSPORT] is used for illustration. The following piece of code<br />

is added at the end of the file:<br />

solve transport minimizing z using lp;<br />

display x.l, x.m;<br />

* additions follow below<br />

option lp=mpswrite;<br />

solve transport minimizing z using lp;<br />

After running the modified TRNSPORT model you may want to save the file gams.lp<br />

and gams.syn under a different name. For example, you could just copy and translate<br />

the files as follows:<br />

>GAMS TRNSPORT<br />

< GAMS log ><br />

>COPY GAMS.LP TRNSPORT.LP<br />

>COPY GAMS.SYN TRNSPORT.SYN<br />

>LPRENAME<br />

< translate .lp file into a .pck file ><br />

.. LOAD TRNSPORT<br />

.. WRITE TRNSPORT<br />

.. QUIT<br />

>ANALYZE<br />

< various credits etc.><br />

ANALYZE... READIN PAC TRNSPORT<br />

ANALYZE... READIN SYN TRNSPORT<br />

< now we are ready to use ANALYZE commands ><br />

ANALYZE... QUIT<br />

Example 7: This example shows how you can partially automate the name generation,<br />

MPS renaming and loading of the model into ANALYZE.<br />

@echo off<br />

:<br />

echo * MPSWRITE Option File > mpswrite.opt<br />

echo nametyp 2 >> mpswrite.opt<br />

echo namewid 16 >> mpswrite.opt<br />

echo dobas 0 >> mpswrite.opt<br />

echo domap 2 >> mpswrite.opt<br />

echo domps 2 >> mpswrite.opt<br />

echo textopt 1 >> mpswrite.opt<br />

gams trnsport lp=mpswrite<br />

:<br />

echo LOAD GAMS > profile.exc<br />

echo WRITE GAMS >> profile.exc<br />

echo QUIT >> profile.exc<br />

lprename<br />

:<br />

echo $ ============ > profile.exc<br />

echo $ GAMS/ANALYZE >> profile.exc<br />

echo $ ============ >> profile.exc<br />

echo READIN PAC GAMS >> profile.exc<br />

echo READIN SYN GAMS >> profile.exc<br />

echo SUMMARY >> profile.exc<br />

echo $ >> profile.exc<br />

echo $ SUBMAT ROW * >> profile.exc<br />

echo $ SUBMAT COL * >> profile.exc<br />

echo $ PICTURE >> profile.exc<br />

echo $ >> profile.exc<br />

echo $ HELP >> profile.exc<br />

echo $ >> profile.exc



if not exist credits.doc goto ok<br />

: need to remove credits to prevent early prompt<br />

if exist credits.hld del credits.hld<br />

rename credits.doc credits.hld<br />

:ok<br />

analyze<br />

The option and profile files for GAMS, LPRENAME and ANALYZE are created on the<br />

fly. The exact content of these files will depend on the version and installation of<br />

ANALYZE.<br />

Example 8: This example shows how you can use ANALYZE as a GAMS solver. You need<br />

to consult your GAMS and ANALYZE system administrator to install the scripts with the<br />

right name and in the proper place. For example, when running under Windows 95 we<br />

could add a solver called ANALYZE that uses the same settings as MPSWRITE in the<br />

appropriate GAMS configuration file. The new ANALYZE script then could look as<br />

follows.<br />

@echo off<br />

: gmsan_95.bat: Command Line Interface for Win 95<br />

echo * ANALYZE defaults > analyze.opt<br />

echo nametyp 2 >> analyze.opt<br />

echo namewid 16 >> analyze.opt<br />

echo dobas 0 >> analyze.opt<br />

echo domap 2 >> analyze.opt<br />

echo domps 2 >> analyze.opt<br />

echo textopt 1 >> analyze.opt<br />

echo * MPSWRITE Option >> analyze.opt<br />

: append mpswrite options<br />

if exist mpswrite.opt copy analyze.opt + mpswrite.opt analyze.opt<br />

gmsmp_nx.exe %4<br />

if exist analyze.opt del analyze.opt<br />

:<br />

echo LOAD GAMS > profile.exc<br />

echo WRITE GAMS >> profile.exc<br />

echo QUIT >> profile.exc<br />

:lprename > lprename.out<br />

LPRENAME<br />

:<br />

echo $ ============ > profile.exc<br />

echo $ GAMS/ANALYZE >> profile.exc<br />

echo $ ============ >> profile.exc<br />

echo READIN PAC GAMS >> profile.exc<br />

echo READIN SYN GAMS >> profile.exc<br />

echo SUMMARY >> profile.exc<br />

echo $ >> profile.exc<br />

echo RETURN >> profile.exc<br />

if not exist credits.doc goto ok<br />

: need to remove credits to prevent early prompt<br />

if exist credits.hld del credits.hld<br />

rename credits.doc credits.hld<br />

:ok<br />

ANALYZE



Note that LPRENAME and ANALYZE are assumed to be accessible through the path or<br />

can be found in the current directory. If this is not the case, you have to enter the<br />

appropriate path to the script file above. Also note the use of the ANALYZE profile<br />

feature. The profile.exc file is executed only after the credits have been processed.<br />

The only way to avoid having to respond with Y/N to ANALYZE questions interactively<br />

is the renaming of the credits.doc file. If no credits.doc file is found by<br />

ANALYZE, it will immediately look for a profile.exc file and execute the commands<br />

included in this file.<br />

Let us return to the [TRNSPORT] example and call ANALYZE after we have computed<br />

an optimal solution as follows:<br />

solve transport using lp minimizing z;<br />

option lp=analyze;<br />

solve transport using lp minimizing z;<br />

The second solve statement will generate the appropriate input files for ANALYZE and<br />

load the problem into the ANALYZE workspace. Note that ANALYZE has recognized that the<br />

model is optimal and the model is ready to be inspected using ANALYZE commands as<br />

shown below:<br />

PLEASE LOCK CAPS<br />

============<br />

GAMS/ANALYZE<br />

============<br />

Packed LP read from GAMS.PCK<br />

Syntax read from GAMS.SYN<br />

ANALYZE Version 10.0<br />

by<br />

Harvey J. Greenberg<br />

Problem GAMS is to MINIMIZE +OBJ.<br />

RHS=none RANGE=none BOUND=none<br />

Source file: GAMS.PCK<br />

Syntax file: GAMS.SYN<br />

Solution: GAMS Solution Status: OPTIMAL<br />

LP Statistics:<br />

7 Rows (1 free) and 7 Columns (1 fixed)<br />

20 Nonzeroes (11 distinct... 14 ones).<br />

Density = 40.8163% Entropy = 55% Unity = 70%<br />

Row C has the most nonzeroes = 7.<br />

Column XSENE has the most nonzeroes = 3.<br />

RETURN...back to terminal<br />

ANALYZE ...<br />

<strong>Using</strong> the flexibility of both systems, GAMS and ANALYZE, you can further enhance the<br />

above examples and provide application tailored analytic capabilities for standard GAMS<br />

applications.



GAMS/OSL<br />

1 Introduction 2<br />

2 How To Run A Model With Osl 2<br />

3 Overview Of Osl 2<br />

3.1 THE SIMPLEX METHOD 2<br />

3.2 THE INTERIOR POINT METHODS 4<br />

3.3 THE NETWORK METHOD 4<br />

4 <strong>Gams</strong> <strong>Options</strong> 5<br />

4.1 OPTIONS SPECIFIED THROUGH THE OPTION STATEMENT 5<br />

4.2 OPTIONS SPECIFIED THROUGH MODEL SUFFIXES 5<br />

5 Summary Of Osl <strong>Options</strong> 7<br />

5.1 LP ALGORITHMIC OPTIONS 7<br />

5.2 MIP ALGORITHMIC OPTIONS 7<br />

5.3 SCREEN AND OUTPUT FILE OPTIONS 8<br />

5.4 ADVANCED LP OPTIONS 8<br />

5.5 EXAMPLES OF GAMS/OSL OPTION FILE 9<br />

6 Detailed Description Of Osl <strong>Options</strong> 10<br />

7 Special Notes 22<br />

7.1 CUTS GENERATED DURING BRANCH AND BOUND 22<br />

7.2 PRESOLVE PROCEDURE REMOVING ALL CONSTRAINTS FROM THE MODEL 22<br />

7.3 DELETING THE TREE FILES 22<br />

8 The <strong>Gams</strong>/Osl Log File 24<br />

9 Examples Of Mps Files Written By <strong>Gams</strong>/Osl. 26



1 INTRODUCTION<br />

This document describes the GAMS interface to OSL.<br />

OSL is the IBM Optimization Subroutine Library, containing high performance solvers for<br />

LP (linear programming), MIP (mixed integer programming) and QP (quadratic<br />

programming) problems. GAMS does not support the QP capabilities of OSL; you have to<br />

use a general non-linear solver like MINOS or CONOPT for that.<br />

OSL offers quite a few algorithms and tuning parameters. Most of these are accessible by<br />

the GAMS user through an option file. In most cases GAMS/OSL should perform<br />

satisfactorily without using any options.<br />

2 HOW TO RUN A MODEL WITH OSL<br />

OSL is capable of solving models of the following types: LP, RMIP and MIP. If you did not<br />

specify OSL as a default LP, RMIP or MIP solver, then you can use the following<br />

statement in your GAMS model:<br />

option lp=osl; { or RMIP or MIP }<br />

It should appear before the SOLVE statement.<br />
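As a minimal placement sketch (mymodel and z are placeholder names for your model and objective variable):

```gams
* select OSL for LP solves; must precede the solve statement
model mymodel /all/;
option lp=osl;
solve mymodel using lp minimizing z;
```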

3 OVERVIEW OF OSL<br />

OSL offers several algorithms for solving LP problems: a primal simplex method (the<br />

default), a dual simplex method, and three interior point methods: primal, primal-dual and<br />

primal-dual predictor-corrector. For network problems there is a network solver.<br />

Normally the primal simplex method is a good method to start with. The simplex method<br />

is a very robust method, and in most cases you should get good performance with this<br />

solver. For large models that have to be solved many times it may be worthwhile to see if<br />

one of the other methods gives better results. Also changing the tuning parameters may<br />

influence the performance. The method option can be used to select an algorithm other than<br />

the default one.<br />

3.1 The Simplex Method<br />

The most commonly used method is the primal simplex method. It is very fast and allows restarts<br />

from an advanced basis. In case the GAMS model has multiple solves, and there are<br />

relatively minor changes between those LP models, then the solves after the first one will<br />

use basis information from the previous solve to do a ’jump start’. This is completely<br />

automated by GAMS and normally you should not have to worry about this.



In case of a ’cold start’ (the first solve) you will see on the screen the message<br />

’Crash...’. This will try to create a better basis than the scratch (’all-slack’) basis the<br />

Simplex method would normally get. The crash routine does not use much time, so it is<br />

often beneficial to crash. Crashing is usually not used for subsequent solves because that<br />

would destroy the advanced basis. The default rule is to crash when GAMS does not pass<br />

on a basis, and not to crash otherwise. Notice that in a GAMS model you can use the<br />

bratio option to influence GAMS whether or not to provide the solver with a basis. The<br />

default behavior can be changed by the crash option in the option file.<br />

By default the model is also scaled. Scaling is most of the time beneficial. It can prevent<br />

the algorithm from breaking down when the matrix elements have a wide range: i.e.<br />

elements with a value of 1.0e-6 and also of 1.0e+6. It can also reduce the solution time.<br />

The presolver is called to try to reduce the size of the model. There are some simple<br />

reductions that can be applied before the model is solved, like replacing singleton<br />

constraints (i.e. x =l= 5 ;) by bounds (if there was already a tighter bound on X we<br />

can just remove this equation). Although most modelers will already use the GAMS<br />

facilities to specify bounds (x.lo and x.up), in many cases there are still possibilities to<br />

do these reductions. In addition to these reductions OSL can also remove some redundant<br />

rows, and substitute out certain equations. The presolver has several options which can be<br />

set through the presolve option.<br />

The presolve may destroy an advanced basis. Sometimes this will result in very<br />

expensive restarts. As a default, the presolve is not used if an advanced basis is<br />

available. If using the presolve procedure is more useful than the use of an<br />

advanced basis, one can still force a presolve by using an option file.<br />

GAMS/OSL uses the order: scale, presolve, crash. There is no possibility to change this<br />

order unless you have access to the source of the GAMS/OSL program.<br />

After the model is solved we have to call the postsolver in case the presolver was used.<br />

The postsolver will reintroduce the variables and equations the presolve substituted out,<br />

and will calculate their values. This solution is an optimal solution, but not a basic<br />

solution. By default we call simplex again to find an optimal basis. This allows us to<br />

restart from this solution. It is possible to turn off this last step by the option postsolve<br />

0.<br />
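For instance, to accept the postsolved solution as-is and skip the final simplex clean-up, the option file would contain the single setting named above:

```
* osl.opt: do not re-run simplex after the postsolve
postsolve 0
```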

Occasionally you may want to use the dual simplex method (for instance when the model<br />

is highly primal degenerate, and not dual degenerate, or when you have many more rows<br />

than columns). You can use the method dsimplex option to achieve this. In general<br />

the primal simplex (the default) is more appropriate: most models do not have the<br />

characteristics above, and the OSL primal simplex algorithm is numerically more stable.<br />

The other options for the Simplex method like the refactorization frequency, the Devex<br />

option, the primal and the dual weight and the change weight option are only to be<br />

changed in exceptional cases.



3.2 The interior point methods<br />

OSL also provides you with three interior point solvers. You should try them out if your<br />

model is large and if it is not a restart. The primal-dual barrier method with predictorcorrector<br />

is in general the best algorithm. This can be set by method interior3. Note<br />

that the memory estimate of OSL is in many cases not sufficient to solve a model with this<br />

method. You can override OSL’s estimate by using the workspace model suffix<br />

as shown in Section 4.2. We have seen models where we had to ask for more than twice as<br />

much as the estimate. It is worthwhile to check how the interior point methods perform on your<br />

model, especially when it is very large.<br />
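The two settings mentioned here combine as follows; mymodel and z are placeholders and the 40 MB value is purely illustrative:

```gams
* osl.opt contains the single line:  method interior3
* override OSL's memory estimate with 40 MB (illustrative value)
mymodel.optfile = 1;
mymodel.workspace = 40;
option lp=osl;
solve mymodel using lp minimizing z;
```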

3.3 The network method<br />

A linear model can be reformulated to a network model if:<br />

• The objective variable is a free variable.<br />

• The objective variable only appears in the objective function, and not somewhere else<br />

in the model. This in fact defines the objective function.<br />

• The objective function is of the form =e=.<br />

• Each variable appears twice in the matrix (that is, excluding the objective function),<br />

once with a coefficient of +1 and once with a coefficient of -1.<br />

In case there is a column with two entries that are the same, GAMS/OSL will try a row<br />

scaling. If there are no matrix entries left for a column (only in the objective function)<br />

or there is only one entry, GAMS/OSL will try to deal with this by adding extra rows to the model.
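The conditions above can be illustrated with a minimal flow-balance sketch (all data and names are hypothetical): every flow variable appears exactly twice outside the objective, once with +1 and once with -1.

```gams
* hypothetical 3-node network: node 1 supplies 10, nodes 2 and 3 demand 4 and 6
variables z;
positive variables f12, f13, f23;
equations obj, bal1, bal2, bal3;
* z is free and appears only here; the objective equation is =e=
obj..  z =e= 2*f12 + 3*f13 + f23;
* each flow enters one balance row with -1 and another with +1
bal1.. -f12 - f13 =e= -10;
bal2.. +f12 - f23 =e= 4;
bal3.. +f13 + f23 =e= 6;
model net /all/;
option lp=osl;
solve net using lp minimizing z;
```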



4 GAMS OPTIONS<br />

The following GAMS options are used by GAMS/OSL.<br />

4.1 <strong>Options</strong> specified through the Option statement<br />

The following options are specified through the option statement. For example,<br />

set iterlim = 100 ;<br />

sets the iterations limit to 100.<br />

iterlim Sets the iteration limit. The algorithm will terminate and pass<br />

on the current solution to GAMS. In case a pre-solve is done,<br />

the post-solve routine will be invoked before reporting the<br />

solution.<br />

reslim Sets the time limit in seconds. The algorithm will terminate<br />

and pass on the current solution to GAMS. In case a pre-solve<br />

is done, the post-solve routine will be invoked before<br />

reporting the solution.<br />

optca Absolute optimality criterion for a MIP problem.<br />

optcr Relative optimality criterion for a MIP problem.<br />

bratio Determines whether or not to use an advanced basis.<br />

sysout Will echo the OSL messages to the GAMS listing file. This<br />

option is useful to find out exactly what OSL is doing.<br />

4.2 <strong>Options</strong> specified through model suffixes<br />

The following options are specified through the use of model suffix. For example,<br />

mymodel.workspace = 10 ;<br />

sets the amount of memory used to 10 MB. Mymodel is the name of the model as<br />

specified by the model statement. In order to be effective, the assignment of the model<br />

suffix should be made between the model and solve statements.<br />

workspace Gives OSL x MB of workspace. Overrides the memory<br />

estimation.<br />

optfile Instructs OSL to read the option file osl.opt.



cheat Cheat value: each new integer solution must be at least x<br />

better than the previous one. Can speed up the search, but you<br />

may miss the optimal solution. The cheat parameter is<br />

specified in absolute terms (like the OPTCA option). The<br />

OSL option improve overrides the GAMS cheat parameter.<br />

cutoff Cutoff value. When the Branch and Bound search starts, the<br />

parts of the tree with an objective worse than x are deleted.<br />

Can speed up the initial phase in the Branch and Bound.<br />

prioropt Instructs OSL to use priority branching information passed by<br />

GAMS through the variable.prior parameters.
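The model suffixes above can be combined before a MIP solve; a sketch with placeholder names and purely illustrative values:

```gams
* read osl.opt; cheat and cutoff values are illustrative
mymodel.optfile = 1;
mymodel.cheat = 0.5;
mymodel.cutoff = 1000;
mymodel.prioropt = 1;
solve mymodel using mip minimizing z;
```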



5 SUMMARY OF OSL OPTIONS<br />

All these options should be entered in the option file osl.opt after setting the<br />

mymodel.optfile parameter to 1. The option file is not case sensitive and the<br />

keywords must be given in full. Examples for using the option file can be found at the end<br />

of this section.<br />

5.1 LP algorithmic options<br />

crash crash an initial basis<br />

iterlim iteration limit<br />

method solution method<br />

pdgaptol barrier method primal-dual gap tolerance<br />

presolve perform presolve. Reduce size of model.<br />

reslim resource limit<br />

simopt simplex option<br />

scale perform scaling<br />

toldinf dual feasibility tolerance<br />

tolpinf primal feasibility tolerance<br />

workspace memory allocation<br />

5.2 MIP algorithmic options<br />

bbpreproc branch and bound preprocessing<br />

cutoff cutoff or incumbent value<br />

cuts allocate space for extra cuts<br />

degscale scale factor for degradation<br />

improve required level of improvement over current best integer solution<br />

incore keep tree in core<br />

iweight weight for each integer infeasibility<br />

maxnodes maximum number of nodes allowed<br />

maxsols maximum number of integer solutions allowed<br />

optca absolute optimality criterion<br />

optcr relative optimality criterion<br />

strategy MIP strategy<br />

target target value for objective<br />

threshold supernode processing threshold<br />

tolint integer tolerance



5.3 Screen and output file options<br />

bastype format of basis file<br />

creatbas create basis file<br />

creatmps create MPS file<br />

iterlog iteration log<br />

mpstype type of MPS file<br />

networklog network log<br />

nodelog MIP log<br />

5.4 Advanced LP <strong>Options</strong><br />

adjac save storage on AA^T<br />

chabstol absolute pivot tolerance for Cholesky factorization<br />

chweight rate of change of multiplier in composite objective function<br />

chtinytol cut-off tolerance in Cholesky factorization<br />

crossover when to switch to simplex<br />

densecol dense column threshold<br />

densethr density threshold in Cholesky<br />

devex devex pricing method<br />

droprowct constraint dropping threshold<br />

dweight proportion of feasible objective used in dual infeasible solution<br />

factor refactorization frequency<br />

fastits switching frequency to devex<br />

fixvar1 tolerance for fixing variables in barrier method when infeasible<br />

fixvar2 tolerance for fixing variables in barrier method when feasible<br />

formntype formation of normal matrix<br />

maxprojns maximum number of null space projections<br />

muinit initial value for μ<br />

mulimit lower limit for μ<br />

mulinfac multiple of μ to add to the linear objective<br />

mufactor reduction factor for μ in primal barrier method<br />

nullcheck null space checking switch<br />

objweight weight given to true objective function in phase I of primal<br />

barrier<br />

pdstepmult step-length multiplier for primal-dual barrier<br />

pertdiag diagonal perturbation in Cholesky factorization



possbasis potential basis flag<br />

postsolve basis solution required<br />

projtol projection error tolerance<br />

pweight starting value for multiplier in composite objective function<br />

rgfactor reduced gradient target reduction<br />

rglimit reduced gradient limit<br />

simopt simplex option<br />

stepmult step-length multiplier for primal barrier algorithm<br />

5.5 Examples of GAMS/OSL option file<br />

The following option file osl.opt may be used to force OSL to perform branch and bound<br />

preprocessing, a maximum of 4 integer solutions and to provide a log of the branch and<br />

bound search at every node.<br />

bbpreproc 1<br />

maxsols 4<br />

nodelog 1



6 DETAILED DESCRIPTION OF OSL OPTIONS<br />

adjac integer Generation of AA^T (default = 1)<br />

0 Save storage on forming AA^T in the interior point<br />

methods.<br />

1 Use a fast way of computing AA^T.<br />

bastype integer Format of basis file (default = 0)<br />

0 do not write level values to the basis file<br />

1 write level values to the basis file<br />

bbpreproc integer Preprocess the Branch and Bound tree (default = 0).<br />

0 Do not preprocess the 0-1 structure.<br />

1 Use super-nodes, copy matrix<br />

2 Regular branch-and-bound, copy matrix<br />

3 Use super-nodes, overwrite matrix<br />

4 Regular branch-and-bound, overwrite matrix<br />

Note: The pre-processor examines only the 0-1 structure. On models with<br />

only general integer variables (i.e. integer variables with other bounds<br />

than 0 and 1) the preprocessor will not do any good. A message is written<br />

if this happens.<br />

The preprocessor needs space for extra cuts. If no space is available, the<br />

branch-and-bound search may fail. Use the cuts option to specify how<br />

much extra room has to be allocated for additional cuts. Notice that the<br />

presolve already may reduce the size of the model, and so create extra<br />

space for additional cuts.<br />
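Following this note, an option file that enables the preprocessor and reserves room for its cuts might look like this (the cuts value is illustrative; the default is 10 + m/10):

```
* osl.opt: preprocess the 0-1 structure and reserve room for its cuts
bbpreproc 1
cuts 100
```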

chabstol real Absolute pivot tolerance for the Cholesky<br />

factorization. Range - [1.0e-30, 1.0e-6]<br />

(default = 1.0e-15)<br />

chtinytol real Cut-off tolerance in the Cholesky factorization.<br />

Range - [1.0e-30, 1.0e-6]<br />

(default = 1.0e-18)<br />

chweight real Rate of change for multiplier in composite<br />

objective function. Range - [1e-12,1]<br />

(default = 0.5)<br />

Note: This value determines the rate of change for pweight or<br />

dweight. It is a nonlinear factor based on case-dependent heuristics. The<br />

default of 0.5 gives a reasonable change if progress towards feasibility is<br />

slow. A value of 1.0 would give a greater change, while 0.1 would give a<br />

smaller change, and 0.01 would give very slow change.



crash integer Crash an initial basis. This option should not be<br />

used with a network or interior point solver. The<br />

option is ignored if GAMS provides a basis. To<br />

tell GAMS never to provide the solver with a<br />

basis use ’option bratio = 1;’ in the<br />

GAMS program.<br />

(default = 1 if no basis provided by GAMS)<br />

0 No crash is performed<br />

1 Dual feasibility may not be maintained<br />

2 Dual feasibility is maintained<br />

3 The sum of infeasibilities is not allowed to<br />

increase<br />

4 Dual feasibility is maintained and the sum of<br />

infeasibilities is not allowed to increase<br />

Note: The options to maintain dual feasibility do not have much impact<br />

due to the way GAMS sets up the model. Normally, only options 0 or 1<br />

need be used.<br />

creatbas character string Create a basis file in MPS style. String is the file<br />

name. Can only be used if GAMS passes on a<br />

basis (i.e. not the first solve, and if BRATIO test<br />

is passed). The basis is written just after reading<br />

in the problem and after the basis is setup. The<br />

option bastype determines whether values are<br />

written to the basis file. On UNIX precede the<br />

filename by ../ due to a bug in OSL (see Section<br />

7.3).<br />

creatmps character string<br />

This option can be used to create an MPS file of<br />

the GAMS model. This can be useful to test out<br />

other solvers or to pass on problems to the<br />

developers. String is the file name. It should not be<br />

longer than 30 characters. Example: creatmps<br />

trnsport.mps. The option mpstype<br />

determines which type (1 or 2 nonzeroes per line)<br />

is being used. The MPS file is written just after<br />

reading in the problem, before scaling and<br />

presolve. On UNIX precede the filename by ../<br />

due to a bug in OSL (see Section 7.3).<br />

Examples of the MPS files generated by OSL are<br />

in the Appendix.



crossover integer Switching to simplex from the Interior Point<br />

method. Use of this option can be expensive. It<br />

should be used if there is a need to restart from<br />

the optimal solution in the next solve statement.<br />

(default = 1)<br />

0 Interior point method does not switch to Simplex<br />

1 Interior point method switches to Simplex when<br />

numerical problems arise.<br />

2 Interior point method switches to Simplex at<br />

completion or when in trouble.<br />

3 As in 2, but also if after analyzing the matrix it<br />

seems the model is more appropriate for the<br />

Simplex method.<br />

4 Interior point immediately creates a basis and<br />

switches over to Simplex.<br />
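For example, to run the barrier method but still recover an optimal basis for a warm start in a later solve, an option file could combine the two settings described above:

```
* osl.opt: barrier with crossover to simplex at completion
method interior3
crossover 2
```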

cutoff real Cutoff or incumbent value. Overrides the<br />

CUTOFF in GAMS.<br />

cuts integer Allocate space for extra cuts generated by Branch<br />

and Bound preprocessor.<br />

(default = 10 + m/10, where m is the number of<br />

rows)<br />

degscale real The scale factor for all degradation.<br />

Range - [0.0, maxreal]<br />

(default = 1.0)<br />

densecol integer Dense column threshold. Columns with more nonzeroes<br />

than DENSECOL are treated differently.<br />

Range - [10, maxint]<br />

(default = 99,999)<br />

densethr real Density threshold in Cholesky. If DENSETHR ≤<br />

0 then everything is considered dense, if<br />

DENSETHR ≥ 1 then everything is considered<br />

sparse.<br />

Range - [-maxreal, maxreal]<br />

(default - 0.1)<br />

devex integer Devex pricing<br />

(default = 1)<br />

0 Switch off Devex pricing (not recommended for<br />

normal use).<br />

1 Approximate Devex method.<br />

2 Primal: Exact Devex method, Dual: Steepest Edge<br />

using inaccurate initial norms.



3 Exact Steepest Edge method.<br />

-1,-2,-3 As 1,2,3 for the dual. For the primal the default is<br />

used until feasible, and then mode 1,2 or 3.<br />

Note: Devex pricing is only used by the simplex method. Devex pricing is<br />

used after the first "cheap" iterations which use a random pricing strategy.<br />

In the primal case a negative value tells OSL to use the non-default devex<br />

pricing strategy only in phase II (in phase I the default devex strategy is<br />

used then). For the primal 1 (the default) or -2 are usually good settings,<br />

for the dual 3 is often a good choice.<br />

droprowct integer The constraint dropping threshold.<br />

Range - [1,30]<br />

(default = 1)<br />

dweight real Proportion of the feasible objective that is used<br />

when the solution is dual infeasible.<br />

Range - [0.0,1.0]<br />

(default = 0.1)<br />

factor integer Refactorization frequency. A factorization of the<br />

basis matrix will occur at least every factor<br />

iterations. OSL may decide to factorize earlier<br />

based on heuristics (loss of precision, space<br />

considerations).<br />

Range - [0,999]<br />

(default = 100 + m/100, where m is the number of<br />

rows)<br />

fastits integer When positive, OSL switches to Devex pricing after<br />

fastits random iterations (to be precise, at the<br />

first refactorization after that point); before<br />

switching it prices out a random subset of the<br />

columns with correct reduced costs. When negative,<br />

OSL takes a subset of the columns at each<br />

refactorization and uses this as a working set.<br />

Range - [-maxint,+maxint].<br />

(default = 0)<br />

Note: By default OSL primal simplex initially does cheap iterations using<br />

a random pricing strategy, then switches to Devex either because the<br />

success rate of the random pricing becomes low, or because the storage<br />

requirements are becoming high. This option allows you to alter that<br />

pricing switch. This option is only used when simplex is used with<br />

simopt 2, which is the default for models solved from scratch. For more<br />

information see the OSL manual.



fixvar1 real Tolerance for fixing variables in the barrier<br />

method if the problem is still infeasible.<br />

Range - [0.0, 1.0e-3]<br />

(default = 1.0e-7)<br />

fixvar2 real Tolerance for fixing variables when the problem<br />

is feasible.<br />

Range - [0.0, 1.0e-3]<br />

(default = 1.0e-8)<br />

formntype integer Formation of the normal matrix<br />

(default = 0)<br />

0 Do formation of normal matrix in the interior<br />

point methods in a way that exploits<br />

vectorization. Uses lots of memory, but will<br />

switch to option 2 if needed.<br />

1 Save memory in formation of normal matrix.<br />

improve real The next integer solution should be at least<br />

improve better than the current one. Overrides<br />

the cheat parameter in GAMS.<br />

incore integer Keep tree in core<br />

(default = 0)<br />

0 The Branch & Bound routine will save tree<br />

information on the disk. This is needed for larger<br />

and more difficult models.<br />

1 The tree is kept in core. This is faster, but you<br />

may run out of memory. For larger models use<br />

incore 0, or use the workspace model suffix<br />

to request lots of memory. GAMS/OSL cannot<br />

recover if you run out of memory, i.e. we<br />

cannot save the tree to disk and continue with<br />

incore 0.<br />

iterlim integer Iteration limit. Overrides the GAMS iterlim<br />

option. Notice that the interior point codes<br />

require far fewer iterations than the simplex<br />

method; most LP models can be solved by the<br />

interior point method within 50 iterations. MIP<br />

models may often need many more than 1000 LP<br />

iterations.<br />

(default = 1000)



iterlog integer LP iteration log frequency. How many iterations<br />

are performed before a new line to the log file<br />

(normally the screen) is written. The log shows<br />

the iteration number, the primal infeasibility, and<br />

the objective function. The same log frequency is<br />

passed on to OSL, so in case you have option<br />

sysout=on in the GAMS source, you will see<br />

an OSL log in the listing file with the same<br />

granularity. The resource usage is checked only<br />

when an iteration log is written. In case you set<br />

this option parameter to a very large value, a<br />

resource limit overflow will not be detected.<br />

(default = 20)<br />

iweight real Weight for each integer infeasibility.<br />

Range [0.0, maxreal]<br />

(default = 1.0)<br />

maxnodes integer Maximum number of nodes allowed.<br />

(default = 9,999,999)<br />

maxprojns integer Maximum number of null space projections.<br />

Range - [1,10]<br />

(default = 3)<br />

maxsols integer Maximum number of integer solutions allowed.<br />

(default = 9,999,999)<br />

method character Solution Method<br />

(default = psimplex)<br />

psimplex Uses primal simplex method as LP solver<br />

dsimplex Uses dual simplex<br />

simplex Uses either the primal or dual simplex based on<br />

model characteristics<br />

network Uses the network solver<br />

interior0 Uses an appropriate barrier method<br />

interior1 Uses the primal barrier method<br />

interior2 Uses the primal-dual barrier method<br />

interior3 Uses the primal-dual predictor-corrector barrier<br />

method.



Note: In case any of the simplex algorithms is used, the listing file will list<br />

the algorithm chosen by OSL (this is not possible for interior0).<br />

The best interior point algorithm overall seems to be the primal-dual with<br />

predictor-corrector. Note that if an interior point method has been selected<br />

for a MIP problem, only the relaxed LP is solved by the interior point<br />

method.<br />
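As an illustration (a sketch only: osl.opt is the conventional option file name for this solver, enabled via the standard GAMS optfile model suffix), an option file selecting the predictor-corrector barrier for the LP would contain:<br />

```
method interior3
```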

mpstype integer Format of the MPS file<br />

(default = 2)<br />

1 One nonzero per line in the COLUMNS section.<br />

2 Two nonzeroes per line in the COLUMNS section.<br />

mufactor real The reduction factor for μ in the primal barrier<br />

method.<br />

Range - [1.0e-6,0.99999]<br />

(default = 0.1)<br />

muinit real Initial value for μ in primal barrier method.<br />

Range - [1.0e-20, 1.0e6]<br />

(default = 0.1)<br />

mulimit real Lower limit for μ.<br />

Range - [1.0e-6,1.0]<br />

(default = 1.0e-8)<br />

mulinfac real Multiple of μ to add to the linear objective.<br />

Range - [0.0, 1.0]<br />

(default = 0.0)<br />

networklog integer Iteration log frequency for the network solver.<br />

See iterlog.<br />

(default = 100)<br />

nodelog integer MIP node log frequency. The node log is written to<br />

the screen when a new integer solution is found or<br />

after every nodelog nodes.<br />

(default = 20)<br />

Note: OSL writes a log line for each node. This is captured in the status file,<br />

and displayed in the listing file when you have option sysout=on;<br />

in the GAMS source. The status file can become huge when many nodes<br />

are being examined.<br />

nullcheck integer Null space checking switch. See the OSL manual.<br />

objweight real Weight given to the true objective function in phase 1<br />

of the primal barrier method.<br />

Range - [0.0, 1.0e8]<br />

(default = 0.1)



optca real Absolute optimality criterion. The definition of<br />

this criterion is the same as described in the<br />

GAMS manual.<br />

(default = 0.0)<br />

optcr real Relative optimality criterion. The definition of<br />

this criterion is the same as described in the<br />

GAMS manual.<br />

(default = 0.10)<br />

pdgaptol real Barrier method primal-dual gap tolerance.<br />

Range - [1.0e-12, 1.0e-1]<br />

(default = 1.0e-7)<br />

pdstepmult real Step-length multiplier for primal-dual barrier.<br />

Range - [0.01, 0.99999]<br />

(default = 0.99995)<br />

pertdiag real Diagonal perturbation in the Cholesky<br />

factorization.<br />

Range - [0.0, 1.0e-6]<br />

(default = 1.0e-12)<br />

possbasis integer Potential basis flag. If greater than zero, variables<br />

that are remote from their bounds are marked<br />

potentially basic.<br />

Range - [0,maxint]<br />

(default = 1)<br />

postsolve integer Basis solution required. If the presolver was not<br />

used this option is ignored.<br />

(default = 1)<br />

0 No basis solution is required. The reported<br />

solution will be optimal but it will not be a basis<br />

solution. This will require less work than being<br />

forced to find a basic optimal solution.<br />

1 Basis solution is required. After the postsolve, the<br />

Simplex method is called again to make sure that<br />

the final optimal solution is also a basic solution.



presolve integer Perform model reduction before starting the<br />

optimization procedure. This should not be used<br />

with the network solver. If, by accident, all the<br />

constraints can be removed from the model, OSL<br />

will not be able to solve it and will abort. presolve<br />

can sometimes detect infeasibilities, which can<br />

cause it to fail; in that case the normal solver<br />

algorithm is called for the full problem.<br />

(default = 0 for cold starts, -1 for restarts)<br />

-1 Do not use presolve<br />

0 Remove redundant rows; variables summing<br />

to zero are fixed. If just one variable in a row is<br />

not fixed, the row is removed and appropriate<br />

bounds are imposed on that variable.<br />

1 As 0, and doubleton rows are eliminated (rows of<br />

the form pxj + qxk = b).<br />

2 As 0, and rows of the form x1 = x2 + x3 ..., x>0<br />

are eliminated.<br />

3 All of the above are performed.<br />

Note: The listing file will report how successful the presolve was. The<br />

presolver writes the information needed to restore the original model to a<br />

disk file, which is located in the GAMS scratch directory. The postsolve routine<br />

reads this file after the smaller problem is solved, and then simplex is<br />

called again to calculate an optimal basis solution of the original model. If<br />

no basis solution is required, use the option postsolve 0.<br />

projtol real Projection error tolerance.<br />

Range - [0.0, 1.0]<br />

(default = 1.0e-6)<br />

pweight real Starting value of the multiplier in composite<br />

objective function.<br />

Range: [1e-12,1e10].<br />

(default = 0.1)<br />

Note: In phase I, when the model is still infeasible, OSL uses a composite<br />

objective function of the form phase_I_objective + pweight *<br />

phase_II_objective. The phase I objective has a +1 or -1 where the basic<br />

variables exceed their lower or upper bounds. This already gives a little<br />

weight to the phase II objective. pweight starts at 0.1 (or<br />

whatever you specify as the starting value) and is decreased continuously:<br />

by a large amount if the infeasibilities increase, and by small<br />

amounts if progress towards feasibility is slow.



reslim integer Resource limit in seconds. Overrides the GAMS<br />

reslim option.<br />

(default = 1000)<br />

rgfactor real Reduced gradient target reduction factor.<br />

Range - [1.0e-6,0.999999]<br />

(default = 0.1)<br />

rglimit real Reduced gradient limit.<br />

Range - [0.0, 1.0]<br />

(default = 0.0)<br />

scale integer Scaling done on the model. Scaling cannot be<br />

used for network models. It is advisable to use<br />

scaling with the primal barrier method; the<br />

other barrier methods (primal-dual and<br />

primal-dual predictor-corrector) should in<br />

general not be used with scaling.<br />

(default = 1, except for the methods mentioned<br />

above)<br />

0 Do not scale the model<br />

1 Scale the model<br />

simopt integer Simplex option<br />

(default = 2 if no basis provided by GAMS and<br />

crashing off, = 0 otherwise)<br />

0 Use an existing basis<br />

1 Devex pricing is used<br />

2 Random pricing is used for the first "fast"<br />

iterations, then a switch is made to Devex<br />

pricing<br />

3 Use previous values to reconstruct a basis.<br />

Note: The dual simplex method can only have options 0 or 1.<br />

stepmult real Step-length multiplier for primal barrier<br />

algorithm.<br />

Range - [0.01, 0.99999]<br />

(default = 0.99)<br />

strategy integer MIP strategy for deciding branching.



1 Perform probing only on satisfied 0-1 variables.<br />

This is the default setting in OSL. When a 0-1<br />

variable is satisfied, OSL will do probing to<br />

determine what other variables can be fixed as a<br />

result. If this bit is not set, OSL will perform<br />

probing on all 0-1 variables. If they are still<br />

fractional, OSL will try setting them both ways<br />

and use probing to build an implication list for<br />

each direction.<br />

2 Use solution strategies that assume a valid<br />

integer solution has been found. OSL uses<br />

different strategies when looking for the first<br />

integer solution than when looking for a better<br />

one. If you already have a solution from a<br />

previous run and have set a cutoff value, this<br />

option will cause OSL to operate as though it<br />

already has an integer solution. This is beneficial<br />

for restarting and should reduce the time<br />

necessary to reach the optimal integer solution.<br />

4 Take the branch opposite the maximum pseudo-cost.<br />

Normally OSL will branch on the node<br />

whose smaller pseudo-cost is highest. This has<br />

the effect of choosing a node where both<br />

branches cause significant degradation in the<br />

objective function, probably allowing the tree to<br />

be pruned earlier. With this option, OSL will<br />

branch on the node whose larger pseudo-cost is<br />

highest. The branch taken will be in the opposite<br />

direction of this cost. This has the effect of<br />

forcing the most nearly integer values to integers<br />

earlier and may be useful when any integer<br />

solution is desired, even if not optimal. Here the<br />

tree could potentially grow much larger, but if<br />

the search is successful and any integer solution<br />

is adequate, then most of it will never have to be<br />

explored.<br />

8 Compute new pseudo-costs as variables are<br />

branched on. Pseudo-costs are normally left as is<br />

during the solution process. Setting this option<br />

will cause OSL to make new estimates, using<br />

heuristics, as each branch is selected.



16 Compute new pseudo-costs for unsatisfied<br />

variables. Pseudo-costs are normally left as is<br />

during the solution process. Setting this option<br />

will cause OSL to make new estimates, using<br />

heuristics, for any unsatisfied variables’ pseudo-costs<br />

in both directions. This is done only the<br />

first time the variable is found to be unsatisfied.<br />

In some cases, variables will be fixed to a bound<br />

by this process, leading to better performance in<br />

the branch and bound routine. This work is<br />

equivalent to making two branches for every<br />

variable investigated.<br />

32 Compute pseudo-costs for satisfied variables as<br />

well as unsatisfied variables. Here, if 16 is also<br />

on, OSL will compute new estimated pseudo-costs<br />

for the satisfied as well as the unsatisfied<br />

ones. Again, this is very expensive, but can<br />

improve performance on some problems.<br />

Note: strategy can combine the above options by adding<br />

up the values of the individual options; 48, for instance, selects 16 and<br />

32.<br />
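The additive combination works like a bit mask; a minimal sketch (the flag names are illustrative, not OSL identifiers):<br />

```python
# Each strategy option corresponds to one bit; a strategy value is
# the sum of the selected options. These constant names are invented
# for illustration only.
PROBE_SATISFIED = 1
ASSUME_SOLUTION = 2
OPPOSITE_BRANCH = 4
NEW_PSEUDOCOSTS = 8
UNSATISFIED_COSTS = 16
SATISFIED_COSTS = 32

def selected(strategy, flag):
    """Return True if the strategy value enables the given option."""
    return strategy & flag != 0

# strategy 48 selects 16 and 32, as in the note above
assert selected(48, UNSATISFIED_COSTS)
assert selected(48, SATISFIED_COSTS)
assert not selected(48, OPPOSITE_BRANCH)
```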

target real Target value for the objective.<br />

(default = 5% worse than the relaxed solution)<br />

threshold real Supernode processing threshold.<br />

Range [0.0, maxreal]<br />

(default = 0)<br />

tolpinf real Primal infeasibility tolerance. Row and column<br />

levels that are outside their bounds by less than<br />

this value are still considered feasible.<br />

Range: [1e-12,1e-1].<br />

(default = 1e-8)<br />

toldinf real Dual infeasibility tolerance. Functions as<br />

optimality tolerance for primal simplex.<br />

Range: [1e-12,1e-1].<br />

(default = 1e-7)<br />

tolint real Integer tolerance<br />

Range - [1.0e-12, 1.0e-1]<br />

(default = 1.0e-6)<br />

workspace real Memory allocation. Overrides the memory<br />

estimation and the workspace model suffix.<br />

The workspace is defined in megabytes.



7 SPECIAL NOTES<br />

This section covers some special topics of interest to users of OSL.<br />

7.1 Cuts generated during branch and bound<br />

The OSL documentation does not give an estimate for the number of cuts that can be<br />

generated when the bbpreproc is used. By default we now allow #rows/10 extra rows to<br />

be added; use the cuts option to change this. Also, we were not able to find out how<br />

many cuts OSL really adds, so we cannot report this. You will see a message on the<br />

screen and in the listing file when OSL runs out of space to add cuts. When this happens,<br />

we have seen the branch & bound fail later on with the message "OSL data was overwritten",<br />

so make sure your cuts option is large enough. For this reason the default is: no<br />

preprocessing.<br />

Note that when cuts is 0 and presolve is turned on, there may still be enough room<br />

for the preprocessor to add cuts, because the presolve reduced the number of rows.<br />

7.2 Presolve procedure removing all constraints from the model<br />

The PRESOLVE can occasionally remove all constraints from a model. This is the case<br />

for instance for the first solve in [WESTMIP]. An artificial model with the same behavior<br />

is shown below. OSL will not recover from this. Turn off the presolve procedure by using<br />

the option presolve -1 to prevent this from happening. An appropriate message is<br />

written when this happens.<br />

Variable x ;<br />

Equation e ;<br />

e.. x =e= 1 ;<br />

model m /all/ ;<br />

option lp = osl ;<br />

solve m using lp minimizing x ;<br />
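The workaround above can be sketched end-to-end (osl.opt is the conventional option file name for this solver, enabled via the standard GAMS optfile model suffix):<br />

```
* osl.opt: disable the OSL presolve
presolve -1
```

and in the GAMS source, add m.optfile = 1; before the solve statement.<br />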

7.3 Deleting the tree files<br />

The MIP solver EKKMSLV uses two files to store the tree. Due to a bug in OSL we are<br />

unable to store these files in the GAMS scratch directory (the open statement we issue is<br />

ignored by OSL). Therefore, after solving a MIP with OSL with incore 0 (the<br />

default), two files fort.82 and fort.83 would be left in the current directory. The shell<br />

script gamsosl.run will try to overcome this by doing a cd to the scratch directory. If<br />

this fails, you will see an error message on the screen.



An undesirable side effect of this is that all options that specify user files have<br />

to be preceded by ../ so that the file is written to the directory from which the user<br />

started GAMS. If you do not do this, the file will go to the current directory, which is the<br />

scratch directory, and the file will be removed when GAMS terminates. The affected<br />

options are creatmps and creatbas. So if you want to write an MPS file with the name<br />

myfile.mps, use the option creatmps ../myfile.mps.



8 THE GAMS/OSL LOG FILE<br />

The iteration log that normally appears on the screen has the following<br />

appearance for LP problems:<br />

Reading data...<br />

Starting OSL...<br />

Scale...<br />

Presolve...<br />

Crashing...<br />

Primal Simplex...<br />

Iter Objective Sum Infeasibilities<br />

20 2155310.000000 48470.762716<br />

40 1845110.000000 37910.387877<br />

60 1553010.000000 26711.895409<br />

80 191410.000000 0.0<br />

100 280780.000000 0.0<br />

120 294070.000000 0.0<br />

Postsolve...<br />

Primal Simplex...<br />

121 294070.000000 Normal Completion<br />

Optimal<br />

For MIP problems, similar information is provided for the relaxed problem, and in<br />

addition the branch and bound information is provided at regular intervals. The screen log<br />

has the following appearance:<br />

Reading data...<br />

Starting OSL...<br />

Scale...<br />

Presolve...<br />

Crashing...<br />

Primal Simplex...<br />

Iter Objective Sum Infeasibilities<br />

20 19.000000 8.500000<br />

40 9.500000 6.250000<br />

**** Not much progress: perturbing the problem<br />

60 3.588470 3.802830<br />

80 0.500000 2.000000<br />

100 2.662888 0.166163<br />

116 0.000000 Relaxed Objective<br />

Branch&Bound...<br />

Iter Nodes Rel Gap Abs Gap Best Found<br />

276 19 n/a 6.0000 6.0000 New<br />

477 39 n/a 6.0000 6.0000<br />

700 59 n/a 6.0000 6.0000<br />

901 79 n/a 6.0000 6.0000



1119 99 65.0000 5.9091 6.0000<br />

1309 119 65.0000 5.9091 6.0000<br />

1538 139 17.0000 5.6667 6.0000<br />

1701 159 17.0000 5.6667 6.0000<br />

1866 179 17.0000 5.6667 6.0000<br />

2034 199 17.0000 5.6667 6.0000<br />

2158 219 11.0000 5.5000 6.0000<br />

2362 239 11.0000 5.5000 6.0000<br />

2530 259 8.0000 5.3333 6.0000<br />

2661 275 5.0000 3.3333 4.0000 Done<br />

Postsolve...<br />

Fixing integer variables...<br />

Primal Simplex...<br />

2661 4.000000 Normal Completion<br />

Integer Solution<br />

The solution satisfies the termination tolerances<br />

The branch and bound information consists of the number of iterations, the number of<br />

nodes, the current relative gap, the current absolute gap and the current best integer<br />

solution.



9 EXAMPLES OF MPS FILES WRITTEN BY<br />

GAMS/OSL.<br />

This appendix shows the different output formats for MPS and basis files. We will not<br />

explain the MPS format or the format of the basis file: we will merely illustrate the<br />

function of the options mpstype and bastype. Running [TRNSPORT] with the<br />

following option file<br />

mpsfile trnsport.mps<br />

will result in the following MPS file:<br />

NAME GAMS/OSL<br />

ROWS<br />

N OBJECTRW<br />

E R0000001<br />

L R0000002<br />

L R0000003<br />

G R0000004<br />

G R0000005<br />

G R0000006<br />

COLUMNS<br />

C0000001 R0000001 -0.225000 R0000002 1.000000<br />

C0000001 R0000004 1.000000<br />

C0000002 R0000001 -0.153000 R0000002 1.000000<br />

C0000002 R0000005 1.000000<br />

C0000003 R0000001 -0.162000 R0000002 1.000000<br />

C0000003 R0000006 1.000000<br />

C0000004 R0000001 -0.225000 R0000003 1.000000<br />

C0000004 R0000004 1.000000<br />

C0000005 R0000001 -0.162000 R0000003 1.000000<br />

C0000005 R0000005 1.000000<br />

C0000006 R0000001 -0.126000 R0000003 1.000000<br />

C0000006 R0000006 1.000000<br />

C0000007 OBJECTRW 1.000000 R0000001 1.000000<br />

RHS<br />

RHS1 R0000002 350.000000 R0000003 600.000000<br />

RHS1 R0000004 325.000000 R0000005 300.000000<br />

RHS1 R0000006 275.000000<br />

BOUNDS<br />

FR BOUND1 C0000007 0.000000<br />

ENDATA<br />

MPS names have to be 8 characters or less. GAMS names can be much longer, for<br />

instance: X("Seattle","New-York"). We don’t try to make the names recognizable, but just<br />

give them labels like R0000001 etc.<br />
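The label scheme can be sketched as follows (a hypothetical illustration, not code from the link itself):<br />

```python
# MPS names are limited to 8 characters, so GAMS/OSL emits generated
# fixed-width labels rather than trying to shorten the GAMS names.
def row_label(i):
    """Label for the i-th row, e.g. R0000001."""
    return "R%07d" % i

def col_label(j):
    """Label for the j-th column, e.g. C0000001."""
    return "C%07d" % j

print(row_label(1), col_label(7))  # R0000001 C0000007
```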

Setting the option mpstype 1 gives:



NAME GAMS/OSL<br />

ROWS<br />

N OBJECTRW<br />

E R0000001<br />

L R0000002<br />

L R0000003<br />

G R0000004<br />

G R0000005<br />

G R0000006<br />

COLUMNS<br />

C0000001 R0000001 -0.225000<br />

C0000001 R0000002 1.000000<br />

C0000001 R0000004 1.000000<br />

C0000002 R0000001 -0.153000<br />

C0000002 R0000002 1.000000<br />

C0000002 R0000005 1.000000<br />

C0000003 R0000001 -0.162000<br />

C0000003 R0000002 1.000000<br />

C0000003 R0000006 1.000000<br />

C0000004 R0000001 -0.225000<br />

C0000004 R0000003 1.000000<br />

C0000004 R0000004 1.000000<br />

C0000005 R0000001 -0.162000<br />

C0000005 R0000003 1.000000<br />

C0000005 R0000005 1.000000<br />

C0000006 R0000001 -0.126000<br />

C0000006 R0000003 1.000000<br />

C0000006 R0000006 1.000000<br />

C0000007 OBJECTRW 1.000000<br />

C0000007 R0000001 1.000000<br />

RHS<br />

RHS1 R0000002 350.000000<br />

RHS1 R0000003 600.000000<br />

RHS1 R0000004 325.000000<br />

RHS1 R0000005 300.000000<br />

RHS1 R0000006 275.000000<br />

BOUNDS<br />

FR BOUND1 C0000007 0.000000<br />

ENDATA<br />

To illustrate the creation of a basis file, we first solve the transport model as usual, but we<br />

save work files so we can restart the job:<br />

gams trnsport save=t<br />

Then we create a new file called t2.gms with the following content:<br />

transport.optfile=1;<br />

solve transport using lp minimizing z;<br />

and we run gams t2 restart=t after creating an option file containing the line<br />

creatbas trnsport.bas. This results in the following basis file being generated.



NAME<br />

XL C0000002 R0000001<br />

XU C0000004 R0000004<br />

XU C0000006 R0000005<br />

XU C0000007 R0000006<br />

ENDATA<br />

When we change the option to bastype 1 we get:<br />

NAME<br />

BS R0000002 0.000000<br />

BS R0000003 0.000000<br />

LL C0000001 0.000000<br />

XL C0000002 R0000001 0.000000 0.000000<br />

LL C0000003 0.000000<br />

XU C0000004 R0000004 0.000000 325.000000<br />

LL C0000005 0.000000<br />

XU C0000006 R0000005 0.000000 300.000000<br />

XU C0000007 R0000006 0.000000 275.000000<br />

ENDATA


GAMS/OSL Stochastic Extensions 2.0 User Notes<br />

GAMS/OSL Stochastic Extensions User Notes<br />

Table of Contents<br />

● Introduction<br />

● How to run a model with GAMS/OSLSE<br />

● Overview of OSLSE<br />

● Model formulation<br />

❍ Intended use<br />

❍ Model requirements<br />

● Future Work / Wish List<br />

● GAMS options<br />

● Summary of OSLSE options<br />

● The GAMS/OSLSE log file<br />

● Detailed description of GAMS/OSLSE options<br />

Introduction<br />

This document describes the GAMS 2.50 link to the IBM OSL Stochastic Extensions solvers.<br />

The GAMS/OSLSE link combines the high level modeling capabilities of GAMS with a powerful<br />

decomposition solver for stochastic linear and quadratic programs. It allows the user to formulate the<br />

deterministic equivalent stochastic program using general GAMS syntax and to pass this on to the solver<br />

without having to be concerned with the programming details of such a task. The default solver options<br />

used, while suitable for most solves, can be changed via a simple options file.<br />

Those seeking an introduction to the ideas and techniques of stochastic programming should consult the<br />

IBM Stochastic Extensions web page or the recently published texts Introduction to Stochastic<br />

Programming (J.R. Birge and F.V. Louveaux, Springer-Verlag, New York, 1997) or Stochastic<br />

Programming (P. Kall and S.W. Wallace, John Wiley and Sons, Chichester, England, 1994).<br />

file:///F|/products/oslselnk/ver006/docs/solvers/oslse.htm (1 of 12) [2/26/02 4:25:28 PM]



How to run a model with GAMS/OSLSE<br />

The following statements can be used inside your GAMS model:<br />

option lp = oslse; for linear programs, or<br />

option nlp = oslse; for quadratic programs<br />

These statements should appear before the solve statement you wish to affect.<br />

Overview of OSLSE<br />

OSL Stochastic Extensions (OSLSE) allows the input of a stochastic program and its solution via a<br />

nested decomposition solver that implements the L-shaped method of Van-Slyke & Wets, a.k.a. Benders<br />

decomposition. The stochastic program is assumed to have an event tree structure that is crucial to the<br />

efficient input and solution of the problem. In addition to linear programs, a convex separable quadratic<br />

term is allowed in the objective function (i.e. a diagonal Q matrix). At this time, only discrete probability<br />

distributions are accommodated.<br />

Model formulation<br />

To properly understand and use the GAMS/OSLSE link, one must understand the event tree<br />

representation of a stochastic program. The OSLSE subroutine library assumes that the SP has this form,<br />

and depends on this fact to efficiently store and solve the problem.<br />

To construct an event tree, the variables in the SP are first partitioned, where the variables in each subset<br />

represent decisions made at a particular time period and based on one realization of the stochastic<br />

parameters known at that time period. Each of these subsets forms a node. Thus, for a two-stage SP with<br />

2 possible realizations of a stochastic parameter, there will be one node at time period 1, and two nodes<br />

at time period 2, where the variables in the last two nodes represent the same decision made in the face of<br />

different data.<br />




Given the set of nodes and the variables belonging to them, we partition the set of constraints or<br />

equations in the problem according to this rule: a constraint belongs to the node of latest time stage<br />

containing a variable used in the constraint. Put more simply, the constraint goes in the latest node<br />

possible. Note that for valid stochastic programs, this rule is not ambiguous: a constraint cannot involve<br />

variables in different nodes of the same time period. Thus, a two-stage SP may have constraints<br />

involving only those variables from the node at time period 1 (e.g. initial investmest limitations, etc.) that<br />

are also assigned to this node, and constraints involving first- and second-stage variables (e.g. recourse<br />

constraints) that are assigned to the nodes in time period 2.<br />

Once all variables and equations are partitioned among the nodes, we can define an ancestor relationship<br />

among the nodes as follows: node i is an ancestor of node j if an equation in node j uses a variable in<br />

node i. This ancestor relationship does not explicitly define a tree, but in a valid stochastic program, it<br />

does so implicitly. The root of the tree is the node (at time period 1) that has no ancestors, while the<br />

nodes at time period 2 are descendents of this root node and of no others, so they can be made children of<br />

the root.<br />
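The partitioning and ancestor rules above can be sketched on a tiny hypothetical two-stage example (the data and names are invented for illustration; this is not OSLSE input):<br />

```python
# Node membership of each variable, and the time stage of each node.
node_of_var = {"x_root": "root", "x_one": "one", "x_two": "two"}
stage_of_node = {"root": 1, "one": 2, "two": 2}

# Constraints and the variables they use.
constraints = {
    "f": ["x_root"],               # first-stage constraint
    "g_one": ["x_root", "x_one"],  # recourse constraints
    "g_two": ["x_root", "x_two"],
}

def assign(con_vars):
    """A constraint goes in the node of latest time stage among its variables."""
    return max((node_of_var[v] for v in con_vars),
               key=lambda n: stage_of_node[n])

node_of_con = {c: assign(vs) for c, vs in constraints.items()}

# Node i is an ancestor of node j if a constraint in node j uses a
# variable in node i.
ancestors = {(node_of_var[v], n)
             for c, n in node_of_con.items()
             for v in constraints[c]
             if node_of_var[v] != n}
```

Here the recourse constraints land in the leaf nodes, and the root is the ancestor of both leaves, which implicitly defines the event tree.<br />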

Intended use<br />

The GAMS/OSLSE link is intended to be used for stochastic models with 2 or more stages, where<br />

decisions (variables) from one time stage determine the feasibility or optimality of decisions made in<br />

later time stages. The key requirement is that the modeler group the variables and constraints by node.<br />

If this is done correctly, the link takes care of all the other details. The quickest and easiest way to begin<br />

using the GAMS/OSLSE link is to meet this key requirement and follow the examples below. Those<br />

wishing to push the limits of the link or to understand the error messages for bad formulations, etc., will<br />

want to read the next subsection on model requirements more carefully.<br />

The modeler must identify one set in the GAMS model as the node set. This is done by using the<br />

descriptive text nodes (not case sensitive) for the set in question. The set name and set elements can be<br />

any legal GAMS values. Each equation and variable in the model (with the exception of the objective)<br />

can then be indexed by this node set, so that the link can partition the set of rows and columns by node<br />

and construct an event tree for the model. Note that the model formulation that results is completely general,<br />

so that it can be passed to any LP or NLP solver. Only the OSLSE solver makes use of this special<br />

partitioning information.<br />

As an example, consider the simple two-stage model (the stochastic nature is suppressed for simplicity):<br />

max  x_1 + x_2<br />

s.t.  x_root ≤ 1,   x_i = x_root + 1,   i = 1, 2<br />


GAMS/OSL Stochastic Extensions 2.0 User Notes<br />

An equivalent GAMS model is the following:<br />

set<br />

N 'nodes' / root,one,two /,<br />

LEAF(N) / one, two /,<br />

ROOT(N) / root /;<br />

variables<br />

x(N),<br />

obj;<br />

equations<br />

objdef,<br />

g(N),<br />

f(N);<br />

objdef.. sum {LEAF, x(LEAF)} =e= obj;<br />

f(ROOT).. x(ROOT) =l= 1;<br />

g(LEAF).. x(LEAF) =e= x('root') + 1;<br />

model eg1 / f, g, objdef /;<br />

solve eg1 using lp maximizing obj;<br />

Note that in the model above, every variable except the objective is indexed by the set N, and similarly<br />

for the equations. The objective is treated specially, so that it can involve variables from the same time<br />

period and different nodes. It is not used in determining the ancestor relationship between the nodes, and<br />

in fact, should never be indexed.<br />

It is not strictly necessary to index all variables and equations by node. Any nodeless variables or<br />

equations (i.e. those not indexed by node) are placed into the root node of the event tree. So, a<br />

completely equivalent formulation for the above model would be:<br />

set<br />

N 'nodes' / root, one,two /,<br />

LEAF(N) / one, two /;<br />

variables<br />

xr 'root variable',<br />

x(N),<br />




obj;<br />

equations<br />

objdef,<br />

f,<br />

g(N);<br />

objdef.. sum {LEAF, x(LEAF)} =e= obj;<br />

g(LEAF).. x(LEAF) =e= 1 + xr;<br />

f.. xr =l= 1;<br />

model eg1 / f, g, objdef /;<br />

option lp = oslse;<br />

solve eg1 using lp maximizing obj;<br />

There are a number of stochastic programming examples on the OSLSE Web page. Formulations of<br />

these and other models in GAMS, suitable for use with GAMS/OSLSE, can be downloaded from that page.<br />

Model requirements<br />

While there is an implicit event tree in all stochastic models of the type we are considering, it is not<br />

necessary for the modeler to construct this tree explicitly. In some cases, it is good modeling practice to<br />

do so, but this is not a requirement made by GAMS/OSLSE. All that is necessary is that (some)<br />

variables and constraints be indexed by node. The link will extract the ancestor relationships between<br />

the nodes by looking at the constraints and the variables they use. From these ancestor relationships, a<br />

tree is formed, and the resulting model is passed to and solved by the routines in OSLSE.<br />

The above process can fail for a number of reasons. For example, the ancestor relationships between the<br />

nodes could be circular (e.g. A -> B -> A), in which case a tree cannot exist. Such a model will be<br />

rejected by GAMS/OSLSE. At the opposite extreme, the model could consist of many independent<br />

nodes, so that no ancestor relationships exist. In this case, each node is the root of a one-node tree. The<br />

link will recognize these multiple roots and create an artificial super-root that adopts each of the root<br />

nodes as its children, so that we have a proper tree.<br />
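The cycle rejection and super-root construction described above can be sketched as follows. This is a hypothetical illustration: `build_tree`, `parent_of`, and the `SUPER_ROOT` label are invented names, not the link's actual code.<br />

```python
def build_tree(nodes, parent_of):
    """parent_of maps each node to its ancestor (or None for a root).
    Reject circular relations; adopt multiple roots under a super-root."""
    # Cycle check: walking up from any node must terminate at a root.
    for start in nodes:
        seen, cur = set(), start
        while cur is not None:
            if cur in seen:
                raise ValueError("circular ancestor relation: no tree exists")
            seen.add(cur)
            cur = parent_of[cur]
    roots = [n for n in nodes if parent_of[n] is None]
    if len(roots) > 1:                   # many independent one-node trees
        parent_of = dict(parent_of)
        for r in roots:
            parent_of[r] = "SUPER_ROOT"  # artificial super-root adopts them
        parent_of["SUPER_ROOT"] = None
    return parent_of

tree = build_tree(["a", "b"], {"a": None, "b": None})
print(tree["a"])   # SUPER_ROOT
```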

The OSLSE routines make some other assumptions about the correct form of the model that are checked<br />

by the link. Each scenario must be complete (i.e. must end in the final time stage), so that all leaf nodes<br />

in the tree exist at the same level. Furthermore, each node in a given time stage must have the same<br />

nonzero structure. This implies that the number of rows, columns, and nonzeros in these nodes is the<br />




same as well. If any of these counts differ, some diagnostic messages are printed out and the solve is<br />

terminated.<br />

There are very few requirements put on the form of the objective function. It must be a scalar (i.e.<br />

unindexed) row with an equality relationship in which the objective variable appears linearly. This<br />

objective variable must not appear in any other rows of the model. Variables from any node of the model<br />

may appear in this row, since the objective row will be removed from the model before the model is<br />

passed on for solution.<br />

It is possible to formulate models with separable quadratic terms in the objective. Such quadratic terms<br />

must appear only in the objective function. In addition, they must be convex (i.e. positive terms for<br />

minimization models, negative for maximization). Furthermore, the quadratic terms in each node of a<br />

stage must be identical (i.e. scenarios cannot differ in their quadratic part; all quadratic terms are<br />

inherited from the core scenario).<br />

Future Work / Wish List<br />

In this section we list features and functionality that are either incomplete, unfinished, or not yet<br />

implemented. They are not listed in any particular order, nor are they necessarily going to be part of any<br />

future release. We invite your comments on the importance of these items to you.<br />

Currently, it is not possible to "warm start" the stochastic solvers in OSLSE (i.e. they do not make use of<br />

any current information about a basis, or about level and marginal values).<br />

Currently, it is not possible to return complete basis information to GAMS. The basis status is not made<br />

available by the OSLSE link. This is evident on degenerate models, for example: when x is at bound and<br />

has a zero reduced cost, is x in the basis or not?<br />

The current link allows the objective function to take a limited quadratic form. The quadratic terms must<br />

be separable and must be convex (i.e. coefficients positive for a minimization, negative for a<br />

maximization). Furthermore, the quadratic terms on the same variables in different scenarios must be<br />

identical. This is in contrast to the linear terms, where the objective weights can change with the<br />

scenario. OSLSE only allows quadratic terms to be specified for the core model; new scenarios do not<br />

include these terms, but inherit them from the core. The link currently checks for proper agreement<br />

between these quadratic terms, and rejects the model if this is not maintained.<br />

The current link does not allow the user to specify an iterations limit or a resource limit, or to cause a<br />




user interrupt. The solution subroutines do not support this. Also, the iteration count is not made<br />

available.<br />

When the OSLSE solution routines detect infeasibility or unboundedness, no solution information is<br />

passed back (i.e. levels and marginals). This is reported properly to GAMS via the model status<br />

indicator.<br />

It is not possible to solve a "trivial" stochastic program, i.e. one with only one stage. The solution<br />

subroutines do not allow this. This should be allowed in a future release of OSLSE. Currently, the link<br />

rejects single-stage models with an explanatory message.<br />

The number of rows in any stage must be positive. The link checks this and rejects offending models<br />

with an explanatory message.<br />

The nonzero structure of each node in a stage must be identical. This is a requirement imposed by<br />

OSLSE. It would be possible to construct, in the link, a nonzero structure consisting of the union of all<br />

nodes in a stage, but this has not been done in this version. A workaround for this is to use the EPS value<br />

in the GAMS model to include zeros into nodes where they did not exist (see the example models for<br />

more detail).<br />

GAMS options<br />

There are a number of options one can set in a GAMS model that influence how a solver runs (e.g.<br />

iteration limit, resource limit). The relevant GAMS options for the OSLSE link are:<br />

option iterlim = N;<br />

is used to set the iteration limit. It is not yet documented whether this is a limit on<br />

Benders iterations or on total pivots.<br />

option reslim = X;<br />

is used to set a resource limit (in seconds) for the solver. Currently not<br />

implemented.<br />

Summary of GAMS/OSLSE options<br />

The GAMS/OSLSE options are summarized here. Detailed descriptions of each option can be found later<br />

in this document.<br />




benders_major_iterations limits # of master problems solved<br />

cut_stage earliest stage not in master<br />

ekk_crsh_method OSL crash method<br />

ekk_nwmt_type OSL matrix compression<br />

ekk_prsl perform presolve on submodels<br />

ekk_prsl_type controls reductions attempted in presolve<br />

ekk_scale perform scaling on submodels<br />

ekk_sslv_algorithm simplex variant for initial submodel solution<br />

ekk_sslv_init simplex option for initial submodel solution<br />

large_master_bounds avoid unboundedness, large swings in master solution<br />

maximum_submodels limits amount of decomposition<br />

nested_decomp use nested Benders<br />

optimality_gap termination criterion<br />

print_tree_all print nodes, rows, and columns<br />

print_tree_cols print list of columns in model, grouped by node<br />

print_tree_nodes print list of nodes in model<br />

print_tree_rows print list of rows in model, grouped by node<br />

submodel_row_minimum minimum # of rows in any submodel<br />

write_spl_file name of SPL file to write<br />

The GAMS/OSLSE log file<br />

This section will be completed once the log file format is finalized.<br />

Detailed description of GAMS/OSLSE options<br />

The options below are valid options for GAMS/OSLSE. To use them, enter the option name and its<br />

value, one per line, in a file called oslse.opt and indicate you would like to use this options file by<br />

including the line modelname.optfile = 1; (where modelname is the name used in your model statement) in your GAMS input file or using the optfile=1<br />

argument on the command line. It is only necessary to<br />

enter the first three letters of each token of an option name, so the option benders_major_iterations can<br />

be entered as ben_maj_iter. Comment lines (those beginning with a #) and blank lines are ignored.<br />
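For example, an oslse.opt file exercising the comment and abbreviation rules described above might contain the following (the option values shown are purely illustrative, not recommended settings):<br />

```
# example oslse.opt -- values are illustrative only
ben_maj_iter     50
optimality_gap   1e-6
print_tree_nodes yes
```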




benders_major_iterations (integer)<br />

At most this many Benders iterations will be done. Another stopping criterion for Benders decomposition is the<br />

optimality gap; usually, the gap is reduced to this tolerance before the iteration limit is reached.<br />

(default = 30)<br />

cut_stage (integer)<br />

All nodes prior to this time stage are placed in the master problem, while the nodes in this time stage or<br />

later are placed into the subproblems. The root node is assumed to be at time stage one. Note that this<br />

option keeps nodes in the master, but cannot necessarily keep them out. For example, the master<br />

problem will always include one entire scenario, and other scenarios can be added to the master as a<br />

result of other option settings, e.g. submodel_row_min or maximum_submodels.<br />

(default = 2)<br />

ekk_crsh_method (integer)<br />

Sets type of OSL crash to perform before first solution of submodels.<br />

(default = 3)<br />

0 No crash is performed.<br />

1 Dual feasibility may not be maintained.<br />

2 Dual feasibility is maintained, if possible, by not pivoting in variables that are in the objective<br />

function.<br />

3 Dual feasibility may not be maintained, but the sum of the infeasibilities will never increase.<br />

4 Dual feasibility is maintained, if possible, by not pivoting in variables that are in the objective<br />

function. In addition, the sum of the infeasibilities will never increase.<br />

ekk_nwmt_type (integer)<br />

Controls whether a new, compressed version of the submodel matrices will be created prior to calling the<br />

LP submodel solver.<br />

(default = 1)<br />

0 Do not compress.<br />

1 A copy of the matrix is stored by indices.<br />

2 A copy of the matrix is stored by columns.<br />

3 A copy of the matrix is stored in vector block format.<br />

4 A copy of the matrix is stored in vector-block-of-1's format (if the matrix is composed of all 0's and<br />




1's).<br />

5 A copy of the matrix is stored in network format (multiple blocks are compressed into a single<br />

network).<br />

ekk_prsl (yes/no)<br />

Perform presolve for each submodel of the decomposed problem. Only significant when using the<br />

non-nested decomposition solver (i.e. when nested_decomp is set to no, its default value).<br />

(default = no)<br />

ekk_prsl_type (integer)<br />

Controls type of presolve reduction to perform.<br />

(default = 3)<br />

0 Redundant rows are eliminated, positive variables summing to zero are fixed.<br />

1 Type 0 reductions, and the doubleton rows are eliminated.<br />

2 Type 0 reductions, and variable substitutions performed.<br />

3 Type 0, 1, & 2 reductions are done.<br />

ekk_scale (yes/no)<br />

Perform scaling for each submodel of the decomposed problem.<br />

(default = no)<br />

ekk_sslv_algorithm (integer)<br />

Specifies variant of simplex algorithm to be used for first solution of submodels.<br />

(default = 1)<br />

0 Variant chosen automatically.<br />

1 Primal Simplex.<br />

2 Dual Simplex.<br />

ekk_sslv_init (integer)<br />

Simplex option.<br />




(default = 2)<br />

0 Use an existing basis (not recommended).<br />

1 Devex pricing is used.<br />

2 Random pricing is used for the first "fast" iterations, then a switch is made to Devex pricing.<br />

3 Use previous values to reconstruct a basis.<br />

Note: The dual simplex method can only be used with options 0 or 1.<br />

large_master_bounds (integer)<br />

If set to 1, large bounds are placed on the variables in the master problem. This can help to keep the<br />

solutions of the master problem from varying too wildly from one iteration to the next.<br />

(default = 0)<br />

maximum_submodels (integer)<br />

The problem will be decomposed into at most N submodels. Once this limit is reached, the<br />

decomposition code assigns subproblems or nodes to existing submodels. Note that for the (default)<br />

nested decomposition solver, the settings of submodel_row_minimum and cut_stage take precedence.<br />

(default = 100)<br />

nested_decomp (yes/no)<br />

If yes, use the nested Benders decomposition solution subroutine. Otherwise, use the straight Benders<br />

routine.<br />

(default = yes)<br />

optimality_gap (real)<br />

The decomposition solvers will terminate when the optimality gap is less than this tolerance.<br />

(default = 1e-8)<br />

print_tree_* (yes/no)<br />

Prints information about the event tree on the listing file. The node listing indicates what nodes are in the<br />




tree, their parents and children, and what GAMS label is used to index a given node. The row and<br />

column listings, printed separately from the node listing, print a list of the rows and columns of the<br />

GAMS model, grouped by node.<br />

(default = no)<br />

submodel_row_minimum (integer)<br />

If the argument N > 0, nodes are added to a submodel until the number of rows in the submodel exceeds<br />

N. This option applies only to the nested decomposition solver. If N > 0, it overrides the setting of<br />

cut_stage and maximum_submodels.<br />

(default = 0)<br />

write_spl_file (string)<br />

Once the problem has been passed successfully to the OSLSE routines, the problem will be written to a<br />

file of the name given as the argument to this option keyword. The filename is not given an SPL<br />

extension, but is taken as-is. The SPL file format is a format internal to OSLSE used for compact storage<br />

of a particular model.<br />

(SPL file not written by default)<br />



GAMS/PATH User Guide<br />

Version 4.3<br />

Michael C. Ferris<br />

Todd S. Munson<br />

March 20, 2000


Contents<br />

1 Complementarity<br />
1.1 Transportation Problem<br />
1.1.1 GAMS Code<br />
1.1.2 Extension: Model Generalization<br />
1.1.3 Nonlinear Complementarity Problem<br />
1.2 Walrasian Equilibrium<br />
1.2.1 GAMS Code<br />
1.2.2 Extension: Intermediate Variables<br />
1.2.3 Mixed Complementarity Problem<br />
1.3 Solution<br />
1.3.1 Listing File<br />
1.3.2 Redefined Equations<br />
1.4 Pitfalls<br />
2 PATH<br />
2.1 Log File<br />
2.1.1 Diagnostic Information<br />
2.1.2 Crash Log<br />
2.1.3 Major Iteration Log<br />
2.1.4 Minor Iteration Log<br />
2.1.5 Restart Log<br />
2.1.6 Solution Log<br />
2.2 Status File<br />
2.3 User Interrupts<br />
2.4 Options<br />
2.5 PATHC<br />
2.5.1 Preprocessing<br />
2.5.2 Constrained Nonlinear Systems<br />
3 Advanced Topics<br />
3.1 Formal Definition of MCP<br />
3.2 Algorithmic Features<br />
3.2.1 Merit Functions<br />
3.2.2 Crashing Method<br />
3.2.3 Nonmonotone Searches<br />
3.2.4 Linear Complementarity Problems<br />
3.2.5 Other Features<br />
3.3 Difficult Models<br />
3.3.1 Ill-Defined Models<br />
3.3.2 Poorly Scaled Models<br />
3.3.3 Singular Models<br />
A Case Study: Von Thunen Land Model<br />
A.1 Classical Model<br />
A.2 Intervention Pricing<br />
A.3 Nested Production and Maintenance<br />


Chapter 1<br />

Complementarity<br />

A fundamental problem of mathematics is to find a solution to a square system<br />

of nonlinear equations. Two generalizations of nonlinear equations have<br />

been developed, a constrained nonlinear system which incorporates bounds<br />

on the variables, and the complementarity problem. This document is primarily<br />

concerned with the complementarity problem.<br />

The complementarity problem adds a combinatorial twist to the classic<br />

square system of nonlinear equations, thus enabling a broader range of situations<br />

to be modeled. In its simplest form, the combinatorial problem is to<br />

choose from 2n inequalities a subset of n that will be satisfied as equations.<br />

These problems arise in a variety of disciplines including engineering and<br />

economics [20] where we might want to compute Wardropian and Walrasian<br />

equilibria, and optimization where we can model the first order optimality<br />

conditions for nonlinear programs [29, 30]. Other examples, such as bimatrix<br />

games [31] and options pricing [27], abound.<br />

Our development of complementarity is done by example. We begin by<br />

looking at the optimality conditions for a transportation problem and some<br />

extensions leading to the nonlinear complementarity problem. We then discuss<br />

a Walrasian equilibrium model and use it to motivate the more general<br />

mixed complementarity problem. We conclude this chapter with information<br />

on solving the models using the PATH solver and interpreting the results.<br />



1.1 Transportation Problem<br />

The transportation model is a linear program where demand for a single<br />

commodity must be satisfied by suppliers at minimal transportation cost.<br />

The underlying transportation network is given as a set A of arcs, where<br />

(i, j) ∈ A means that there is a route from supplier i to demand center j.<br />

The problem variables are the quantities x_{i,j} shipped over each arc (i, j) ∈ A.<br />

The linear program can be written mathematically as<br />

min_{x≥0}  Σ_{(i,j)∈A} c_{i,j} x_{i,j}<br />

subject to  Σ_{j:(i,j)∈A} x_{i,j} ≤ s_i,  ∀i<br />

Σ_{i:(i,j)∈A} x_{i,j} ≥ d_j,  ∀j.     (1.1)<br />

where c_{i,j} is the unit shipment cost on the arc (i, j), s_i is the available supply<br />

at i, and d_j is the demand at j.<br />

The derivation of the optimality conditions for this linear program begins<br />

by associating with each constraint a multiplier, alternatively termed a dual<br />

variable or shadow price. These multipliers represent the marginal price on<br />

changes to the corresponding constraint. We label the prices on the supply<br />

constraint p s and those on the demand constraint p d . Intuitively, for each<br />

supply node i<br />

0 ≤ p s i,si ≥ <br />

j:(i,j)∈A<br />

Considerthecasewhensi > <br />

j:(i,j)∈A xi,j, that is there is excess supply at<br />

i. Then, in a competitive marketplace, no rational person is willing to pay<br />

xi,j.<br />

for more supply at node i; it is already over-supplied. Therefore, p s i =0.<br />

Alternatively, when si = <br />

j:(i,j)∈A xi,j, thatisnodeiclears, we might be<br />

willing to pay for additional supply of the good. Therefore, ps i ≥ 0. We write<br />

these two conditions succinctly as:<br />

0 ≤ ps i ⊥ si ≥ <br />

j:(i,j)∈A xi,j, ∀i<br />

where the ⊥ notation is understood to mean that at least one of the adjacent<br />

inequalities must be satisfied as an equality. For example, either 0 = p s i ,the<br />

first case, or si = <br />

j:(i,j)∈A xi,j, the second case.<br />

Similarly, at each node j, the demand must be satisfied in any feasible<br />

solution, that is<br />

Σ_{i:(i,j)∈A} x_{i,j} ≥ d_j.<br />



Furthermore, the model assumes all prices are nonnegative, 0 ≤ p^d_j. If there<br />

is too much of the commodity supplied, Σ_{i:(i,j)∈A} x_{i,j} > d_j, then, in a competitive<br />

marketplace, the price p^d_j will be driven down to 0. Summing these<br />

relationships gives the following complementarity condition:<br />

0 ≤ p^d_j ⊥ Σ_{i:(i,j)∈A} x_{i,j} ≥ d_j,  ∀j.<br />

The supply price at i plus the transportation cost c_{i,j} from i to j must<br />

be at least the market price at j. That is, p^s_i + c_{i,j} ≥ p^d_j. Otherwise, in a<br />

competitive marketplace, another producer will replicate supplier i, increasing<br />

the supply of the good in question, which drives down the market price. This<br />

chain would repeat until the inequality is satisfied. Furthermore, if the cost of<br />

delivery strictly exceeds the market price, that is, p^s_i + c_{i,j} > p^d_j, then nothing<br />

is shipped from i to j because doing so would incur a loss, and x_{i,j} = 0.<br />

Therefore,<br />

0 ≤ x_{i,j} ⊥ p^s_i + c_{i,j} ≥ p^d_j,  ∀(i, j) ∈ A.<br />

We combine the three conditions into a single problem,<br />

0 ≤ p^s_i ⊥ s_i ≥ Σ_{j:(i,j)∈A} x_{i,j},  ∀i<br />

0 ≤ p^d_j ⊥ Σ_{i:(i,j)∈A} x_{i,j} ≥ d_j,  ∀j<br />

0 ≤ x_{i,j} ⊥ p^s_i + c_{i,j} ≥ p^d_j,  ∀(i, j) ∈ A.     (1.2)<br />
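To make the ⊥ conditions concrete, consider a network with a single supplier, a single market, and one arc, with s = 10, d = 6, c = 2 (hypothetical values chosen for illustration). The Python sketch below checks that shipping x = 6 at prices p^s = 0 and p^d = 2 satisfies all three pairs in (1.2):<br />

```python
def comp(z, w, tol=1e-9):
    """0 <= z ⊥ w >= 0 : both nonnegative and at least one (essentially) zero."""
    return z >= -tol and w >= -tol and min(z, w) <= tol

s, d, c = 10.0, 6.0, 2.0          # supply, demand, unit cost (one arc)
x, ps, pd = 6.0, 0.0, 2.0         # candidate solution

ok = (comp(ps, s - x)             # 0 <= p^s ⊥ s - x >= 0 : excess supply, so p^s = 0
      and comp(pd, x - d)         # 0 <= p^d ⊥ x - d >= 0 : demand binds, p^d may be > 0
      and comp(x, ps + c - pd))   # 0 <= x   ⊥ p^s + c - p^d >= 0 : arc used, cost = price
print(ok)                          # True
```

Here the first pair holds because p^s = 0 while supply is slack, and the other two hold because their inequalities bind exactly.<br />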

This model defines a linear complementarity problem that is easily recognized<br />

as the complementary slackness conditions [6] of the linear program<br />

(1.1). For linear programs the complementary slackness conditions are both<br />

necessary and sufficient for x to be an optimal solution of the problem (1.1).<br />

Furthermore, the conditions (1.2) are also the necessary and sufficient optimality<br />

conditions for a related problem in the variables (p^s, p^d),<br />

max_{p^s,p^d ≥ 0}  Σ_j d_j p^d_j − Σ_i s_i p^s_i<br />

subject to  c_{i,j} ≥ p^d_j − p^s_i,  ∀(i, j) ∈ A,<br />

termed the dual linear program (hence the nomenclature “dual variables”).<br />

Looking at (1.2) a bit more closely we can gain further insight into complementarity<br />

problems. A solution of (1.2) tells us the arcs used to transport<br />

goods. A priori we do not need to specify which arcs to use, the solution itself<br />

indicates them. This property represents the key contribution of a complementarity<br />

problem over a system of equations. If we know what arcs to send<br />

flow down, we can just solve a simple system of linear equations. However,<br />



sets i canning plants,<br />

j markets ;<br />

parameter<br />

s(i) capacity of plant i in cases,<br />

d(j) demand at market j in cases,<br />

c(i,j) transport cost in thousands of dollars per case ;<br />

$include transmcp.dat<br />

positive variables<br />

x(i,j) shipment quantities in cases<br />

p_demand(j) price at market j<br />

p_supply(i) price at plant i;<br />

equations<br />

supply(i) observe supply limit at plant i<br />

demand(j) satisfy demand at market j<br />

rational(i,j);<br />

supply(i) .. s(i) =g= sum(j, x(i,j)) ;<br />

demand(j) .. sum(i, x(i,j)) =g= d(j) ;<br />

rational(i,j) .. p_supply(i) + c(i,j) =g= p_demand(j) ;<br />

model transport / rational.x, demand.p_demand, supply.p_supply /;<br />

solve transport using mcp;<br />

Figure 1.1: A simple MCP model in GAMS, transmcp.gms<br />

the key to the modeling power of complementarity is that it chooses which<br />

of the inequalities in (1.2) to satisfy as equations. In economics we can use<br />

this property to generate a model with different regimes and let the solution<br />

determine which ones are active. A regime shift could, for example, be a<br />

backstop technology like windmills that becomes profitable if a CO2 tax is<br />

increased.<br />



1.1.1 GAMS Code<br />

The GAMS code for the complementarity version of the transportation problem<br />

is given in Figure 1.1; the actual data for the model is assumed to be<br />

given in the file transmcp.dat. Note that the model written corresponds<br />

very closely to (1.2). In GAMS, the ⊥ sign is replaced in the model statement<br />

with a “.”. It is precisely at this point that the pairing of variables<br />

and equations shown in (1.2) occurs in the GAMS code. For example, the<br />

function defined by rational is complementary to the variable x. To inform<br />

a solver of the bounds, the standard GAMS statements on the variables can<br />

be used, namely (for a declared variable z(i)):<br />

z.lo(i) = 0;<br />

or alternatively<br />

positive variable z;<br />

Further information on the GAMS syntax can be found in [35]. Note that<br />

GAMS requires the modeler to write F(z) =g= 0 whenever the complementary<br />

variable is lower bounded, and does not allow the alternative form 0 =l=<br />

F(z).<br />

1.1.2 Extension: Model Generalization<br />

While many interior point methods for linear programming exploit this complementarity<br />

framework (so-called primal-dual methods [37]), the real power<br />

of this modeling format is the new problem instances it enables a modeler<br />

to create. We now show some examples of how to extend the simple model<br />

(1.2) to investigate other issues and facets of the problem at hand.<br />

Demand in the model of Figure 1.1 is independent of the prices p. Since<br />

the prices p are variables in the complementarity problem (1.2), we can easily<br />

replace the constant demand d with a function d(p) in the complementarity<br />

setting. Clearly, any algebraic function of p that can be expressed in GAMS<br />

can now be added to the model given in Figure 1.1. For example, a linear<br />

demand function could be expressed using<br />

Σ_{i:(i,j)∈A} x_{i,j} ≥ d_j(1 − p^d_j),  ∀j.<br />

Note that the demand is rather strange if p^d_j exceeds 1. Other more reasonable<br />

examples for d(p) are easily derived from Cobb-Douglas or CES utilities. For<br />



those examples, the resulting complementarity problem becomes nonlinear in<br />

the variables p. Details of complementarity for more general transportation<br />

models can be found in [13, 16].<br />

Another feature that can be added to this model are tariffs or taxes.<br />

In the case where a tax is applied at the supply point, the third general<br />

inequality in (1.2) is replaced by<br />

p^s_i(1 + t_i) + c_{i,j} ≥ p^d_j,  ∀(i, j) ∈ A.<br />

The taxes can be made endogenous to the model; details are found in [35].<br />

The key point is that with either of the above modifications, the complementarity<br />

problem is not just the optimality conditions of a linear program.<br />

In many cases, there is no optimization problem corresponding to the complementarity<br />

conditions.<br />

1.1.3 Nonlinear Complementarity Problem<br />

We now abstract from the particular example to describe more carefully the<br />

complementarity problem in its mathematical form. All the above examples<br />

can be cast as nonlinear complementarity problems (NCPs) defined as<br />

follows:<br />

(NCP) Given a function F : R^n → R^n, find z ∈ R^n such that<br />

0 ≤ z ⊥ F(z) ≥ 0.<br />

Recall that the ⊥ sign signifies that one of the inequalities is satisfied as<br />

an equality, so that componentwise, z_i F_i(z) = 0. We frequently refer to<br />

this property as z_i being “complementary” to F_i. A special case of the NCP<br />

that has received much attention is when F is a linear function, the linear<br />

complementarity problem [8].<br />
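One simple numerical check for an NCP candidate is the componentwise residual min(z_i, F_i(z)), which is zero in every component exactly when 0 ≤ z ⊥ F(z) ≥ 0 holds. The sketch below illustrates this characterization only; it is not the merit function PATH itself uses:<br />

```python
def ncp_residual(z, F):
    """Max over components of |min(z_i, F_i(z))|; zero iff z solves the NCP."""
    Fz = F(z)
    return max(abs(min(zi, fi)) for zi, fi in zip(z, Fz))

# Small linear example: F(z) = (z_1 - 1, z_2 + 1); z = (1, 0) is a solution,
# since F(z) = (0, 1), z >= 0, F(z) >= 0, and z_i F_i(z) = 0 componentwise.
F = lambda z: [z[0] - 1.0, z[1] + 1.0]
print(ncp_residual([1.0, 0.0], F))   # 0.0
```

The residual is positive for any z that violates nonnegativity of z or F(z), or that leaves some pair with both members strictly positive.<br />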

1.2 Walrasian Equilibrium<br />

A Walrasian equilibrium can also be formulated as a complementarity problem<br />

(see [33]). In this case, we want to find a price p ∈ R^m and an activity<br />

level y ∈ R^n such that<br />

0 ≤ y ⊥ L(p) := −A^T p ≥ 0<br />

0 ≤ p ⊥ S(p, y) := b + Ay − d(p) ≥ 0     (1.3)<br />


$include walras.dat<br />

positive variables p(i), y(j);<br />

equations S(i), L(j);<br />

S(i).. b(i) + sum(j, A(i,j)*y(j)) - c(i)*sum(k, g(k)*p(k)) / p(i)<br />

=g= 0;<br />

L(j).. -sum(i, p(i)*A(i,j)) =g= 0;<br />

model walras / S.p, L.y /;<br />

solve walras using mcp;<br />

Figure 1.2: Walrasian equilibrium as an NCP, walras1.gms<br />

where S(p, y) represents the excess supply function and L(p) represents the<br />

loss function. Complementarity allows us to choose the activities yj to run<br />

(i.e. only those that do not make a loss). The second set of inequalities states<br />

that the price of a commodity can only be positive if there is no excess supply.<br />

These conditions indeed correspond to the standard exposition of Walras’<br />

law which states that supply equals demand if we assume all prices p will be<br />

positive at a solution. Formulations of equilibria as systems of equations do<br />

not allow the model to choose the activities present, but typically make an<br />

a priori assumption on this matter.<br />

1.2.1 GAMS Code<br />

A GAMS implementation of (1.3) is given in Figure 1.2. Many large scale<br />

models of this nature have been developed. An interested modeler could, for<br />

example, see how a large scale complementarity problem was used to quantify<br />

the effects of the Uruguay round of talks [26].<br />
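To make the functions in (1.3) concrete, the following Python sketch (illustrative only; the data stand in for walras.dat) evaluates S and L. With the income weights g taken equal to the endowment b and the Cobb-Douglas demand shares summing to one, Walras' law holds identically: the value of excess supply plus the value of the losses vanishes at any positive prices.

```python
# Hypothetical data standing in for walras.dat (2 commodities, 2 activities).
b = [3.0, 1.0]                  # endowments
A = [[1.0, -1.0],               # activity analysis matrix, one row per commodity
     [-0.5, 1.0]]
c = [0.6, 0.4]                  # Cobb-Douglas demand shares, summing to one
g = b                           # income weights equal to endowments

def demand(p):
    income = sum(gk * pk for gk, pk in zip(g, p))
    return [ci * income / pi for ci, pi in zip(c, p)]

def excess_supply(p, y):        # S(p, y) = b + A y - d(p)
    Ay = [sum(row[j] * y[j] for j in range(len(y))) for row in A]
    return [bi + si - di for bi, si, di in zip(b, Ay, demand(p))]

def loss(p):                    # L(p) = -A' p
    return [-sum(A[i][j] * p[i] for i in range(len(p)))
            for j in range(len(A[0]))]

p, y = [1.0, 2.0], [0.5, 0.25]
walras = (sum(pi * si for pi, si in zip(p, excess_supply(p, y)))
          + sum(yj * lj for yj, lj in zip(y, loss(p))))
print(abs(walras) < 1e-12)      # True: p'S(p,y) + y'L(p) = 0
```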

1.2.2 Extension: Intermediate Variables<br />

In many modeling situations, a key tool for clarification is the use of intermediate<br />

variables. As an example, the modeler may wish to define a<br />

variable corresponding to the demand function d(p) in the Walrasian equilibrium<br />

(1.3). The syntax for carrying this out is shown in Figure 1.3 where<br />

we use the variables d to store the demand function referred to in the excess<br />



$include walras.dat<br />

positive variables p(i), y(j);<br />

variables d(i);<br />

equations S(i), L(j), demand(i);<br />

demand(i)..<br />

d(i) =e= c(i)*sum(k, g(k)*p(k)) / p(i) ;<br />

S(i).. b(i) + sum(j, A(i,j)*y(j)) - d(i) =g= 0 ;<br />

L(j).. -sum(i, p(i)*A(i,j)) =g= 0 ;<br />

model walras / demand.d, S.p, L.y /;<br />

solve walras using mcp;<br />

Figure 1.3: Walrasian equilibrium as an MCP, walras2.gms<br />

supply equation. The model walras now contains a mixture of equations<br />

and complementarity constraints. Since constructs of this type are prevalent<br />

in many practical models, the GAMS syntax allows such formulations.<br />

Note that positive variables are paired with inequalities, while free variables<br />

are paired with equations. A crucial point misunderstood by many<br />

experienced modelers is that the bounds on the variable determine the relationships<br />

satisfied by the function F . Thus in Figure 1.3, d is a free variable<br />

and therefore its paired equation demand is an equality. Similarly, since p is<br />

nonnegative, its paired relationship S is a (greater-than) inequality.<br />

A simplification is allowed to the model statement in Figure 1.3. In many<br />

cases, it is not necessary to match free variables explicitly to equations; we<br />

only require that there are the same number of free variables as equations.<br />

Thus, in the example of Figure 1.3, the model statement could be replaced<br />

by<br />

model walras / demand, S.p, L.y /;<br />

This extension allows existing GAMS models consisting of a square system<br />

of nonlinear equations to be easily recast as a complementarity problem:<br />

the model statement is unchanged. GAMS generates a list of all variables<br />

appearing in the equations found in the model statement, performs explicitly<br />

defined pairings and then checks that the number of remaining equations<br />

equals the number of remaining free variables. However, if an explicit match<br />



is given, the PATH solver can frequently exploit the information for better<br />

solution. Note that all variables that are not free and all inequalities must<br />

be explicitly matched.<br />

1.2.3 Mixed Complementarity Problem<br />

A mixed complementarity problem (MCP) is specified by three pieces of data,<br />

namely the lower bounds ℓ, the upper bounds u and the function F.<br />

(MCP) Given lower bounds ℓ ∈ {R ∪ {−∞}}ⁿ, upper bounds u ∈ {R ∪<br />

{∞}}ⁿ and a function F : Rⁿ → Rⁿ, find z ∈ Rⁿ such that precisely<br />

one of the following holds for each i ∈ {1, . . . , n}:<br />

Fᵢ(z) = 0 and ℓᵢ ≤ zᵢ ≤ uᵢ<br />

Fᵢ(z) > 0 and zᵢ = ℓᵢ<br />

Fᵢ(z) < 0 and zᵢ = uᵢ.<br />

These relationships define a general MCP (sometimes termed a rectangular<br />

variational inequality [25]). We will write these conditions compactly as<br />

ℓ ≤ x ≤ u ⊥ F(x).<br />
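One convenient computational restatement (a sketch, not the manual's notation) uses the fact that the componentwise median of (z − ℓ, F(z), z − u) is zero precisely when one of the three cases above holds; in Python:

```python
def mcp_residual(z, Fz, lo, hi):
    """Componentwise 'mid' residual of the MCP: the median of
    (z - lo, F(z), z - hi) is zero exactly when one of the three
    cases in the MCP definition holds (assuming lo < hi)."""
    return [sorted([zi - li, fi, zi - ui])[1]
            for zi, fi, li, ui in zip(z, Fz, lo, hi)]

inf = float("inf")
# z1 interior with F1 = 0; z2 at its lower bound with F2 > 0.
print(mcp_residual([0.5, 0.0], [0.0, 3.0], [0.0, 0.0], [1.0, inf]))
# [0.0, 0.0]: both components satisfy the MCP conditions
print(mcp_residual([0.0, 0.0], [-2.0, 3.0], [0.0, 0.0], [1.0, inf]))
# [-1.0, 0.0]: the first component is at its lower bound but F1 < 0
```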

Note that the nonlinear complementarity problem of Section 1.1.3 is a special<br />

case of the MCP. For example, to formulate an NCP in the GAMS/MCP<br />

format we set<br />

z.lo(I) = 0;<br />

or declare<br />

positive variable z;<br />

Another special case is a square system of nonlinear equations<br />

(NE) Given a function F : Rⁿ → Rⁿ, find z ∈ Rⁿ such that<br />

F(z) = 0.<br />

In order to formulate this in the GAMS/MCP format we just declare<br />

free variable z;<br />



In both the above cases, we must not modify the lower and upper bounds on<br />

the variables later (unless we wish to drastically change the problem under<br />

consideration).<br />

An advantage of the extended formulation described above is the pairing<br />

between “fixed” variables (ones with equal upper and lower bounds) and a<br />

component of F. If a variable zᵢ is fixed, then Fᵢ(z) is unrestricted since<br />

precisely one of the three conditions in the MCP definition automatically<br />

holds when zᵢ = ℓᵢ = uᵢ. Thus if a variable is fixed in a GAMS model,<br />

the paired equation is completely dropped from the model. This convenient<br />

modeling trick can be used to remove particular constraints from a model at<br />

generation time. As an example, in economics, fixing a level of production<br />

will remove the zero-profit condition for that activity.<br />

Simple bounds on the variables are a convenient modeling tool that translates<br />

into efficient mathematical programming tools. For example, specialized<br />

codes exist for the bound constrained optimization problem<br />

min f(x) subject to ℓ ≤ x ≤ u.<br />

The first order optimality conditions for this problem class are precisely<br />

MCP(∇f(x), [ℓ, u]). We can easily see this condition in a one-dimensional<br />

setting. If we are at an unconstrained stationary point, then ∇f(x) = 0.<br />

Otherwise, if x is at its lower bound, then the function must be increasing as<br />

x increases, so ∇f(x) ≥ 0. Conversely, if x is at its upper bound, then the<br />

function must be increasing as x decreases, so that ∇f(x) ≤ 0. The MCP<br />

allows such problems to be easily and efficiently processed.<br />
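The one-dimensional reasoning above can be written out as a small Python check (illustrative only):

```python
def satisfies_foc(x, grad, lo, hi, tol=1e-8):
    """First-order conditions for min f over [lo, hi] in one dimension,
    given the point x and grad = f'(x)."""
    if abs(x - lo) <= tol:
        return grad >= -tol      # at the lower bound: f must increase to the right
    if abs(x - hi) <= tol:
        return grad <= tol       # at the upper bound: f must increase to the left
    return abs(grad) <= tol      # interior point: stationarity

print(satisfies_foc(1.0, 2.0, 1.0, 3.0))   # True:  min x^2 on [1,3] at x = 1
print(satisfies_foc(2.0, -1.0, 0.0, 2.0))  # True:  min -x  on [0,2] at x = 2
print(satisfies_foc(1.0, -2.0, 0.0, 2.0))  # False: interior with nonzero gradient
```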

Upper bounds can be used to extend the utility of existing models. For<br />

example, in Figure 1.3 it may be necessary to have an upper bound on the<br />

activity level y. In this case, we simply add an upper bound to y in the<br />

model statement, and replace the loss equation with the following definition:<br />

y.up(j) = 10;<br />

L(j).. -sum(i, p(i)*A(i,j)) =e= 0 ;<br />

Here, for bounded variables, we do not know beforehand if the constraint will<br />

be satisfied as an equation, less than inequality or greater than inequality,<br />

since this determination depends on the values of the solution variables. We<br />

adopt the convention that all bounded variables are paired to equations.<br />

Further details on this point are given in Section 1.3.1. However, let us<br />

interpret the relationships that the above change generates. If yⱼ = 0, the<br />



loss function can be positive since we are not producing in the jth sector.<br />

If yⱼ is strictly between its bounds, then the loss function must be zero by<br />

complementarity; this is the competitive assumption. However, if yⱼ is at<br />

its upper bound, then the loss function can be negative. Of course, if the<br />

market does not allow free entry, some firms may operate at a profit (negative<br />

loss). For more examples of problems, the interested reader is referred to<br />

[10, 19, 20].<br />

1.3 Solution<br />

We will assume that a file named transmcp.gms has been created using<br />

the GAMS syntax which defines an MCP model transport as developed in<br />

Section 1.1. The modeler has a choice of the complementarity solver to use.<br />

We are going to further assume that the modeler wants to use PATH.<br />

There are two ways to ensure that PATH is used as opposed to any other<br />

GAMS/MCP solver. These are as follows:<br />

1. Add the following line to the transmcp.gms file prior to the solve<br />

statement<br />

option mcp = path;<br />

PATH will then be used instead of the default solver provided.<br />

2. Rerun the gamsinst program from the GAMS system directory and<br />

choose PATH as the default solver for MCP.<br />

To solve the problem, the modeler executes the command:<br />

gams transmcp<br />

where transmcp can be replaced by any filename containing a GAMS model.<br />

Many other command line options for GAMS exist; the reader is referred to<br />

[4] for further details.<br />

At this stage, control is handed over to the solver which creates a log<br />

providing information on what the solver is doing as time elapses. See Chapter<br />

2 for details about the log file. After the solver terminates, a listing file<br />

is generated containing the solution to the problem. We now describe the<br />

output in the listing file specifically related to complementarity problems.<br />



Code String Meaning<br />

1 Normal completion Solver returned to GAMS without an error<br />

2 Iteration interrupt Solver used too many iterations<br />

3 Resource interrupt Solver took too much time<br />

4 Terminated by solver Solver encountered difficulty and was unable to continue<br />

8 User interrupt The user interrupted the solution process<br />

Table 1.1: Solver Status Codes<br />

Code String Meaning<br />

1 Optimal Solver found a solution of the problem<br />

6 Intermediate infeasible Solver failed to solve the problem<br />

Table 1.2: Model Status Codes<br />

1.3.1 Listing File<br />

The listing file is the standard GAMS mechanism for reporting model results.<br />

This file contains information regarding the compilation process, the form of<br />

the generated equations in the model, and a report from the solver regarding<br />

the solution process.<br />

We now detail the last part of this output, an example of which is given<br />

in Figure 1.4. We use “...” to indicate where we have omitted continuing<br />

similar output.<br />

After a summary line indicating the model name and type and the solver<br />

name, the listing file shows a solver status and a model status. Table 1.1<br />

and Table 1.2 display the relevant codes that are returned under different<br />

circumstances. A modeler can access these codes within the transmcp.gms<br />

file using transport.solstat and transport.modelstat respectively.<br />

After this, a listing of the time and iterations used is given, along with<br />

a count on the number of evaluation errors encountered. If the number of<br />

evaluation errors is greater than zero, further information can typically be<br />

found later in the listing file, prefaced by “****”. Information provided by<br />

the solver is then displayed.<br />

Next comes the solution listing, starting with each of the equations in the<br />

model. For each equation passed to the solver, four columns are reported,<br />

namely the lower bound, level, upper bound and marginal. GAMS moves all<br />

parts of a constraint involving variables to the left hand side, and accumulates<br />



S O L V E S U M M A R Y<br />

MODEL TRANSPORT<br />

TYPE MCP<br />

SOLVER PATH FROM LINE 45<br />

**** SOLVER STATUS 1 NORMAL COMPLETION<br />

**** MODEL STATUS 1 OPTIMAL<br />

RESOURCE USAGE, LIMIT 0.057 1000.000<br />

ITERATION COUNT, LIMIT 31 10000<br />

EVALUATION ERRORS 0 0<br />

Work space allocated -- 0.06 Mb<br />

---- EQU RATIONAL<br />

LOWER LEVEL UPPER MARGINAL<br />

seattle .new-york -0.225 -0.225 +INF 50.000<br />

seattle .chicago -0.153 -0.153 +INF 300.000<br />

seattle .topeka -0.162 -0.126 +INF .<br />

...<br />

---- VAR X shipment quantities in cases<br />

LOWER LEVEL UPPER MARGINAL<br />

seattle .new-york . 50.000 +INF .<br />

seattle .chicago . 300.000 +INF .<br />

...<br />

**** REPORT SUMMARY : 0 NONOPT<br />

0 INFEASIBLE<br />

0 UNBOUNDED<br />

0 REDEFINED<br />

0 ERRORS<br />

Figure 1.4: Listing File for solving transmcp.gms<br />



the constants on the right hand side. The lower and upper bounds correspond<br />

to the constants that GAMS generates. For equations, these should be equal,<br />

whereas for inequalities one of them should be infinite. The level value of the<br />

equation (an evaluation of the left hand side of the constraint at the current<br />

point) should be between these bounds, otherwise the solution is infeasible<br />

and the equation is marked as follows:<br />

seattle .chicago -0.153 -2.000 +INF 300.000 INFES<br />

The marginal column in the equation contains the value of the variable<br />

that was matched with this equation.<br />

For the variable listing, the lower, level and upper columns indicate the<br />

lower and upper bounds on the variables and the solution value. The level<br />

value returned by PATH will always be between these bounds. The marginal<br />

column contains the value of the slack on the equation that was paired with<br />

this variable. If a variable appears in one of the constraints in the model<br />

statement but is not explicitly paired to a constraint, the slack reported here<br />

contains the internally matched constraint slack. The definition of this slack<br />

is the minimum of equ.l - equ.lower and equ.l - equ.upper, where equ is the<br />

paired equation.<br />

Finally, a summary report is given that indicates how many errors were<br />

found. Figure 1.4 is a good case; when the model has infeasibilities, these<br />

can be found by searching for the string “INFES” as described above.<br />

1.3.2 Redefined Equations<br />

Unfortunately, this is not the end of the story. Some equations may have the<br />

following form:<br />

LOWER LEVEL UPPER MARGINAL<br />

new-york 325.000 350.000 325.000 0.225 REDEF<br />

This should be construed as a warning from GAMS, as opposed to an error.<br />

In principle, the REDEF should only occur if the paired variable to the<br />

constraint had a finite lower and upper bound and the variable is at one of<br />

those bounds. In this case, at the solution of the complementarity problem<br />

the “equation (=e=)” may not be satisfied. The problem occurs because of a<br />

limitation in the GAMS syntax for complementarity problems. The GAMS<br />

equations are used to define the function F . The bounds on the function F<br />



variables x;<br />

equations d_f;<br />

x.lo = 0;<br />

x.up = 2;<br />

d_f.. 2*(x - 1) =e= 0;<br />

model first / d_f.x /;<br />

solve first using mcp;<br />

Figure 1.5: First order conditions as an MCP, first.gms<br />

are derived from the bounds on the associated variable. Before solving the<br />

problem, for finite bounded variables, we do not know if the associated function<br />

will be positive, negative or zero at the solution. Thus, we do not know<br />

whether to define the equation as “=e=”, “=l=” or “=g=”. GAMS therefore<br />

allows any of these, and informs the modeler via the “REDEF” label that<br />

internally GAMS has redefined the bounds so that the solver processes the<br />

correct problem, but that the solution given by the solver does not satisfy<br />

the original bounds. However, in practice, a REDEF can also occur when the<br />

equation is defined using “=e=” and the variable has a single finite bound.<br />

This is allowed by GAMS, and as above, at a solution of the complementarity<br />

problem, the variable is at its bound and the function F does not satisfy the<br />

“=e=” relationship.<br />

Note that this is not an error, just a warning. The solver has solved the<br />

complementarity problem specified by this equation. GAMS gives this report<br />

to ensure that the modeler understands that the complementarity problem<br />

derives the relationships on the equations from the bounds, not from the<br />

equation definition.<br />

1.4 Pitfalls<br />

As indicated above, the ordering of an equation is important in the specification<br />

of an MCP. Since the data of the MCP is the function F and the bounds<br />

ℓ and u, it is important for the modeler to pass the solver the function F and<br />

not −F.<br />



For example, if we have the optimization problem,<br />

min_{x ∈ [0,2]} (x − 1)²<br />

then the first order optimality conditions are<br />

0 ≤ x ≤ 2 ⊥ 2(x − 1)<br />

which has a unique solution, x = 1. Figure 1.5 provides correct GAMS code<br />

for this problem. However, if we accidentally write the valid equation<br />

d_f.. 0 =e= 2*(x - 1);<br />

the problem given to the solver is<br />

0 ≤ x ≤ 2 ⊥ −2(x − 1)<br />

which has three solutions, x = 0, x = 1, and x = 2. This problem is in fact<br />

the stationary conditions for the nonconvex quadratic problem,<br />

max_{x ∈ [0,2]} (x − 1)²,<br />

not the problem we intended to solve.<br />
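A brute-force sketch in Python (illustrative only) makes the difference concrete: scanning a grid on [0, 2] for points where the median of (x − 0, F(x), x − 2) vanishes, which characterizes MCP solutions, recovers one solution for the correct ordering and three for the flipped one.

```python
def mcp_solutions(F, lo=0.0, hi=2.0, steps=200, tol=1e-9):
    """Grid search for solutions of  lo <= x <= hi  complementary to F(x)."""
    sols = []
    for k in range(steps + 1):
        x = lo + (hi - lo) * k / steps
        residual = sorted([x - lo, F(x), x - hi])[1]  # zero iff x solves the MCP
        if abs(residual) <= tol:
            sols.append(round(x, 6))
    return sols

print(mcp_solutions(lambda x: 2 * (x - 1)))   # [1.0]: the intended problem
print(mcp_solutions(lambda x: -2 * (x - 1)))  # [0.0, 1.0, 2.0]: the flipped sign
```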

Continuing with the example, when x is a free variable, we might conclude<br />

that the ordering of the equation is irrelevant because we always have<br />

the equation, 2(x − 1) = 0, which does not change under multiplication by<br />

−1. In most cases, the ordering of equations (which are complementary to<br />

free variables) does not make a difference since the equation is internally<br />

“substituted out” of the model. In particular, for defining equations, such as<br />

that presented in Figure 1.3, the choice appears to be arbitrary.<br />

However, in difficult (singular or near singular) cases, the substitution<br />

cannot be performed, and instead a perturbation is applied to F , in the hope<br />

of “(strongly) convexifying” the problem. If the perturbation happens to be<br />

in the wrong direction because F was specified incorrectly, the perturbation<br />

actually makes the problem less convex, and hence less likely to solve. Note<br />

that determining which of the above orderings of the equations makes most<br />

sense is typically tricky. One rule of thumb is to replace the “=e=” by<br />

“=g=” and ask whether, as “x” increases, the inequality intuitively becomes<br />

more likely to be satisfied. If so, you probably have it the right way round;<br />

if not, reorder.<br />



Furthermore, underlying model convexity is important. For example, if<br />

we have the linear program<br />

min_x cᵀx<br />

subject to Ax = b, x ≥ 0<br />

we can write the first order optimality conditions as either<br />

0 ≤ x ⊥ −Aᵀµ + c<br />

µ free ⊥ Ax − b<br />

or, equivalently,<br />

0 ≤ x ⊥ −Aᵀµ + c<br />

µ free ⊥ b − Ax<br />

because we have an equation. The former is a linear complementarity problem<br />

with a positive semidefinite matrix, while the latter is almost certainly<br />

indefinite. Also, if we need to perturb the problem because of numerical<br />

problems, the former system will become positive definite, while the latter<br />

becomes highly nonconvex and unlikely to solve.<br />
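The matrix claim can be checked numerically. For variables (x, µ), the first system has Jacobian M1 = [[0, −Aᵀ], [A, 0]], which is skew-symmetric, so zᵀM1z = 0 for every z and M1 is positive semidefinite; the flipped system gives the symmetric matrix M2 = [[0, −Aᵀ], [−A, 0]], whose quadratic form takes both signs. A small Python sketch with an invented 1×2 matrix A = [1 2]:

```python
def quad_form(M, z):
    """Compute z' M z for a dense matrix M given as a list of rows."""
    return sum(zi * sum(mij * zj for mij, zj in zip(mi, z))
               for zi, mi in zip(z, M))

# Jacobians for w = (x1, x2, mu) with the hypothetical A = [1 2].
M1 = [[0.0, 0.0, -1.0],   # first ordering: (-A'mu + c, Ax - b), skew-symmetric
      [0.0, 0.0, -2.0],
      [1.0, 2.0, 0.0]]
M2 = [[0.0, 0.0, -1.0],   # flipped ordering: (-A'mu + c, b - Ax), symmetric
      [0.0, 0.0, -2.0],
      [-1.0, -2.0, 0.0]]

print(quad_form(M1, [1.0, -1.0, 2.0]))   # 0.0: vanishes for every z (PSD)
print(quad_form(M2, [1.0, -1.0, 2.0]))   # 4.0 > 0
print(quad_form(M2, [1.0, -1.0, -2.0]))  # -4.0 < 0: indefinite
```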

Finally, users are strongly encouraged to match equations and free variables<br />

when the matching makes sense for their application. Structure and<br />

convexity can be destroyed if it is left to the solver to perform the matching.<br />

In the example above, we could lose the positive semidefinite<br />

matrix with an arbitrary matching of the free variables.<br />



Chapter 2<br />

PATH<br />

Newton’s method, perhaps the most famous solution technique, has been extensively<br />

used in practice to solve square systems of nonlinear equations.<br />

The basic idea is to construct a local approximation of the nonlinear equations<br />

around a given point, xᵏ, solve the approximation to find the Newton<br />

point, xᴺ, update the iterate, xᵏ⁺¹ = xᴺ, and repeat until we find a solution<br />

to the nonlinear system. This method works extremely well close to<br />

a solution, but can fail to make progress when started far from a solution.<br />

To guarantee progress is made, a line search between xk and xN is used to<br />

enforce sufficient decrease on an appropriately defined merit function. Typically,<br />

½‖F(x)‖² is used.<br />

PATH uses a generalization of this method on a nonsmooth reformulation<br />

of the complementarity problem. To construct the Newton direction, we use<br />

the normal map [34] representation<br />

F(π(x)) + x − π(x)<br />

associated with the MCP, where π(x) represents the projection of x onto [ℓ, u]<br />

in the Euclidean norm. We note that if x is a zero of the normal map, then<br />

π(x) solves the MCP. At each iteration, a linearization of the normal map,<br />

a linear complementarity problem, is solved using a pivotal code related to<br />

Lemke’s method.<br />
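A minimal Python sketch of the projection and the normal map (illustrative; this is not the PATH implementation):

```python
def project(x, lo, hi):
    """Euclidean projection of x onto the box [lo, hi], componentwise."""
    return [min(max(xi, li), ui) for xi, li, ui in zip(x, lo, hi)]

def normal_map(x, F, lo, hi):
    """Evaluate F(pi(x)) + x - pi(x); if x is a zero, pi(x) solves the MCP."""
    pi_x = project(x, lo, hi)
    return [fi + xi - pi for fi, xi, pi in zip(F(pi_x), x, pi_x)]

# One-dimensional example: F(z) = 2*(z - 1) on [0, 2].
F = lambda z: [2 * (z[0] - 1)]
print(normal_map([1.0], F, [0.0], [2.0]))   # [0.0]: pi(x) = 1 solves the MCP
print(normal_map([-3.0], F, [0.0], [2.0]))  # [-5.0]: F(0) = -2 plus x - pi(x) = -3
```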

Versions of PATH prior to 4.x are based entirely on this formulation using<br />

the residual of the normal map<br />

‖F(π(x)) + x − π(x)‖<br />



as a merit function. However, the residual of the normal map is not differentiable,<br />

meaning that if a subproblem is not solvable then a “steepest<br />

descent” step on this function cannot be taken. PATH 4.x considers an alternative<br />

nonsmooth system [21], Φ(x) = 0, where Φᵢ(x) = φ(xᵢ, Fᵢ(x)) and<br />

φ(a, b) := √(a² + b²) − a − b. The merit function, ‖Φ(x)‖², in this case is differentiable,<br />

and is used for globalization purposes. When the subproblem solver<br />

fails, a projected gradient direction for this merit function is searched. It is<br />

shown in [14] that this provides descent and a new feasible point to continue<br />

PATH, and convergence to stationary points and/or solutions of the MCP is<br />

provided under appropriate conditions.<br />
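The Fischer function itself is one line of code. A Python sketch (again, not the PATH implementation) showing that φ(a, b) = 0 exactly when a ≥ 0, b ≥ 0 and ab = 0:

```python
import math

def fischer(a, b):
    """The Fischer function phi(a, b) = sqrt(a^2 + b^2) - a - b."""
    return math.sqrt(a * a + b * b) - a - b

print(fischer(3.0, 0.0))   # 0.0: a >= 0, b >= 0, a*b = 0
print(fischer(0.0, 4.0))   # 0.0: the other complementary case
print(fischer(3.0, 4.0))   # -2.0: both positive, not complementary
print(fischer(-3.0, 4.0))  # 4.0: a < 0 violates nonnegativity
```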

The remainder of this chapter details the interpretation of output from<br />

PATH and ways to modify the behavior of the code. To this end, we will assume<br />

that the modeler has created a file named transmcp.gms which defines<br />

an MCP model transport as described in Section 1.1 and is using PATH<br />

4.x to solve it. See Section 1.3 for information on changing the solver.<br />

2.1 Log File<br />

We will now describe the behavior of the PATH algorithm in terms of the<br />

output typically produced. An example of the log for a particular run is<br />

given in Figure 2.1 and Figure 2.2.<br />

The first few lines on this log file are printed by GAMS during its compilation<br />

and generation phases. The model is then passed off to PATH at<br />

the stage where the “Executing PATH” line is written out. After some basic<br />

memory allocation and problem checking, the PATH solver checks if the<br />

modeler required an option file to be read. In the example this is not the<br />

case. If PATH is directed to read an option file (see Section 2.4 below), then<br />

the following output is generated after the PATH banner.<br />

Reading options file PATH.OPT<br />

> output_linear_model yes;<br />

Options: Read: Line 2 invalid: hi_there;<br />

Read of options file complete.<br />

If the option reader encounters an invalid option (as above), it reports this<br />

but carries on executing the algorithm. Following this, the algorithm starts<br />

working on the problem.<br />



--- Starting compilation<br />

--- trnsmcp.gms(46) 1 Mb<br />

--- Starting execution<br />

--- trnsmcp.gms(27) 1 Mb<br />

--- Generating model transport<br />

--- trnsmcp.gms(45) 1 Mb<br />

--- 11 rows, 11 columns, and 24 non-zeroes.<br />

--- Executing PATH<br />

Work space allocated -- 0.06 Mb<br />

Reading the matrix.<br />

Reading the dictionary.<br />

Path v4.3: GAMS Link ver037, SPARC/SOLARIS<br />

11 row/cols, 35 non-zeros, 28.93% dense.<br />

Path 4.3 (Sat Feb 26 09:38:08 2000)<br />

Written by Todd Munson, Steven Dirkse, and Michael Ferris<br />

INITIAL POINT STATISTICS<br />

Maximum of X. . . . . . . . . . -0.0000e+00 var: (x.seattle.new-york)<br />

Maximum of F. . . . . . . . . . 6.0000e+02 eqn: (supply.san-diego)<br />

Maximum of Grad F . . . . . . . 1.0000e+00 eqn: (demand.new-york)<br />

var: (x.seattle.new-york)<br />

INITIAL JACOBIAN NORM STATISTICS<br />

Maximum Row Norm. . . . . . . . 3.0000e+00 eqn: (supply.seattle)<br />

Minimum Row Norm. . . . . . . . 2.0000e+00 eqn: (rational.seattle.new-york)<br />

Maximum Column Norm . . . . . . 3.0000e+00 var: (p_supply.seattle)<br />

Minimum Column Norm . . . . . . 2.0000e+00 var: (x.seattle.new-york)<br />

Crash Log<br />

major func diff size residual step prox (label)<br />

0 0 1.0416e+03 0.0e+00 (demand.new-york)<br />

1 1 3 3 1.0029e+03 1.0e+00 1.0e+01 (demand.new-york)<br />

pn_search terminated: no basis change.<br />

Figure 2.1: Log File from PATH for solving transmcp.gms<br />



Major Iteration Log<br />

major minor func grad residual step type prox inorm (label)<br />

0 0 2 2 1.0029e+03 I 9.0e+00 6.2e+02 (demand.new-york)<br />

1 1 3 3 8.3096e+02 1.0e+00 SO 3.6e+00 4.5e+02 (demand.new-york)<br />

...<br />

15 2 17 17 1.3972e-09 1.0e+00 SO 4.8e-06 1.3e-09 (demand.chicago)<br />

FINAL STATISTICS<br />

Inf-Norm of Complementarity . . 1.4607e-08 eqn: (rational.seattle.chicago)<br />

Inf-Norm of Normal Map. . . . . 1.3247e-09 eqn: (demand.chicago)<br />

Inf-Norm of Minimum Map . . . . 1.3247e-09 eqn: (demand.chicago)<br />

Inf-Norm of Fischer Function. . 1.3247e-09 eqn: (demand.chicago)<br />

Inf-Norm of Grad Fischer Fcn. . 1.3247e-09 eqn: (rational.seattle.chicago)<br />

FINAL POINT STATISTICS<br />

Maximum of X. . . . . . . . . . 3.0000e+02 var: (x.seattle.chicago)<br />

Maximum of F. . . . . . . . . . 5.0000e+01 eqn: (supply.san-diego)<br />

Maximum of Grad F . . . . . . . 1.0000e+00 eqn: (demand.new-york)<br />

var: (x.seattle.new-york)<br />

** EXIT - solution found.<br />

Major Iterations. . . . 15<br />

Minor Iterations. . . . 31<br />

Restarts. . . . . . . . 0<br />

Crash Iterations. . . . 1<br />

Gradient Steps. . . . . 0<br />

Function Evaluations. . 17<br />

Gradient Evaluations. . 17<br />

Total Time. . . . . . . 0.020000<br />

Residual. . . . . . . . 1.397183e-09<br />

--- Restarting execution<br />

Figure 2.2: Log File from PATH for solving transmcp.gms (continued)<br />



2.1.1 Diagnostic Information<br />

Some diagnostic information is initially generated by the solver at the starting<br />

point. Included is information about the initial point and function evaluation.<br />

The log file here tells the value of the largest element of the starting point<br />

and the variable where it occurs. Similarly, the maximum function value is<br />

displayed along with the equation producing it. The maximum element in the<br />

gradient is also presented with the equation and variable where it is located.<br />

The second block provides more information about the Jacobian at the<br />

starting point. These can be used to help scale the model. See Chapter 3 for<br />

complete details.<br />

2.1.2 Crash Log<br />

The first phase of the code is a crash procedure attempting to quickly determine<br />

which of the inequalities should be active. This procedure is documented<br />

fully in [12], and an example of the Crash Log can be seen in Figure 2.1.<br />

The first column of the crash log is just a label indicating the current iteration<br />

number, the second gives an indication of how many function evaluations<br />

have been performed so far. Note that precisely one Jacobian (gradient) evaluation<br />

is performed per crash iteration. The number of changes to the active<br />

set between iterations of the crash procedure is shown under the “diff” column.<br />

The crash procedure terminates if this becomes small. Each iteration<br />

of this procedure involves a factorization of a matrix whose size is shown<br />

in the next column. The residual is a measure of how far the current iterate<br />

is from satisfying the complementarity conditions (MCP); it is zero at a<br />

solution. See Section 3.2.1 for further information. The column “step” corresponds<br />

to the steplength taken in this iteration; ideally this should be 1.<br />

If the factorization fails, then the matrix is perturbed by an identity matrix<br />

scaled by the value indicated in the “prox” column. The “label” column indicates<br />

which row in the model is furthest away from satisfying the conditions<br />

(MCP). Typically, relatively few crash iterations are performed. Section 2.4<br />

gives mechanisms to affect the behavior of these steps.<br />

2.1.3 Major Iteration Log<br />

After the crash is completed, the main algorithm starts as indicated by the<br />

“Major Iteration Log” flag (see Figure 2.2). The columns that have the same<br />



Code Meaning<br />

C A cycle was detected.<br />

E An error occurred in the linear solve.<br />

I The minor iteration limit was reached.<br />

N The basis became singular.<br />

R An unbounded ray was encountered.<br />

S The linear subproblem was solved.<br />

T Failed to remain within tolerance after factorization was performed.<br />

Table 2.1: Linear Solver Codes<br />

labels as in the crash log have precisely the same meaning described above.<br />

However, there are some new columns that we now explain. Each major<br />

iteration attempts to solve a linear mixed complementarity problem using a<br />

pivotal method that is a generalization of Lemke’s method [31]. The number<br />

of pivots performed per major iteration is given in the “minor” column.<br />

The “grad” column gives the cumulative number of Jacobian evaluations<br />

used; typically one evaluation is performed per iteration. The “inorm” column<br />

gives the value of the error in satisfying the equation indicated in the<br />

“label” column.<br />

At each iteration of the algorithm, several different step types can be<br />

taken, due to the use of nonmonotone searches [11, 15], which are used to<br />

improve robustness. In order to help the PATH user, we have added two code<br />

letters indicating the return code from the linear solver and the step type<br />

to the log file. Table 2.1 explains the return codes for the linear solver and<br />

Table 2.2 explains the meaning of each step type. The ideal output in this<br />

column is either “SO”, with “SD” and “SB” also being reasonable. Codes<br />

different from these are not catastrophic, but typically indicate the solver is<br />

having difficulties due to numerical issues or nonconvexities in the model.<br />

2.1.4 Minor Iteration Log<br />

If more than 500 pivots are performed, a minor log is output that gives more<br />

details of the status of these pivots. A listing from the transmcp model follows,<br />

where we have set the output minor iteration frequency option to 1.<br />

Minor Iteration Log<br />

minor t z w v art ckpts enter leave<br />



Code Meaning<br />

B A Backtracking search was performed from the current iterate to<br />

the Newton point in order to obtain sufficient decrease in the merit<br />

function.<br />

D The step was accepted because the Distance between the current<br />

iterate and the Newton point was small.<br />

G A gradient step was performed.<br />

I Initial information concerning the problem is displayed.<br />

M The step was accepted because the Merit function value is smaller<br />

than the nonmonotone reference value.<br />

O A step that satisfies both the distance and merit function tests.<br />

R A Restart was carried out.<br />

W A Watchdog step was performed in which we returned to the last<br />

point encountered with a better merit function value than the nonmonotone<br />

reference value (M, O, or B step), regenerated the Newton<br />

point, and performed a backtracking search.<br />

Table 2.2: Step Type Codes<br />

1 4.2538e-01 8 2 0 0 0 t[ 0] z[ 11]<br />

2 9.0823e-01 8 2 0 0 0 w[ 11] w[ 10]<br />

3 1.0000e+00 9 2 0 0 0 z[ 10] t[ 0]<br />

t is a parameter that goes from zero to 1 for normal starts in the pivotal<br />

code. When the parameter reaches 1, we are at a solution to the subproblem.<br />

The t column gives the current value for this parameter. The next columns<br />

report the current number of problem variables z and slacks corresponding<br />

to variables at lower bound w and at upper bound v. Artificial variables<br />

are also noted in the minor log, see [17] for further details. Checkpoints are<br />

times where the basis matrix is refactorized. The number of checkpoints is<br />

indicated in the ckpts column. Finally, the minor iteration log displays the<br />

entering and leaving variables during the pivot sequence.<br />

2.1.5 Restart Log<br />

The PATH code attempts to fully utilize the resources provided by the modeler<br />

to solve the problem. Versions of PATH after 3.0 have been much more<br />

aggressive in determining that a stationary point of the residual function has<br />

been encountered. When it is determined that no progress is being made, the<br />



problem is restarted from the initial point supplied in the GAMS file with<br />

a different set of options. These restarts give the flexibility to change the<br />

algorithm in the hopes that the modified algorithm leads to a solution. The<br />

ordering and nature of the restarts were determined by empirical evidence<br />

based upon tests performed on real-world problems.<br />

The exact options set during the restart are given in the restart log, part<br />

of which is reproduced below.<br />

Restart Log<br />

proximal_perturbation 0<br />

crash_method none<br />

crash_perturb yes<br />

nms_initial_reference_factor 2<br />

proximal_perturbation 1.0000e-01<br />

If a particular problem solves under a restart, a modeler can circumvent the<br />

wasted computation by setting the appropriate options as shown in the log.<br />

Note that sometimes an option is repeated in this log. In this case, it is the<br />

last option that is used.<br />

2.1.6 Solution Log<br />

A solution report is now given by the algorithm for the point returned. The<br />

first component is an evaluation of several different merit functions. Next, a<br />

display of some statistics concerning the final point is given. This report can<br />

be used to detect problems with the model and solution as detailed in Chapter 3.<br />

At the end of the log file, summary information regarding the algorithm’s<br />

performance is given. The string “** EXIT - solution found.” is an indication<br />

that PATH solved the problem. Any other EXIT string indicates a<br />

termination at a point that may not be a solution. These strings give an<br />

indication of what modelstat and solstat will be returned to GAMS. After<br />

this, the “Restarting execution” flag indicates that GAMS has been restarted<br />

and is processing the results passed back by PATH.<br />

2.2 Status File<br />

If for some reason the PATH solver exits without writing a solution, or the<br />

sysout flag is turned on, the status file generated by the PATH solver will<br />

be reported in the listing file. The status file is similar to the log file, but<br />



provides more detailed information. The modeler is typically not interested<br />

in this output.<br />

2.3 User Interrupts<br />

A user interrupt can be effected by typing Ctrl-C. We only check for interrupts<br />

every major iteration. If a more immediate response is wanted,<br />

repeatedly typing Ctrl-C will eventually kill the job. The number needed is<br />

controlled by the interrupt limit option. In this latter case, when a kill is<br />

performed, no solution is written and an execution error will be generated in<br />

GAMS.<br />

2.4 Options<br />

The default options of PATH should be sufficient for most models; the technique<br />

for changing these options is now described. To change the default<br />

options on the model transport, the modeler is required to write a file<br />

path.opt in the working directory and either add a line<br />

transport.optfile = 1;<br />

before the solve statement in the file transmcp.gms, or use the command-line<br />

option<br />

gams transmcp optfile=1<br />

Unless the modeler has changed the WORKDIR parameter explicitly, the<br />

working directory will be the directory containing the model file.<br />
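For illustration, a complete path.opt file is simply a list of option–value pairs, one per line, in the same syntax shown in the restart log above. The option names below are taken from the option tables in this section; the particular values are arbitrary examples, not recommended settings:<br />

```
convergence_tolerance 1e-8
major_iteration_limit 200
output_options yes
```

Any solve for which the model's optfile attribute is set to 1 will then pick up these values in place of the defaults.<br />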

We give a list of the available options along with their defaults and meaning<br />

in Table 2.3, Table 2.4, and Table 2.5. Note that only the first three<br />

characters of every word are significant.<br />

GAMS controls the total number of pivots allowed via the iterlim option.<br />

If more pivots are needed for a particular model then either of the<br />

following lines should be added to the transmcp.gms file before the solve<br />

statement<br />

option iterlim = 2000;<br />

transport.iterlim = 2000;<br />



Option Default Explanation<br />

convergence tolerance 1e-6 Stopping criterion<br />

crash iteration limit 50 Maximum iterations allowed in<br />

crash<br />

crash merit function fischer Merit function used in crash method<br />

crash method pnewton pnewton or none<br />

crash minimum dimension 1 Minimum problem dimension to perform<br />

crash<br />

crash nbchange limit 1 Number of changes to the basis allowed<br />

crash perturb yes Perturb the problem using pnewton<br />

crash<br />

crash searchtype line Searchtype to use in the crash<br />

method<br />

cumulative iteration limit 10000 Maximum minor iterations allowed<br />

gradient searchtype arc Searchtype to use when a gradient<br />

step is taken<br />

gradient step limit 5 Maximum number of gradient steps<br />

allowed before restarting<br />

interrupt limit 5 Ctrl-C’s required before killing job<br />

lemke start automatic Frequency of lemke starts (automatic,<br />

first, always)<br />

major iteration limit 500 Maximum major iterations allowed<br />

merit function fischer Merit function to use (normal or fischer)<br />

minor iteration limit 1000 Maximum minor iterations allowed<br />

in each major iteration<br />

nms yes Allow line searching, watchdogging,<br />

and nonmonotone descent<br />

nms initial reference factor 20 Controls size of initial reference<br />

value<br />

nms maximum watchdogs 5 Maximum number of watchdog steps<br />

allowed<br />

nms memory size 10 Number of reference values kept<br />

nms mstep frequency 10 Frequency at which m steps are performed<br />

Table 2.3: PATH <strong>Options</strong><br />



Option Default Explanation<br />

nms searchtype line Search type to use (line, or arc)<br />

output yes Turn output on or off. If output<br />

is turned off, selected parts can be<br />

turned back on.<br />

output crash iterations yes Output information on crash iterations<br />

output crash iterations frequency 1 Frequency at which crash iteration<br />

log is printed<br />

output errors yes Output error messages<br />

output factorization singularities yes Output linearly dependent<br />

columns determined by factorization<br />

output final degeneracy statistics no Print information regarding degeneracy<br />

at the solution<br />

output final point no Output final point returned from<br />

PATH<br />

output final point statistics yes Output information about the<br />

point, function, and Jacobian at<br />

the final point<br />

output final scaling statistics no Display matrix norms on the Jacobian<br />

at the final point<br />

output final statistics yes Output evaluation of available<br />

merit functions at the final point<br />

output final summary yes Output summary information<br />

output initial point no Output initial point given to<br />

PATH<br />

output initial point statistics yes Output information about the<br />

point, function, and Jacobian at<br />

the initial point<br />

output initial scaling statistics yes Display matrix norms on the Jacobian<br />

at the initial point<br />

output initial statistics no Output evaluation of available<br />

merit functions at the initial<br />

point<br />

Table 2.4: PATH <strong>Options</strong> (cont)<br />



Option Default Explanation<br />

output linear model no Output linear model each major<br />

iteration<br />

output major iterations yes Output information on major iterations<br />

output major iterations frequency 1 Frequency at which major iteration<br />

log is printed<br />

output minor iterations yes Output information on minor iterations<br />

output minor iterations frequency 500 Frequency at which minor iteration<br />

log is printed<br />

output model statistics yes Turns on or off printing of all<br />

the statistics generated about the<br />

model<br />

output options no Output all options and their values<br />

output preprocess yes Output preprocessing information<br />

output restart log yes Output options during restarts<br />

output warnings no Output warning messages<br />

preprocess yes Attempt to preprocess the model<br />

proximal perturbation 0 Initial perturbation<br />

restart limit 3 Maximum number of restarts (0 -<br />

3)<br />

return best point yes Return the best point encountered<br />

or the absolute last iterate<br />

time limit 3600 Maximum number of seconds algorithm<br />

is allowed to run<br />

Table 2.5: PATH <strong>Options</strong> (cont)<br />



Similarly if the solver runs out of memory, then the workspace allocated can<br />

be changed using<br />

transport.workspace = 20;<br />

The above example would allocate 20MB of workspace for solving the model.<br />

Problems with a singular basis matrix can be overcome by using the<br />

proximal perturbation option [3], and linearly dependent columns can be<br />

output with the output factorization singularities option. For more<br />

information on singularities, we refer the reader to Chapter 3.<br />

As a special case, PATH can emulate Lemke’s method [7, 31] for LCP<br />

with the following options:<br />

crash_method none;<br />

crash_perturb no;<br />

major_iteration_limit 1;<br />

lemke_start first;<br />

nms no;<br />

If instead, PATH is to imitate the Successive Linear Complementarity method<br />

(SLCP, often called the Josephy Newton method) [28, 33, 32] for MCP with<br />

an Armijo style linesearch on the normal map residual, then the options to<br />

use are:<br />

crash_method none;<br />

crash_perturb no;<br />

lemke_start always;<br />

nms_initial_reference_factor 1;<br />

nms_memory_size 1;<br />

nms_mstep_frequency 1;<br />

nms_searchtype line;<br />

merit_function normal;<br />

Note that nms memory size 1 and nms initial reference factor 1 turn<br />

off the nonmonotone linesearch, while nms mstep frequency 1 turns off watchdogging<br />

[5]. nms searchtype line forces PATH to search the line segment between<br />

the initial point and the solution to the linear model, while merit function<br />

normal tells PATH to use the normal map for calculating the residual.<br />

2.5 PATHC<br />

PATHC uses a different link to the GAMS system with the remaining code<br />

identical. PATHC does not support MPSGE models, but enables the use<br />



of preprocessing and can be used to solve constrained systems of nonlinear<br />

equations. The output for PATHC is identical to the main distribution described<br />

in Section 2.1 with additional output for preprocessing. The options<br />

are the same between the two versions.<br />

2.5.1 Preprocessing<br />

The preprocessor is a work in progress. The exact output in the final<br />

version may differ from that given below.<br />

The purpose of a preprocessor is to reduce the size and complexity of<br />

a model to achieve improved performance by the main algorithm. Another<br />

benefit of the analysis performed is the detection of some provably unsolvable<br />

problems. A comprehensive preprocessor has been incorporated into PATHC<br />

as developed in [18].<br />

The preprocessor reports its findings with some additional output to the<br />

log file. This output occurs before the initial point statistics. An example of<br />

the preprocessing on the forcebsm model is presented below.<br />

Zero: 0 Single: 112 Double: 0 Forced: 0<br />

Preprocessed size: 72<br />

The preprocessor looks for special polyhedral structure and eliminates variables<br />

using this structure. These are indicated with the above line of text.<br />

Other special structure is also detected and reported.<br />

On exit from the algorithm, we must generate a solution for the original<br />

problem. This is done during the postsolve. Following the postsolve, the<br />

residual using the original model is reported.<br />

Postsolved residual: 1.0518e-10<br />

This number should be approximately the same as the final residual reported<br />

on the presolved model.<br />

2.5.2 Constrained Nonlinear Systems<br />

Modelers typically add bounds to their variables when attempting to solve<br />

nonlinear problems in order to restrict the domain of interest. For example,<br />

many square nonlinear systems are formulated as<br />

F(z) = 0, ℓ ≤ z ≤ u,<br />



where typically, the bounds on z are inactive at the solution. This is not an<br />

MCP, but is an example of a “constrained nonlinear system” (CNS). It is<br />

important to note the distinction between MCP and CNS. The MCP uses<br />

the bounds to infer relationships on the function F. If a finite bound is active<br />

at a solution, the corresponding component of F is only constrained to be<br />

nonnegative or nonpositive in the MCP, whereas in CNS it must be zero.<br />

Thus there may be many solutions of MCP that do not satisfy F(z) = 0.<br />

Only if z∗ is a solution of MCP with ℓ < z∗ < u is it guaranteed that<br />

F(z∗) = 0.<br />

Internally, PATHC reformulates a constrained nonlinear system of equations<br />

to an equivalent complementarity problem. The reformulation adds<br />

variables, y, with the resulting problem written as:<br />

ℓ ≤ x ≤ u ⊥ −y<br />

y free ⊥ F (x).<br />

This is the MCP model passed on to the PATH solver.<br />



Chapter 3<br />

Advanced Topics<br />

This chapter discusses some of the difficulties encountered when dealing with<br />

complementarity problems. We start off with a very formal definition of a<br />

complementarity problem which is used in later sections on merit functions<br />

and ill-defined, poorly-scaled, and singular models.<br />

3.1 Formal Definition of MCP<br />

The mixed complementarity problem is defined by a function, F : D → Rⁿ<br />

where D ⊆ Rⁿ is the domain of F, and possibly infinite lower and upper<br />

bounds, ℓ and u. Let C := {x ∈ Rⁿ | ℓ ≤ x ≤ u}, a Cartesian product of<br />

closed (possibly infinite) intervals. The problem is given as<br />

MCP: find x ∈ C ∩ D s.t. 〈F(x), y − x〉 ≥ 0, ∀y ∈ C.<br />

This formulation is a special case of the variational inequality problem defined<br />

by F and a (nonempty, closed, convex) set C. Special choices of ℓ and u lead<br />

to the familiar cases of a system of nonlinear equations<br />

F(x) = 0<br />

(generated by ℓ ≡ −∞, u ≡ +∞) and the nonlinear complementarity problem<br />

0 ≤ x ⊥ F(x) ≥ 0<br />

(generated using ℓ ≡ 0, u ≡ +∞).<br />
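The componentwise meaning of this definition can be made concrete in a short sketch. The following Python fragment (our illustration, not part of PATH) checks whether a candidate point satisfies the sign conditions implied by the variational inequality, with the tolerance tol standing in for a solver's convergence tolerance:<br />

```python
import math

def is_mcp_solution(x, F, l, u, tol=1e-8):
    # Componentwise check of the MCP conditions:
    #   x_i = l_i        =>  F_i(x) >= 0
    #   x_i = u_i        =>  F_i(x) <= 0
    #   l_i < x_i < u_i  =>  F_i(x) = 0
    for xi, fi, li, ui in zip(x, F, l, u):
        if not (li - tol <= xi <= ui + tol):
            return False            # x must lie in the box C
        at_lower = abs(xi - li) <= tol
        at_upper = abs(xi - ui) <= tol
        if at_lower and fi < -tol:
            return False
        if at_upper and fi > tol:
            return False
        if not at_lower and not at_upper and abs(fi) > tol:
            return False
    return True
```

With l ≡ 0 and u ≡ +∞ this reduces to the nonlinear complementarity check, and with infinite bounds on both sides it reduces to testing F(x) = 0.<br />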



3.2 Algorithmic Features<br />

We now describe some of the features of the PATH algorithm and the options<br />

affecting each.<br />

3.2.1 Merit Functions<br />

A solver for complementarity problems typically employs a merit function to<br />

indicate the closeness of the current iterate to the solution set. The merit<br />

function is zero at a solution to the original problem and strictly positive<br />

otherwise. Numerically, an algorithm terminates when the merit function is<br />

approximately equal to zero, thus possibly introducing spurious “solutions”.<br />

The modeler needs to be able to determine with some reasonable degree<br />

of accuracy whether the algorithm terminated at a solution or if it simply<br />

obtained a point satisfying the desired tolerances that is not close to the<br />

solution set. For complementarity problems, we can provide several indicators<br />

with different characteristics to help make such a determination. If one<br />

of the indicators is not close to zero, then there is some evidence that the<br />

algorithm has not found a solution. We note that if all of the indicators are<br />

close to zero, we are reasonably sure we have found a solution. However, the<br />

modeler has the final responsibility to evaluate the “solution” and check that<br />

it makes sense for their application.<br />

For the NCP, a standard merit function is<br />

(−x)₊, (−F(x))₊, [(xᵢ)₊(Fᵢ(x))₊]ᵢ<br />

with the first two terms measuring the infeasibility of the current point and<br />

the last term indicating the complementarity error. In this expression, we use<br />

(·)₊ to represent the Euclidean projection of x onto the nonnegative orthant,<br />

that is (x)₊ = max(x, 0). For the more general MCP, we can define a similar<br />

function:<br />

x − π(x), [((xᵢ − ℓᵢ)/(ℓᵢ + 1))₊ (Fᵢ(x))₊]ᵢ , [((uᵢ − xᵢ)/(uᵢ + 1))₊ (−Fᵢ(x))₊]ᵢ<br />

where π(x) represents the Euclidean projection of x onto C. We can see that<br />

if we have an NCP, the function is exactly the one previously given and for<br />

nonlinear systems of equations, this becomes F(x).<br />



There are several reformulations of the MCP as systems of nonlinear (nonsmooth)<br />

equations for which the corresponding residual is a natural merit<br />

function. Some of these are as follows:<br />

• Generalized Minimum Map: x − π(x − F(x))<br />

• Normal Map: F(π(y)) + y − π(y)<br />

• Fischer Function: Φ(x), where Φᵢ(x) := φ(xᵢ, Fᵢ(x)) with<br />

φ(a, b) := √(a² + b²) − a − b.<br />

Note that φ(a, b) = 0 if and only if 0 ≤ a ⊥ b ≥ 0. A straightforward<br />

extension of Φ to the MCP format is given for example in [14].<br />
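To make these residuals concrete, here is a minimal Python sketch (ours; it mirrors the definitions above for the NCP case only) of the Fischer function and the componentwise minimum map:<br />

```python
import math

def fischer(a, b):
    # Fischer-Burmeister function: phi(a, b) = sqrt(a^2 + b^2) - a - b,
    # which is zero exactly when 0 <= a, 0 <= b, and a*b = 0
    return math.sqrt(a * a + b * b) - a - b

def min_map(x, F):
    # Classic minimum map for the NCP: min(x_i, F_i(x)) componentwise
    return [min(xi, fi) for xi, fi in zip(x, F)]
```

Both residuals vanish on complementary pairs such as (0, 2) or (3, 0), and are nonzero on a pair like (1, 1) where complementarity fails.<br />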

In the context of nonlinear complementarity problems the generalized minimum<br />

map corresponds to the classic minimum map min(x, F (x)). Furthermore,<br />

for NCPs the minimum map and the Fischer function are both local<br />

error bounds and were shown to be equivalent in [36]. Figure 3.3 in the subsequent<br />

section plots all of these merit functions for the ill-defined example<br />

discussed therein and highlights the differences between them.<br />

The squared norm of Φ, namely Ψ(x) := ½ Σᵢ φ(xᵢ, Fᵢ(x))², is continuously<br />

differentiable on Rⁿ provided F itself is. Therefore, the first order optimality<br />

conditions for the unconstrained minimization of Ψ(x), namely ∇Ψ(x) = 0,<br />

give another indication as to whether the point under consideration is a<br />

solution of MCP.<br />

The merit functions and the information PATH provides at the solution<br />

can be useful for diagnostic purposes. By default, PATH 4.x returns the best<br />

point with respect to the merit function because this iterate likely provides<br />

better information to the modeler. As detailed in Section 2.4, the default<br />

merit function in PATH 4.x is the Fischer function. To change this behavior<br />

the merit function option can be used.<br />

3.2.2 Crashing Method<br />

The crashing technique [12] is used to quickly identify an active set from the<br />

user-supplied starting point. At this time, a proximal perturbation scheme<br />

[1, 2] is used to overcome problems with a singular basis matrix. The proximal<br />

perturbation is introduced in the crash method, when the matrix factored<br />



is determined to be singular. The value of the perturbation is based on the<br />

current merit function value.<br />

Even if the crash method is turned off, for example via the option crash method<br />

none, perturbation can be added. This is determined by factoring the matrix<br />

that crash would have initially formed. This behavior is extremely useful<br />

for introducing a perturbation for singular models. It can be turned off by<br />

issuing the option crash perturb no.<br />

3.2.3 Nonmonotone Searches<br />

The first line of defense against convergence to stationary points is the use<br />

of a nonmonotone linesearch [23, 24, 15]. In this case we define a reference<br />

value, Rᵏ, and we use this value in the test for sufficient decrease:<br />

Ψ(xᵏ + tₖdᵏ) ≤ Rᵏ + tₖ∇Ψ(xᵏ)ᵀdᵏ.<br />

Depending upon the choice of the reference value, this allows the merit function<br />

to increase from one iteration to the next. This strategy can not only<br />

improve convergence, but can also avoid local minimizers by allowing such<br />

increases.<br />

We now need to detail our choice of the reference value. We begin by<br />

letting {M₁, ..., Mₘ} be a finite set of values initialized to κΨ(x⁰), where<br />

κ is used to determine the initial set of acceptable merit function values.<br />

The value of κ defaults to 1 in the code and can be modified with the<br />

nms initial reference factor option; κ = 1 indicates that we are not<br />

going to allow the merit function to increase beyond its initial value.<br />

Having defined the values of {M₁, ..., Mₘ} (where the code by default<br />

uses m = 10), we can now calculate a reference value. We must be careful<br />

when we allow gradient steps in the code. Assuming that dᵏ is the Newton<br />

direction, we define i₀ = argmaxᵢ Mᵢ and Rᵏ = Mᵢ₀. After the nonmonotone<br />

linesearch rule above finds tₖ, we update the memory so that Mᵢ₀ = Ψ(xᵏ +<br />

tₖdᵏ), i.e. we remove an element from the memory having the largest merit<br />

function value.<br />

When we decide to use a gradient step, it is beneficial to let xᵏ = x_best,<br />

where x_best is the point with the absolute best merit function value encountered<br />

so far. We then recalculate dᵏ = −∇Ψ(xᵏ) using the best point and<br />

let Rᵏ = Ψ(xᵏ). That is to say that we force decrease from the best iterate<br />

found whenever a gradient step is performed. After a successful step we set<br />



Mᵢ = Ψ(xᵏ + tₖdᵏ) for all i ∈ {1, ..., m}. This prevents future iterates from<br />

returning to the same problem area.<br />
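The reference-value rule can be sketched in a few lines of Python (our illustration, not PATH's implementation; a small Armijo constant sigma is added to the sufficient decrease test so the backtracking loop is guaranteed to terminate):<br />

```python
def nonmonotone_search(psi, x, d, grad_dot_d, memory, sigma=1e-4, beta=0.5):
    # Reference value R^k = max_i M_i allows the merit function to rise
    # above Psi(x^k) as long as it stays below the worst remembered value.
    R = max(memory)
    t = 1.0
    while psi([xi + t * di for xi, di in zip(x, d)]) > R + sigma * t * grad_dot_d:
        t *= beta  # backtrack along the direction d
    x_new = [xi + t * di for xi, di in zip(x, d)]
    # Overwrite the largest memory entry with the new merit value
    memory[memory.index(R)] = psi(x_new)
    return x_new, t
```

With a memory of size one, the reference value is simply the previous merit value, so the search reduces to an ordinary monotone linesearch.<br />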

A watchdog strategy [5] is also available for use in the code. The method<br />

employed allows steps to be accepted when they are “close” to the current<br />

iterate. Nonmonotonic decrease is enforced every m iterations, where m is<br />

set by the nms mstep frequency option.<br />

3.2.4 Linear Complementarity Problems<br />

PATH solves a linear complementarity problem each major iteration. Let<br />

M ∈ ℜⁿ×ⁿ, q ∈ ℜⁿ, and B = [l, u] be given. (z̄, w̄, v̄) solves the linear mixed<br />

complementarity problem defined by M, q, and B if and only if it satisfies<br />

the following constrained system of equations:<br />

Mz − w + v + q = 0 (3.1)<br />

wᵀ(z − l) = 0 (3.2)<br />

vᵀ(u − z) = 0 (3.3)<br />

z ∈ B, w ∈ ℜⁿ₊, v ∈ ℜⁿ₊, (3.4)<br />

where x + ∞ = ∞ for all x ∈ ℜ and 0 · ∞ = 0 by convention. A triple,<br />

(ẑ, ŵ, v̂), satisfying equations (3.1)–(3.3), is called a complementary triple.<br />

The objective of the linear model solver is to construct a path from a<br />

given complementary triple (ẑ, ŵ, v̂) to a solution (z̄, w̄, v̄). The algorithm<br />

used to solve the linear problem is identical to that given in [9]; however,<br />

artificial variables are incorporated into the model. The augmented system<br />

is then:<br />

Mz − w + v + Da + ((1 − t)/s)(sr) + q = 0 (3.5)<br />

wᵀ(z − l) = 0 (3.6)<br />

vᵀ(u − z) = 0 (3.7)<br />

z ∈ B, w ∈ ℜⁿ₊, v ∈ ℜⁿ₊, a ≡ 0, t ∈ [0, 1] (3.8)<br />

where r is the residual, t is the path parameter, and a is a vector of artificial<br />

variables. The residual is scaled by s to improve numerical stability.<br />

The addition of artificial variables enables us to construct an initial invertible<br />

basis consistent with the given starting point even under rank deficiency.<br />

The procedure consists of two parts: constructing an initial guess as to the<br />



basis and then recovering from rank deficiency to obtain an invertible basis.<br />

The crash technique gives a good approximation to the active set. The<br />

first phase of the algorithm uses this information to construct a basis by<br />

partitioning the variables into three sets:<br />

1. W = {i ∈ {1, ..., n} | ẑᵢ = lᵢ and ŵᵢ > 0}<br />

2. V = {i ∈ {1, ..., n} | ẑᵢ = uᵢ and v̂ᵢ > 0}<br />

3. Z = {1, ..., n} \ (W ∪ V)<br />

Since (ẑ, ŵ, v̂) is a complementary triple, Z ∩ W ∩ V = ∅ and Z ∪ W ∪<br />

V = {1, ..., n}. Using the above guess, we can recover an invertible basis<br />

consistent with the starting point by defining D appropriately. The technique<br />

relies upon the factorization to tell the linearly dependent rows and columns<br />

of the basis matrix. Some of the variables may be nonbasic, but not at their<br />

bounds. For such variables, the corresponding artificial will be basic.<br />

We use a modified version of EXPAND [22] to perform the ratio test.<br />

Variables are prioritized as follows:<br />

1. t leaving at its upper bound.<br />

2. Any artificial variable.<br />

3. Any z, w, or v variable.<br />

If a choice as to the leaving variable can be made while maintaining numerical<br />

stability and sparsity, we choose the variable with the highest priority (lowest<br />

number above).<br />

When an artificial variable leaves the basis and a z-type variable enters,<br />

we have the choice of either increasing or decreasing that entering variable<br />

because it is nonbasic but not at a bound. The determination is made such<br />

that t increases and stability is preserved.<br />

If the code is forced to use a ray start at each iteration (lemke start<br />

always), then the code carries out Lemke’s method, which is known [7] not<br />

to cycle. However, by default, we use a regular start to guarantee that<br />

the generated path emanates from the current iterate. Under appropriate<br />

conditions, this guarantees a decrease in the nonlinear residual. However, it<br />

is then possible for the pivot sequence in the linear model to cycle. To prevent<br />

this undesirable outcome, we attempt to detect the formation of a cycle with<br />



the heuristic that if a variable enters the basis more than a given number of<br />

times, we are cycling. The number of times the variable has entered is reset<br />

whenever t increases beyond its previous maximum or an artificial variable<br />

leaves the basis. If cycling is detected, we terminate the linear solver at the<br />

largest value of t and return this point.<br />

Another heuristic is added when the linear code terminates on a ray.<br />

The returned point in this case is not the base of the ray. We move a<br />

slight distance up the ray and return this new point. If we fail to solve the<br />

linear subproblem five times in a row, a Lemke ray start will be performed<br />

in an attempt to solve the linear subproblem. Computational experience<br />

has shown this to be an effective heuristic and generally results in solving<br />

the linear model. Using a Lemke ray start is not the default mode, since<br />

typically many more pivots are required.<br />

For times when a Lemke start is actually used in the code, an advanced<br />

ray can be used. We basically choose the “closest” extreme point of the<br />

polytope and choose a ray in the interior of the normal cone at this point.<br />

This helps to reduce the number of pivots required. However, this can fail<br />

when the basis corresponding to the cell is not invertible. We then revert to<br />

the Lemke start.<br />

Since the EXPAND pivot rules are used, some of the variables may be<br />

nonbasic, but slightly infeasible, at the solution. Whenever the linear code<br />

finishes, the nonbasic variables are put at their bounds and the basic variables<br />

are recomputed using the current factorization. This procedure helps to find<br />

the best possible solution to the linear system.<br />

The resulting linear solver as modified above is robust and has the desired<br />

property that we start from (ˆz, ˆw, ˆv) and construct a path to a solution.<br />

3.2.5 Other Features<br />

Some other heuristics are incorporated into the code. During the first iteration,<br />

if the linear solver fails to find a Newton point, a Lemke start is used.<br />

Furthermore, under repeated failures during the linear solve, a Lemke start<br />

will be attempted. A gradient step can also be used when we fail repeatedly.<br />

The proximal perturbation is shrunk each major iteration. However, when<br />

numerical difficulties are encountered, it will be increased to a fraction of the<br />

current merit function value. Such difficulties are signaled when the linear solver<br />

returns the Reset or Singular status.<br />

Spacer steps are taken every major iteration, in which the iterate is chosen<br />



to be the best point for the normal map. The corresponding basis passed<br />

into the Lemke code is also updated.<br />

Scaling is done based on the diagonal of the matrix passed into the linear<br />

solver.<br />

We finally note that if the merit function fails to show sufficient decrease<br />

over the last 100 iterates, a restart will be performed, as this indicates we<br />

are close to a stationary point.<br />

3.3 Difficult Models<br />

3.3.1 Ill-Defined Models<br />

A problem can be ill-defined for several different reasons. We concentrate on<br />

the following particular cases. We will call F well-defined at x̄ ∈ C if x̄ ∈ D<br />

and ill-defined at x̄ otherwise. Furthermore, we define F to be well-defined<br />

near x̄ ∈ C if there exists an open neighborhood of x̄, N(x̄), such that<br />

C ∩ N(x̄) ⊆ D. By saying the function is well-defined near x̄, we are simply<br />

stating that F is defined for all x ∈ C sufficiently close to x̄. A function not<br />

well-defined near x̄ is termed ill-defined near x̄.<br />

We will say that F has a well-defined Jacobian at x̄ ∈ C if there exists an<br />

open neighborhood of x̄, N(x̄), such that N(x̄) ⊆ D and F is continuously<br />

differentiable on N(x̄). Otherwise the function has an ill-defined Jacobian<br />

at x̄. We note that a well-defined Jacobian at x̄ implies that the MCP has a<br />

well-defined function near x̄, but the converse is not true.<br />

PATH uses both function and Jacobian information in its attempt to solve<br />

the MCP. Therefore, both of these definitions are relevant. We discuss cases<br />

where the function and Jacobian are ill-defined in the next two subsections.<br />

We illustrate uses for the merit function information and final point statistics<br />

within the context of these problems.<br />

Function Undefined<br />

We begin with a one-dimensional problem for which F is ill-defined at x = 0<br />
as follows:<br />
0 ≤ x ⊥ 1/x ≥ 0.<br />
Here x must be strictly positive because 1/x is undefined at x = 0. This<br />
condition implies that F(x) must be equal to zero. Since F(x) is strictly<br />
positive for all x strictly positive, this problem has no solution.<br />
positive variable x;<br />
equations F;<br />
F.. 1 / x =g= 0;<br />
model simple / F.x /;<br />
x.l = 1e-6;<br />
solve simple using mcp;<br />
Figure 3.1: GAMS Code for Ill-Defined Function<br />

We are able to perform this analysis because the dimension of the problem<br />

is small. Preprocessing linear problems can be done by the solver in an<br />

attempt to detect obviously inconsistent problems, reduce problem size, and<br />

identify active components at the solution. Similar processing can be done<br />

for nonlinear models, but the analysis becomes more difficult to perform.<br />

Currently, PATH only checks the consistency of the bounds and removes<br />

fixed variables and the corresponding complementary equations from the<br />

model.<br />
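The bound-consistency and fixed-variable checks are simple to illustrate. A minimal sketch follows (hypothetical data layout, not PATH's actual preprocessing code):<br />

```python
def preprocess_bounds(lower, upper):
    """Check bound consistency and split variables into fixed and free.

    A variable with lower == upper is fixed; it and its complementary
    equation can be removed from the model before the solve.
    """
    fixed, free = [], []
    for i, (lo, up) in enumerate(zip(lower, upper)):
        if lo > up:
            raise ValueError(f"inconsistent bounds on variable {i}: [{lo}, {up}]")
        (fixed if lo == up else free).append(i)
    return fixed, free

# Variable 1 is fixed at 2.0 and would be removed from the model.
fixed, free = preprocess_bounds([0.0, 2.0, -1.0], [10.0, 2.0, 1.0])
```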

A modeler might not know a priori that a problem has no solution and<br />

might attempt to formulate and solve it. GAMS code for this model is<br />

provided in Figure 3.1. We must specify an initial value for x in the code.<br />

If we were to not provide one, GAMS would use x = 0 as the default value,<br />

notice that F is undefined at the initial point, and terminate before giving<br />

the problem to PATH. The error message indicates that the function 1/x<br />
is ill-defined at x = 0, but does not determine whether the corresponding<br />

MCP problem has a solution.<br />

After setting the starting point, GAMS generates the model, and PATH<br />

proceeds to “solve” it. A portion of the output relating statistics about the<br />

solution is given in Figure 3.2. PATH uses the Fischer Function indicator as<br />

its termination criterion by default, but evaluates all of the merit functions<br />

given in Section 3.2.1 at the final point. The Normal Map merit function,<br />

and to a lesser extent, the complementarity error, indicate that the “solution”<br />

found does not necessarily solve the MCP.<br />

FINAL STATISTICS<br />
Inf-Norm of Complementarity . . 1.0000e+00 eqn: (F)<br />
Inf-Norm of Normal Map. . . . . 1.1181e+16 eqn: (F)<br />
Inf-Norm of Minimum Map . . . . 8.9441e-17 eqn: (F)<br />
Inf-Norm of Fischer Function. . 8.9441e-17 eqn: (F)<br />
Inf-Norm of Grad Fischer Fcn. . 8.9441e-17 eqn: (F)<br />
FINAL POINT STATISTICS<br />
Maximum of X. . . . . . . . . . 8.9441e-17 var: (X)<br />
Maximum of F. . . . . . . . . . 1.1181e+16 eqn: (F)<br />
Maximum of Grad F . . . . . . . 1.2501e+32 eqn: (F)<br />
var: (X)<br />
Figure 3.2: PATH Output for Ill-Defined Function<br />
To indicate the difference between the merit functions, Figure 3.3 plots<br />
them all for the simple example. We note that as x approaches positive infinity,<br />
numerically, we are at a solution to the problem with respect to all of the<br />
merit functions except for the complementarity error, which remains equal<br />
to one. As x approaches zero, the merit functions diverge, also indicating<br />
that x = 0 is not a solution.<br />
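These observations are easy to reproduce numerically. The sketch below evaluates three of the residuals for F(x) = 1/x (the normal map is omitted, since it needs the pre-image point z rather than x alone); it illustrates the definitions and is not PATH's code:<br />

```python
import math

def residuals(x, F):
    """Residuals for the one-dimensional MCP 0 <= x _|_ F(x) >= 0."""
    comp = abs(x * F)                        # complementarity error x*F(x)
    minmap = abs(min(x, F))                  # natural residual min(x, F(x))
    fischer = abs(math.hypot(x, F) - x - F)  # Fischer-Burmeister function
    return comp, minmap, fischer

# For F(x) = 1/x the complementarity error is identically one, while
# the minimum-map and Fischer residuals vanish as x tends to 0 or infinity.
small = residuals(1e-16, 1e16)   # x near zero
large = residuals(1e16, 1e-16)   # x near "infinity"
```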

The natural residual and Fischer function tend toward 0 as x ↓ 0. From<br />

these measures, we might think x = 0 is the solution. However, as previously<br />

remarked, F is ill-defined at x = 0: F and ∇F become very large, indicating<br />

that the function (and Jacobian) might not be well-defined. We might be<br />

tempted to conclude that if one of the merit function indicators is not close<br />

to zero, then we have not found a solution. This conclusion is not always the<br />

case. When one of the indicators is non-zero, we have reservations about the<br />

solution, but we cannot eliminate the possibility that we are actually close<br />

to a solution. If we slightly perturb the original problem to<br />
0 ≤ x ⊥ 1/(x + ɛ) ≥ 0<br />
for a fixed ɛ > 0, the function is well-defined over C = R^n_+ and has a unique<br />
solution at x = 0. In this case, by starting at x > 0 and sufficiently small,<br />
all of the merit functions, with the exception of the Normal Map, indicate<br />
that we have solved the problem as is shown by the output in Figure 3.4 for<br />
ɛ = 10^−6 and x = 10^−20. In this case, the Normal Map is quite large and<br />

we might think that the function and Jacobian are undefined. When only the<br />

normal map is non-zero, we may have just mis-identified the optimal basis.<br />

By setting the merit function normal option, we can resolve the problem,<br />

identify the correct basis, and solve the problem with all indicators being<br />



[Figure 3.3 plots the four residual functions for the MCP — Normal Map, Complementarity, Natural Residual and Fischer−Burmeister — against x from 0 to 6.]<br />
Figure 3.3: Merit Function Plot<br />

FINAL STATISTICS<br />

Inf-Norm of Complementarity . . 1.0000e-14 eqn: (G)<br />

Inf-Norm of Normal Map. . . . . 1.0000e+06 eqn: (G)<br />

Inf-Norm of Minimum Map . . . . 1.0000e-20 eqn: (G)<br />

Inf-Norm of Fischer Function. . 1.0000e-20 eqn: (G)<br />

Inf-Norm of Grad Fischer Fcn. . 1.0000e-20 eqn: (G)<br />

FINAL POINT STATISTICS<br />

Maximum of X. . . . . . . . . . 1.0000e-20 var: (X)<br />

Maximum of F. . . . . . . . . . 1.0000e+06 eqn: (G)<br />

Maximum of Grad F . . . . . . . 1.0000e+12 eqn: (G)<br />

var: (X)<br />

Figure 3.4: PATH Output for Well-Defined Function<br />
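The magnitudes in Figure 3.4 can be checked by hand: with ɛ = 10^−6 and x = 10^−20 we get F(x) = 1/(x + ɛ) ≈ 10^6, so x·F(x) ≈ 10^−14 and min(x, F(x)) = 10^−20, exactly the Complementarity and Minimum Map lines reported. A quick numeric confirmation (illustrative only):<br />

```python
import math

eps, x = 1e-6, 1e-20
F = 1.0 / (x + eps)       # well-defined for all x >= 0, roughly 1e6

comp = x * F                              # ~1e-14, as reported
minmap = min(x, F)                        # ~1e-20, as reported
fischer = abs(math.hypot(x, F) - x - F)   # ~0 at this near-solution
```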



FINAL STATISTICS<br />

Inf-Norm of Complementarity . . 1.0000e-07 eqn: (F)<br />

Inf-Norm of Normal Map. . . . . 1.0000e-07 eqn: (F)<br />

Inf-Norm of Minimum Map . . . . 1.0000e-07 eqn: (F)<br />

Inf-Norm of Fischer Function. . 2.0000e-07 eqn: (F)<br />

Inf-Norm of Grad FB Function. . 2.0000e+00 eqn: (F)<br />

FINAL POINT STATISTICS<br />

Maximum of X. . . . . . . . . . 1.0000e-14 var: (X)<br />

Maximum of F. . . . . . . . . . 1.0000e-07 eqn: (F)<br />

Maximum of Grad F . . . . . . . 5.0000e+06 eqn: (F)<br />

var: (X)<br />

Figure 3.5: PATH Output for Ill-Defined Jacobian<br />

close to zero. This example illustrates the point that all of these tests are<br />

not infallible. The modeler still needs to do some detective work to determine<br />

if they have found a solution or if the algorithm is converging to a point where<br />

the function is ill-defined.<br />

Jacobian Undefined<br />

Since PATH uses a Newton-like method to solve the problem, it also needs<br />

the Jacobian of F to be well-defined. One model for which the function is<br />

well-defined over C, but for which the Jacobian is undefined at the solution<br />

is: 0 ≤ x ⊥− √ x ≥ 0. This model has a unique solution at x =0.<br />

Using PATH and starting from the point x = 10^−14, PATH generates<br />
the output given in Figure 3.5. We can see that the gradient of the Fischer<br />

Function is nonzero and the Jacobian is beginning to become large. These<br />

conditions indicate that the Jacobian is undefined at the solution. It is<br />

therefore important for a modeler to inspect the given output to guard against<br />

such problems.<br />
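The numbers in Figure 3.5 can be reproduced directly: at x = 10^−14 we have F(x) = −√x = −10^−7 and F′(x) = −1/(2√x) = −5·10^6, matching the Maximum of F and Maximum of Grad F entries. A small check (illustrative):<br />

```python
import math

x = 1e-14
F = -math.sqrt(x)            # -1e-7: the reported "Maximum of F"
dF = -0.5 / math.sqrt(x)     # -5e6: the reported "Maximum of Grad F"
# |F'(x)| grows without bound as x -> 0: the Jacobian is ill-defined
# at the solution x = 0 even though F itself is well-defined there.
```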

If we start from x = 0, PATH correctly informs us that we are at the<br />

solution. Even though the entries in the Jacobian are undefined at this<br />

point, the GAMS interpreter incorrectly returns a value of 0 to PATH. This<br />

problem with the Jacobian is therefore undetectable by PATH. (This problem<br />

has been fixed in versions of GAMS beyond 19.1).<br />



INITIAL POINT STATISTICS<br />

Maximum of X. . . . . . . . . . 4.1279e+06 var: (w.29)<br />

Maximum of F. . . . . . . . . . 2.2516e+00 eqn: (a1.33)<br />

Maximum of Grad F . . . . . . . 6.7753e+06 eqn: (a1.29)<br />

var: (x1.29)<br />

INITIAL JACOBIAN NORM STATISTICS<br />

Maximum Row Norm. . . . . . . . 9.4504e+06 eqn: (a2.29)<br />

Minimum Row Norm. . . . . . . . 2.7680e-03 eqn: (g.10)<br />

Maximum Column Norm . . . . . . 9.4504e+06 var: (x2.29)<br />

Minimum Column Norm . . . . . . 1.3840e-03 var: (w.10)<br />

Figure 3.6: PATH Output - Poorly Scaled Model<br />

3.3.2 Poorly Scaled Models<br />

Problems which are well-defined can have various numerical problems that<br />

can impede the algorithm’s convergence. One particular problem is a badly<br />

scaled Jacobian. In such cases, we can obtain a poor “Newton” direction<br />

because of numerical problems introduced in the linear algebra performed.<br />

This problem can also lead the code to a point from which it cannot recover.<br />

The final model given to the solver should be scaled such that we avoid<br />

numerical difficulties in the linear algebra. The output provided by PATH<br />

can be used to iteratively refine the model so that we eventually end up with<br />

a well-scaled problem. We note that we only calculate our scaling statistics at<br />

the starting point provided. For nonlinear problems these statistics may not<br />

be indicative of the overall scaling of the model. Model specific knowledge is<br />

very important when we have a nonlinear problem because it can be used to<br />

appropriately scale the model to achieve a desired result.<br />

We look at the titan.gms model in MCPLIB, that has some scaling<br />

problems. The relevant output from PATH for the original code is given in<br />

Figure 3.6. The maximum row norm is defined as<br />
max_{1≤i≤n} Σ_{1≤j≤n} |(∇F(x))_{ij}|<br />
and the minimum row norm is<br />
min_{1≤i≤n} Σ_{1≤j≤n} |(∇F(x))_{ij}|.<br />
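These norms are easy to compute from a dense Jacobian. A dependency-free sketch (illustrative; PATH computes them internally):<br />

```python
def jacobian_norms(J):
    """Row and column 1-norms of a dense Jacobian J (list of rows)."""
    row_norms = [sum(abs(v) for v in row) for row in J]
    col_norms = [sum(abs(row[j]) for row in J) for j in range(len(J[0]))]
    return row_norms, col_norms

# A badly scaled 2x2 Jacobian: the row norms spread over nine orders
# of magnitude, much like the statistics in Figure 3.6.
J = [[9.45e6, 1.0e3],
     [1.0e-3, 1.768e-3]]
rows, cols = jacobian_norms(J)
spread = max(rows) / min(rows)   # ratio of largest to smallest row norm
```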



INITIAL POINT STATISTICS<br />

Maximum of X. . . . . . . . . . 1.0750e+03 var: (x1.49)<br />

Maximum of F. . . . . . . . . . 3.9829e-01 eqn: (g.10)<br />

Maximum of Grad F . . . . . . . 6.7753e+03 eqn: (a1.29)<br />

var: (x1.29)<br />

INITIAL JACOBIAN NORM STATISTICS<br />

Maximum Row Norm. . . . . . . . 9.4524e+03 eqn: (a2.29)<br />

Minimum Row Norm. . . . . . . . 2.7680e+00 eqn: (g.10)<br />

Maximum Column Norm . . . . . . 9.4904e+03 var: (x2.29)<br />

Minimum Column Norm . . . . . . 1.3840e-01 var: (w.10)<br />

Figure 3.7: PATH Output - Well-Scaled Model<br />

Similar definitions are used for the column norm. The norm numbers for this<br />

particular example are not extremely large, but we can nevertheless improve<br />

the scaling. We first decided to reduce the magnitude of the a2 block of<br />
equations as indicated by PATH. Using the GAMS modeling language, we<br />
can scale particular equations and variables using the .scale attribute. To<br />
turn the scaling on for the model we use the .scaleopt model attribute. After<br />
scaling the a2 block, we re-ran PATH and found an additional block of<br />
equations that also needed scaling, a1. We also scaled some of the variables,<br />
g and w. The code added to the model follows:<br />

titan.scaleopt = 1;<br />

a1.scale(i) = 1000;<br />

a2.scale(i) = 1000;<br />

g.scale(i) = 1/1000;<br />

w.scale(i) = 100000;<br />

After scaling these blocks of equations in the model, we have improved the<br />

scaling statistics which are given in Figure 3.7 for the new model. For this<br />

particular problem PATH cannot solve the unscaled model, while it can find<br />

a solution to the scaled model. Using the scaling language features and the<br />

information provided by PATH we are able to remove some of the problem’s<br />

difficulty and obtain better performance from PATH.<br />
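The effect of the .scale attributes can be imitated on a small matrix. GAMS divides a scaled equation's row by its scale factor and multiplies a scaled variable's column by its scale factor (our reading of the scaling semantics; the factors below are hypothetical, in the spirit of the titan.gms fix above):<br />

```python
def apply_scaling(J, eq_scale, var_scale):
    """Scaled Jacobian entry: J[i][j] * var_scale[j] / eq_scale[i]."""
    return [[J[i][j] * var_scale[j] / eq_scale[i]
             for j in range(len(J[0]))]
            for i in range(len(J))]

J = [[9.45e6, 1.0e3],
     [1.0e-3, 1.768e-3]]
# Scale the first equation down by 1e3 and the second variable up by 1e2.
Js = apply_scaling(J, eq_scale=[1.0e3, 1.0], var_scale=[1.0, 1.0e2])

def spread(M):
    """Ratio of the largest to the smallest row 1-norm."""
    norms = [sum(abs(v) for v in row) for row in M]
    return max(norms) / min(norms)
# spread(Js) is several orders of magnitude smaller than spread(J).
```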

It is possible to get even more information on initial point scaling by<br />

inspecting the GAMS listing file. The equation row listing gives the values<br />

of all the entries of the Jacobian at the starting point. The row norms<br />

generated by PATH give good pointers into this source of information.<br />



INITIAL POINT STATISTICS<br />

Zero column of order. . . . . . 0.0000e+00 var: (X)<br />

Zero row of order . . . . . . . 0.0000e+00 eqn: (F)<br />

Total zero columns. . . . . . . 1<br />

Total zero rows . . . . . . . . 1<br />

Maximum of F. . . . . . . . . . 1.0000e+00 eqn: (F)<br />

Maximum of Grad F . . . . . . . 0.0000e+00 eqn: (F)<br />

var: (X)<br />

Figure 3.8: PATH Output - Zero Rows and Columns<br />

Not all of the numerical problems are directly attributable to poorly<br />

scaled models. Problems for which the Jacobian of the active constraints<br />

is singular or nearly singular can also cause numerical difficulty as illustrated<br />

next.<br />

3.3.3 Singular Models<br />

Assuming that the problem is well-defined and properly scaled, we can still<br />

have a Jacobian for which the active constraints are singular or nearly singular<br />

(i.e. it is ill-conditioned). When problems are singular or nearly singular,<br />

we are also likely to have numerical problems. As a result the “Newton” direction<br />

obtained from the linear problem solver can be very bad. In PATH,<br />

we can use proximal perturbation or add artificial variables to attempt to<br />

remove the singularity problems from the model. However, it is most often<br />

beneficial for solver robustness to remove singularities if possible.<br />

The easiest problems to detect are those for which the Jacobian has zero<br />

rows and columns. A simple problem for which we have zero rows and<br />

columns is:<br />

−2 ≤ x ≤ 2 ⊥ −x² + 1.<br />

Note that the Jacobian, −2x, is non-singular at all three solutions, but singular<br />

at the point x = 0. Output from PATH on this model starting at x =0<br />

is given in Figure 3.8. The code displays the variables and equations for<br />

which the row/column in the Jacobian is close to zero. These situations are<br />

problematic and for nonlinear problems likely stem from the modeler providing<br />

an inappropriate starting point or fixing some variables resulting in some<br />

equations becoming constant. We note that the solver may perform well in<br />

the presence of zero rows and/or columns, but the modeler should make sure<br />



that these are what was intended.<br />
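Detecting zero rows and columns is a simple scan. For the example above, the Jacobian at the starting point x = 0 is the 1×1 matrix [−2x] = [0], so the single row and column are both flagged, as in Figure 3.8. A sketch (illustrative):<br />

```python
def zero_rows_cols(J, tol=1e-12):
    """Indices of (near-)zero rows and columns of a dense Jacobian."""
    zrows = [i for i, row in enumerate(J) if all(abs(v) <= tol for v in row)]
    zcols = [j for j in range(len(J[0]))
             if all(abs(row[j]) <= tol for row in J)]
    return zrows, zcols

x = 0.0
J = [[-2.0 * x]]                 # Jacobian of F(x) = -x**2 + 1 at x = 0
flagged = zero_rows_cols(J)      # one zero row and one zero column
```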

Singularities in the model can also be detected by the linear solver. This<br />

in itself is a hard problem and prone to error. For matrices which are poorly<br />

scaled, we can incorrectly identify “linearly dependent” rows because of numerical<br />

problems. Setting output factorization singularities yes in<br />

an options file will inform the user which equations the linear solver thinks<br />

are linearly dependent. Typically, singularity does not cause a lot of problems<br />

and the algorithm can handle the situation appropriately. However, an<br />

excessive number of singularities are cause for concern. A further indication<br />

of possible singularities at the solution is the lack of quadratic convergence<br />

to the solution.<br />



Appendix A<br />
Case Study: Von Thunen Land Model<br />

We now turn our attention towards using the diagnostic information provided<br />

by PATH to improve an actual model. The Von Thunen land model is a<br />
problem renowned in the mathematical programming literature for its computational<br />
difficulty. We attempt to understand more carefully the facets<br />
of the problem that make it difficult to solve. This will enable us to outline<br />

and identify these problems and furthermore to extend the model to a more<br />

realistic and computationally more tractable form.<br />

A.1 Classical Model<br />

The problem is cast in the Arrow-Debreu framework as an equilibrium problem.<br />

The basic model is a closed economy consisting of three economic agents,<br />

a landowner, a worker and a porter. There is a central market, around which<br />

concentric regions of land are located. Since the produced goods have to be<br />

delivered to the market, this is an example of a spatial price equilibrium. The<br />

key variables of the model are the prices of commodities, land, labour and<br />

transport. Given these prices, it is assumed that the agents demand certain<br />

amounts of the commodities, which are supplied so as to maximize profit in<br />

each sector. Walras’ law is then a consequence of the assumed competitive<br />

paradigm, namely that supply will equal demand in the equilibrium state.<br />

We now describe the problems that the consumers and the producers face.<br />

We first look at consumption, and derive a demand function for each of the<br />



consumer agents in the economy. Each of these agents has a utility function<br />
that they wish to maximize subject to their budgetary constraints. As is<br />
typical in such problems, the utility function is assumed to be Cobb-Douglas:<br />
u_a(d) = Π_c d_c^{α_{c,a}},  α_{c,a} ≥ 0,  Σ_c α_{c,a} = 1,<br />
where the α_{c,a} are given parameters dependent only on the agent. For each<br />
agent a, the variables d_c represent quantities of the desired commodities c.<br />
In the Von Thunen model, the goods are wheat, rice, corn and barley. The<br />
agents' endowments determine their budgetary constraint as follows. Given<br />
current market prices, an agent's wealth is the value of the initial endowment<br />
of goods at those prices. The agent's problem is therefore<br />
max_d u_a(d) subject to 〈p, d〉 ≤ 〈p, e_a〉, d ≥ 0,<br />
where e_a is the endowment bundle for agent a. A closed form solution,<br />
corresponding to demand from agent a for commodity c, is thus<br />
d_{c,a}(p) := α_{c,a} 〈p, e_a〉 / p_c.<br />
Note that this assumes the prices of the commodities p_c are positive.<br />
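The closed form is easy to sanity-check: the demand spends exactly the agent's wealth and yields at least the utility of any other affordable bundle. A numeric sketch with made-up shares, prices and endowments (two goods, one agent; all values illustrative):<br />

```python
import math

alpha = [0.3, 0.7]     # Cobb-Douglas shares, summing to one
p = [2.0, 4.0]         # commodity prices, assumed positive
e = [1.0, 1.0]         # endowment bundle

wealth = sum(pc * ec for pc, ec in zip(p, e))           # <p, e> = 6
demand = [a * wealth / pc for a, pc in zip(alpha, p)]   # closed-form demand

def utility(d):
    return math.prod(dc ** a for dc, a in zip(d, alpha))

spent = sum(pc * dc for pc, dc in zip(p, demand))  # exhausts the budget
rival = [1.5, 0.75]                                # also costs exactly 6
```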

The supply side of the economy is similar. The worker earns a wage wL<br />

for his labour input. The land is distributed around the market in rings with<br />

a rental rate w_r associated with each ring r of land. The area of land a_r<br />
in each ring is an increasing function of r. The model assumes that labour<br />
and land are substitutable via a constant elasticity of substitution (CES)<br />
function.<br />

Consider the production x_{c,r} of commodity c in region r. In order to<br />
maximize profit (or minimize costs), the labour y_L and land use y_r solve<br />
min w_L y_L + w_r y_r subject to φ_c y_L^{β_c} y_r^{1−β_c} ≥ x_{c,r}, y_L, y_r ≥ 0, (A.1)<br />
where φ_c is a given cost function scale parameter, and β_c ∈ [0, 1] is the share<br />
parameter. The technology constraint is precisely the CES function allowing<br />
a suitable mix of labour and land use. Again, a closed form solution can be<br />
calculated. For example, the demand for labour in order to produce x_{c,r} of<br />
commodity c in region r is given by<br />
(β_c x_{c,r} / (φ_c w_L)) (w_L/β_c)^{β_c} (w_r/(1−β_c))^{1−β_c}.<br />
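Substituting the closed form back into the cost-minimization problem confirms it: the implied labour and land demands meet the technology constraint with equality, and their total cost equals the unit production cost times the output. A numeric sketch (all parameter values illustrative):<br />

```python
# Illustrative parameter values, not data from the model.
beta, phi = 0.4, 2.0     # share and scale parameters
wL, wr = 3.0, 5.0        # wages for labour and land
x = 10.0                 # required output x_{c,r}

# Unit production cost (1/phi)*(wL/beta)**beta*(wr/(1-beta))**(1-beta).
gamma = (wL / beta) ** beta * (wr / (1 - beta)) ** (1 - beta) / phi

yL = beta * gamma * x / wL         # labour demand from the closed form
yr = (1 - beta) * gamma * x / wr   # land demand, by the symmetric argument

output = phi * yL ** beta * yr ** (1 - beta)  # technology constraint, = x
cost = wL * yL + wr * yr                      # equals gamma * x
```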


Considering all such demands, this clearly assumes the prices of inputs w_L, w_r<br />
are positive. A key point to note is that input commodity (factor) demands<br />
to produce x_{c,r} can be determined by first solving (A.1) for unit demand<br />
x_{c,r} ≡ 1 and then multiplying these factor demands by the actual amount<br />
desired. Let ȳ_L and ȳ_r denote the optimal solutions of (A.1) with x_{c,r} ≡ 1.<br />
Using this fact, the unit production cost γ_{c,r} for commodity c in region r can<br />
be calculated as follows:<br />
γ_{c,r} = w_L ȳ_L + w_r ȳ_r = (1/φ_c) (w_L/β_c)^{β_c} (w_r/(1−β_c))^{1−β_c}.<br />

Transportation is provided by a porter, earning a wage w_p. If we denote<br />
the unit cost for transportation of commodity c by t_c, then the unit transportation<br />
cost to market is<br />
T_{c,r}(w_p) := t_c d_r w_p,<br />
where d_r is the distance of region r to the market. Spatial price equilibrium<br />
arises from the consideration:<br />
0 ≤ x_{c,r} ⊥ γ_{c,r}(w_L, w_r) + T_{c,r}(w_p) ≥ p_c.<br />

This is intuitively clear; it states that commodity c will be produced in<br />

region r only if the combined cost of production and transportation equals<br />

the market price.<br />

The above derivations assumed that the producers and consumers acted<br />

as price takers. Walras’ law is now invoked to determine the prices so that<br />

markets clear. The resulting complementarity problem is:<br />

γ_{c,r} = (1/φ_c) (w_L/β_c)^{β_c} (w_r/(1−β_c))^{1−β_c} (A.2)<br />
0 ≤ x_{c,r} ⊥ γ_{c,r} + T_{c,r}(w_p) ≥ p_c (A.3)<br />
0 ≤ w_L ⊥ e_L ≥ Σ_{r,c} x_{c,r} β_c γ_{c,r} / w_L (A.4)<br />
0 ≤ w_r ⊥ a_r ≥ Σ_c x_{c,r} (1−β_c) γ_{c,r} / w_r (A.5)<br />
0 ≤ w_p ⊥ e_P ≥ Σ_{r,c} t_c d_r x_{c,r} (A.6)<br />
0 ≤ p_c ⊥ Σ_r x_{c,r} ≥ (α_{c,P} e_P w_p + α_{c,L} e_L w_L + α_{c,O} Σ_r w_r a_r) / p_c (A.7)<br />

Note that in (A.4), (A.5) and (A.6), the amounts of labour, land and transport<br />

are bounded from above, and hence the prices on these inputs are determined<br />

as multipliers (or shadow prices) on the corresponding constraints.<br />

The final relationship (A.7) in the above complementarity problem corresponds<br />

to market clearance; prices are nonnegative and can only be positive<br />

if supply equals demand. (Some modelers multiply the last inequality<br />

throughout by pc. This removes problems where pc becomes zero, but can<br />

also introduce spurious solutions.)<br />

The Arrow-Debreu theory guarantees that the problem is homogeneous<br />

in prices; (x, λw, λp) is also a solution whenever (x, w, p) solves the above.<br />

Typically this singularity in the model is removed by fixing a numeraire,<br />

that is fixing a price (for example wL = 1) and dropping the corresponding<br />

complementary relationship.<br />
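The homogeneity is easy to confirm for the unit cost: scaling both wages by λ scales γ_{c,r} by λ, and the same scaling propagates through the remaining relations when all prices are scaled together. A numeric spot check (illustrative values):<br />

```python
beta, phi = 0.4, 2.0

def gamma(wL, wr):
    """Unit production cost (1/phi)*(wL/beta)**beta*(wr/(1-beta))**(1-beta)."""
    return (wL / beta) ** beta * (wr / (1 - beta)) ** (1 - beta) / phi

lam = 7.0
scaled = gamma(lam * 3.0, lam * 5.0)   # gamma evaluated at scaled wages
direct = lam * gamma(3.0, 5.0)         # lambda times gamma: the same number
```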

Unfortunately, in this formulation even after fixing a numeraire, some of<br />

the variables p and w may go to zero, resulting in an ill-defined problem. In<br />

the case of the Von Thunen land model, the rental price of land wr decreases<br />

as the distance to market increases, and for remote rings of land, it becomes<br />

zero. A standard modeling fix is to put artificial lower bounds on these<br />

variables. Even with this fix, the problem typically remains very hard to<br />

solve. More importantly, the homogeneity property of the prices used above<br />

to fix a numeraire no longer holds, and the corresponding complementary<br />

relationship (which was dropped from the problem) may fail to be satisfied.<br />

It therefore matters which numeraire is fixed, and many modelers run into<br />

difficulty since in many cases the solution found by a solver is invalid for the<br />

originally posed model.<br />

In order to test our diagnostic information, we implemented a version of<br />

the above model in GAMS. The model corresponds closely to the MCPLIB<br />

model pgvon105.gms except we added more regions to make the problem<br />

even more difficult. The model file has been documented more fully, and the<br />

data rounded to improve clarity.<br />

The first trial we attempted was to solve the model without fixing a<br />

numeraire. In this case, PATH 4.x failed to find a solution. At the starting<br />

point, the indicators described in Section 3.3.1 are reasonable, and there<br />



are no zero rows/columns in the Jacobian. At the best point found, all<br />

indicators are still reasonable. However, the listing file indicates a large<br />

number of division by zero problems occurring in (A.5). We also note that<br />

a nonzero proximal perturbation is used in the first iteration of the crash<br />

method. This is an indication of singularities. We therefore added an option<br />

to output factorization singularities, and singularities appeared in the first<br />

iteration. At this point, we decided to fix a numeraire to see if this alleviated<br />

the problem.<br />

We chose to fix the labour wage rate to 1. After increasing the iterations<br />

allowed to 100,000, PATH 4.x solved the problem. The statistics at the solution<br />

are cause for concern. In particular, the gradient of the Fischer function<br />

is 7 orders of magnitude larger than all the other residuals. Furthermore, the<br />

Jacobian is very large at the solution point. Looking further in the listing<br />

file, a large number of division by zero problems occur in (A.5).<br />

To track down the problem further, we added an artificial lower bound<br />
of 10^−5 on the variables w_r that would not be active at the aforementioned<br />
solution. Resolving gave the same “solution”, but the domain errors<br />
disappeared.<br />

Although the problem is solved, there is concern on two fronts. Firstly, the<br />

gradient of the Fischer function should go to zero at the solution. Secondly,<br />

if a modeler happens to make the artificial lower bounds on the variables a<br />

bit larger, then they become active at the solution, and hence the constraint<br />

that has been dropped by fixing the price of labour at 1 is violated at this<br />

point. Of course, the algorithm is unable to detect this problem, since it<br />

is not part of the model that is passed to it, and the corresponding output<br />

looks satisfactory.<br />

We are therefore led to the conclusion that the model as postulated is<br />

ill-defined. The remainder of this section outlines two possible modeling<br />

techniques to overcome the difficulties with ill-defined problems of this type.<br />

A.2 Intervention Pricing<br />

The principal difficulty is the fact that the rental prices on land go to zero as<br />

proximity to the market decreases, and become zero for sufficiently remote<br />

rings. Such a property is unlikely to hold in a practical setting. Typically, a<br />

landowner has a minimum rental price (for example, land in fallow increases<br />

in value). As outlined above, a fixed lower bound on the rental price violates<br />
the well-established homogeneity property. A suggestion postulated<br />

by Professor Thomas Rutherford is to allow the landowner to intervene and<br />

“purchase-back” his land whenever the rental cost gets smaller than a certain<br />

fraction of the labour wage.<br />

The new model adds a (homogeneous in price) constraint<br />

0 ≤ i_r ⊥ w_r ≥ 0.0001 · w_L<br />
and modifies (A.5) and (A.7) as follows:<br />
0 ≤ w_r ⊥ a_r − i_r ≥ Σ_c x_{c,r} (1−β_c) γ_{c,r} / w_r<br />
0 ≤ p_c ⊥ Σ_r x_{c,r} ≥ (α_{c,P} e_P w_p + α_{c,L} e_L w_L + α_{c,O} Σ_r w_r (a_r − i_r)) / p_c. (A.8)<br />

Given the intervention purchase, we can now add a lower bound on wr to<br />

avoid division by zero errors. In our model we chose 10^−5 since this will never<br />

be active at the solution and therefore will not affect the positive homogeneity.<br />

After this reformulation, PATH 4.x solves the problem. Furthermore,<br />

the gradient of the Fischer function, although slightly larger than the other<br />

residuals, is quite small, and can be made even smaller by reducing the<br />

convergence tolerance of PATH. Inspecting the listing file, the only difficulties<br />

mentioned are division by zero errors in the market clearance condition<br />

(A.8), which can be avoided a posteriori by imposing an artificial (inactive)<br />
lower bound on these prices. We chose not to do this, however.<br />

A.3 Nested Production and Maintenance<br />

Another observation that can be used to overcome the land price going to<br />

zero is the fact that land typically requires some maintenance labour input to<br />

keep it usable for crop growth. Traditionally, in economics, this is carried out<br />

by providing a nested CES function as technology input to the model. The<br />

idea is that commodity c in region r is made from labour and an intermediate<br />

good. The intermediate good is “maintained land”. Essentially, the following<br />



production problem replaces (A.1):<br />
min_{y_M, y_L, y_r, g} w_L(y_M + y_L) + w_r y_r<br />
subject to y_r ≥ (1 − β_c − ɛ) g<br />
y_M ≥ ɛ g<br />
φ_c y_L^{β_c} g^{1−β_c} ≥ 1,<br />
y_M, y_L, y_r, g ≥ 0.<br />
Note that the variable y_M represents “maintenance labour” and g represents<br />
the amount of “maintained land” produced, an intermediate good. The<br />
process of generating maintained land uses a Leontieff production function,<br />
namely<br />
min(λ_r y_r, λ_M y_M) ≥ g.<br />
Here λ_M = 1/ɛ, with ɛ small, corresponds to small amounts of maintenance labour,<br />
while λ_r = 1/(1 − β_c − ɛ) is chosen to calibrate the model correctly. A simple<br />
calculus exercise then generates appropriate demand and cost expressions. The<br />
resulting complementarity problem comprises (A.3), (A.6), (A.7) and<br />
γ_{c,r} = (1/φ_c) (w_L/β_c)^{β_c} ((w_L ɛ + w_r(1 − β_c − ɛ))/(1 − β_c))^{1−β_c}<br />
0 ≤ w_L ⊥ e_L ≥ Σ_{r,c} x_{c,r} γ_{c,r} (β_c/w_L + ɛ(1 − β_c)/(w_L ɛ + w_r(1 − β_c − ɛ)))<br />
0 ≤ w_r ⊥ a_r ≥ Σ_c x_{c,r} γ_{c,r} (1 − β_c)(1 − β_c − ɛ)/(w_L ɛ + w_r(1 − β_c − ɛ))<br />
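As a consistency check on these expressions, write w_g := w_L ɛ + w_r(1 − β_c − ɛ) for the unit cost of maintained land. The demands y_L = β_c γ x/w_L, g = (1 − β_c) γ x/w_g, y_M = ɛ g and y_r = (1 − β_c − ɛ) g then meet the technology constraint exactly and cost γ x in total. A numeric sketch (illustrative parameter values):<br />

```python
beta, phi, eps = 0.4, 2.0, 0.01   # illustrative parameters
wL, wr, x = 3.0, 5.0, 10.0

wg = wL * eps + wr * (1 - beta - eps)   # price of one unit of maintained land
gamma = (wL / beta) ** beta * (wg / (1 - beta)) ** (1 - beta) / phi

yL = beta * gamma * x / wL        # direct labour
g = (1 - beta) * gamma * x / wg   # maintained land (intermediate good)
yM = eps * g                      # maintenance labour
yr = (1 - beta - eps) * g         # raw land

output = phi * yL ** beta * g ** (1 - beta)   # technology constraint, = x
cost = wL * (yM + yL) + wr * yr               # total cost, equals gamma * x
```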

After making the appropriate modifications to the model file, PATH 4.x<br />

solved the problem on defaults without any difficulties. All indicators showed<br />

the problem and solution found to be well-posed.<br />



Bibliography<br />

[1] S. C. Billups. Algorithms for Complementarity Problems and Generalized<br />

Equations. PhD thesis, University of Wisconsin–Madison, Madison,<br />

Wisconsin, August 1995.<br />

[2] S. C. Billups. Improving the robustness of descent-based methods for<br />

semi-smooth equations using proximal perturbations. Mathematical Programming,<br />

87:153–176, 2000.<br />

[3] S. C. Billups and M. C. Ferris. QPCOMP: A quadratic program based<br />

solver for mixed complementarity problems. Mathematical Programming,<br />

76:533–562, 1997.<br />

[4] A. Brooke, D. Kendrick, and A. Meeraus. GAMS: A User’s Guide. The<br />

Scientific Press, South San Francisco, CA, 1988.<br />

[5] R. M. Chamberlain, M. J. D. Powell, and C. Lemaréchal. The watchdog<br />

technique for forcing convergence in algorithms for constrained optimization.<br />

Mathematical Programming Study, 16:1–17, 1982.<br />

[6] V. Chvátal. Linear Programming. W. H. Freeman and Company, New<br />

York, 1983.<br />

[7] R. W. Cottle and G. B. Dantzig. Complementary pivot theory of mathematical<br />

programming. Linear Algebra and Its Applications, 1:103–125,<br />

1968.<br />

[8] R. W. Cottle, J. S. Pang, and R. E. Stone. The Linear Complementarity<br />

Problem. Academic Press, Boston, 1992.<br />

[9] S. P. Dirkse. Robust Solution of Mixed Complementarity Problems.<br />

PhD thesis, Computer Sciences Department, University of Wisconsin,<br />



Madison, Wisconsin, 1994. Available from ftp://ftp.cs.wisc.edu/mathprog/tech-reports/.<br />

[10] S. P. Dirkse and M. C. Ferris. MCPLIB: A collection of nonlinear mixed<br />

complementarity problems. Optimization Methods and Software, 5:319–<br />

345, 1995.<br />

[11] S. P. Dirkse and M. C. Ferris. A pathsearch damped Newton method<br />

for computing general equilibria. Annals of Operations Research, pages<br />

211–232, 1996.<br />

[12] S. P. Dirkse and M. C. Ferris. Crash techniques for large-scale complementarity<br />

problems. In Ferris and Pang [19], pages 40–61.<br />

[13] S. P. Dirkse and M. C. Ferris. Traffic modeling and variational inequalities<br />

using GAMS. In Ph. L. Toint, M. Labbe, K. Tanczos, and<br />

G. Laporte, editors, Operations Research and Decision Aid Methodologies<br />

in Traffic and Transportation Management, volume 166 of NATO<br />

ASI Series F, pages 136–163. Springer-Verlag, 1998.<br />

[14] M. C. Ferris, C. Kanzow, and T. S. Munson. Feasible descent algorithms<br />

for mixed complementarity problems. Mathematical Programming,<br />

86:475–497, 1999.<br />

[15] M. C. Ferris and S. Lucidi. Nonmonotone stabilization methods for<br />

nonlinear equations. Journal of Optimization Theory and Applications,<br />

81:53–71, 1994.<br />

[16] M. C. Ferris, A. Meeraus, and T. F. Rutherford. Computing Wardropian<br />

equilibrium in a complementarity framework. Optimization Methods and<br />

Software, 10:669–685, 1999.<br />

[17] M. C. Ferris and T. S. Munson. Interfaces to PATH 3.0: Design, implementation<br />

and usage. Computational Optimization and Applications,<br />

12:207–227, 1999.<br />

[18] M. C. Ferris and T. S. Munson. Preprocessing complementarity problems.<br />

Mathematical Programming Technical Report 99-07, Computer<br />

Sciences Department, University of Wisconsin, Madison, Wisconsin,<br />

1999.<br />

59


[19] M. C. Ferris and J. S. Pang, editors. Complementarity and Variational<br />

Problems: State of the Art, Philadelphia, Pennsylvania, 1997. SIAM<br />

Publications.<br />

[20] M. C. Ferris and J. S. Pang. Engineering and economic applications of<br />

complementarity problems. SIAM Review, 39:669–713, 1997.<br />

[21] A. Fischer. A special Newton–type optimization method. Optimization,<br />

24:269–284, 1992.<br />

[22] P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright. A practical<br />

anti-cycling procedure for linearly constrained optimization. Mathematical<br />

Programming, 45:437–474, 1989.<br />

[23] L. Grippo, F. Lampariello, and S. Lucidi. A nonmonotone line search<br />

technique for Newton’s method. SIAM Journal on Numerical Analysis,<br />

23:707–716, 1986.<br />

[24] L. Grippo, F. Lampariello, and S. Lucidi. A class of nonmonotone stabilization<br />

methods in unconstrained optimization. Numerische Mathematik,<br />

59:779–805, 1991.<br />

[25] P. T. Harker and J. S. Pang. Finite–dimensional variational inequality<br />

and nonlinear complementarity problems: A survey of theory, algorithms<br />

and applications. Mathematical Programming, 48:161–220, 1990.<br />

[26] G. W. Harrison, T. F. Rutherford, and D. Tarr. Quantifying the<br />

Uruguay round. TheEconomicJournal, 107:1405–1430, 1997.<br />

[27] J. Huang and J. S. Pang. Option pricing and linear complementarity.<br />

Journal of Computational Finance, 2:31–60, 1998.<br />

[28] N. H. Josephy. Newton’s method for generalized equations. Technical<br />

Summary Report 1965, Mathematics Research Center, University of<br />

Wisconsin, Madison, Wisconsin, 1979.<br />

[29] W. Karush. Minima of functions of several variables with inequalities as<br />

side conditions. Master’s thesis, Department of Mathematics, University<br />

of Chicago, 1939.<br />

60


[30] H. W. Kuhn and A. W. Tucker. Nonlinear programming. In J. Neyman,<br />

editor, Proceedings of the Second Berkeley Symposium on Mathematical<br />

Statistics and Probability, pages 481–492. University of California Press,<br />

Berkeley and Los Angeles, 1951.<br />

[31] C. E. Lemke and J. T. Howson. Equilibrium points of bimatrix games.<br />

SIAM Journal on Applied Mathematics, 12:413–423, 1964.<br />

[32] L. Mathiesen. Computation of economic equilibria by a sequence of<br />

linear complementarity problems. Mathematical Programming Study,<br />

23:144–162, 1985.<br />

[33] L. Mathiesen. An algorithm based on a sequence of linear complementarity<br />

problems applied to a Walrasian equilibrium model: An example.<br />

Mathematical Programming, 37:1–18, 1987.<br />

[34] S. M. Robinson. Normal maps induced by linear transformations. Mathematics<br />

of Operations Research, 17:691–714, 1992.<br />

[35] T. F. Rutherford. Extensions of GAMS for complementarity problems<br />

arising in applied economic analysis. Journal of Economic Dynamics<br />

and Control, 19:1299–1324, 1995.<br />

[36] P. Tseng. Growth behavior of a class of merit functions for the nonlinear<br />

complementarity problem. Journal of Optimization Theory and<br />

Applications, 89:17–37, 1996.<br />

[37] S. J. Wright. Primal–Dual Interior–Point Methods. SIAM, Philadelphia,<br />

Pennsylvania, 1997.<br />

61


GAMS/SBB<br />

Contents<br />

1 Introduction<br />

2 The Branch and Bound Algorithm<br />

3 SBB with Pseudo Costs<br />

4 The SBB Options<br />

5 The SBB Log File<br />

6 Comparison of DICOPT and SBB<br />

Release Notes<br />

• April 30, 2002: Level 009<br />

◦ NLP solvers sometimes have difficulty proving the optimality of a good point. The way they report that<br />

solution is with solver status "Terminated by Solver" and model status "Intermediate Nonoptimal".<br />

SBB’s default behavior is to ignore such a solution (potentially go into failseq). With the new option<br />

acceptnonopt these solutions are accepted for further action.<br />

◦ SBB offers node selections that switch between DFS and best bound/best estimate selection. In DFS<br />

mode SBB usually switches back to best bound/estimate if the DFS search resulted in a pruned or<br />

infeasible node or a new integer solution was found. In these cases it can be advantageous to search<br />

the close neighborhood of that node also in a DFS fashion. With the new option dfsstay SBB is<br />

instructed to do some more DFS nodes even after a switch to best bound/estimate has been requested.<br />

• January 11, 2002: Level 008<br />

◦ Maintenance release<br />

• December 11, 2001: Level 007<br />

◦ NLP solvers sometimes have difficulty solving particular nodes, using up all the resources allotted to that<br />

node. SBB provides options (see subres, subiter) to overcome these instances, but these options<br />

must be set in advance. SBB now keeps track of how much time is spent in the nodes, builds an average<br />

over time, and automatically controls the time spent in each node. The option avgresmult allows the<br />

user to customize this new feature.<br />

• November 13, 2001: Level 006<br />

◦ Maintenance release<br />

• May 22, 2001: Level 005<br />

◦ SBB derives an implicit, absolute termination tolerance if the model has a discrete objective row.<br />

This may speed up the overall time if the user has tight termination tolerances (optca, optcr).



◦ SBB passes indices of rows with domain violations back to the LST file. All domain violations from the<br />

root node and from all sub nodes are reported, and the user can take advantage of this information to<br />

overcome these violations.<br />

• March 21, 2001: Level 004<br />

◦ Pseudo costs are available in SBB. See the section SBB with Pseudo Costs.<br />

◦ A mix of DFS/Best Bound/Best Estimate node selection schemes is available through the nodesel<br />

option.<br />

◦ The tryint option now works for integer variables.<br />

◦ Additional information about node and variable selection can be printed to the Log file using the<br />

printbbinfo option.<br />

• December 22, 2000: Level 003<br />

◦ MINOS and SNOPT are available as NLP subsolvers.<br />

• November 19, 2000: Level 002<br />

◦ The model and solver status returned by SBB are synchronized with return codes reported by DICOPT.<br />

• October 16, 2000: Level 001<br />

◦ SBB introduced to GAMS distribution 19.5.<br />

◦ CONOPT and CONOPT2 are the available NLP subsolvers.<br />

1 Introduction<br />

SBB is a new GAMS solver for Mixed Integer Nonlinear Programming (MINLP) models. It is based on a<br />

combination of the standard Branch and Bound (B&B) method known from Mixed Integer Linear Programming<br />

and some of the standard NLP solvers already supported by GAMS. Currently, SBB can use<br />

• CONOPT<br />

• MINOS<br />

• SNOPT<br />

as solvers for submodels.<br />

SBB supports all types of discrete variables supported by GAMS, including:<br />

• Binary<br />

• Integer<br />

• Semicont<br />

• Semiint<br />

• Sos1<br />

• Sos2<br />

2 The Branch and Bound Algorithm<br />

The Relaxed Mixed Integer Nonlinear Programming (RMINLP) model is initially solved using the starting point<br />

provided by the modeler. SBB will stop immediately if the RMINLP model is unbounded or infeasible, or if it<br />

fails (see option infeasseq and failseq below for an exception). If all discrete variables in the RMINLP model<br />

are integer, SBB will return this solution as the optimal integer solution. Otherwise, the current solution is stored<br />

and the Branch and Bound procedure will start.<br />

During the Branch and Bound process, the feasible region for the discrete variables is subdivided, and bounds<br />

on discrete variables are tightened to new integer values to cut off the current non-integer solutions. Each time<br />

a bound is tightened, a new, tighter NLP submodel is solved starting from the optimal solution to the previous<br />

looser submodel. The objective function value from the NLP submodel is assumed to be a lower bound on the<br />

objective in the restricted feasible space (assuming minimization), even though the local optimum found by the



NLP solver may not be a global optimum. If the NLP solver returns a Locally Infeasible status for a submodel, it<br />

is usually assumed that there is no feasible solution to the submodel, even though the infeasibility has only been<br />

determined locally (see option infeasseq below for an exception). If the model is convex, these assumptions will<br />

be satisfied and SBB will provide correct bounds. If the model is not convex, the objective bounds may not be<br />

correct and better solutions may exist in other, unexplored parts of the search space.<br />
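The loop described above can be sketched as follows. This is our own minimal illustration of an NLP-based branch and bound for minimization, not SBB's actual implementation; solve_nlp and branch are hypothetical stand-ins for the NLP subsolver call and the bound-tightening step.

```python
# A minimal sketch of an NLP-based branch-and-bound loop (minimization).
# solve_nlp and branch are hypothetical stand-ins, not SBB internals.
def branch_and_bound(solve_nlp, branch, root):
    """solve_nlp(node) -> (status, obj, x); branch(node, x) -> subnodes."""
    best_int, incumbent = float("inf"), None
    active = [root]                              # stack -> DFS node selection
    while active:
        node = active.pop()
        status, obj, x = solve_nlp(node)         # warm-started NLP submodel
        if status != "optimal" or obj >= best_int:
            continue                             # infeasible, failed, or pruned
        fractional = [v for v in x if abs(v - round(v)) > 1e-5]
        if not fractional:
            best_int, incumbent = obj, x         # new integer solution
        else:
            active.extend(branch(node, x))       # tighten bounds on a variable
    return best_int, incumbent
```

Note that, as the text explains, the pruning step `obj >= best_int` is only valid when the NLP objective is a true lower bound, i.e., when the model is convex.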

3 SBB with Pseudo Costs<br />

Over the last decades quite a number of search strategies have been successfully introduced for mixed integer<br />

linear programming (for details see e.g. J.T. Linderoth and M.W.P. Savelsbergh, A Computational Study of<br />

Search Strategies for Mixed Integer Programming, INFORMS Journal on Computing, 11(2), 1999). Pseudo costs<br />

are key elements of sophisticated search strategies. <strong>Using</strong> pseudo costs, we can estimate the degradation of the<br />

objective function if we move a fractional variable to a close integer value. Naturally, the variable selection can<br />

be based on pseudo costs (see SBB option varsel). Node selection can also make use of pseudo costs: if we can<br />

estimate the change of the objective for moving one fractional variable to the closest integer value, we can then<br />

aggregate this change for all fractional variables, to estimate the objective of the best integer solution reachable<br />

from a particular node (see SBB option nodesel).<br />

Unfortunately, the computation of pseudo costs can be a substantial part of the overall computation. Models with<br />

a large number of fractional variables in the root node are not good candidates for search strategies that require<br />

pseudo costs (varsel 3, nodesel 3,5,6). The impact (positive or negative) of using pseudo costs depends<br />

significantly on the particular model. At this stage, general statements are difficult to make.<br />

Selecting pseudo-cost-related search strategies (varsel 3, nodesel 3,5,6) may use computation time that<br />

sometimes does not pay off. However, we encourage the user to try these options for difficult models that<br />

require a large number of branch-and-bound nodes to solve.<br />
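The best-estimate idea can be made concrete with a small sketch. This is our own illustration, not SBB's exact formula: for each fractional variable, add the cheaper of the estimated up/down degradations to the node's objective value.

```python
def best_estimate(node_obj, fractionals):
    """Estimate the best integer objective reachable from a node (minimization).

    fractionals: list of (frac, pc_down, pc_up) where frac is the fractional
    part of a discrete variable and pc_down/pc_up are its pseudo costs, i.e.
    the estimated objective degradation per unit move down or up.
    """
    est = node_obj
    for frac, pc_down, pc_up in fractionals:
        # Rounding down costs about pc_down * frac, rounding up about
        # pc_up * (1 - frac); assume each variable takes the cheaper direction.
        est += min(pc_down * frac, pc_up * (1.0 - frac))
    return est
```

For example, with a node objective of 100 and a single variable at fraction 0.5 with pseudo costs (2, 4), the estimate is 100 + min(1, 2) = 101.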

4 The SBB <strong>Options</strong><br />

SBB works like other GAMS solvers, and many options can be set in the GAMS model. The most relevant GAMS<br />

options are iterlim, reslim, nodlim, optca, optcr, optfile, cheat, cutoff, prioropt, and tryint.<br />

A description of all available GAMS options can be found in Chapter "Using Solver Specific Options".<br />

If you specify "modelname.optfile = 1;" before the SOLVE statement in your GAMS model, SBB will then<br />

look for and read an option file with the name sbb.opt (see "Using Solver Specific Options" for general use of<br />

solver option files). Unless explicitly specified in the SBB option file, the NLP subsolvers will not read an option<br />

file. The syntax for the SBB option file is<br />

optname value<br />

with one option on each line.<br />

For example,<br />

rootsolver conopt.1<br />

subsolver snopt<br />

loginterval 10<br />

The first two lines determine the NLP subsolvers for the Branch and Bound procedure. CONOPT with the option<br />

file conopt.opt will be used for solving the root node. SNOPT with no option file will be used for the remaining<br />

nodes. The last option determines the frequency for log line printing. Every 10th node, and each node with a<br />

new integer solution, causes a log line to be printed. The following options are implemented:



Option Description Default<br />

rootsolver solver[.n]: Solver is the name of the GAMS NLP solver that should be used in the root node, and n is the integer corresponding to optfile for the root node. If .n is missing, optfile is treated as zero, i.e., the NLP solver will not look for an options file. This SBB option can be used to overwrite the default that uses the NLP solver specified with an Option NLP = solver; statement or the default GAMS solver for NLP. Default: GAMS NLP solver.<br />

subsolver solver[.n]: Similar to rootsolver but applied to the subnodes. Default: GAMS NLP solver.<br />

loginterval: The interval (number of nodes) for which log lines are written. Default: 1.<br />

loglevel: The level of log output. 0: only SBB log lines with one line every loginterval nodes; 1: NLP solver log for the root node plus SBB log lines as 0; 2: NLP solver log for all nodes plus SBB log lines as 0. Default: 1.<br />

subres: The default for subres is passed on through reslim. Sets the time limit in seconds for solving a node in the B&B tree. If the NLP solver exceeds this limit it is handled like a failure and the node is ignored, or the solvers in the failseq are called. Default: reslim.<br />

subiter: The default for subiter is passed on through iterlim. Similar to subres but sets the iteration limit for solving a node in the B&B tree. Default: iterlim.<br />

failseq solver1[.n1] solver2[.n2] ...: solver1 is the name of a GAMS NLP solver to be used if the default solver fails, i.e., if it was not stopped by an iteration, resource, or domain limit and does not return a locally optimal or locally infeasible solution. n1 is the value of optfile passed to the alternative NLP solver. If .n1 is left blank it is interpreted as zero. Similarly, solver2 is the name of a GAMS NLP solver that is used if solver1 fails, and n2 is the value of optfile passed to the second NLP solver. If you have a difficult model where solver failures are not unlikely, you may add more solver.n pairs. You can use the same solver several times with different options files. failseq conopt conopt.2 conopt.3 means: try CONOPT with no options file; if this approach fails, try CONOPT with options file conopt.op2, and if it again fails, try CONOPT with options file conopt.op3. If all solver and options file combinations fail, the node will be labeled "ignored" and will not be explored further. The default is to try only one solver (the rootsolver or subsolver) and to ignore nodes with a solver failure. Default: None.<br />

infeasseq level solver1[.n1] solver2[.n2] ...: The purpose of infeasseq is to avoid cutting parts of the search tree that appear to be infeasible but really are feasible. If the NLP solver labels a node "Locally Infeasible" and the model is not convex, a feasible solution may actually exist. If SBB is high in the search tree it can be very drastic to prune the node immediately. SBB is therefore directed to try the solver/option combinations in the list as long as the depth in the search tree is less than the integer value level. If the list is exhausted without finding a feasible solution, the node is assumed to be infeasible. The default is to trust that Locally Infeasible nodes are indeed infeasible and to remove them from further consideration. Default: None.<br />

acceptnonopt: If this option is set to 1 and the subsolver terminates with solver status "Terminated by Solver" and model status "Intermediate Nonoptimal", SBB takes this as a good solution and keeps on going. In default mode such a return is treated as a subsolver failure and the failseq is consulted. Default: 0.<br />

avgresmult: Similar to subres, this option allows the user to control the time limit spent in a node. SBB keeps track of how much time is spent in the nodes and builds an average over time. This average multiplied by the factor avgresmult is set as a time limit for solving a node in the B&B tree. If the NLP solver exceeds this limit it is handled like a failure: the node is ignored or the solvers in the failseq are called. Setting avgresmult to 0 will disable the automatic time limit feature. A multiplier is not very useful for very small node solution times; therefore, independent of each node, SBB grants the solver at least 5 seconds to solve the node. The competing option subres overwrites the automatically generated resource limit. Default: 5.<br />

nodesel: Node selection. 0: automatic; 1: Depth First Search (DFS); 2: Best Bound (BB); 3: Best Estimate (BE); 4: DFS/BB mix; 5: DFS/BE mix; 6: DFS/BB/BE mix. Default: 0.<br />

dfsstay: If the node selection is a B*/DFS mix, SBB switches frequently to DFS node selection mode. It switches back into B* node selection mode if no subnodes were created (new int, pruned, infeasible, fail). It can be advantageous to search the neighborhood of the last node also in a DFS manner. Setting dfsstay to n instructs SBB to stay in DFS mode for another n nodes. Default: 0.<br />

varsel: Variable selection. 0: automatic; 1: maximum integer infeasibility; 2: minimum integer infeasibility; 3: pseudo costs. Default: 0.<br />

epint: The integer infeasibility tolerance. Default: 1.0e-5.<br />

memnodes: The maximum number of nodes SBB can have in memory. If this number is exceeded, SBB will terminate and return the best solution found so far. Default: 10000.<br />

printbbinfo: Additional info in the log output. 0: no additional info; 1: the node and variable selection for the current node are indicated by a two-letter code at the end of the log line. The first letter represents the node selection: D for DFS, B for Best Bound, and E for Best Estimate. The second letter represents the variable selection: X for maximum infeasibility, N for minimum infeasibility, and P for pseudo cost. Default: 0.<br />

intsollim: The maximum number of integer solutions. If this number is exceeded, SBB will terminate and return the best solution found so far. Default: 99999.<br />
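Combining several of these options, a hypothetical sbb.opt for a difficult nonconvex model might look like the following. All values here are illustrative only; the leading integer of infeasseq is the depth limit described above.

```
rootsolver conopt.1
subsolver conopt
failseq minos snopt
infeasseq 5 minos snopt
nodesel 4
avgresmult 10
loginterval 5
```

This setup solves the root node with CONOPT and options file conopt.opt, falls back to MINOS and then SNOPT on subsolver failures, double-checks locally infeasible nodes in the top 5 levels of the tree, and uses the DFS/Best Bound node selection mix.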

5 The SBB Log File<br />

The SBB Log file (usually directed to the screen) can be controlled with the loginterval and loglevel options in<br />

SBB. It will by default first show the iteration output from the NLP solver that solves the root node. This is<br />

followed by output from SBB describing the search tree. An example of this search tree output follows:<br />

Root node solved locally optimal.<br />

Node Act. Lev. Objective IInf Best Int. Best Bound Gap (2 secs)<br />

0 0 0 8457.6878 3 - 8457.6878 -<br />

1 1 1 8491.2869 2 - 8457.6878 -<br />

2 2 2 8518.1779 1 - 8457.6878 -<br />

* 3 3 3 9338.1020 0 9338.1020 8457.6878 0.1041



4 2 1 pruned - 9338.1020 8491.2869 0.0997<br />

Solution satisfies optcr<br />

Statistics:<br />

Iterations : 90<br />

NLP Seconds : 0.110000<br />

B&B nodes : 3<br />

MIP solution : 9338.101979 found in node 3<br />

Best possible : 8491.286941<br />

Absolute gap : 846.815039 optca : 0.000000<br />

Relative gap : 0.099728 optcr : 0.100000<br />

Model Status : 8<br />

Solver Status : 1<br />

NLP Solver Statistics<br />

Total Number of NLP solves : 7<br />

Total Number of NLP failures: 0<br />

Details: conopt<br />

# execs 7<br />

# failures 0<br />

Terminating.<br />

The fields in the log are:<br />

Field Description<br />

Node The number of the current node. The root node is node 0.<br />

Act The number of active nodes defined as the number of subnodes that have not yet been<br />

solved.<br />

Lev The level in the search tree, i.e., the number of branches needed to reach this node.<br />

Objective The objective function value for the node. A numerical value indicates that the node was<br />

solved and the objective was good enough for the node to not be ignored. ”pruned” indicates<br />

that the objective value was worse than the Best Integer value, ”infeasible” indicates<br />

that the node was Infeasible or Locally Infeasible, and ”ignored” indicates that the node<br />

could not be solved (see under failseq above).<br />

IInf The number of integer infeasibilities, i.e. the number of variables that are supposed to be<br />

binary or integer that do not satisfy the integrality requirement. Semi continuous variables<br />

and SOS variables may also contribute to IInf.<br />

Best Int The value of the best integer solution found so far. A dash (-) indicates that an integer<br />

solution has not yet been found. A star (*) in column one indicates that the node is integer<br />

and that the solution is better than the best yet found.<br />

Best Bound The minimum value of "Objective" for the subnodes that have not been solved yet (maximum<br />

for maximization models). For convex models, Best Bound will increase monotonically.<br />

For nonconvex models, Best Bound may decrease, indicating that the Objective<br />

value for a node was not a valid lower bound for that node.<br />

Gap The relative gap between the Best Integer solution and the Best Bound.<br />

The remaining part of the Log file displays various solution statistics similar to those provided by the MIP solvers.<br />

This information can also be found in the Solver Status area of the GAMS listing file.<br />
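The gap figures in the statistics above can be reproduced with a small sketch (our own, not SBB source code), assuming the relative gap is measured against the best bound, which matches the numbers in this log:

```python
def gaps(best_int, best_bound):
    """Absolute and relative optimality gap as they appear in the SBB log."""
    absolute = abs(best_int - best_bound)
    relative = absolute / abs(best_bound)
    return absolute, relative

# Values from the log above; SBB stops once the absolute gap falls below
# optca or the relative gap falls below optcr (0.1 in this run).
abs_gap, rel_gap = gaps(9338.101979, 8491.286941)
print(round(abs_gap, 6), round(rel_gap, 6))
```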

The following Log file shows cases where the NLP solver fails to solve a subnode. The text ”ignored” in the<br />

Objective field shows the failure, and the values in parentheses following the Gap field are the Solver and Model<br />

status returned by the NLP solver:<br />

Root node solved locally optimal.<br />

Node Act. Lev. Objective IInf Best Int. Best Bound Gap (2 secs)<br />

0 0 0 6046.0186 12 - 6046.0186 -<br />

1 1 1 infeasible - - 6046.0186 -<br />

2 0 1 6042.0995 10 - 6042.0995 -



3 1 2 ignored - - 6042.0995 - (4,6)<br />

4 0 2 5804.5856 8 - 5804.5856 -<br />

5 1 3 ignored - - 5804.5856 - (4,7)<br />

The next Log file shows the effect of the infeasseq and failseq options on the model above. CONOPT with<br />

options file conopt.opt (the default solver and options file pair for this model) considers the first subnode to be<br />

locally infeasible. CONOPT1, MINOS, and SNOPT, all with no options file, are therefore tried in sequence. In<br />

this case, they all declare the node infeasible and it is considered to be infeasible.<br />

In node 3, CONOPT fails but CONOPT1 finds a Locally Optimal solution, and this solution is then used for<br />

further search. The option file for the following run would be:<br />

rootsolver conopt.1<br />

subsolver conopt.1<br />

infeasseq conopt1 minos snopt<br />

The log looks as follows:<br />

Root node solved locally optimal.<br />

Node Act. Lev. Objective IInf Best Int. Best Bound Gap (2 secs)<br />

0 0 0 6046.0186 12 - 6046.0186 -<br />

conopt.1 reports locally infeasible<br />

Executing conopt1<br />

conopt1 reports locally infeasible<br />

Executing minos<br />

minos reports locally infeasible<br />

Executing snopt<br />

1 1 1 infeasible - - 6046.0186 -<br />

2 0 1 6042.0995 10 - 6042.0995 -<br />

conopt.1 failed. 4 TERMINATED BY SOLVER, 7 INTERMEDIATE NONOPTIMAL<br />

Executing conopt1<br />

3 1 2 4790.2373 8 - 6042.0995 -<br />

4 2 3 4481.4156 6 - 6042.0995 -<br />

conopt.1 reports locally infeasible<br />

Executing conopt1<br />

conopt1 reports locally infeasible<br />

Executing minos<br />

minos failed. 4 TERMINATED BY SOLVER, 6 INTERMEDIATE INFEASIBLE<br />

Executing snopt<br />

5 3 4 infeasible - - 6042.0995 -<br />

6 2 4 4480.3778 4 - 6042.0995 -<br />

The Log file shows a solver statistic at the end, summarizing how many times an NLP was executed and how<br />

often it failed:<br />

NLP Solver Statistics<br />

Total Number of NLP solves : 45<br />

Total Number of NLP failures: 13<br />

Details: conopt minos snopt<br />

# execs 34 3 8<br />

# failures 4 3 6<br />

The solutions found by the NLP solver to the subproblems in the Branch and Bound may not be the global<br />

optima. Therefore, the objective can improve even though we restrict the problem by tightening some bounds.<br />

These jumps of the objective in the wrong direction, which may also have an impact on the best bound/best possible,<br />

are reported in a separate statistic:



Non convex model!<br />

# jumps in best bound : 2<br />

Maximum jump in best bound : 20.626587 in node 13<br />

# jumps to better objective : 2<br />

Maximum jump in objective : 20.626587 in node 13<br />

6 Comparison of DICOPT and SBB<br />

Until recently, MINLP models could only be solved with the DICOPT solver. DICOPT is based on the outer<br />

approximation method. Initially, the RMINLP model is solved just as in SBB. The model is then linearized<br />

around this point and a linear MIP model is solved. The discrete variables are then fixed at the optimal values<br />

from the MIP model, and the resulting NLP model is solved. If the NLP model is feasible, we have an integer<br />

feasible solution.<br />

The model is linearized again and a new MIP model with both the old and new linearized constraints is solved.<br />

The discrete variables are again fixed at the optimal values, and a new NLP model is solved.<br />

The process stops when the MIP model becomes infeasible, when the NLP solution becomes worse, or, in some<br />

cases, when bounds derived from the MIP model indicate that it is safe to stop.<br />
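The outer approximation loop just described can be sketched as follows. This is our own schematic of the method, not DICOPT's implementation; all four callbacks are hypothetical stand-ins.

```python
# Schematic outer-approximation loop (minimization); the four callbacks
# (solve_rminlp, linearize, solve_mip, solve_nlp_fixed) are stand-ins.
def outer_approximation(solve_rminlp, linearize, solve_mip, solve_nlp_fixed,
                        max_cycles=20):
    x = solve_rminlp()                 # relaxed MINLP gives a starting point
    cuts, best = [], float("inf")
    for _ in range(max_cycles):
        cuts += linearize(x)           # keep both old and new linearizations
        feasible, y = solve_mip(cuts)  # MIP over all cuts proposes integers
        if not feasible:
            break                      # MIP infeasible: stop
        obj, x = solve_nlp_fixed(y)    # NLP with discrete variables fixed
        if obj >= best:
            break                      # NLP solution got worse: stop
        best = obj                     # integer feasible solution found
    return best
```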

DICOPT is based on the assumption that MIP models can be solved efficiently while NLP models can be expensive<br />

and difficult to solve. The MIP models try to approximate the NLP model over a large area and solve it using<br />

cheap linear technology. Ideally, only a few NLPs must be solved.<br />

DICOPT can experience difficulties solving models, if many or all the NLP submodels are infeasible. DICOPT<br />

can also have problems if the linearizations used for the MIP model create ill-conditioned models. The MIP<br />

models may become very difficult to solve, and the results from the MIP models may be poor as initial values for<br />

the NLP models. The linearized constraint used by DICOPT may also exclude certain areas of the feasible space<br />

from consideration.<br />

SBB uses different assumptions and works very differently. Most of the work in SBB involves solving NLP models.<br />

Since the NLP submodels differ only in one or a few bounds, the assumption is that the NLP models can be<br />

solved quickly using a good restart procedure. Since the NLP models differ very little and good initial values are<br />

available, the solution process will be fairly reliable compared to the solution process in DICOPT, where initial<br />

values of good quality seldom are available. Because search space is reduced based on very different grounds than<br />

in DICOPT, other solutions may therefore be explored.<br />

Overall, DICOPT should perform better on models that have a significant and difficult combinatorial part, while<br />

SBB may perform better on models that have fewer discrete variables but more difficult nonlinearities (and<br />

possibly also on models that are fairly nonconvex).<br />


GAMS/SNOPT: AN SQP ALGORITHM FOR LARGE-SCALE CONSTRAINED OPTIMIZATION<br />

PHILIP E. GILL ∗ WALTER MURRAY † MICHAEL A. SAUNDERS † ARNE DRUD ‡ ERWIN KALVELAGEN §<br />

January 19, 2000<br />

1 Introduction<br />

This section describes the GAMS interface to the general-purpose NLP solver<br />

SNOPT (Sparse Nonlinear Optimizer), which implements a sequential quadratic<br />

programming (SQP) method for solving constrained optimization problems with<br />

smooth nonlinear functions in the objective and constraints. The optimization<br />

problem is assumed to be stated in the form<br />

NP   minimize or maximize f(x) over x,<br />

     subject to  F(x) ∼ b1,<br />

                 Gx ∼ b2,<br />

                 l ≤ x ≤ u,        (1)<br />

where x ∈ ℜn, f(x) is a linear or nonlinear smooth objective function, l and<br />

u are constant lower and upper bounds, F(x) is a set of nonlinear constraint<br />

functions, G is a sparse matrix, ∼ is a vector of relational operators (≤, ≥ or<br />

=), and b1 and b2 are right-hand side constants. F(x) ∼ b1 are the nonlinear<br />

constraints of the model and Gx ∼ b2 form the linear constraints.<br />

The gradients of f and Fi are automatically provided by GAMS, using its<br />

automatic differentiation engine.<br />

∗ Department of Mathematics, University of California, San Diego, La Jolla, CA 92093-0112.<br />

† Department of EESOR, Stanford University, Stanford, CA 94305-4023<br />

‡ ARKI Consulting and Development, Bagsvaerd, Denmark<br />

§ GAMS Development Corp., Washington DC<br />


The bounds may include special values -INF or +INF to indicate lj = −∞ or<br />

uj = +∞ for appropriate j. Free variables have both bounds infinite and fixed<br />

variables have lj = uj.<br />

1.1 Problem Types<br />

If the nonlinear functions are absent, the problem is a linear program (LP)<br />

and SNOPT applies the primal simplex method [2]. Sparse basis factors are<br />

maintained by LUSOL [11] as in MINOS [14].<br />

If only the objective is nonlinear, the problem is linearly constrained (LC)<br />

and tends to solve more easily than the general case with nonlinear constraints<br />

(NC). Note that GAMS models have an objective variable instead of an objective<br />

function. The GAMS/SNOPT link will try to substitute out the objective<br />

variable and reformulate the model such that SNOPT will see a true objective<br />

function.<br />

For both linearly and nonlinearly constrained problems SNOPT applies a<br />

sparse sequential quadratic programming (SQP) method [6], using limited-memory<br />

quasi-Newton approximations to the Hessian of the Lagrangian. The<br />

merit function for steplength control is an augmented Lagrangian, as in the<br />

dense SQP solver NPSOL [7, 9].<br />

In general, SNOPT requires less matrix computation than NPSOL and fewer<br />

evaluations of the functions than the nonlinear algorithms in MINOS [12, 13]. It<br />

is suitable for nonlinear problems with thousands of constraints and variables,<br />

but not thousands of degrees of freedom. (Thus, for large problems there should<br />

be many constraints and bounds, and many of them should be active at a<br />

solution.)<br />

1.2 Selecting the SNOPT <strong>Solver</strong><br />

The GAMS system can be instructed to use the SNOPT solver by incorporating<br />

the following option in the GAMS model:<br />

option NLP=SNOPT;<br />

If the model contains non-smooth functions like abs(x), or max(x, y) you<br />

can try to get it solved by SNOPT using<br />

option DNLP=SNOPT;<br />

These models have discontinuous derivatives, however, and SNOPT was not<br />

designed for solving such models. Discontinuities in the gradients can sometimes<br />

be tolerated if they are not too close to an optimum.<br />

It is also possible to specify NLP=SNOPT or DNLP=SNOPT on the command line,<br />

as in:<br />

> gamslib chem<br />

> gams chem nlp=snopt<br />



2 Description of the method<br />

Here we briefly describe the main features of the SQP algorithm used in SNOPT<br />

and introduce some terminology. The SQP algorithm is fully described in [6].<br />

2.1 Objective function reconstruction<br />

The first step GAMS/SNOPT performs is to try to reconstruct the objective<br />

function. In GAMS, optimization models minimize or maximize an objective<br />

variable. SNOPT however works with an objective function. One way of dealing<br />

with this is to add a dummy linear function with just the objective variable.<br />

Consider the following GAMS fragment:<br />

obj.. z =e= sum(i, sqr(resid(i)));<br />

model m /all/;<br />

solve m using nlp minimizing z;<br />

This can be cast in form (1) by saying minimize z subject to z = ∑i resid_i^2<br />

and the other constraints in the model. Although simple, this approach is not<br />

always preferable. Especially when all constraints are linear it is important to<br />

minimize ∑i resid_i^2 directly. This can be achieved by a simple reformulation:<br />

z can be substituted out. The substitution mechanism carries out the substitution<br />

if all of the following conditions hold:<br />

• the objective variable z is a free continuous variable (no bounds are defined<br />

on z),<br />

• z appears linearly in the objective function,<br />

• the objective function is formulated as an equality constraint,<br />

• z is only present in the objective function and not in other constraints.<br />

For many models it is very important that the nonlinear objective function<br />

be used by SNOPT. For instance the model chem.gms from the model library<br />

solves in 16 iterations. When we add the bound<br />

energy.lo = 0;<br />

on the objective variable energy, thus preventing it from being substituted<br />

out, SNOPT will not be able to find a feasible point for the given starting point.<br />

This reformulation mechanism has been extended for substitutions along the<br />

diagonal. For example, the GAMS model<br />

variables x,y,z;<br />

equations e1,e2;<br />

e1..z =e= y;<br />

e2..y =e= sqr(1+x);<br />



model m /all/;<br />

option nlp=snopt;<br />

solve m using nlp minimizing z;<br />

will be reformulated as an unconstrained optimization problem<br />

minimize f(x) = (1 + x)^2.<br />

These additional reformulations can be turned off by using the statement option<br />

reform = 0; (see §4.1).<br />

2.2 Constraints and slack variables<br />

The m general constraints of the problem (1) are formed by F (x) ∼ b1 and<br />

Gx ∼ b2. SNOPT will add to each general constraint a slack variable si with<br />

appropriate bounds. The problem defined by (1) can therefore be rewritten in<br />

the following equivalent form:<br />

minimize (over x, s)   f(x)<br />

subject to   ( F(x) ; Gx ) − s = 0,   l ≤ ( x ; s ) ≤ u,<br />

where a maximization problem is cast into a minimization by multiplying the<br />

objective function by −1.<br />

The linear and nonlinear general constraints become equalities of the form<br />

F (x) − sN = 0 and Gx − sL = 0, where sL and sN are known as the linear and<br />

nonlinear slacks.<br />

2.3 Major iterations<br />

The basic structure of SNOPT’s solution algorithm involves major and minor<br />

iterations. The major iterations generate a sequence of iterates (xk) that satisfy<br />

the linear constraints and converge to a point that satisfies the first-order<br />

conditions for optimality. At each iterate xk a QP subproblem is used to<br />

generate a search direction towards the next iterate xk+1. The constraints<br />

of the subproblem are formed from the linear constraints Gx − sL = 0 and the<br />

nonlinear constraint linearization<br />

F (xk) + F ′ (xk)(x − xk) − sN = 0,<br />

where F ′ (xk) denotes the Jacobian: a matrix whose rows are the first derivatives<br />

of F (x) evaluated at xk. The QP constraints therefore comprise the m linear<br />

constraints<br />

F′(xk) x − sN = −F(xk) + F′(xk) xk,<br />

Gx − sL = 0,<br />


where x and s are bounded by l and u as before. If the m × n matrix A and<br />

m-vector b are defined as<br />

A = ( F′(xk) ; G )   and   b = ( −F(xk) + F′(xk) xk ; 0 ),<br />

then the QP subproblem can be written as<br />

minimize (over x, s)   q(x)   subject to   Ax − s = b,   l ≤ ( x ; s ) ≤ u,    (2)<br />

where q(x) is a quadratic approximation to a modified Lagrangian function [6].<br />
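To make the linearization used in the QP constraints concrete, here is a small Python sketch (illustrative only, not part of GAMS/SNOPT) for a single constraint function F(x) = x^2 about xk = 2; the names F, F_prime and F_lin are invented for the example.<br />

```python
# Sketch: linearizing a scalar constraint F(x) = x^2 about x_k = 2.
# F'(x) = 2x, so the linearization is F(x_k) + F'(x_k)(x - x_k).
def F(x):
    return x * x

def F_prime(x):
    return 2.0 * x

x_k = 2.0

def F_lin(x):
    # First-order Taylor expansion of F about x_k, as used in the QP constraints.
    return F(x_k) + F_prime(x_k) * (x - x_k)

# At x_k the linearization matches F exactly; away from x_k it is approximate.
exact_at_xk = F_lin(x_k)   # equals F(2.0)
approx_at_3 = F_lin(3.0)   # 4 + 4*(3-2) = 8.0, while F(3.0) = 9.0
```

At xk the linearization agrees with F in both value and slope; the gap F(3) − F_lin(3) = 1 is the linearization error that the sequence of major iterations corrects for.<br />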

2.4 Minor iterations<br />

Solving the QP subproblem is itself an iterative procedure, with the minor<br />

iterations of an SQP method being the iterations of the QP method. At each<br />

minor iteration, the constraints Ax − s = b are (conceptually) partitioned into<br />

the form<br />

BxB + SxS + NxN = b,<br />

where the basis matrix B is square and nonsingular. The elements of xB, xS<br />

and xN are called the basic, superbasic and nonbasic variables respectively; they<br />

are a permutation of the elements of x and s. At a QP solution, the basic<br />

and superbasic variables will lie somewhere between their bounds, while the<br />

nonbasic variables will be equal to one of their upper or lower bounds. At each<br />

iteration, xS is regarded as a set of independent variables that are free to move<br />

in any desired direction, namely one that will improve the value of the QP<br />

objective (or the sum of infeasibilities). The basic variables are then adjusted<br />

in order to ensure that (x, s) continues to satisfy Ax − s = b. The number<br />

of superbasic variables (nS say) therefore indicates the number of degrees of<br />

freedom remaining after the constraints have been satisfied. In broad terms, nS<br />

is a measure of how nonlinear the problem is. In particular, nS will always be<br />

zero at a solution for LP problems.<br />

If it appears that no improvement can be made with the current definition<br />

of B, S and N, a nonbasic variable is selected to be added to S, and the process<br />

is repeated with the value of nS increased by one. At all stages, if a basic or<br />

superbasic variable encounters one of its bounds, the variable is made nonbasic<br />

and the value of nS is decreased by one.<br />

Associated with each of the m equality constraints in Ax−s = b are the dual<br />

variables πi. Similarly, each variable in (x, s) has an associated reduced gradient<br />

dj. The reduced gradients for the variables x are the quantities g − A^T π, where g<br />

is the gradient of the QP objective, and the reduced gradients for the slacks are<br />

the dual variables π. The QP subproblem is optimal if dj ≥ 0 for all nonbasic<br />

variables at their lower bounds, dj ≤ 0 for all nonbasic variables at their upper<br />

bounds, and dj = 0 for other variables, including superbasics. In practice, an<br />

approximate QP solution is found by relaxing these conditions on dj (see the<br />

Minor optimality tolerance in §4.3).<br />
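The sign conditions on dj can be summarized in a few lines of code. The sketch below is a hypothetical Python illustration (not SNOPT source); qp_optimal and the status labels are invented names.<br />

```python
# Sketch (illustrative, not SNOPT code): checking the QP optimality conditions
# on reduced gradients d_j, given each variable's status and a tolerance.
def qp_optimal(vars_info, tol=1e-6):
    """vars_info: list of (d_j, status) pairs with status in
    {'nonbasic_lower', 'nonbasic_upper', 'basic', 'superbasic'}."""
    for d, status in vars_info:
        if status == 'nonbasic_lower' and d < -tol:   # needs d_j >= 0
            return False
        if status == 'nonbasic_upper' and d > tol:    # needs d_j <= 0
            return False
        if status in ('basic', 'superbasic') and abs(d) > tol:  # needs d_j = 0
            return False
    return True
```

For example, qp_optimal([(0.5, 'nonbasic_lower'), (-0.2, 'nonbasic_upper'), (0.0, 'superbasic')]) returns True, while a superbasic variable with a nonzero reduced gradient makes the test fail. The tolerance argument plays the role of the relaxation mentioned above.<br />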



2.5 The merit function<br />

After a QP subproblem has been solved, new estimates of the NP solution are<br />

computed using a linesearch on the augmented Lagrangian merit function<br />

M(x, s, π) = f(x) − π^T ( F(x) − sN ) + 1/2 ( F(x) − sN )^T D ( F(x) − sN ),    (3)<br />

where D is a diagonal matrix of penalty parameters. If (xk, sk, πk) denotes the<br />

current solution estimate and (x̂k, ŝk, π̂k) denotes the optimal QP solution, the<br />

linesearch determines a step αk (0 < αk ≤ 1) such that the new point<br />

( xk+1 ; sk+1 ; πk+1 ) = ( xk ; sk ; πk ) + αk ( x̂k − xk ; ŝk − sk ; π̂k − πk )    (4)<br />

gives a sufficient decrease in the merit function. When necessary, the penalties<br />

in D are increased by the minimum-norm perturbation that ensures descent<br />

for M [9]. As in NPSOL, sN is adjusted to minimize the merit function as a<br />

function of s prior to the solution of the QP subproblem. For more details, see<br />

[7, 3].<br />
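The following Python fragment is an illustrative evaluation of the merit function (3) for small dense data; merit and its argument names are invented for this sketch and are not part of the GAMS/SNOPT interface.<br />

```python
# Sketch: evaluating the augmented Lagrangian merit function (3) for a tiny
# problem with two nonlinear constraints; pure-Python lists, no solver code.
def merit(f_val, F_vals, sN, pi, D):
    # r = F(x) - sN is the nonlinear constraint residual
    r = [Fi - si for Fi, si in zip(F_vals, sN)]
    linear_term = sum(pi_i * r_i for pi_i, r_i in zip(pi, r))
    penalty_term = 0.5 * sum(d_i * r_i * r_i for d_i, r_i in zip(D, r))
    return f_val - linear_term + penalty_term

# With zero residuals the merit function reduces to the objective value.
m0 = merit(f_val=1.0, F_vals=[2.0, 3.0], sN=[2.0, 3.0],
           pi=[0.5, -1.0], D=[10.0, 10.0])
```

With F(x) − sN = 0 the linear and penalty terms vanish and M equals f(x), which is why the merit function does not interfere once the nonlinear constraints are satisfied.<br />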

2.6 Treatment of constraint infeasibilities<br />

SNOPT makes explicit allowance for infeasible constraints. Infeasible linear<br />

constraints are detected first by solving a problem of the form<br />

FLP:  minimize (over x, v, w)   e^T (v + w)<br />

subject to   Gx − v + w ∼ b2,   l ≤ x ≤ u,   v, w ≥ 0,<br />

where e is a vector of ones. This is equivalent to minimizing the sum of the<br />

general linear constraint violations subject to the simple bounds. (In the linear<br />

programming literature, the approach is often called elastic programming. We<br />

also describe it as minimizing the ℓ1 norm of the infeasibilities.)<br />
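For an equality row the optimal elastic split can be written down explicitly, which the following Python sketch (illustrative, not solver code; elastic_split is an invented name) demonstrates: e^T(v + w) at the optimum equals the ℓ1 norm of the residuals.<br />

```python
# Sketch (not SNOPT internals): for an equality row Gx - v + w = b, the optimal
# elastic variables are v = max(t, 0), w = max(-t, 0) with t = Gx - b,
# so e^T(v + w) equals the l1 norm of the constraint residuals.
def elastic_split(residuals):
    v = [max(t, 0.0) for t in residuals]
    w = [max(-t, 0.0) for t in residuals]
    total = sum(v) + sum(w)   # sum of constraint violations
    return v, w, total

v, w, total = elastic_split([1.5, -2.0, 0.0])
# total is |1.5| + |-2.0| + |0.0| = 3.5
```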

If the linear constraints are infeasible (v ≠ 0 or w ≠ 0), SNOPT terminates<br />

without computing the nonlinear functions.<br />

If the linear constraints are feasible, all subsequent iterates satisfy the linear<br />

constraints. (Such a strategy allows linear constraints to be used to define<br />

a region in which the functions can be safely evaluated.) SNOPT proceeds to<br />

solve NP as given, using search directions obtained from a sequence of quadratic<br />

programming subproblems (2).<br />

If a QP subproblem proves to be infeasible or unbounded (or if the dual<br />

variables π for the nonlinear constraints become large), SNOPT enters “elastic”<br />

mode and solves the problem<br />

NP(γ):  minimize (over x, v, w)   f(x) + γ e^T (v + w)<br />

subject to   ( F(x) − v + w ; Gx ) ∼ b,   l ≤ x ≤ u,   v, w ≥ 0,<br />



where γ is a nonnegative parameter (the elastic weight), and f(x) + γ e^T (v + w)<br />

is called a composite objective. If γ is sufficiently large, this is equivalent to<br />

minimizing the sum of the nonlinear constraint violations subject to the linear<br />

constraints and bounds. A similar ℓ1 formulation of NP is fundamental to the<br />

Sℓ1QP algorithm of Fletcher [4]. See also Conn [1].<br />

3 Starting points and advanced bases<br />

A good starting point may be essential for solving nonlinear models. We show<br />

how such a starting point can be specified in a GAMS environment, and how<br />

SNOPT will use this information.<br />

A related issue is the use of “restart” information in case a number of related<br />

models is solved in a row. Starting from an optimal point from a previous solve<br />

statement is in such situations often beneficial. In a GAMS environment this<br />

means reusing primal and dual information, which is stored in the .L and .M<br />

fields of variables and equations.<br />

3.1 Starting points<br />

To specify a starting point for SNOPT use the .L level values in GAMS. For<br />

example, to set all variables xi,j := 1 use x.l(i,j)=1;. The default values for<br />

level values are zero.<br />

Setting a good starting point can be crucial for getting good results. As an<br />

(artificial) example consider the problem where we want to find the smallest<br />

circle that contains a number of points (xi, yi):<br />

Example:  minimize (over r, a, b)   r<br />

subject to   (xi − a)^2 + (yi − b)^2 ≤ r^2,   r ≥ 0.<br />

This problem can be modeled in GAMS as follows.<br />

set i points /p1*p10/;<br />

parameters<br />

x(i) x coordinates,<br />

y(i) y coordinates;<br />

* fill with random data<br />

x(i) = uniform(1,10);<br />

y(i) = uniform(1,10);<br />

variables<br />

a x coordinate of center of circle<br />

b y coordinate of center of circle<br />

r radius;<br />

equations<br />

e(i) points must be inside circle;<br />



e(i).. sqr(x(i)-a) + sqr(y(i)-b) =l= sqr(r);<br />

r.lo = 0;<br />

model m /all/;<br />

option nlp=snopt;<br />

solve m using nlp minimizing r;<br />

Without help, SNOPT will not be able to find an optimal solution. The<br />

problem will be declared infeasible. In this case, providing a good starting<br />

point is very easy. If we define<br />

xmin = min_i xi,   ymin = min_i yi,   xmax = max_i xi,   ymax = max_i yi,<br />

then good estimates are<br />

a = (xmin + xmax)/2,   b = (ymin + ymax)/2,   r = sqrt( (a − xmin)^2 + (b − ymin)^2 ).<br />

Thus we include in our model:<br />

parameters xmin,ymin,xmax,ymax;<br />

xmin = smin(i, x(i));<br />

ymin = smin(i, y(i));<br />

xmax = smax(i, x(i));<br />

ymax = smax(i, y(i));<br />

* set starting point<br />

a.l = (xmin+xmax)/2;<br />

b.l = (ymin+ymax)/2;<br />

r.l = sqrt( sqr(a.l-xmin) + sqr(b.l-ymin) );<br />

and now the model solves very easily.<br />

Level values can also be set implicitly as a result of assigning bounds. When<br />

a variable is bounded away from zero, for instance by the statement Y.LO = 1;,<br />

the SOLVE statement will override the default level of zero of such a variable in<br />

order to make it feasible.<br />

Note: another way to formulate the model would be to minimize r^2 instead<br />

of r. This allows SNOPT to solve the problem even with the default starting<br />

point.<br />
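The effect of the heuristic can be checked outside GAMS. The Python sketch below (an illustration with made-up random data, mirroring the model above) shows that the bounding-box starting point is always feasible: the distance from the box center to a corner covers every point in the box.<br />

```python
# Sketch reproducing the starting-point heuristic in Python.
import math
import random

random.seed(42)
pts = [(random.uniform(1, 10), random.uniform(1, 10)) for _ in range(10)]

xmin = min(x for x, _ in pts); xmax = max(x for x, _ in pts)
ymin = min(y for _, y in pts); ymax = max(y for _, y in pts)

a0 = (xmin + xmax) / 2.0                      # center of bounding box
b0 = (ymin + ymax) / 2.0
r0 = math.sqrt((a0 - xmin) ** 2 + (b0 - ymin) ** 2)   # distance to a corner

# Every point lies inside the starting circle, so the initial point is feasible.
all_inside = all((x - a0) ** 2 + (y - b0) ** 2 <= r0 ** 2 + 1e-12 for x, y in pts)
```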



3.2 Advanced basis<br />

GAMS automatically passes on level values and basis information from one solve<br />

to the next. Thus, when we have two solve statements in a row, with just a<br />

few changes in between SNOPT will typically need very few iterations to find an<br />

optimal solution in the second solve. For instance, when we add a second solve<br />

to the chem.gms model from the model library:<br />

model mixer chemical mix for N2H4+O2 / all /;<br />

solve mixer minimizing energy using nlp;<br />

solve mixer minimizing energy using nlp;<br />

we observe the following log:<br />

[erwin@hamilton]$ gams chem nlp=snopt<br />

GAMS 2.50A Copyright (C) 1987-1999 GAMS Development. All rights reserved<br />

Licensee: hamilton G990622:1048CP-LNX<br />

GAMS Development<br />

--- Starting compilation<br />

--- chem.gms(48) 1 Mb<br />

--- Starting execution<br />

--- chem.gms(42) 1 Mb<br />

--- Generating model MIXER<br />

--- chem.gms(46) 1 Mb<br />

--- 5 rows, 12 columns, and 37 non-zeroes.<br />

--- Executing SNOPT<br />

GAMS/SNOPT X86/LINUX version 5.3.4-007-035<br />

P. E. Gill, UC San Diego<br />

W. Murray and M. A. Saunders, Stanford University<br />

Work space allocated -- 0.02 Mb<br />

Major Minor Step nObj Objective Optimal nS PD<br />

0 5 0.0E+00 1 3.292476E+01 2.1E-01 0 TF r<br />

1 4 1.0E+00 2 4.517191E+01 2.2E-01 1 TF n r<br />

2 10 8.6E-02 5 4.533775E+01 1.7E-01 6 TF s<br />

3 3 1.0E+00 6 4.608439E+01 7.6E-02 6 TF<br />

4 2 1.0E+00 7 4.667813E+01 1.4E-01 5 TF<br />

5 2 1.0E+00 8 4.751149E+01 2.2E-02 6 TF<br />

6 3 1.0E+00 9 4.757024E+01 2.1E-02 4 TF<br />

7 2 7.1E-01 11 4.763634E+01 3.3E-02 5 TF<br />

8 3 1.0E+00 12 4.768395E+01 3.3E-02 7 TF<br />

9 3 4.6E-01 14 4.769958E+01 1.9E-02 5 TF<br />

10 2 1.0E+00 15 4.770539E+01 1.5E-02 6 TF<br />

Major Minor Step nObj Objective Optimal nS PD<br />

11 1 1.0E+00 16 4.770639E+01 6.2E-04 6 TF<br />

12 1 1.0E+00 17 4.770650E+01 5.0E-03 6 TF<br />

13 1 1.0E+00 18 4.770651E+01 1.6E-04 6 TF<br />

14 1 1.0E+00 19 4.770651E+01 1.8E-05 6 TF<br />

15 1 1.0E+00 20 4.770651E+01 2.7E-05 6 TF<br />

16 1 1.0E+00 21 4.770651E+01 7.6E-07 6 TT<br />

EXIT - Optimal Solution found.<br />



--- Restarting execution<br />

--- chem.gms(46) 1 Mb<br />

--- Reading solution for model MIXER<br />

--- chem.gms(46) 1 Mb<br />

--- Generating model MIXER<br />

--- chem.gms(47) 1 Mb<br />

--- 5 rows, 12 columns, and 37 non-zeroes.<br />

--- Executing SNOPT<br />

GAMS/SNOPT X86/LINUX version 5.3.4-007-035<br />

P. E. Gill, UC San Diego<br />

W. Murray and M. A. Saunders, Stanford University<br />

Work space allocated -- 0.02 Mb<br />

Major Minor Step nObj Objective Optimal nS PD<br />

0 0 0.0E+00 1 4.770651E+01 7.4E-07 0 TT r<br />

EXIT - Optimal Solution found.<br />

--- Restarting execution<br />

--- chem.gms(47) 1 Mb<br />

--- Reading solution for model MIXER<br />

--- chem.gms(47) 1 Mb<br />

*** Status: Normal completion<br />

[erwin@hamilton]$<br />

The first solve takes 16 iterations, while the second solve needs exactly<br />

zero iterations.<br />

Basis information is passed on using the marginals of the variables and equations.<br />

In general the rule is:<br />

X.M = 0   basic<br />

X.M ≠ 0   nonbasic if level value is at bound, superbasic otherwise<br />

A marginal value of EPS means that the numerical value of the marginal is<br />

zero, but that the status is nonbasic or superbasic. The user can specify a basis<br />

by assigning zero or nonzero values to the .M values. It is further noted that if<br />

too many .M values are zero, the basis is rejected. This happens for instance<br />

when two subsequent models are too different. This decision is made based on<br />

the value of the bratio option (see §4.1).<br />

4 <strong>Options</strong><br />

In many cases NLP models can be solved with GAMS/SNOPT without using<br />

solver options. For special situations it is possible to specify non-standard values<br />

for some or all of the options.<br />

4.1 GAMS options<br />

The following GAMS options affect the behavior of SNOPT.<br />



NLP<br />

Selects the NLP solver. Example: option NLP=SNOPT;. See<br />

also §1.2.<br />

DNLP<br />

Selects the DNLP solver. Example: option DNLP=SNOPT;. See also §1.2.<br />

iterlim<br />

Sets the (minor) iteration limit. Example: option iterlim=50000;. The<br />

default is 10000. SNOPT will stop as soon as the number of minor iterations<br />

exceeds the iteration limit. In that case the current solution will be<br />

reported.<br />

reslim<br />

Sets the time limit or resource limit. Depending on the architecture this<br />

is wall clock time or CPU time. SNOPT will stop as soon as more than<br />

reslim seconds have elapsed since SNOPT started. The current solution<br />

will be reported in this case. Example: option reslim = 600;. The<br />

default is 1000 seconds.<br />

domlim<br />

Sets the domain violation limit. Domain errors are evaluation errors in the<br />

nonlinear functions. An example of a domain error is trying to evaluate<br />

√x for x < 0. Other examples include taking logs of negative numbers,<br />

and evaluating x^y for x < 0 (x^y is evaluated as exp(y log x)). When<br />

such a situation occurs the number of domain errors is increased by one,<br />

and SNOPT will stop if this number exceeds the limit. If the limit has<br />

not been reached, a reasonable number is returned (e.g., in the case of<br />

√x for x < 0 a zero is passed back) and SNOPT is asked to continue. In many<br />

cases SNOPT will be able to recover from these domain errors, especially<br />

when they happen at some intermediate point. Nevertheless it is best to<br />

add appropriate bounds or linear constraints to ensure that these domain<br />

errors don’t occur. For example, when an expression log(x) is present<br />

in the model, add a statement like x.lo = 0.001;. Example: option<br />

domlim=100;. The default value is 0.<br />
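The following Python sketch illustrates the bookkeeping described above; the Evaluator class and its names are invented for illustration and do not correspond to actual GAMS/SNOPT internals.<br />

```python
# Sketch of how a domain-violation limit might work (illustrative only):
# on a domain error a fallback value is returned and a counter incremented;
# the solve aborts once the counter exceeds domlim.
import math

class DomainLimitExceeded(Exception):
    pass

class Evaluator:
    def __init__(self, domlim=0):
        self.domlim = domlim
        self.errors = 0

    def safe_sqrt(self, x):
        if x < 0:
            self.errors += 1
            if self.errors > self.domlim:
                raise DomainLimitExceeded
            return 0.0        # reasonable fallback value, as described above
        return math.sqrt(x)

ev = Evaluator(domlim=2)
vals = [ev.safe_sqrt(v) for v in [4.0, -1.0, 9.0, -5.0]]
```

Here the two domain errors are tolerated because domlim is 2; one more negative argument would exceed the limit and abort, mirroring the behavior described above.<br />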

bratio<br />

Basis acceptance test. When several models are solved in a row, GAMS<br />

automatically passes dual information to SNOPT so that it can reconstruct<br />

an advanced basis. When too many new variables or constraints enter the<br />

model, it may be better not to use existing basis information, but to crash<br />

a new basis instead. The bratio determines how quickly an existing basis<br />

is discarded. A value of 1.0 will discard any basis, while a value of 0.0 will<br />

retain any basis. Example: option bratio=1.0;. Default: bratio = 0.25.<br />

sysout<br />

Debug listing. When turned on, extra information printed by SNOPT<br />



will be added to the listing file. Example: option sysout=on;. Default:<br />

sysout = off.<br />

work<br />

The work option sets the amount of memory SNOPT can use. By default<br />

an estimate is used based on the model statistics (number of (nonlinear)<br />

equations, number of (nonlinear) variables, number of (nonlinear) nonzeroes<br />

etc.). In most cases this is sufficient to solve the model. In some<br />

extreme cases SNOPT may need more memory, and the user can specify<br />

this with this option. For historical reasons work is specified in “double<br />

words” or 8 byte quantities. For example, option work=100000; will ask<br />

for 0.76 MB (a megabyte being defined as 1024 × 1024 bytes).<br />
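The arithmetic behind the example is easy to check; the helper below is a throwaway Python sketch, not a GAMS function.<br />

```python
# Quick arithmetic check: work is given in 8-byte double words, and a
# megabyte is defined here as 1024 * 1024 bytes.
def work_to_mb(double_words):
    return double_words * 8 / (1024 * 1024)

mb = work_to_mb(100000)   # the option work=100000; example, about 0.76 MB
```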

reform<br />

This option will instruct the reformulation mechanism described in §2.1<br />

to substitute out equality equations. The default value of 100 will cause<br />

the procedure to try further substitutions along the diagonal after the<br />

objective variable has been removed. Any other value will prohibit this<br />

diagonal procedure. Example: option reform = 0;. Default: reform =<br />

100.<br />

4.2 Model suffixes<br />

A model identifier in GAMS can have several suffixes to which you can assign<br />

values. A small GAMS fragment illustrates this:<br />

model m /all/;<br />

m.iterlim = 3000;<br />

solve m minimizing z using nlp;<br />

<strong>Options</strong> set by assigning to the suffixed model identifier override the global<br />

options. For example,<br />

model m /all/;<br />

m.iterlim = 3000;<br />

option iterlim = 100;<br />

solve m minimizing z using nlp;<br />

will use an iteration limit of 3000 for this solve.<br />

m.iterlim<br />

Sets the iteration limit. Overrides the global iteration limit. Example:<br />

m.iterlim=50000; The default is 10000. See also §4.1.<br />

m.reslim<br />

Sets the resource or time limit. Overrides the global resource limit. Example:<br />

m.reslim=600; The default is 1000 seconds. See also §4.1.<br />



m.bratio<br />

Sets the basis acceptance test parameter. Overrides the global setting.<br />

Example: m.bratio=1.0; The default is 0.25. See also §4.1.<br />

m.scaleopt<br />

Whether or not to scale the model using user-supplied scale factors. The<br />

user can provide scale factors using the .scale variable and equation<br />

suffix. For example, x.scale(i,j) = 100; will assign a scale factor of<br />

100 to all xi,j variables. The variables SNOPT will see are scaled by<br />

a factor 1/variable scale, so the modeler should use scale factors that<br />

represent the order of magnitude of the variable. In that case SNOPT<br />

will see variables that are scaled around 1.0. Similarly equation scales can<br />

be assigned to equations, which are scaled by a factor 1/equation scale.<br />

Example: m.scaleopt=1; will turn scaling on. The default is not to use<br />

scaling, and the default scale factors are 1.0. Automatic scaling is provided<br />

by the SNOPT option scale option.<br />

m.optfile<br />

Sets whether or not to use a solver option file. <strong>Solver</strong> specific SNOPT<br />

options are specified in a file called snopt.opt, see §4.3. To tell SNOPT<br />

to use this file, add the statement: option m.optfile=1;. The default is<br />

not to use an option file.<br />

m.workspace<br />

The workspace option sets the amount of memory that SNOPT can use.<br />

By default an estimate is used based on the model statistics (number of<br />

(nonlinear) equations, number of (nonlinear) variables, number of (nonlinear)<br />

nonzeroes, etc.). In most cases this is sufficient to solve the model.<br />

In some extreme cases SNOPT may need more memory, and the user can<br />

specify this with this option. The amount of memory is specified in MB.<br />

Example: m.workspace = 5;.<br />

4.3 SNOPT options<br />

SNOPT options are specified in a file called snopt.opt which should reside in<br />

the working directory (this is the project directory when running models from<br />

the IDE). To tell SNOPT to read this file, use the statement m.optfile = 1;<br />

in the model (see §4.2).<br />

An example of a valid snopt.opt is:<br />

Hessian full memory<br />

Hessian frequency 20<br />

Users familiar with the SNOPT distribution from Stanford University will<br />

notice that the begin and end keywords are missing. These markers are optional<br />

in GAMS/SNOPT and will be ignored. Therefore, the following option file is also<br />

accepted:<br />



begin<br />

Hessian full memory<br />

Hessian frequency 20<br />

end<br />

All options are case-insensitive. A line is a comment line if it starts with an<br />

asterisk, *, in column one.<br />

Here follows a description of all SNOPT options that are possibly useful in<br />

a GAMS environment:<br />

Check frequency i<br />

Every ith minor iteration after the most recent basis factorization, a numerical<br />

test is made to see if the current solution x satisfies the general<br />

linear constraints (including linearized nonlinear constraints, if any). The<br />

constraints are of the form Ax−s = b, where s is the set of slack variables.<br />

To perform the numerical test, the residual vector r = b − Ax + s is computed.<br />

If the largest component of r is judged to be too large, the current<br />

basis is refactorized and the basic variables are recomputed to satisfy the<br />

general constraints more accurately.<br />

Check frequency 1 is useful for debugging purposes, but otherwise this<br />

option should not be needed.<br />

Default: check frequency 60.<br />
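The residual test can be sketched in a few lines of Python (illustrative only; residual_too_large is an invented name, and SNOPT of course works with sparse factors rather than dense lists).<br />

```python
# Sketch of the feasibility check described above: compute r = b - Ax + s in
# pure Python and flag refactorization when the largest residual is too big.
def residual_too_large(A, x, s, b, tol=1e-6):
    # A is a dense list of rows; r_i = b_i - (Ax)_i + s_i
    r = []
    for Ai, si, bi in zip(A, s, b):
        Ax_i = sum(a * xj for a, xj in zip(Ai, x))
        r.append(bi - Ax_i + si)
    return max(abs(ri) for ri in r) > tol

A = [[1.0, 2.0], [0.0, 1.0]]
x = [1.0, 1.0]
s = [3.0, 1.0]          # chosen so that Ax - s = b holds exactly
b = [0.0, 0.0]
ok = not residual_too_large(A, x, s, b)
```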

Crash option i<br />

Except on restarts, a CRASH procedure is used to select an initial basis<br />

from certain rows and columns of the constraint matrix ( A − I ). The<br />

Crash option i determines which rows and columns of A are eligible<br />

initially, and how many times CRASH is called. Columns of −I are used<br />

to pad the basis where necessary.<br />

Crash option 0: The initial basis contains only slack variables: B = I.<br />

Crash option 1: CRASH is called once, looking for a triangular basis<br />

in all rows and columns of the matrix A.<br />

Crash option 2: CRASH is called twice (if there are nonlinear constraints).<br />

The first call looks for a triangular basis in linear rows,<br />

and the iteration proceeds with simplex iterations until the linear<br />

constraints are satisfied. The Jacobian is then evaluated for the first<br />

major iteration and CRASH is called again to find a triangular basis<br />

in the nonlinear rows (retaining the current basis for linear rows).<br />

Crash option 3: CRASH is called up to three times (if there are nonlinear<br />

constraints). The first two calls treat linear equalities and linear<br />

inequalities separately. As before, the last call treats nonlinear rows<br />

before the first major iteration.<br />

If i ≥ 1, certain slacks on inequality rows are selected for the basis first.<br />

(If i ≥ 2, numerical values are used to exclude slacks that are close to<br />



a bound.) CRASH then makes several passes through the columns of A,<br />

searching for a basis matrix that is essentially triangular. A column is<br />

assigned to “pivot” on a particular row if the column contains a suitably<br />

large element in a row that has not yet been assigned. (The pivot elements<br />

ultimately form the diagonals of the triangular basis.) For remaining<br />

unassigned rows, slack variables are inserted to complete the basis.<br />

Default: Crash option 3 for linearly constrained problems and Crash<br />

option 0; for problems with nonlinear constraints.<br />

Crash tolerance r<br />

The Crash tolerance r allows the starting procedure CRASH to ignore<br />

certain “small” nonzeros in each column of A. If amax is the largest element<br />

in column j, other nonzeros aij in the column are ignored if |aij| ≤ amax×r.<br />

(To be meaningful, r should be in the range 0 ≤ r < 1.)<br />

When r > 0.0, the basis obtained by CRASH may not be strictly triangular,<br />

but it is likely to be nonsingular and almost triangular. The<br />

intention is to obtain a starting basis containing more columns of A and<br />

fewer (arbitrary) slacks. A feasible solution may be reached sooner on<br />

some problems.<br />

For example, suppose the first m columns of A are the matrix shown under<br />

LU factor tolerance; i.e., a tridiagonal matrix with entries −1, 4,<br />

−1. To help CRASH choose all m columns for the initial basis, we would<br />

specify Crash tolerance r for some value of r > 1/4.<br />

Default: Crash tolerance 0.1<br />
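The rule can be illustrated in Python on the tridiagonal column mentioned above; significant_entries is an invented helper, not part of SNOPT.<br />

```python
# Sketch of the Crash tolerance rule: entries a_ij with |a_ij| <= amax * r
# are ignored when CRASH scans a column for a pivot candidate.
def significant_entries(column, r):
    amax = max(abs(a) for a in column)
    return [a for a in column if abs(a) > amax * r]

col = [-1.0, 4.0, -1.0]                      # tridiagonal column: -1, 4, -1
kept_loose = significant_entries(col, 0.3)   # r > 1/4: only the 4 remains
kept_strict = significant_entries(col, 0.1)  # default r: all entries kept
```

With the default r = 0.1 all three entries survive, while any r > 1/4 leaves only the diagonal 4, so CRASH can treat every such column as triangular and select all m columns for the initial basis.<br />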

Elastic weight ω<br />

This parameter, denoted by ω, determines the initial weight γ associated<br />

with problem NP(γ).<br />

At any given major iteration k, elastic mode is started if the QP subproblem<br />

is infeasible, or the QP dual variables are larger in magnitude than<br />

ω(1 + ‖g(xk)‖₂), where g is the objective gradient. In either case, the QP<br />

is re-solved in elastic mode with γ = ω(1 + ‖g(xk)‖₂).<br />

Thereafter, γ is increased (subject to a maximum allowable value) at any<br />

point that is optimal for problem NP(γ), but not feasible for NP. After the<br />

rth increase, γ = ω 10^r (1 + ‖g(xk1)‖₂), where xk1 is the iterate at which<br />

γ was first needed.<br />

Default: Elastic weight 10000.0<br />

Expand frequency i<br />

This option is part of the EXPAND anti-cycling procedure [8] designed to<br />

make progress even on highly degenerate problems.<br />

For linear models, the strategy is to force a positive step at every iteration,<br />

at the expense of violating the bounds on the variables by a small amount.<br />

Suppose that the Minor feasibility tolerance is δ. Over a period of<br />

i iterations, the tolerance actually used by SNOPT increases from 0.5δ to<br />

δ (in steps of 0.5δ/i).<br />

For nonlinear models, the same procedure is used for iterations in which<br />



there is only one superbasic variable. (Cycling can occur only when the<br />

current solution is at a vertex of the feasible region.) Thus, zero steps<br />

are allowed if there is more than one superbasic variable, but otherwise<br />

positive steps are enforced.<br />

Increasing the expand frequency helps reduce the number of slightly infeasible<br />

nonbasic variables (most of which are eliminated during a<br />

resetting procedure). However, it also diminishes the freedom to choose a<br />

large pivot element (see Pivot tolerance).<br />

Default: Expand frequency 10000<br />
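The tolerance ramp described above is easy to reproduce; the following Python sketch is purely illustrative (the function name and the short toy period are assumptions):<br />

```python
def expand_tolerances(delta, i):
    """Sketch of the EXPAND ramp: over a period of i iterations the
    working feasibility tolerance grows from 0.5*delta to delta,
    in steps of 0.5*delta/i."""
    step = 0.5 * delta / i
    return [0.5 * delta + k * step for k in range(1, i + 1)]

# Toy illustration with delta = 1.0e-6 and a short period of 4 iterations.
tols = expand_tolerances(1.0e-6, 4)
```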

Factorization frequency k<br />

At most k basis changes will occur between factorizations of the basis<br />

matrix.<br />

• With linear programs, the basis factors are usually updated every<br />

iteration. The default k is reasonable for typical problems. Smaller<br />

values (say k = 75 or k = 50) may be more efficient on problems that<br />

are rather dense or poorly scaled.<br />

• When the problem is nonlinear, fewer basis updates will occur as<br />

an optimum is approached. The number of iterations between basis<br />

factorizations will therefore increase. During these iterations a test is<br />

made regularly (according to the Check frequency) to ensure that<br />

the general constraints are satisfied. If necessary the basis will be<br />

refactorized before the limit of k updates is reached.<br />

Default: Factorization frequency 100 for linear programs and Factorization<br />

frequency 50 for nonlinear models.<br />

Feasibility tolerance t<br />

See Minor feasibility tolerance.<br />

Default: Feasibility tolerance 1.0e-6<br />

Feasible point only<br />

This option means “Ignore the objective function” while finding a feasible<br />

point for the linear and nonlinear constraints. It can be used to check that<br />

the nonlinear constraints are feasible.<br />

Default: turned off.<br />

Feasible exit<br />

If SNOPT is about to terminate with nonlinear constraint violations, the<br />

option Feasible exit requests further effort to satisfy the nonlinear constraints<br />

while ignoring the objective function.<br />

Default: turned off.<br />

Hessian Full memory<br />

This option selects the full storage method for storing and updating the<br />

approximate Hessian. (SNOPT uses a quasi-Newton approximation to the<br />

Hessian of the Lagrangian. A BFGS update is applied after each major<br />

iteration.)<br />

If Hessian Full memory is specified, the approximate Hessian is treated<br />

as a dense matrix and the BFGS updates are applied explicitly. This<br />

option is most efficient when the number of nonlinear variables n1 is not<br />

too large (say, less than 75). In this case, the storage requirement is fixed<br />

and one can expect 1-step Q-superlinear convergence to the solution.<br />

Default: turned on when the number of nonlinear variables n1 ≤ 75.<br />

Hessian Limited memory<br />

This option selects the limited memory storage method for storing and<br />

updating the approximate Hessian. (SNOPT uses a quasi-Newton approximation<br />

to the Hessian of the Lagrangian. A BFGS update is applied after<br />

each major iteration.)<br />

Hessian Limited memory should be used on problems where the number<br />

of nonlinear variables n1 is very large. In this case a limited-memory procedure<br />

is used to update a diagonal Hessian approximation Hr a limited<br />

number of times. (Updates are accumulated as a list of vector pairs. They<br />

are discarded at regular intervals after Hr has been reset to its diagonal.)<br />

Default: turned on when the number of nonlinear variables n1 > 75.<br />

Hessian frequency i<br />

If Hessian Full is selected and i BFGS updates have already been carried<br />

out, the Hessian approximation is reset to the identity matrix. (For<br />

certain problems, occasional resets may improve convergence, but in general<br />

they should not be necessary.)<br />

Hessian Full memory and Hessian frequency = 20 have a similar effect<br />

to Hessian Limited memory and Hessian updates = 20 (except<br />

that the latter retains the current diagonal during resets).<br />

Default: Hessian frequency 99999999 (i.e. never).<br />

Hessian updates i<br />

If Hessian Limited memory is selected and i BFGS updates have already<br />

been carried out, all but the diagonal elements of the accumulated updates<br />

are discarded and the updating process starts again.<br />

Broadly speaking, the more updates stored, the better the quality of the<br />

approximate Hessian. However, the more vectors stored, the greater the<br />

cost of each QP iteration. The default value is likely to give a robust algorithm<br />

without significant expense, but faster convergence can sometimes<br />

be obtained with significantly fewer updates (e.g., i = 5).<br />

Default: Hessian updates 20 (only when limited memory storage model<br />

is used).<br />
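The discard-and-restart behaviour can be pictured with a toy Python sketch (this is an illustration of the bookkeeping only, not SNOPT's actual data structure; the class and attribute names are assumptions):<br />

```python
class LimitedMemoryHessian:
    """Toy sketch: accumulate up to `updates` BFGS vector pairs;
    once the limit is reached the pairs are discarded and only the
    diagonal of the approximation is kept."""
    def __init__(self, diag, updates=20):
        self.diag = list(diag)     # current diagonal approximation
        self.updates = updates
        self.pairs = []            # accumulated (s, y) update pairs

    def add_pair(self, s, y):
        if len(self.pairs) >= self.updates:
            self.pairs.clear()     # reset: all but the diagonal is dropped
        self.pairs.append((s, y))

h = LimitedMemoryHessian([1.0, 1.0], updates=2)
for k in range(3):
    h.add_pair([k], [k])           # third pair triggers the reset
```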

Infeasible exit ok<br />

If SNOPT is about to terminate with nonlinear constraint violations, the<br />

companion option Feasible exit requests further effort to satisfy the<br />

nonlinear constraints while ignoring the objective function. Infeasible<br />

exit ok makes no such extra effort.<br />

Default: turned on.<br />

Iterations limit k<br />

This is the maximum number of minor iterations allowed (i.e., iterations<br />

of the simplex method or the QP algorithm), summed over all major<br />

iterations. This option overrides the GAMS iterlim option.<br />

Default: specified by GAMS.<br />

Linesearch tolerance t<br />

This controls the accuracy with which a steplength will be located along<br />

the direction of search each iteration. At the start of each linesearch<br />

a target directional derivative for the merit function is identified. This<br />

parameter determines the accuracy to which this target value is approximated.<br />

• t must be a real value in the range 0.0 ≤ t ≤ 1.0.<br />

• The default value t = 0.9 requests just moderate accuracy in the<br />

linesearch.<br />

• If the nonlinear functions are cheap to evaluate (this is usually the<br />

case for GAMS models), a more accurate search may be appropriate;<br />

try t = 0.1, 0.01 or 0.001. The number of major iterations might<br />

decrease.<br />

• If the nonlinear functions are expensive to evaluate, a less accurate<br />

search may be appropriate. In the case of running under GAMS where<br />

all gradients are known, try t = 0.99. (The number of major iterations<br />

might increase, but the total number of function evaluations<br />

may decrease enough to compensate.)<br />

Default: Linesearch tolerance 0.9.<br />

Log frequency k<br />

See Print frequency.<br />

Default: Log frequency 1<br />

LU factor tolerance r1<br />

LU update tolerance r2<br />

These tolerances affect the stability and sparsity of the basis factorization<br />

B = LU during refactorization and updating, respectively. They must<br />

satisfy r1, r2 ≥ 1.0. The matrix L is a product of matrices of the form<br />

⎛ 1     ⎞<br />

⎝ µ   1 ⎠ ,<br />

where the multipliers µ satisfy |µ| ≤ ri. Smaller values of ri favor stability,<br />

while larger values favor sparsity.<br />



• For large and relatively dense problems, r1 = 5.0 (say) may give<br />

a useful improvement in stability without impairing sparsity to a<br />

serious degree.<br />

• For certain very regular structures (e.g., band matrices) it may be<br />

necessary to reduce r1 and/or r2 in order to achieve stability. For<br />

example, if the columns of A include a submatrix of the form<br />

⎛  4 −1             ⎞<br />

⎜ −1  4 −1          ⎟<br />

⎜    −1  4  ⋱       ⎟<br />

⎜        ⋱  ⋱  −1   ⎟<br />

⎝           −1   4  ⎠ ,<br />

both r1 and r2 should be in the range 1.0 ≤ ri < 4.0.<br />

Defaults for linear models: LU factor tolerance 100.0 and LU update<br />

tolerance 10.0.<br />

The defaults for nonlinear models are LU factor tolerance 5.0 and LU<br />

update tolerance 5.0.<br />
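To make these keywords concrete, a hypothetical solver option file (here assumed to be named snopt.opt; the comment line and values are illustrative, not recommendations) could contain:<br />

```
* snopt.opt -- illustrative settings for a banded, nearly singular basis
LU factor tolerance  2.0
LU update tolerance  2.0
```

Such a file is picked up when the model is solved with SNOPT and the optfile model suffix is set (e.g. mymodel.optfile = 1;).<br />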

LU density tolerance r1<br />

The density tolerance r1 is used during LU factorization of the basis matrix.<br />

Columns of L and rows of U are formed one at a time, and the remaining<br />

rows and columns of the basis are altered appropriately. At any<br />

stage, if the density of the remaining matrix exceeds r1, the Markowitz<br />

strategy for choosing pivots is terminated. The remaining matrix is factored<br />

by a dense LU procedure. Raising the density tolerance towards 1.0<br />

may give slightly sparser LU factors, with a slight increase in factorization<br />

time.<br />

Default: LU density tolerance 0.6<br />

LU singularity tolerance r2<br />

The singularity tolerance r2 helps guard against ill-conditioned basis matrices.<br />

When the basis is refactorized, the diagonal elements of U are<br />

tested as follows: if |Ujj| ≤ r2 or |Ujj| < r2 maxᵢ |Uij|, the jth column of<br />

the basis is replaced by the corresponding slack variable. (This is most<br />

likely to occur after a restart, or at the start of a major iteration.)<br />

In some cases, the Jacobian may converge to values that make the basis<br />

exactly singular. (For example, a whole row of the Jacobian could be zero<br />

at an optimal solution.) Before exact singularity occurs, the basis could<br />

become very ill-conditioned and the optimization could progress very slowly<br />

(if at all). Setting a larger tolerance r2 = 1.0e-5, say, may help<br />

cause a judicious change of basis.<br />

Default: LU singularity tolerance 3.7e-11 for most machines. This<br />

value corresponds to ɛ^(2/3), where ɛ is the relative machine precision.<br />

Major feasibility tolerance ɛr<br />

This specifies how accurately the nonlinear constraints should be satisfied.<br />

The default value of 1.0e-6 is appropriate when the linear and nonlinear<br />

constraints contain data to about that accuracy.<br />

Let rowerr be the maximum nonlinear constraint violation, normalized<br />

by the size of the solution. It is required to satisfy<br />

rowerr = maxᵢ violᵢ/‖x‖ ≤ ɛr, (5)<br />

where violi is the violation of the ith nonlinear constraint (i = 1 : nnCon,<br />

nnCon being the number of nonlinear constraints).<br />

In the GAMS/SNOPT iteration log, rowerr appears as the quantity labeled<br />

“Feasibl”. If some of the problem functions are known to be of low<br />

accuracy, a larger Major feasibility tolerance may be appropriate.<br />

Default: Major feasibility tolerance 1.0e-6.<br />
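Equation (5) is cheap to reproduce; the following Python sketch uses made-up violation numbers (the function name is an assumption for the illustration):<br />

```python
def row_error(viol, x_norm):
    """Sketch of equation (5): rowerr = max_i viol_i / ||x||."""
    return max(viol) / x_norm

# Toy numbers: worst violation 2.0e-6 against a solution of norm 2.0,
# which just meets the default Major feasibility tolerance of 1.0e-6.
rowerr = row_error([0.0, 2.0e-6, 5.0e-7], 2.0)
```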

Major optimality tolerance ɛd<br />

This specifies the final accuracy of the dual variables. On successful termination,<br />

SNOPT will have computed a solution (x, s, π) such that<br />

maxgap = maxⱼ gapⱼ/π ≤ ɛd, (6)<br />

where gapj is an estimate of the complementarity gap for variable j (j =<br />

1 : n + m). The gaps are computed from the final QP solution using the<br />

reduced gradients dj = gj − πᵀaj (where gj is the jth component of the<br />

objective gradient, aj is the associated column of the constraint matrix<br />

( A − I ), and π is the set of QP dual variables):<br />

gapj = dj min{xj − lj, 1} if dj ≥ 0; gapj = −dj min{uj − xj, 1} if dj < 0.<br />

In the GAMS/SNOPT iteration log, maxgap appears as the quantity labeled<br />

“Optimal”.<br />

Default: Major optimality tolerance 1.0e-6.<br />
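The two-case gap formula above translates directly into Python; this sketch with toy data is illustrative only (the function name is an assumption):<br />

```python
def complementarity_gaps(d, x, l, u):
    """Sketch of the formula above:
    gap_j =  d_j * min(x_j - l_j, 1)  if d_j >= 0,
    gap_j = -d_j * min(u_j - x_j, 1)  if d_j <  0."""
    gaps = []
    for dj, xj, lj, uj in zip(d, x, l, u):
        if dj >= 0:
            gaps.append(dj * min(xj - lj, 1.0))
        else:
            gaps.append(-dj * min(uj - xj, 1.0))
    return gaps

# Two variables midway between bounds 0 and 1, with reduced gradients 1 and -2.
gaps = complementarity_gaps([1.0, -2.0], [0.5, 0.5], [0.0, 0.0], [1.0, 1.0])
```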

Major iterations limit k<br />

This is the maximum number of major iterations allowed. It is intended<br />

to guard against an excessive number of linearizations of the constraints.<br />

Default: Major iterations limit max{1000, 3 max{m, n}}.<br />

Major print level p<br />

This controls the amount of output to the GAMS listing file each major<br />

iteration. This output is only visible if the sysout option is turned on (see<br />

§4.1). Major print level 1 gives normal output for linear and nonlinear<br />

problems, and Major print level 11 gives additional details of the<br />

Jacobian factorization that commences each major iteration.<br />

In general, the value being specified may be thought of as a binary number<br />

of the form<br />

Major print level JFDXbs<br />

where each letter stands for a digit that is either 0 or 1 as follows:<br />

s a single line that gives a summary of each major iteration. (This entry<br />

in JFDXbs is not strictly binary since the summary line is printed<br />

whenever JFDXbs ≥ 1).<br />

b BASIS statistics, i.e., information relating to the basis matrix whenever<br />

it is refactorized. (This output is always provided if JFDXbs ≥<br />

10).<br />

X xk, the nonlinear variables involved in the objective function or the<br />

constraints.<br />

D πk, the dual variables for the nonlinear constraints.<br />

F F (xk), the values of the nonlinear constraint functions.<br />

J J(xk), the Jacobian.<br />

To obtain output of any items JFDXbs, set the corresponding digit to 1,<br />

otherwise to 0.<br />

If J=1, the Jacobian will be output column-wise at the start of each major<br />

iteration. Column j will be preceded by the value of the corresponding<br />

variable xj and a key to indicate whether the variable is basic, superbasic<br />

or nonbasic. (Hence if J=1, there is no reason to specify X=1 unless the<br />

objective contains more nonlinear variables than the Jacobian.) A typical<br />

line of output is<br />

3 1.250000D+01 BS 1 1.00000E+00 4 2.00000E+00<br />

which would mean that x3 is basic at value 12.5, and the third column of<br />

the Jacobian has elements of 1.0 and 2.0 in rows 1 and 4.<br />

Major print level 0 suppresses most output, except for error messages.<br />

Default: Major print level 00001<br />
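The digit string JFDXbs can be decoded mechanically; the Python sketch below handles only the strictly binary digits (as noted above, s and b are also implied by larger values; the function name is an assumption):<br />

```python
def decode_major_print_level(p):
    """Sketch: read the integer p as the digit string JFDXbs and report
    which of the six output items are requested."""
    digits = str(p).zfill(6)
    return {k: digits[i] == "1" for i, k in enumerate("JFDXbs")}

# 100001 requests the Jacobian (J) plus the per-iteration summary line (s).
flags = decode_major_print_level(100001)
```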

Major step limit r<br />

This parameter limits the change in x during a linesearch. It applies to<br />

all nonlinear problems, once a “feasible solution” or “feasible subproblem”<br />

has been found.<br />

1. A linesearch determines a step α over the range 0 < α ≤ β, where<br />

β is 1 if there are nonlinear constraints, or the step to the nearest<br />

upper or lower bound on x if all the constraints are linear. Normally,<br />

the first steplength tried is α1 = min(1, β).<br />

2. In some cases, such as f(x) = ae^(bx) or f(x) = ax^b, even a moderate<br />

change in the components of x can lead to floating-point overflow.<br />

The parameter r is therefore used to define a limit β̄ = r(1 + ‖x‖)/‖p‖<br />

(where p is the search direction), and the first evaluation of f(x) is<br />

at the potentially smaller steplength α1 = min(1, β̄, β).<br />

3. Wherever possible, upper and lower bounds on x should be used to<br />

prevent evaluation of nonlinear functions at meaningless points. The<br />

Major step limit provides an additional safeguard. The default<br />

value r = 2.0 should not affect progress on well behaved problems,<br />

but setting r = 0.1 or 0.01 may be helpful when rapidly varying<br />

functions are present. A “good” starting point may be required.<br />

An important application is to the class of nonlinear least-squares<br />

problems.<br />

4. In cases where several local optima exist, specifying a small value for<br />

r may help locate an optimum near the starting point.<br />

Default: Major step limit 2.0.<br />
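The safeguarded first steplength of item 2 can be sketched in Python (math.hypot is used here as the Euclidean norm; the function name and toy vectors are assumptions):<br />

```python
import math

def first_steplength(r, x, p, beta=1.0):
    """Sketch of the safeguard above: beta_bar = r*(1 + ||x||)/||p||,
    and the first step tried is alpha_1 = min(1, beta_bar, beta)."""
    beta_bar = r * (1.0 + math.hypot(*x)) / math.hypot(*p)
    return min(1.0, beta_bar, beta)

# Default r = 2.0, x at the origin, a long search direction of norm 4:
# beta_bar = 2*(1+0)/4 = 0.5, so the first step is shortened to 0.5.
alpha1 = first_steplength(2.0, [0.0, 0.0], [4.0, 0.0])
```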

Minor iterations limit k<br />

This is the maximum number of minor iterations allowed for each QP<br />

subproblem in the SQP algorithm. Current experience is that the major<br />

iterations converge more reliably if the QP subproblems are solved<br />

accurately. Thus, k should be a large value.<br />

In the major iteration log, a t at the end of a line indicates that the<br />

corresponding QP was terminated by the limit k.<br />

Note that the SNOPT option Iterations limit or the GAMS iterlim<br />

option defines an independent limit on the total number of minor iterations<br />

(summed over all QP subproblems).<br />

Default: Minor iterations limit max{1000, 5 max{m, n}}<br />

Minor feasibility tolerance t<br />

SNOPT tries to ensure that all variables eventually satisfy their upper<br />

and lower bounds to within the tolerance t. This includes slack variables.<br />

Hence, general linear constraints should also be satisfied to within t.<br />

Feasibility with respect to nonlinear constraints is judged by the Major<br />

feasibility tolerance (not by t).<br />

• If the bounds and linear constraints cannot be satisfied to within t,<br />

the problem is declared infeasible. Let sInf be the corresponding<br />

sum of infeasibilities. If sInf is quite small, it may be appropriate<br />

to raise t by a factor of 10 or 100. Otherwise, some error in the data<br />

should be suspected.<br />

• Nonlinear functions will be evaluated only at points that satisfy the<br />

bounds and linear constraints. If there are regions where a function is<br />

undefined, every attempt should be made to eliminate these regions<br />

from the problem. For example, if f(x) = √ x1 + log x2, it is essential<br />

to place lower bounds on both variables. If t = 1.0e-6, the bounds<br />

x1 ≥ 10⁻⁵ and x2 ≥ 10⁻⁴ might be appropriate. (The log singularity<br />

is more serious. In general, keep x as far away from singularities as<br />

possible.)<br />

• If Scale option ≥ 1, feasibility is defined in terms of the scaled<br />

problem (since it is then more likely to be meaningful).<br />

• In reality, SNOPT uses t as a feasibility tolerance for satisfying the<br />

bounds on x and s in each QP subproblem. If the sum of infeasibilities<br />

cannot be reduced to zero, the QP subproblem is declared infeasible.<br />

SNOPT is then in elastic mode thereafter (with only the linearized<br />

nonlinear constraints defined to be elastic). See the Elastic options.<br />

Default: Minor feasibility tolerance 1.0e-6.<br />

Minor optimality tolerance t<br />

This is used to judge optimality for each QP subproblem. Let the QP<br />

reduced gradients be dj = gj − π T aj, where gj is the jth component of<br />

the QP gradient, aj is the associated column of the QP constraint matrix,<br />

and π is the set of QP dual variables.<br />

• By construction, the reduced gradients for basic variables are always<br />

zero. The QP subproblem will be declared optimal if the reduced<br />

gradients for nonbasic variables at their lower or upper bounds satisfy<br />

dj/π ≥ −t or dj/π ≤ t<br />

respectively, and if |dj|/π ≤ t for superbasic variables.<br />

• In the above tests, π is a measure of the size of the dual variables.<br />

It is included to make the tests independent of a large scale factor<br />

on the objective function. The quantity actually used is defined by<br />

π = max{σ/√m, 1}, where σ = |π1| + · · · + |πm|.<br />

• If the objective is scaled down to be very small, the optimality test<br />

reduces to comparing dj against t.<br />

Default: Minor optimality tolerance 1.0e-6.<br />
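The dual-size measure π is simple to reproduce; this Python sketch with toy duals is for illustration only (the function name is an assumption):<br />

```python
import math

def dual_size(pi):
    """Sketch of the measure above: max{sigma/sqrt(m), 1}, with
    sigma the sum of |pi_i| and m the number of dual variables."""
    sigma = sum(abs(v) for v in pi)
    return max(sigma / math.sqrt(len(pi)), 1.0)

# sigma = 12 over m = 4 duals gives 12/2 = 6; tiny duals floor at 1.
size = dual_size([3.0, -4.0, 0.0, 5.0])
```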

Minor print level k<br />

This controls the amount of output to the GAMS listing file during solution<br />

of the QP subproblems. This option is only useful if the sysout option is<br />

turned on (see §4.1). The value of k has the following effect:<br />

0 No minor iteration output except error messages.<br />

≥ 1 A single line of output each minor iteration (controlled by Print<br />

frequency).<br />

≥ 10 Basis factorization statistics generated during the periodic refactorization<br />

of the basis (see Factorization frequency). Statistics for<br />

the first factorization each major iteration are controlled by the Major<br />

print level.<br />

Default: Minor print level 0.<br />



Optimality tolerance t<br />

See Minor optimality tolerance.<br />

Default: Optimality tolerance 1.0e-6.<br />

Partial Price i<br />

This parameter is recommended for large problems that have significantly<br />

more variables than constraints. It reduces the work required for each<br />

“pricing” operation (when a nonbasic variable is selected to become superbasic).<br />

• When i = 1, all columns of the constraint matrix ( A − I ) are<br />

searched.<br />

• Otherwise, A and I are partitioned to give i roughly equal segments<br />

Aj, Ij (j = 1 to i). If the previous pricing search was successful<br />

on Aj, Ij, the next search begins on the segments Aj+1, Ij+1. (All<br />

subscripts here are modulo i.)<br />

• If a reduced gradient is found that is larger than some dynamic tolerance,<br />

the variable with the largest such reduced gradient (of appropriate<br />

sign) is selected to become superbasic. If nothing is found,<br />

the search continues on the next segments Aj+2, Ij+2, and so on.<br />

• Partial price t (or t/2 or t/3) may be appropriate for time-stage<br />

models having t time periods.<br />

Default: Partial price 10 for linear models and Partial price 1 for<br />

nonlinear models.<br />

Pivot tolerance r<br />

During solution of QP subproblems, the pivot tolerance is<br />

used to prevent columns entering the basis if they would cause the basis<br />

to become almost singular.<br />

• When x changes to x + αp for some search direction p, a “ratio<br />

test” is used to determine which component of x reaches an upper or<br />

lower bound first. The corresponding element of p is called the pivot<br />

element.<br />

• Elements of p are ignored (and therefore cannot be pivot elements)<br />

if they are smaller than the pivot tolerance r.<br />

• It is common for two or more variables to reach a bound at essentially<br />

the same time. In such cases, the Minor Feasibility tolerance<br />

(say t) provides some freedom to maximize the pivot element and<br />

thereby improve numerical stability. Excessively small values of t<br />

should therefore not be specified.<br />

• To a lesser extent, the Expand frequency (say f) also provides some<br />

freedom to maximize the pivot element. Excessively large values of<br />

f should therefore not be specified.<br />

Default: Pivot tolerance 3.7e-11 on most machines. This corresponds<br />

to ɛ^(2/3), where ɛ is the machine precision.<br />
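The interaction between the ratio test and the pivot tolerance can be pictured with a toy Python sketch (this is not SNOPT's implementation; the function name and data are assumptions):<br />

```python
def ratio_test(x, p, l, u, pivot_tol):
    """Toy ratio test: the largest step alpha before some component of
    x + alpha*p hits its bound; components of p below the pivot
    tolerance are ignored and so cannot become pivot elements."""
    alpha = float("inf")
    for xj, pj, lj, uj in zip(x, p, l, u):
        if abs(pj) < pivot_tol:
            continue                     # too small to pivot on
        bound = uj if pj > 0 else lj
        alpha = min(alpha, (bound - xj) / pj)
    return alpha

# The tiny second component is skipped, so only the first limits the step.
alpha = ratio_test([0.0, 0.0], [1.0, 1.0e-12], [0.0, 0.0], [2.0, 0.5], 1.0e-11)
```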

Print frequency k<br />

When sysout is turned on (see §4.1) and Minor print level ≥ 1, a line<br />

of the QP iteration log will be printed on the listing file every kth minor<br />

iteration.<br />

Default: Print frequency 1.<br />

Scale option i<br />

Three scale options are available as follows:<br />

Scale option 0: No scaling. This is recommended if it is known that<br />

x and the constraint matrix (and Jacobian) never have very large<br />

elements (say, larger than 1000).<br />

Scale option 1: Linear constraints and variables are scaled by an iterative<br />

procedure that attempts to make the matrix coefficients as close<br />

as possible to 1.0 (see Fourer [5]). This will sometimes improve the<br />

performance of the solution procedures.<br />

Scale option 2: All constraints and variables are scaled by the iterative<br />

procedure. Also, an additional scaling is performed that takes into<br />

account columns of ( A − I ) that are fixed or have positive lower<br />

bounds or negative upper bounds.<br />

If nonlinear constraints are present, the scales depend on the Jacobian<br />

at the first point that satisfies the linear constraints. Scale<br />

option 2 should therefore be used only if (a) a good starting point<br />

is provided, and (b) the problem is not highly nonlinear.<br />

Default: Scale option 2 for linear models and Scale option 1 for NLP’s.<br />

Scale tolerance r<br />

This parameter affects how many passes might be needed through the<br />

constraint matrix. On each pass, the scaling procedure computes the<br />

ratio of the largest and smallest nonzero coefficients in each column:<br />

ρj = maxᵢ |aij| / minᵢ |aij| (aij ≠ 0).<br />

If maxj ρj is less than r times its previous value, another scaling pass is<br />

performed to adjust the row and column scales. Raising r from 0.9 to 0.99<br />

(say) usually increases the number of scaling passes through A. At most<br />

10 passes are made.<br />

Default: Scale tolerance 0.9.<br />
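The per-column ratios ρj are straightforward to compute; the Python sketch below is illustrative (the function name and toy matrix are assumptions):<br />

```python
def column_ratios(A):
    """Sketch of the scaling measure: rho_j = max_i |a_ij| / min_i |a_ij|
    taken over the nonzero entries of column j."""
    ratios = []
    for j in range(len(A[0])):
        col = [abs(row[j]) for row in A if row[j] != 0]
        ratios.append(max(col) / min(col))
    return ratios

# Column 0 has entries 1 and 4 (ratio 4); column 1 has one nonzero (ratio 1).
rhos = column_ratios([[1.0, 100.0], [4.0, 0.0]])
```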

Scale Print<br />

This option causes the row-scales r(i) and column-scales c(j) to be printed.<br />

The scaled matrix coefficients are āij = aijc(j)/r(i), and the scaled bounds<br />

on the variables and slacks are l̄j = lj/c(j), ūj = uj/c(j), where c(j) ≡<br />

r(j − n) if j > n.<br />

The listing file will only show these values if the sysout option is turned<br />

on (see §4.1).<br />

Default: turned off.<br />
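The scaled coefficients āij = aij c(j)/r(i) can be formed in one line of Python; the sketch below is illustrative only (the function name and toy scales are assumptions):<br />

```python
def scale_matrix(A, r, c):
    """Sketch of the printed quantities: a_bar_ij = a_ij * c_j / r_i."""
    return [[aij * cj / ri for aij, cj in zip(row, c)]
            for row, ri in zip(A, r)]

# One row with scale 2.0; column scales 1.0 and 0.5 equalize the entries.
A_bar = scale_matrix([[2.0, 4.0]], [2.0], [1.0, 0.5])
```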

Solution Yes<br />

This option causes the SNOPT solution file to be printed to the GAMS<br />

listing file. It is only visible if the sysout option is turned on (see §4.1).<br />

Default: turned off.<br />

Superbasics limit i<br />

This places a limit on the storage allocated for superbasic variables. Ideally,<br />

i should be set slightly larger than the “number of degrees of freedom”<br />

expected at an optimal solution.<br />

For linear programs, an optimum is normally a basic solution with no degrees<br />

of freedom. (The number of variables lying strictly between their<br />

bounds is no more than m, the number of general constraints.) The default<br />

value of i is therefore 1.<br />

For nonlinear problems, the number of degrees of freedom is often called<br />

the “number of independent variables”.<br />

Normally, i need not be greater than n1 + 1, where n1 is the number of<br />

nonlinear variables. For many problems, i may be considerably smaller<br />

than n1. This will save storage if n1 is very large.<br />

Unbounded objective value fmax<br />

Unbounded step size αmax<br />

These parameters are intended to detect unboundedness in nonlinear problems.<br />

(They may not achieve that purpose!) During a line search, f is<br />

evaluated at points of the form x + αp, where x and p are fixed and α<br />

varies. If |f| exceeds fmax or α exceeds αmax, iterations are terminated<br />

with the exit message Problem is unbounded (or badly scaled).<br />

In a GAMS environment no floating-point overflow errors should occur<br />

when singularities are present during the evaluation of f(x + αp) before<br />

the test can be made.<br />

Defaults: Unbounded objective value 1.0e+15 and Unbounded step<br />

size 1.0e+18.<br />

Violation limit τ<br />

This keyword defines an absolute limit on the magnitude of the maximum<br />

constraint violation after the line search. On completion of the line search,<br />

the new iterate xk+1 satisfies the condition<br />

vi(xk+1) ≤ τ max{1, vi(x0)},<br />

where x0 is the point at which the nonlinear constraints are first evaluated<br />

and vi(x) is the ith nonlinear constraint violation vi(x) = max(0, li −<br />

Fi(x), Fi(x) − ui).<br />

The effect of this violation limit is to restrict the iterates to lie in an<br />

expanded feasible region whose size depends on the magnitude of τ. This<br />

makes it possible to keep the iterates within a region where the objective<br />

is expected to be well-defined and bounded below. If the objective is<br />

bounded below for all values of the variables, then τ may be any large<br />

positive value.<br />

Default: Violation limit 10.<br />
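The constraint violation vi(x) defined above can be sketched directly in Python (the function name and toy bounds are assumptions for the illustration):<br />

```python
def violations(F, l, u):
    """Sketch of v_i(x) = max(0, l_i - F_i(x), F_i(x) - u_i): the amount
    by which each constraint value falls outside its bounds."""
    return [max(0.0, li - Fi, Fi - ui) for Fi, li, ui in zip(F, l, u)]

# First value within bounds, second above its upper bound by 1,
# third below its lower bound by 1.
v = violations([0.5, 3.0, -1.0], [0.0, 0.0, 0.0], [1.0, 2.0, 2.0])
```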

5 The SNOPT log<br />

When GAMS/SNOPT solves a linearly constrained problem the following log is<br />

visible on the screen:<br />

[erwin@hamilton]$ gamslib chem<br />

Model chem.gms retrieved<br />

[erwin@hamilton]$ gams chem nlp=snopt<br />

GAMS 2.50A Copyright (C) 1987-1999 GAMS Development. All rights reserved<br />

Licensee: hamilton G990622:1048CP-LNX<br />

GAMS Development<br />

--- Starting compilation<br />

--- chem.gms(47) 1 Mb<br />

--- Starting execution<br />

--- chem.gms(42) 1 Mb<br />

--- Generating model MIXER<br />

--- chem.gms(46) 1 Mb<br />

--- 5 rows, 12 columns, and 37 non-zeroes.<br />

--- Executing SNOPT<br />

GAMS/SNOPT X86/LINUX version 5.3.4-007-035<br />

P. E. Gill, UC San Diego<br />

W. Murray and M. A. Saunders, Stanford University<br />

Work space allocated -- 0.02 Mb<br />

Major Minor Step nObj Objective Optimal nS PD<br />

0 5 0.0E+00 1 3.292476E+01 2.1E-01 0 TF r<br />

1 4 1.0E+00 2 4.517191E+01 2.2E-01 1 TF n r<br />

2 10 8.6E-02 5 4.533775E+01 1.7E-01 6 TF s<br />

3 3 1.0E+00 6 4.608439E+01 7.6E-02 6 TF<br />

4 2 1.0E+00 7 4.667813E+01 1.4E-01 5 TF<br />

5 2 1.0E+00 8 4.751149E+01 2.2E-02 6 TF<br />

6 3 1.0E+00 9 4.757024E+01 2.1E-02 4 TF<br />

7 2 7.1E-01 11 4.763634E+01 3.3E-02 5 TF<br />

8 3 1.0E+00 12 4.768395E+01 3.3E-02 7 TF<br />

9 3 4.6E-01 14 4.769958E+01 1.9E-02 5 TF<br />

10 2 1.0E+00 15 4.770539E+01 1.5E-02 6 TF<br />

Major Minor Step nObj Objective Optimal nS PD<br />

11 1 1.0E+00 16 4.770639E+01 6.2E-04 6 TF<br />

12 1 1.0E+00 17 4.770650E+01 5.0E-03 6 TF<br />

13 1 1.0E+00 18 4.770651E+01 1.6E-04 6 TF<br />

14 1 1.0E+00 19 4.770651E+01 1.8E-05 6 TF<br />

15 1 1.0E+00 20 4.770651E+01 2.7E-05 6 TF<br />

16 1 1.0E+00 21 4.770651E+01 7.6E-07 6 TT<br />

EXIT - Optimal Solution found.<br />

--- Restarting execution<br />

--- chem.gms(46) 1 Mb<br />

--- Reading solution for model MIXER<br />

--- chem.gms(46) 1 Mb<br />

*** Status: Normal completion<br />

[erwin@hamilton]$<br />

For a nonlinearly constrained problem, the log is somewhat different:<br />

[erwin@hamilton]$ gamslib chenery<br />

Model chenery.gms retrieved<br />

[erwin@hamilton]$ gams chenery nlp=snopt<br />

GAMS 2.50A Copyright (C) 1987-1999 GAMS Development. All rights reserved<br />

Licensee: hamilton G990622:1048CP-LNX<br />

GAMS Development<br />

--- Starting compilation<br />

--- chenery.gms(240) 1 Mb<br />

--- Starting execution<br />

--- chenery.gms(222) 1 Mb<br />

--- Generating model CHENRAD<br />

--- chenery.gms(225) 1 Mb<br />

--- 39 rows, 44 columns, and 133 non-zeroes.<br />

--- Executing SNOPT<br />

GAMS/SNOPT X86/LINUX version 5.3.4-007-035<br />

P. E. Gill, UC San Diego<br />

W. Murray and M. A. Saunders, Stanford University<br />

Work space allocated -- 0.07 Mb<br />

Major Minor Step nCon Merit Feasibl Optimal nS Penalty PD<br />

0 39 0.0E+00 1 -1.653933E+07 1.4E+00 1.4E+00 6 0.0E+00 FF r i<br />

1 6 1.0E+00 2 6.215366E+03 9.8E-01 1.4E+00 6 0.0E+00 FF n r<br />

2 2 1.0E+00 3 -4.424844E+03 5.6E-01 7.8E-01 7 2.8E-01 FF s<br />

3 1 1.0E+00 4 2.756620E+02 1.1E-01 2.1E-01 7 2.8E-01 FF<br />

4 1 1.0E+00 6 5.640617E+02 5.6E-03 2.2E-01 7 2.8E-01 FF m<br />

5 6 1.0E+00 8 6.188177E+02 9.9E-03 2.6E-01 5 2.8E-01 FF m<br />

6 2 7.2E-01 11 6.827737E+02 2.7E-02 2.0E-01 4 2.8E-01 FF m<br />

7 4 5.9E-01 14 7.516259E+02 3.4E-02 9.4E-02 6 2.8E-01 FF m<br />

8 1 1.0E+00 15 8.437315E+02 6.7E-03 1.7E+00 6 2.8E-01 FF<br />

9 3 1.0E+00 17 8.756771E+02 7.1E-03 3.5E-01 4 2.8E-01 FF m<br />

10 5 3.1E-01 21 9.010440E+02 2.4E-02 1.1E+00 6 2.8E-01 FF m<br />

Major Minor Step nCon Merit Feasibl Optimal nS Penalty PD<br />

11 2 2.6E-01 24 9.168958E+02 2.5E-02 6.9E-01 5 2.8E-01 FF<br />

12 1 4.8E-01 26 9.404851E+02 2.9E-02 5.0E-01 5 2.8E-01 FF<br />

13 3 1.0E+00 27 9.983802E+02 1.6E-02 1.3E-01 6 2.8E-01 FF<br />

14 1 1.0E+00 28 1.013533E+03 6.2E-04 4.8E-02 6 2.8E-01 FF<br />

15 2 1.0E+00 29 1.021295E+03 1.1E-02 1.2E-02 5 2.8E-01 FF<br />

16 1 1.0E+00 30 1.032156E+03 5.2E-03 1.1E-02 5 2.8E-01 FF<br />

17 2 1.0E+00 31 1.033938E+03 6.7E-05 1.4E-02 4 2.8E-01 FF<br />

18 2 1.0E+00 32 1.036764E+03 4.5E-04 1.0E-02 3 2.8E-01 FF<br />

19 2 1.0E+00 33 1.037592E+03 6.5E-05 3.0E-02 2 2.8E-01 FF<br />

20 1 1.0E+00 34 1.039922E+03 4.6E-04 4.4E-02 2 2.8E-01 FF<br />

21 2 1.0E+00 35 1.040566E+03 1.4E-05 7.8E-02 3 2.8E-01 FF<br />

Major Minor Step nCon Merit Feasibl Optimal nS Penalty PD<br />

22 4 1.0E+00 36 1.056256E+03 1.3E-02 5.6E-02 4 2.8E-01 FF<br />

23 2 1.0E+00 37 1.053213E+03 1.9E-04 3.1E-03 3 3.9E-01 FF<br />

24 2 1.0E+00 38 1.053464E+03 2.2E-05 4.7E-03 2 3.9E-01 FF<br />

25 1 1.0E+00 39 1.053811E+03 3.7E-05 1.8E-02 2 3.9E-01 FF<br />

26 1 1.0E+00 40 1.055352E+03 1.1E-03 3.9E-02 2 3.9E-01 FF<br />

27 1 1.0E+00 41 1.055950E+03 1.3E-03 1.5E-02 2 3.9E-01 FF<br />

28 1 1.0E+00 42 1.056047E+03 1.5E-06 1.3E-02 2 3.9E-01 FF<br />

29 1 1.0E+00 43 1.056991E+03 3.2E-04 2.2E-02 2 3.9E-01 FF<br />

30 1 1.0E+00 44 1.058439E+03 2.7E-03 1.8E-02 2 3.9E-01 FF<br />

31 3 1.0E+00 45 1.058885E+03 2.2E-04 9.3E-03 2 3.9E-01 FF<br />

32 1 1.0E+00 46 1.058918E+03 3.3E-05 1.6E-03 2 3.9E-01 FF<br />

Major Minor Step nCon Merit Feasibl Optimal nS Penalty PD<br />

33 1 1.0E+00 47 1.058920E+03 4.6E-06 1.1E-05 2 3.9E-01 FF<br />

34 1 1.0E+00 48 1.058920E+03 5.1E-10 5.3E-07 2 3.9E-01 TT<br />

EXIT - Optimal Solution found.<br />

--- Restarting execution<br />

--- chenery.gms(225) 1 Mb<br />

--- Reading solution for model CHENRAD<br />

--- chenery.gms(239) 1 Mb<br />

*** Status: Normal completion<br />

[erwin@hamilton]$<br />

GAMS prints the number of equations, variables and non-zero elements of<br />

the model it generated. This gives an indication of the size of the model. SNOPT<br />

then says how much memory it allocated to solve the model, based on an estimate.<br />

If the user had specified a different amount using the work option or the<br />

workspace model suffix, there would be a message like<br />

Work space requested by user -- 0.76 Mb<br />

Work space requested by solver -- 0.02 Mb<br />
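For illustration, the workspace suffix is set on the model before the solve statement (the model name m and objective z are hypothetical):<br />

```gams
* Request 8 Mb of workspace for model m before solving.
m.workspace = 8;
solve m using nlp minimizing z;
```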

The SNOPT log shows the following columns:<br />

Major The current major iteration number.<br />

Minor is the number of iterations required by both the feasibility and optimality<br />

phases of the QP subproblem. Generally, Minor will be 1 in the<br />

later iterations, since theoretical analysis predicts that the correct active<br />

set will be identified near the solution (see §2).<br />

Step The step length α taken along the current search direction p. The variables<br />

x have just been changed to x + αp. On reasonably well-behaved<br />

problems, the unit step will be taken as the solution is approached.<br />

nObj The number of times the nonlinear objective function has been evaluated.<br />

nObj is printed as a guide to the amount of work required for the linesearch.<br />

nCon The number of times SNOPT evaluated the nonlinear constraint functions.<br />

Merit is the value of the augmented Lagrangian merit function (3). This function<br />

will decrease at each iteration unless it was necessary to increase the<br />

penalty parameters (see §2). As the solution is approached, Merit will<br />



converge to the value of the objective at the solution.<br />

In elastic mode, the merit function is a composite function involving the<br />

constraint violations weighted by the elastic weight.<br />

If the constraints are linear, this item is labeled Objective, the value of<br />

the objective function. It will decrease monotonically to its optimal value.<br />

Feasibl is the value of rowerr, the maximum component of the scaled nonlinear<br />

constraint residual (5). The solution is regarded as acceptably feasible if<br />

Feasibl is less than the Major feasibility tolerance.<br />

If the constraints are linear, all iterates are feasible and this entry is not<br />

printed.<br />

Optimal is the value of maxgap, the maximum complementarity gap (6). It<br />

is an estimate of the degree of nonoptimality of the reduced costs. Both<br />

Feasibl and Optimal are small in the neighborhood of a solution.<br />

nS The current number of superbasic variables.<br />

Penalty is the Euclidean norm of the vector of penalty parameters used in the<br />

augmented Lagrangian merit function (not printed if the constraints are<br />

linear).<br />

PD is a two-letter indication of the status of the convergence tests involving<br />

primal and dual feasibility of the iterates (see (5) and (6) in the description<br />

of Major feasibility tolerance and Major optimality tolerance).<br />

Each letter is T if the test is satisfied, and F otherwise.<br />

If either of the indicators is F when SNOPT terminates with 0 EXIT -- optimal<br />

solution found, the user should check the solution carefully.<br />

The summary line may include additional code characters that indicate what<br />

happened during the course of the iteration.<br />

c Central differences have been used to compute the unknown components<br />

of the objective and constraint gradients. This should not happen in a<br />

GAMS environment.<br />

d During the linesearch it was necessary to decrease the step in order to obtain<br />

a maximum constraint violation conforming to the value of Violation<br />

limit.<br />

l The norm-wise change in the variables was limited by the value of the<br />

Major step limit. If this output occurs repeatedly during later iterations,<br />

it may be worthwhile increasing the value of Major step limit.<br />

i If SNOPT is not in elastic mode, an “i” signifies that the QP subproblem<br />

is infeasible. This event triggers the start of nonlinear elastic mode, which<br />

remains in effect for all subsequent iterations. Once in elastic mode, the<br />

QP subproblems are associated with the elastic problem NP(γ).<br />

If SNOPT is already in elastic mode, an “i” indicates that the minimizer<br />



of the elastic subproblem does not satisfy the linearized constraints. (In<br />

this case, a feasible point for the usual QP subproblem may or may not<br />

exist.)<br />

M An extra evaluation of the problem functions was needed to define an acceptable<br />

positive-definite quasi-Newton update to the Lagrangian Hessian.<br />

This modification is only done when there are nonlinear constraints.<br />

m This is the same as “M” except that it was also necessary to modify the<br />

update to include an augmented Lagrangian term.<br />

R The approximate Hessian has been reset by discarding all but the diagonal<br />

elements. This reset will be forced periodically by the Hessian frequency<br />

and Hessian updates keywords. However, it may also be necessary to<br />

reset an ill-conditioned Hessian from time to time.<br />

r The approximate Hessian was reset after ten consecutive major iterations<br />

in which no BFGS update could be made. The diagonals of the approximate<br />

Hessian are retained if at least one update has been done since the<br />

last reset. Otherwise, the approximate Hessian is reset to the identity<br />

matrix.<br />

s A self-scaled BFGS update was performed. This update is always used<br />

when the Hessian approximation is diagonal, and hence always follows a<br />

Hessian reset.<br />

S This is the same as “s” except that it was necessary to modify the<br />

self-scaled update to maintain positive definiteness.<br />

n No positive-definite BFGS update could be found. The approximate Hessian<br />

is unchanged from the previous iteration.<br />

t The minor iterations were terminated at the Minor iteration limit.<br />

u The QP subproblem was unbounded.<br />

w A weak solution of the QP subproblem was found.<br />

Finally SNOPT prints an exit message. See §5.1.<br />

5.1 EXIT conditions<br />

When the solution procedure terminates, an EXIT -- message is printed to<br />

summarize the final result. Here we describe each message and suggest possible<br />

courses of action.<br />

0 EXIT -- optimal solution found<br />

This is the message we all hope to see! It is certainly preferable to every other<br />

message, and we naturally want to believe what it says, because this is surely<br />

one situation where the computer knows best.<br />



In all cases, a distinct level of caution is in order, even if it can wait until<br />

next morning. For example, if the objective value is much better than expected,<br />

we may have obtained an optimal solution to the wrong problem! Almost any<br />

item of data could have that effect if it has the wrong value. Verifying that the<br />

problem has been defined correctly is one of the more difficult tasks for a model<br />

builder.<br />

If nonlinearities exist, one must always ask the question: could there be more<br />

than one local optimum? When the constraints are linear and the objective is<br />

known to be convex (e.g., a sum of squares) then all will be well if we are<br />

minimizing the objective: a local minimum is a global minimum in the sense<br />

that no other point has a lower function value. (However, many points could<br />

have the same objective value, particularly if the objective is largely linear.)<br />

Conversely, if we are maximizing a convex function, a local maximum cannot<br />

be expected to be global, unless there are sufficient constraints to confine the<br />

feasible region.<br />

Similar statements could be made about nonlinear constraints defining convex<br />

or concave regions. However, the functions of a problem are more likely to<br />

be neither convex nor concave. Our advice is always to specify a starting point<br />

that is as good an estimate as possible, and to include reasonable upper and<br />

lower bounds on all variables, in order to confine the solution to the specific<br />

region of interest. We expect modelers to know something about their problem,<br />

and to make use of that knowledge as they themselves know best.<br />

One other caution about “Optimal solution” messages. Some of the variables or<br />

slacks may lie outside their bounds more than desired, especially if scaling was<br />

requested. Max Primal infeas refers to the largest bound infeasibility and<br />

which variable is involved. If it is too large, consider restarting with a smaller<br />

Minor feasibility tolerance (say 10 times smaller) and perhaps Scale<br />

option 0.<br />

Similarly, Max Dual infeas indicates which variable is most likely to be at<br />

a non-optimal value. Broadly speaking, if<br />

Max Dual infeas / Norm of pi = 10^(-d),<br />

then the objective function would probably change in the dth significant digit if<br />

optimization could be continued. If d seems too large, consider restarting with<br />

smaller Major and Minor optimality tolerances.<br />

Finally, Nonlinear constraint violn shows the maximum infeasibility for<br />

nonlinear rows. If it seems too large, consider restarting with a smaller Major<br />

feasibility tolerance.<br />
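A sketch of how such a restart might be set up: the tolerance keywords go into a snopt.opt file, which is activated through the model's optfile suffix before the solve statement (the model name m and the values are illustrative, not recommendations):<br />

```gams
* snopt.opt -- tighter tolerances for the restart; activate with m.optfile = 1;
Major feasibility tolerance 1.0e-8
Major optimality tolerance  1.0e-8
Minor feasibility tolerance 1.0e-7
```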

1 EXIT -- the problem is infeasible<br />

When the constraints are linear, this message can probably be trusted. Feasibility<br />

is measured with respect to the upper and lower bounds on the variables<br />

and slacks. Among all the points satisfying the general constraints Ax − s = 0,<br />

there is apparently no point that satisfies the bounds on x and s. Violations<br />

as small as the Minor feasibility tolerance are ignored, but at least one<br />

component of x or s violates a bound by more than the tolerance.<br />



When nonlinear constraints are present, infeasibility is much harder to recognize<br />

correctly. Even if a feasible solution exists, the current linearization of<br />

the constraints may not contain a feasible point. In an attempt to deal with<br />

this situation, when solving each QP subproblem, SNOPT is prepared to relax<br />

the bounds on the slacks associated with nonlinear rows.<br />

If a QP subproblem proves to be infeasible or unbounded (or if the Lagrange<br />

multiplier estimates for the nonlinear constraints become large), SNOPT enters<br />

so-called “nonlinear elastic” mode. The subproblem includes the original QP<br />

objective and the sum of the infeasibilities—suitably weighted using the Elastic<br />

weight parameter. In elastic mode, the nonlinear rows are made “elastic”—i.e.,<br />

they are allowed to violate their specified bounds. Variables subject to elastic<br />

bounds are known as elastic variables. An elastic variable is free to violate<br />

one or both of its original upper or lower bounds. If the original problem has<br />

a feasible solution and the elastic weight is sufficiently large, a feasible point<br />

eventually will be obtained for the perturbed constraints, and optimization can<br />

continue on the subproblem. If the nonlinear problem has no feasible solution,<br />

SNOPT will tend to determine a “good” infeasible point if the elastic weight<br />

is sufficiently large. (If the elastic weight were infinite, SNOPT would locally<br />

minimize the nonlinear constraint violations subject to the linear constraints<br />

and bounds.)<br />

Unfortunately, even though SNOPT locally minimizes the nonlinear constraint<br />

violations, there may still exist other regions in which the nonlinear<br />

constraints are satisfied. Wherever possible, nonlinear constraints should be defined<br />

in such a way that feasible points are known to exist when the constraints<br />

are linearized.<br />

2 EXIT -- the problem is unbounded (or badly scaled)<br />

EXIT -- violation limit exceeded -- the problem may be unbounded<br />

For linear problems, unboundedness is detected by the simplex method when<br />

a nonbasic variable can apparently be increased or decreased by an arbitrary<br />

amount without causing a basic variable to violate a bound. A message prior to<br />

the EXIT message will give the index of the nonbasic variable. Consider adding<br />

an upper or lower bound to the variable. Also, examine the constraints that<br />

have nonzeros in the associated column, to see if they have been formulated as<br />

intended.<br />

Very rarely, the scaling of the problem could be so poor that numerical error<br />

will give an erroneous indication of unboundedness. Consider using the Scale<br />

option.<br />

For nonlinear problems, SNOPT monitors both the size of the current objective<br />

function and the size of the change in the variables at each step. If either of<br />

these is very large (as judged by the Unbounded parameters—see §4), the problem<br />

is terminated and declared UNBOUNDED. To avoid large function values,<br />

it may be necessary to impose bounds on some of the variables in order to keep<br />

them away from singularities in the nonlinear functions.<br />

The second message indicates an abnormal termination while enforcing the<br />

limit on the constraint violations. This exit implies that the objective is not<br />



bounded below in the feasible region defined by expanding the bounds by the<br />

value of the Violation limit.<br />

3 EXIT -- major iteration limit exceeded<br />

EXIT -- minor iteration limit exceeded<br />

EXIT -- too many iterations<br />

Either the Iterations limit or the Major iterations limit was exceeded<br />

before the required solution could be found. Check the iteration log to be sure<br />

that progress was being made. If so, repeat the run with higher limits. If not,<br />

consider specifying new initial values for some of the nonlinear variables.<br />

4 EXIT -- requested accuracy could not be achieved<br />

A feasible solution has been found, but the requested accuracy in the dual<br />

infeasibilities could not be achieved. An abnormal termination has occurred, but<br />

SNOPT is within 10^-2 of satisfying the Major optimality tolerance. Check<br />

that the Major optimality tolerance is not too small.<br />

5 EXIT -- the superbasics limit is too small: nnn<br />

The problem appears to be more nonlinear than anticipated. The current set of<br />

basic and superbasic variables have been optimized as much as possible and a<br />

PRICE operation is necessary to continue, but there are already nnn superbasics<br />

(and no room for any more).<br />

In general, raise the Superbasics limit s by a reasonable amount, bearing<br />

in mind the storage needed for the reduced Hessian (about s²/2 double words).<br />
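For example, the limit can be raised through the SNOPT option file (the value is illustrative):<br />

```gams
* snopt.opt
Superbasics limit 2000
```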

6 EXIT -- constraint and objective values could not be calculated<br />

This exit should not occur in a GAMS environment.<br />

7 EXIT -- subroutine funobj seems to be giving incorrect gradients<br />

This exit should not occur in a GAMS environment.<br />

8 EXIT -- subroutine funcon seems to be giving incorrect gradients<br />

This exit should not occur in a GAMS environment.<br />

9 EXIT -- the current point cannot be improved upon<br />

The algorithm could not find a better solution although optimality was not<br />

achieved within the optimality tolerance. Possibly scaling can lead to better<br />

function values and derivatives. Raising the optimality tolerance will probably<br />

make this message go away.<br />

10 EXIT -- cannot satisfy the general constraints<br />

An LU factorization of the basis has just been obtained and used to recompute<br />

the basic variables xB, given the present values of the superbasic and nonbasic<br />

variables. A step of “iterative refinement” has also been applied to increase the<br />

accuracy of xB. However, a row check has revealed that the resulting solution<br />

does not satisfy the current constraints Ax − s = 0 sufficiently well.<br />

This probably means that the current basis is very ill-conditioned. Try Scale<br />

option 1 if scaling has not yet been used and there are some linear constraints<br />

and variables.<br />



For certain highly structured basis matrices (notably those with band structure),<br />

a systematic growth may occur in the factor U. Try setting the LU factor<br />

tolerance to 2.0 (or possibly even smaller, but not less than 1.0).<br />

12 EXIT -- terminated from subroutine s1User<br />

This message appears when the resource limit was reached (see §4.1) or when<br />

the solver was interrupted by a Ctrl-C keyboard signal.<br />

20 EXIT -- not enough integer/real storage for the basis factors<br />

Increase the workspace by using the work option or the workspace model suffix.<br />

21 EXIT -- error in basis package<br />

A preceding message will describe the error in more detail. This error should<br />

not happen easily in a GAMS environment.<br />

22 EXIT -- singular basis after nnn factorization attempts<br />

This exit is highly unlikely to occur. The first factorization attempt will have<br />

found the basis to be structurally or numerically singular. (Some diagonals<br />

of the triangular matrix U were respectively zero or smaller than a certain<br />

tolerance.) The associated variables are replaced by slacks and the modified<br />

basis is refactorized, but singularity persists. This must mean that the problem<br />

is badly scaled, or the LU factor tolerance is too much larger than 1.0.<br />

30 EXIT -- the basis file dimensions do not match this problem<br />

This exit should not occur in a GAMS environment.<br />

31 EXIT -- the basis file state vector does not match this problem<br />

This exit should not occur in a GAMS environment.<br />

32 EXIT -- system error. Wrong no. of basic variables: nnn<br />

This exit should never happen, and may indicate a configuration problem with<br />

your GAMS/SNOPT version.<br />

42 EXIT -- not enough 8-character storage to start solving the problem<br />

Increase the workspace by using the work option or the workspace model suffix.<br />

43 EXIT -- not enough integer storage to start solving the problem<br />

Increase the work space by using the work option or the workspace model suffix.<br />

44 EXIT -- not enough real storage to start solving the problem<br />

Increase the work space by using the work option or the workspace model suffix.<br />

45 EXIT -- Function evaluation error limit exceeded<br />

The function evaluation error limit was reached. When evaluating a nonlinear<br />

function either in the objective or in the constraints, an evaluation error<br />

occurred and the allowed number of such errors was exceeded. Either increase<br />

the domlim option (see §4.1) or preferably add bounds or linear constraints such<br />

that these errors cannot happen. The errors are most often caused by declaring<br />

x to be a free variable while the model contains functions like √x or log(x).<br />

Overflows in exponentiation x^y are also a common cause for this exit. Inspect<br />

the listing file for messages like<br />



**** ERROR(S) IN EQUATION PRODF<br />

2 INSTANCES OF - UNDEFINED LOG OPERATION (RETURNED -0.1E5)<br />

The equation name mentioned in this message gives a good indication where to<br />

look in the model.<br />
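A minimal sketch of the preferred remedy, assuming the offending term is log(x) (the variable name and values are hypothetical):<br />

```gams
* Keep x away from the singularity at zero instead of raising domlim.
positive variable x;
x.lo = 1.0e-6;
* Alternatively, tolerate more evaluation errors (usually a last resort):
option domlim = 100;
```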

6 Listing file messages<br />

The listing file (.lst file) also contains feedback on how the SNOPT solver<br />

performed on a particular model. For the chem.gms model, the solve summary<br />

looks like the following:<br />

S O L V E S U M M A R Y<br />

MODEL MIXER OBJECTIVE ENERGY<br />

TYPE NLP DIRECTION MINIMIZE<br />

SOLVER SNOPT FROM LINE 46<br />

**** SOLVER STATUS 1 NORMAL COMPLETION<br />

**** MODEL STATUS 2 LOCALLY OPTIMAL<br />

**** OBJECTIVE VALUE -47.7065<br />

RESOURCE USAGE, LIMIT 0.010 1000.000<br />

ITERATION COUNT, LIMIT 45 10000<br />

EVALUATION ERRORS 0 0<br />

GAMS/SNOPT X86/LINUX version 5.3.4-007-035<br />

P. E. Gill, UC San Diego<br />

W. Murray and M. A. Saunders, Stanford University<br />

Work space allocated -- 0.02 Mb<br />

EXIT - Optimal Solution found.<br />

Major, Minor Iterations 16 45<br />

Funobj, Funcon calls 22 0<br />

Superbasics 6<br />

Aggregations 0<br />

Interpreter Usage 0.01 100.0%<br />

Work space used by solver -- 0.02 Mb<br />

The solver completed normally at a local (or possibly global) optimum. A<br />

complete list of possible solver status and model status values is in Tables 1 and<br />

2.<br />

The resource usage (time used), iteration count and evaluation errors during<br />

nonlinear function and gradient evaluation are all within their limits. These<br />

limits can be increased by the options reslim, iterlim and domlim (see §4.1).<br />

The possible EXIT messages are listed in §5.1.<br />

The statistics following the EXIT message are as follows.<br />



Model status Remarks<br />

1 OPTIMAL Applies only to linear models.<br />

2 LOCALLY OPTIMAL A local optimum in an NLP was<br />

found. It may or may not be a global<br />

optimum.<br />

3 UNBOUNDED For LP’s this message is reliable. A<br />

badly scaled NLP can also cause this<br />

message to appear.<br />

4 INFEASIBLE Applies to LP’s: the model is infeasible.<br />

5 LOCALLY INFEASIBLE Applies to NLP’s: Given the starting<br />

point, no feasible solution could<br />

be found although feasible points<br />

may exist.<br />

6 INTERMEDIATE INFEASIBLE The search was stopped (e.g., because<br />

of an iteration or time limit)<br />

and the current point violates some<br />

constraints or bounds.<br />

7 INTERMEDIATE NONOPTIMAL The search was stopped (e.g., because<br />

of an iteration or time limit)<br />

and the current point is feasible but<br />

violates the optimality conditions.<br />

8 INTEGER SOLUTION Does not apply to SNOPT.<br />

9 INTERMEDIATE NON-INTEGER Does not apply to SNOPT.<br />

10 INTEGER INFEASIBLE Does not apply to SNOPT.<br />

ERROR UNKNOWN Check listing file for error messages.<br />

ERROR NO SOLUTION Check listing file for error messages.<br />

Table 1: Model status values<br />



<strong>Solver</strong> status Remarks<br />

1 NORMAL COMPLETION SNOPT completed the optimization<br />

task.<br />

2 ITERATION INTERRUPT Iteration limit was hit. Increase the<br />

iterlim option (see §4.1).<br />

3 RESOURCE INTERRUPT Time limit was hit. Increase the<br />

reslim option (see §4.1).<br />

4 TERMINATED BY SOLVER Check the listing file.<br />

5 EVALUATION ERROR LIMIT domlim error limit was exceeded.<br />

See §4.1.<br />

6 UNKNOWN ERROR Check the listing file for error messages.<br />

ERROR SETUP FAILURE Id.<br />

ERROR SOLVER FAILURE Id.<br />

ERROR INTERNAL SOLVER FAILURE Id.<br />

ERROR SYSTEM FAILURE Id.<br />

Table 2: <strong>Solver</strong> status values<br />

Major, minor iterations. The number of major and minor iterations for this<br />

optimization task. Note that the number of minor iterations is the same<br />

as reported by ITERATION COUNT.<br />

Funobj, Funcon calls. The number of times SNOPT evaluated the objective<br />

function f(x) or the constraint functions Fi(x) and their gradients. For a<br />

linearly constrained problem the number of funcon calls should be zero.<br />

Superbasics. This is the number of superbasic variables in the reported solution.<br />

See §2.4.<br />

Aggregations. The number of equations removed from the model by the objective<br />

function recovery algorithm (see §2.1).<br />

Interpreter usage. This line refers to how much time was spent evaluating<br />

functions and gradients. Due to the low resolution of the clock and the<br />

small size of this model, it was concluded that 100% of the time was spent<br />

inside the routines that do function and gradient evaluations. For larger<br />

models these numbers are more accurate.<br />

References<br />

[1] A. R. Conn, Constrained optimization using a nondifferentiable penalty<br />

function, SIAM J. Numer. Anal., 10 (1973), pp. 760–779.<br />

[2] G. B. Dantzig, Linear Programming and Extensions, Princeton University<br />

Press, Princeton, New Jersey, 1963.<br />



[3] S. K. Eldersveld, Large-scale sequential quadratic programming algorithms,<br />

PhD thesis, Department of Operations Research, Stanford University,<br />

Stanford, CA, 1991.<br />

[4] R. Fletcher, An ℓ1 penalty method for nonlinear constraints, in Numerical<br />

Optimization 1984, P. T. Boggs, R. H. Byrd, and R. B. Schnabel, eds.,<br />

Philadelphia, 1985, SIAM, pp. 26–40.<br />

[5] R. Fourer, Solving staircase linear programs by the simplex method. 1:<br />

Inversion, Math. Prog., 23 (1982), pp. 274–313.<br />

[6] P. E. Gill, W. Murray, and M. A. Saunders, SNOPT: An SQP algorithm<br />

for large-scale constrained optimization, Numerical Analysis Report<br />

97-2, Department of Mathematics, University of California, San Diego, La<br />

Jolla, CA, 1997.<br />

[7] P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright, User’s<br />

guide for NPSOL (Version 4.0): a Fortran package for nonlinear programming,<br />

Report SOL 86-2, Department of Operations Research, Stanford<br />

University, Stanford, CA, 1986.<br />

[8] , A practical anti-cycling procedure for linearly constrained optimization,<br />

Math. Prog., 45 (1989), pp. 437–474.<br />

[9] , Some theoretical properties of an augmented Lagrangian merit function,<br />

in Advances in Optimization and Parallel Computing, P. M. Pardalos,<br />

ed., North Holland, North Holland, 1992, pp. 101–128.<br />

[10] , Sparse matrix methods in optimization, SIAM J. on Scientific and<br />

Statistical Computing, 5 (1984), pp. 562–589.<br />

[11] , Maintaining LU factors of a general sparse matrix, Linear Algebra<br />

and its Applications, 88/89 (1987), pp. 239–270.<br />

[12] B. A. Murtagh and M. A. Saunders, Large-scale linearly constrained<br />

optimization, Math. Prog., 14 (1978), pp. 41–72.<br />

[13] , A projected Lagrangian algorithm and its implementation for sparse<br />

nonlinear constraints, Math. Prog. Study, 16 (1982), pp. 84–117.<br />

[14] , MINOS 5.5 User’s Guide, Report SOL 83-20R, Department of Operations<br />

Research, Stanford University, Stanford, CA, Revised 1998.<br />



GAMS/XA<br />

1 Introduction 2<br />

2 How To Run A Model With Xa 2<br />

3 Linear Programming 2<br />

4 <strong>Gams</strong> Semi-Continuous Variables 3<br />

5 <strong>Gams</strong>/Xa Memory Usage 3<br />

6 Branch & Bound Strategies 4<br />

6.1 BRANCHING PRIORITIES 4<br />

6.2 BRANCHING STRATEGIES 5<br />

7 Limitsearch Parameter 6<br />

8 The Xa Option File 7<br />

8.1 AVAILABLE GAMS/XA OPTIONS 8<br />

8.2 GAMS/XA OPTIONS FILE PARAMETERS. 9<br />

8.2.1 OVERVIEW 9<br />

8.2.2 DETAILED DESCRIPTION OF OPTIONS 11<br />

9 Reports 14<br />

9.1 STATISTICS REPORT 14<br />

9.2 MATRIX LISTING 15<br />

9.3 ITERATION LOG LINE 16<br />

9.4 STATUS LINE 17<br />

10 Solution Interruption 18


GAMS/XA 2<br />

1 Introduction<br />

This document describes the GAMS interface to XA.<br />

The GAMS/XA System is the implementation of Sunset Software Technology’s XA Callable<br />

Library, containing the high performance solvers for LP (linear programming) and MIP (mixed<br />

integer programming) problems.<br />

The GAMS/XA System offers many options and tuning parameters to solve your models as<br />

efficiently as possible. Most of these options are accessible to the GAMS User through an option<br />

file, option integer1 through 5, or option real1 through 5. In most cases, GAMS/XA should<br />

perform satisfactorily without using any options.<br />

<strong>Gams</strong>inst is XA-aware. By default, XA dynamically allocates all available memory, up to 32<br />

megabytes. See GAMS/XA MEMORY USAGE Section to alter this behavior.<br />

2 How to run a model with XA<br />

XA is capable of solving models of the following types: LP, RMIP, and MIP.<br />

If you did not specify XA as a default LP, RMIP or MIP, then you can use the following<br />

statement in your GAMS model:<br />

OPTION LP=XA ; { RMIP or MIP }<br />

It should appear before the SOLVE statement.<br />
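For instance (the model and objective names mymodel and obj are hypothetical):<br />

```gams
option lp = xa;
solve mymodel using lp minimizing obj;
```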

3 Linear Programming<br />

Linear Programming is a generalized structure for optimally allocating limited resources among<br />

competing options. An objective function of decision variables is maximized (or minimized) to<br />

determine the allocations to the competing activities via the final values of the decision variables.<br />

If fractional values are not acceptable (such as an optimal solution of warehouse to build of 2.3),<br />

the problem is a mixed integer problem. The limiting of resources is accomplished by<br />

formulating constraint equations that also include decision variables. Linearity must be<br />

maintained for the objective function and the constraint equations.<br />

Linear Programs are typically solved through iterative matrix manipulations of the equations<br />

(constraints). This identifies successively improving corner points in the feasible region of the<br />

problem’s solution space. This approach, called the Simplex Method, is also used on mixed<br />

integer problems by solving many associated problems to partially enumerate the “decision tree”<br />

in an approach called “branch & bound.” While this approach can be quite efficient for solving<br />

mixed integer problems, its application is limited by the size of the problem it can solve.<br />

XA offers the primal/dual simplex and the barrier algorithm for solving linear problems. The<br />

primal/dual simplex method is a very robust method, and in most cases you should get good<br />

performance.<br />

XA is very fast, and allows for restarting from an advanced basis. In case you have a GAMS<br />

model with multiple solves, and there are relatively minor changes between those LP models, the<br />

solves following the first one will use basis information from the previous solve to do a "jump<br />

start". This is completely automated by GAMS and should be worry-free.



In case of a ’cold start’ (the first solve), XA will try to create a better basis than the ’all-slack’<br />

basis. The crash routine tries to maintain dual feasibility. Crashing is usually not used for<br />

subsequent solves because that would destroy the advanced basis passed from GAMS. The<br />

default rule is to crash when GAMS does not pass on a basis, and not to crash otherwise.<br />

The XA presolver is called upon to try to reduce the size of the model. There are some simple<br />

reductions that can be applied before the model is solved, like replacing singleton constraints (e.g.<br />

X =e= 5;) by bounds. Although most modelers will already use the GAMS facilities to<br />

specify bounds (X.lo and X.up), in many cases there are still possibilities to do these reductions. In<br />

addition to GAMS reductions, XA can also remove some redundant rows, and substitute certain<br />

constraints. To prevent the ‘presolver’ from being called enter the following "Set eliminate off"<br />

in the xa.opt file.<br />

4 GAMS SEMI-CONTINUOUS VARIABLES<br />

XA supports GAMS' semi-continuous integer and continuous variable types. Semi-continuous<br />

variables are variables that are either at zero (0), or greater than or equal to their lower bound,<br />

e.g., a pump motor if operating must run at least 2400 r.p.m. and less than 5400 r.p.m., or not at<br />

all (0 r.p.m.); or investment levels must exceed a specific threshold or no investment is made.<br />

All semi-continuous variables must have a lower bound specification, e.g., speed.lo(i) = 100.<br />

And if the semi-continuous variable has the additional restriction of integer then an upper bound<br />

is also required.<br />

Prior to GAMS' introduction of this feature, semi-continuous variables had to be emulated by<br />
adding one additional binary variable and one additional constraint for each semi-continuous<br />
variable. For models of any size, this approach very quickly increases the model's size beyond<br />
solvability. XA now defines these variables implicitly, without adding new variables<br />
and constraints to your model, hence increasing the size of model that can be solved, in a very neat<br />
and clean implementation in our branch and bound algorithm.<br />

For example, to define variables ‘a’ and ‘b’ as semi-continuous enter:<br />

SemiCont Variables a , b ;<br />

or to define semi-continuous integer variables -<br />

SemiInt Variables y1 , y2 ;<br />
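For contrast, the old emulation described above can be sketched as follows, using the pump example; the variable names and bound values are illustrative, and a finite upper bound needs a second linking constraint:

```gams
* Emulating a semi-continuous pump speed before SemiCont existed:
* either speed = 0, or 2400 <= speed <= 5400.
binary variable onoff ;
positive variable speed ;
equations lolink, uplink ;
* speed must reach 2400 whenever the pump is on
lolink .. speed =g= 2400 * onoff ;
* speed may not exceed 5400, and is forced to 0 when off
uplink .. speed =l= 5400 * onoff ;
```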

GAMS’ priority (.prior suffix) values can be associated with semi-continuous variables both<br />

integer and continuous types.<br />

All the integer solving options are available and very useful when solving semi-continuous<br />
variables, because XA's branch and bound algorithm is used to select the direction: either 0, or<br />
greater than or equal to the variable's lower bound. For example, you can select solving strategies, stop-after<br />
times, optcr and optca values, etc.<br />
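For instance, the usual GAMS MIP controls carry over unchanged; the values below are illustrative:

```gams
* Accept any integer solution within 5% of the best bound
option optcr = 0.05 ;
* No absolute gap requirement
option optca = 0 ;
* Stop after 900 seconds of resource time
option reslim = 900 ;
```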

The solve time complexity for semi-continuous variables is comparable with the solve times for<br />
binary models, and that of semi-continuous integer variables is comparable to integer solve times.<br />

5 GAMS/XA MEMORY USAGE<br />

The GAMS/XA default memory allocation is 32 Mbytes. If this amount is not available then XA<br />

will use whatever is available. This works well on a PC and under Unix, but in certain cases you<br />

may want to restrict the amount of memory GAMS/XA uses, for instance:<br />

• If you run Windows and you would like to leave memory for other programs, or



• If you run Windows and you have less than 32 Mbytes of real memory, but a large amount of<br />
virtual memory. In this case XA will request your virtual memory and initialize it, which<br />
may be rather time consuming. (You can hear the disk clicking for some time before<br />
anything happens).<br />

If you need more memory, follow this procedure.<br />
Inside GAMS, add the following line before the solve statement:<br />
modelname.workspace = xx;<br />
where modelname is the name of the model specified in the model statement and xx is the amount of memory in Mbytes.<br />
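Put together, a complete fragment might look like this; the model name m and the 64 Mbyte figure are illustrative:

```gams
* Reserve 64 Mbytes of workspace for GAMS/XA before solving
model m /all/ ;
m.workspace = 64 ;
option LP = XA ;
solve m using LP minimize z ;
```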

In an attempt to ensure that all models solve without running out of memory, XA makes one final<br />
memory check, and if the user-supplied memory amount is below what XA would consider<br />
reasonable for that size of problem, XA will increase your amount to XA's minimal value.<br />
Note that a workspace defined in the GAMS model overrides any memory allocation defined<br />
with XAMEMORY. A small XAMEMORY value can therefore be used as a default, and it is only<br />
overridden via a GAMS statement for large models.<br />

On multi-processor machines, XA will automatically detect and use all available processors<br />
(CPUs) when solving MIP models. You should consider specifying 50% more memory per<br />
processor to take full advantage of these processors. One interesting feature of solving MIP<br />
models in parallel is the potential for superlinear speedup.<br />

6 Branch & Bound Strategies<br />

XA is designed to solve the vast majority of LP problems using the default settings. When solving<br />
integer problems, however, the default settings may not provide the best speed and reliability. By<br />
experimenting with these parameters, performance can be improved dramatically.<br />

6.1 BRANCHING PRIORITIES<br />

Using priorities, when possible, significantly reduces the amount of time required to obtain a good<br />
integer solution. If your model has a natural order in time, in space, or in any other dimension,<br />
then you should consider using priority branching. For example, a multi-period production<br />
problem with inventory would use the period value as the priority setting for all variables active in<br />
that period, or a layered chip manufacturing process would assign priorities to binary<br />
variables top down or bottom up by layer.<br />
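The multi-period scheme might be written as follows; the model name prodmodel, sets i and t, and variable build are hypothetical:

```gams
* Enable branching priorities for this model
prodmodel.prioropt = 1 ;
* Branch on early periods first: priority value equals the period number
build.prior(i,t) = ord(t) ;
```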

If priorities are given to binary, integer, or semi-continuous variables, then these are used to<br />
provide a user-specified order in which variables are branched. XA selects the variable with the<br />
highest priority (lowest numerical value) for branching, and the Strategy determines the direction,<br />
up or down.<br />

Priorities are assigned to variables using the .prior suffix. For example:<br />

NAVY.PRIOROPT = 1 ;<br />

...<br />

Z.PRIOR(J,"SMALL") = 10 ;<br />

Z.PRIOR(J,"MEDIUM") = 5 ;<br />

Z.PRIOR(J,"LARGE" ) = 1 ;<br />

The value 1 indicates the highest priority, and the value 10 the lowest priority. Valid priority<br />
values range between -32000 and 32000. The default priority value is 16000. Columns<br />
with the smallest priority value are branched first.



6.2 BRANCHING STRATEGIES<br />

Nine ‘branch and bound’ strategies are provided to meet the demands of many different types of<br />
problems. Each strategy has four variations which affect the solve time, the speed to the first solution,<br />
and the quality of the best integer solution found. The order in which integer variables are processed during the<br />
search for an integer solution is important. This order is called the ‘branching order’ of integer<br />
variables. Solution times can vary significantly with the method selected.<br />
In general, XA will solve your MIP problems much faster when all model variables are given<br />
some kind of objective function coefficient. This biases the basis in a specific direction and usually<br />
leads to satisfactory first integer solutions.<br />

The STRATEGY command is available to select the strategy of your choice. The STRATEGY<br />
parameter may be specified with one of the following values, assigned to GAMS' INTEGER1<br />
option, e.g.,<br />
OPTION INTEGER1 = 1 ;<br />

Branch & Bound Strategy: Description of Selection Criteria<br />
1 : Proprietary method. Default value. Excellent strategy; also add<br />
priorities to integer variables and try 1P for additional performance gains.<br />
2 : Minimum change in the objective function. This strategy has not been<br />
very successful at solving MIP problems.<br />
3 : Priority based upon column order. This strategy probably does not<br />
have much meaning because you cannot set the order in which variables are<br />
generated in GAMS.<br />
4 : Column closest to its integer bound. This strategy tends to send a<br />
variable to its lower bound.<br />
6 : Column always branches up (high). Second choice after 1. Excellent<br />
choice when your model is a multi-period problem; additional<br />
performance gains (runs faster) when priority values are equated with<br />
the period number; also try 6P if using priorities.<br />
7 : Column always branches down (low). Useful if variables branched<br />
down do not limit capacity or resources. One suggestion is to use<br />
priorities in the reverse order described in Strategy 6.<br />
8 : Column farthest from its integer bound. Next to Strategies 1 and 6,<br />
this is probably the next choice to try. Using priorities and 8P is also<br />
suggested.<br />
9 : Column randomly selected; useful when solving very large problems.<br />
Priority values are helpful in multi-period models.<br />
10 : Apparent smoothest sides on the polytope. Priorities helpful.<br />

Each XA Branch and Bound strategy has many variations. Sometimes these variations reduce<br />

the solution time but may not yield the optimal integer solution. If you are interested in obtaining<br />

a fast and ‘good’ integer solution (which may not be the optimal integer solution), try these<br />

variations.



Variation Effect<br />

A This variation reduces the amount of time XA spends estimating the value of a<br />
would-be integer solution. The values calculated are rough estimates and may<br />
eliminate nodes that would lead to better integer solutions. Variation A may not<br />
appear with variation B.<br />
Example: STRATEGY 1A<br />
Or add 100 to your Strategy number and assign it to GAMS’ INTEGER1 option,<br />
e.g., OPTION INTEGER1 = 101 ; equates to 1A.<br />

B This variation spends very little time calculating estimated integer solutions at each<br />
node; it is the most radical in performance and integer solution value, and may<br />
eliminate nodes that would lead to better integer solutions. Variation B may not<br />
appear with variation A.<br />
Example: STRATEGY 8B<br />
Or add 200 to your Strategy number and assign it to GAMS’ INTEGER1 option,<br />
e.g., OPTION INTEGER1 = 208 ; equates to 8B.<br />

C Each time an improving integer solution is found, XA splits the remaining node list<br />
in half based upon the length of the current list. This technique allows XA to search<br />
nodes that might not normally be explored. The reported integer solution value may<br />
not be the optimal integer solution because nodes may be eliminated that would lead<br />
to these solutions. Variation C may not appear with variation D.<br />
Example: STRATEGY 6C<br />
Or add 400 to your Strategy number and assign it to GAMS’ INTEGER1 option,<br />
e.g., OPTION INTEGER1 = 406 ; equates to 6C.<br />

D Each time an improving integer solution is found, XA splits the remaining node list<br />
based upon the difference between the current projected objective and the best possible<br />
objective value divided by two. This technique allows XA to search nodes that<br />
might not normally be explored. The reported integer solution value may not be the<br />
optimal integer solution because nodes may be eliminated that would lead to these<br />
solutions. Variation D may not appear with variation C.<br />
Example: STRATEGY 8D<br />
Or add 800 to your Strategy number and assign it to GAMS’ INTEGER1 option,<br />
e.g., OPTION INTEGER1 = 808 ; equates to 8D.<br />

P Each time a node is generated, XA calculates the effect of each non-integer variable on<br />
future objective function values, which is calculation intensive. By assigning<br />
branching priorities to your integer variables, XA will only perform this calculation<br />
on the non-integer variable(s) with the lowest branching priority, which frequently<br />
reduces the number of calculations.<br />
Example: STRATEGY 1P or: STRATEGY 6BP<br />
Variation P may appear with any other variation, but to be effective you must assign integer<br />
branching priorities.<br />
Or add 1600 to your Strategy number and assign it to GAMS’ INTEGER1 option,<br />
e.g., OPTION INTEGER1 = 1606 ; equates to 6P.<br />
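Both spellings of a strategy choice are interchangeable; for example, either of the following selects strategy 6 with variation P (6 + 1600 = 1606):

```gams
* In the GAMS model, before the solve statement:
OPTION INTEGER1 = 1606 ;
```

or, equivalently, the single line Strategy 6P in the xa.opt file.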

7 LIMITSEARCH PARAMETER<br />

LIMITSEARCH is used to limit the number of nodes searched by implicitly or explicitly stating<br />
a bound on the value of the integer solution. The integer solution obtained, if any, will have a<br />
functional value no worse than LIMITSEARCH. Each subsequent integer solution will have a<br />
monotonically improving objective function value until an optimal integer solution is found and<br />
verified.



If you can estimate the objective function value of a good integer solution, you can avoid nodes<br />
that lead to worse solutions and, consequently, speed up the search. However, too restrictive a<br />
value may lead to no integer solution at all, if no integer solution with a functional value better<br />
than the LIMITSEARCH value exists. If the search terminates with ‘NO INTEGER<br />
SOLUTION’, you must begin the search again with a less restrictive LIMITSEARCH value.<br />

The LIMITSEARCH command line parameter has three (3) methods of specifying a lower limit<br />

on the objective function.<br />

LIMITSEARCH Value: Meaning<br />
## : Only search for integer solutions between this value and the ‘optimal continuous’ solution.<br />
##% : Only search for integer solutions within #% of the ‘optimal continuous’ solution.<br />
(#%) : Solve for the integer solution that is within #% of the ‘optimal integer solution’. This can reduce the search<br />
time significantly, but the reported integer solution may not be the optimal integer solution; it will, however, be<br />
within #% of it.<br />

You must experiment with different Strategies and LimitSearch values to determine which is best<br />

for your problems. We have found that one of the following settings generally works quite well:<br />

STRATEGY 1 LIMITSEARCH (5%) STOPAFTER 15:00<br />

or<br />

STRATEGY 6 LIMITSEARCH (5%) STOPAFTER 15:00<br />

Also try strategies 1A, 1B, 6A, 6B, and 9. The LimitSearch value should be as large as possible<br />
while still meeting your business objectives (distance from the optimal integer solution). As you gain<br />
experience with these ‘strategies’, you will be able to make an ‘informed’ choice.<br />

8 The XA Option File<br />

The option file is called xa.opt. The GAMS model should contain the following line to signal<br />

GAMS/XA to use the option file:<br />

modelname.optfile = 1 ;<br />

where modelname is the name of the model specified in the model statement. For instance:<br />

model m /all/ ;<br />

m.optfile = 1 ;<br />

option LP = XA ;<br />

solve m using LP minimize z ;



The XA option file allows you to give solver-specific (XA) commands that are not part of the<br />
GAMS system. The commands in the XA option file take precedence over their equivalent<br />
GAMS commands. The option file is plain text; comment lines begin with an<br />
asterisk (*). For example:<br />

* Integer solving strategy.<br />

Strategy 6P<br />

* Write log information to the screen every 5 seconds.<br />

Set FreqLog 00:05<br />

* Do NOT scale the problem.<br />

Set Scale No<br />

The contents of the option file are echoed to the screen.<br />

8.1 Available GAMS/XA <strong>Options</strong><br />

Here is a list of the additional GAMS/XA options. Check also the section on the GAMS options<br />
in the GAMS User’s Guide.<br />

NAME Description Equivalent xa.opt<br />
file command<br />
OPTION INTEGER1 = # ; # is XA’s Branch and Bound strategy selection. Strategy #<br />
OPTION INTEGER2 = # ; # selects a basis file to use. Values range from 1<br />
to 32000. The default of 0 indicates that no<br />
external basis information is available. This basis<br />
will override any basis GAMS passes to XA. Each<br />
GAMS SOLVE statement will cause XA to use<br />
and update this basis file. Once a model has been<br />
developed (reached its production phase), use of<br />
this capability should improve model re-runs. If<br />
XA is solving an integer model and is interrupted<br />
by an iteration limit, resource limit, or Control Z,<br />
XA will save enough information to restart the<br />
branch and bound process just prior to the<br />
interruption. In addition, XA automatically saves<br />
restarting information every 15 minutes for<br />
additional safety against power failure or<br />
accidental human intervention. You can stop and<br />
restart a model any number of times and XA will<br />
present GAMS with the best solution up to that<br />
point. After XA solves a model to its completion,<br />
the restarting information is deleted. If you plan<br />
to restart your model, do not change it. The size<br />
of the restart file is very small, about 50 bytes per<br />
integer variable. The restart file’s extension is r01<br />
and the integer solution basis file’s extension is<br />
b01. To restart an interrupted model, just re-run<br />
GAMS. GAMS has to reprocess your model<br />
statements, but XA will automatically resume<br />
where it left off.<br />
Equivalent xa.opt file command: Basis filename.ext<br />

OPTION INTEGER3 = # ; XA terminates after finding # improving integer<br />
solutions. Default is 0, indicating no termination.<br />
On restarted models the count begins at 0.<br />
Normally this termination will leave a restart file.<br />
Equivalent xa.opt file command: Set IntLimit #<br />
OPTION REAL1 = # ; If an integer solution is found which is better than<br />
#, then XA will terminate. The default is 0,<br />
indicating no termination value. Normally this<br />
termination will leave a restart file.<br />

OPTION REAL2 = # ; # indicates the amount of perturbation for highly<br />
degenerate problems. A positive value allows XA<br />
to generate a uniformly distributed random<br />
value between 0 and #. A negative value uses a<br />
constant perturbation of the absolute value of #.<br />
The default is 0, indicating no perturbation.<br />
Note: the REAL2 option should not be used except<br />
in cases where "all else fails", and definitely not<br />
when first starting off. XA has built-in<br />
routines to handle degeneracy.<br />

OPTION REAL3 = # ; # indicates the number of seconds to continue<br />

solving after finding the first integer solution.<br />

Normally this termination will leave a restart file.<br />

The default is 0, indicating no termination.<br />

OPTION REAL4 = # ; # indicates the maximum percent of binary<br />
variables to lock at each generated node. If you<br />
believe your model has an integer solution near<br />
the relaxed LP solution, then this parameter may<br />
help in getting a solution faster. We recommend<br />
trying a value of 5 to 10. This parameter has<br />
had mixed success. Default value: 0, indicating no<br />
locking of binary variables.<br />

8.2 GAMS/XA <strong>Options</strong> File Parameters.<br />

8.2.1 Overview<br />

--- Basis / Restarting Information ---<br />
Basis filename.ext | None | Never<br />
Set ReStart No | Yes<br />
--- Performance Enhancements ---<br />
Set Pricing #<br />
Set Mprice #<br />
Set Scale No | Yes | #<br />
Set Eliminate No | Yes | Off<br />
Set Perturbate #<br />
Set Crash #<br />
Set DegenIter #<br />
Set MaxCPU #<br />
--- Barrier Options ---<br />
Set Barrier No | Yes<br />
Set Perturbation #<br />
StopAfter hh:mm:ss<br />
Set IntPct #<br />
--- Reporting Options ---<br />
Output filename.ext | CON: | LPT1:<br />
Set XARPRT No | Yes<br />
MatList Variable | Constraint | Both | None<br />
ToMPS No | Yes<br />
--- Mixed Integer Options ---<br />
Strategy a<br />
LimitSearch # | #% | (#%)<br />
Priority a<br />
StopAfter hh:mm:ss<br />
Set IntGap #<br />
Set IntLimit #<br />
Set Relaxed No | Yes<br />
Set ReStart No | Yes<br />
Set IntPct #<br />
Set MaxNodes #<br />
Set Increment #<br />
Set Ltolerance #<br />
Set Utolerance #<br />
Set BVPriority #<br />
Set Iround No | Yes<br />
TreeDepth #<br />
TreeTime #<br />
Set Ibounds #<br />
Set DualSimplex No | Yes<br />
--- Stopping Criteria ---<br />
Set Iterations #<br />
Set TimeLimit hh:mm:ss<br />
StopAfter hh:mm:ss<br />
--- Algorithmic Tolerances/Enhancements ---<br />
Set Markowitz #<br />
Set Pricing #<br />
Set ReInvertFreq #<br />
Set TcTolerance #<br />
Set Ypivot #<br />
Set XToZero #<br />
Set ReducedCost #<br />
Set ElemSize #<br />
Set TransEntry #<br />
Set RejPivot #<br />
Set Scale No | Yes | #<br />
--- System Configuration Options ---<br />
Set Bell No | Yes



8.2.2 Detailed description of options<br />

Parameter Description<br />
<br />
Basis file.ext | none | never<br />
After XA has solved your problem, the solution is saved for the next time<br />
the problem is solved. This can greatly reduce the number of iterations<br />
and execution time required. The Dual Simplex algorithm is used when<br />
XA detects advanced-basis restarts. You can instruct XA not to use the<br />
Dual Simplex algorithm for restarts as follows: Set DualSimplex No.<br />
- file.ext - The filename containing an ‘advance basis’ for restarting.<br />
Default file extension is SAV.<br />
- none - No ‘advance basis’ is specified, but the ‘final basis’ is<br />
saved in the problem filename with an extension of SAV.<br />
- never - No ‘advance basis’ is specified, and the ‘final basis’ is<br />
not saved.<br />
Defaults to no basis file.<br />
<br />
FORCE No | Yes<br />
If your LP model is infeasible, XA makes adjustments in column and row<br />
bounds to make the problem feasible. No adjustments are made to binary<br />
columns or RHS values of SOS sets. Depending upon how tightly<br />
constrained the problem is, XA may be prevented from making additional<br />
adjustments that would lead to an integer solution. No adjustments are<br />
made to make a column’s lower bound less than zero.<br />
Default value: NO<br />
<br />
LIMITSEARCH # | #% | (#%)<br />
LimitSearch is used to limit the number of nodes searched by implicitly<br />
or explicitly stating an upper bound on the integer solution.<br />
Default value is no limiting value, or -1.0e23 for maximizing and 1.0e23<br />
for minimizing.<br />
<br />
MATLIST Var | Con | Both | None<br />
The problem is displayed in equation format. This is probably the most useful<br />
command when debugging a model.<br />
- Var - List columns in rows.<br />
- Con - List rows in columns.<br />
- Both - Var and Con.<br />
- None - No listing (default value).<br />
<br />
OUTPUT CON: | filename<br />
Location where XA prints all messages.<br />
- CON: prints on stdout (default value).<br />
- filename with extension prn. Existing files are overwritten.<br />
<br />
SET Barrier No | Yes | X<br />
Activates XA’s primal-dual interior point algorithm. Very useful when<br />
solving very large scale linear programming models.<br />
- Yes - Use the primal-dual interior point algorithm; MIP models<br />
automatically cross over to the simplex algorithm for branching and<br />
bounding.<br />
- No - Use the simplex algorithm.<br />
- X - Cross over to the simplex algorithm after solving (automatic<br />
when solving MIP models).<br />
<br />
SET BELL No | Yes<br />
Turns XA’s termination bell on or off.<br />
- No - Do not ring the bell (default value).<br />
- Yes - Ring the bell.<br />
<br />
SET CRASH 0 | 1 | 2<br />
Method of generating the initial basis.<br />
- 0 - Minimize primal infeasibility (default value).<br />
- 1 - Minimize dual infeasibility.<br />
- 2 - Both 0 & 1.



SET DEGENITER # Degenerate anti-cycling aid. Number of consecutive degenerate pivots<br />
before the anti-cycling code is activated.<br />
Default value: square root of the number of rows.<br />

SET FREQLOG hh:mm:ss Frequency in time to print the iteration log line. A negative number (e.g.<br />
-00:02) overwrites the same line. This command reduces the overhead of<br />
printing too many iteration lines.<br />
Default value: 00:01 (one log line per second).<br />

SET INTGAP # Minimum objective function improvement between each new integer<br />
solution. The reported integer solution may not be the optimal integer<br />
solution because of premature termination.<br />
Default value: 0.00.<br />

SET INTLIMIT # After finding # improving integer solutions, XA terminates with the best<br />
solution found thus far. The reported integer solution may not be the optimal integer<br />
solution because of premature termination.<br />
Default: no limit on the number of integer solutions.<br />

SET INTPCT # Percent of available integer columns to consider fixing at each integer<br />
node. Useful on very large binary problems. If 100 is entered, then all<br />
integer columns that are integral at the end of solving the relaxed LP<br />
problem are fixed at their current integer bounds.<br />
Default is zero (0.0), meaning no fixing.<br />
<br />
SET IROUND No | Yes<br />
XA reports either rounded or unrounded integer column primal activity.<br />
- YES causes XA to report rounded integer column activity. XA rounds<br />
integer activity values to the closest bound based upon the<br />
LTOLERANCE and UTOLERANCE values.<br />
- NO causes XA to report unrounded integer variable activity, but these<br />
activities will always be within the requested integer tolerance.<br />
Default value: YES<br />

SET ITERATION # Maximum number of iterations. XA terminates if the limit is exceeded, and if<br />
solving an integer problem the best integer solution found thus far is<br />
returned. The reported integer solution may not be the optimal integer<br />
solution because of premature termination.<br />
Default value: 2000000000<br />
<br />
SET LTOLERANCE #<br />
SET UTOLERANCE #<br />
The tolerance (closeness) XA uses when comparing an integer column's activity<br />
to its integer value. For instance, you might consider using a UTOLERANCE of<br />
0.02 (a boat 98% full is, for all practical purposes, 100% full). But<br />
beware: integer activities within the specified tolerances are used in<br />
calculating constraint relationships and the objective function value.<br />
For example, if LTOLERANCE = 0.001, UTOLERANCE = 0.05, and Y<br />
has a reported (rounded) activity of 4.0, then 3 * Y ranges from 3 * 3.95 to 3<br />
* 4.001.<br />
Default values: 5.0e-6.<br />
<br />
SET MARKOWITZ #<br />
Numeric stability vs. sparsity in basis updating and inversion.<br />
Sparsity favors a larger number at the expense of numeric stability.<br />
Default value: 10<br />

SET MAXCPU # Number of processors to use when solving MIP models. In general, MIP<br />
models should solve in about 1/# of the time required on a single-processor machine.<br />
Consider requesting 50% more memory per processor.<br />
Defaults to the number of processors on the machine.<br />
The number can be greater than the number of physical processors.<br />



SET MAXNODES # Maximum number of branch and bound nodes.<br />
Default value: 4,000 plus the number of binary variables plus the square root<br />
of the number of integer columns.<br />
<br />
SET PRICING 0 | 1 | 2 | 3 | 4<br />
Variable pricing strategies, useful if XA appears to make a substantial<br />
number (rows/2) of pivots that do not move the objective function or reduce<br />
the sum of infeasibilities. This feature requires more memory because an<br />
additional set of matrix coefficients is loaded into memory.<br />
- 0 - Standard reduced cost pricing (default value).<br />
- 1 - Automatic DEVEX pricing switch-over.<br />
- 2 - Infeasible DEVEX pricing.<br />
- 3 - Feasible DEVEX pricing (our next choice).<br />
- 4 - Both infeasible and feasible DEVEX pricing.<br />
<br />
SET REDUCEDCOST #<br />
Dual activity tolerance to zero.<br />
Default value: 1e-7<br />
<br />
SET RELAXED No | Yes<br />
The integer problem is solved as a standard LP with no integer columns in<br />
the formulation.<br />
- No - Integer problems are solved with the branch and bound method<br />
(default value).<br />
- Yes - Solve the problem as if all columns were continuous.<br />
<br />
SET RESTART No | Yes<br />
When solving integer, binary, or semi-continuous problems, XA may<br />
terminate before exploring all your integer nodes; if so, you have the option<br />
of restarting XA to pick up right where you left off. Just before XA returns to<br />
your program, XA writes out basis information in the basis file (extension<br />
sav); the BASIS command line parameter determines the filename used with<br />
the appropriate extensions. If an integer, binary, or semi-continuous solution<br />
has been found, an integer basis file is created (extension b01).<br />
And if all your integer nodes are not explored, then a third file is created<br />
for restarting the unchanged problem (extension r01).<br />
- Yes - If an r01 file exists for the identical problem, restart the<br />
integer problem where it left off, and if XA terminates before all nodes<br />
are explored again, write restart information to this file.<br />
- No - Do not use or create the r01 file to restart the problem (default<br />
value).<br />
<br />
SET SCALE No | Yes | 2<br />
Problem scaling technique.<br />
- No - Data not scaled.<br />
- Yes - Column and row scaling (default value).<br />
- 2 - Row scaling only.<br />
<br />
SET STICKWITHIT #<br />
If an integer basis (.b01) file is reloaded to indicate branching direction<br />
for the current XA solve, this branching advice is followed until #<br />
infeasible branches are made. After # infeasible branches, the standard<br />
branching direction for the particular branch & bound strategy is used.<br />
Default value: 10.<br />

SET TCTOLERANCE # The smallest technological coefficient allowed in your matrix array. This<br />
tolerance is useful when extensive calculations are performed on these<br />
coefficients, where the results should be zero but, because of rounding<br />
errors, end up being something like 1.0e-15.<br />
Default value: 1.0e-7



SET TIMELIMIT hh:mm:ss<br />
Maximum time allowed for solving the problem. XA terminates if this limit<br />
is exceeded, and if solving an integer problem the best integer solution<br />
found thus far is returned. Units are wall clock time. If set too low, the<br />
reported integer solution may not be the optimal integer solution because<br />
of premature termination.<br />
Default value: 2000000000 seconds.<br />

SET XTOZERO #<br />
Primal activity tolerance to zero.<br />
Default value: 1e-7<br />

STOPAFTER hh:mm:ss Amount of time (hh:mm:ss) to continue solving after finding the first<br />
integer solution. The reported integer solution may not be the optimal integer<br />
solution because of premature termination.<br />
Default value: 0, indicating no termination.<br />

STRATEGY # | #A | #B<br />
MIP solving strategy.<br />
Default value: 1<br />
<br />
TOMPS No | Yes | Secure<br />
Write an MPS file of the problem. A gams.mps file is created or rewritten.<br />
- No - Do not write an MPS formatted file (default value).<br />
- Yes - Write the problem in MPS format.<br />
- Secure - Write the problem in MPS format and change column and row<br />
names to: C0, C1, ... and R0, R1, ...<br />
<br />
9 REPORTS<br />
<br />
9.1 STATISTICS REPORT<br />

[a] STATISTICS - FILE: alloy TITLE: Alloy Blending Mon Jan 01 00:00:00<br />

19xx<br />

[b] XA VERSION xx.x USABLE MEMORY xxxK BYTES<br />

[c] VARIABLES 7 MAXIMUM 100,000<br />

[d] 2 LOWER, 0 FIXED, 5 UPPER, 0 FREE<br />

[e] CONSTRAINTS 7 MAXIMUM 50,000<br />

[f] 1 GE, 1 EQ, 5 LE, 0 NULL/FREE, 1 RANGED.<br />

[g] CAPACITY USED BY CATEGORY-<br />

0.0% VARIABLE, 0.1% CONSTRAINT, 48 NON-ZEROS, WORK 101,977<br />

[h] MINIMIZATION.<br />

[i] O P T I M A L S O L U T I O N ---> OBJECTIVE 296.21661<br />

SOLVE TIME 00:00:00 ITER 12 MEMORY USED 0.1%<br />

E x p l a n a t i o n<br />

Starting with the first line in the STATISTICS REPORT:<br />

a) The first line has the problem filename ALLOY, the title for the problem file, today’s date<br />

and time.<br />

b) Current version of the XA System, platform run on, and the amount of usable memory.<br />

c) Number of variables in the problem.<br />

d) Detailed breakdown of how many variables have lower, fixed, and upper bounds, and how many are free.<br />

e) Number of constraints in the problem.<br />

f) Detailed breakdown of the number of GE (>=), EQ (=), LE (<=), null/free, and ranged<br />
constraints.<br />


GAMS/XA 15<br />

g) Capacity used by category - as your problem grows, these percentages will also increase.<br />

h) Statement of the objective of the problem.<br />

i) Status Line. The problem has been solved and these are the results: 1) optimal solution, 2)<br />

value of the objective function, 3) iteration time hh:mm:ss, 4) number of iterations, and 5)<br />

MEMORY USED -- important to watch as this resource usually grows the fastest. More on<br />

‘Status Line’ later.<br />

9.2 MATRIX LISTING<br />

The XA System can display your problem in equation format. This is helpful for verifying your<br />

formulation and locating errors. This report is controlled with the MATLIST command line<br />

parameter --<br />

Example: MatList Variables<br />

Minimize<br />

COST: 0.03 BIN1 + 0.08 BIN2 + 0.17 BIN3 + 0.12 BIN4 + 0.15 BIN5 + 0.21 ALUM +<br />
0.38 SILICON<br />

Constraints<br />

YIELD: BIN1 + BIN2 + BIN3 + BIN4 + BIN5 + ALUM + SILICON = 2000<br />

FE: 0.15 BIN1 + 0.04 BIN2 + 0.02 BIN3 + 0.04 BIN4 + 0.02 BIN5 + 0.01 ALUM +<br />
0.03 SILICON



9.3 ITERATION LOG LINE<br />

After the Statistical Report is printed, the iteration log line is displayed. The displaying of the<br />

iteration log line is controlled by the MUTE command line parameter. MUTE YES or Set<br />

FreqLog 0 suppresses displaying of the iteration log line. If the iteration log line is displayed, the<br />

following information is shown. The LP iteration log line has three different formats depending<br />

upon XA’s progress:<br />

During LP infeasible iterations -<br />

Iter: ### Inf: #### #<br />

Iteration count, amount of infeasibility, and number of infeasible<br />

columns.<br />

During LP feasible iterations -<br />

Iter: ### Obj: ### #<br />

Iteration count, objective function, and number of columns priced.<br />

During Integer branch and bound -<br />

Node IInf ToGo.Map Best.Obj Cur.Obj Int.Obj X Column<br />

U/D<br />

#### ### ######### ####### ####### ###### # #######<br />

#<br />

Description<br />

Node Active node; the smaller the better. The value increases and decreases as the branch-and-bound<br />
proceeds.<br />

IInf Number of columns not yet at integral values. This number converges to 0 as XA<br />
approaches an integer solution.<br />

ToGo.Map A numeric picture of unexplored nodes. Each digit represents the number of remaining<br />
nodes in that digit's position.<br />

For example, 435 means:<br />

• 4 unexplored nodes between nodes 20 and 29.<br />

• 3 unexplored nodes between nodes 10 and 19.<br />

• 5 unexplored nodes between nodes 0 and 9.<br />

Best.Obj Best possible integer objective function. As the branch-and-bound algorithm<br />

proceeds this number bounds the Optimal Integer Solution. This number does not<br />

change very fast.<br />

Cur.Obj Current objective function. If an integer solution is found it cannot be any better<br />

than this value.<br />

Int.Obj Objective function of the best integer solution found so far. This value improves<br />

as additional integer solutions are found.<br />

X Number of improving integer solutions found.<br />

Column Column selected by the branch-and-bound process.<br />

U/D Direction the Column branched, Down(-) or Up(+).



Display of the iteration log line may be toggled on and off by entering a CTRL/U during the<br />
iteration process. Use the Set FreqLog command line parameter to reduce the number of log lines<br />
displayed. Displaying the iteration log line can significantly slow the solving process down.<br />
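For example, the log volume might be reduced with an option file like the following sketch; the frequency value is arbitrary and the exact keyword form follows the "Set" style used elsewhere in this manual:<br />

```
Set FreqLog 100
```

Setting FreqLog to 0 (or MUTE YES) suppresses the iteration log line entirely, as noted above.<br />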

9.4 STATUS LINE<br />

After XA solves (or indicates no solution) your problem, you are greeted with one of the<br />

following ‘status lines’:<br />

O P T I M A L S O L U T I O N - - -> OBJECTIVE: ####.####<br />

SOLVE TIME hh:mm:ss ITER xx MEMORY USED x.x%<br />

N O F E A S I B L E S O L U T I O N<br />

SOLVE TIME hh:mm:ss ITER xx MEMORY USED x.x%<br />

Identifies constraints and amount(s) of adjustments to make the problem ‘feasible’. If FORCE<br />

YES is set, then XA automatically makes these adjustments and continues.<br />

U N B O U N D E D S O L U T I O N<br />

SOLVE TIME hh:mm:ss ITER xx MEMORY USED x.x%<br />

The unbounded variable is identified. If FORCE YES is set, then XA automatically sets the<br />
variable to its lower limit and continues.<br />

U S E R L I M I T S E X C E E D E D<br />

SOLVE TIME hh:mm:ss ITER xx MEMORY USED x.x%<br />

Entering a CTRL/Z terminates XA and the current solution is saved for restarting.<br />

T I M E L I M I T E X C E E D E D<br />

SOLVE TIME hh:mm:ss ITER xx MEMORY USED x.x%<br />

The time allowed to solve this problem was exceeded. The current solution is saved for<br />

restarting.<br />

W O R K S P A C E S I Z E E X C E E D E D<br />

SOLVE TIME hh:mm:ss ITER xx MEMORY USED x.x%<br />

Problem too large. See the GAMS/XA MEMORY USAGE section.<br />

If the problem terminates with anything other than ‘optimal solution’ then the ‘title’ is changed<br />

to reflect the reason.



10 SOLUTION INTERRUPTION<br />

This feature gives you complete control over the operation of your computer. Long running<br />

problems may be interrupted without losing the time already spent on solving the problem. This<br />

feature can be used as a checkpoint for restarting in the event of a power failure, etc. A problem<br />
may be interrupted any number of times, plus you may change your original problem any way<br />
you wish. The XA System will make the appropriate changes and solve the newly modified<br />
problem. This feature may NOT be used if you enter BASIS NEVER as an XA command line<br />
parameter.<br />

To interrupt your problem, simply type a CTRL/Z (a Z on Unix platforms) during the ‘iteration’<br />

process. The XA System will save the current solution into filename.sav (because the current<br />
iteration must complete its run, there may be a slight delay before XA responds to your<br />
CTRL/Z). When you interrupt XA with a CTRL/Z the following message is displayed:<br />

U S E R L I M I T S E X C E E D E D<br />

To restart your problem, simply re-enter what you originally entered to execute GAMS.


GAMS/XPRESS<br />

Contents<br />

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1<br />

2 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1<br />

3 <strong>Options</strong> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2<br />

3.1 General <strong>Options</strong> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3<br />

3.2 LP <strong>Options</strong> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4<br />

3.3 MIP <strong>Options</strong> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5<br />

3.4 Newton-Barrier <strong>Options</strong> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8<br />

4 Helpful Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8<br />

1 Introduction<br />

This document describes the GAMS/XPRESS linear and mixed-integer programming solver. The GAMS/XPRESS<br />

solver is based on the XPRESS-MP Optimization Subroutine Library, and runs only in conjunction with the<br />

GAMS modeling system.<br />

GAMS/XPRESS (also simply referred to as XPRESS) is a versatile, high-performance optimization system. The<br />

system integrates a powerful simplex-based LP solver, a MIP module with cut generation for integer programming<br />

problems and a barrier module implementing a state-of-the-art interior point algorithm for very large LP problems.<br />

The GAMS/XPRESS solver is installed automatically with your GAMS system. Without a license, it will run in<br />

student or demonstration mode (i.e. it will solve small models only). If your GAMS license includes XPRESS,<br />

there is no size or algorithm restriction imposed by the license, nor is any separate licensing procedure required.<br />

2 Usage<br />

If you have installed the system and configured XPRESS as the default LP, RMIP 1 and MIP solver, all LP, RMIP<br />

and MIP models without a specific solver option will use XPRESS. If you installed another solver as the default,<br />

you can explicitly request a particular model to be solved by XPRESS by inserting the statement<br />

option LP = xpress; { or MIP or RMIP }<br />

somewhere before the solve statement.<br />
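For instance, a minimal fragment might read as follows; the model and variable names (mymodel, z) are placeholders, not taken from this manual:<br />

```gams
model mymodel /all/;
* select XPRESS for this LP solve
option lp = xpress;
solve mymodel using lp minimizing z;
```

The same pattern applies with MIP or RMIP in place of LP.<br />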

The following standard GAMS options can be used to control XPRESS:<br />

• option reslim =x; or modelname.reslim = x;<br />

Stop the algorithm after x seconds and report the best solution found so far. Modelname is the name of the<br />

model as specified in a previous model statement.<br />

1 RMIP means: Relaxed Mixed Integer Programming. You can solve a MIP model as an RMIP. This ignores the integer<br />
restrictions and thus solves the problem as an LP.



• option iterlim=n; or modelname.iterlim = n;<br />

Stop the algorithm after n simplex iterations and report the best solution found so far. For MIP models,<br />

this places a cumulative limit on simplex iterations for the relaxed LP and the nodes of the B&B tree.<br />

Modelname is the name of the model as specified in a previous model statement.<br />

• option sysout=on;<br />

Echo more detailed information about the solution process to the listing file.<br />

• option optcr=x;<br />

In a MIP stop the search as soon as the relative gap is less than x.<br />

• option optca=x;<br />

In a MIP stop the search as soon as the absolute gap is less than x.<br />

• option bratio=x;<br />

Determines whether or not an advanced basis is passed on to the solver. Bratio=1 will always ignore an<br />

existing basis (in this case XPRESS will use a crash routine to find a better basis than an all-slack basis),<br />

while bratio=0 will always accept an existing basis. Values between 0 and 1 use the number of non-basic<br />

variables found to determine if a basis is likely to be good enough to start from.<br />

• modelname.prioropt = 1;<br />

Turns on usage of user-specified priorities. Priorities can be assigned to integer and binary variables using<br />

the syntax: variablename.prior = x;. Default priorities are 0.0. Variables with a priority v1 are branched<br />
upon earlier than variables with a priority v2 if v1 < v2.
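A sketch of the priority mechanics follows; the set i, variable x, objective cost, and model mymodel are hypothetical names for illustration:<br />

```gams
mymodel.prioropt = 1;
* smaller .prior values are branched on earlier
x.prior(i) = ord(i);
solve mymodel using mip minimizing cost;
```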



Keywords are not case sensitive: whether you specify iterlim, ITERLIM, or Iterlim, the same option is set.<br />

To use an options file you specify a model suffix modelname.optfile=1; or use command line options optfile=1.<br />
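As a sketch of the mechanics just described (file contents and values are illustrative only, with the model name mymodel as a placeholder):<br />

```gams
* xpress.opt, placed in the working directory, might contain:
*   algorithm barrier
*   lplog 50
mymodel.optfile = 1;
solve mymodel using lp minimizing z;
```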

The tables that follow contain the XPRESS options. They are organized by function (e.g. LP or MIP) and also<br />

by type: some options control the behavior of the GAMS/XPRESS link and will be new even to experienced<br />

XPRESS users, while other options exist merely to set control variables in the XPRESS library and may be<br />

familiar to XPRESS users.<br />

3.1 General <strong>Options</strong><br />

The following general options control the behavior of the GAMS/XPRESS link.<br />

Option Description Default<br />
advbasis 0: don't use advanced basis provided by GAMS<br />
1: use advanced basis provided by GAMS<br />
This option overrides the GAMS BRATIO option.<br />
Default: determined by GAMS<br />
algorithm simplex: use simplex solver<br />
barrier: use barrier algorithm<br />
This option is used to select the barrier method to solve LPs. By default the<br />
barrier method will do a crossover to find a basic solution.<br />
Default: simplex<br />
basisout If specified an MPS basis file is written. In general this option is not used in a<br />
GAMS environment, as GAMS maintains basis information for you automatically.<br />
Default: don't write a basis file<br />
iterlim Sets the iteration limit for simplex algorithms. When this limit is reached<br />
the system will stop and report the best solution found so far. Overrides the<br />
GAMS ITERLIM option.<br />
Default: 10000<br />
mpsoutputfile If specified XPRESS-MP will generate an MPS file corresponding to the GAMS<br />
model. The argument is the file name to be used. It cannot have an extension:<br />
XPRESS-MP forces the extension to be .MAT even if an extension was<br />
specified. You can prefix the file name with a path.<br />
Default: don't write an MPS file<br />
rerun Applies only in cases where presolve is turned on and the model is diagnosed<br />
as infeasible or unbounded. If rerun is nonzero, we rerun the model using<br />
primal simplex with presolve turned off in hopes of getting better diagnostic<br />
information. If rerun is zero, no good diagnostic information exists, so we<br />
return no solution, only an indication of unboundedness/infeasibility.<br />
Default: 1<br />
reslim Sets the resource limit. When the solver has used more than this amount of<br />
CPU time (in seconds) the system will stop the search and report the best<br />
solution found so far. Overrides the GAMS RESLIM option.<br />
Default: 1000.0<br />

The following general options set XPRESS library control variables, and can be used to fine-tune XPRESS.<br />

Option Description Default<br />
crash A crash procedure is used to quickly find a good basis. This option is only<br />
relevant when no advanced basis is available.<br />
0: no crash<br />
1: singletons only (one pass)<br />
2: singletons only (multi-pass)<br />
3: multiple passes through the matrix considering slacks<br />
4: multiple passes<br />
n > 10: as 4 but perform n-10 passes<br />
100: default crash behavior of XPRESS-MP version 6<br />
Default: 0 when GAMS provides an advanced basis, and 2 otherwise<br />
extrapresolve The initial number of extra elements to allow for in the presolve. The space<br />
required to store extra presolve elements is allocated dynamically, so it is not<br />
necessary to set this control. In some cases, the presolve may terminate early<br />
if this is not increased.<br />
Default: automatic<br />



Option Description Default<br />
lpiterlimit Sets the iteration limit for simplex algorithms. For MIP models, this is a<br />
per-node iteration limit for the B&B tree. Overrides the iterlim option.<br />
Default: MAXINT<br />
mpsnamelength Maximum length of MPS names in characters. Internally it is rounded up to<br />
the smallest multiple of 8. MPS names are right padded with blanks. Maximum<br />
value is 64.<br />
Default: 0<br />
presolve -1: Presolve applied, but a problem will not be declared infeasible if primal<br />
infeasibilities are detected. The problem will be solved by the LP optimization<br />
algorithm, returning an infeasible solution, which can sometimes be helpful.<br />
0: Presolve not applied.<br />
1: Presolve applied.<br />
2: Presolve applied, but redundant bounds are not removed. This can sometimes<br />
increase the efficiency of the barrier algorithm.<br />
As XPRESS does a basis preserving presolve, you don't have to turn off the<br />
presolver when using an advanced basis.<br />
Default: 1<br />
scaling Bitmap to determine how internal scaling is done. If set to 0, no scaling will<br />
take place. The default of 35 implies row and column scaling done by the<br />
maximum element method.<br />
Bit 0 = 1: Row scaling.<br />
Bit 1 = 2: Column scaling.<br />
Bit 2 = 4: Row scaling again.<br />
Bit 3 = 8: Maximin.<br />
Bit 4 = 16: Curtis-Reid.<br />
Bit 5 = 32: Off implies scale by geometric mean, on implies scale by maximum<br />
element. Not applicable for maximin and Curtis-Reid scaling.<br />
Default: 35<br />
trace Control of the infeasibility diagnosis during presolve - if nonzero, the infeasibility<br />
will be explained. This output appears on the log file, and is best<br />
understood if you also set mpsnamelength to a positive value.<br />
Default: 0<br />
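Since scaling is a bitmap, a value is formed by adding the bits: for example 1 + 2 + 16 = 19 requests row and column scaling with Curtis-Reid. A hypothetical xpress.opt exercising these controls could read as follows (values chosen only to illustrate the bit arithmetic):<br />

```
scaling 19
presolve 1
trace 1
mpsnamelength 16
```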

3.2 LP <strong>Options</strong><br />

The following options set XPRESS library control variables, and can be used to fine-tune the XPRESS LP solver.<br />

Option Description Default<br />
bigmmethod 0: phase I / phase II<br />
1: 'big M' method to be used<br />
Default: automatic<br />
bigm The infeasibility penalty used in the "Big M" method.<br />
Default: automatic<br />
defaultalg 1: automatic<br />
2: dual simplex<br />
3: primal simplex<br />
4: Newton barrier<br />
Default: 1<br />
etatol Zero tolerance on eta elements. During each iteration, the basis inverse is<br />
premultiplied by an elementary matrix, which is the identity except for one<br />
column: the eta vector. Elements of eta vectors whose absolute value is smaller<br />
than etatol are taken to be zero in this step.<br />
Default: 1.0e-12<br />
feastol This is the zero tolerance on right hand side values, bounds and range values.<br />
If one of these is less than or equal to feastol in absolute value, it is treated<br />
as zero.<br />
Default: 1.0e-6<br />
invertfreq The frequency with which the basis will be inverted. A value of -1 implies<br />
automatic.<br />
Default: -1<br />
invertmin The minimum number of iterations between full inversions of the basis matrix.<br />
Default: 3<br />



Option Description Default<br />
lplog The frequency at which the simplex iteration log is printed.<br />
n < 0: detailed output every -n iterations<br />
n = 0: log displayed at the end of the solution process<br />
n > 0: summary output every n iterations<br />
Default: 100<br />
matrixtol The zero tolerance on matrix elements. If the value of a matrix element is less<br />
than or equal to matrixtol in absolute value, it is treated as zero.<br />
Default: 1.0e-9<br />
optimalitytol This is the zero tolerance for reduced costs. On each iteration, the simplex<br />
method searches for a variable to enter the basis which has a negative reduced<br />
cost. The candidates are only those variables which have reduced costs less<br />
than the negative value of optimalitytol.<br />
Default: 1.0e-6<br />
penalty Minimum absolute penalty variable coefficient used in the "Big M" method.<br />
Default: automatic<br />
pivottol The zero tolerance for matrix elements. On each iteration, the simplex method<br />
seeks a nonzero matrix element to pivot on. Any element with absolute value<br />
less than pivottol is treated as zero for this purpose.<br />
Default: 1.0e-9<br />
pricingalg This determines the pricing method to use on each iteration, selecting which<br />
variable enters the basis. In general Devex pricing requires more time on each<br />
iteration, but may reduce the total number of iterations, whereas partial pricing<br />
saves time on each iteration, although possibly results in more iterations.<br />
-1: partial pricing is to be used<br />
0: the pricing is to be decided automatically<br />
1: DEVEX pricing is to be used<br />
Default: 0<br />
relpivottol At each iteration a pivot element is chosen within a given column of the matrix.<br />
The relative pivot tolerance, relpivottol, is the size of the element chosen<br />
relative to the largest possible pivot element in the same column.<br />
Default: 1.0e-6<br />
3.3 MIP <strong>Options</strong><br />

In some cases, the branch-and-bound MIP algorithm will stop with a proven optimal solution or when unboundedness<br />

or (integer) infeasibility is detected. In most cases, however, the global search is stopped through one of<br />

the generic GAMS options:<br />

1. iterlim (on the cumulative pivot count), reslim (in seconds of CPU time),<br />

2. optca & optcr (stopping criteria based on gap between best integer solution found and best possible) or<br />

3. nodlim (on the total number of nodes allowed in the B&B tree).<br />

It is also possible to set the maxnode and maxmipsol options to stop the global search: see the table of XPRESS<br />

control variables for MIP below.<br />
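The generic stopping criteria listed above can be combined as in the following sketch; the model name mymodel is hypothetical and the limits are arbitrary examples:<br />

```gams
option reslim = 600;
* stop once the relative gap falls below 1%
option optcr = 0.01;
mymodel.nodlim = 50000;
solve mymodel using mip minimizing cost;
```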

The following options control the behavior of the GAMS/XPRESS link on MIP models.<br />

Option Description Default<br />
mipcleanup If nonzero, clean up the integer solution obtained, i.e. round and fix the<br />
discrete variables and re-solve as an LP to get some marginal values for the<br />
discrete vars.<br />
Default: 1<br />
miptrace A miptrace file with the specified name will be created. This file records<br />
the best integer and best bound values every miptracenode nodes and at<br />
miptracetime-second intervals.<br />
Default: none<br />
miptracenode Specifies the node interval between entries to the miptrace file.<br />
Default: 100<br />
miptracetime Specifies the time interval, in seconds, between entries to the miptrace file.<br />
Default: 1<br />
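A sketch of a MIP options file using these link options; the trace file name run1.mtr and the interval values are invented for illustration:<br />

```
miptrace run1.mtr
miptracenode 50
miptracetime 5
mipcleanup 1
```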



The following options set XPRESS library control variables, and can be used to fine-tune the XPRESS MIP<br />

solver.<br />

Option Description Default<br />
backtrack This determines how the next node in the tree search is selected for processing.<br />
1: If miptarget is not set, choose the node with the best estimate. If<br />
miptarget is set (by the user or by the global search previously finding<br />
an integer solution), the choice is based on the Forrest-Hirst-Tomlin criterion,<br />
which takes into account the best current integer solution and seeks<br />
a new node which represents a large potential improvement.<br />
2: Always choose the node with the best estimated solution.<br />
3: Always choose the node with the best bound on the solution.<br />
Default: 3<br />
breadthfirst If nodeselection = 4, this determines the number of nodes to include in a<br />
breadth-first search.<br />
Default: 10<br />
covercuts The number of rounds of lifted cover inequalities at the top node. A lifted<br />
cover inequality is an additional constraint that can be particularly effective<br />
at reducing the size of the feasible region without removing potential integral<br />
solutions. The process of generating these can be carried out a number of<br />
times, further reducing the feasible region, albeit incurring a time penalty.<br />
There is usually a good payoff from generating these at the top node, since<br />
these inequalities then apply to every subsequent node in the tree search.<br />
Default: 20<br />
cpmaxcuts The initial maximum number of cuts that will be stored in the cut pool. During<br />
optimization, the cut pool is subsequently resized automatically.<br />
Default: 100<br />
cpmaxelems The initial maximum number of nonzero coefficients which will be held in the<br />
cut pool. During optimization, the cut pool is subsequently resized automatically.<br />
Default: 200<br />
cutdepth Sets the maximum depth in the tree search at which cuts will be generated.<br />
Generating cuts can take a lot of time, and is often less important at deeper<br />
levels of the tree since tighter bounds on the variables have already reduced<br />
the feasible region. A value of 0 signifies that no cuts will be generated.<br />
Default: 0<br />
cutfreq This specifies the frequency at which cuts are generated in the tree search. If<br />
the depth of the node modulo cutfreq is zero, then cuts will be generated.<br />
Default: 8<br />
cutstrategy This specifies the cut strategy. An aggressive cut strategy, generating a greater<br />
number of cuts, will result in fewer nodes to be explored, but with an associated<br />
time cost in generating the cuts. The fewer cuts generated, the less time taken,<br />
but the greater subsequent number of nodes to be explored.<br />
-1: Automatic selection of either the conservative or aggressive cut strategy.<br />
0: No cuts.<br />
1: Conservative cut strategy.<br />
2: Aggressive cut strategy.<br />
Default: -1<br />
degradefactor Factor to multiply estimated degradations associated with an unexplored node<br />
in the tree. The estimated degradation is the amount by which the objective<br />
function is expected to worsen in an integer solution that may be obtained<br />
through exploring a given node.<br />
Default: 1.0<br />
gomcuts The number of rounds of Gomory cuts at the top node. These can always be<br />
generated if the current node does not yield an integral solution. However,<br />
Gomory cuts are not usually as effective as lifted cover inequalities in reducing<br />
the size of the feasible region.<br />
Default: 2<br />
maxmipsol This specifies a limit on the number of integer solutions to be found (the total<br />
number, not necessarily the number of solutions with distinct objectives). 0<br />
means no limit.<br />
Default: 0<br />
maxnode The maximum number of nodes that will be explored. If the GAMS nodlim<br />
model suffix is set, that setting takes precedence.<br />
Default: 100,000,000<br />



Option Description Default<br />
mipabscutoff If the user knows that they are interested only in values of the objective function<br />
which are better than some value, this can be assigned to mipabscutoff.<br />
This allows the Optimizer to ignore solving any nodes which may yield worse<br />
objective values, saving solution time.<br />
Default: automatic<br />
mipaddcutoff The amount to add to the objective function of the best integer solution found<br />
to give the new cutoff. Once an integer solution has been found whose objective<br />
function is equal to or better than mipabscutoff, improvements on<br />
this value may not be interesting unless they are better by at least a certain<br />
amount. If mipaddcutoff is nonzero, it will be added to mipabscutoff each<br />
time an integer solution is found which is better than this new value. This<br />
cuts off sections of the tree whose solutions would not represent substantial<br />
improvements in the objective function, saving processor time. Note that this<br />
should usually be set to a negative number for minimization problems, and<br />
positive for maximization problems. Notice further that the maximum of the<br />
absolute and relative cut is actually used.<br />
Default: 0.0<br />
miplog Global print control.<br />
0: No printout in global.<br />
1: Only print out summary statement at the end.<br />
2: Print out detailed log at all solutions found.<br />
3: Print out detailed eight-column log at each node.<br />
n < 0: Print out summary six-column log at each -n nodes, or when a new<br />
solution is found.<br />
Default: -100<br />
mippresolve Bitmap determining type of integer processing to be performed. If set to 0, no<br />
processing will be performed.<br />
Bit 0: Reduced cost fixing will be performed at each node. This can simplify<br />
the node before it is solved, by deducing that certain variables' values<br />
can be fixed based on additional bounds imposed on other variables<br />
at this node.<br />
Bit 1: Logical preprocessing will be performed at each node. This is performed<br />
on binary variables, often resulting in fixing their values based<br />
on the constraints. This greatly simplifies the problem and may even<br />
determine optimality or infeasibility of the node before the simplex<br />
method commences.<br />
Bit 2: Probing of binary variables is performed at the top node. This sets<br />
certain binary variables and then deduces effects on other binary<br />
variables occurring in the same constraints.<br />
Default: automatic<br />
miprelcutoff Percentage of the LP solution value to be added to the value of the objective<br />
function when an integer solution is found, to give the new value of<br />
mipabscutoff. The effect is to cut off the search in parts of the tree whose<br />
best possible objective function would not be substantially better than the<br />
current solution.<br />
Default: 0.0<br />
miptarget The target objective value for integer solutions. This is automatically set after<br />
solving the LP unless set by the user.<br />
Default: 1e40<br />
miptol This is the tolerance within which a decision variable's value is considered to<br />
be integral.<br />
Default: 5.0e-6<br />
nodeselection This determines which nodes will be considered for solution once the current<br />
node has been solved.<br />
1: Local first: Choose among the two descendant nodes; if none, choose<br />
among all active nodes.<br />
2: Best first: All nodes are always considered.<br />
3: Depth first: Choose deepest node, but explore two descendants first.<br />
4: Best first, then local first: All nodes are considered for the first<br />
breadthfirst nodes, after which the usual default behavior is resumed.<br />
Default: automatic<br />



Option Description Default<br />
pseudocost The default pseudo cost used in estimation of the degradation associated with<br />
an unexplored node in the tree search. A pseudo cost is associated with each<br />
integer decision variable and is an estimate of the amount by which the objective<br />
function will be worse if that variable is forced to an integral value.<br />
Default: 0.01<br />
treecovercuts The number of rounds of lifted cover inequalities generated at nodes other than<br />
the top node in the tree. Compare with the description for covercuts.<br />
Default: 2<br />
treegomcuts The number of rounds of Gomory cuts generated at nodes other than the first<br />
node in the tree. Compare with the description for gomcuts.<br />
Default: 0<br />
varselection This determines how to combine the pseudo costs associated with the integer<br />
variables to obtain an overall estimated degradation in the objective function<br />
that may be expected in any integer solution from exploring a given node. It<br />
is relevant only if backtrack has been set to 1.<br />
1: Sum the minimum of the 'up' and 'down' pseudo costs.<br />
2: Sum all of the 'up' and 'down' pseudo costs.<br />
3: Sum the maximum, plus twice the minimum of the 'up' and 'down' pseudo<br />
costs.<br />
4: Sum the maximum of the 'up' and 'down' pseudo costs.<br />
5: Sum the 'down' pseudo costs.<br />
6: Sum the 'up' pseudo costs.<br />
Default: 1<br />

3.4 Newton-Barrier <strong>Options</strong><br />

The barrier method is invoked by using one of the options<br />

algorithm barrier<br />

defaultalg 4<br />

The barrier method is likely to use more memory than the simplex method. No warm start is done, so if an<br />

advanced basis exists, you may not wish to use the barrier solver.<br />
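An illustrative xpress.opt invoking the barrier method with the controls below; the iteration limit is an arbitrary example value:<br />

```
algorithm barrier
bariterlimit 150
crossover 1
```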

The following options set XPRESS library control variables, and can be used to fine-tune the XPRESS barrier<br />

solver.<br />

Option Description Default<br />
bariterlimit Maximum number of barrier iterations.<br />
Default: 200<br />
crossover Determines whether the barrier method will cross over to the simplex method<br />
when an optimal solution has been found, in order to provide an end basis.<br />
0: No crossover.<br />
1: Crossover to a basic solution.<br />
Default: 1<br />

4 Helpful Hints<br />

The comments below should help both novice and experienced GAMS users to better understand and make use<br />

of GAMS/XPRESS.<br />

• Infeasible and unbounded models The fact that a model is infeasible/unbounded can be detected at two<br />

stages: during the presolve and during the simplex or barrier algorithm. In the first case we cannot recover<br />

a solution, nor is any information regarding the infeasible/unbounded constraint or variable provided (at<br />

least in a way that can be returned to GAMS). In such a situation, the GAMS link will automatically rerun<br />

the model using primal simplex with presolve turned off (this can be avoided by setting the rerun option<br />

to 0). It is possible (but very unlikely) that the simplex method will solve a model to optimality while the<br />

presolve claims the model is infeasible/unbounded (due to feasibility tolerances in the simplex and barrier<br />

algorithms).



• The barrier method does not make use of iterlim. Use bariterlim in an options file instead. The number<br />

of barrier iterations is echoed to the log and listing file. If the barrier iteration limit is reached during the<br />

barrier algorithm, XPRESS continues with a simplex algorithm, which will obey the iterlim setting.<br />

• Semi-integer variables are not implemented in the link, nor are they supported by XPRESS; if present, they<br />

trigger an error message.
