

SQOPT uses a reduced-Hessian active-set method implemented as a reduced-gradient method similar to that in MINOS [14].

At each minor iteration, the constraints $Ax - s = b$ are partitioned into the form
$$B x_B + S x_S + N x_N = b,$$
where the basis matrix $B$ is square and nonsingular, and the matrices $S$, $N$ are the remaining columns of $(A \;\; -I)$. The vectors $x_B$, $x_S$, $x_N$ are the associated basic, superbasic, and nonbasic components of $(x, s)$.
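As a concrete illustration, the sketch below partitions the columns of $(A \;\; -I)$ into $B$, $S$, $N$ for a small dense example. The index sets `basic`, `super_`, and `nonbasic` are chosen by hand purely for illustration and do not reflect how SQOPT actually selects a basis.

```python
# Minimal sketch of the B/S/N column partition (illustrative data only).
import numpy as np

m, n = 3, 5
A = np.arange(1.0, 1.0 + m * n).reshape(m, n)
W = np.hstack([A, -np.eye(m)])        # columns of (A  -I), one per (x, s) component

basic    = [0, 2, 5]                  # m columns forming a nonsingular B
super_   = [1, 3]                     # superbasic columns (S)
nonbasic = [4, 6, 7]                  # remaining columns (N), held at a bound

B, S, N = W[:, basic], W[:, super_], W[:, nonbasic]
assert np.linalg.matrix_rank(B) == m  # B must be square and nonsingular
```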

The term active-set method arises because the nonbasic variables $x_N$ are temporarily frozen at their upper or lower bounds, and their bounds are considered to be active. Since the general constraints are also satisfied, the set of active constraints takes the form

$$\begin{pmatrix} B & S & N \\ & & I \end{pmatrix} \begin{pmatrix} x_B \\ x_S \\ x_N \end{pmatrix} = \begin{pmatrix} b \\ x_N \end{pmatrix},$$

where $x_N$ represents the current values of the nonbasic variables. (In practice, nonbasic variables are sometimes frozen at values strictly between their bounds.) The reduced-gradient method chooses to move the superbasic variables in a direction that will improve the objective function. The basic variables "tag along" to keep $Ax - s = b$ satisfied, and the nonbasic variables remain unaltered until one of them is chosen to become superbasic.

At a nonoptimal feasible point $(x, s)$ we seek a search direction $p$ such that $(x, s) + p$ remains on the set of active constraints yet improves the QP objective. If the new point is to be feasible, we must have $B p_B + S p_S + N p_N = 0$ and $p_N = 0$. Once $p_S$ is specified, $p_B$ is uniquely determined from the system $B p_B = -S p_S$. It follows that the superbasic variables may be regarded as independent variables that are free to move in any desired direction. The number of superbasic variables ($n_S$ say) therefore indicates the number of degrees of freedom remaining after the constraints have been satisfied. In broad terms, $n_S$ is a measure of how nonlinear the problem is. In particular, $n_S$ need not be more than one for linear problems.
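A minimal sketch of this null-space step, using small random stand-ins for $B$ and $S$ (not data from any actual model): given a move $p_S$ of the superbasics, the basic components follow from a single solve with $B$, and the nonbasic components stay at zero.

```python
# Recover the full feasible direction from a chosen superbasic step p_S.
import numpy as np

rng = np.random.default_rng(0)
m, n_S = 4, 2
B = rng.standard_normal((m, m)) + 4 * np.eye(m)  # nonsingular basis (illustrative)
S = rng.standard_normal((m, n_S))                # superbasic columns (illustrative)

p_S = np.array([1.0, -2.0])                      # any move of the superbasics
p_B = np.linalg.solve(B, -S @ p_S)               # basics "tag along": B p_B = -S p_S
p_N = 0.0                                        # nonbasics stay on their bounds

# The combined step satisfies B p_B + S p_S + N p_N = 0,
# so (x, s) + p remains on the active constraints.
assert np.allclose(B @ p_B + S @ p_S, 0.0)
```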

2.5 The reduced Hessian and reduced gradient

The dependence of $p$ on $p_S$ may be expressed compactly as $p = Z p_S$, where $Z$ is a matrix that spans the null space of the active constraints:

$$Z = P \begin{pmatrix} -B^{-1} S \\ I \\ 0 \end{pmatrix} \qquad (32.3)$$

where $P$ permutes the columns of $(A \;\; -I)$ into the order $(B \;\; S \;\; N)$. Minimizing $q(x, x_k)$ with respect to $p_S$ now involves a quadratic function of $p_S$:
$$g^T Z p_S + \tfrac{1}{2} p_S^T Z^T H Z p_S, \qquad (32.4)$$

where $g$ and $H$ are expanded forms of $g_k$ and $H_k$ defined for all variables $(x, s)$. This is a quadratic with Hessian $Z^T H Z$ (the reduced Hessian) and constant vector $Z^T g$ (the reduced gradient). If the reduced Hessian is nonsingular, $p_S$ is computed from the system
$$Z^T H Z \, p_S = -Z^T g. \qquad (32.5)$$
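The sketch below forms $Z$ explicitly for a tiny dense example and solves (32.5) directly. It assumes the variables are already ordered $(B \; S \; N)$, so the permutation $P$ is the identity, and it uses a random positive-definite $H$ as a stand-in for the expanded Hessian.

```python
# Dense illustration of (32.3)-(32.5) with illustrative random data.
import numpy as np

rng = np.random.default_rng(1)
m, n_S, n_N = 4, 2, 3
n_tot = m + n_S + n_N                      # total number of (x, s) components

B = rng.standard_normal((m, m)) + 4 * np.eye(m)
S = rng.standard_normal((m, n_S))
M = rng.standard_normal((n_tot, n_tot))
H = M @ M.T + np.eye(n_tot)                # symmetric positive-definite Hessian
g = rng.standard_normal(n_tot)

# Z spans the null space of the active constraints (variables ordered B, S, N).
Z = np.vstack([-np.linalg.solve(B, S), np.eye(n_S), np.zeros((n_N, n_S))])

ZHZ = Z.T @ H @ Z                          # reduced Hessian (n_S x n_S)
Zg  = Z.T @ g                              # reduced gradient
p_S = np.linalg.solve(ZHZ, -Zg)            # equation (32.5)
```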

The matrix $Z$ is used only as an operator, i.e., it is not stored explicitly. Products of the form $Zv$ or $Z^T g$ are obtained by solving with $B$ or $B^T$. The package LUSOL [13] is used to maintain sparse LU factors of $B$ as the $BSN$ partition changes. From the definition of $Z$, we see that the reduced gradient can be computed from
$$B^T \pi = g_B, \qquad Z^T g = g_S - S^T \pi,$$
where $\pi$ is an estimate of the dual variables associated with the $m$ equality constraints $Ax - s = b$, and $g_B$ is the basic part of $g$.
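As a sketch of this operator view, the snippet below obtains the reduced gradient from one solve with $B^T$, using SciPy's sparse LU factorization in place of LUSOL; $B$, $S$, $g_B$, and $g_S$ are random stand-ins, not quantities from an actual SQOPT iteration.

```python
# Reduced gradient via B^T pi = g_B and Z^T g = g_S - S^T pi, without forming Z.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

rng = np.random.default_rng(2)
m, n_S = 5, 3
B = sp.csc_matrix(rng.standard_normal((m, m)) + 4 * np.eye(m))
S = rng.standard_normal((m, n_S))
g_B = rng.standard_normal(m)               # basic part of g
g_S = rng.standard_normal(n_S)             # superbasic part of g

lu = splu(B)                               # sparse LU factors of the basis
pi = lu.solve(g_B, trans='T')              # B^T pi = g_B  (dual estimates)
reduced_grad = g_S - S.T @ pi              # Z^T g = g_S - S^T pi
```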
