multivariate production systems optimization - Stanford University


The second step of the Greenstadt (1967) modification, replacing small eigenvalues with infinity, is based on the objective function being insensitive to any parameter associated with a small eigenvalue. Since the objective function is insensitive to such a parameter, the parameter should not be allowed to influence the step direction. This is precisely what is achieved by replacing the eigenvalue with infinity. Replacing an eigenvalue with infinity is accomplished by replacing the reciprocal of the eigenvalue with zero when inverting the Hessian.
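A minimal sketch of this inversion in Python with NumPy may make the mechanics concrete. The function name, the tolerance, and the use of absolute values for negative eigenvalues are illustrative assumptions, not notation from the thesis:

```python
import numpy as np

def greenstadt_step(hessian, gradient, tol=1e-8):
    """Newton step with a Greenstadt-style eigenvalue modification (sketch).

    Negative eigenvalues are replaced by their absolute values (an assumed
    first step), and eigenvalues smaller than `tol` are treated as infinite:
    their reciprocals are set to zero when inverting the Hessian, so that
    insensitive parameters do not influence the step direction.
    """
    eigvals, eigvecs = np.linalg.eigh(hessian)      # Hessian is symmetric
    abs_vals = np.abs(eigvals)
    # Reciprocal of an "infinite" eigenvalue is zero
    inv_vals = np.where(abs_vals > tol, 1.0 / abs_vals, 0.0)
    h_inv = eigvecs @ np.diag(inv_vals) @ eigvecs.T  # modified inverse Hessian
    return -h_inv @ gradient
```

With a near-singular Hessian such as `np.diag([4.0, 1e-12])`, the step component along the insensitive parameter comes out exactly zero, while the well-conditioned direction receives an ordinary Newton step.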

Conceptually, the Greenstadt (1967) modification applied to Newton’s Method should result in a robust algorithm that is rapidly convergent and not affected by insensitive parameters; the caveat is that it is computationally very expensive. For information on the performance of the Newton-Greenstadt algorithm applied to petroleum engineering problems, see Barua, Horne, Greenstadt, and Lopez (1989).

6.3 Quasi-Newton Methods

Newton’s Method is quadratically convergent when started from a good initial estimate. However, it is very expensive, since the Hessian must be built and solved at every iteration, particularly when the Hessian is built with finite-difference approximations. The idea behind Quasi-Newton methods is to compromise on the speed of convergence in exchange for saving the expense associated with building and solving the Hessian. Instead of building and solving an exact Hessian every iteration, Quasi-Newton methods update an approximation of the Hessian.
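The shape of such an iteration can be sketched in Python with NumPy. The function name is hypothetical, and the symmetric rank-one (SR1) update of the inverse approximation is one possible choice among several (BFGS is the more common one), not a method prescribed by this chapter:

```python
import numpy as np

def quasi_newton_minimize(grad, x0, tol=1e-8, max_iter=100):
    """Sketch of a quasi-Newton iteration (hypothetical helper).

    Instead of rebuilding and solving the Hessian each iteration, an
    inverse-Hessian approximation H is updated from gradient differences,
    here with a symmetric rank-one (SR1) update.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    H = np.eye(len(x))                 # initial inverse-Hessian approximation
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - H @ g              # quasi-Newton step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g    # displacement and gradient change
        r = s - H @ y
        denom = r @ y
        if abs(denom) > 1e-12:         # skip update when it is ill-defined
            H = H + np.outer(r, r) / denom
        x, g = x_new, g_new
    return x
```

On a convex quadratic the secant information accumulates until the approximation reproduces the true inverse Hessian, after which the iteration lands on the minimizer.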

Quasi-Newton theory is based on multidimensional generalizations of the secant method. The object is to build up secant information as the iterations proceed. Suppose that at the kth iteration the Newton step {xk+1 - xk} causes a change in the gradient of {gk+1 - gk}. Then the next Hessian approximation will satisfy the secant passing through these two iterates if

    xk+1 - xk = Hk+1 (gk+1 - gk)                                (6.28)

This condition is termed the Quasi-Newton condition and is the design criterion for Quasi-Newton methods. Notice the similarity between this condition and the Newton condition, Equation 6.12. This condition forces the Hessian approximation to exactly match the gradient of the function in the displacement direction, {xk+1 - xk}. For multiple
