

another point $x^{(k)}$:

$$F(x^{(k+1)}) = F(x^{(k)}) + h^T \nabla F(x^{(k)}) + \tfrac{1}{2}\, h^T \nabla^2 F(x^{(k)})\, h + \ldots \qquad (3.26)$$

where

$$h = x^{(k+1)} - x^{(k)}$$

In this Taylor series, as it is called, all the terms of higher than second order are zero if $F(x)$ is quadratic. Differentiating Equation (3.26) with respect to $h$ and setting the derivative equal to zero, one obtains a condition for the stationary points of a second order function:

$$\nabla F(x^{(k+1)}) = \nabla F(x^{(k)}) + \nabla^2 F(x^{(k)})\,(x^{(k+1)} - x^{(k)}) = 0$$

or

$$x^{(k+1)} = x^{(k)} - [\nabla^2 F(x^{(k)})]^{-1}\, \nabla F(x^{(k)}) \qquad (3.27)$$

If $F(x)$ is quadratic and $\nabla^2 F(x^{(0)})$ is positive-definite, Equation (3.27) yields the solution $x^{(1)}$ in a single step from any starting point $x^{(0)}$, without needing a line search. If Equation (3.27) is taken as the iteration rule in the general case, it represents the extension of the Newton-Raphson method to functions of several variables (Householder, 1953). It is also sometimes called a second order gradient method with the choice of direction and step length (Crockett and Chernoff, 1955)

$$v^{(k)} = -[\nabla^2 F(x^{(k)})]^{-1}\, \nabla F(x^{(k)}), \qquad s^{(k)} = 1 \qquad (3.28)$$
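The single-step property is easy to verify numerically. The following minimal sketch (the book gives no code; the matrix $A$, vector $b$, and starting point are illustrative choices) applies Equation (3.27) once to the quadratic $F(x) = \tfrac{1}{2}x^T A x - b^T x$, whose gradient is $Ax - b$ and whose Hessian is the constant matrix $A$:

```python
import numpy as np

# Sketch (illustrative, not from the book): one step of Equation (3.27)
# on the quadratic F(x) = 1/2 x^T A x - b^T x. The gradient is A x - b
# and the Hessian is the constant matrix A, so the step from any x0
# lands exactly on the minimizer x* = A^{-1} b.

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])           # Hessian: positive-definite
b = np.array([1.0, 2.0])

x0 = np.array([10.0, -7.0])          # arbitrary starting point x^(0)
g0 = A @ x0 - b                      # gradient at x^(0)
x1 = x0 - np.linalg.solve(A, g0)     # Equation (3.27), no line search

print(x1)                            # [0.09090909 0.63636364] ...
print(np.linalg.solve(A, b))         # ... the exact minimizer A^{-1} b
```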

The real length of the iteration step is hidden in the non-normalized Newton direction $v^{(k)}$. Since no explicit value of the objective function is required, but only its derivatives, the Newton-Raphson strategy is classified as an indirect or analytic optimization method. Its ability to predict the minimum of a quadratic function in a single calculation at first sight looks very attractive. This single step, however, requires a considerable effort. Apart from the necessity of evaluating $n$ first and $n(n+1)/2$ second partial derivatives, the Hessian matrix $\nabla^2 F(x^{(k)})$ must be inverted. This corresponds to the problem of solving a system of linear equations

$$\nabla^2 F(x^{(k)})\, \Delta x^{(k)} = -\nabla F(x^{(k)}) \qquad (3.29)$$

for the unknown quantities $\Delta x^{(k)}$. All the standard methods of linear algebra, e.g., Gaussian elimination (Brown and Dennis, 1968; Brown, 1969) and the matrix decomposition method of Cholesky (Wilkinson, 1965), need $O(n^3)$ computational operations for this (see Schwarz, Rutishauser, and Stiefel, 1968). For the same cost, the strategies of conjugate directions and conjugate gradients can execute $O(n)$ steps. Thus, in principle, the Newton-Raphson iteration offers no advantage in the quadratic case.
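In practice one therefore solves the system (3.29) rather than forming the inverse Hessian. A hedged sketch of the Cholesky route mentioned above (the matrices $H$ and $g$ are placeholders for $\nabla^2 F(x^{(k)})$ and $\nabla F(x^{(k)})$, not values from the text):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Sketch (illustrative): the correction Delta x^(k) of Equation (3.29)
# via a Cholesky factorization of the Hessian, valid when it is
# positive-definite, instead of an explicit matrix inverse.

H = np.array([[4.0, 1.0],
              [1.0, 3.0]])           # stand-in for the Hessian at x^(k)
g = np.array([0.5, -1.0])            # stand-in for the gradient at x^(k)

c, low = cho_factor(H)               # H = L L^T, the O(n^3) part
dx = cho_solve((c, low), -g)         # two triangular solves give Delta x^(k)

dx_via_inverse = -np.linalg.inv(H) @ g   # same result, more work, less stable
print(np.allclose(dx, dx_via_inverse))   # True
```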

If the objective function is not quadratic, then $v^{(0)}$ does not in general point towards a minimum. The iteration rule (Equation (3.27)) must be applied repeatedly.
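A short sketch of this repeated application (the Rosenbrock test function, starting point, and iteration cap are illustrative choices, not from the text); each pass solves the system (3.29) for the step:

```python
import numpy as np

# Sketch of repeatedly applying Equation (3.27) to a non-quadratic
# function, here F(x) = 100 (x2 - x1^2)^2 + (1 - x1)^2 (Rosenbrock,
# an illustrative choice), with gradient and Hessian given analytically.

def grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

def hessian(x):
    return np.array([[1200.0 * x[0]**2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
                     [-400.0 * x[0],                          200.0]])

x = np.array([-1.2, 1.0])                    # a common starting point
for k in range(25):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:            # stationary point found
        break
    x = x + np.linalg.solve(hessian(x), -g)  # one iteration of (3.27)

print(k, x)                                  # reaches the minimizer (1, 1)
```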
