Evolution and Optimum Seeking


3.2.3.1 DFP: Davidon-Fletcher-Powell Method
(Quasi-Newton Strategy, Variable Metric Strategy)

Much greater interest has been shown in a group of second order gradient methods that attempt to approximate the Hessian matrix and its inverse during the iterations, using only first order data. This now extensive class of quasi-Newton strategies has grown out of the work of Davidon (1959). Fletcher and Powell (1963) improved it and translated it into a practical procedure. The Davidon-Fletcher-Powell or DFP method and some variants of it are also known as variable metric strategies. They are sometimes also regarded as conjugate gradient methods, because in the quadratic case they generate conjugate directions. For higher order objective functions this is no longer so. Whereas the variable metric concept is to approximate Newton directions, this is not the case for conjugate gradient methods. The basic recursion formula for the DFP method is

$$x^{(k+1)} = x^{(k)} + s^{(k)}\, v^{(k)}$$

with

$$v^{(k)} = -H^{(k)T}\, \nabla F(x^{(k)})$$

and

$$H^{(0)} = I, \qquad H^{(k+1)} = H^{(k)} + A^{(k)} \qquad (3.30)$$

The correction $A^{(k)}$ to the approximation for the inverse Hessian matrix, $H^{(k)}$, is derived from information collected during the last iteration, thus from the change in the variable vector

$$y^{(k)} = x^{(k+1)} - x^{(k)} = s^{(k)}\, v^{(k)}$$

and the change in the gradient vector

$$z^{(k)} = \nabla F(x^{(k+1)}) - \nabla F(x^{(k)})$$

It is given by

$$A^{(k)} = \frac{y^{(k)}\, y^{(k)T}}{y^{(k)T}\, z^{(k)}} - \frac{H^{(k)} z^{(k)} \left(H^{(k)} z^{(k)}\right)^{T}}{z^{(k)T} H^{(k)} z^{(k)}} \qquad (3.31)$$
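To make the update concrete, the following is a minimal Python sketch of one DFP iteration built directly from Equations (3.30) and (3.31). The names `dfp_step`, `grad`, and `line_search` are illustrative assumptions, not part of the original text; the caller must supply the gradient and a one-dimensional minimizer.

```python
import numpy as np

def dfp_step(x, H, grad, line_search):
    """One DFP iteration: direction, step length, rank-two update of H."""
    g = grad(x)
    v = -H @ g                      # search direction v = -H grad F(x), Eq. (3.30)
    s = line_search(x, v)           # step length from a 1-D minimization along v
    x_new = x + s * v
    y = x_new - x                   # change in the variable vector, y = s v
    z = grad(x_new) - g             # change in the gradient vector
    Hz = H @ z
    # Correction A of Eq. (3.31): y y^T / (y^T z) - (H z)(H z)^T / (z^T H z)
    H_new = H + np.outer(y, y) / (y @ z) - np.outer(Hz, Hz) / (z @ Hz)
    return x_new, H_new
```

Starting from $H^{(0)} = I$ and using exact line searches, such steps minimize an $n$-dimensional quadratic in at most $n$ iterations, which is the conjugate-directions property noted above.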

The step length $s^{(k)}$ is obtained by a line search along $v^{(k)}$ (Equation (3.30)). Since the first partial derivatives are needed in any case, they can be made use of in the one-dimensional minimization. Fletcher and Powell do so in the context of a cubic Hermitian interpolation (see Sect. 3.1.2.3.4). A corresponding ALGOL program has been published by Wells (1965) (for corrections see Fletcher, 1966; Hamilton and Boothroyd, 1969; House, 1971). The first derivatives must be specified as functions, which is usually inconvenient and often impossible.
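As an illustration of how the derivatives enter the one-dimensional search, here is a sketch of the cubic (Hermite) interpolation step: a cubic is fitted to the function and derivative values at two trial step lengths and its minimizer is returned. The formula is the standard one for this construction; the function name and argument layout are assumptions, and this is not claimed to reproduce Fletcher and Powell's exact procedure.

```python
import math

def cubic_minimizer(a, fa, ga, b, fb, gb):
    """Minimizer of the cubic interpolating f(a)=fa, f'(a)=ga, f(b)=fb, f'(b)=gb.
    Assumes a < b and that the interval brackets a minimum, so the
    discriminant d1*d1 - ga*gb is nonnegative."""
    d1 = ga + gb - 3.0 * (fa - fb) / (a - b)
    d2 = math.sqrt(d1 * d1 - ga * gb)
    return b - (b - a) * (gb + d2 - d1) / (gb - ga + 2.0 * d2)
```

Because the gradient of $F$ is available anyway, the directional derivatives `ga` and `gb` cost only the inner products $\nabla F(x + t\, v)^{T} v$ at the two trial points.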

The convergence properties of the DFP method have been thoroughly investigated, e.g., by Broyden (1970b,c), Adachi (1971), Polak (1971), and Powell (1971, 1972a,b,c). Numerous suggestions have thereby been made for improvements. Convergence is achieved if $F(x)$ is convex. Under stricter conditions it can be proved that the convergence rate is greater than linear and the sequence of iterations
