

From a quadratic interpolation through the three argument values $a$, $b$, and $c$, the second partial derivative of the objective function along the direction $v_i$ is estimated as

$$\beta_i = \frac{\partial^2}{\partial s^2}\,F(x + s\,v_i) = -2\,\frac{(b - c)\,F_a + (c - a)\,F_b + (a - b)\,F_c}{(b - c)(c - a)(a - b)}$$

where $F_a = F(x + a\,v_i)$, $F_b = F(x + b\,v_i)$, and $F_c = F(x + c\,v_i)$.
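As a minimal sketch of this three-point curvature estimate (the helper name and signature are mine, not Powell's):

```python
import numpy as np

def curvature_estimate(F, x, v, a, b, c):
    """Estimate beta = d^2 F(x + s v) / ds^2 from the three argument
    values a, b, c by quadratic (Lagrange) interpolation."""
    Fa, Fb, Fc = F(x + a * v), F(x + b * v), F(x + c * v)
    return -2.0 * ((b - c) * Fa + (c - a) * Fb + (a - b) * Fc) / (
        (b - c) * (c - a) * (a - b)
    )

# For F(x) = 3*x1^2 + x2^2 the curvature along e1 is exactly 6.
F = lambda x: 3 * x[0] ** 2 + x[1] ** 2
beta = curvature_estimate(F, np.zeros(2), np.array([1.0, 0.0]), 0.1, 0.2, 0.3)
```

The estimate is exact (up to rounding) whenever $F$ is quadratic along the search direction.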

Powell uses this quantity $\beta_i$ for all subsequent interpolations in the direction $v_i$ as a scale for the second partial derivative of the objective function. He scales the directions $v_i$, which in his case are not normalized, by $1/\sqrt{\beta_i}$. This allows the possibility of subsequently carrying out a simplified interpolation with only two argument values, $x$ and $x + s_i\,v_i$. It is a worthwhile procedure, since each direction is used several times. The predicted minimum, assuming that the second partial derivatives have value unity, is then

$$x' = x + \left(\frac{s_i}{2} - \frac{1}{s_i}\left[F(x + s_i\,v_i) - F(x)\right]\right) v_i$$
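A sketch of this simplified two-point step (the naming is mine; it presumes $v$ has already been scaled by $1/\sqrt{\beta_i}$ so that the curvature along it is one):

```python
def simplified_step(F, x, v, s):
    """Predicted minimizer from values at x and x + s*v only, assuming
    unit second derivative along v:
    x' = x + (s/2 - [F(x + s*v) - F(x)] / s) * v."""
    return x + (0.5 * s - (F(x + s * v) - F(x)) / s) * v
```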

For the trial step lengths $s_i$, Powell uses the empirical recursion formula

$$s_i^{(k)} = 0.4\,\sqrt{F(x^{(k-1)}) - F(x^{(k)})}$$

Because of the scaling, all the step lengths actually become the same. A more detailed justification can be found in Hoffmann and Hofmann (1970).
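In code the recursion is a one-liner; this sketch assumes a descent method, so the difference under the root (the improvement gained in the last iteration) is non-negative:

```python
import math

def trial_step_length(F_prev, F_curr):
    """Powell's empirical trial step length, computed from the
    objective values of the two preceding iterates."""
    return 0.4 * math.sqrt(F_prev - F_curr)  # F_prev >= F_curr while descending
```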

Unlike most other optimization procedures, Powell's strategy is available as a precise algorithm in a tested code (Powell, 1970f). As Fletcher (1965) reports, this method of conjugate directions is superior for the case of a few variables both to the DSC method and to a strategy of Smith, especially in the neighborhood of minima. For many variables, however, the strict criterion for adopting a new direction more frequently causes the old set of directions to be retained, and the procedure then converges slowly. A problem which had a singular Hessian matrix at the minimum made the DSC strategy look better. In a later article, Fletcher (1972a) defines a limit of $n = 10$ to $20$, above which the Powell strategy should no longer be applied. This is confirmed by the test results presented in Chapter 6. Zangwill (1967) combines the basic idea of Powell with relaxation steps in order to avoid linear dependence of the search directions. Some results of Rhead (1971) lead to the conclusion that Powell's improved concept is superior to Zangwill's.

Brent (1973) also presents a derivative-free variant of the strategy, derived from Powell's basic idea, which is designed to prevent linear dependence of the search directions from arising without endangering the quadratic convergence. After every $n + 1$ iterations the set of directions is replaced by an orthogonal set of vectors. So as not to lose all the accumulated information, however, the unit vectors are not chosen: for quadratic objective functions the new directions remain conjugate to each other. This procedure requires $O(n^3)$ computational operations to determine the orthogonal eigenvectors. Since these computations are performed only every $O(n^2)$ line searches, however, the extra cost is $O(n)$ per function call and is thus of the same order as the cost of evaluating the objective function itself. Results of tests by Brent confirm the usefulness of the strategy.
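A rough sketch of the reset just described, under my reading of the scheme (an illustration of the idea, not Brent's code): if the old directions, the columns of $V$, are mutually conjugate with curvatures $d_i$ along them, the Hessian of a quadratic objective is $A = V^{-T}\,\mathrm{diag}(d)\,V^{-1}$, and orthonormal eigenvectors of $A$ can be read off from a singular value decomposition of $V\,\mathrm{diag}(d)^{-1/2}$:

```python
import numpy as np

def reset_directions(V, d):
    """Replace the direction set by orthonormal vectors that remain
    mutually conjugate for a quadratic objective.
    V: (n, n) array, columns are the old directions.
    d: (n,) array of curvatures along those directions."""
    U = V / np.sqrt(d)            # scale column i by 1/sqrt(d_i)
    P, _, _ = np.linalg.svd(U)    # O(n^3); left singular vectors
    return P                      # columns form the new direction set
```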

3.2.3 Newton Strategies<br />

Newton strategies exploit the fact that, if a function can be differentiated any number of times, its value at the point $x^{(k+1)}$ can be represented by a series of terms constructed at the point $x^{(k)}$, its Taylor series.
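For reference, here is the expansion in question truncated after the quadratic term, together with the step it implies; this is standard textbook material rather than a formula taken from this page:

$$F(x^{(k+1)}) \approx F(x^{(k)}) + \nabla F(x^{(k)})^{T}\,\Delta x + \tfrac{1}{2}\,\Delta x^{T}\,\nabla^2 F(x^{(k)})\,\Delta x, \qquad \Delta x = x^{(k+1)} - x^{(k)}$$

Setting the gradient of this quadratic model to zero yields the Newton step $\Delta x = -\left[\nabla^2 F(x^{(k)})\right]^{-1} \nabla F(x^{(k)})$.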
