
5.2.6 Global Convergence

In our discussion of deterministic optimization methods (Chap. 3) we have established that only simultaneous strategies are capable of locating with certainty the global minima of arbitrary objective functions. The computational cost of their application increases with the volume of the space under consideration, and thus exponentially with the number of variables n. The dynamic programming technique of Bellman allows the reliability of global convergence to be maintained at less cost, but only if the objective function has a rather special structure, such that only a part of the space IR^n needs to be investigated. Of the stochastic search procedures, the Monte-Carlo method has the best chance of global convergence; it offers a high probability, rather than certainty, of finding the global optimum. If one requires a 90% probability, its cost is greater than that of the equidistant grid search. However, the (1+1) evolution strategy can also be credited with a finite probability of global convergence if the step lengths (variances) of the random changes are held constant (see Rechenberg, 1973; Born, 1978; Beyer, 1989, 1990). How great the chance is of finding an absolute minimum among several local minima depends on the topology, in particular on the disposition and "width" of the minima.

If the user wishes to keep open the possibility of a jump from a local to the global extremum, considerable patience is required. The requirement of approaching an optimum as quickly and as accurately as possible is always diametrically opposed to maintaining the reliability of global convergence. In formulating the algorithms of the evolution strategies we have mainly striven to satisfy the first requirement, rapid convergence, by adaptation of the step lengths. Thus no claim of good global convergence properties can be made for either strategy.

With μ > 1 in the multimembered evolution scheme, several state vectors x_k^(g) ∈ IR^n, k = 1(1)μ, are stored in each generation g. If the x_k^(g) are very different, the probability is greater that at least one point is situated near the global optimum and that the others will approach it in the course of the generations. The likelihood of this is less if the x_k^(g) fall close together, with the associated reduction in the step lengths. It always remains finite, however, and increases with μ, the number of parents. This advantage over the (1+1) strategy is best exploited if one starts the search with initial vectors x_k^(0) roughly evenly distributed over the whole region of interest, and chooses fairly large initial values of the standard deviations σ_k^(0) ∈ IR^n, k = 1(1)μ. Here too the (μ , λ) scheme is preferable to the (μ + λ) scheme, because concentration at a locally very favorable position is at least delayed.
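A bare-bones sketch of such a start, again in Python and under the assumption of a simple log-normal step-length adaptation (the parameter values and names are illustrative, not those of the Appendix A routines), might look as follows:

```python
import numpy as np

def mu_comma_lambda_es(f, lower, upper, mu=10, lam=70, generations=200, seed=0):
    """Bare-bones (mu, lambda) ES: parents spread over the whole region of
    interest, fairly large initial step lengths, multiplicative step-length
    adaptation, and no recombination (roughly in the spirit of GRUP)."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = lower.size
    # Parents x_k^(0) roughly evenly (here: uniformly at random) distributed
    # over the region of interest, with fairly large initial sigma_k^(0).
    xs = rng.uniform(lower, upper, size=(mu, n))
    sigmas = np.ones((mu, n)) * (upper - lower) / 4.0
    tau = 1.0 / np.sqrt(2.0 * n)
    for _ in range(generations):
        parents = rng.integers(mu, size=lam)              # each offspring descends from one parent
        off_sig = sigmas[parents] * np.exp(tau * rng.standard_normal((lam, n)))
        off_x = xs[parents] + off_sig * rng.standard_normal((lam, n))
        fitness = np.apply_along_axis(f, 1, off_x)
        best = np.argsort(fitness)[:mu]                   # comma selection: all parents are discarded
        xs, sigmas = off_x[best], off_sig[best]
    k = int(np.argmin(np.apply_along_axis(f, 1, xs)))
    return xs[k], f(xs[k])
```

The comma selection in the last step discards all parents each generation, which is precisely what delays the premature concentration of the whole population at a locally favorable position.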

5.2.7 Program Details of the (μ + λ) ES Subroutines

Appendix A, Section A.2 contains FORTRAN listings of the multimembered (μ + λ) evolution strategy developed here, with the alternatives:

GRUP   without recombination

REKO   with recombination (intermediary recombination for the step lengths; sketched below)

KORR   the so far most general form, with correlated mutations as well as five different recombination types (see Chap. 7)
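As a pointer to what the second alternative does, intermediary recombination of the step lengths can be sketched as follows (a hypothetical Python fragment; the names are illustrative and do not correspond to variables of the FORTRAN listing):

```python
import numpy as np

def intermediary_step_length_recombination(sigma_a, sigma_b):
    """Offspring step lengths as the component-wise arithmetic mean of two
    parents' step lengths -- the intermediary recombination applied to the
    standard deviations in REKO."""
    return 0.5 * (np.asarray(sigma_a, float) + np.asarray(sigma_b, float))
```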
