
A Multimembered Evolution Strategy 145

We shall not go into further details since another kind of individual step length control will offer itself later, i.e., recombination.

At this point a further word should be said about the alternative (1+λ) or (1, λ) strategies. Let us assume that by virtue of a jump landing far from the expectation value, a descendant has made a very large and useful step towards the optimum, thus becoming a parent of the next generation. While the variance allocated to it was eminently suitable for the preceding situation, it is not suited to the new one, being in general much too big. The probability that one of the new descendants will be successful is thereby low. Because the (1+λ) strategy permits no worsening of the objective function value, the parent survives, and may do so for many generations. This increases the probability of a successful mutation still having a poorly adapted step length. In the (1, λ) strategy such a stray member will indeed also turn up in a generation, but it will be in effect revoked in the following generation. The descendant that regresses the least survives and is therefore probably the one that most reduces the variance. The scheme thus has better adaptation properties with regard to the step length. In fact this phenomenon can be observed in the simulation. Since we have seen that for λ ≥ 5 the maximum rate of progress is practically independent of whether or not the parent survives, we should favor a (1, λ) strategy, at least when λ is not chosen to be very small, e.g., less than 5 or 6.
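The contrast between plus and comma selection described above can be sketched in a few lines of Python. This is a minimal illustration, not Schwefel's original program; the sphere objective, the discrete mutation factors for the step length, and all parameter values are assumptions chosen only for the example.

```python
import random

def sphere(x):
    # Simple test objective: sum of squares, minimum at the origin.
    return sum(xi * xi for xi in x)

def es_step(parent_x, parent_sigma, lam, plus):
    """One generation of a (1+lam) or (1,lam) strategy.

    Each descendant first mutates the step length by a random
    factor, then perturbs the variables (a hypothetical mutation
    scheme, for illustration only).
    """
    offspring = []
    for _ in range(lam):
        sigma = parent_sigma * random.choice([0.82, 1.0, 1.22])
        x = [xi + random.gauss(0.0, sigma) for xi in parent_x]
        offspring.append((sphere(x), x, sigma))
    best = min(offspring, key=lambda t: t[0])
    if plus and sphere(parent_x) < best[0]:
        # (1+lam): the parent survives when no descendant improves
        # on it, possibly carrying a poorly adapted step length
        # along for many generations.
        return parent_x, parent_sigma
    # (1,lam): the parent is discarded unconditionally; the best
    # descendant takes over, together with its step length.
    return best[1], best[2]
```

With `plus=True` the objective value can never worsen, so a stray large step length may persist; with `plus=False` the best of the λ descendants replaces the parent in every generation, which is what allows the step length to be "revoked" after a lucky outlier.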

5.2.4 The Convergence Criterion for μ > 1 Parents

In Section 5.2.2 we were really looking for the rate of progress of a (μ, λ) evolution method. Because of the analytical difficulties, however, we had to fall back on the μ = 1 case, with only one parent. We shall now proceed again on the assumption that μ > 1. In each generation μ state vectors x_E and associated step lengths are stored, which should always be the μ best of the λ mutants of the previous generation. We naturally require more storage space for doing this on the computer, but on the other hand we have more suitable values at our disposal for each variable. Supposing that the topology of the objective function is complicated or even "pathological," and an individual reaches a point that is unfavorable to further progress, we still have sufficient alternative starting points, which may even be much more favorable. According to the usefulness of their parameter sets, some parents place more mutants in the prime group of descendants than others. In general the μ best individuals of a generation will differ with respect to their variable vectors and objective function values as long as the optimum has not been reached. This provides us with a simple convergence criterion.

From the population of μ parents E_k, k = 1(1)μ, we let F_b be the best objective function value:

    F_b = min_k { F(x_k^(g)) , k = 1(1)μ }

and F_w the worst:

    F_w = max_k { F(x_k^(g)) , k = 1(1)μ }

Then for ending the search we require that either

    F_w − F_b ≤ ε_c
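This absolute termination test is easy to state in code. A minimal sketch, assuming the μ parents' objective values are already collected in a list; the name `eps_c` stands for the accuracy bound ε_c above.

```python
def converged(objective_values, eps_c):
    """Absolute convergence test for a population of mu parents.

    The search stops when the worst and the best objective value
    in the current parent population differ by at most eps_c.
    """
    f_b = min(objective_values)  # F_b, best (smallest) value
    f_w = max(objective_values)  # F_w, worst (largest) value
    return f_w - f_b <= eps_c
```

As long as the parents still differ noticeably in quality the test fails; only when all μ survivors have become nearly equal in objective value, as happens near the optimum, does it signal termination.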
