

the quantity eventually becomes zero after a finite number of divisions. Every subsequent multiplication leaves the value as zero. If this happens to one of the standard deviations $\sigma_i$, the affected variable $x_i$ remains constant thereafter. The optimization continues only in a subspace of $\mathbb{R}^n$. To guard against this it must be required that $\sigma_i > 0$ for all $i = 1(1)n$. The random changes should furthermore be sufficiently large that at least the last stored place of a variable is altered. There are therefore two requirements:

Lower limits for the "step lengths":

$$\sigma_i^{(g)} \geq \varepsilon_a \quad \text{for all } i = 1(1)n$$

and

$$\sigma_i^{(g)} \geq \varepsilon_b \, x_i^{(g)} \quad \text{for all } i = 1(1)n$$

where $\varepsilon_a > 0$ and $1 + \varepsilon_b > 1$ according to the computational accuracy.

It is thereby ensured that the random variations are always active and the region of the search stays spanned in all dimensions.
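In an implementation of the strategy, the two lower limits can be enforced by clamping the standard deviations after every adaptation step. The following is a minimal sketch in Python; the function name and the values of $\varepsilon_a$ and $\varepsilon_b$ are illustrative only and would in practice be derived from the floating-point accuracy of the machine used.

```python
import numpy as np

def clamp_step_lengths(sigma, x, eps_a=1e-12, eps_b=1e-8):
    """Enforce the two lower limits on the step lengths sigma_i.

    eps_a and eps_b are illustrative values; in practice both follow
    from the floating-point accuracy of the machine used.
    """
    sigma, x = np.asarray(sigma), np.asarray(x)
    # sigma_i >= eps_a keeps every standard deviation strictly positive,
    # so the search stays spanned in all n dimensions.
    # sigma_i >= eps_b * |x_i| keeps each mutation large enough to alter
    # at least the last stored digit of the variable x_i.
    return np.maximum(sigma, np.maximum(eps_a, eps_b * np.abs(x)))
```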

5.1.3 The Convergence Criterion

In experimental optimization it is usually decided heuristically when to terminate the series of trials: for example, when the trial results indicate that no further significant improvement can be gained. One always has an overall view of how the experiment is running. In numerical optimization, if the calculations are made by computer, one must build into the program a rule saying when the iteration sequence is to be terminated. For this purpose objective, quantitative criteria are needed that refer to the data available at any time. Sometimes, although not always, one will be concerned to obtain a solution as exactly as possible, i.e., accurate to the last stored digit. This requirement can relate to the variables or to the objective function. Remember that the optimum may be a weak one.

Towards the minimum, the step lengths and distances covered normally become smaller and smaller. A frequently used convergence criterion consists of ending the search when the changes in the variables become zero (in which case no further improvement in the objective function is made), or when the step lengths have become zero. As a rule one sets the lower bound not to zero but to a sufficiently small, finite value. This procedure has, however, one disadvantage that can be serious. Small step lengths occur not only if the minimum is nearby, but also if the search is moving through a narrow valley. The optimization may then be practically halted long before the extreme value being sought is reached. In Equations (5.4) and (5.5), $r$ can equally well be thought of as the local radius of curvature. Neither $\varphi$, the distance covered, nor $\sigma$, the step length, is a measure of the closeness to the optimum. Rather they convey information about the complexity of the minimum problem: the number of variables and the narrowness of valleys encountered. The requirement $\sigma > \varepsilon$ or $\|x^{(g)} - x^{(g-1)}\| > \varepsilon$ for the continuation of the search is thus no guarantee of sufficient convergence.
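To make the pitfall concrete, here is a minimal sketch of the step-length-based test just described (in Python; the function name and the tolerance are hypothetical, not values recommended by the text). It cannot distinguish "near the minimum" from "inside a narrow valley".

```python
import numpy as np

def step_lengths_converged(sigma, x_new, x_old, eps=1e-10):
    """Naive termination test based on step lengths alone.

    eps is an illustrative tolerance, not a recommended value.
    """
    sigma, x_new, x_old = map(np.asarray, (sigma, x_new, x_old))
    # Stop when every standard deviation has shrunk below eps ...
    small_steps = np.all(sigma <= eps)
    # ... or when the distance covered in the last generation is below eps.
    small_move = np.linalg.norm(x_new - x_old) <= eps
    # Caveat: both conditions also hold while creeping along a narrow
    # valley, so this test can halt the search far from the minimum.
    return small_steps or small_move
```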
