Evolution and Optimum Seeking

Multidimensional Strategies

The new direction vectors are obtained by normalizing the $w_i$,

$$v_i^{(k+1)} = \frac{w_i}{\|w_i\|}$$

where

$$w_i = \begin{cases} a_i & \text{for } i = 1 \\[6pt] a_i - \displaystyle\sum_{j=1}^{i-1} \left( a_i^T\, v_j^{(k+1)} \right) v_j^{(k+1)} & \text{for } i = 2(1)n \end{cases}$$

and

$$a_i = \sum_{j=i}^{n} d_j^{(k)}\, v_j^{(k)} \quad \text{for all } i = 1(1)n \tag{3.21}$$

A scalar $d_i^{(k)}$ represents the distance covered in direction $v_i^{(k)}$ in the $k$th iteration. Thus $v_1^{(k+1)}$ points in the overall successful direction of step $k$. It is expected that a particularly large search step can be taken in this direction at the next iteration. The requirement of waiting for at least one success in each direction has the effect that no direction is lost, and the $v_i^{(k)}$ always span the full $n$-dimensional Euclidean space. The termination rule, or convergence criterion, is determined by the lengths of the vectors $a_1^{(k)}$ and $a_2^{(k)}$. Before each orthonormalization there is a test whether $\|a_1^{(k)}\| < \varepsilon$ and $\|a_2^{(k)}\| > 0.3\, \|a_1^{(k)}\|$. When this condition is satisfied in six consecutive iterations, the search is ended. The second condition is designed to ensure that the search does not terminate prematurely just because the distances covered have become small. More significantly, it also requires that the main success direction change sufficiently rapidly, something that Rosenbrock regards as a sure sign of the proximity of a minimum. As the strategy comparison will show (see Chap. 6), this requirement is often too strong. It even hinders the ending of the procedure in many cases.
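
To make the direction update of Eq. (3.21) and the termination test concrete, here is a minimal Python/NumPy sketch. It assumes the rows of `V` hold the current orthonormal directions and `d` the distances covered along them; the names (`rotate_axes`, `should_stop`, `epsilon`, `streak`) are illustrative, not taken from Rosenbrock's implementation.

```python
import numpy as np

def rotate_axes(V, d):
    """Direction update per Eq. (3.21).

    V : (n, n) array whose rows v_1, ..., v_n are the current
        orthonormal search directions.
    d : length-n array of the distances d_i covered along each v_i
        in the finished iteration (each nonzero, since the method
        waits for at least one success per direction).

    Returns the new orthonormal directions plus a_1 and a_2,
    which the termination test needs (assumes n >= 2).
    """
    V = np.asarray(V, dtype=float)
    d = np.asarray(d, dtype=float)
    n = len(d)
    # a_i = sum_{j=i}^{n} d_j v_j ; a_1 is the overall success direction
    A = np.array([(d[i:, None] * V[i:]).sum(axis=0) for i in range(n)])

    # Gram-Schmidt: w_1 = a_1, and each further w_i is a_i minus its
    # projections onto the new directions already found; normalize each
    W = np.empty_like(A)
    for i in range(n):
        w = A[i] - sum((A[i] @ W[j]) * W[j] for j in range(i))
        W[i] = w / np.linalg.norm(w)
    return W, A[0], A[1]

def should_stop(a1, a2, epsilon, streak):
    """Convergence test: the overall step must be short AND the main
    success direction must still be rotating, six iterations in a row."""
    small = np.linalg.norm(a1) < epsilon
    rotating = np.linalg.norm(a2) > 0.3 * np.linalg.norm(a1)
    streak = streak + 1 if (small and rotating) else 0
    return streak >= 6, streak
```

Here `streak` carries the count of consecutive iterations in which the test has held; as described above, the test is applied before each orthonormalization and the search stops once the count reaches six.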

In his original publication, Rosenbrock already gave detailed rules for treating inequality constraints. His procedure can be viewed as a partial penalty function method, since the objective function is only altered in the neighborhood of the boundaries. Immediately after each variation of the variables, the objective function value is tested. If the comparison is unfavorable, a failure is registered as in the unconstrained case. In the case of equality or an improvement, however, if the iteration point lies near a boundary of the region, the success criterion changes. For example, for constraints of the form $G_j(x) \geq 0$ for all $j = 1(1)m$, the extended objective function $\tilde{F}(x)$ takes the form (this is one of several suggestions of Rosenbrock):

$$\tilde{F}(x) = F(x) + \sum_{j=1}^{m} \varphi_j(x)\,\bigl(f_j - F(x)\bigr)$$

in which

$$\varphi_j(x) = \begin{cases} 0 & \text{if } G_j(x) \geq \delta \\[4pt] 3\lambda - 4\lambda^2 + 2\lambda^3 & \text{if } 0 < G_j(x) < \delta \\[4pt] 1 & \text{if } G_j(x) \leq 0 \end{cases}$$

and

$$\lambda = 1 - \frac{G_j(x)}{\delta}$$

where $\delta$ denotes the width of the boundary zone within which the penalty takes effect.
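
A sketch of how this extended objective function could be evaluated, under the reconstruction above: the names `penalized_F`, `f_ref`, and `delta` are illustrative assumptions, and the reference values $f_j$ are simply taken as given inputs here.

```python
def penalized_F(F, G, x, f_ref, delta):
    """Partial penalty evaluation of F~(x) for constraints G_j(x) >= 0.

    F     : objective function
    G     : function returning the m constraint values G_j(x)
    f_ref : reference values f_j toward which F is blended when x
            approaches or crosses the boundary of constraint j
    delta : width of the boundary zone in which the penalty acts
    """
    Fx = F(x)
    Ft = Fx
    for Gj, fj in zip(G(x), f_ref):
        if Gj >= delta:            # well inside the feasible region
            phi = 0.0
        elif Gj <= 0.0:            # on or beyond the boundary
            phi = 1.0
        else:                      # boundary zone: cubic blending
            lam = 1.0 - Gj / delta
            phi = 3 * lam - 4 * lam ** 2 + 2 * lam ** 3
        Ft += phi * (fj - Fx)
    return Ft
```

The cubic $3\lambda - 4\lambda^2 + 2\lambda^3$ rises continuously from 0 at $G_j(x) = \delta$ to 1 at $G_j(x) = 0$, so $\tilde{F}$ passes gradually from $F(x)$ to the reference value $f_j$ as the boundary is approached, and the objective function remains unaltered away from the boundaries.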
