Self-Adaptive Genetic Algorithms with Simulated Binary Crossover

3.2 Non-isotropic self-adaptation

A different mutation strength $\sigma_i$ is used for each variable. This is capable of learning to self-adapt to problems where variables are unequally scaled in the objective function. In addition to the $N$ object variables, there are $N$ other strategy parameters. The update rules are as follows:

$$\sigma_i^{(t+1)} = \sigma_i^{(t)} \exp\big(\tau_0 N(0,1) + \tau N_i(0,1)\big), \qquad (10)$$

$$x_i^{(t+1)} = x_i^{(t)} + \sigma_i^{(t+1)} N_i(0,1), \qquad (11)$$

where $\tau_0 \propto (2N)^{-1/2}$ and $\tau \propto (2N^{1/2})^{-1/2}$.

Due to the lack of any theoretical results on this self-adaptive ES, we use the progress coefficient of the $(\mu,\lambda)$-ES or $(\mu/\mu,\lambda)$-ESs as the constant of proportionality for both $\tau_0$ and $\tau$. Similar initial values $\sigma_i^{(0)}$ as discussed for isotropic self-adaptive ESs are used here.
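For illustration, the following is a minimal NumPy sketch of update rules (10) and (11). The proportionality constant `c` is a placeholder standing in for the progress coefficient mentioned above; `c = 1.0` is an arbitrary choice, not a value prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def nonisotropic_mutation(x, sigma, c=1.0):
    """One application of update rules (10) and (11).

    x, sigma: length-N arrays of object variables and mutation strengths.
    c stands in for the progress coefficient (placeholder value).
    """
    n = len(x)
    tau0 = c / np.sqrt(2.0 * n)          # tau_0 proportional to (2N)^(-1/2)
    tau = c / np.sqrt(2.0 * np.sqrt(n))  # tau proportional to (2 N^(1/2))^(-1/2)
    # One global N(0,1) draw shared by all variables, plus one draw per variable.
    sigma_new = sigma * np.exp(tau0 * rng.standard_normal()
                               + tau * rng.standard_normal(n))
    # Fresh N_i(0,1) draws mutate the object variables, scaled by the
    # already-updated strengths, as in eq. (11).
    x_new = x + sigma_new * rng.standard_normal(n)
    return x_new, sigma_new

x, sigma = nonisotropic_mutation(np.zeros(5), np.full(5, 0.1))
```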

3.3 Correlated self-adaptation

Here, different mutation strengths and rotation angles are used to represent the covariances for pair-wise interactions among variables. Thus, in addition to the $N$ object variables, there are a total of $N$ mutation strengths and $N(N-1)/2$ rotation angles used explicitly in each population member. The update rules are as follows:

$$\sigma_i^{(t+1)} = \sigma_i^{(t)} \exp\big(\tau_0 N(0,1) + \tau N_i(0,1)\big), \qquad (12)$$

$$\alpha_j^{(t+1)} = \alpha_j^{(t)} + \beta N_j(0,1), \qquad (13)$$

$$\vec{x}^{(t+1)} = \vec{x}^{(t)} + \vec{N}\big(\vec{0}, \mathbf{C}(\vec{\sigma}^{(t+1)}, \vec{\alpha}^{(t+1)})\big), \qquad (14)$$

where $\vec{N}(\vec{0}, \mathbf{C}(\vec{\sigma}^{(t+1)}, \vec{\alpha}^{(t+1)}))$ is a realization of a correlated mutation vector with a zero mean vector and covariance matrix $\mathbf{C}$. The parameter $\beta$ is fixed at 0.0873 (or $5^\circ$) (Schwefel, 1974). The parameters $\tau_0$ and $\tau$ are used the same as before. We initialize the rotation angles at random within zero and 180 degrees.
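A minimal sketch of rules (12)-(14) follows. Realizing the correlated draw by applying pairwise rotations to an axis-parallel normal vector is the usual Schwefel-style construction; the exact rotation ordering is an implementation assumption, not something this excerpt specifies.

```python
import numpy as np

rng = np.random.default_rng(1)

def correlated_mutation(x, sigma, alpha, beta=0.0873):
    """One application of update rules (12)-(14) for one individual.

    x, sigma: length-N arrays; alpha: dict mapping index pairs (i, j),
    i < j, to rotation angles. Rotation ordering is an assumed convention.
    """
    n = len(x)
    tau0 = 1.0 / np.sqrt(2.0 * n)          # same learning rates as before
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))
    sigma_new = sigma * np.exp(tau0 * rng.standard_normal()
                               + tau * rng.standard_normal(n))   # eq. (12)
    alpha_new = {k: a + beta * rng.standard_normal()             # eq. (13)
                 for k, a in alpha.items()}
    z = sigma_new * rng.standard_normal(n)   # uncorrelated draw, std sigma_i
    for (i, j), a in alpha_new.items():      # rotations induce the covariances of C
        zi, zj = z[i], z[j]
        z[i] = zi * np.cos(a) - zj * np.sin(a)
        z[j] = zi * np.sin(a) + zj * np.cos(a)
    return x + z, sigma_new, alpha_new       # eq. (14)

# Rotation angles initialized at random within zero and 180 degrees.
angles = {(i, j): rng.uniform(0.0, np.pi)
          for i in range(3) for j in range(i + 1, 3)}
x, s, a = correlated_mutation(np.zeros(3), np.ones(3), angles)
```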

4 Connection Between GAs with SBX and Self-Adaptive ESs

Without loss of generality, we argue for the similarity in the working of GAs with SBX and self-adaptive ESs by considering only isotropic self-adaptive ESs. Under an isotropic self-adaptive ES, the difference (say $\delta$) between the child ($x_i^{(t+1)}$) and its parent ($x_i^{(t)}$) can be written from equations 8 and 9:

$$\delta = x_i^{(t+1)} - x_i^{(t)} = \sigma^{(t)} \exp\big(\tau_0 N(0,1)\big) N_i(0,1). \qquad (15)$$

Thus, an instantiation of $\delta$ is normally distributed with zero mean and a variance that depends on $\sigma^{(t)}$, $\tau_0$, and the instantiation of the lognormal distribution. For our argument, there are two aspects of this procedure:

1. For a particular realization of the lognormal distribution, the difference $\delta$ is normally distributed with zero mean. That is, children solutions closer to their parents are monotonically more likely to be created than children solutions farther away.

2. The standard deviation of $\delta$ is proportional to the mutation strength $\sigma^{(t)}$, which signifies, in some sense, the population diversity.
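Both aspects are easy to verify numerically. The sketch below samples $\delta$ per equation 15 for two mutation strengths; $\tau_0 = 0.5$ is an arbitrary illustration value, not one taken from the paper. The sample mean stays near zero and the sample standard deviation scales linearly with $\sigma^{(t)}$.

```python
import numpy as np

rng = np.random.default_rng(2)
tau0, trials = 0.5, 200_000  # tau0 is an arbitrary choice for illustration

for sigma in (0.1, 1.0):
    # delta = sigma^(t) * exp(tau0 * N(0,1)) * N_i(0,1), as in eq. (15)
    delta = (sigma * np.exp(tau0 * rng.standard_normal(trials))
                   * rng.standard_normal(trials))
    print(f"sigma = {sigma}: mean = {delta.mean():+.4f}, std = {delta.std():.4f}")
```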

Under the SBX operator, we write the term $\delta$ using equations 5 and 6 as follows:

$$\delta = x_i^{(1,t+1)} - x_i^{(1,t)} = \frac{\beta_q - 1}{2}\,\big(x_i^{(1,t)} - x_i^{(2,t)}\big). \qquad (16)$$
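Equations 5 and 6 are not reproduced in this excerpt, so the $\beta_q$ draw in the sketch below assumes the standard SBX polynomial distribution with index $\eta$; treat it as an illustration rather than the paper's exact definition. The point it demonstrates is that the spread of $\delta$ under SBX grows with the parent distance, which plays the role of the mutation strength $\sigma^{(t)}$ in the self-adaptive ES.

```python
import numpy as np

rng = np.random.default_rng(3)

def sbx_delta(x1, x2, eta=2.0):
    """delta = x^(1,t+1) - x^(1,t) as in eq. (16), for scalar parents.

    beta_q is drawn from the standard SBX polynomial distribution with
    index eta (assumed here; this excerpt does not reproduce eqs. 3-6).
    """
    u = rng.random()
    if u <= 0.5:
        beta_q = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta_q = (0.5 / (1.0 - u)) ** (1.0 / (eta + 1.0))
    return 0.5 * (beta_q - 1.0) * (x1 - x2)

# The density of beta_q peaks at beta_q = 1, so delta = 0 (a child on top
# of its parent) is the most likely outcome; the spread of delta grows
# linearly with the parent distance |x1 - x2|, mirroring the ES behaviour.
for gap in (0.5, 2.0):
    d = np.array([sbx_delta(0.0, gap) for _ in range(100_000)])
    print(f"parent distance {gap}: std of delta = {d.std():.4f}")
```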
