A Note on Usefulness of Geometrical Crossover for Numerical Optimization Problems

Zbigniew Michalewicz
Department of Computer Science
University of North Carolina
Charlotte, NC 28223
zbyszek@uncc.edu

Maciej Michalewicz
Institute of Computer Science
Polish Academy of Sciences
ul. Ordona 21
01-237 Warsaw, Poland
michalew@ipipan.waw.pl

Girish Nazhiyath
Department of Computer Science
University of North Carolina
Charlotte, NC 28223
gnazhiya@uncc.edu

Abstract

Numerical optimization problems enjoy a significant popularity in the evolutionary computation community; all major evolutionary techniques (genetic algorithms, evolution strategies, evolutionary programming) have been applied to these problems. However, many of these techniques (as well as other, classical optimization methods) have difficulties in solving some real-world problems which include non-trivial constraints. For such problems, very often the global solution lies on the boundary of the feasible region. Thus it is important to investigate problem-specific operators which search this boundary in an efficient way.

In this study we discuss new experimental evidence on the usefulness of the so-called geometrical crossover, which might be used for a boundary search for particular problems.
This operator also enhances the effectiveness of evolutionary algorithms (based on floating point representation) in a significant way.

1 Introduction

For many years, most evolutionary techniques were evaluated and compared with each other in the domain of function optimization. It seems that the domain of function optimization will also remain the primary test bed for many new comparisons and new features of various algorithms. In particular, numerical optimization problems enjoy a significant popularity in the evolutionary computation community; all major evolutionary techniques (genetic algorithms, evolution strategies, evolutionary programming) can be applied to these problems.

However, in this study we depart from differences between various evolutionary techniques for numerical optimization problems, and discuss instead some experimental evidence on the usefulness of a new operator, the so-called geometrical crossover. This operator seems to enhance the effectiveness of evolutionary algorithms (based on floating point representation) in a significant way; later in the paper we compare it with the better known arithmetical crossover on a few test cases.

The geometrical crossover emerged as a problem-specific operator for one particular constrained problem.
In a similar way, we can construct (for other constrained problems) various crossover operators (e.g., sphere crossover; see Conclusions) and analyse their performance. Thus this study indicates another possible way of handling constraints in evolutionary methods, based on a search for a global solution on the boundary of the feasible region. Some other heuristic techniques (e.g., tabu search) have recently recognized the importance of such a search by investigating strategic oscillation (see Kelly et al. 1993, Glover and Kochenberger, 1995).

The paper is organized as follows. The following two sections briefly survey several operators and a few constraint-handling techniques for numerical optimization problems which have emerged in evolutionary computation techniques (floating point representation) over the years. Section 4 presents a test case of a constrained optimization problem which proved to be extremely difficult for most optimization methods. Section 5 discusses a special, problem-specific evolutionary system which


was "tuned" towards this particular problem; the system incorporates a new operator, which we call geometrical crossover. There is some experimental evidence on the general usefulness of this operator, which outperforms the better known arithmetical crossover; this is presented in Section 6. We conclude the paper by providing an example of another problem-specific crossover, sphere crossover, and indicate its usefulness on another test case.

2 Numerical Optimization and Operators

Most evolutionary algorithms use vectors of floating point numbers for their chromosomal representations. For such representations, many operators have been proposed during the last 30 years. We discuss them briefly in turn.

The most popular mutation operator is Gaussian mutation, which modifies all components of the solution vector x = <x_1, ..., x_n> by adding random noise:

  x^{t+1} = x^t + N(0, sigma),

where N(0, sigma) is a vector of independent random Gaussian numbers with a mean of zero and standard deviations sigma. Such a mutation is used in evolution strategies (Back et al. 1991) and evolutionary programming (Fogel 1992). (One of the differences between these techniques lies in adjusting the vector of standard deviations sigma.)

Other types of mutations include non-uniform mutation, where

  x_k^{t+1} = x_k^t + Delta(t, right(k) - x_k)   if r is 0
  x_k^{t+1} = x_k^t - Delta(t, x_k - left(k))    if r is 1

for k = 1, ..., n (r is a random binary digit).
The function Delta(t, y) returns a value in the range [0, y] such that the probability of Delta(t, y) being close to 0 increases as t increases (t is the generation number). This property causes this operator to search the space uniformly initially (when t is small), and very locally at later stages. In experiments reported in Michalewicz et al. (1994), the following function was used:

  Delta(t, y) = y * r * (1 - t/T)^b,

where r is a random number from [0, 1], T is the maximal generation number, and b is a system parameter determining the degree of non-uniformity.

It is also possible to experiment with a uniform mutation, which changes a single component of the solution vector; e.g., if x^t = <x_1, ..., x_k, ..., x_q>, then x^{t+1} = <x_1, ..., x'_k, ..., x_q>, where x'_k is a random value (uniform probability distribution) from the domain of variable x_k. A special case of uniform mutation is boundary mutation, where x'_k is either the left or the right boundary of the domain of x_k.

There are a few interesting crossover operators. The first one, uniform crossover (known also as discrete crossover in evolution strategies; Schwefel 1981), works as follows. Two parents,

  x^1 = <x^1_1, ..., x^1_n> and x^2 = <x^2_1, ..., x^2_n>,

produce an offspring,

  x = <x^{q_1}_1, ..., x^{q_n}_n>,

where q_i = 1 or q_i = 2 with equal probability for all i = 1, ..., n (the second offspring is created by setting q_i := 3 - q_i).
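The two operators just described are easy to implement; the following Python sketch shows the non-uniform mutation and the uniform (discrete) crossover. The helper names and the default b = 2 are our own assumptions, not the authors' code.

```python
import random

def delta(t, y, T, b=2.0):
    # Delta(t, y): a value in [0, y] that tends toward 0 as generation t
    # approaches the maximal generation T; b controls the non-uniformity
    return y * random.random() * (1.0 - t / T) ** b

def nonuniform_mutation(x, t, T, left, right, b=2.0):
    # mutate every component of x within its domain [left[k], right[k]]
    out = []
    for k, xk in enumerate(x):
        if random.getrandbits(1) == 0:      # random binary digit r = 0
            out.append(xk + delta(t, right[k] - xk, T, b))
        else:                               # r = 1
            out.append(xk - delta(t, xk - left[k], T, b))
    return out

def uniform_crossover(x1, x2):
    # each gene of the first offspring comes from parent 1 or 2 with equal
    # probability; the second offspring takes the complement (q_i := 3 - q_i)
    q = [random.choice((1, 2)) for _ in x1]
    o1 = [u if qi == 1 else v for qi, u, v in zip(q, x1, x2)]
    o2 = [v if qi == 1 else u for qi, u, v in zip(q, x1, x2)]
    return o1, o2
```

Note that at t = T the factor (1 - t/T)^b vanishes, so the mutation leaves the vector unchanged; this is what concentrates the search locally in late generations.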
Of course, the uniform crossover is a generalization of 1-point, 2-point, and multi-point crossovers.

Another possibility is arithmetical crossover. Here two vectors, x^1 and x^2, produce two offspring, x^3 and x^4, which are linear combinations of their parents, i.e.,

  x^3 = a * x^1 + (1 - a) * x^2 and
  x^4 = (1 - a) * x^1 + a * x^2.

Such a crossover was also called a guaranteed average crossover (Davis, 1989, when a = 1/2), and an intermediate crossover (Schwefel, 1981).

There is also an interesting heuristic crossover proposed by Wright (1991); it is a unique crossover for the following reasons: (1) it uses values of the objective function in determining the direction of the search, (2) it produces only one offspring, and (3) it may produce no offspring at all. The operator generates a single offspring x^3 from two parents, x^1 and x^2, according to the following rule:

  x^3 = r * (x^2 - x^1) + x^2,

where r is a random number between 0 and 1, and the parent x^2 is not worse than x^1, i.e., f(x^2) >= f(x^1) for maximization problems and f(x^2) <= f(x^1) for minimization problems.

Muhlenbein and Voigt (1995) investigated the properties of a recombination operator, called gene pool recombination (the default recombination mechanism in evolution strategies, see Schwefel, 1981), where the genes are randomly picked from the gene pool defined by the selected parents. An interesting aspect of this operator is that it allows so-called orgies: several parents producing one offspring. Such a multi-parent crossover was also investigated by Eiben et al.
(1994) in the context of combinatorial optimization. Renders and Bersini (1994) experimented with simplex crossover for numerical optimization problems; this crossover involves computing the centroid of a group of parents and moving from the worst individual beyond the centroid point. Also, Glover's (1977) scatter search techniques propose the use of multiple parents.

3 Constraint-Handling Methods

During the last two years several methods were proposed for handling constraints by genetic algorithms for numerical optimization problems. Most of them are based on


the concept of penalty functions, which penalize unfeasible solutions, i.e.,

  eval(X) = f(X),               if X in F
  eval(X) = f(X) + penalty(X),  if X in S - F,

where the set S in R^n defines the search space and the set F in S defines the feasible search space; penalty(X) is zero if no violation occurs, and is positive otherwise (in the rest of this section we assume minimization problems). In most methods a set of functions f_j (1 <= j <= m) is used to construct the penalty; the function f_j measures the violation of the j-th constraint in the following way:

  f_j(X) = max{0, g_j(X)},  if 1 <= j <= q
  f_j(X) = |h_j(X)|,        if q + 1 <= j <= m.

However, these methods differ in many important details of how the penalty function is designed and applied to unfeasible solutions.

The most severe penalty is a death penalty; such a method simply rejects unfeasible individuals. This approach has been used by many techniques, e.g., evolution strategies (Back et al.
1991) and simulated annealing.

Some methods use static penalties; for example, a method proposed by Homaifar, Lai, and Qi (1994) assumes that for every constraint we establish a family of intervals which determine an appropriate penalty coefficient. It works by creating (for each constraint) several (l) levels of violation; creating a penalty coefficient R_ij for each level of violation and for each constraint (higher levels of violation require larger values of this coefficient); starting with a random population of individuals (feasible or unfeasible); and evolving the population, where the evaluation function used is

  eval(X) = f(X) + sum_{j=1}^{m} R_ij * f_j^2(X).

Some other methods apply dynamic penalties. For example, in the method proposed by Joines and Houck (1994) individuals are evaluated (at iteration t) by the following formula:

  eval(X) = f(X) + (C * t)^alpha * sum_{j=1}^{m} f_j^beta(X),

where C, alpha, and beta are constants. A reasonable choice for these parameters is C = 0.5, alpha = beta = 2.

Also, Michalewicz and Attia (1994) experimented with a procedure where the algorithm maintains the feasibility of all linear constraints using a set of closed operators, which convert a feasible solution (feasible in terms of linear constraints only) into another feasible solution.
At every iteration the algorithm considers active constraints only, and the pressure on unfeasible solutions is increased due to the decreasing values of the temperature tau.

It is also possible to incorporate adaptive penalties; Bean and Hadj-Alouane (1992) and Smith and Tate (1993) experimented with them. Bean and Hadj-Alouane (1992) change the penalty coefficients on the basis of the number of feasible and infeasible individuals in the last k generations, whereas in the Smith and Tate (1993) approach the penalty measure depends on the number of violated constraints, the best feasible objective function value found, and the best objective function value found.

Yet another approach was proposed recently by Le Riche et al. (1995). The authors designed a (segregated) genetic algorithm which uses two values of penalty parameters (for each constraint) instead of one; these two values aim at achieving a balance between heavy and moderate penalties by maintaining two subpopulations of individuals. The population is split into two cooperating groups, where individuals in each group are evaluated using either one of the two penalty parameters.

An additional method was proposed by Schoenauer and Xanthakis (1993). The method requires a linear order of all constraints, which are processed in turn.
At iteration j, solutions that do not satisfy one of the 1st, 2nd, ..., or (j-1)-th constraints are eliminated from the population, and the stop criterion is the satisfaction of the j-th constraint by a given (flip threshold) percentage of the population. In the last step the algorithm optimizes the objective function, rejecting unfeasible individuals.

Another method was developed by Powell and Skolnick (1993). The method is a classical penalty method with one notable exception. Each individual is evaluated by the formula:

  eval(X) = f(X) + r * sum_{j=1}^{m} f_j(X) + theta(t, X),

where r is a constant; however, there is also a component theta(t, X). This is an additional iteration-dependent function which influences the evaluations of unfeasible solutions. The point is that the method distinguishes between feasible and unfeasible individuals by adopting an additional heuristic rule (suggested earlier by Richardson et al. 1989): for any feasible individual X and any unfeasible individual Y, eval(X) < eval(Y), i.e., any feasible solution is better than any unfeasible one.

One of the most recent methods (Genocop III; see Michalewicz and Nazhiyath 1995) incorporates the original Genocop system (Michalewicz et al. 1994), but also extends it by maintaining two separate populations, where a development in one population influences evaluations of individuals in the other population.
The first population consists of so-called search points from S which satisfy the linear constraints of the problem (as in the original Genocop system). The feasibility (in the sense of linear constraints) of these points is maintained, as before, by specialized operators. The second population consists of so-called reference points from F; these points are fully feasible, i.e., they satisfy all constraints. Reference points R, being feasible, are evaluated directly by the objective function (i.e., eval(R) = f(R)). On the other hand, unfeasible search points are "repaired" for evaluation.
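To make the penalty formulas of this section concrete, here is a small Python sketch (the function names and the toy constraint are ours, not from any of the cited systems); it implements the violation measures f_j and the dynamic penalty of Joines and Houck (1994) with the suggested C = 0.5, alpha = beta = 2:

```python
def violations(x, ineq=(), eq=()):
    # f_j(X): max{0, g_j(X)} for inequality constraints g_j(X) <= 0,
    # and |h_j(X)| for equality constraints h_j(X) = 0
    return [max(0.0, g(x)) for g in ineq] + [abs(h(x)) for h in eq]

def dynamic_penalty_eval(f, x, t, ineq=(), eq=(), C=0.5, alpha=2.0, beta=2.0):
    # Joines and Houck (1994): eval(X) = f(X) + (C*t)^alpha * sum_j f_j(X)^beta
    # (minimization; the penalty grows with the iteration counter t)
    pen = sum(fj ** beta for fj in violations(x, ineq, eq))
    return f(x) + (C * t) ** alpha * pen

# toy minimization problem: f(x) = x^2 subject to x >= 1, i.e., g(x) = 1 - x <= 0
f = lambda x: x[0] ** 2
g = lambda x: 1.0 - x[0]
```

With this setup a feasible point is evaluated by f alone, while an unfeasible point is penalized more and more heavily as t grows, which is the common thread of the dynamic-penalty methods above.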


For an experimental comparison of some of these methods on a few test cases, see Michalewicz (1995).

4 The Test Case

An interesting constrained numerical optimization test case emerged recently; the problem (Keane, 1994) is to maximize the function:

  f(x) = | sum_{i=1}^{n} cos^4(x_i) - 2 * prod_{i=1}^{n} cos^2(x_i) | / sqrt( sum_{i=1}^{n} i * x_i^2 ),

where

  prod_{i=1}^{n} x_i >= 0.75,     (1)
  sum_{i=1}^{n} x_i <= 7.5 * n,   (2)

and 0 <= x_i <= 10 for 1 <= i <= n. The problem has two constraints; the function f is nonlinear and its global maximum is unknown.

[Figure 1: The graph of function f for n = 2.]

[Figure 2: The graph of function f for n = 2. Infeasible solutions were assigned value zero.]

To illustrate some potential difficulties of solving this test case, a few graphs (for the case of n = 2) are displayed in Figures 1-4. Figure 1 gives a general overview of the objective function f: the whole landscape seems to be relatively flat except for a sharp peak around the point (0, 0). Figures 2 and 3 incorporate the active constraint: infeasible solutions were assigned a value of zero. In that case the objective function f takes values from the range [0, 1]; because of the scaling, the landscape is more visible. Figure 3 displays only the area of interest (i.e., the area of the global optimum). Figure 4 presents a side view of the landscape along one of the axes.
Again, only the feasible part of the landscape is displayed.

All methods (described in Section 3) were tried on this test case with quite poor results. As Keane (1994) noted:

  "I am currently using a parallel GA with 12bit binary encoding, crossover, inversion, mutation, niche forming and a modified Fiacco-McCormick constraint penalty function to tackle this. For n = 20 I get values like 0.76 after 20,000 evaluations."

We obtained the best results from Genocop III; however, the results varied from run to run (between 0.75 and 0.80 for the case of n = 20).

5 A New System and the Geometrical Crossover

It seems that the majority of constraint-handling methods have serious difficulties in returning a high quality solution for the above problem. It might be possible, however, to build a dedicated system for this particular test case, which would incorporate problem-specific knowledge. This knowledge can emerge from an analysis of the objective function f and the constraints; thus we assume that (a) the domain for all variables is [0.1, 10.0], (b) constraint (1) is active at the global optimum, and (c) constraint (2) is not.

Now it is possible to develop an evolutionary system which would search just the surface defined by the first constraint:

  prod_{i=1}^{n} x_i = 0.75.

Thus the search space is greatly reduced; we consider only points which satisfy the above equation.
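For concreteness, the objective function and the constraints of Section 4 can be restated as code; this is a sketch with helper names of our own choosing, not part of the system described below:

```python
from math import cos, sqrt
from functools import reduce
from operator import mul

def keane_f(x):
    # Keane's objective: |sum cos^4(x_i) - 2 prod cos^2(x_i)| / sqrt(sum i*x_i^2);
    # assumes x is not the origin, so the denominator is nonzero
    s = sum(cos(xi) ** 4 for xi in x)
    p = reduce(mul, (cos(xi) ** 2 for xi in x), 1.0)
    denom = sqrt(sum((i + 1) * xi ** 2 for i, xi in enumerate(x)))
    return abs(s - 2.0 * p) / denom

def feasible(x):
    # constraints (1) and (2) plus the box bounds 0 <= x_i <= 10
    return (reduce(mul, x, 1.0) >= 0.75
            and sum(x) <= 7.5 * len(x)
            and all(0.0 <= xi <= 10.0 for xi in x))
```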
Such a system would start from a population of feasible points, i.e., points for which the product of all coordinates is equal to 0.75. It is relatively easy to develop such an initialization routine:


[Figure 3: The graph of function f for n = 2. Infeasible solutions were assigned value zero; only the corner around the global optimum is shown.]

  for i = 1 to n do
  begin
    if i is even
      x_{i-1} = random(0.1, 10)
      x_i = 1 / x_{i-1}
  end
  if n is odd
    x_n = 0.75
  else
    for some random i
      x_i = 0.75 * x_i

It is important to note that the above initialization routine is not significant for the performance of any system; it just ensures that all points in the population are at the boundary between the feasible and infeasible regions. Many other methods were tried with such initialized populations and gave poor results. Also, the value of the best individual in a population initialized in such a way is around 0.30.

The geometrical crossover takes two parents and produces a single offspring; for parents x^1 and x^2 the offspring x^3 is

  x^3 = < sqrt(x^1_1 * x^2_1), ..., sqrt(x^1_n * x^2_n) >.    (3)

Note that the offspring x^3 also lies on the boundary of the feasible region:

  prod_{i=1}^{n} x^3_i = 0.75.

Of course, it is an easy task to generalize the above geometrical crossover into

  x^3 = < (x^1_1)^alpha * (x^2_1)^(1-alpha), ..., (x^1_n)^alpha * (x^2_n)^(1-alpha) >,    (4)

for 0 <= alpha <= 1. It is also possible to include several (say, k) parents:

  x^{k+1} = < (x^1_1)^{alpha_1} * (x^2_1)^{alpha_2} * ... * (x^k_1)^{alpha_k}, ...,
              (x^1_n)^{alpha_1} * (x^2_n)^{alpha_2} * ... * (x^k_n)^{alpha_k} >,    (5)

where alpha_1 + ... + alpha_k = 1.

[Figure 4: The vertical view of the function f for n = 2.]
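A Python sketch of the initialization routine and of the basic geometrical crossover (3) follows; the function names are ours, not the authors' code:

```python
import random
from math import sqrt
from functools import reduce
from operator import mul

def init_boundary_point(n):
    # pairwise reciprocals give product 1; a factor 0.75 then puts the
    # point on the boundary prod x_i = 0.75 (note that, as in the paper's
    # routine, the scaled component may fall slightly below 0.1)
    x = []
    for _ in range(n // 2):
        v = random.uniform(0.1, 10.0)
        x += [v, 1.0 / v]
    if n % 2 == 1:
        x.append(0.75)
    else:
        x[random.randrange(n)] *= 0.75
    return x

def geometrical_crossover(x1, x2):
    # offspring (3): component-wise geometric mean of the parents
    return [sqrt(u * v) for u, v in zip(x1, x2)]

prod = lambda x: reduce(mul, x, 1.0)
```

Because the product of the offspring's coordinates is the geometric mean of the parents' products, offspring of two boundary parents stay exactly on the boundary prod x_i = 0.75.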
However, in the experiments reported in this paper, we limited ourselves to the simplest geometrical crossover (3).

The task of designing a feasibility-preserving mutation is relatively simple; if the i-th component is selected for mutation, the following algorithm is executed:

  determine random j, 1 <= j <= n, j != i
  select q such that:
    0.1 <= x_i * q <= 10.0 and
    0.1 <= x_j / q <= 10.0
  x_i = x_i * q
  x_j = x_j / q

A simple evolutionary algorithm (200 lines of code!) with geometrical crossover and problem-specific mutation gave outstanding results. For the case n = 20 the system reached the value of 0.80 in less than 4,000 generations (with a population size of 30, probability of crossover p_c = 1.0, and probability of mutation p_m = 0.06) in all runs. The best value found (namely 0.803553) was better than the best values of any method discussed earlier, whereas the worst value found was 0.802964. Similarly, for n = 50, all results (in 30,000 generations) were better than 0.83 (with the best of 0.8331937):

  x = (6.28006029, 3.16155291, 3.15453815, 3.14085174,
       3.12882447, 3.11211085, 3.10170507, 3.08703685,
       3.07571769, 3.06122732, 3.05010581, 3.03667951,
       3.02333045, 3.00721049, 2.99492717, 2.97988462,
       2.96637058, 2.95589066, 2.94427204, 2.92796040,


       0.40970641, 2.90670991, 0.46131119, 0.48193336,
       0.46776962, 0.43887550, 0.45181099, 0.44652876,
       0.43348753, 0.44577143, 0.42379948, 0.45858049,
       0.42931050, 0.42928645, 0.42943302, 0.43294361,
       0.42663351, 0.43437257, 0.42542559, 0.41594154,
       0.43248957, 0.39134723, 0.42628688, 0.42774364,
       0.41886297, 0.42107263, 0.41215360, 0.41809589,
       0.41626775, 0.42316407).

It was interesting to note the importance of the geometrical crossover. With a fixed population size (kept constant at 30), the higher the probability of crossover p_c, the better the results of the system. Similarly, the best mutation rates were relatively low (p_m <= 0.06). We illustrate the average values (out of 10 runs) of the best values found for different probabilities of mutation (with fixed p_c = 1.0) in Figure 5.

[Figure 5: The performance of the system with p_c = 1.0 and a variable mutation rate p_m.]

Clearly, geometrical crossover can be applied only to problems where each variable takes nonnegative values only, so its use is quite restricted in comparison with other types of crossovers (e.g., arithmetical crossover). However, for many engineering problems all problem variables are positive; moreover, it is always possible to replace a variable x_i in [a_i, b_i] which can take negative values (i.e., where a_i < 0) with a new variable y_i = x_i - a_i.

6 Further Experiments and Results

We compared arithmetical and geometrical crossovers on several test cases. The preliminary experiments indicate that geometrical crossover outperforms arithmetical crossover on the majority of unconstrained test problems. For example, one of the selected test cases was to

  minimize f(X) = 100 * (x_2 - x_1^2)^2 + (1 - x_1)^2 +
                  90 * (x_4 - x_3^2)^2 + (1 - x_3)^2 +
                  10.1 * ((x_2 - 1)^2 + (x_4 - 1)^2) +
                  19.8 * (x_2 - 1) * (x_4 - 1),

subject to:

  0.0 <= x_i <= 10.0, i = 1, 2, 3, 4.

The global solution is X* = (1, 1, 1, 1), and f(X*) = 0.

We experimented with two simple systems (A and B); both of them use two operators, and one of these operators (in both systems) was the non-uniform mutation (described in Section 2). The only difference between systems A and B was that the former system used arithmetical crossover, i.e., an offspring z of parents x and y was defined as

  z_i = a * x_i + (1 - a) * y_i   for all 1 <= i <= n and random 0 <= a <= 1,

whereas system B used geometrical crossover:

  z_i = x_i^a * y_i^(1-a)   for all 1 <= i <= n and random 0 <= a <= 1.

Figure 6 illustrates the difference in the performance of systems A and B.

[Figure 6: The performance (number of generations in 10,000s versus the average value of the objective function over 20 runs) of systems A and B on one test case; system B is marked by '*'.]

7 Conclusions

It seems that the experiments described in the previous sections can be generalized in the following sense. Initial experiments (using any evolutionary method) may indicate active constraints for a given problem. Then it might be possible to build a specialized system with operators to search the boundary between feasible and infeasible parts of the search space. This was precisely the case for the problem described in Section 4; it might be the case for many other optimization problems.

Let us consider, for example, the following optimization problem:

  maximize f(x) = (sqrt(n))^n * prod_{i=1}^{n} x_i,

where


  sum_{i=1}^{n} x_i^2 <= 1    (6)

and 0 <= x_i <= 1, for 1 <= i <= n. Clearly, the objective function reaches its global optimum at

  x* = < 1/sqrt(n), ..., 1/sqrt(n) >,

and f(x*) = 1.

Again, it is relatively easy to initialize the population with boundary points, i.e., points satisfying

  sum_{i=1}^{n} x_i^2 = 1.

Then, the sphere crossover produces an offspring x^3 from two parents x^1 and x^2 in the following way:

  x^3_i = sqrt( ((x^1_i)^2 + (x^2_i)^2) / 2 ).

As for geometrical crossover, it is easy to generalize sphere crossover:

  x^3_i = sqrt( alpha * (x^1_i)^2 + (1 - alpha) * (x^2_i)^2 ),

for 0 <= alpha <= 1, as well as for multiple parents:

  x^{k+1}_i = sqrt( alpha_1 * (x^1_i)^2 + ... + alpha_k * (x^k_i)^2 ),

for alpha_1 + ... + alpha_k = 1. Clearly, x^3 also lies on the sphere defined by (6). Similarly, a problem-specific mutation (for a parent vector x) can select two indices, i != j, and a random number p from the range [0, 1], and assign:

  x^{t+1}_i = p * x^t_i, and
  x^{t+1}_j = q * x^t_j,

where q = sqrt( (x_i / x_j)^2 * (1 - p^2) + 1 ). Such a simple evolutionary system finds the global optimum easily (e.g., the case of n = 20 requires 10,000 generations with a population size equal to 30, p_c = 1.0, and p_m = 0.06).

It seems that problem-specific operators which search the boundary of a feasible region can enhance the search for a global optimum in a significant way. At least for some numerical optimization problems, it is an interesting option to explore. The resulting evolutionary systems are very easy to construct (less than 200 lines of code) and use. As indicated in Section 6, such operators can be useful also for problems without constraints.
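As a final sketch (in Python, with hypothetical helper names), the sphere crossover and the norm-preserving mutation above can be checked to keep offspring on the unit sphere:

```python
from math import sqrt

def sphere_crossover(x1, x2):
    # offspring component: sqrt(((x1_i)^2 + (x2_i)^2) / 2), so the offspring's
    # squared norm is the average of the parents' squared norms
    return [sqrt((u * u + v * v) / 2.0) for u, v in zip(x1, x2)]

def sphere_mutation(x, i, j, p):
    # scale x_i by p and x_j by q = sqrt((x_i/x_j)^2 (1 - p^2) + 1),
    # which preserves x_i^2 + x_j^2 and hence the whole squared norm;
    # assumes x_j != 0
    y = list(x)
    q = sqrt((x[i] / x[j]) ** 2 * (1.0 - p * p) + 1.0)
    y[i] = p * x[i]
    y[j] = q * x[j]
    return y

norm2 = lambda x: sum(v * v for v in x)
```

The check is immediate: p^2 x_i^2 + q^2 x_j^2 = p^2 x_i^2 + x_i^2 (1 - p^2) + x_j^2 = x_i^2 + x_j^2, so the point stays on the sphere (6).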
We plan further study of the properties of various crossover and boundary-search operators for numerical optimization problems.

Acknowledgments

This material is based upon work supported by the National Science Foundation under Grant IRI-9322400.
