Global multiobjective optimization using evolutionary methods: An experimental analysis

Figure 3. Non-dominated solutions using msPESA with 5000 and 20000 fitness function evaluations.

The first step is to assign values to the parameters. Some values are fixed and equal for all the methods during the whole comparison. Although each algorithm could undoubtedly be improved by assigning particular values to its parameters, the authors do not suggest anything in this respect and leave the users freedom of choice. Therefore, and for reasons of simplicity, the following values, also chosen in other studies [5], have been used: number of generations T = 250, size of the internal population IP = 100, size of the external population EP = 100 (except for PESA, where EP = 10), crossover probability Px = 0.8, and mutation probability Pµ = 0.1. The benchmarks used were ZDT1 to ZDT6 [5], but for reasons of space, we have not included all of these results in this work.

Figure 2 shows the non-dominated solutions obtained using msPESA, PESA, NSGA-II and SPEA2 for tests (a) ZDT1, (b) ZDT3, and (c) ZDT6. It is clear that msPESA distributes its population along the obtained front better than the other MOEAs. Figure 2 (d) shows the non-dominated solutions when the number of fitness evaluations in msPESA is increased from 5000 to 20000.

The C metric presented in [5] has been used to evaluate the performance of the different optimization methods. This metric establishes a comparison between two algorithms, indicating to what extent the results of the first cover those of the second. Table 1 summarizes the results for the set of experiments in which each trial run was allowed just 5000 fitness evaluations for tests ZDT1, ZDT3 and ZDT6. The table shows that the best performing algorithm is msPESA (see the msPESA row in Table 1), both in proximity to the Pareto front and in the diversity and spread of solutions.

Table 1. Comparison of the C metric using msPESA, NSGA-II, PESA and SPEA2. Each entry gives the coverage of the column algorithm by the row algorithm for ZDT1, ZDT3 and ZDT6.

              msPESA             NSGA-II            PESA               SPEA2
           ZDT1 ZDT3 ZDT6     ZDT1 ZDT3 ZDT6     ZDT1 ZDT3 ZDT6     ZDT1 ZDT3 ZDT6
msPESA       -    -    -      1.00 1.00 1.00     1.00 1.00 1.00     1.00 1.00 1.00
NSGA-II    0.06 0.06 0.01       -    -    -      0.08 0.18 0.75     0.23 0.00 0.12
PESA       0.02 0.00 0.00     1.00 1.00 0.57       -    -    -      1.00 0.00 0.50
SPEA2      0.09 0.16 0.01     0.79 1.00 1.00     0.18 1.00 0.90       -    -    -
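The text does not restate the definition of the C metric. Assuming the usual coverage metric attributed to [5] for minimization problems, where C(A, B) is the fraction of solutions in B that are weakly dominated by at least one solution in A, a minimal sketch of how the values in Table 1 could be computed from two final non-dominated fronts is given below (the fronts shown are hypothetical, not data from the paper):

```python
# Sketch of the coverage (C) metric, assuming the standard definition:
# C(A, B) = fraction of points in B weakly dominated by some point in A
# (minimization of all objectives).

from typing import Sequence, Tuple

Point = Tuple[float, ...]  # one objective vector (f1, f2, ...)


def weakly_dominates(a: Point, b: Point) -> bool:
    """True if a is no worse than b in every objective (minimization)."""
    return all(ai <= bi for ai, bi in zip(a, b))


def coverage(front_a: Sequence[Point], front_b: Sequence[Point]) -> float:
    """C(A, B): fraction of front_b covered by front_a."""
    covered = sum(
        1 for b in front_b
        if any(weakly_dominates(a, b) for a in front_a)
    )
    return covered / len(front_b)


# Toy usage with two hypothetical bi-objective fronts:
front_A = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]
front_B = [(0.2, 0.95), (0.6, 0.6)]
print(coverage(front_A, front_B))  # 1.0: every point of B is covered by A
print(coverage(front_B, front_A))  # 0.0: no point of A is covered by B
```

Note that C is not symmetric: C(A, B) and C(B, A) carry different information and must both be reported, which is why Table 1 contains a full row and a full column for each algorithm.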
