
value function which are required to be determined optimally from the preference statements of the decision maker are k1, k2, l1 and l2. The following value function optimization problem (VFOP) is solved with the value function parameters (k1, k2, l1 and l2) as variables; its optimal solution assigns optimal values to the value function parameters:

Maximize ε,
subject to V is non-negative at every point Pi,
           V is strictly increasing at every point Pi,
           V(Pi) − V(Pj) ≥ ε, for all (i, j) pairs satisfying Pi ≻ Pj,
           |V(Pi) − V(Pj)| ≤ δV, for all (i, j) pairs satisfying Pi ≡ Pj.          (3)

The above problem is a simple single-objective optimization problem which can be solved using any single-objective optimizer; in this paper it has been solved using the sequential quadratic programming (SQP) procedure of the KNITRO [1] software. The optimization adjusts the value function parameters such that the minimum difference in value function values over the ordered pairs of points is maximized.
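As a rough illustration of how the VFOP can be set up, the sketch below uses SciPy's SLSQP routine as a stand-in for KNITRO's SQP. The product-of-two-linear-terms value function, the sample points P, the parameter bounds, and the use of only consecutive strictly-preferred pairs are assumptions made for this sketch and are not taken from the paper (in particular, no Pi ≡ Pj pairs appear here, so the δV constraints are omitted).

```python
# Sketch of the VFOP in Eq. (3), with SciPy's SLSQP standing in for KNITRO's SQP.
# The value-function form, the points P, and the bounds below are illustrative
# assumptions, not the paper's data or prescribed value function.
import numpy as np
from scipy.optimize import minimize

# Hypothetical (F1, F2) points given in the decision maker's preference order,
# i.e. P[0] is preferred over P[1], and so on; no incomparable pairs are included.
P = np.array([[3.0, 4.0], [2.5, 3.5], [2.0, 3.0], [1.5, 2.0]])

def V(params, F):
    """Placeholder value function: product of two linear terms in (F1, F2)."""
    k1, k2, l1, l2 = params
    F1, F2 = F
    return (F1 + k1 * F2 + l1) * (F2 + k2 * F1 + l2)

def dV(params, F):
    """Gradient of the placeholder V, used for the 'strictly increasing' constraints."""
    k1, k2, l1, l2 = params
    F1, F2 = F
    t1, t2 = F1 + k1 * F2 + l1, F2 + k2 * F1 + l2
    return np.array([t2 + k2 * t1, k1 * t2 + t1])

def build_constraints():
    cons = []
    for i in range(len(P)):
        # V is non-negative at every point Pi
        cons.append({'type': 'ineq', 'fun': lambda z, i=i: V(z[:4], P[i])})
        # V is increasing at Pi; a small margin approximates strictness
        cons.append({'type': 'ineq', 'fun': lambda z, i=i: dV(z[:4], P[i])[0] - 1e-6})
        cons.append({'type': 'ineq', 'fun': lambda z, i=i: dV(z[:4], P[i])[1] - 1e-6})
    for i in range(len(P) - 1):
        # V(Pi) - V(Pj) >= eps for the ordered pairs Pi ≻ Pj (consecutive pairs suffice)
        cons.append({'type': 'ineq',
                     'fun': lambda z, i=i: V(z[:4], P[i]) - V(z[:4], P[i + 1]) - z[4]})
    return cons

# Decision vector z = (k1, k2, l1, l2, eps); eps is maximized by minimizing -eps.
# The parameter bounds only keep this toy problem bounded; the paper imposes its
# own restrictions on the value-function parameters.
bounds = [(0.0, 10.0)] * 4 + [(0.0, None)]
z0 = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
res = minimize(lambda z: -z[4], z0, method='SLSQP',
               bounds=bounds, constraints=build_constraints())
k1, k2, l1, l2, eps = res.x
print('fitted parameters:', (k1, k2, l1, l2), ' epsilon:', eps)
```

In practice the value-function form and any additional restrictions on its parameters would be the ones prescribed by the method itself; the sketch only mirrors the structure of Eq. (3).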

4.2 Termination Criterion

The distance of the current best point from the best points of the previous generations is computed. In the simulations performed here, the distance is computed from the current best point to the best points of the previous 10 generations; if each of the computed distances δu(i), i ∈ {1, 2, . . . , 10}, is found to be less than εu, the algorithm is terminated. A value of εu = 0.1 has been used for the simulations reported in this paper.
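A minimal sketch of this termination test is given below; the function name, the history container, and the use of Euclidean distance in objective space are illustrative assumptions.

```python
# Sketch of the termination test: stop once the current best point lies within
# eps_u = 0.1 of the best point of each of the previous 10 generations.
import numpy as np

def should_terminate(best_history, current_best, eps_u=0.1, window=10):
    """best_history: best objective vectors of past generations, oldest first."""
    if len(best_history) < window:
        return False  # not enough generations yet to apply the test
    recent = np.asarray(best_history[-window:])
    delta_u = np.linalg.norm(recent - np.asarray(current_best), axis=1)
    return bool(np.all(delta_u < eps_u))
```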

4.3 Modified Domination Principle

In this sub-section we describe the modified domination principle proposed in [10]. The value function V is used to modify the usual domination principle so that a more focused search can be performed in the region of interest to the decision maker. Let V(F1, F2) be the value function for a two-objective case, with its parameters determined optimally from the VFOP. For the given η points, the value function assigns a value to each point; let these values be V1, V2, . . . , Vη in descending order. Any two feasible solutions (x(1) and x(2)) can then be compared through their objective function values using the following modified domination criteria (see the sketch after the list):

1. If both points have a value function value less than V2, then the two points are compared based on the usual domination principle.

2. If both points have a value function value more than V2, then the two points are compared based on the usual domination principle.

3. If one point has a value function value more than V2 and the other point has a value function value less than V2, then the former dominates the latter.
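The sketch below encodes these three rules; the fitted value function V and the cut-off value V2 are supplied by the caller, and the usual domination test is written assuming the objectives are to be maximized (consistent with larger value-function values being preferred), which is an assumption of this illustration.

```python
# Sketch of the modified domination check.  V is the fitted value function and
# V2 the value of the second-best of the eta supplied points; both are inputs.
# The usual domination test assumes maximization of the objectives; flip the
# inequalities for minimization.  All names here are illustrative.
import numpy as np

def dominates(f_a, f_b):
    """Usual domination (maximization): a is no worse in all objectives, better in one."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a >= f_b) and np.any(f_a > f_b))

def modified_dominates(f_a, f_b, V, V2):
    va, vb = V(f_a), V(f_b)
    if (va < V2) == (vb < V2):
        # Rules 1 and 2: both points lie on the same side of the V(F) = V2
        # contour, so fall back to the usual domination principle.
        return dominates(f_a, f_b)
    # Rule 3: the point above the contour dominates the one below it.
    return va > V2
```

Within an EMO run, a check like modified_dominates would simply replace the usual domination test wherever two solutions are compared.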

The modified domination principle is explained through Figure 1, which illustrates the regions dominated by two points A and B. Let us consider that the second best point from a given set of η points has a value V2. The function V(F) = V2 then represents a contour, shown in the figure by a curved line.² The first point A has a value VA which is smaller than V2, and the region dominated by A is shaded in the figure; it is identical to the region obtained using the usual domination principle. The second point B has a value VB which is larger than V2.

² The reason for using the contour corresponding to the second best point can be found in [10].

