Progressively Interactive Evolutionary Multi-Objective Optimization
DM one or more pairs of alternative points found by an EMO algorithm and expect the DM to provide some preference information about the points. Some of the work in this direction has been done by Phelps and Köksalan [10], Fowler et al. [15], Jaszkiewicz [11], Branke et al. [6] and Korhonen et al. [7], [8].

In a couple of recent studies [1], [2], the authors have proposed a progressively interactive approach in which information is elicited from the decision maker and used to direct the search of the EMO algorithm towards a preferred region. The first study fits a quasi-concave value function to the preferences provided by the decision maker and then uses it to drive the EMO algorithm. The second study uses the preference information from the decision maker to construct a polyhedral cone, which is again used to drive the EMO procedure towards the region of interest.
A. Approximating the Decision Maker's Preference Information with a Value Function
Information from the decision maker is usually elicited in the form of his/her preferences. The DM (decision maker) is required to compare a certain number of points in the objective space. The points are presented to the DM, and pairwise comparisons of the given points result in either one solution being preferred over the other or the two solutions being incomparable. Based on such preference statements, a partial ordering of the points is constructed. If P_k, k ∈ {1, . . . , η}, represents a set of η points in the objective space, then for a given pair (i, j) either the i-th point is preferred over the j-th point (P_i ≻ P_j), or the two points are incomparable (P_i ≡ P_j). This information is used to fit a value function which matches the DM's preferences. A number of value functions are available in the literature from which to choose and fit the preference information. Here, we describe the preference-fitting task with three different value functions.
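As a minimal sketch of the preference data described above, pairwise comparisons can be encoded as preferred and incomparable index pairs; the point labels, objective vectors, and the helper `is_consistent` are our own illustrative assumptions, not from the paper.

```python
# Hypothetical encoding of the DM's pairwise comparisons (eta = 3 points).
points = {1: (0.9, 0.2), 2: (0.5, 0.5), 3: (0.1, 0.8)}  # objective vectors P_k
preferred = [(1, 2)]       # P1 is preferred over P2
incomparable = [(2, 3)]    # the DM cannot rank P2 and P3

def is_consistent(V, points, preferred):
    """Check that a candidate value function V honours every stated
    preference: V(P_i) must exceed V(P_j) for each preferred pair (i, j).
    Incomparable pairs impose no ordering constraint."""
    return all(V(points[i]) > V(points[j]) for (i, j) in preferred)

# A value function that weights only the first objective satisfies this data.
print(is_consistent(lambda p: p[0], points, preferred))  # True
```

Any value function fitted in the following subsections must pass such a consistency check against the DM's stated pairs.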
The first value function is the CES [16] value function and the second is the Cobb-Douglas [16] value function. These two value functions are commonly used in the economics literature. The Cobb-Douglas value function is a special form of the CES value function. As the two value functions have a limited number of parameters, they can be used to fit only a certain class of convex preferences. In this study we propose a generalized polynomial value function which can be used to fit any kind of convex preference information. A special form of this value function was suggested in an earlier study by Deb, Sinha, Korhonen and Wallenius [1], who used it in the PI-EMO-VF procedure. In this section we discuss the process of fitting preference information with any value function; in the later part of the paper we incorporate the generalized value function into the PI-EMO-VF procedure.
1) CES Value Function:

$$
\begin{aligned}
& V(f_1, f_2, \ldots, f_M) = \left( \sum_{i=1}^{M} \alpha_i f_i^{\rho} \right)^{\frac{1}{\rho}}, \\
& \text{such that} \quad \alpha_i \geq 0, \quad i = 1, \ldots, M, \\
& \qquad\qquad\;\; \sum_{i=1}^{M} \alpha_i = 1,
\end{aligned}
\tag{1}
$$

where $f_i$ are the objective functions and $\rho, \alpha_i$ are the value function parameters.
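As an illustration, Eq. (1) can be evaluated directly; the function name and the NumPy implementation below are our own sketch, not part of the paper.

```python
import numpy as np

def ces_value(f, alpha, rho):
    """Evaluate the CES value function of Eq. (1):
    V(f) = (sum_i alpha_i * f_i^rho)^(1/rho),
    with alpha_i >= 0 and sum_i alpha_i = 1."""
    f, alpha = np.asarray(f, float), np.asarray(alpha, float)
    assert np.all(alpha >= 0) and np.isclose(alpha.sum(), 1.0)
    return float((alpha * f**rho).sum() ** (1.0 / rho))

print(ces_value([2.0, 2.0], [0.5, 0.5], rho=1.0))  # 2.0 (rho = 1 is the linear case)
```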
2) Cobb-Douglas Value Function:

$$
\begin{aligned}
& V(f_1, f_2, \ldots, f_M) = \prod_{i=1}^{M} f_i^{\alpha_i}, \\
& \text{such that} \quad \alpha_i \geq 0, \quad i = 1, \ldots, M, \\
& \qquad\qquad\;\; \sum_{i=1}^{M} \alpha_i = 1,
\end{aligned}
\tag{2}
$$

where $f_i$ are the objective functions and $\alpha_i$ are the value function parameters.
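A corresponding sketch for Eq. (2); again, the function name and implementation are our own illustration under the stated constraints.

```python
import numpy as np

def cobb_douglas_value(f, alpha):
    """Evaluate the Cobb-Douglas value function of Eq. (2):
    V(f) = prod_i f_i^(alpha_i),
    with alpha_i >= 0 and sum_i alpha_i = 1."""
    f, alpha = np.asarray(f, float), np.asarray(alpha, float)
    assert np.all(alpha >= 0) and np.isclose(alpha.sum(), 1.0)
    return float(np.prod(f ** alpha))

print(cobb_douglas_value([4.0, 1.0], [0.5, 0.5]))  # 2.0
```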
3) Polynomial Value Function: A generalized polynomial value function is suggested which can be utilized to fit any amount of preference information by choosing a higher-degree polynomial.
$$
\begin{aligned}
& V(f_1, f_2, \ldots, f_M) = \prod_{j=1}^{p} \sum_{i=1}^{M} (\alpha_{ij} f_i + \beta_j), \\
& \text{such that} \quad 0 = 1 - \sum_{i=1}^{M} \alpha_{ij}, \quad j = 1, \ldots, p, \\
& \qquad\qquad\;\; S_j = \sum_{i=1}^{M} (\alpha_{ij} f_i + \beta_j) > 0, \quad j = 1, \ldots, p, \\
& \qquad\qquad\;\; 0 \leq \alpha_{ij} \leq 1, \quad j = 1, \ldots, p,
\end{aligned}
\tag{3}
$$

where $f_i$ are the objective functions; $\alpha_{ij}, \beta_j, p$ are the value function parameters; and $S_j$ are the linear product terms in the value function.
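Eq. (3) can likewise be sketched in code; the parameter shapes and the function name are our own assumptions, and the constraint checks mirror the conditions stated in the equation.

```python
import numpy as np

def polynomial_value(f, alpha, beta):
    """Evaluate the generalized polynomial value function of Eq. (3):
    V(f) = prod_j S_j, with S_j = sum_i (alpha_ij * f_i + beta_j).
    alpha has shape (p, M); beta has shape (p,)."""
    f = np.asarray(f, float)
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    assert np.allclose(alpha.sum(axis=1), 1.0)   # sum_i alpha_ij = 1 for each j
    S = (alpha * f + beta[:, None]).sum(axis=1)  # linear terms S_j
    assert np.all(S > 0)                         # S_j > 0 required
    return float(np.prod(S))

# p = M = 2; with equal weights and beta = 0 each S_j equals the mean-weighted sum.
print(polynomial_value([2.0, 2.0], [[0.5, 0.5], [0.5, 0.5]], [0.0, 0.0]))  # 4.0
```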
A special form of this value function suggested by Deb, Sinha, Korhonen and Wallenius [1] used p = M, where M is the number of objectives. Choosing p = M makes the shape of the value function easy to deduce, with each product term S_j, j = 1, . . . , M, representing an asymptote (a hyper-plane). However, any positive integer value of p can be chosen. The more parameters the value function has, the more flexible it is, and any type of quasi-concave indifference curve can be fitted by increasing the value of p. Once the preference information is given, the task is to determine the parameters of the value function which capture the preference information optimally. Next, we frame the optimization problem which needs to be solved to determine the value function parameters.
4) Value Function Optimization: The following is a generic approach which can be used to fit any value function to the preference information provided by the decision maker. In the equations, V represents the value function being used and P is a vector of objectives. V(P) is a scalar assigned to the objective vector P, representing the utility/value of that vector. The optimization problem attempts to find parameters of the value function for which the minimum difference in value between the ordered pairs of points is