in GGA; there is also a penalty term as an element of the fitness function.

3. Evolutionary based heuristic

The version of the evolution strategy used by the author is the result of his studies of evolutionary algorithms in the flowshop problem [7]. No attempts were made to fine-tune any of the algorithm parameters. It is a (1, λ)-ES, wherein λ descendants are generated from one parent using simple operators. No crossover is applied. The best of the descendants becomes the new parent solution.

The overall procedure

The general form of the applied algorithm, written in Pascal-style pseudo-code, is presented below.

BEGIN
  generate and evaluate random starting solution;
  best solution := start solution;
  REPEAT
    FOR i := 1 TO λ DO
    BEGIN
      copy parent to create child number i;
      mutate child i;
    END;
    evaluate children;
    choose best child based on objective function to be a new parent;
  UNTIL (max. generation);
  output best solution;
END.

Chromosome representation

Permitted solutions are represented by a list of n elements and s group separators, where each value j (1 ≤ j ≤ n) appears exactly once, as does each value i (n+1 ≤ i ≤ n+s), indicating a separator number.

Therefore, for 7 elements and 3 group separators, the solution R1 = (1, 3, 9, 8, 5, 2, 7, 10, 6, 4) means that the elements form three bins (1, 3); (5, 2, 7); (6, 4), while the solution R2 = (1, 10, 3, 8, 5, 2, 9, 7, 6, 4) means that the elements ought to be packed into four bins (1), (3), (5, 2), (7, 6, 4).

The number of separators is set to s = ROUND(0.7 · n).

The initial population

The initial population was obtained in a random manner.

The fitness function

In the algorithm the maximum of the following function was sought:

  FD = (1/N) · Σ_{i=1}^{N} s · (F_i / C)²

where
N - the number of bins in the given solution,
F_i - the sum of weights of the elements packed into bin i, i = 1..N,
C - the bin capacity,
s - a penalty constant, such that

  s = 1 for F_i ≤ C,  s = −1 for F_i > C.
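The decoding of a chromosome into bins and the fitness FD can be sketched in Python; the function and variable names are mine, not the author's, and the handling of consecutive separators (empty groups are simply discarded, as the R1 example implies) is an assumption consistent with the text.

```python
def decode(chromosome, n):
    """Split a permutation of n element ids (1..n) and separator ids
    (n+1..n+s) into bins; empty groups between consecutive separators
    are discarded."""
    bins, current = [], []
    for gene in chromosome:
        if gene <= n:              # an element
            current.append(gene)
        elif current:              # a separator closes a non-empty group
            bins.append(current)
            current = []
    if current:                    # trailing group after the last separator
        bins.append(current)
    return bins

def fitness(chromosome, n, weights, C):
    """FD = (1/N) * sum_i s * (F_i / C)^2, with s = -1 for overfilled bins."""
    bins = decode(chromosome, n)
    total = 0.0
    for b in bins:
        F = sum(weights[j] for j in b)   # weights indexed by element id
        s = 1 if F <= C else -1
        total += s * (F / C) ** 2
    return total / len(bins)

# The two example solutions from the text (n = 7 elements, s = 3 separators):
R1 = (1, 3, 9, 8, 5, 2, 7, 10, 6, 4)
R2 = (1, 10, 3, 8, 5, 2, 9, 7, 6, 4)
print(decode(R1, 7))  # [[1, 3], [5, 2, 7], [6, 4]]
print(decode(R2, 7))  # [[1], [3], [5, 2], [7, 6, 4]]
```

Note how R1's adjacent separators (9, 8) produce an empty group, so only three bins result even though three separators are present.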
Applied operators

The algorithm makes use of operators presented in the literature on the subject, that is: exchange and insertion mutations and inversion. The characteristic feature of the algorithm, the one distinguishing it from those proposed by other authors, is that both elements and separators are subjected to the genetic operators.

The final version of the evolution strategy for the BPP is characterised by the elements and parameters summarised in Table 1.

Table 1. Anatomy of the evolution strategy for the bin packing problem

  Fitness function:                  FD = (1/N) · Σ_{i=1}^{N} s · (F_i / C)²
  Representation:                    a list (sequence) of n parts and s group separators
  Population / reproduction:         1 parent, 10 descendants
  Initialization:                    1 randomly generated solution
  Selection for the next generation: the best descendant replaces the parent
  Termination criterion:             after realization of 45 000 generations
  Crossover:                         not applied
  Other operators:                   exchange, insertion and inversion, applied with equal probability

4. Experimental tests

Tests were run using 8 sets of 20 problems each, the problems being taken from Beasley's library [2]. These problems are divided into two groups:
• elements whose weights are drawn at random from the uniform distribution over the interval [20...100] are to be placed in bins of size 150, the number of elements being 120, 250, 500 and 1000;
• elements whose weights are taken from the interval [0.25...0.50] are to be placed in bins of size 1, the number of elements being 60, 120, 249 and 501; 1/3 of the elements have considerable weight, while the weight of the remaining 2/3 is rather small (less than 1/3 of the bin size).

It must be emphasised that the ES itself involves some random searching.
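Assuming the parameters of Table 1, the (1, λ) loop and the three mutation operators (applied to the whole chromosome, separators included) might be sketched as follows. Function names and the way a random operator is drawn per child are my own assumptions; the paper specifies only that the three operators are applied with equal probability.

```python
import random

def exchange(chrom):
    """Swap two randomly chosen positions (elements and separators alike)."""
    c = list(chrom)
    i, j = random.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c

def insertion(chrom):
    """Remove one gene and reinsert it at another position."""
    c = list(chrom)
    gene = c.pop(random.randrange(len(c)))
    c.insert(random.randrange(len(c) + 1), gene)
    return c

def inversion(chrom):
    """Reverse a randomly chosen sub-sequence."""
    c = list(chrom)
    i, j = sorted(random.sample(range(len(c)), 2))
    c[i:j + 1] = reversed(c[i:j + 1])
    return c

def one_comma_lambda_es(fitness, n, s, lam=10, generations=45_000):
    """(1, lambda)-ES: the best of lam mutated children replaces the parent;
    the best solution seen so far is tracked separately, as in the pseudocode."""
    operators = [exchange, insertion, inversion]   # equal probability
    parent = random.sample(range(1, n + s + 1), n + s)
    best = parent
    for _ in range(generations):
        children = [random.choice(operators)(parent) for _ in range(lam)]
        parent = max(children, key=fitness)        # comma selection
        if fitness(parent) > fitness(best):
            best = parent
    return best
```

All three operators are permutation-preserving, so a child is always a valid chromosome and no repair step is needed.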
Therefore, each calculation using the same set of input data may yield a different solution. Because of that, three runs were completed for each set of data, and the best solution was selected. During one run the algorithm analyses 450 000 solutions, which is a minor part of the search space. It is comparable to the value used by Khuri et al. (1000 generations, each yielding 500 solutions).

Table 2 presents the results obtained for well-known approximate algorithms for 2 out of the 8 sets from Beasley's library: binpack4 for 1000 elements and binpack8 for 501 elements, which are the largest and most difficult problems from the first and the second group of problems, respectively.

The results of these tests confirm the superiority of Falkenauer's GGA algorithm, especially on problems of large dimension. The algorithm proposed by Martello and Toth, as well as FFD, is adequate for solving the problems from the first group. In the case of those considered by Falkenauer to be more difficult, these algorithms prove inadequate. The algorithm by Khuri et al. gives the worst results. The evolution strategy developed by the author yields slightly less satisfactory results than the GGA proposed by Falkenauer, but it is more advantageous than the other techniques, especially on the second group of problems.
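For reference, the FFD (First Fit Decreasing) heuristic used as one of the benchmarks above is compact enough to sketch: sort the items by non-increasing weight and place each into the first bin with enough remaining capacity, opening a new bin when none fits. The function name and example weights below are mine, chosen in the spirit of the first problem group (bin size 150).

```python
def first_fit_decreasing(weights, capacity):
    """Return a packing as a list of bins (lists of item weights):
    items are taken largest-first and put into the first bin with room."""
    bins, remaining = [], []
    for w in sorted(weights, reverse=True):
        for i, r in enumerate(remaining):
            if w <= r:                     # first bin with enough room
                bins[i].append(w)
                remaining[i] -= w
                break
        else:                              # no bin fits: open a new one
            bins.append([w])
            remaining.append(capacity - w)
    return bins

# Illustrative weights in the range of the first problem group, bin size 150:
example = [95, 70, 60, 55, 45, 30, 25, 20]
packed = first_fit_decreasing(example, 150)
print(len(packed))  # 3
```

With a total weight of 400 and capacity 150, three bins is the minimum possible here, so FFD is optimal on this small instance; on the triplet-style problems of the second group it performs much worse, as the text notes.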