U. Glaeser


Conclusion

This chapter section has introduced the reader to four effective heuristics that belong to the class of general iterative approximation algorithms: SA, GA, tabu search, and SimE. From the immense literature that is available, it is evident that for a large variety of applications, in certain settings, these heuristics produce excellent results. All four algorithms are general iterative metaheuristics. A value of the objective function is used to compare results of consecutive iterations, and a solution is selected based on its value. All of the algorithms incorporate domain-specific knowledge to dictate the search strategy. They also tolerate some element of nondeterminism, which helps the search escape local minima. They all rely on a suitable cost function, which provides feedback to the algorithm as the search progresses. The principal difference among these heuristics is how and where domain-specific knowledge is used. For example, in SA such knowledge is mainly included in the cost function. Elements involved in a perturbation are selected randomly, and perturbations are accepted or rejected according to the Metropolis criterion, which is a function of the cost. The cooling schedule also has a major impact on the algorithm's performance and must be carefully crafted for the problem domain as well as the particular problem instance.
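As a rough illustration (not drawn from the chapter itself), the Metropolis acceptance rule and a simple cooling schedule might be sketched as follows. The function names and the geometric schedule T ← αT are illustrative assumptions; real SA implementations tune both to the problem instance.

```python
import math
import random

def metropolis_accept(delta_cost, temperature):
    """Metropolis criterion: always accept an improving move; accept a
    worsening move with probability exp(-delta_cost / T)."""
    if delta_cost <= 0:
        return True
    return random.random() < math.exp(-delta_cost / temperature)

def simulated_annealing(initial, cost, perturb, t0=100.0, alpha=0.95, steps=1000):
    """Generic SA loop with a geometric cooling schedule (an assumption;
    many other schedules are used in practice)."""
    current, best, t = initial, initial, t0
    for _ in range(steps):
        candidate = perturb(current)          # randomly chosen perturbation
        if metropolis_accept(cost(candidate) - cost(current), t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= alpha                            # cooling: temperature decays
    return best
```

Note that the domain-specific knowledge lives almost entirely in `cost` (and the perturbation operator), consistent with the observation above.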

For the two evolutionary algorithms discussed in the chapter, GA and SimE, domain-specific knowledge is exploited in all phases. In the case of GA, the fitness of individual solutions incorporates domain-specific knowledge. Selection for reproduction, the genetic operations, as well as generation of the new population also incorporate a great deal of heuristic knowledge about the problem domain. In SimE, each individual element of a solution is characterized by a goodness measure that is highly correlated with the objective function. The perturbation step (selection followed by allocation) affects mostly low-goodness elements. Therefore, domain-specific knowledge is included in every step of the SimE algorithm.
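One SimE iteration can be sketched as below. This is a minimal illustration under stated assumptions, not the chapter's algorithm: `goodness` is assumed to return a value in [0, 1], and `allocate` is a hypothetical problem-specific routine that re-places the selected elements.

```python
import random

def sime_step(solution, goodness, allocate, bias=0.0):
    """One SimE iteration: each element is selected for reassignment with
    probability roughly (1 - goodness), so low-goodness elements are the
    most likely to be perturbed; allocation then rebuilds the solution."""
    selected, retained = [], []
    for element in solution:
        if random.random() > goodness(element, solution) + bias:
            selected.append(element)   # poor element: re-place it
        else:
            retained.append(element)   # good element: keep it as is
    return allocate(selected, retained)
```

Because selection, the goodness measure, and allocation are all problem specific, every step of this loop is a carrier of domain knowledge, as the text notes.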

Tabu search is different from the above heuristics in that it has an explicit memory component. At each iteration the neighborhood of the current solution is partially explored, and a move is made to the best non-tabu solution in that neighborhood. The neighborhood function, as well as the tabu list size and content, are problem specific. The direction of the search is also influenced by the memory structures (whether intensification or diversification is used).
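The short-term memory component can be illustrated with a fixed-length tabu list. This is a bare-bones sketch: real implementations typically store move attributes rather than whole solutions, and add aspiration, intensification, and diversification mechanisms omitted here.

```python
from collections import deque

def tabu_search(initial, cost, neighbors, tabu_size=7, iterations=100):
    """Basic tabu search: at each iteration, move to the best non-tabu
    neighbor; recently visited solutions are held in a fixed-length
    tabu list (the explicit memory component)."""
    current = best = initial
    tabu = deque([initial], maxlen=tabu_size)    # short-term memory
    for _ in range(iterations):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break                                # neighborhood exhausted
        current = min(candidates, key=cost)      # best admissible move
        tabu.append(current)
        if cost(current) < cost(best):
            best = current
    return best
```

Note that the move to the best non-tabu neighbor is made even when it worsens the cost; the tabu list is what prevents the search from cycling straight back into the local minimum it just left.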

A classification of metaheuristics proposed by Glover and Laguna [130] is based on three basic features: (1) the use of adaptive memory, where the letter A is used if the metaheuristic employs adaptive memory, and the letter M is used if it is memoryless; (2) the kind of neighborhood exploration, where the letter N is used if the metaheuristic performs a systematic neighborhood search, and the letter S is used if stochastic sampling is followed; and (3) the number of current solutions carried from one iteration to the next, where the digit 1 is used if the metaheuristic maintains a single solution, and the letter P is used if a parallel search is performed with a population of solutions of cardinality P. For example, according to this classification, GA is M/S/P, tabu search is A/N/1, SA is M/S/1, and SimE is also M/S/1.
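For reference, the classification can be captured as a small lookup table; the encoding as a Python dictionary is our own, but the feature values are those stated above.

```python
# Glover and Laguna's three-feature classification of metaheuristics:
#   memory:       "A" adaptive / "M" memoryless
#   neighborhood: "N" systematic search / "S" stochastic sampling
#   solutions:    "1" single solution / "P" population of solutions
CLASSIFICATION = {
    "GA":          ("M", "S", "P"),
    "tabu search": ("A", "N", "1"),
    "SA":          ("M", "S", "1"),
    "SimE":        ("M", "S", "1"),
}
```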

It is also possible to make hybrids of these algorithms. The basic idea of hybridization is to enhance the strengths and compensate for the weaknesses of two or more complementary approaches. For details on hybridization, the reader is referred to [8].

In this chapter section, it has not been the authors' intention to demonstrate the superiority of one algorithm over another. Indeed, it would be unwise to rank such algorithms; each one of them has its own merits. Recently, an interesting theoretical study has been reported by Wolpert and Macready, in which they proved a number of theorems stating that the average performance of any pair of iterative (deterministic or nondeterministic) algorithms across all problems is identical. That is, if an algorithm performs well on a certain class of problems, then it necessarily pays for that with degraded performance on the remaining set of problems [146]. It should be noted, however, that these theorems assume the algorithms do not incorporate domain-specific knowledge of the problems being solved. Obviously, a well-engineered algorithm would be expected to exhibit performance superior to that of a poorly engineered one.

© 2002 by CRC Press LLC
