
Tabu Search and Other Hybrid Concepts

Aggressive exploration using a short-term memory forms the core of the TS. From a candidate list of (non-exhaustive) moves the best admissible one is chosen. The decision is based on tabu restrictions on the one hand and on aspiration criteria on the other. Whereas aspiration criteria aim at perpetuating former successful operations, tabu restrictions help to avoid stepping back to inferior solutions and repeating already investigated trial moves. Although the best admissible step does not necessarily lead to an improvement, only better solutions are stored as real moves. Successes and failures are used to update the tabu list and the aspiration memory. If no further improvements can be found, or after a specified number of iterations, one transfers the results to the longer-term memories and switches to either an intensification or a diversification mode. Intensification combined with the medium-term memory refers to procedures for reinforcing move combinations historically found good, whereas diversification combined with the long-term memory refers to exploring new regions of the search space. The first articles of Glover (1986, 1989) present many ideas to decide upon switching back and forth between the three modes. Many more have been conceived and published together with application results. In some cases complete procedures from other optimization paradigms have been used within the different phases of the TS, e.g., line search or gradient-like techniques during intensification, and GAs during diversification.
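The short-term memory cycle just described may be illustrated by the following minimal Python sketch. It is not taken from Glover's formulation; the neighborhood generator, the tabu tenure, and all other names are purely illustrative assumptions.

```python
def tabu_search(x0, neighbors, f, tenure=7, max_iter=100):
    """Minimal short-term-memory tabu search sketch (illustrative only).

    x0        -- initial solution
    neighbors -- function returning a candidate list of (move, solution) pairs
    f         -- objective function to be minimized
    tenure    -- number of iterations a chosen move stays tabu
    """
    current, best = x0, x0
    best_val = f(x0)
    tabu = {}                            # move -> iteration until which it is tabu
    for it in range(max_iter):
        chosen, chosen_move, chosen_val = None, None, float("inf")
        for move, sol in neighbors(current):
            val = f(sol)
            is_tabu = tabu.get(move, -1) >= it
            # Aspiration criterion: a tabu move becomes admissible
            # if it improves on the best solution found so far.
            if is_tabu and val >= best_val:
                continue
            if val < chosen_val:
                chosen, chosen_move, chosen_val = sol, move, val
        if chosen is None:
            break                        # no admissible move left
        current = chosen                 # accept the best admissible move
        tabu[chosen_move] = it + tenure  # forbid repeating it for a while
        if chosen_val < best_val:        # only improvements are stored as real moves
            best, best_val = chosen, chosen_val
    return best, best_val
```

In a fuller implementation the successes and failures recorded here would also feed the medium- and long-term memories that drive the intensification and diversification phases.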

Instead of going into further details here, it seems appropriate to give some hints that point to rather similar hybrid methods, more or less centered around either GAs, ESs, or SA as the main strategy.

One could start again with Powell's rule to look for further restart points in the vicinity of the final solutions of his conjugate direction method (Chap. 3, Sect. 3.2.2.1) or with the restart rule of the simplex method according to Nelder and Mead (Chap. 3, Sect. 3.2.1.5), in order to interpret them in terms of some kind of diversification phase. But in general, both approaches cannot be classified as better ideas than starting a specific optimum seeking method from different initial solutions, simply comparing all the (maybe different) outcomes, and choosing the best one as the final solution. It might even be more promising to use different strategies from the same starting point and to select the overall best outcome again as a new start condition. On MIMD (multiple instructions, multiple data) parallel computers or nets of workstations the competition of different search methods could even be used to set up a knowledge base that adapts to a specific situation (e.g., Peters, 1989, 1991). Only individual conclusions for one or the other special application can be drawn from this kind of metastrategic approach, however.
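The metastrategy of letting several methods compete from a common start and re-starting from the overall best outcome can be sketched as follows; the function names and the fixed number of rounds are hypothetical and serve only to make the idea concrete.

```python
def compete_and_restart(strategies, f, x0, rounds=5):
    """Illustrative sketch: run several search strategies from a common
    starting point, take the overall best result, and use it as the new
    start condition for the next round.

    strategies -- assumed list of callables strategy(f, x) -> candidate solution
    f          -- objective function to be minimized
    """
    x = x0
    for _ in range(rounds):
        results = [strategy(f, x) for strategy in strategies]  # competition phase
        x = min(results, key=f)            # best outcome becomes the new start
    return x
```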

At the close of this general survey, only a few further hints will be given regarding the vast number of recent proposals.

Ablay (1987), for example, uses a basic search routine similar to Rechenberg's (1+1) ES and interrupts it more or less frequently by a pure random search in order to avoid premature stagnation as well as convergence to a non-global local optimum.
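A rough Python sketch of this idea, not taken from Ablay's publication, might look as follows; the mutation step size, the interruption probability, and the box bounds are illustrative assumptions.

```python
import random

def interrupted_one_plus_one_es(f, x0, sigma=0.1, p_random=0.05,
                                bounds=(-5.0, 5.0), max_iter=10000):
    """(1+1)-ES-like hill climber, occasionally interrupted by a pure
    random search step to escape non-global local optima (sketch only)."""
    x, fx = list(x0), f(x0)
    lo, hi = bounds
    for _ in range(max_iter):
        if random.random() < p_random:
            # pure random search: sample a fresh point anywhere in the box
            y = [random.uniform(lo, hi) for _ in x]
        else:
            # (1+1)-ES-like step: Gaussian mutation of the current point
            y = [xi + random.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        if fy <= fx:       # plus selection: keep the better of parent and offspring
            x, fx = y, fy
    return x, fx
```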

The replicator algorithm of Voigt (1989) also refers to organic evolution as a metaphor (see also Voigt, Mühlenbein, and Schwefel, 1990). Its modelling technique may be called descriptive, according to earlier work of Feistel and Ebeling (1989). Ebeling (1992) even proposes to incorporate ontogenetic learning features (so-called Haeckel strategy).

Mühlenbein and Schlierkamp-Voosen (1993a,b) proposed a so-called breeder GA, which
