
competition in the d∗···∗ partition in Grefenstette and Baker's example will not follow a two-armed-bandit strategy.

The general idea here is that, roughly, the multi-armed-bandit strategy for competition among schemas proceeds from low-order partitions at early times to higher-order partitions at later times. The extent to which this describes the workings of actual GAs is not clear. Grefenstette (1991b) gives some further caveats concerning this description of GA behavior. In particular, he points out that, at each new generation, selection most likely produces a new set of biases in the way schemas are sampled (as in the fitness function given by equation 4.5 above). Since the Schema Theorem gives the dynamics of schema sampling over only one time step, it is difficult (if not impossible), without making unrealistic assumptions, to make any long-range prediction about how the allocation of trials to different schemas will proceed. Because of the biases introduced by selection, the static average fitnesses of schemas will not necessarily be correlated in any useful way with their observed average fitnesses.
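To make this last point concrete, the following sketch (a hypothetical illustration, not from the text; the toy fitness function and the schema H = 1∗∗∗∗∗ are my own choices) compares a schema's static average fitness, computed over every string the schema contains, with its observed average fitness among the schema's members that survive several rounds of fitness-proportional selection:

```python
import itertools
import random

L = 6  # toy string length, small enough to enumerate the whole space

def fitness(s):
    # Assumed toy fitness: mostly counts 0s, but the all-ones string
    # (which begins with 1) gets a large bonus.
    if s[0] == 1 and all(s):
        return 10.0
    return float(s.count(0))

def in_schema(s):
    # Membership in the schema H = 1***** (first bit defined as 1).
    return s[0] == 1

# Static average fitness of H: the mean over *all* strings in H.
members = [list(s) for s in itertools.product([0, 1], repeat=L)
           if s[0] == 1]
static_avg = sum(fitness(s) for s in members) / len(members)

# Observed average fitness of H: the mean over H's members in a
# population skewed by repeated fitness-proportional selection.
random.seed(0)
pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(100)]
for _ in range(10):
    weights = [fitness(s) for s in pop]
    pop = random.choices(pop, weights=weights, k=len(pop))

sample = [s for s in pop if in_schema(s)]
observed_avg = (sum(fitness(s) for s in sample) / len(sample)
                if sample else 0.0)
print(f"static {static_avg:.2f}  observed {observed_avg:.2f}")
```

Because selection keeps only the zeros-rich members of H, the observed average typically drifts well away from the static average, which is the bias Grefenstette points out.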

Chapter 4: Theoretical Foundations of Genetic Algorithms

Implications for GA Performance

In view of the solution to the Two-Armed Bandit problem, the exponential allocation of trials is clearly an appropriate strategy for problems in which "on-line" performance is important, such as real-time control problems. Less clear is its appropriateness for problems with other performance criteria, such as "Is the optimal individual reliably discovered in a reasonable time?" There have been many cases in the GA literature in which such criteria have been applied, sometimes leading to the conclusion that GAs don't work. This demonstrates the importance of understanding what a particular algorithm is good at doing before applying it to particular problems. This point was made eloquently in the context of GAs by De Jong (1993).
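The exponential allocation of trials can be sketched as a small simulation (a hypothetical illustration, not Holland's derivation: the payoff probabilities, the exploration constant C, and the rule capping the observed-worse arm at roughly C·ln(n) trials are all assumptions made for this example). The observed-worse arm receives only logarithmically many trials, so the observed-better arm's share of trials grows exponentially:

```python
import math
import random

random.seed(1)
true_means = [0.3, 0.7]   # unknown to the gambler
counts = [0, 0]           # trials given to each arm
totals = [0.0, 0.0]       # payoff accumulated on each arm

def pull(arm):
    # One Bernoulli trial on the chosen arm.
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    totals[arm] += reward

for arm in (0, 1):        # one exploratory pull on each arm
    pull(arm)

C = 2.0                   # assumed exploration constant
for n in range(2, 1000):
    means = [totals[a] / counts[a] for a in (0, 1)]
    worse = 0 if means[0] <= means[1] else 1
    # Give the observed-worse arm a trial only while its trial count
    # lags C * ln(n); otherwise exploit the observed-better arm.
    if counts[worse] < C * math.log(n):
        pull(worse)
    else:
        pull(1 - worse)

print(counts)  # the observed-better arm ends up with the vast majority
```

Since the worse-looking arm's trials grow only like ln(n), the gambler both accumulates payoff and keeps buying just enough information to notice if her ranking of the arms is wrong.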

What is maximizing on-line performance good for? Holland's view is clear: the rough maximization of on-line performance is what goes on in adaptive systems of all kinds, and in some sense this is how "adaptation" is defined. In the realm of technology, on-line performance is important in control problems (e.g., automatically controlling machinery) and in learning problems (e.g., learning to navigate in an environment) in which each system action can lead to a gain or loss. It is also important in prediction tasks (e.g., predicting financial markets) in which each prediction brings a gain or a loss.

Most of the GA applications I discussed in chapter 2 had a slightly different performance criterion: to find a "reasonably good" (if not perfect) solution in a reasonable amount of time. In other words, the goal is to satisfice (find a solution that is good enough for one's purposes) rather than to optimize (find the best possible solution). This goal is related to maximizing on-line performance, since on-line performance will be maximized if high-fitness individuals are likely to be chosen at each step, including the last. The two-armed-bandit allocation strategy maximizes both cumulative payoff and the amount of information the gambler gets for her money as to which is the better arm. In the context of schemas, the exponential allocation strategy could be said to maximize, for a given amount of time (samples), the amount of information gained about which part of the search space is likely to provide good solutions if sampled in the future. Gaining such information is crucial to successful satisficing. For true optimization, hybrid methods such as a GA augmented by a hill climber or other kinds of gradient search have often been found to perform better than a GA alone (see, e.g., Hart and Belew 1996). GAs seem to be good at quickly finding promising regions of the search space, and hill climbers are often good at zeroing in on optimal individuals in those regions.
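That division of labor can be sketched as a minimal hybrid (an illustrative toy, not a method from the text: the "one-max" fitness function, the parameter values, and the operators are all assumed for the example). A short GA run locates a good region, then a bit-flip hill climber pushes the best individual found to the optimum:

```python
import random

L, POP, GENS = 20, 30, 25
random.seed(42)

def fitness(s):
    return sum(s)  # one-max: count of 1 bits

def crossover(a, b):
    cut = random.randrange(1, L)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(s, rate=1.0 / L):
    return [bit ^ 1 if random.random() < rate else bit for bit in s]

# --- GA phase: fitness-proportional selection, crossover, mutation ---
pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for _ in range(GENS):
    weights = [fitness(s) + 1 for s in pop]   # +1 avoids all-zero weights
    parents = random.choices(pop, weights=weights, k=2 * POP)
    pop = [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
           for i in range(POP)]

best = max(pop, key=fitness)

# --- Hill-climbing phase: flip single bits while fitness improves ---
improved = True
while improved:
    improved = False
    for i in range(L):
        trial = best[:]
        trial[i] ^= 1
        if fitness(trial) > fitness(best):
            best, improved = trial, True

print(fitness(best))  # on one-max the hill climber always reaches L (= 20)
```

On one-max the hill climber alone would also succeed; the point of the sketch is only the structure of the hybrid, with the GA supplying a good starting point and the local search finishing the job.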

Although the foregoing schema analysis has suggested the types of problems for which a genetic algorithm might be useful, it does not answer more detailed questions, such as "Given a certain version of the GA, what is the expected time to find a particular fitness level?" Some approaches to such questions will be described below.

