An Introduction to Genetic Algorithms - Boente

Figure 2.20: Montana and Davis's results comparing the performance of the GA with back-propagation. The figure plots the best evaluation (lower is better) found by a given iteration. Solid line: genetic algorithm. Broken line: back-propagation. (Reprinted from Proceedings of the International Joint Conference on Artificial Intelligence; © 1989 Morgan Kaufmann Publishers, Inc. Reprinted by permission of the publisher.)

This experiment shows that in some situations the GA is a better training method for networks than simple back-propagation. This does not mean that the GA will outperform back-propagation in all cases. It is also possible that enhancements of back-propagation might help it overcome some of the problems that prevented it from performing as well as the GA in this experiment. Schaffer, Whitley, and Eshelman (1992) point out that the GA has not been found to outperform the best weight-adjustment methods (e.g., "quickprop") on supervised learning tasks, but they predict that the GA will be most useful in finding weights in tasks where back-propagation and its relatives cannot be used, such as in unsupervised learning tasks, in which the error at each output unit is not available to the learning system, or in situations in which only sparse reinforcement is available. This is often the case for "neurocontrol" tasks, in which neural networks are used to control complicated systems such as robots navigating in unfamiliar environments.
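
The sparse-reinforcement setting described above is easy to illustrate with code. The sketch below is not Montana and Davis's encoding or their operators; it is a minimal, generic GA over the weights of an assumed toy 2-4-1 network, with XOR standing in for an environment that returns only a single scalar score per trial, and one-point crossover and Gaussian mutation chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy architecture: 2 inputs -> 4 hidden units -> 1 output.
SHAPES = [(2, 4), (4,), (4, 1), (1,)]                 # W1, b1, W2, b2
N_WEIGHTS = sum(int(np.prod(s)) for s in SHAPES)      # 17 weights in total

def unpack(chromosome):
    """Slice a flat real-valued chromosome into the network's weight arrays."""
    params, i = [], 0
    for shape in SHAPES:
        size = int(np.prod(shape))
        params.append(chromosome[i:i + size].reshape(shape))
        i += size
    return params

def forward(chromosome, x):
    W1, b1, W2, b2 = unpack(chromosome)
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

def reward(chromosome):
    """Scalar reinforcement only: the search sees a single score per trial,
    not an error signal at each output unit. XOR stands in for the task."""
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    return -float(np.mean((forward(chromosome, X) - y) ** 2))

POP, GENS, SIGMA = 50, 200, 0.1
pop = rng.normal(0.0, 1.0, size=(POP, N_WEIGHTS))

for gen in range(GENS):
    scores = np.array([reward(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:POP // 2]]   # keep the better half
    children = []
    while len(children) < POP:
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = int(rng.integers(1, N_WEIGHTS))             # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(0.0, SIGMA, N_WEIGHTS)        # Gaussian mutation
        children.append(child)
    pop = np.array(children)

print("best reward found:", max(reward(ind) for ind in pop))
```

Because the search sees only the scalar reward, nothing in this loop requires per-output error signals or gradients, which is exactly why this style of weight search remains applicable where back-propagation does not.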

Evolving Network Architectures

Montana and Davis's GA evolved the weights in a fixed network. As in most neural network applications, the architecture of the network—the number of units and their interconnections—is decided ahead of time by the programmer, by guesswork often aided by some heuristics (e.g., "more hidden units are required for more difficult problems") and by trial and error. Neural network researchers know all too well that the particular architecture chosen can determine the success or failure of the application, so they would like very much to be able to automatically optimize the procedure of designing an architecture for a particular application. Many believe that GAs are well suited for this task. There have been several efforts along these lines, most of which fall into one of two categories: direct encoding and grammatical encoding. Under direct encoding, a network architecture is directly encoded into a GA chromosome. Under grammatical encoding, the GA does not evolve network architectures; rather, it evolves grammars that can be used to develop network architectures.
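
To make the direct-encoding idea concrete, here is a minimal sketch; the six-unit network, the connection-matrix representation, and the helper names decode and encode are illustrative assumptions rather than any particular published scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy setting: 6 units in all; a directed connection may run from any
# unit to any other, so an architecture is a 6 x 6 binary connection matrix.
N_UNITS = 6
GENOME_LEN = N_UNITS * N_UNITS

def decode(chromosome):
    """Direct encoding: bit (i, j) of the chromosome says whether a
    connection from unit i to unit j is present."""
    return np.asarray(chromosome).reshape(N_UNITS, N_UNITS)

def encode(connection_matrix):
    """Inverse mapping: flatten a connection matrix back into a chromosome."""
    return np.asarray(connection_matrix).ravel()

# A random individual and the architecture it specifies.
chromosome = rng.integers(0, 2, size=GENOME_LEN)
print(decode(chromosome))

# In a full system, fitness would come from building this network, training it
# on the task, and measuring its performance; the GA would then apply ordinary
# crossover and bit-flip mutation to these flat chromosomes.
```

With this representation an architecture is just another bitstring, so standard GA operators apply unchanged; the cost is that the chromosome grows as the square of the number of units, which is one motivation for grammatical encoding.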
