
Random Strategies 99

works entirely with a restricted choice of directions. With a fixed step length, a direction can be randomly selected only from within an n-dimensional hypercone. The angle subtended by the cone and its height (and thus the overall step length) are controlled in an adaptive way. For a spherical objective function, e.g., the model functions F2 (hypercone), F3 (hypersphere), or F4 (something intermediate between hypersphere and hypercube), there is no improvement in the convergence behavior. Advantages can only be gained if the search has to follow a particular direction for a long time along a narrow valley. Sudden changes in direction present a problem, however, which leads Heydt to consider substituting for the cone configuration a hyper-parabolic or hyper-hyperbolic distribution, with which at least small step lengths would retain sufficient freedom of direction.
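The basic operation of such a scheme, drawing a random direction from within a hypercone around a preferred axis, can be sketched as follows. This is a schematic illustration of the idea only, not Heydt's original formulation; the helper name and the uniform choice of the deviation angle are assumptions (a density uniform on the spherical cap would weight the angle differently).

```python
import numpy as np

def sample_in_hypercone(axis, half_angle, rng):
    """Draw a random unit direction within a hypercone of the given
    half-angle around `axis` (hypothetical helper; a sketch of the
    restricted-direction idea, not Heydt's exact procedure)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    # Deviation angle from the cone axis, here simply uniform in
    # [0, half_angle] (an assumed simplification).
    phi = rng.uniform(0.0, half_angle)
    # Random unit vector orthogonal to the axis.
    v = rng.standard_normal(axis.size)
    v -= (v @ axis) * axis
    v /= np.linalg.norm(v)
    # Combine axial and orthogonal components into a unit direction.
    return np.cos(phi) * axis + np.sin(phi) * v
```

Adapting the search then amounts to widening or narrowing `half_angle` and rescaling the step length according to the observed success of trials.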

In every case the striving for rapid convergence is directly opposed to the reliability of global convergence. This has led Jarvis (1968, 1970) to investigate a combination of the method of Matyas (1965, 1967) with that of McMurtry and Fu (1966). Numerical tests by Cockrell (1969, 1970; see also Fu and Cockrell, 1970) show that even here the basic strategy of Matyas (1965) or Schumer and Steiglitz (1967) is clearly the better alternative. It offers high convergence rates besides a fair chance of locating global optima, at least for a small number of variables. In the case of many dimensions, every attempt to reach global reliability is thwarted by the excessive cost. This leaves the globally convergent stochastic approximation method of Vaysbord and Yudin (1968) far behind the rest of the field. Furthermore, the sequential or creeping random search is the least susceptible if perturbations act on the objective function.
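The sequential or creeping random search referred to here can be sketched in a minimal form: perturb the current point by a random step and accept the trial only if it improves the objective. This is a generic illustration under assumed Gaussian steps and a fixed step size, not the exact algorithm of any of the authors cited; step-size adaptation is omitted.

```python
import numpy as np

def creeping_random_search(f, x0, sigma=0.1, max_evals=1000, seed=None):
    """Minimal sequential (creeping) random search for minimization:
    Gaussian perturbation of the current point, accepting only
    improvements (a sketch, with step-size control omitted)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_evals):
        trial = x + sigma * rng.standard_normal(x.size)
        ft = f(trial)
        if ft < fx:          # keep the trial only if it improves f
            x, fx = trial, ft
    return x, fx
```

Because each trial is compared only against the current point, a perturbed objective merely causes occasional wrong accept/reject decisions rather than a systematic failure, which is one reason for the robustness noted above.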

Users of random strategies always draw attention to their simplicity, flexibility, and resistance to perturbations. These properties are especially important if one wishes to construct automatic optimalizers (e.g., Feldbaum, 1958; Herschel, 1961; Medvedev and Ruban, 1967; Krasnushkin, 1970). Rastrigin actually built the first optimalizer with a random search strategy, which was designed for automatic frequency control of an electric motor. Mitchell (1964) describes an extreme value controller that consists of an analogue computer with a permanently wired-in digital part. The digital part serves for storage and flow control, while the analogue part evaluates the objective function. The development of hybrid analogue computers, in which the computational inaccuracy is determined by the system, has helped to bring random methods, especially of the sequential type, into more general use. For examples of applications besides those of the authors mentioned above, the following publications can be referred to: Meissinger (1964), Meissinger and Bekey (1966), Kavanaugh, Stewart, and Brocker (1968), Korn and Kosako (1970), Johannsen (1970, 1973), and Chatterji and Chatterjee (1971). Hybrid computers can be applied to best advantage for problems of optimal control and parameter identification, because they are able to carry out integrations and differentiations more rapidly than digital computers.

Mutseniyeks and Rastrigin (1964) have devised a special algorithm for the dynamic control problem of keeping an optimum. Instead of the variable position vector x, a velocity vector with components ∂x_i/∂t is varied. A randomly chosen combination is retained as long as the objective function is decreasing in value (for minimization ∂F/∂t < 0). As soon as it begins to increase again, a new velocity vector is chosen at random.
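In a discrete-time form, the velocity-vector idea might look as follows. This is a schematic reconstruction from the description above, not the authors' exact algorithm; the function name, the fixed speed, and the Euler-style time stepping are all assumptions.

```python
import numpy as np

def velocity_random_search(f, x0, speed=0.5, dt=0.05, steps=2000, seed=None):
    """Sketch of the velocity-vector scheme described by Mutseniyeks and
    Rastrigin: move with a constant random velocity while the objective
    decreases, and draw a new random velocity as soon as it rises
    (a schematic reconstruction, not the original algorithm)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)

    def random_velocity():
        # Random direction on the unit sphere, scaled to a fixed speed.
        v = rng.standard_normal(x.size)
        return speed * v / np.linalg.norm(v)

    v = random_velocity()
    f_prev = f(x)
    for _ in range(steps):
        x = x + v * dt            # follow the current velocity vector
        f_new = f(x)
        if f_new >= f_prev:       # dF/dt no longer negative: redraw v
            v = random_velocity()
        f_prev = f_new
    return x, f_prev
```

For a slowly drifting optimum, the same loop tracks it: as long as the chosen velocity keeps ∂F/∂t negative it is retained, so the search point follows the moving minimum without ever measuring a gradient.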

It is always striking, if one observes living beings, how well adapted they are in shape, function, and lifestyle. In many cases, biological structures, processes, and systems even
