Evolution and Optimum Seeking

Random Strategies

in order to prevent convergence to inferior local optima.

The rigidity of an algorithm based on a fixed internal model of the objective function, with which the information gathered during the iterations is interpreted, is advantageous if the objective function corresponds closely enough to the model. If this is not the case, the advantage disappears and may even turn into a disadvantage. Second order methods with quadratic models seem more sensitive in this respect than first order methods with only linear models. Even more robust are the direct search strategies that work without an explicit model, such as the strategy of Hooke and Jeeves (1961). It makes no use of the sizes of the changes in the objective function values, but only of their signs.
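The comparison-only character of such a direct search can be sketched as follows. This is an illustrative reconstruction of the Hooke-Jeeves pattern search, not the authors' original code; the function names, the step-halving schedule, and the termination tolerance are assumptions made for the sketch:

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    """Illustrative pattern search after Hooke and Jeeves (1961).

    Only comparisons of objective values are used (their signs of
    change), never the magnitudes of the changes.
    """
    def explore(base, s):
        # try a step of +s and -s along each coordinate axis in turn
        x = list(base)
        for i in range(len(x)):
            for d in (s, -s):
                trial = list(x)
                trial[i] += d
                if f(trial) < f(x):  # only the sign of the change is used
                    x = trial
                    break
        return x

    x = list(x0)
    while step > tol:
        new = explore(x, step)
        if f(new) < f(x):
            # pattern move: extrapolate the successful direction, then re-explore
            candidate = explore([2 * n - o for n, o in zip(new, x)], step)
            x = candidate if f(candidate) < f(new) else new
        else:
            step *= shrink  # no axis gave an improvement: reduce the step width
    return x
```

On a smooth unimodal function the pattern move lets the search accelerate along a valley, while the step reduction on failure provides the termination criterion.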

A method that uses a kind of minimal model of the objective function is the stochastic approximation (Schmetterer, 1961; see also Chap. 2, Sect. 2.3). This purely deterministic method assumes that the measured or calculated function values are samples of a normally distributed random quantity, of which the expectation value is to be minimized or maximized. The method feels its way to the optimum with alternating exploratory and work steps, whose lengths form convergent series with prescribed bounds and sums. In the multidimensional case this standard concept can be the basis of various strategies for choosing the directions of the work steps (Fabian, 1968). Usually gradient methods show themselves to best advantage here. The stochastic approximation itself is very versatile.
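The alternation of exploratory and work steps with prescribed step-length series can be sketched for one dimension along the lines of the classical Kiefer-Wolfowitz scheme. The schedules a_k = a/k and c_k = c/k^(1/3) are one common choice satisfying the convergence conditions (sum of a_k divergent, sums of a_k*c_k and of a_k^2/c_k^2 convergent); all names and defaults here are illustrative, not taken from the cited works:

```python
import random


def kiefer_wolfowitz(sample_f, x0, a=1.0, c=1.0, iterations=2000):
    """One-dimensional stochastic approximation sketch.

    sample_f(x) returns a noisy sample of the objective whose
    expectation is to be minimized.  Work-step lengths a_k = a/k
    and exploratory-step widths c_k = c/k**(1/3) form the
    prescribed convergent series.
    """
    x = x0
    for k in range(1, iterations + 1):
        a_k = a / k
        c_k = c / k ** (1 / 3)
        # exploratory steps: two noisy measurements around x
        slope = (sample_f(x + c_k) - sample_f(x - c_k)) / (2 * c_k)
        # work step against the estimated slope
        x -= a_k * slope
    return x
```

For example, minimizing the expectation of (x - 2)^2 observed with Gaussian noise drives x toward 2 despite the perturbed measurements.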

Constraints can be taken into account (Kaplinskii and Propoi, 1970), and problems of functional optimization can be treated (Gersht and Kaplinskii, 1971) as well as dynamic problems of maintaining or seeking optima (Chang, 1968). Tsypkin (1968a,b,c, 1970a,b; see also Zypkin, 1966, 1967, 1970) discusses these topics very thoroughly. There are also, however, arguments against the reliability of convergence for certain types of objective function (Aizerman, Braverman, and Rozonoer, 1965). The usefulness of the strategy in the multidimensional case is limited by its high cost. Hence there has been no shortage of attempts to accelerate the convergence (Fabian, 1967; Berlin, 1969; Saridis, 1968, 1970; Saridis and Gilbert, 1970; Janac, 1971; Kwatny, 1972; see also Chap. 2, Sect. 2.3). Ideas for using random directions look especially promising; some of the many investigations of this topic which have been published are Loginov (1966), Stratonovich (1968, 1970), Schmitt (1969), Ermoliev (1970), Svechinskii (1971), Tsypkin (1971), Antonov and Katkovnik (1972), Berlin (1972), Katkovnik and Kulchitskii (1972), Kulchitskii (1972), Poznyak (1972), and Tsypkin and Poznyak (1972).
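The appeal of random directions is that a single pair of exploratory measurements per iteration replaces the n coordinate-wise test steps of the multidimensional scheme. The following sketch illustrates the idea only; it is not any of the cited authors' algorithms, and the Gaussian direction choice and step schedules are assumptions:

```python
import math
import random


def random_direction_sa(sample_f, x0, a=1.0, c=1.0, iterations=4000):
    """Random-direction stochastic approximation sketch.

    Per iteration, one random unit direction and two noisy
    measurements along it supply the slope information for the
    work step, regardless of the dimension n.
    """
    x = list(x0)
    n = len(x)
    for k in range(1, iterations + 1):
        a_k = a / k
        c_k = c / k ** (1 / 3)
        # draw a random unit direction from an isotropic distribution
        r = [random.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(v * v for v in r))
        r = [v / norm for v in r]
        # exploratory steps: two noisy measurements along the direction
        plus = sample_f([xi + c_k * ri for xi, ri in zip(x, r)])
        minus = sample_f([xi - c_k * ri for xi, ri in zip(x, r)])
        slope = (plus - minus) / (2 * c_k)
        # work step against the estimated directional derivative
        x = [xi - a_k * slope * ri for xi, ri in zip(x, r)]
    return x
```

Since the expected projection of the gradient onto a random unit direction recovers the gradient up to a factor 1/n, the iterate still drifts toward the optimum, at two measurements per step instead of 2n.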

The original method is not able to determine global extrema reliably. Extensions of the strategy in this direction are due to Kushner (1963, 1972) and Vaysbord and Yudin (1968). The sequence of work steps is so designed that the probability of the following state being the global optimum is maximized. In contrast to the gradient concept, the information gathered is not interpreted in terms of local but of global properties of the objective function. In the case of two local minima, the effort of the search is gradually concentrated in their neighborhood, and only when one of them is significantly better is the other abandoned in favor of the one that is also a global minimum. In terms of the cost of the strategy, the acceleration of the local search and the reliability of the global search are diametrically opposed. Hill and Gibson (1965) show that their global strategy is superior to Kushner's, as well as to one of Bocharov and Feldbaum. However, they only treat cases with n ≤ 2 parameters. More recent research results have been presented by
