Evolution and Optimum Seeking

94 Random Strategies

knowledge of the approximate position of the optimum, the subspaces will be assigned different sizes (Idelsohn, 1964). The original uniform distribution is thereby replaced by one with a greater density in the neighborhood of the expected optimum. Karnopp (1961, 1963, 1966) has treated this problem in detail without, however, giving any practical procedure. Mathematically based investigations of the same topic are due to Motskus (1965), Hupfer (1970), Pluznikov, Andreyev, and Klimenko (1971), Yudin (1965, 1966, 1972), Vaysbord (1967, 1968, 1969), Taran (1968a,b), Karumidze (1969), and Meerkov (1972). If after several (simultaneous) samples the search is continued in an especially promising-looking subregion, the procedure becomes sequential in character. Suggestions of this kind have been made, for example, by McArthur (1961), Motskus (1965), and Hupfer (1970) (shrinkage random search). Zakharov (1969, 1970) applies stochastic approximation to the successive shrinkage of the region in which Monte Carlo samples are placed.

The most thoroughly worked out strategy is that of McMurtry and Fu (1966, probabilistic automaton; see also McMurtry, 1965). The problem considered is to adjust the variable parameters of a control system for a dynamic process in such a way that the optimum of the system is found and maintained despite perturbations and (slow) drift (Hill, McMurtry, and Fu, 1964; Hill and Fu, 1965). Initially the probabilities are equal for all subregions, at the centers of which the function values are measured (assumed to be stochastically perturbed). In the course of the iterations the probability matrix is altered so that regions with better objective function values are tested more often than others. The search ends when only one subregion remains: the one with the highest probability of containing the global optimum. McMurtry and Fu use a so-called linear intensification to adjust the probability matrix. Suggestions for further improving the convergence rate have been made by Nikolic and Fu (1966), Fu and Nikolic (1966), Shapiro and Narendra (1969), Asai and Kitajima (1972), Viswanathan and Narendra (1972), and Witten (1972). Strongin (1970, 1971) treats the same problem from the point of view of decision theory.
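The probability-reinforcement idea just described can be sketched in a few lines. The following is a minimal illustration, not the published algorithm: the linear reward rule, the parameter names, and the stopping threshold are our assumptions, and the noisy measurements are replaced by direct function evaluations at the subregion centers.

```python
import random

def reinforced_region_search(f, centers, alpha=0.02, threshold=0.95,
                             max_trials=10_000):
    """Sample subregion centers with adaptive probabilities; minimize f.

    Sketch of a probability-reinforced subregion search in the spirit of
    the scheme above. `alpha` (reward step) and `threshold` (dominance
    level that ends the search) are illustrative choices.
    """
    n = len(centers)
    p = [1.0 / n] * n                       # equal initial probabilities
    best_i, best_val = 0, float("inf")
    for _ in range(max_trials):
        i = random.choices(range(n), weights=p)[0]   # draw a subregion
        val = f(centers[i])                 # measure at its center
        if val < best_val:
            best_i, best_val = i, val
        # linear reinforcement: shift probability mass toward the region
        # currently holding the best observed value
        p = [(1.0 - alpha) * q for q in p]
        p[best_i] += alpha
        if p[best_i] >= threshold:          # one region dominates: stop
            break
    return centers[best_i], best_val
```

With a unimodal objective the probability vector concentrates on the subregion containing the optimum; with a perturbed objective the slow, averaging character of the linear rule is what provides robustness.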

All these methods lay great emphasis on the reliability of global convergence. The quality of the approximation depends to a large extent on the number of subdivisions of the n-dimensional region under investigation. High accuracy requirements cannot be met for many variables since, at least initially, the number of subregions to investigate rises exponentially with the number of parameters. To improve the local convergence properties, there are suggestions for replacing the midpoint tests in a subvolume by the result of an extreme value search. This could be done with one of the familiar search strategies, such as a gradient method (Hill, 1969) or any other purely sequential random search method (Jarvis, 1968, 1970) with a high convergence rate, even if it were only guaranteed to converge locally. Application, however, is limited to problems with at most seven or eight variables, as reported.
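The exponential growth mentioned above is easy to quantify: with k subdivisions along each of n coordinate axes, the region splits into k^n subregions. A minimal illustration (the function name is ours):

```python
def subregion_count(k: int, n: int) -> int:
    # k subdivisions per coordinate axis in an n-dimensional region
    # yield k**n subregions to investigate.
    return k ** n

# Even a coarse grid of 10 subdivisions per axis explodes with dimension:
for n in (2, 4, 8):
    print(f"n = {n}: {subregion_count(10, n):,} subregions")
    # 100, 10,000, and 100,000,000 subregions respectively
```

This is why the midpoint-testing schemes are reported to be practical only up to seven or eight variables.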

Another possibility for giving a sequential character to random methods consists of gradually shifting the expectation value of a random variable with a restricted probability density distribution. Brooks (1958) calls his proposal of this type the creeping random search. Suitable random numbers are provided, for example, by a Gaussian distribution with given expectation value and standard deviation. Starting from a chosen initial condition x^(0), several simultaneous trials are made, which most likely fall in the neighborhood of the starting point (the expectation value being set to x^(0)). The coordinates of the point with the best function value form
