
Random Strategies

parts is taken as an approximation to the global optimum. However, for n-dimensional interpolations the cost increases rapidly with n; this scheme thus looks impractical for more than two variables. Working with several randomly chosen starting points and comparing the local minima (or maxima) obtained is usually regarded as the only course of action for determining the global optimum with at least a certain probability (so-called multistart techniques). Proposals along these lines have been made by, among others, Gelfand and Tsetlin (1961), Bromberg (1962), Bocharov and Feldbaum (1962), Zellnik, Sondak, and Davis (1962), Krasovskii (1962), Gurin and Lobac (1963), Flood and Leon (1964, 1966), Kwakernaak (1965), Casey and Rustay (1966), Weisman and Wood (1966), Pugh (1966), McGhee (1967), Crippen and Scheraga (1971), and Brent (1973).
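The multistart idea can be stated in a few lines: start an ordinary local search from each of several randomly drawn points and keep the best of the local optima found. The following minimal sketch in Python illustrates this; the quadratic-plus-cosine test function, the search domain, and the use of scipy.optimize.minimize as the local strategy are assumptions made for the example, not details taken from the works cited above.

```python
import numpy as np
from scipy.optimize import minimize

def multistart_minimize(f, bounds, n_starts=20, seed=0):
    """Run a local minimizer from several random starting points and
    return the best local minimum found (multistart technique)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T   # bounds: one (low, high) pair per variable
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)               # random starting point in the box
        res = minimize(f, x0)                  # any local strategy could be substituted here
        if best is None or res.fun < best.fun:
            best = res
    return best

# Illustrative multimodal test function (an assumption for demonstration only)
f = lambda x: np.sum(x**2) + 3.0 * np.sum(np.cos(3.0 * x))
result = multistart_minimize(f, bounds=[(-5, 5), (-5, 5)])
print(result.x, result.fun)
```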

A further problem faces deterministic strategies if the calculated or measured values of the objective function are subject to stochastic perturbations. In the experimental field, for example in the on-line optimum search or in the control of optimal conditions in processes, perturbations must be taken into account from the start (e.g., Tovstucha, 1960; Feldbaum, 1960, 1962; Krasovskii, 1963; Medvedev, 1963, 1968; Kwakernaak, 1966; Zypkin, 1967). However, in computational optimization too, where the objective function is specified analytically, a similar effect arises because of rounding errors (Brent, 1973), especially if hybrid analogue computers are used for solving functional optimization problems (e.g., Gilbert, 1967; Korn and Korn, 1964; Bekey and Karplus, 1971). A simple, if expensive (in the sense of the cost in computations or trials), method of dealing with this is to repeat measurements until a definite conclusion is possible. This is the procedure adopted by Box and Wilson (1951) in the experimental gradient method, and by Box (1957) in his EVOP strategy. Instead of a fixed number of repetitions, which, while on the safe side, may be unnecessarily high, one can follow the concept of sequential analysis of statistical data (Wald, 1966; see also Zigangirov, 1965; Schumer, 1969; Kivelidi and Khurgin, 1970; Langguth, 1972), i.e., make only as many trials as the trial results seem to make absolutely necessary. More detailed investigations on this subject have been made, for example, by Mlynski (1964a,b, 1966a,b).
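The sequential principle can be illustrated as follows: when two candidate points are compared under noise, further repeated measurements are taken only as long as the observed difference in sample means remains within its estimated uncertainty. The sketch below is a simplified illustration of this idea, not a reproduction of Wald's sequential analysis; the noise model, the confidence factor, and the trial limit are assumptions.

```python
import numpy as np

def sequential_compare(measure, x_a, x_b, factor=2.0, min_trials=2, max_trials=100):
    """Repeat noisy measurements at two points only as long as needed to decide
    which objective value is lower (simplified sequential test, not Wald's SPRT)."""
    a, b = [], []
    for _ in range(max_trials):
        a.append(measure(x_a))
        b.append(measure(x_b))
        if len(a) < min_trials:
            continue
        diff = np.mean(a) - np.mean(b)
        # standard error of the difference between the two sample means
        se = np.sqrt(np.var(a, ddof=1) / len(a) + np.var(b, ddof=1) / len(b))
        if abs(diff) > factor * se:      # difference clearly exceeds its uncertainty
            return ("a" if diff < 0 else "b"), len(a)
    return ("a" if np.mean(a) < np.mean(b) else "b"), len(a)

# Illustrative noisy objective (assumed): quadratic plus Gaussian perturbation
rng = np.random.default_rng(1)
noisy_f = lambda x: float(np.sum(np.asarray(x)**2) + rng.normal(0.0, 0.1))
winner, trials = sequential_compare(noisy_f, [0.1, 0.0], [0.5, 0.5])
print(winner, trials)
```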

As opposed to attempting to improve the decisive data, Brooks and Mickey (1961) have found that one should work with the minimum number of n + 1 comparison points in order to determine a gradient direction, even if this is a perturbed one. One must, however, depart from the requirement that each step should yield a success, or even the locally greatest success. The motto that following locally the best possible route seldom leads to the best overall result holds not only for first order gradient strategies but also for Newton and quasi-Newton methods. Harkins (1964), for example, maintains that inexact line searches not only do not worsen the convergence of a minimization procedure but in some cases actually improve it. Similar experiences led Davies, Swann, and Campey, in their strategy (see Chap. 3, Sect. 3.2.1.4), to make only one quadratic interpolation in each direction. Also Spendley, Hext, and Himsworth (1962), in the formulation of their simplex method, which generates only near-optimal directions, work on the assumption that random decisions are not necessarily a total disadvantage (see also Himsworth, 1962).

Based on similar arguments, the modification of this strategy by M. J. Box (1965) sets up the initial simplex or complex by means of random numbers (a minimal sketch of such an initialization is given below). Imamura et al. (1970) even go so far as to superimpose artificial stochastic variations on an objective function.
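As a brief illustration of the random initialization referred to above, the following sketch draws the coordinates of an initial complex uniformly within given bounds and rejects points that violate an optional feasibility test. The default of 2n vertices and the simple rejection scheme are illustrative assumptions for the example, not details taken from Box (1965).

```python
import numpy as np

def random_initial_complex(bounds, n_points=None, feasible=lambda x: True, seed=0):
    """Set up an initial complex of points with random coordinates inside the
    given box, rejecting points that fail an (optional) feasibility test."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    n = len(bounds)
    k = n_points or 2 * n                    # 2n vertices as an assumed default
    points = []
    while len(points) < k:
        x = rng.uniform(lo, hi)              # uniformly random point in the box
        if feasible(x):
            points.append(x)
    return np.array(points)

# Example: five variables, box [-1, 1]^5, with one assumed inequality constraint
bounds = [(-1.0, 1.0)] * 5
complex_points = random_initial_complex(bounds, feasible=lambda x: np.sum(x) <= 2.0)
print(complex_points.shape)
```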
