
Multidimensional Strategies 65

evaluation (lattice search, factorial design), in which the task is to set up mathematical models of physical or other processes. This territory is entered, for example, by G. E. P. Box (1957), Box and Wilson (1951), Box and Hunter (1957), Box and Behnken (1960), Box and Draper (1969, 1987), Box et al. (1973), and Beveridge and Schechter (1970). It will not be covered in any more detail here.

3.2.2 Gradient Strategies

The Gauss-Seidel strategy very straightforwardly uses only directions parallel to the coordinate axes to successively improve the objective function value. All other direct search methods strive to advance more rapidly by taking steps in other directions. To do so they exploit the knowledge about the topology of the objective function gleaned from the successes and failures of previous iterations. Directions in which the objective function decreases rapidly (for minimization) or increases rapidly (for maximization) are viewed as most promising. Southwell (1946), for example, improves the relaxation by choosing the coordinate directions not cyclically, but in order of the size of the local gradient components along them.

If the restriction to axis-parallel directions is removed, the locally best direction is given by the (negative) gradient vector

\[ \nabla F(x) = \left( F_{x_1}(x), F_{x_2}(x), \ldots, F_{x_n}(x) \right)^T \]

with

\[ F_{x_i}(x) = \frac{\partial F}{\partial x_i}(x) \quad \text{for all } i = 1(1)n \]

at the point x^{(0)}. All hill climbing procedures that orient their choice of search directions v^{(0)} according to the first partial derivatives of the objective function are called gradient strategies. They can be thought of as analogues of the total step procedure of Jacobi for solving systems of linear equations (see Schwarz, Rutishauser, and Stiefel, 1968).
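Where analytic derivatives are not at hand, the gradient components F_{x_i}(x) can be approximated numerically. The following sketch uses central differences; the function name, test objective, and step size h are illustrative assumptions, not from the text:

```python
import numpy as np

def numerical_gradient(F, x, h=1e-6):
    """Approximate the gradient of F at x by central differences.

    Each component F_{x_i}(x) = dF/dx_i is estimated as
    (F(x + h*e_i) - F(x - h*e_i)) / (2h).
    """
    x = np.asarray(x, dtype=float)
    grad = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h  # unit step along coordinate axis i
        grad[i] = (F(x + e) - F(x - e)) / (2.0 * h)
    return grad

# Example: F(x) = x1^2 + 2*x2^2 has gradient (2*x1, 4*x2).
F = lambda x: x[0]**2 + 2.0 * x[1]**2
g = numerical_gradient(F, [1.0, 1.0])  # approximately [2.0, 4.0]
```

Central differences cost two function evaluations per dimension but are accurate to order h^2, which matters when the gradient is used to pick a search direction.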

So great is the number of methods of this type that have been suggested or applied up to the present that merely to list them all would be difficult. The reason lies in the fact that the gradient represents a local property of a function. To follow the path of the gradient exactly would mean determining, in general, a curved trajectory in the n-dimensional space. This problem is only approximately soluble numerically and is more difficult than the original optimization problem. With the help of analogue computers, continuous gradient methods have actually been implemented (Bekey and McGhee, 1964; Levine, 1964). They consider the trajectory x(t) as a function of time and obtain it as the solution of a system of first order differential equations.
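On a digital machine, the continuous trajectory x(t) defined by the system dx/dt = -grad F(x(t)) can only be followed approximately. A minimal sketch using explicit Euler integration (the step size, iteration count, and quadratic test function are assumptions for illustration):

```python
import numpy as np

def gradient_trajectory(grad_F, x0, dt=0.01, steps=1000):
    """Follow dx/dt = -grad F(x) by explicit Euler integration.

    Returns the end point of the (approximate) gradient trajectory.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - dt * grad_F(x)  # one Euler step along the negative gradient
    return x

# For F(x) = x1^2 + x2^2 (gradient 2x) the trajectory runs
# toward the minimum at the origin.
grad_F = lambda x: 2.0 * x
x_end = gradient_trajectory(grad_F, [1.0, -1.0])
```

This illustrates the point made above: accurately following the curved trajectory demands many small steps, which is why the discrete variants discussed next trade exactness for larger steps.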

All the numerical variants of the gradient method differ in the lengths of the discrete steps and thereby also with regard to how exactly they follow the gradient trajectory. The iteration rule is generally

\[ x^{(k+1)} = x^{(k)} - s^{(k)} \, \frac{\nabla F(x^{(k)})}{\left\| \nabla F(x^{(k)}) \right\|} \]

It assumes that the partial derivatives exist everywhere and are unique. If F(x) is continuously differentiable, then the partial derivatives exist and F(x) is continuous.
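The iteration rule can be sketched as follows; here the step length s^{(k)} is simply held constant, and the test objective and stopping rule are illustrative assumptions (the numerical variants mentioned above differ precisely in how s^{(k)} is chosen):

```python
import numpy as np

def gradient_strategy(grad_F, x0, s=0.1, iterations=200):
    """Iterate x_{k+1} = x_k - s * grad F(x_k) / ||grad F(x_k)||.

    The gradient is normalized, so s is the Euclidean length
    of each discrete step.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        g = grad_F(x)
        norm = np.linalg.norm(g)
        if norm == 0.0:  # stationary point: gradient direction undefined
            break
        x = x - s * g / norm
    return x

# Minimize F(x) = ||x||^2 (gradient 2x); the normalized steps
# move straight toward the minimum at the origin.
x_min = gradient_strategy(lambda x: 2.0 * x, [3.0, 4.0])
```

Because the step length is fixed, the iterates can only approach the minimum to within about s and then oscillate, which is why practical variants shrink or adapt s^{(k)} as the search proceeds.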
