

of factorial design (see for example Davies, 1954). The minimum number according to Brooks and Mickey (1961) is n + 1. Thus instead of a single starting point, n + 1 vertices are used. They are arranged so as to be equidistant from each other: for n = 2 in an equilateral triangle, for n = 3 a tetrahedron, and in general a polyhedron, also referred to as a simplex. The objective function is evaluated at all the vertices. The iteration rule is: Replace the vertex with the largest objective function value by a new one situated at its reflection in the midpoint of the other n vertices. This rule aims to locate the new point at an especially promising place. If one lands near a minimum, the newest vertex can also be the worst. In this case the second worst vertex should be reflected. If the edge length of the polyhedron is not changed, the search eventually stagnates. The polyhedra rotate about the vertex with the best objective function value. A closer approximation to the optimum can only be achieved by halving the edge lengths of the simplex. Spendley, Hext, and Himsworth suggest doing this whenever a vertex is common to more than 1.65n + 0.05n² consecutive polyhedra. Himsworth (1962) holds that this strategy is especially advantageous when the number of variables is large and the determination of the objective function is prone to error.
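The basic reflection rule translates directly into code. The following minimal sketch (in Python with NumPy; the function and variable names are illustrative, not from the original) stores the simplex as an array of n + 1 vertex rows and performs one iteration:

import numpy as np

def reflect_worst(simplex, F):
    # One iteration of the basic simplex rule: replace the worst
    # vertex by its reflection in the centroid of the other n.
    values = np.array([F(x) for x in simplex])
    w = np.argmax(values)                           # index of the worst vertex
    centroid = (simplex.sum(axis=0) - simplex[w]) / (len(simplex) - 1)
    simplex[w] = 2.0 * centroid - simplex[w]        # mirror image, same edge length
    return simplex

Because this rule preserves the edge length, applying it alone leads to the stagnation described above; the second-worst rule and the edge-halving criterion must be added on top of it.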

To this basic procedure, various modifications have been proposed by, among others, Nelder and Mead (1965), Box (1965), Ward, Nag, and Dixon (1969), and Dambrauskas (1970, 1972). Richardson and Kuester (1973) have provided a complete program. The most common version is that of Nelder and Mead, in which the main difference from the basic procedure is that the size and shape of the simplex are modified during the run to suit the conditions at each stage.
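This variant is widely available in standard numerical libraries; for instance, SciPy exposes it through scipy.optimize.minimize. A brief usage sketch (the test objective, starting point, and tolerances below are placeholders, not values from the original):

from scipy.optimize import minimize

def sphere(x):                      # simple test objective, minimum at the origin
    return sum(xi * xi for xi in x)

result = minimize(sphere, x0=[1.0, 2.0], method='Nelder-Mead',
                  options={'xatol': 1e-8, 'fatol': 1e-8})
print(result.x)                     # approximately [0, 0]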

The algorithm, with an extension by O'Neill (1971), runs as follows:

Step 0: (Initialization)
Choose a starting point x^(0,0), initial step lengths s_i^(0) for all i = 1(1)n (if no better scaling is known, s_i^(0) = 1), and an accuracy parameter ε > 0 (e.g., ε = 10^-8). Set c = 1 and k = 0.

Step 1: (Establish the initial simplex)
x^(k,ℓ) = x^(k,0) + c s_ℓ^(0) e_ℓ for all ℓ = 1(1)n, where e_ℓ is the ℓ-th unit vector.

Step 2: (Determine worst and best points for the normal reflection)
Determine the indices w (worst point) and b (best point) such that
F(x^(k,w)) = max{F(x^(k,ℓ)), ℓ = 0(1)n},
F(x^(k,b)) = min{F(x^(k,ℓ)), ℓ = 0(1)n}.
Construct x̄ = (1/n) Σ_{ℓ=0, ℓ≠w}^{n} x^(k,ℓ), the centroid of all vertices except the worst, and the reflected point x' = 2 x̄ - x^(k,w).
If F(x') < F(x^(k,ℓ)) holds for a number of indices ℓ = 0(1)n that is
  > 1, set x^(k+1,w) = x' and go to step 8;
  = 1, go to step 5;
  = 0, go to step 6.
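The initialization and the normal reflection test can be sketched in code as follows (Python with NumPy; steps 5, 6, and 8 lie outside this excerpt, so the corresponding branches are only stubs, and all names are illustrative):

import numpy as np

def initial_simplex(x0, s=None, c=1.0):
    # Steps 0-1: vertex 0 is the starting point x^(0,0); vertex l is
    # displaced from it by c * s_l along the l-th unit vector.
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    s = np.ones(n) if s is None else np.asarray(s, dtype=float)
    simplex = np.tile(x0, (n + 1, 1))
    simplex[1:] += c * np.diag(s)
    return simplex

def reflection_step(F, simplex):
    # Step 2 and the subsequent test: normal reflection of the worst vertex.
    values = np.array([F(x) for x in simplex])
    w = np.argmax(values)                       # worst vertex
    b = np.argmin(values)                       # best vertex (used by the later steps)
    xbar = (simplex.sum(axis=0) - simplex[w]) / (len(simplex) - 1)
    xprime = 2.0 * xbar - simplex[w]            # reflected point x'
    better = np.sum(F(xprime) < values)         # count of l with F(x') < F(x^(k,l))
    if better > 1:
        simplex[w] = xprime                     # accept x'; continue at step 8
    elif better == 1:
        pass                                    # step 5 (not part of this excerpt)
    else:
        pass                                    # step 6 (not part of this excerpt)
    return simplex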
