
Multidimensional Strategies

the second variable. Both end results are then used to reject one of the values of the first variable that were held constant, and to reduce the size of the interval with respect to this parameter. By analogy, a three dimensional minimization consists of a recursive sequence of two dimensional Fibonacci searches. If the number of function calls to reduce the uncertainty interval [a_i, b_i] sufficiently with respect to the variable x_i is N_i, then the total number N also obeys Equation (3.19). The advantage compared to the grid method is simply that N_i depends logarithmically on the ratio of initial interval size to accuracy (see Equation (3.11)). Aside from the fact that each variable must be suitably fixed in advance, and that the unimodality requirement of the objective function only guarantees that local minima are approached, there is furthermore no guarantee that a desired accuracy will be reached within a finite number of objective function calls (Kaupe, 1964).
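
To make the recursive scheme concrete, here is a minimal sketch of such a nested Fibonacci search, assuming an objective that is unimodal in each variable on a rectangular region. The function names and the fixed per-variable budget of n calls are illustrative choices, not part of the original description, and the classical tie-breaking epsilon of the final Fibonacci step is omitted. Note that the total call count is the product of the per-variable budgets, which is why the effort grows exponentially with the number of variables.

```python
def fibonacci_search(g, a, b, n):
    """Locate the minimum of a unimodal g on [a, b] with n evaluations."""
    fib = [1, 1]                          # Fibonacci numbers F_0 .. F_n
    while len(fib) <= n:
        fib.append(fib[-1] + fib[-2])
    x1 = a + fib[n - 2] / fib[n] * (b - a)
    x2 = a + fib[n - 1] / fib[n] * (b - a)
    g1, g2 = g(x1), g(x2)
    for k in range(1, n - 1):
        if g1 > g2:                       # minimum lies in [x1, b]
            a, x1, g1 = x1, x2, g2
            x2 = a + fib[n - k - 1] / fib[n - k] * (b - a)
            g2 = g(x2)
        else:                             # minimum lies in [a, x2]
            b, x2, g2 = x2, x1, g1
            x1 = a + fib[n - k - 2] / fib[n - k] * (b - a)
            g1 = g(x1)
    return 0.5 * (a + b)

def nested_search(f, bounds, n):
    """Minimize f recursively: a Fibonacci search over the first variable,
    where every trial value triggers a full search over the remaining ones.
    The total number of f-calls is the product of the per-variable budgets."""
    if len(bounds) == 1:
        (a, b), = bounds
        return [fibonacci_search(lambda t: f([t]), a, b, n)]
    def g(t):                             # best attainable value with x_1 = t
        rest = nested_search(lambda y: f([t] + y), bounds[1:], n)
        return f([t] + rest)
    a, b = bounds[0]
    t = fibonacci_search(g, a, b, n)
    return [t] + nested_search(lambda y: f([t] + y), bounds[1:], n)

# Example: a smooth function of two variables, unimodal in each coordinate.
print(nested_search(lambda x: (x[0] - 1)**2 + (x[0] - x[1])**2,
                    [(-2.0, 2.0), (-2.0, 2.0)], n=20))   # close to [1, 1]
```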

Other elimination procedures have been extended in a similar way to the multivariable case, such as, for example, the dichotomous search (Wilde, 1965) and a sequential boxing-in method (Berman, 1969). In each case the effort rises exponentially with the number of variables. Another elimination concept for the multidimensional case, the method of contour tangents, is due to Wilde (1963) (see also Beamer and Wilde, 1969). It requires, however, the determination of gradient vectors. Newman (1965) indicates how to proceed in the two dimensional case, and also for discrete values of the variables (lattice search). He requires that F(x) be convex and unimodal. Then the cost should only increase linearly with the number of variables. For n ≥ 3, however, no applications of the contour tangent method are as yet known.

Transferring interpolation methods to the n-dimensional case means transforming the original minimum problem into a series of problems, in the form of a set of equations to be solved. As non-linear equations can only be solved iteratively, this procedure is limited to the special case of linear interpolation with quadratic objective functions. Practical algorithms based on the regula falsi iteration can be found in Schmidt and Schwetlick (1968) and Schwetlick (1970). The procedure is not widely used as a minimization method (Schmidt and Vetters, 1970). The slopes of the objective function that it requires are implicitly calculated from function values. The secant method described by Wolfe (1959b) for solving a system of non-linear equations also works without derivatives of the functions. From n + 1 current argument values, it extracts the required information about the structure of the n equations.
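
The principle behind such a secant step can be sketched as follows: an affine model of the system F(x) = 0 is fitted to the n + 1 current points, and the next iterate is the root of that model. This is only the general idea, not Wolfe's algorithm in detail; the point-replacement rule below and the absence of safeguards against degenerate point sets are simplifying assumptions.

```python
import numpy as np

def secant_step(F, X):
    """One derivative-free secant step for a system of n equations F(x) = 0.
    X holds n + 1 current points (rows).  Weights lam are chosen so that
    sum(lam) = 1 and sum(lam_i * F(x_i)) = 0; the new iterate is the
    corresponding affine combination of the points, i.e. the root of the
    affine model interpolating F at those points."""
    R = np.array([F(x) for x in X])           # residuals, shape (n+1, n)
    A = np.vstack([R.T, np.ones(len(X))])     # n root conditions + weight sum
    rhs = np.zeros(len(X)); rhs[-1] = 1.0
    lam = np.linalg.solve(A, rhs)
    x_new = lam @ X
    X[np.argmax(np.linalg.norm(R, axis=1))] = x_new   # drop the worst point
    return x_new

# Example: the gradient system of f(x) = (x_1 - 1)^2 + 2 (x_2 + 1/2)^2.
# F is affine here, so a single step already hits the root exactly.
F = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)])
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
x = secant_step(F, X)
print(x, np.linalg.norm(F(x)))                # [1.0, -0.5], residual 0
```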

Just as the transition from simultaneous to sequential one dimensional search methods reduces the effort required at the expense of global convergence, so each further acceleration in the multidimensional case is bought by a reduction in reliability. High convergence rates are achieved by gathering more information and interpreting it in the form of a model of the objective function. If assumptions and reality agree, then this procedure is successful; if they do not agree, then extrapolations lead to worse predictions and possibly even to abandoning an optimization strategy. Figure 3.4 shows the contour diagram of a smooth two parameter objective function.
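
As a toy illustration of this model idea (not from the text), consider the simplest case, a parabola fitted through three points: the step below jumps to the stationary point of the fitted quadratic. The helper name and sample points are hypothetical, and the degenerate case of collinear points, for which the denominator vanishes, is not guarded against.

```python
import math

def quadratic_step(f, x0, x1, x2):
    """Stationary point of the parabola through (x_i, f(x_i)), i = 0, 1, 2."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    num = (x1 - x0)**2 * (f1 - f2) - (x1 - x2)**2 * (f1 - f0)
    den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
    return x1 - 0.5 * num / den

# Model and reality agree: for a quadratic f the predicted minimum is exact.
print(quadratic_step(lambda x: (x - 3.0)**2, 0.0, 1.0, 2.0))   # -> 3.0
# Model and reality disagree: on sin(x) the same step lands near the
# maximum at pi/2 (about 1.571) -- the formula finds a stationary point
# of the model, not necessarily a minimum of the function.
print(quadratic_step(math.sin, 1.0, 2.0, 3.0))                 # -> 1.581
```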

All the strategies to be described assume a degree of smoothness in the objective function. They do not converge with certainty to the global minimum, but at best to one of the local minima, or sometimes only to a saddle point.
