

was best from the point of view of its results. So long as no generally recognized quality function of this kind exists, the question of which optimization method is optimal remains unanswered.

6.2 Theoretical Results

Classical optimization theory is concerned with establishing necessary and sufficient existence criteria for maxima and minima. It provides systems of equations but no iterative methods of finding their solutions. Not even Dantzig's simplex method (1966) for solving linear programming problems can be regarded as a direct result of theory: theoretical considerations of the linear problem only show that the extremum sought, except in special cases, must always lie in a corner of the polyhedron defined by the constraints. With n variables and m constraints (together with n non-negativity conditions) the number of corners or points of intersection of the hypersurfaces formed by the constraints is also limited to a maximum of (m+n choose n). Even the systematic inspection of all the points of intersection would be a finite optimization method. But not all the points of intersection are also within the allowed region (Saaty, 1955, 1963). Müller-Merbach (1971) gives m·n − m + 2 as an upper bound to the number of feasible corner points.
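As a quick numerical illustration of how sharply the feasible-corner bound cuts down the raw count of intersection points, the following Python sketch evaluates both expressions for a few example sizes (the values of m and n are arbitrary assumptions, not from the text):

    from math import comb

    # Corner-count bounds for a linear program with n variables and
    # m constraints (plus n non-negativity conditions).
    for m, n in [(3, 2), (10, 5), (50, 20)]:
        intersections = comb(m + n, n)   # all intersection points: (m+n choose n)
        feasible = m * n - m + 2         # Mueller-Merbach's bound on feasible corners
        print(f"m={m:2d} n={n:2d}  intersections <= {intersections}  feasible corners <= {feasible}")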

The simplex method, which is a method of steepest ascent along the edges of the polyhedron, only traverses a tiny fraction of all the corners. Dantzig (1966) refers to empirical evidence that the number of necessary iterations increases as n, the number of variables, if the number of constraints m is constant, or as m if (n − m) is not too small. Since, in the least favorable case, between m and 2m exchange operations must be performed on the tableau of (m+1)(n+1) coefficients, the average computation time increases as O(m²n). In so-called degenerate cases, however, the simplex method can also become infinite. The repeated cycling through the same corners must then be broken by a rule for randomly choosing the iteration step (Dantzig). From a theoretical point of view the ellipsoid method of Khachiyan (1979) and the interior point method of Karmarkar (1984) do have the advantage of polynomial time consumption even in the worst case.
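To make the corner property concrete, here is a minimal sketch using scipy.optimize.linprog, which by default imposes exactly the non-negativity conditions mentioned above; the particular objective and constraints are invented for illustration:

    from scipy.optimize import linprog

    # Tiny LP: maximize 3*x1 + 2*x2 subject to
    #   x1 +   x2 <= 4
    #   x1 + 3*x2 <= 6,  x1, x2 >= 0 (linprog's default bounds).
    c = [-3.0, -2.0]                 # linprog minimizes, so the objective is negated
    A_ub = [[1.0, 1.0], [1.0, 3.0]]
    b_ub = [4.0, 6.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    print(res.x, -res.fun)           # optimum at the corner (4, 0), value 12

The reported optimum sits at a vertex of the feasible polygon, as the theory above predicts.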

The question of finiteness of iterative methods is also a central theme of non-linear programming. In this case the solution can lie at any point on the border or interior of the enclosed region. For the special case that the objective function and all the constraint functions are convex and multiply differentiable, Kuhn and Tucker (1951) and John (1948) have derived necessary and sufficient conditions for extremal solutions.
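A minimal numerical sketch of checking these conditions at a candidate point, for an invented convex program (the problem, point, and multiplier are assumptions chosen so the conditions can be verified by hand):

    import numpy as np

    # Convex program: minimize f(x) = x1**2 + x2**2
    # subject to g(x) = 1 - x1 - x2 <= 0.
    x = np.array([0.5, 0.5])             # candidate minimizer
    lam = 1.0                            # candidate Lagrange multiplier

    grad_f = 2 * x                       # gradient of the objective
    grad_g = np.array([-1.0, -1.0])      # gradient of the constraint
    g = 1.0 - x.sum()

    print(np.allclose(grad_f + lam * grad_g, 0.0))   # stationarity holds
    print(lam >= 0.0 and np.isclose(lam * g, 0.0))   # multiplier sign and complementarity hold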

Most of the iteration methods that have been developed on this basis are designed for problems with a quadratic objective function and linear constraints. Representative of quadratic programming are, for example, the methods of Beale (1956) and Wolfe (1959a). They make extensive use of the algorithm of the simplex method and thus belong, according to Hadley (1969), to the class of neighboring extremal point methods. Other strategies can move into the allowed region in the course of the iterations. As far as the constraints permit, they take the direction of the gradient of the objective function. They are therefore known as gradient methods of non-linear programming (Kappler, 1967).
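The basic move of such a strategy can be sketched as a gradient step followed by a projection back into the allowed region; the problem, step size, and box constraints below are illustrative assumptions:

    import numpy as np

    # Projected-gradient sketch: minimize f(x) = (x1 - 2)**2 + (x2 - 1)**2
    # subject to the box 0 <= x1, x2 <= 1.
    def grad_f(x):
        return 2.0 * (x - np.array([2.0, 1.0]))

    x = np.zeros(2)
    for _ in range(100):
        x = x - 0.1 * grad_f(x)          # step along the negative gradient
        x = np.clip(x, 0.0, 1.0)         # project back onto the feasible box
    print(x)                             # converges to the corner (1, 1)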

As their name may suggest, however, they are not suitable for all non-linear problems. Their convergence can be proved at best for differentiable quasi-convex programs (Künzi, Krelle, and
