

2.2.4 Direct (Numerical) Versus Indirect (Analytic) Optimization

The classification of mathematical methods of optimization into direct and indirect procedures is attributed to Edelbaum (1962). Especially when one has a computer model of a system with which one can perform simulation experiments, the search for a set of exogenous parameters that yields excellent results calls for robust direct optimization methods. Direct or numerical methods are those that approach the solution in a stepwise manner (iteratively), at each step (hopefully) improving the value of the objective function. If this improvement cannot be guaranteed, a trial and error process results.
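To make the stepwise character of a direct method concrete, here is a minimal sketch, assuming a simple quadratic objective, a fixed step length, and a gradient-based update; all of these choices are illustrative and not taken from the text.

```python
# Minimal sketch of a direct (numerical) method: iterate from a
# starting point, at each step (hopefully) improving the objective.
# Objective, step length, and iteration count are invented examples.

def objective(x):
    # F(x) = (x1 - 1)^2 + 2*(x2 + 0.5)^2, minimum at (1, -0.5)
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

def gradient(x):
    # First partial derivatives of the objective above
    return [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)]

x = [0.0, 0.0]        # starting point
step = 0.1            # fixed step length
for _ in range(100):  # stepwise (iterative) improvement
    g = gradient(x)
    x = [xi - step * gi for xi, gi in zip(x, g)]

print(x, objective(x))  # x approaches (1, -0.5)
```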

An indirect or analytic procedure attempts to reach the optimum in a single (calculation) step, without tests or trials. It is based on the analysis of the special properties of the objective function at the position of the extremum. In the simplest case, parameter optimization without constraints, one proceeds on the assumption that the tangent plane at the optimum is horizontal, i.e., that the first partial derivatives of the objective function exist and vanish at x*:

\[
\left.\frac{\partial F}{\partial x_i}\right|_{x=x^*} = 0 \qquad \text{for all } i = 1(1)n \tag{2.1}
\]

This system of equations can be expressed with the so-called Nabla operator (∇) as a single vector equation for the stationary point x*:

\[
\nabla F(x^*) = 0 \tag{2.2}
\]

Equation (2.1) or (2.2) transforms the original optimization problem into a problem of solving a set of, perhaps non-linear, simultaneous equations. If F(x) or one or more of its derivatives are not continuous, there may be extrema that do not satisfy the otherwise necessary conditions. On the other hand, not every point in ℝⁿ (the n-dimensional space of real variables) that satisfies conditions (2.1) need be a minimum; it could also be a maximum or a saddle point. Equation (2.2) is therefore referred to as a necessary condition for the existence of a local minimum.
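A small numerical check makes this "necessary but not sufficient" point concrete. The function F(x1, x2) = x1² − x2² is an invented example, not from the text: its gradient vanishes at the origin, so condition (2.1) holds there, yet the point is a saddle, as the mixed signs of the Hessian eigenvalues show.

```python
import numpy as np

# F(x1, x2) = x1^2 - x2^2: stationary at the origin, but a saddle point.

def grad(x):
    # First partial derivatives of F
    return np.array([2.0 * x[0], -2.0 * x[1]])

x_star = np.array([0.0, 0.0])
print(grad(x_star))  # [0. 0.] -> condition (2.1) is satisfied

hessian = np.array([[2.0, 0.0],
                    [0.0, -2.0]])
print(np.linalg.eigvals(hessian))  # [ 2. -2.]: mixed signs -> saddle
```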

To give sufficient conditions requires further processes of differentiation. In fact, differentiations must be carried out until the determinant of the matrix of the second or higher partial derivatives at the point x* is non-zero. Things remain simple in the case of only one variable, when it is required that the lowest order non-vanishing derivative is positive and of even order. Then and only then is there a minimum. If the derivative is negative, x* represents a maximum. A saddle point exists if the order is odd.
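As a worked illustration of this one-variable rule (the functions here are illustrative choices, not the book's): for F(x) = x⁴ at x* = 0,

\[
F'(0) = F''(0) = F'''(0) = 0, \qquad F^{(4)}(0) = 24 > 0,
\]

so the lowest-order non-vanishing derivative is positive and of even (fourth) order, and x* = 0 is a minimum. For F(x) = x³, the lowest non-vanishing derivative F'''(0) = 6 is of odd (third) order, so x* = 0 is a saddle point.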

For n variables, at least the n(n+1)/2 second partial derivatives

\[
\frac{\partial^2 F(x)}{\partial x_i \, \partial x_j} \qquad \text{for all } i, j = 1(1)n
\]

must exist at the point x*. The determinant of the Hessian matrix ∇²F(x*) must be positive, as well as the further n − 1 principal subdeterminants of this matrix.
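A minimal numerical sketch of this criterion, assuming some given Hessian at x* (the matrix below is an invented example): by Sylvester's criterion, all n leading principal subdeterminants must be positive for a local minimum.

```python
import numpy as np

# Check the sufficient condition: the determinant of the Hessian and
# its n-1 further leading principal subdeterminants must be positive.
# The Hessian below is an invented stand-in for the true one at x*.

def leading_principal_minors(h):
    n = h.shape[0]
    return [np.linalg.det(h[:k, :k]) for k in range(1, n + 1)]

hessian = np.array([[2.0, 0.5],
                    [0.5, 3.0]])

minors = leading_principal_minors(hessian)
print(minors)                      # [2.0, 5.75]
print(all(m > 0 for m in minors))  # True -> x* is a local minimum
```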

While MacLaurin had already completely formulated the sufficient conditions for the existence of minima and maxima of one-parameter functions in 1742, the corresponding theory
