
Particular Problems and Methods of Solution

Subsidiary conditions, or constraints, complicate matters. In rare cases equality constraints can be expressed as equations in one variable that can be eliminated from the objective function, or constraints in the form of inequalities can be made superfluous by a transformation of the variables. Otherwise there are the methods of bounded variation and Lagrange multipliers, in addition to penalty functions and the procedures of mathematical programming.
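As a rough illustration of the penalty-function idea, the following sketch (a made-up example, not from the book) turns a constrained problem into an unconstrained one by adding a weighted measure of the constraint violation to the objective; the objective, constraint, and penalty weights are all invented for the illustration.

```python
# Quadratic penalty method, minimal sketch (illustrative problem only):
# minimize f(x, y) = x^2 + y^2 subject to x + y = 1 by minimizing
# f + r * h(x, y)^2 for a growing penalty weight r.

def h(x, y):                      # equality constraint h(x, y) = 0
    return x + y - 1.0

def minimize_penalized(r, steps=5000):
    x, y = 0.0, 0.0
    lr = 1.0 / (2.0 + 4.0 * r)    # step size kept small enough to stay stable
    for _ in range(steps):
        gx = 2*x + 2*r*h(x, y)    # d/dx of x^2 + y^2 + r*h^2
        gy = 2*y + 2*r*h(x, y)    # d/dy of the same penalized objective
        x, y = x - lr*gx, y - lr*gy
    return x, y

for r in (1.0, 10.0, 100.0):
    print(r, minimize_penalized(r))   # tends toward the constrained optimum (0.5, 0.5)
```

As r grows, the unconstrained minima approach the constrained one; the price is increasingly ill-conditioned subproblems, which is one reason the Lagrange multiplier route is often preferred.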

The situation is very similar for functional optimization, except that here the indirect methods are still dominant even today. The variational calculus provides as conditions for optima differential instead of ordinary equations, actually ordinary differential equations (Euler-Lagrange) or partial differential equations (Hamilton-Jacobi).
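For reference, the standard textbook form of the Euler-Lagrange condition for the simplest functional (added here, not quoted from the book):

```latex
% Euler-Lagrange condition: a function y(x) that extremizes
% J[y] = \int_a^b F(x, y, y') \, dx must satisfy
\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'} = 0
```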

In only a few cases can such a system be solved in a straightforward way for the unknown functions. One must usually resort again to the help of a computer. Whether it is advantageous to use a digital or an analogue computer depends on the problem. It is a matter of speed versus accuracy. A hybrid system often turns out to be especially useful. If, however, the solution cannot be found by a purely analytic route, why not choose from the start the direct procedure also for functional optimization? In fact, with the increasing complexity of practical problems in numerical optimization, this field is becoming more important, as illustrated by the work of Daniel (1969), who takes over methods without derivatives from parameter optimization and applies them to the optimization of functionals. An important point in this is the discretization or parameterization of the originally continuous problem, which can be achieved in at least two ways (a sketch of the first approach follows the list):

By approximation of the desired functions using a sum of suitable known functions or polynomials, so that only the coefficients of these remain to be determined (Sirisena, 1973)

By approximation of the desired functions using step functions or sides of polygons, so that only the heights and positions of the connecting points remain to be determined
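A minimal sketch of the first approach, using a model problem invented for this illustration: the unknown function is replaced by a low-order polynomial that satisfies the boundary conditions by construction, so the functional J[y] collapses to an ordinary function of the coefficients and any parameter optimization method applies. The functional, trial function, and all numbers are assumptions of the sketch.

```python
# Parameterization of a functional optimization problem (model problem):
# minimize J[y] = integral_0^1 (y'(x)^2 + y(x)^2) dx with y(0)=0, y(1)=1.
# The trial function y(x) = x + c1*x*(1-x) + c2*x^2*(1-x) meets the boundary
# conditions for any coefficients, so J becomes a function of (c1, c2).

def y(x, c):
    return x + c[0]*x*(1 - x) + c[1]*x*x*(1 - x)

def J(c, n=200):
    # rectangle-rule approximation of the integral, with a finite-difference
    # estimate of the derivative y'(x)
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = i * h
        yp = (y(x + h, c) - y(x, c)) / h
        total += (yp**2 + y(x + 0.5*h, c)**2) * h
    return total

def minimize(c, steps=2000, lr=0.1, eps=1e-6):
    # crude descent using a finite-difference gradient of J
    c = list(c)
    for _ in range(steps):
        g = []
        for k in range(len(c)):
            cp = c[:]
            cp[k] += eps
            g.append((J(cp) - J(c)) / eps)
        c = [ck - lr*gk for ck, gk in zip(c, g)]
    return c

c = minimize([0.0, 0.0])
print(c, J(c))   # a suboptimum; the exact minimum is coth(1), about 1.3130
```

The result is only a suboptimum, since the exact minimizer sinh(x)/sinh(1) lies outside the span of the two trial terms; adding more terms shrinks the gap.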

Recasting a functional into a parameter optimization problem has the great advantage that a digital computer can be used straight away to find the solution numerically. The disadvantage that the result only represents a suboptimum is often not serious in practice, because the assumed values of parameters of the process are themselves not exactly known (Dixon, 1972a). The experimentally determined numbers are prone to errors or to statistical uncertainties. In any case, large and complicated functional optimization problems cannot be completely solved by the indirect route.

The direct procedure can either start directly with the functional to be minimized, if the integration over the substituted function can be carried out (Rayleigh-Ritz method), or with the necessary conditions, the differential equations, which specify the optimum. In the latter case the integral is replaced by a finite sum of terms (Beveridge and Schechter, 1970). In this situation gradient methods are readily applied (Kelley, 1962; Klessig and Polak, 1973). The detailed way to proceed depends very much on the subsidiary conditions or constraints of the problem.
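For the second route, a sketch in the same spirit (same invented model functional as above): the integral is replaced by a finite sum over a mesh, the free variables are the node heights of a polygon, and a plain gradient method adjusts them.

```python
# Replace J[y] = integral_0^1 (y'^2 + y^2) dx by a finite sum over a mesh
# and apply gradient descent to the interior node heights (sketch only).

import math

n = 20
h = 1.0 / n
y = [i * h for i in range(n + 1)]        # initial polygon; endpoints stay fixed

def gradient(y):
    # analytic partial derivatives of the finite sum with respect to nodes
    g = [0.0] * (n + 1)
    for i in range(n):
        slope = (y[i+1] - y[i]) / h      # piecewise-constant derivative
        mid = 0.5 * (y[i] + y[i+1])      # midpoint value on the segment
        g[i]   += -2*slope + mid*h
        g[i+1] +=  2*slope + mid*h
    g[0] = g[n] = 0.0                    # boundary conditions y(0)=0, y(1)=1
    return g

def J_sum(y):
    return sum(((y[i+1]-y[i])/h)**2 * h + (0.5*(y[i]+y[i+1]))**2 * h
               for i in range(n))

for _ in range(2000):                    # plain gradient descent
    y = [yi - 0.01*gi for yi, gi in zip(y, gradient(y))]

print(J_sum(y), math.cosh(1)/math.sinh(1))   # close to the exact coth(1)
```

Here the gradient is available in closed form because the finite sum is an explicit function of the node heights; constraints on the trajectory would enter as bounds or penalties on exactly these variables.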
