
Evolution and Optimum Seeking


Particular Problems and Methods of Solution

reset into the allowed region by the complex method of M. J. Box (1965) (a direct search strategy) whenever explicit bounds are crossed. Implicit constraints, on the other hand, are treated as barriers (see Chap. 3, Sect. 3.2.1.6).

The methods of mathematical programming, both linear and non-linear, treat the constraints as the main aspect of the problem. They were specially evolved for operations research (Müller-Merbach, 1971) and assume that all variables must always be positive. Such non-negativity conditions allow special solution procedures to be developed. The simplest models of economic processes are linear. There are often no better ones available. For this purpose Dantzig (1966) developed the simplex method of linear programming (see also Krelle and Künzi, 1958; Hadley, 1962; Weber, 1972).

The linear constraints, together with the condition on the signs of the variables, span the feasible region in the form of a polygon (for n = 2) or a polyhedron, sometimes called a simplex. Since the objective function is also linear, except in special cases the desired extremum must lie in a corner of the polyhedron. It is therefore sufficient just to examine the corners. The simplex method of Dantzig does this in a particularly economical way, since only those corners are considered in which the objective function has progressively better values. It can even be thought of as a gradient method along the edges of the polyhedron. It can be applied in a straightforward way to many hundreds, even thousands, of variables and constraints. For very large problems, which may have a particular structure, special methods have also been developed (Künzi and Tan, 1966; Künzi, 1967). Into this category come the revised and the dual simplex methods, the multiphase and duplex methods, and decomposition algorithms. An unpleasant property of linear programs is that sometimes just small changes of the coefficients in the objective function or the constraints can cause a big alteration in the solution. To reveal such dependencies, methods of parametric linear programming and sensitivity analysis have been developed (Dinkelbach, 1969).
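Both properties described above, that the optimum sits in a corner of the feasible polyhedron and that a small change in an objective coefficient can shift the solution to a different corner, can be illustrated with a small linear program. The following sketch uses SciPy's `linprog` solver on made-up numbers (the problem data are illustrative, not taken from the text):

```python
# Illustrative two-variable linear program (hypothetical data):
#   maximize   x1 + c2*x2
#   subject to x1 + 2*x2 <= 4,  3*x1 + x2 <= 6,  x1, x2 >= 0.
from scipy.optimize import linprog

A_ub = [[1.0, 2.0],    # x1 + 2*x2 <= 4
        [3.0, 1.0]]    # 3*x1 + x2 <= 6
b_ub = [4.0, 6.0]
bounds = [(0, None), (0, None)]   # non-negativity conditions

# linprog minimizes, so the objective coefficients are negated.
res1 = linprog([-1.0, -1.9], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
# A small change in one coefficient (1.9 -> 2.1) makes the optimum
# jump from one corner of the polygon to a neighbouring one.
res2 = linprog([-1.0, -2.1], A_ub=A_ub, b_ub=b_ub, bounds=bounds)

print(res1.x)   # corner (1.6, 1.2)
print(res2.x)   # corner (0.0, 2.0)
```

Both solutions lie in corners of the feasible polygon, as the theory requires, and the discontinuous jump of the solution under a small perturbation of the objective is exactly the sensitivity that parametric linear programming is designed to expose.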

Most strategies of non-linear programming resemble the simplex method or use it as a subprogram (Abadie, 1972). This is the case in particular for the techniques of quadratic programming, which are conceived for quadratic objective functions and linear constraints. The theory of non-linear programming is based on the optimality conditions developed by Kuhn and Tucker (1951), an extension of the theory of maxima and minima to problems with constraints in the form of inequalities. These can be expressed geometrically as follows: at the optimum (in a corner of the allowed region) the gradient of the objective function lies within the cone formed by the gradients of the active constraints. To start with, this is only a necessary condition. It becomes sufficient under certain assumptions concerning the structure of the objective and constraint functions. For minimum problems, the objective function and the feasible region must be convex; that is, the constraints must be concave. Such a problem is also called a convex program. Finally, the Kuhn-Tucker theorem transforms a convex program into an equivalent saddle point problem (Arrow and Hurwicz, 1956), just as the Lagrange multiplier method does for constraints in the form of equalities. A complete theory of equality constraints is due to Apostol (1957).
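As a minimal sketch (with notation assumed here, not taken from the text), the Kuhn-Tucker conditions for a minimum problem $\min f(x)$ subject to $g_j(x) \ge 0$ read:

```latex
\nabla f(x^*) \;=\; \sum_{j \in A} \lambda_j \,\nabla g_j(x^*),
\qquad \lambda_j \ge 0,
\qquad \lambda_j \, g_j(x^*) = 0 \ \text{for all } j ,
```

where $A$ denotes the set of active constraints, those with $g_j(x^*) = 0$. The first equation is the cone condition described above, and for a convex program the same multipliers make $(x^*, \lambda^*)$ a saddle point of the Lagrangian $L(x,\lambda) = f(x) - \sum_j \lambda_j \, g_j(x)$, minimal in $x$ and maximal in $\lambda \ge 0$.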

Non-linear programming is therefore only applicable to convex optimization, in which, to be precise, one must distinguish at least seven types of convexity (Ponstein, 1967). In addition, all the functions are usually required to be continuously differentiable, with an
