

normally the type G_j(x) ≥ 0. Points on the edge of the (closed) allowed space are thereby permitted. A different situation arises if the constraint is given as a strict inequality of the form G_j(x) > 0. Then the allowed space can be open if G_j(x) is continuous. If for G_j(x) ≥ 0, with other conditions the same, the minimum lies on the border G_j(x) = 0, then for G_j(x) > 0 there is no true minimum. One refers here to an infimum, the greatest lower bound, at which actually G_j(x) = 0. In analogous fashion one distinguishes between maxima and suprema (smallest upper bounds).
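For illustration, consider minimizing the objective f(x) = x under the single constraint G_1(x) = x ≥ 0: the minimum f = 0 is attained at the boundary point x = 0. Under the strict constraint x > 0, the value 0 is never reached by any allowed point; it is only the infimum of the objective over the (open) allowed space.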

Optimization in the following always means finding a maximum or a minimum, possibly subject to inequality constraints.

2.2.2 Static Versus Dynamic Optimization

The term static optimization means that the optimum is time invariant or stationary. It is sufficient to determine its position and size once and for all. Once the location of the extremum has been found, the search is over. In many cases one cannot control all the variables that influence the objective function. Then it can happen that these uncontrollable variables change with time and displace the optimum (non-stationary case).

The goal of dynamic optimization² is therefore to maintain an optimal condition in the face of varying conditions of the environment. The search for the extremum becomes a more or less continuous process. Depending on how fast the optimum moves, it may be necessary, instead of making the slow adjustment of the independent variables by hand, as for example in the EVOP method (see Chap. 2, Sect. 2.2.1), to give the task to a robot or automaton.

The automaton and the process together form a control loop. However, unlike conventional control loops, this one is not required to maintain a desired value of a quantity but to discover the most favorable value of an unknown and time-dependent quantity. Feldbaum (1962), Frankovic et al. (1970), and Zach (1974) investigate such automatic optimization systems, known as extreme value controllers or optimizers, in detail. In each case they are built around a search process. For only one variable (adjustable setting) a variety of schemes can be designed. It is significantly more complicated for an optimal value loop when several parameters have to be adjusted.
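As a minimal illustration of such a one-parameter optimal value loop, the following sketch assumes a simple perturb-and-observe scheme chasing a drifting maximum; the objective function, step sizes, and gain are invented for the example and are not taken from the works cited above.

    def objective(x, t):
        # Hypothetical time-varying objective: its maximum drifts with time t.
        return -(x - 0.1 * t) ** 2

    def extremum_seeking(steps=100, dt=1.0, delta=0.05, gain=0.5):
        # Perturb-and-observe search: probe on either side of the current
        # setting, estimate the local slope, and move uphill while the
        # environment (and hence the optimum) keeps moving.
        x, t = 0.0, 0.0
        for _ in range(steps):
            y_plus = objective(x + delta, t)
            y_minus = objective(x - delta, t)
            slope = (y_plus - y_minus) / (2.0 * delta)
            x += gain * slope
            t += dt
        return x

    print(extremum_seeking())   # the final setting stays close to the moving optimum

The search never terminates in the stationary sense; it merely keeps the setting near the wandering extremum, which is exactly the continuous process described above.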

Many of the search methods are so costly precisely because there is no a priori information about the process to be controlled. Hence nowadays one tries to build adaptive control systems that use information gathered over a period of time to set up an internal model of the system, or that, in a sense, learn. Oldenburger (1966) and, in more detail, Zypkin (1970) tackle the problems of learning and self-learning robots. Adaptation is said to take place if the change in the control characteristics is made on the basis of measurements of those input quantities to the process that cannot be altered, also known as disturbing variables. If the output quantities themselves (here the objective function) are used to adjust the control system, the process is called self-learning or self-adaptation. The latter possibility is more reliable but, because of the time lag, slower. Cybernetic engineering is concerned with learning processes in a more general form and always sees, or even seeks, links with natural analogues.
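The difference between the two schemes can be sketched as follows; the quadratic process, the trivial internal model, and all parameter values are invented for the illustration and stand in for whatever plant and model an actual controller would use.

    def process(x, d):
        # Hypothetical process: the objective peaks where the controllable
        # setting x cancels the disturbance d, i.e. at x = -d.
        return -(x + d) ** 2

    def adapt(d_measured):
        # Adaptation: a measured but uncontrollable input is fed into an
        # internal model, which places the setting directly; fast, but only
        # as good as the model.
        return -d_measured

    def self_adapt(x, d, delta=0.05, gain=0.2):
        # Self-adaptation: only the observed objective value is used; the
        # optimum is re-located by probing, which is more reliable but,
        # because of the time lag, slower.
        slope = (process(x + delta, d) - process(x - delta, d)) / (2 * delta)
        return x + gain * slope

    d = 1.3                    # current disturbance
    print(adapt(d))            # jumps straight to the modelled optimum, -1.3
    print(self_adapt(0.0, d))  # a single probing step moves only part of the way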

An example of a robot that adapts itself to the environment is the homeostat of Ashby

² Some authors use the term dynamic optimization in a different way than is done here.
