Evolution and Optimum Seeking

Theoretical Results

tions. Similar results apply in the case of the variable metric strategy, except that there are additionally O(n²) basic operations for the matrix additions and multiplications. The direct search method due to Powell evaluates neither first nor second partial derivatives. After every n + 1 line searches the direction vectors are redefined, which requires O(n²) values to be assigned. But since each one-dimensional optimization counts as an iteration step, only O(n) direct operations are attributed to each iteration. A convenient summary of the relationships is given in Table 6.1. For simplicity only the terms of highest order in the number of parameters n are accounted for, without their coefficients of proportionality.
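To make that bookkeeping concrete, the following sketch shows one cycle of a Powell-type direction-set method. It omits Powell's tests for deciding whether a direction should actually be replaced, and the helper name powell_cycle is ours, not the book's. The shift that redefines the direction set touches all n vectors of length n, which is where the O(n²) assignments come from, while each single line search only handles O(n) values:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def powell_cycle(f, x, directions):
    """One cycle of a Powell-type direction-set method (minimal sketch).

    Performs n line searches, each counted as one iteration with O(n)
    direct operations, then redefines the direction set at O(n^2) cost.
    """
    x_start = x.copy()
    for u in directions:
        # Line search along u; each one-dimensional minimization
        # counts as one iteration step.
        alpha = minimize_scalar(lambda a: f(x + a * u)).x
        x = x + alpha * u
    step = x - x_start                    # composite direction of the cycle
    if np.linalg.norm(step) > 1e-12:
        # Redefine the direction vectors: shifting n vectors of length n
        # assigns O(n^2) values.
        directions = np.vstack([directions[1:], step / np.linalg.norm(step)])
        alpha = minimize_scalar(lambda a: f(x + a * directions[-1])).x
        x = x + alpha * directions[-1]    # the (n + 1)-th line search
    return x, directions
```

Starting from directions = np.eye(n) and repeating such cycles, a quadratic objective takes O(n) cycles, i.e., O(n²) line searches, matching the entry for conjugate directions in Table 6.1 below.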

So far we have no scale for comparing the different kinds of function evaluation with each other. Fletcher (1972a) and others consider an evaluation of the Hessian matrix to be equivalent to O(n) gradient determinations or O(n²) objective function calls. This type of scaling is valid whenever the partial derivatives cannot be obtained in analytic form and provided as functions, but must be calculated approximately as quotients of differences obtained by trial steps in the coordinate directions. In any case it ought to be about right if the objective function is of higher than second order. Accordingly, the following weighting of the function evaluations can be introduced in the table:

F : ∇F : ∇²F ≙ n⁰ : n¹ : n²
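This weighting can be checked by counting calls in a difference-quotient approximation. The sketch below is our own illustration (helper names and step sizes are not from the text): the gradient is obtained from n extra trial steps along the coordinate directions, the Hessian from about n²/2 such steps.

```python
import numpy as np

def grad_fd(f, x, h=1e-6):
    """Forward-difference gradient: n extra calls of F per gradient."""
    f0 = f(x)
    g = np.empty(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x)); e[i] = h
        g[i] = (f(x + e) - f0) / h        # trial step along coordinate i
    return g

def hess_fd(f, x, h=1e-4):
    """Forward-difference Hessian: O(n^2) calls of F per Hessian."""
    n = len(x)
    f0 = f(x)
    fi = np.array([f(x + h * np.eye(n)[i]) for i in range(n)])
    H = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):             # symmetric: n(n+1)/2 entries
            e = np.zeros(n); e[i] += h; e[j] += h
            H[i, j] = H[j, i] = (f(x + e) - fi[i] - fi[j] + f0) / h**2
    return H
```

Counting calls: one for F itself, n more for the gradient, and roughly n²/2 more for the Hessian, i.e., exactly the ratio n⁰ : n¹ : n².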

Before anything can be said about the overall computation cost, or time, one must know how many operations are required for calculating a value of the objective function. In general a function of n variables will entail a cost that rises at least linearly with n.
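As an illustration of this last point (both objective functions here are our own examples, not from the text): a separable sum of one-dimensional terms is about as cheap as an objective function of n variables can get, while already a general quadratic form costs O(n²) operations per call because of its matrix-vector product.

```python
import numpy as np

def f_separable(x):
    # Sum of one-dimensional terms: O(n) operations per evaluation.
    return np.sum(x**2 - np.cos(x))

def f_quadratic(x, A, b):
    # General quadratic form: the product A @ x alone already needs
    # O(n^2) operations per evaluation.
    return 0.5 * x @ (A @ x) - b @ x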

Table 6.1: Number of operations required by the most important basic
strategies to minimize a quadratic objective function, in terms of the
number of variables n (orders of magnitude only)

                                               Number of operations per iteration
                                  Number of    Function evaluations    Elementary
  Strategy                        iterations   F     ∇F    ∇²F         operations
  --------------------------------------------------------------------------------
  Newton
    (e.g., Newton-Raphson)        n⁰           -     n⁰    n⁰          n³
  Variable metric
    (e.g., Davidon)               n¹           n⁰    n⁰    -           n²
  Conjugate gradients
    (e.g., Fletcher-Reeves)       n¹           n⁰    n⁰    -           n¹
  Conjugate directions
    (e.g., Powell)                n²           n⁰    -     -           n¹

  Weighting factors                            n⁰    n¹    n²
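Combining Table 6.1 with the weighting factors gives the overall orders of magnitude. The short script below just multiplies out the exponents from the table; the tallying convention, charging each iteration with its most expensive evaluation type, is ours.

```python
# Exponents of n from Table 6.1 for each strategy:
# (iterations, F, grad F, Hessian, elementary operations per iteration);
# None marks an empty entry ("-") in the table.
table = {
    "Newton (Newton-Raphson)":               (0, None, 0, 0, 3),
    "Variable metric (Davidon)":             (1, 0, 0, None, 2),
    "Conjugate gradients (Fletcher-Reeves)": (1, 0, 0, None, 1),
    "Conjugate directions (Powell)":         (2, 0, None, None, 1),
}
weights = (0, 1, 2)   # F : grad F : Hessian = n^0 : n^1 : n^2

for name, (it, f, g, h, elem) in table.items():
    # Most expensive evaluation type per iteration, in F-call equivalents.
    calls = max(w + c for w, c in zip(weights, (f, g, h)) if c is not None)
    print(f"{name}: O(n^{it + calls}) F-call equivalents, "
          f"O(n^{it + elem}) elementary operations")
```

On this reckoning all four strategies come out at O(n²) equivalent objective function calls for a quadratic objective; they differ in the elementary operations and in how the calls are split between values and derivatives.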
