Evolution and Optimum Seeking

Core storage required

term "core" here, but at the time these tests were performed, it was so called.) All indirect methods of quadratic optimization, which solve the linear equations for the extremal, require storage of order O(n²) for the matrix of coefficients. The same holds for quasi-Newton methods, except that here the significant role is played by the approximation to the inverse Hessian matrices. Most strategies that perform line searches in other than coordinate directions also require O(n²) words for the storage of n vectors, each with n coefficients. An exception to this rule is the conjugate gradient method of Fletcher and Reeves, which at each stage only needs to retain the latest generated direction vector for the subsequent iteration. Of the direct search methods included in the tests, the coordinate methods, the method of Hooke and Jeeves, and the evolution strategies work with only O(n) words of core storage. How important the formal storage requirement of an optimization method can be is shown by the maximum number of variables for the tested strategies in Table 6.2. The limiting values range from 75 to 4,000 under the given conditions. There exist, of course, tricks such as segmentation for enabling larger programs to be run on smaller machines; the cost of the strategy should then take into account, however, the extra cost in preparation time for an optimization. (Here again, modern virtual storage techniques and the relative cheapness of memory chips make the considerations above look rather old-fashioned.)
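The O(n) storage property of the Fletcher-Reeves method can be seen directly in a minimal sketch of the algorithm for a quadratic objective f(x) = ½ xᵀAx − bᵀx. The function name and test matrix below are illustrative, not taken from the original tests; only the current point, the residual, and the latest direction vector are carried between iterations, so no n×n array ever needs to be stored.

```python
import numpy as np

def fletcher_reeves(A, b, x0, tol=1e-10, max_iter=1000):
    """Sketch of Fletcher-Reeves conjugate gradients for a quadratic
    f(x) = 0.5 x^T A x - b^T x.  Working storage is three n-vectors
    (x, r, d): O(n) words, in contrast to the O(n^2) needed to hold an
    approximate (inverse) Hessian."""
    x = x0.copy()
    r = b - A @ x              # negative gradient at x
    d = r.copy()               # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs_old) < tol:
            break
        Ad = A @ d
        alpha = rs_old / (d @ Ad)    # exact line search for a quadratic
        x += alpha * d
        r -= alpha * Ad
        rs_new = r @ r
        # Fletcher-Reeves update: the new direction is built from the
        # residual and the single retained previous direction only.
        d = r + (rs_new / rs_old) * d
        rs_old = rs_new
    return x

# Illustrative 2-variable example (converges in at most n = 2 steps):
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = fletcher_reeves(A, b, np.zeros(2))
```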

In the following Table 6.11, all the strategies compared are listed again, together with the order of magnitude of their required computation time as obtained from the first set of tests (columns 1 and 2). The third column shows how the computation time would vary if each function call performed O(n²) rather than O(n) operations, as would occur for the worst case of a general quadratic objective function. The fourth column gives the storage requirement, again only as an order of magnitude, and the fifth displays the product of the time and storage requirements from the two previous columns. Judging by the computation time alone, the variable metric strategy seems the best suited for true quadratic problems. In the least favorable case, however, it is more expensive than an indirect method and only faster in special cases. Problems having a very simple structure (e.g., Problem 1.1) can be solved just as well by direct search methods; the time they take is at worst only a constant factor more than that of a second order method.

If the total cost is measured by the product of time and storage requirements, all those strategies that store a two dimensional array of data show up badly, at least for problems with many variables. Since the coordinate methods have shown unreliable convergence, the method of Hooke and Jeeves and the evolution strategies remain as the least costly optimization methods. Their cost does not exceed that of indirect methods. The product of time and storage is not such a bad measure of the total cost; in many computing centers jobs have been, in fact, charged with the product of storage requested in K words and the time in seconds of occupation of the central processing unit (K-core-sec).
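Under such a charging rule, the asymptotic storage gap translates directly into cost. The sketch below illustrates this; the function name and all figures are made up for illustration and are not measurements from the tests.

```python
def k_core_sec(storage_k_words, cpu_seconds):
    """Charge under the K-core-sec rule described in the text:
    storage requested (in K words) times CPU seconds occupied."""
    return storage_k_words * cpu_seconds

# For n = 1000 variables, assuming (purely for illustration) that both
# strategies run for the same 10 s of CPU time:
n = 1000
quasi_newton = k_core_sec(n * n / 1024, 10.0)   # O(n^2) words: stored inverse-Hessian approximation
hooke_jeeves = k_core_sec(n / 1024, 10.0)       # O(n) words: a few n-vectors only
# The O(n^2)-storage method is charged n times as much at equal run time.
```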

A comparison of the two membered and multimembered evolution strategies seems clearly to favor the simpler method. This is not surprising, as several individuals in the multimembered procedure have to find their way towards the optimum. In nature, this process runs in parallel. Already in the early 1970s, first efforts towards constructing multi-processor computers were undertaken (see Barnes et al., 1968; Miranker, 1971).
