

Figure 5.11. The negative gradient and Newton vectors of a quadratic function. (Reprinted with permission of Macmillan Publishing Co., Inc. from Introduction to Optimization Techniques by M. Aoki. Copyright © 1971 by Masanao Aoki.)

generalization easier to follow. The reader should not miss this opportunity to "see" what differential calculus has to say about multidimensional functions, Taylor series representations, and the idea of linearization in the case of Newton's method. The concepts of single-variable functions were stated so that this transition could be related to calculus that every engineer should recall.

Newton's method prescribes a change in each component of the variable space that converges to a minimum in just one step for quadratic functions: the Newton vector, or step, proceeds directly to the minimum (the origin, as shown in Figure 5.11) in a single move. But what if the function F(x) is not quadratic? And what if the second partial derivatives are not known or are inconvenient to compute? Might not a sequence of moves in the direction of steepest descent (the negative gradient) lead to the minimum? In how many steps? These are the questions considered next.
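The one-step property is easy to verify numerically. The following sketch (not from the text; the matrix Q and starting point are hypothetical examples) applies the Newton step x1 = x0 - H^(-1) grad F(x0) to the quadratic F(x) = (1/2) x'Qx, whose minimum is at the origin:

    import numpy as np

    # Hypothetical quadratic F(x) = 0.5 * x.T @ Q @ x, minimum at the origin.
    Q = np.array([[4.0, 1.0],
                  [1.0, 2.0]])

    def gradient(x):
        return Q @ x                 # the gradient of F is Q x for this quadratic

    hessian = Q                      # the Hessian of a quadratic is constant

    x0 = np.array([2.0, -1.0])       # arbitrary starting point
    x1 = x0 - np.linalg.solve(hessian, gradient(x0))   # the Newton step
    print(x1)                        # [0. 0.] -- the minimum, reached in one step

A steepest-descent step from the same x0 moves along the negative gradient instead and in general requires many such moves, which is the question Section 5.2 takes up.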

5.2. Conjugate Gradient Search<br />

Gradient optimization methods assume the availability of partial derivatives. Usually, finding first partial derivatives adds considerable complexity to the programming task or slows program execution; second partial derivatives are even less convenient to obtain. Fortunately, there are a number of search methods that do not require second derivatives; the popular conjugate gradient methods belong to this class. Methods that require only function values, without any derivatives, will be mentioned briefly in Section 5.7.
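As a hedged preview of what follows (a sketch, not the book's own program), the Fletcher-Reeves form of the conjugate gradient method builds each new search direction from the current gradient and the previous direction, so only first derivatives are needed; on an n-variable quadratic it reaches the minimum in n linear searches. The matrix Q and starting point below are hypothetical:

    import numpy as np

    # Hypothetical quadratic F(x) = 0.5 * x.T @ Q @ x; only first derivatives used.
    Q = np.array([[4.0, 1.0],
                  [1.0, 2.0]])
    def grad(x):
        return Q @ x

    x = np.array([2.0, -1.0])
    g = grad(x)
    d = -g                                  # first direction: steepest descent
    for _ in range(2):                      # n steps suffice for an n-variable quadratic
        alpha = -(g @ d) / (d @ Q @ d)      # exact line-search step length (quadratic case)
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves ratio
        d = -g_new + beta * d               # next direction, conjugate to the last
        g = g_new
    print(x)                                # essentially [0. 0.], the minimum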

Almost all optimization methods select a sequence of directions leading to a minimum (or maximum) function value. A minimum in any particular direction is located by varying just one variable, usually some scalar that determines the distance from the last "turning" point; this procedure is called a linear search, as sketched below.
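A minimal sketch of such a linear search, assuming SciPy's one-dimensional minimizer as the scalar solver (the objective F, point x, and direction d below are hypothetical):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def F(x):                                # hypothetical objective function
        return (x[0] - 1.0)**2 + 4.0 * (x[1] + 2.0)**2

    x = np.array([0.0, 0.0])                 # the last "turning" point
    d = np.array([1.0, -1.0])                # the current search direction

    # The linear search: vary only the scalar t that fixes the distance along d.
    t_best = minimize_scalar(lambda t: F(x + t * d)).x
    x_next = x + t_best * d                  # the next turning point
    print(t_best, x_next)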

The linear algebra jargon and the special case of linear searches on quadratic surfaces will be described. Several elementary search direction
