Principles and Applications of Microearthquake Networks
5.4. Nonlinear Optimization
Fletcher, 1980; Gill et al., 1981). Unfortunately, no single method has been found to solve all problems effectively. The subject is being actively pursued by scientists in many disciplines. In this section, we briefly describe some elementary aspects of nonlinear optimization.
5.4.1. Problem Definition
The basic mathematical problem in optimization is to minimize a scalar quantity $\phi$ which is the value of a function $F(x_1, x_2, \ldots, x_n)$ of $n$ independent variables. These independent variables, $x_1, x_2, \ldots, x_n$, must be adjusted to obtain the minimum required, i.e.,

(5.64) $\text{minimize } \{\phi = F(x_1, x_2, \ldots, x_n)\}$

The function $F$ is referred to as the objective function because its value $\phi$ is the quantity to be minimized.
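As a concrete illustration of an objective function, the following sketch uses the Rosenbrock function, a standard two-variable test problem in nonlinear optimization (the choice of test function is ours, not from the text):

```python
# Illustrative objective function phi = F(x1, x2): the Rosenbrock
# function, a standard nonlinear optimization test problem (this
# particular F is our own choice, not from the text).  Its global
# minimum is phi = 0 at x = (1, 1).
def F(x):
    x1, x2 = x
    return (1.0 - x1) ** 2 + 100.0 * (x2 - x1 ** 2) ** 2

print(F([1.0, 1.0]))  # 0.0 at the global minimum
print(F([0.0, 0.0]))  # 1.0 away from it
```

Although $F$ is simple to write down, its long curved valley makes it hard for many iterative methods to minimize, which is why it is a popular benchmark.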
It is useful to consider the $n$ independent variables, $x_1, x_2, \ldots, x_n$, as a vector in $n$-dimensional Euclidean space,

(5.65) $\mathbf{x} = (x_1, x_2, \ldots, x_n)^{\mathrm{T}}$

where the superscript T denotes the transpose of a vector or a matrix.
During the optimization process, the coordinates of $\mathbf{x}$ will take on successive values as adjustments are made. Each set of adjustments to $\mathbf{x}$ is termed an iteration, and a number of iterations are generally required before a minimum is reached. In order to start an iterative procedure, an initial estimate of $\mathbf{x}$ must be given. After $K$ iterations, we denote the value of $\phi$ by $\phi^{(K)}$, and the value of $\mathbf{x}$ by $\mathbf{x}^{(K)}$. Changes in the value of $\mathbf{x}$ between two successive iterations are just the adjustments applied. These adjustments may also be thought of as components of an adjustment vector,

(5.66) $\delta\mathbf{x} = (\delta x_1, \delta x_2, \ldots, \delta x_n)^{\mathrm{T}}$
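The iterative scheme can be sketched as follows: starting from an initial estimate $\mathbf{x}^{(0)}$, each iteration computes an adjustment vector $\delta\mathbf{x}$ and adds it to the current $\mathbf{x}$. In this sketch the adjustment is a fixed-step gradient descent on a simple quadratic $F$; both the particular $F$ and the choice of descent rule are illustrative assumptions, not from the text:

```python
# Sketch of the iterative process: x^(0) is the initial estimate, and
# each iteration applies an adjustment vector dx (Eq. 5.66).  The
# quadratic F and the gradient-descent adjustment rule are our own
# illustrative choices; F has its minimum phi = 0 at x = (3, -1).

def F(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def gradient(x):
    return [2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)]

x = [0.0, 0.0]                 # initial estimate x^(0)
for k in range(100):           # K iterations
    g = gradient(x)
    dx = [-0.1 * gi for gi in g]              # adjustment vector
    x = [xi + dxi for xi, dxi in zip(x, dx)]  # x^(k+1) = x^(k) + dx

print(x)     # close to the minimizer (3, -1)
print(F(x))  # phi^(K), close to 0
```

Each pass of the loop is one iteration in the sense of the text; the adjustments $\delta\mathbf{x}$ shrink as $\mathbf{x}^{(K)}$ approaches the minimum.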
The goal <strong>of</strong> optimization is to find after K iterations a xCK) that gives a<br />
minimum value $(K) <strong>of</strong> the objective function F. A point x(~) is called a<br />
global minimum if it gives the lowest possible value <strong>of</strong> F. In general, a<br />
global minimum need not be unique; <strong>and</strong> in practice, it is very difficult to<br />
tell if a global minimum has been reached by an iterative procedure. We<br />
may only claim that a minimum within a local area <strong>of</strong> search has been<br />
obtained. Even such a local minimum may not be unique locally.<br />
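The distinction between local and global minima can be seen by running the same iterative descent from two different initial estimates: each run stops at the minimum nearest its starting point. The one-variable function below (our own illustrative choice, not from the text) has two minima, and only one of them is global:

```python
# F(x) = x^4 - 2x^2 + 0.5x has two minima: a global minimum near
# x = -1.06 and a merely local one near x = +0.93.  The same descent
# procedure converges to different minima from different starting
# points, so reaching "a" minimum does not guarantee the global one.
# (The function and step size are illustrative choices of ours.)

def F(x):
    return x ** 4 - 2.0 * x ** 2 + 0.5 * x

def dF(x):
    return 4.0 * x ** 3 - 4.0 * x + 0.5

def descend(x0, step=0.05, iters=200):
    x = x0
    for _ in range(iters):
        x -= step * dF(x)   # one iteration's adjustment
    return x

left = descend(-1.5)   # ends near x = -1.06, the global minimum
right = descend(+1.5)  # ends near x = +0.93, a local minimum only
print(left, F(left))
print(right, F(right))
```

Both runs satisfy the stopping behavior of an iterative procedure, yet $F(\text{right}) > F(\text{left})$: without further information, neither run alone can tell us whether its minimum is global.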
Methods in optimization may be divided into three classes: (1) search methods, which use function evaluations only; (2) methods which in addition require gradient information, or first derivatives; and (3) methods which require function, gradient, and second-derivative information. The appeal of each class depends on the particular problem and the available