

If the objective function is quadratic in x, the approximation used in Newton's method is exact, and the minimum can be reached from any x in a single step. For more general objective functions, an iterative procedure must be used with an initial estimate of x sufficiently near the minimum in order for Newton's method to converge. Since this cannot be guaranteed, modern computer codes for optimization usually include modifications to ensure that (1) the δx generated at each iteration is a descent or downhill direction, and (2) an approximate minimum along this direction is found. The general form of the iteration is δx = -αGg, where G is a positive-definite approximation to H⁻¹ based on second-derivative information, and α represents the distance to step along the -Gg direction. Alternatively, as iterations proceed, an approximate Hessian matrix may be built up without computing second derivatives. This is known as the quasi-Newton or variable metric method. For more details on the derivative methods and their implementations, readers may refer, for example, to R. Fletcher (1980) and Gill et al. (1981).
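As a concrete illustration (not from the original text), the following Python sketch implements one safeguarded form of the iteration δx = -αGg, using a BFGS update for G and a simple backtracking line search. The function name quasi_newton_minimize, the Armijo constant 1e-4, and the tolerances are illustrative assumptions.

```python
import numpy as np

def quasi_newton_minimize(f, grad, x0, tol=1e-8, max_iter=100):
    """Quasi-Newton (BFGS) sketch: delta_x = -alpha * G @ g, where G is
    a positive-definite approximation to the inverse Hessian H^-1 built
    up from gradient differences, without second derivatives."""
    x = np.asarray(x0, dtype=float)
    G = np.eye(x.size)                       # initial positive-definite G
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:          # converged: gradient ~ 0
            break
        d = -G @ g                           # descent direction -G g
        # Backtracking line search for alpha: an approximate minimum
        # along the -G g direction (Armijo sufficient-decrease test).
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
        s = alpha * d                        # the step delta_x
        x_new, g_new = x + s, grad(x + s)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                       # BFGS update; keeps G pos.-def.
            rho = 1.0 / sy
            I = np.eye(x.size)
            G = (I - rho * np.outer(s, y)) @ G @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x
```

The sᵀy > 0 guard is what keeps G positive-definite, so that -Gg remains a descent direction at every iteration.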

5.4.3. Nonlinear Least Squares

The application of optimization techniques to model fitting by least squares may be considered as a method for minimizing errors of fit (or residuals) at a set of m data points, where the coordinates of the kth data point are (x)_k, k = 1, 2, . . . , m. The objective function for the present problem is

(5.77)    $F(x) = \sum_{k=1}^{m} r_k^2(x)$

where r_k(x) denotes the evaluation of the residual r(x) at the kth data point. We may consider r_k(x), k = 1, 2, . . . , m, as components of a vector in an m-dimensional Euclidean space and write

(5.78)    $r = [r_1(x), r_2(x), \ldots, r_m(x)]^T$

Therefore, Eq. (5.77) becomes

(5.79)    $F(x) = r^T r$

To find the gradient vector g, we perform partial differentiation on Eq. (5.77) with respect to x_i, i = 1, 2, . . . , n, and obtain

(5.80)    $\partial F(x)/\partial x_i = \sum_{k=1}^{m} 2\, r_k(x)\,[\partial r_k(x)/\partial x_i], \quad i = 1, 2, \ldots, n$
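As a hedged illustration of Eqs. (5.77)-(5.80), the Python sketch below evaluates F(x) = rᵀr and its gradient for a hypothetical two-parameter exponential model; the residual model, the names residuals, objective, and gradient, and the data arrays t and y are assumptions for the example, not part of the original text. The sum in Eq. (5.80) is computed compactly as g = 2Jᵀr, with J the m × n Jacobian of the residual vector.

```python
import numpy as np

# Hypothetical residual model: r_k(x) = x[0]*exp(-x[1]*t_k) - y_k at the
# kth data point (t_k, y_k); here n = 2 parameters and m = len(t) points.
def residuals(x, t, y):
    return x[0] * np.exp(-x[1] * t) - y        # vector r of Eq. (5.78)

def objective(x, t, y):
    r = residuals(x, t, y)
    return r @ r                               # F(x) = r^T r, Eq. (5.79)

def gradient(x, t, y):
    """Eq. (5.80): dF/dx_i = sum_k 2 r_k(x) dr_k/dx_i, i.e., g = 2 J^T r,
    with J the m-by-n Jacobian of the residual vector."""
    r = residuals(x, t, y)
    J = np.column_stack((
        np.exp(-x[1] * t),                     # dr_k/dx_0
        -x[0] * t * np.exp(-x[1] * t),         # dr_k/dx_1
    ))
    return 2.0 * J.T @ r
```

With t and y fixed, lambda x: objective(x, t, y) and lambda x: gradient(x, t, y) can be passed directly to a descent routine such as the quasi-Newton sketch given earlier.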
