
5.3. Solving Inverse Problems

independent columns. Such a definition is very difficult to apply in practice for a general matrix. However, if the matrix is triangular, the rank is simply the number of nonzero diagonal elements. Since an orthogonal transformation, such as that in the singular value decomposition, does not affect the rank, we see right away that the rank of A is equal to the rank of S in its singular value decomposition. Hence a practical definition of the rank of a matrix is the number of its nonzero singular values. Because of finite precision in any computer, it may be difficult to distinguish between a small singular value and a zero one. Therefore, we define the effective rank as the number of singular values greater than some prescribed tolerance which reflects the accuracy of the machine and the data.
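This effective-rank definition can be sketched in a few lines of NumPy; the function name and the default tolerance heuristic (machine epsilon scaled by the matrix dimension and the largest singular value) are our own choices, not from the text:

```python
import numpy as np

def effective_rank(A, tol=None):
    """Effective rank: number of singular values above a tolerance.

    The default `tol` is an assumed heuristic tying the cutoff to
    machine precision, in the spirit of the text.
    """
    s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
    if tol is None:
        # heuristic: machine epsilon scaled by dimension and largest sigma
        tol = max(A.shape) * np.finfo(A.dtype).eps * s[0]
    return int(np.sum(s > tol))

# a 3x3 matrix of exact rank 2: the third row is the sum of the first two
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])
print(effective_rank(A))  # -> 2; the tiny third singular value is discarded
```

Without the tolerance, roundoff would make the third singular value nonzero (of order 10⁻¹⁵) and the naive count of nonzero singular values would report full rank.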

Reducing the rank of a matrix has the effect of improving the variance of the solution. In Eq. (5.59), the singular value σᵢ enters into the covariance matrix as 1/σᵢ². Therefore, if σᵢ is small, the covariance, and hence the variance, will be large. By discarding small singular values, we decrease the variance. However, we pay the price of poorer resolution. For a full-rank matrix, the resolution matrix R is simply the identity matrix and we have full resolution. Decreasing the rank moves R further away from being an identity matrix.
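This variance–resolution trade-off can be illustrated with a truncated-SVD solver; the function name and the diagonal toy matrix below are our own illustration, under the usual convention that the truncated solution is x = VₚSₚ⁻¹Uₚᵀb and the resolution matrix is R = VₚVₚᵀ:

```python
import numpy as np

def truncated_svd_solve(A, b, p):
    """Least-squares solution keeping only the p largest singular values.

    Returns the solution x and the resolution matrix R = V_p V_p^T,
    which equals the identity only at full rank.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Up, sp, Vtp = U[:, :p], s[:p], Vt[:p, :]
    x = Vtp.T @ ((Up.T @ b) / sp)   # x = V_p S_p^{-1} U_p^T b
    R = Vtp.T @ Vtp                 # resolution matrix
    return x, R

# one large and one tiny singular value: the tiny one blows up the solution
A = np.array([[2.0, 0.0],
              [0.0, 1e-6]])
b = np.array([1.0, 1.0])

x_full, R_full = truncated_svd_solve(A, b, p=2)   # R = I, but x has a 1e6 term
x_trunc, R_trunc = truncated_svd_solve(A, b, p=1) # small, stable x; R is not I
```

Keeping both singular values gives full resolution (R is the identity) but a solution component of 10⁶ driven by the tiny σ; discarding it gives the stable solution (0.5, 0) at the cost of zero resolution in the second direction.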

Since our model and data have uncertainties, we have to consider the condition number of the matrix A. In practice, we are not solving Ax = b, but Ax = b ± Δb, assuming A to be exact for the moment. It can be shown (Forsythe et al., 1977, pp. 41-43) that the error Δx in x resulting from the error Δb in b is given by

    ‖Δx‖ / ‖x‖ ≤ γ (‖Δb‖ / ‖b‖)    (5.61)

where γ is the condition number of A, and ‖·‖ denotes the vector norm.

Forsythe et al. (1977, pp. 205-206) also show that we can define the condition number of A by

    γ = σmax / σmin    (5.62)

i.e., γ is the ratio of the largest and smallest singular values of A. We see from Eq. (5.61) that the condition number γ is an upper bound on the magnification of the relative error in our observations. In microearthquake studies, the relative error in observations is about 10⁻³. Therefore, if γ is 10³, then the error in our solution may be comparable to the solution itself.
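The error bound of Eq. (5.61) can be checked numerically; the nearly singular 2×2 matrix below is our own example, chosen so that a tiny data perturbation produces a solution error comparable to the solution itself:

```python
import numpy as np

# condition number as the ratio of largest to smallest singular value,
# as in Eq. (5.62); this matrix has gamma of roughly 4e4
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
s = np.linalg.svd(A, compute_uv=False)
gamma = s[0] / s[-1]              # same value as np.linalg.cond(A)

b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)         # exact solution is (1, 1)
db = np.array([0.0, 1e-4])        # small perturbation of the data
dx = np.linalg.solve(A, b + db) - x

rel_err_x = np.linalg.norm(dx) / np.linalg.norm(x)
rel_err_b = np.linalg.norm(db) / np.linalg.norm(b)
assert rel_err_x <= gamma * rel_err_b  # the bound of Eq. (5.61) holds
```

Here a relative data error of about 3.5 × 10⁻⁵ produces a relative solution error of about 1, i.e., the perturbed solution (0, 2) bears little resemblance to (1, 1), exactly the danger the text describes when γ approaches the reciprocal of the data accuracy.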

In conclusion, the singular value decomposition (SVD) approach to generalized inversion offers a powerful analysis of our problem. Numerical computation of the SVD has been worked out by Golub and Reinsch (1970), and an SVD subroutine is generally available as a library routine in most computer centers. Although SVD analysis requires more computational time than, say, solving the normal equations by the Cholesky
