Sirgue, Laurent, 2003. Inversion de la forme d'onde dans le ...


2.2. INVERSION OF LINEAR PROBLEMS

local methods will be identical to the generalised inverse solution. This solution nevertheless requires the dual implementation of the SVD and Gauss-Newton in order to discriminate the model null-space. An alternative is to damp the Hessian by adding a constant along its diagonal, such as

H_d = H + κI (2.46)

where κ is a real, positive number. Because H_d does not have eigenvectors with zero eigenvalues, it is always invertible. Applying a damping term to the Hessian is equivalent to finding the Gauss-Newton solution that minimises a new weighted misfit function defined as

E = (1/2) (∆d^t ∆d + κ ∆m^t ∆m) (2.47)

which minimises both the data residuals and the length of the model update in proportion to κ.
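As an illustrative sketch (not from the thesis), the equivalence between the damped Hessian of equation (2.46) and the weighted misfit of equation (2.47) can be checked numerically; the matrix G stands in for the Fréchet derivative, and all names and values are assumptions:

```python
import numpy as np

# Sketch: damping the Gauss-Newton Hessian as in eq. (2.46).
rng = np.random.default_rng(0)
G = rng.standard_normal((6, 4))
G[:, 3] = G[:, 2]              # duplicate column: model null-space, H singular
d = rng.standard_normal(6)     # data residuals, Delta d

H = G.T @ G                    # Gauss-Newton Hessian (rank-deficient here)
kappa = 0.1
H_d = H + kappa * np.eye(4)    # eq. (2.46): invertible for any kappa > 0

dm = np.linalg.solve(H_d, G.T @ d)   # damped Gauss-Newton update

# The same update minimises the weighted misfit of eq. (2.47),
# E = 1/2 (dd^t dd + kappa dm^t dm), whose normal equations are
# (H + kappa I) dm = G^t d; the gradient of E vanishes at dm:
grad_E = G.T @ (G @ dm - d) + kappa * dm
print(np.allclose(grad_E, 0))
```

The duplicated column makes H singular, so the undamped normal equations would have no unique solution; the κI term restores invertibility while penalising the model update length, exactly as the text describes.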

Gauss-Newton solution and generalised inverse<br />

It can easily be demonstrated that the matrix H_p^{-1} F_p^t is in fact the generalised inverse obtained from the SVD, since we have

H_p^{-1} F_p^t = V_p Λ_p^{-1} U_p^t = G†

The Gauss-Newton solution is in fact equivalent to the solution given by the generalised inverse when there is no a priori knowledge of the model (m_o = 0).
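This identity can be checked numerically with a small rank-deficient matrix; the following is a sketch under assumed shapes, with F, U_p, Λ_p, V_p as illustrative names for the truncated SVD factors:

```python
import numpy as np

# Sketch: verify H_p^{-1} F_p^t = V_p Lambda_p^{-1} U_p^t = G-dagger.
rng = np.random.default_rng(1)
F = rng.standard_normal((6, 4))
F[:, 3] = F[:, 2]                     # rank-deficient: model null-space

U, s, Vt = np.linalg.svd(F, full_matrices=False)
p = int(np.sum(s > 1e-10))            # number of non-zero singular values
Up, sp, Vp = U[:, :p], s[:p], Vt[:p].T

# Generalised inverse built from the truncated SVD
G_dagger = Vp @ np.diag(1.0 / sp) @ Up.T

# Gauss-Newton operator restricted to the non-null space:
# F_p = U_p Lambda_p V_p^t and H_p^{-1} = V_p Lambda_p^{-2} V_p^t
Fp = Up @ np.diag(sp) @ Vp.T
Hp_inv = Vp @ np.diag(1.0 / sp**2) @ Vp.T
gauss_newton_op = Hp_inv @ Fp.T

print(np.allclose(gauss_newton_op, G_dagger))
print(np.allclose(G_dagger, np.linalg.pinv(F)))
```

The algebra collapses because V_p^t V_p = I, so H_p^{-1} F_p^t = V_p Λ_p^{-2} V_p^t · V_p Λ_p U_p^t = V_p Λ_p^{-1} U_p^t, which is the Moore-Penrose pseudo-inverse restricted to the non-null space.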

2.2.2.4 Preconditioned gradient methods

We have seen that the components of the gradient along the eigenvectors are the components of the true model perturbation stretched by the eigenvalues of H. In order to mitigate this effect without recourse to the inverse Hessian, it is possible to precondition the gradient direction by:

1. Adding a vector to the gradient
2. Weighting the data residuals
3. Multiplying the gradient by a matrix

The preconditioned gradient direction can always be associated with an equivalent misfit function E, in which the preconditioned gradient is the steepest-ascent direction. Therefore, an equivalence exists between modifying the misfit function and modifying the gradient direction.
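As an illustration of the third option, the sketch below multiplies the gradient by a diagonal approximation of the inverse Hessian. A deliberately simple diagonal forward operator is used (all names and values are assumptions, chosen so the eigenvalue stretch and its removal are exact):

```python
import numpy as np

# Sketch: precondition the gradient by a matrix P approximating H^{-1}.
G = np.diag([1.0, 2.0, 4.0, 8.0, 16.0])   # widely spread singular values
m_true = np.ones(5)
d = G @ m_true
H = G.T @ G                               # eigenvalues span 1 to 256

def gradient_descent(P, step, n_iter=300):
    """Iterate m <- m - step * P * gradient of the misfit."""
    m = np.zeros(5)
    for _ in range(n_iter):
        m = m - step * P @ (G.T @ (G @ m - d))
    return m

# Plain steepest descent: step limited by the largest eigenvalue of H.
m_plain = gradient_descent(np.eye(5), step=1.0 / 256.0)
# Preconditioned: P = diag(H)^{-1}, which equals H^{-1} for this diagonal H.
m_prec = gradient_descent(np.diag(1.0 / np.diag(H)), step=1.0)

err_plain = np.linalg.norm(m_plain - m_true)
err_prec = np.linalg.norm(m_prec - m_true)
print(err_prec < err_plain)   # preconditioning undoes the eigenvalue stretch
```

Because the plain gradient scales each model component by its eigenvalue of H, the weakly constrained components converge very slowly; the preconditioned direction rescales them, which is exactly the mitigation the text describes.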
