
CHAPTER 3. BAYESIAN INVERSION APPROACH AND ALGORITHMS

objective function that has to be minimized in order to maximize the posterior distribution. Usually, the objective function of an inverse problem has two terms: one accounts for the fit to the observations and the other for the prior information. This can be written as

J(m) = \phi_d + \mu_h \phi_m, \qquad (3.8)

where J is the objective function, m represents the model parameters, and µ_h is the hyper-parameter, which must be chosen so that the solution honors both the data and the prior. The functions φ_d and φ_m derive from the likelihood function and the prior distribution, respectively.
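As an illustration of this two-term structure, the sketch below evaluates equation (3.8) for one common (but here assumed) choice: a least-squares data misfit for φ_d and a quadratic model norm for φ_m.

```python
import numpy as np

def objective(m, d, L, mu_h):
    """Two-term objective in the spirit of equation (3.8):
    data misfit plus hyper-parameter-weighted prior term."""
    phi_d = 0.5 * np.sum((d - L @ m) ** 2)  # assumed least-squares data misfit
    phi_m = 0.5 * np.sum(m ** 2)            # assumed quadratic-norm prior term
    return phi_d + mu_h * phi_m
```

Increasing mu_h pulls the minimizer toward the prior, while decreasing it fits the data more closely; this trade-off is what the hyper-parameter controls.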

Let us take a typical inverse problem in which the relationship between the data and the model parameters is expressed as

d = Lm + n, \qquad (3.9)

where d is the observed data, L is a linear operator that depends on the physical model governing the system, and n accounts for noise in the data as well as theoretical errors. Assuming the noise terms are independent and Gaussian, the noise can be modeled with a multivariate Gaussian probability density function given by

P = P_0 \exp\left\{ -\tfrac{1}{2} n^T C_d^{-1} n \right\}, \qquad (3.10)

where C_d is the noise covariance matrix.
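The forward model (3.9) and noise model (3.10) can be simulated directly. The following sketch is illustrative only: the operator L, the model m_true, and the diagonal C_d are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

n_data, n_model = 50, 20
L = rng.standard_normal((n_data, n_model))  # placeholder linear forward operator
m_true = rng.standard_normal(n_model)       # placeholder "true" model parameters

# Diagonal noise covariance C_d, i.e. independent Gaussian noise as assumed above.
C_d = 0.1 * np.eye(n_data)

# Draw n ~ N(0, C_d) and form the observed data of equation (3.9).
n_noise = rng.multivariate_normal(np.zeros(n_data), C_d)
d = L @ m_true + n_noise
```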

Using equations (3.9) and (3.10), the likelihood function of the data can be expressed as

P(d|m) = P_0 \exp\left\{ -\tfrac{1}{2} (d - Lm)^T C_d^{-1} (d - Lm) \right\}, \qquad (3.11)

where

P_0 = \frac{1}{(2\pi)^{n/2} \, |C_d|^{1/2}}, \qquad (3.12)

and n here denotes the number of data points. Assuming a certain prior distribution, the posterior distribution takes the form

P(m|d) \propto \exp\left\{ -\tfrac{1}{2} (d - Lm)^T C_d^{-1} (d - Lm) - \mu_h R(m) \right\}, \qquad (3.13)

where R(m) is the regularization term obtained from the negative logarithm of the prior. Since maximizing the posterior distribution is equivalent to minimizing its negative logarithm, this leads to an objective function with the same structure as equation (3.8):

J(m) = \tfrac{1}{2} (d - Lm)^T C_d^{-1} (d - Lm) + \mu_h R(m). \qquad (3.14)
