Karjalainen, Pasi A. Regularization and Bayesian methods for ...

2. Estimation theory

For the calculation of the mean we first have to form the equation for the posterior density p(θ|z). First we note that the matrix inversion lemma [69] gives for the inverse of the joint covariance of θ and z

$$
\begin{pmatrix} C_\theta & C_{\theta z} \\ C_{z\theta} & C_z \end{pmatrix}^{-1}
= \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix}
\tag{2.97}
$$

where

$$
C_{11} = (C_\theta - C_{\theta z} C_z^{-1} C_{z\theta})^{-1}
       = C_\theta^{-1} + C_\theta^{-1} C_{\theta z} C_{22} C_{z\theta} C_\theta^{-1}
\tag{2.98}
$$
$$
C_{22} = (C_z - C_{z\theta} C_\theta^{-1} C_{\theta z})^{-1}
       = C_z^{-1} + C_z^{-1} C_{z\theta} C_{11} C_{\theta z} C_z^{-1}
\tag{2.99}
$$
$$
C_{12} = C_{21}^T = -C_{11} C_{\theta z} C_z^{-1} = -C_\theta^{-1} C_{\theta z} C_{22}
\tag{2.100}
$$
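The block-inverse identities (2.98)–(2.100) can be checked numerically. The following NumPy sketch is not from the text; the block sizes and the random positive definite joint covariance are arbitrary choices made only for the check.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric positive definite joint covariance of (theta, z).
# Block sizes n=2, m=3 are arbitrary choices for this check.
n, m = 2, 3
A = rng.standard_normal((n + m, n + m))
C = A @ A.T + (n + m) * np.eye(n + m)          # SPD joint covariance
C_t, C_tz = C[:n, :n], C[:n, n:]               # C_theta, C_theta_z
C_zt, C_z = C[n:, :n], C[n:, n:]               # C_z_theta, C_z

# Direct inverse of the joint covariance, partitioned as in (2.97).
Cinv = np.linalg.inv(C)
C11, C12 = Cinv[:n, :n], Cinv[:n, n:]
C21, C22 = Cinv[n:, :n], Cinv[n:, n:]

Cti = np.linalg.inv(C_t)
Czi = np.linalg.inv(C_z)

# (2.98): both expressions for C11
assert np.allclose(C11, np.linalg.inv(C_t - C_tz @ Czi @ C_zt))
assert np.allclose(C11, Cti + Cti @ C_tz @ C22 @ C_zt @ Cti)

# (2.99): both expressions for C22
assert np.allclose(C22, np.linalg.inv(C_z - C_zt @ Cti @ C_tz))
assert np.allclose(C22, Czi + Czi @ C_zt @ C11 @ C_tz @ Czi)

# (2.100): C12 = C21^T = -C11 C_tz Cz^{-1} = -Ct^{-1} C_tz C22
assert np.allclose(C12, C21.T)
assert np.allclose(C12, -C11 @ C_tz @ Czi)
assert np.allclose(C12, -Cti @ C_tz @ C22)
print("block-inverse identities verified")
```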

The density of z can be written in the form

$$
p(z) \propto \exp\left\{ -\frac{1}{2}
\begin{pmatrix} \theta^T & z^T \end{pmatrix}
\begin{pmatrix} 0 & 0 \\ 0 & C_z^{-1} \end{pmatrix}
\begin{pmatrix} \theta \\ z \end{pmatrix} \right\}
\tag{2.101}
$$

so that the posterior density p(θ|z) is obtained by forming

$$
p(\theta|z) = \frac{p(\theta,z)}{p(z)}
\tag{2.102}
$$
$$
\propto \exp\left\{ -\frac{1}{2}
\begin{pmatrix} \theta^T & z^T \end{pmatrix}
\begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} - C_z^{-1} \end{pmatrix}
\begin{pmatrix} \theta \\ z \end{pmatrix} \right\}
\tag{2.103}
$$
$$
= \exp\left\{ -\frac{1}{2} \left( \theta^T C_{11} \theta
  + 2\theta^T C_{12} z + z^T (C_{22} - C_z^{-1}) z \right) \right\}
\tag{2.104}
$$
$$
= \exp\Big\{ -\frac{1}{2} \big( \theta^T C_{11} \theta
  - 2\theta^T C_{11} C_{\theta z} C_z^{-1} z
\tag{2.105}
$$
$$
\qquad\qquad + z^T C_z^{-1} C_{z\theta} C_{11} C_{\theta z} C_z^{-1} z \big) \Big\}
\tag{2.106}
$$
$$
= \exp\left\{ -\frac{1}{2}
  (\theta - C_{\theta z} C_z^{-1} z)^T C_{11} (\theta - C_{\theta z} C_z^{-1} z) \right\}
\tag{2.107}
$$
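The completing-the-square step from (2.104) to (2.107) can also be verified numerically: for any θ and z, the expanded quadratic form equals the completed square. The sketch below uses the same arbitrary block sizes and random SPD covariance as before; none of these values come from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random SPD joint covariance of (theta, z); sizes are arbitrary.
n, m = 2, 3
A = rng.standard_normal((n + m, n + m))
C = A @ A.T + (n + m) * np.eye(n + m)
C_t, C_tz = C[:n, :n], C[:n, n:]
C_zt, C_z = C[n:, :n], C[n:, n:]

Cinv = np.linalg.inv(C)
C11, C12, C22 = Cinv[:n, :n], Cinv[:n, n:], Cinv[n:, n:]
Czi = np.linalg.inv(C_z)

theta = rng.standard_normal(n)
z = rng.standard_normal(m)

# Exponent of (2.104): theta^T C11 theta + 2 theta^T C12 z + z^T (C22 - Cz^{-1}) z
q_expanded = theta @ C11 @ theta + 2 * theta @ C12 @ z + z @ (C22 - Czi) @ z

# Exponent of (2.107): completed square around C_tz Cz^{-1} z
r = theta - C_tz @ Czi @ z
q_square = r @ C11 @ r

assert np.allclose(q_expanded, q_square)
print("quadratic forms (2.104) and (2.107) agree")
```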

This is clearly a Gaussian density. The Gaussian conditional density is of the form

$$
p(\theta|z) \propto \exp\left\{ -\frac{1}{2}
  (\theta - E\{\theta|z\})^T C_{\theta|z}^{-1} (\theta - E\{\theta|z\}) \right\}
\tag{2.108}
$$

Since $\hat\theta_{MS} = E\{\theta|z\}$ and $C_{\theta|z} = C_{\tilde\theta_{MS}}$, comparing with (2.107) we can conclude

$$
\hat\theta_{MS} = C_{\theta z} C_z^{-1} z
\tag{2.109}
$$
$$
C_{\tilde\theta_{MS}} = C_\theta - C_{\theta z} C_z^{-1} C_{z\theta}
\tag{2.110}
$$

which is exactly the linear minimum mean square estimator.
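A Monte Carlo check of this result: drawing zero-mean jointly Gaussian samples of (θ, z), the estimator of (2.109) should have an empirical error covariance matching (2.110). The sample size, block sizes, and random covariance below are arbitrary choices for the check, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# Zero-mean jointly Gaussian (theta, z) with a random SPD joint covariance.
n, m = 2, 3
A = rng.standard_normal((n + m, n + m))
C = A @ A.T + (n + m) * np.eye(n + m)
C_t, C_tz = C[:n, :n], C[:n, n:]
C_zt, C_z = C[n:, :n], C[n:, n:]

N = 200_000
samples = rng.multivariate_normal(np.zeros(n + m), C, size=N)
theta, z = samples[:, :n], samples[:, n:]

# (2.109): theta_hat = C_tz Cz^{-1} z, applied row-wise to all samples.
# solve(C_z, C_zt) = Cz^{-1} C_zt, so z @ solve(C_z, C_zt) = (C_tz Cz^{-1} z)^T per row.
theta_hat = z @ np.linalg.solve(C_z, C_zt)

# Empirical error covariance vs. (2.110): C_t - C_tz Cz^{-1} C_zt
err = theta - theta_hat
emp_cov = err.T @ err / N
C_err = C_t - C_tz @ np.linalg.solve(C_z, C_zt)

assert np.allclose(emp_cov, C_err, atol=0.05 * np.abs(C_err).max() + 0.05)
print("Monte Carlo error covariance matches (2.110)")
```

Note that the estimator uses only second-order statistics (the covariance blocks), which is why, for Gaussian data, the conditional mean coincides with the linear minimum mean square estimator.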
