Karjalainen, Pasi A. Regularization and Bayesian methods for ...
2. Estimation theory
Clearly, we have $C_{xy} = C_{yx}^T$. The conditional density of $x$ given $y$ is defined to be
\[
p(x \mid y) = \frac{p(x,y)}{p(y)} \tag{2.15}
\]
whenever $p(y) > 0$ and $0$ otherwise. Clearly, we can also write
\[
p(y \mid x) = \frac{p(x,y)}{p(x)} \tag{2.16}
\]
and we obtain
\[
p(x \mid y)\,p(y) = p(y \mid x)\,p(x) \tag{2.17}
\]
This is called Bayes' theorem. The conditional mean of $x$ given $y$ is
\[
\eta_{x|y} = E\{x \mid y\} = \int_{-\infty}^{\infty} x\,p(x \mid y)\,dx \tag{2.18}
\]
which is a function of the random variable $y$. If $x$ and $y$ are random vectors,
\[
E\{f(x,y) \mid y\} = \int_{-\infty}^{\infty} f(x,y)\,p(x \mid y)\,dx \tag{2.19}
\]
Using (2.10), (2.19) and (2.15) we can derive a useful result
\begin{align}
E\{f(x,y)\} &= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} f(x,y)\,p(x,y)\,dx\,dy \tag{2.20}\\
&= \int_{-\infty}^{\infty}\!\left(\int_{-\infty}^{\infty} f(x,y)\,p(x \mid y)\,dx\right) p(y)\,dy \tag{2.21}\\
&= E_y\{E\{f(x,y) \mid y\}\} \tag{2.22}
\end{align}
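As a numerical sketch of (2.15)–(2.22), the following checks Bayes' theorem (2.17) and the iterated expectation (2.20)–(2.22) on a small discrete joint distribution, where the integrals become sums. The joint pmf and the function $f$ are arbitrary illustrative choices, not from the text.

```python
import numpy as np

# Illustrative discrete joint pmf p(x, y): rows index x, columns index y.
# (Arbitrary values chosen only for this sketch; they sum to 1.)
p_xy = np.array([[0.10, 0.20, 0.05],
                 [0.15, 0.05, 0.10],
                 [0.05, 0.20, 0.10]])
p_x = p_xy.sum(axis=1)             # marginal of x
p_y = p_xy.sum(axis=0)             # marginal of y

# Conditional densities, cf. (2.15) and (2.16)
p_x_given_y = p_xy / p_y           # column j holds p(x | y = y_j)
p_y_given_x = p_xy / p_x[:, None]  # row i holds p(y | x = x_i)

# Bayes' theorem (2.17): p(x|y) p(y) = p(y|x) p(x), elementwise
assert np.allclose(p_x_given_y * p_y, p_y_given_x * p_x[:, None])

# Iterated expectation (2.20)-(2.22) with an arbitrary f(x, y)
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([-1.0, 0.0, 1.0])
f = np.outer(xs, ys) ** 2 + 1.0    # f(x, y) = (x y)^2 + 1, purely illustrative

E_direct = np.sum(f * p_xy)                # (2.20): E{f(x,y)} over the joint
E_inner = np.sum(f * p_x_given_y, axis=0)  # (2.19): E{f(x,y) | y} for each y
E_nested = np.sum(E_inner * p_y)           # (2.22): E_y{E{f(x,y) | y}}
assert np.isclose(E_direct, E_nested)
```

The inner sum plays the role of the bracketed inner integral in (2.21); averaging it against $p(y)$ recovers the direct expectation.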
The conditional covariance of $x$ given $y$ is [152]
\[
C_{x|y} = E\left\{(x - \eta_{x|y})(x - \eta_{x|y})^T \,\middle|\, y\right\}
        = E\{xx^T \mid y\} - \eta_{x|y}\eta_{x|y}^T \tag{2.23}
\]
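The two forms of (2.23) can be checked numerically. Below, $x$ is a 2-vector with a small discrete conditional distribution for each value of a binary $y$; the support points and weights are arbitrary illustrative choices, not from the text.

```python
import numpy as np

# Sketch: x is a 2-vector with four support points; y is binary.
x_vals = np.array([[0.0, 0.0],
                   [1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])
# Column j holds the illustrative conditional pmf p(x | y = j); each sums to 1.
p_x_given_y = np.array([[0.4, 0.1],
                        [0.2, 0.3],
                        [0.3, 0.2],
                        [0.1, 0.4]])

for j in range(2):
    w = p_x_given_y[:, j]
    eta = w @ x_vals                    # conditional mean eta_{x|y}, cf. (2.18)
    centered = x_vals - eta
    # Left-hand form of (2.23): E{(x - eta)(x - eta)^T | y}
    C_lhs = (w[:, None] * centered).T @ centered
    # Right-hand form of (2.23): E{x x^T | y} - eta eta^T
    C_rhs = (w[:, None] * x_vals).T @ x_vals - np.outer(eta, eta)
    assert np.allclose(C_lhs, C_rhs)
```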
The random variables $x_i$ are said to be statistically independent if
\[
p(x) = \prod_i p_i(x_i) \tag{2.24}
\]
and jointly Gaussian if their joint density can be written in the form
\[
p(x) = \frac{(\det C_x)^{-1/2}}{(2\pi)^{n/2}}
       \exp\left(-\frac{1}{2}(x - \eta_x)^T C_x^{-1} (x - \eta_x)\right) \tag{2.25}
\]
where $\det C_x$ denotes the determinant of $C_x$. When $x$ is jointly Gaussian with
mean $\eta_x$ and covariance $C_x$, this is denoted by
\[
x \sim N(\eta_x, C_x) \tag{2.26}
\]
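As a sketch tying (2.24) and (2.25) together: implementing the jointly Gaussian density and evaluating it with a diagonal $C_x$ shows the joint density factorizing into a product of univariate normal densities, i.e. independent components. The dimensions, means, and variances below are arbitrary illustrative values.

```python
import numpy as np

def gaussian_pdf(x, eta, C):
    """Jointly Gaussian density of (2.25) for an n-vector x."""
    n = len(eta)
    d = x - eta
    norm = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(C))
    return np.exp(-0.5 * d @ np.linalg.solve(C, d)) / norm

def normal_1d(t, m, s2):
    """Univariate normal density with mean m and variance s2."""
    return np.exp(-0.5 * (t - m) ** 2 / s2) / np.sqrt(2 * np.pi * s2)

# Illustrative 2-d example with diagonal covariance (values are arbitrary).
eta = np.array([1.0, -2.0])
C = np.diag([4.0, 0.25])
x = np.array([0.5, -1.5])

joint = gaussian_pdf(x, eta, C)

# Diagonal C_x means the components are uncorrelated Gaussians, hence
# independent, so (2.24) holds: the joint equals the product of marginals.
product = normal_1d(x[0], eta[0], 4.0) * normal_1d(x[1], eta[1], 0.25)
assert np.isclose(joint, product)
```

For a general (non-diagonal) $C_x$ this factorization fails, which is exactly the distinction (2.24) draws.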