
\begin{align*}
\frac{(x-\mu)^2}{\sigma^2} + \frac{(\mu-\mu_0)^2}{\sigma_0^2}
&= \frac{x^2}{\sigma^2} + \frac{\mu_0^2}{\sigma_0^2} + \frac{\sigma^2+\sigma_0^2}{\sigma^2\sigma_0^2}\,\mu^2 - 2\,\frac{\sigma^2\mu_0+\sigma_0^2 x}{\sigma^2\sigma_0^2}\,\mu \tag{4.31}\\
&= \frac{x^2}{\sigma^2} + \frac{\mu_0^2}{\sigma_0^2} + \left(\mu^2 - 2\,\frac{\sigma^2\mu_0+\sigma_0^2 x}{\sigma^2+\sigma_0^2}\,\mu\right)\bigg/\frac{\sigma^2\sigma_0^2}{\sigma^2+\sigma_0^2}\\
&= \frac{x^2}{\sigma^2} + \frac{\mu_0^2}{\sigma_0^2} - \frac{(\sigma^2\mu_0+\sigma_0^2 x)^2}{\sigma^2\sigma_0^2(\sigma^2+\sigma_0^2)} + \left(\mu - \frac{\sigma^2\mu_0+\sigma_0^2 x}{\sigma^2+\sigma_0^2}\right)^2\bigg/\frac{\sigma^2\sigma_0^2}{\sigma^2+\sigma_0^2}
\end{align*}

The last quadratic in the expression above corresponds to the exponential in a normal distribution with a variance of $\sigma^2\sigma_0^2/(\sigma^2+\sigma_0^2)$, so we adjust the joint PDF so that we can integrate out $\mu$, leaving

\[
f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma\sigma_0}\left(\frac{\sigma^2\sigma_0^2}{\sigma^2+\sigma_0^2}\right)^{1/2}
\exp\left(-\frac{1}{2}\left(\frac{x^2}{\sigma^2} + \frac{\mu_0^2}{\sigma_0^2} - \frac{(\sigma^2\mu_0+\sigma_0^2 x)^2}{\sigma^2\sigma_0^2(\sigma^2+\sigma_0^2)}\right)\right).
\]
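As a check, the exponent here simplifies: putting the three terms over the common denominator $\sigma^2\sigma_0^2(\sigma^2+\sigma_0^2)$, everything except $\sigma^2\sigma_0^2(x-\mu_0)^2$ cancels, so

\[
\frac{x^2}{\sigma^2} + \frac{\mu_0^2}{\sigma_0^2} - \frac{(\sigma^2\mu_0+\sigma_0^2 x)^2}{\sigma^2\sigma_0^2(\sigma^2+\sigma_0^2)} = \frac{(x-\mu_0)^2}{\sigma^2+\sigma_0^2},
\]

and the prefactor reduces to $1/\sqrt{2\pi(\sigma^2+\sigma_0^2)}$; that is, marginally $X \sim \mathrm{N}(\mu_0,\,\sigma^2+\sigma_0^2)$.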

Combining the exponential in this expression with (4.31), we get the exponential in the conditional posterior PDF, again ignoring the $-1/2$ while factoring out $\sigma^2\sigma_0^2/(\sigma^2+\sigma_0^2)$, as

\[
\left(\mu^2 - 2\,\frac{\sigma^2\mu_0+\sigma_0^2 x}{\sigma^2+\sigma_0^2}\,\mu + \frac{(\sigma^2\mu_0+\sigma_0^2 x)^2}{(\sigma^2+\sigma_0^2)^2}\right)\bigg/\frac{\sigma^2\sigma_0^2}{\sigma^2+\sigma_0^2}.
\]

Finally, we get the conditional posterior PDF,

\[
f_{M|x}(\mu) = \frac{1}{\sqrt{2\pi}\,\sqrt{\sigma^2\sigma_0^2/(\sigma^2+\sigma_0^2)}}
\exp\left(-\frac{1}{2}\left(\mu - \frac{\sigma^2\mu_0+\sigma_0^2 x}{\sigma^2+\sigma_0^2}\right)^2\bigg/\frac{\sigma^2\sigma_0^2}{\sigma^2+\sigma_0^2}\right),
\]

and so we see that the posterior is a normal distribution with a mean that is a weighted average of the prior mean and the observation $x$, and a variance that combines the prior variance and the known variance of the observable $X$: the posterior precision (reciprocal variance) is the sum of the prior precision and the precision of $X$.
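Written out explicitly, the posterior mean and variance are

\[
\mathrm{E}(M\mid x) = \frac{\sigma^2}{\sigma^2+\sigma_0^2}\,\mu_0 + \frac{\sigma_0^2}{\sigma^2+\sigma_0^2}\,x,
\qquad
\mathrm{V}(M\mid x) = \frac{\sigma^2\sigma_0^2}{\sigma^2+\sigma_0^2} = \left(\frac{1}{\sigma_0^2}+\frac{1}{\sigma^2}\right)^{-1},
\]

so the weight on the observation grows as the prior variance $\sigma_0^2$ grows relative to $\sigma^2$, and the posterior variance is smaller than either variance alone.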

This example, although quite simple, indicates that there can be many tedious manipulations. It also illustrates why it is easier to work with PDFs without normalizing constants.
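Because the manipulations are tedious and error-prone, a brute-force numerical check can be reassuring. The following minimal sketch (an added illustration, with arbitrary values for $x$, $\mu_0$, $\sigma^2$, and $\sigma_0^2$) compares the closed-form posterior mean and variance derived above against a direct grid evaluation of the unnormalized posterior.

```python
import numpy as np

# Arbitrary illustrative values (assumptions for this sketch).
x, mu0 = 1.3, 0.0            # observation and prior mean
sigma2, sigma02 = 2.0, 1.0   # known variance of X and prior variance

# Closed-form posterior mean and variance from the derivation above.
post_mean = (sigma2 * mu0 + sigma02 * x) / (sigma2 + sigma02)
post_var = sigma2 * sigma02 / (sigma2 + sigma02)

# Brute-force check: evaluate likelihood(x | mu) * prior(mu) on a fine
# grid, normalize the weights, and compute mean and variance numerically.
mu = np.linspace(-10.0, 10.0, 200_001)
log_joint = (-0.5 * (x - mu) ** 2 / sigma2
             - 0.5 * (mu - mu0) ** 2 / sigma02)
w = np.exp(log_joint - log_joint.max())  # unnormalized posterior weights
w /= w.sum()                             # normalize over the grid
grid_mean = (mu * w).sum()
grid_var = ((mu - grid_mean) ** 2 * w).sum()

print(post_mean, grid_mean)  # both approximately 0.4333
print(post_var, grid_var)    # both approximately 0.6667
```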

We will now consider a more interesting example, in which neither $\mu$ nor $\sigma^2$ is known. We will also assume that we have multiple observations.
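Before doing so, it may help to record how the single-observation result above extends when $\sigma^2$ remains known; this is a standard extension stated here for orientation, not part of the text's derivation. With $n$ independent observations and sample mean $\bar{x}$, the sufficient statistic satisfies $\bar{X} \sim \mathrm{N}(\mu, \sigma^2/n)$, and the same calculation gives

\[
M \mid x_1,\ldots,x_n \;\sim\; \mathrm{N}\!\left(\frac{\sigma^2\mu_0 + n\sigma_0^2\,\bar{x}}{\sigma^2 + n\sigma_0^2},\;\; \frac{\sigma^2\sigma_0^2}{\sigma^2 + n\sigma_0^2}\right).
\]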

Example 4.5 Normal with Inverted Chi-Squared and Conditional Normal Priors

Suppose we assume the observable random variable $X$ has a $\mathrm{N}(\mu, \sigma^2)$ distribution, and we wish to make inferences on $\mu$ and $\sigma^2$. Let us assume that $\mu$ is a realization of an unobservable random variable $M \in \mathrm{IR}$ and $\sigma^2$ is a realization of an unobservable random variable $\Sigma^2 \in \mathrm{IR}_+$.
