Statistics, University of Washington

about $\tilde\theta$; this is a good approximation as long as $N$ is large enough that $g(\theta)$ is highly peaked.

$$p(X|M_i) \approx \int \exp\Big(g(\tilde\theta|M_i) + (\theta-\tilde\theta)^T g'(\tilde\theta|M_i) + \tfrac{1}{2}(\theta-\tilde\theta)^T g''(\tilde\theta|M_i)(\theta-\tilde\theta)\Big)\, d\theta \quad (3.5)$$

Since $\tilde\theta$ is a maximum, $g'(\tilde\theta) = 0$. We now have

$$p(X|M_i) \approx \int \exp\Big(g(\tilde\theta|M_i) + \tfrac{1}{2}(\theta-\tilde\theta)^T g''(\tilde\theta|M_i)(\theta-\tilde\theta)\Big)\, d\theta \quad (3.6)$$

$$= \exp(g(\tilde\theta|M_i)) \int \exp\Big(\tfrac{1}{2}(\theta-\tilde\theta)^T g''(\tilde\theta|M_i)(\theta-\tilde\theta)\Big)\, d\theta \quad (3.7)$$

The integral has the form of a multivariate normal density with covariance equal to the inverse of $-g''(\tilde\theta|M_i)$, so it evaluates to the normalizing constant of that density.

$$p(X|M_i) \approx \exp(g(\tilde\theta|M_i))\,(2\pi)^{D_i/2}\,|{-g''(\tilde\theta)}|^{-1/2} \quad (3.8)$$

Recall that $g(\theta|M_i) = \log(p(X|\theta, M_i)\,\pi(\theta|M_i))$.

$$\log(p(X|M_i)) \approx \log(p(X|\tilde\theta, M_i)) + \log(\pi(\tilde\theta|M_i)) + (D_i/2)\log(2\pi) + (-1/2)\log(|{-g''(\tilde\theta)}|) \quad (3.9)$$

If $N$ is large, then $-g''(\tilde\theta|M_i) \approx E[-g''(\tilde\theta|M_i)]$. This is the Fisher information for the data $X$, which equals $N$ times the Fisher information for one observation. Let $I$ denote the Fisher information matrix for a single observation; this is a $D_i \times D_i$ matrix. Now we have $|{-g''(\tilde\theta)}| \approx N^{D_i}|I|$.

$$\log(p(X|M_i)) \approx \log(p(X|\tilde\theta, M_i)) + \log(\pi(\tilde\theta|M_i)) + (D_i/2)\log(2\pi) + (-D_i/2)\log(N) + (-1/2)\log(|I|) \quad (3.10)$$
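The quality of the approximation in (3.8) can be checked numerically on a model where the marginal likelihood is available in closed form. The sketch below uses a Bernoulli likelihood with a uniform Beta(1, 1) prior, where the exact evidence is $p(X) = B(k+1, n-k+1)$; this model choice is an illustrative assumption, not taken from the text.

```python
import math

def laplace_log_evidence(n, k):
    """Laplace approximation (eq. 3.8) for a one-dimensional parameter:
    log p(X) ~ g(theta~) + (1/2) log(2*pi) - (1/2) log|-g''(theta~)|."""
    theta = k / n  # under a uniform prior the posterior mode equals the MLE
    # g(theta~) = log-likelihood + log-prior; the log-prior is 0 for Beta(1, 1)
    g = k * math.log(theta) + (n - k) * math.log(1 - theta)
    # -g''(theta~): observed information of the Bernoulli log-likelihood
    neg_g2 = k / theta**2 + (n - k) / (1 - theta)**2
    return g + 0.5 * math.log(2 * math.pi) - 0.5 * math.log(neg_g2)

def exact_log_evidence(n, k):
    """Exact log evidence log B(k+1, n-k+1) computed via log-gamma."""
    return math.lgamma(k + 1) + math.lgamma(n - k + 1) - math.lgamma(n + 2)

n, k = 100, 60
print(laplace_log_evidence(n, k))  # about -69.398
print(exact_log_evidence(n, k))    # about -69.406
```

With $N = 100$ the two values already agree to within about $0.01$ on the log scale, consistent with the claim that the approximation improves as $N$ grows and $g(\theta)$ becomes highly peaked.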
