View - Statistics - University of Washington

…the object of the expectation is constant.

\[
\log E_{K_T}\!\left( \frac{\sum_{j=1}^{K} f(Y_i \mid X_i = j, \hat{\theta}_K)\, p(X_i = j \mid N(\hat{X}_i^{K}), \hat{\phi}_K)}{\sum_{j=1}^{K_T} f(Y_i \mid X_i = j, \hat{\theta}_{K_T})\, p(X_i = j \mid N(\hat{X}_i^{K_T}), \hat{\phi}_{K_T})} \right) \qquad (5.41)
\]

If the inequality in equation 5.42 holds, then consistency is implied.

\[
E_{K_T}\!\left( \frac{\sum_{j=1}^{K} f(Y_i \mid X_i = j, \hat{\theta}_K)\, p(X_i = j \mid N(\hat{X}_i^{K}), \hat{\phi}_K)}{\sum_{j=1}^{K_T} f(Y_i \mid X_i = j, \hat{\theta}_{K_T})\, p(X_i = j \mid N(\hat{X}_i^{K_T}), \hat{\phi}_{K_T})} \right)
\le \exp\!\left( \frac{(D_K - D_{K_T})\log(N)/2}{N} \right) \qquad (5.42)
\]

When $K_T = 1$, there is only one possible configuration for $X$; in other words, $X$ is constant. Equation 5.43 follows.

\[
f(Y) = f(Y \mid X) = \prod_i f(Y_i \mid X_i) = \prod_i f(Y_i) \qquad (5.43)
\]

Note that $f(Y_i \mid \hat{\theta}_{K_T}, X_i)$ is equal to $f(Y_i \mid \hat{\theta}_{K_T})$ when $K_T = 1$, and this in turn is asymptotically equal to $f(Y_i \mid \theta_{K_T})$, since in this case $\hat{\theta}$ is just the usual maximum likelihood estimate with independent data. The expected value in equation 5.42 can be rewritten as equation 5.44.

\[
E_{K_T}\!\left( \frac{\sum_{j=1}^{K} f(Y_i \mid X_i = j, \hat{\theta}_K)\, p(X_i = j \mid N(\hat{X}_i^{K}), \hat{\phi}_K)}{f(Y_i \mid \hat{\theta}_{K_T})} \right) \qquad (5.44)
\]

I now show that equation 5.44 simplifies to equation 5.45. Consider the value of $\hat{\phi}_K$ when $K_T = 1$ and $K > K_T$. The estimate $\hat{\phi}_K$ is a maximum pseudolikelihood estimate. Maximum pseudolikelihood estimates were shown to be consistent by Geman and Graffigne (1986), so we know that $\hat{\phi}_K \to \phi_{K_T}$ as $N \to \infty$. However, when $K_T = 1$ all points are independent, so $\phi_{K_T} = 0$. When $\phi = 0$, the probability of any $X_i$ is $1/K$, regardless of its neighbors. It follows that the expectation in equation 5.44 is equal to the expectation in equation 5.45.
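The substitution behind this last step can be sketched as follows (a reconstruction from the argument above, not the thesis's own display; the exact form of equation 5.45 is not shown on this page). Replacing each $p(X_i = j \mid N(\hat{X}_i^{K}), \hat{\phi}_K)$ in equation 5.44 by its asymptotic value $1/K$ gives

\[
E_{K_T}\!\left( \frac{\sum_{j=1}^{K} f(Y_i \mid X_i = j, \hat{\theta}_K)\,(1/K)}{f(Y_i \mid \hat{\theta}_{K_T})} \right)
= E_{K_T}\!\left( \frac{\frac{1}{K}\sum_{j=1}^{K} f(Y_i \mid X_i = j, \hat{\theta}_K)}{f(Y_i \mid \hat{\theta}_{K_T})} \right),
\]

so the neighborhood terms drop out of the expectation entirely once $\phi = 0$.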
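The claim that $\phi = 0$ makes every label equally likely regardless of the neighborhood can be checked numerically. The sketch below assumes a simple Potts-style parameterization, $p(X_i = j \mid \text{neighbors}) \propto \exp(\phi \cdot \#\{\text{neighbors labeled } j\})$; the function name and this exact form are illustrative, not taken from the thesis.

```python
import math

def conditional_prob(j, neighbors, K, phi):
    """P(X_i = j | neighbors) under an assumed Potts-style MRF:
    weight for label k is exp(phi * count of neighbors with label k)."""
    weights = [math.exp(phi * sum(1 for n in neighbors if n == k))
               for k in range(K)]
    return weights[j] / sum(weights)

# With phi = 0 every weight is exp(0) = 1, so the neighbor
# configuration is irrelevant and each label has probability 1/K.
for neighbors in ([0, 0, 0, 0], [0, 1, 2, 1], [2, 2, 2, 2]):
    probs = [conditional_prob(j, neighbors, K=3, phi=0.0) for j in range(3)]
    print(probs)  # all three entries equal 1/3 in every case
```

With any nonzero $\phi$ the same function concentrates mass on the labels that agree with the neighbors, which is exactly what the consistency argument rules out when $K_T = 1$.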
