
Dependence Case

When $|\beta| < 1$, the following statements hold for the AR(1) model $Y_i = C + \beta Y_{i-1} + \epsilon_i$ (Hamilton, 1994):

$$E[Y_i] = C/(1 - \beta) \tag{4.8}$$

$$\mathrm{Var}[Y_i] = \sigma_Y^2 = \sigma_\epsilon^2/(1 - \beta^2) \tag{4.9}$$

$$E[\bar{Y}] = E[Y_i] = C/(1 - \beta) \tag{4.10}$$

$$\mathrm{Var}(\bar{Y}) = \left(\frac{1 + \beta}{1 - \beta}\right)\left(\frac{\sigma_\epsilon^2}{N}\right) \tag{4.11}$$

We are able to observe the $Y_{i-1}$ value preceding each $Y_i$ for every observation except the first. Not surprisingly, the contribution of the first observation to the overall loglikelihood is different from that of the other observations, and this makes the loglikelihood equation more difficult to analyze. Since the proportion of the loglikelihood contributed by $Y_1$ becomes small as $N$ increases, we choose to condition on its value, which has the effect of simplifying the formula for the loglikelihood. Furthermore, estimates based on this conditional loglikelihood are consistent and asymptotically equivalent to estimates based on the exact loglikelihood. The exact loglikelihood is

$$L(Y \mid M) = -\frac{1}{2}\log(2\pi) - \frac{1}{2}\log\!\left(\frac{\sigma_\epsilon^2}{1 - \beta^2}\right) - \frac{\left(Y_1 - C/(1 - \beta)\right)^2}{2\sigma_\epsilon^2/(1 - \beta^2)} - \frac{N - 1}{2}\log(2\pi) - \frac{N - 1}{2}\log(\sigma_\epsilon^2) - \frac{1}{2\sigma_\epsilon^2}\sum_{i=2}^{N}\left(Y_i - C - \beta Y_{i-1}\right)^2 \tag{4.12}$$

The first three terms in equation 4.12 can be thought of as the contribution of $Y_1$, with the remaining terms resulting from $Y_2, \ldots, Y_N$. Conditioning on $Y_1$, we obtain the conditional loglikelihood from the last three terms alone.
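To make the exact-versus-conditional tradeoff concrete, here is a minimal numerical sketch in Python of equation 4.12 and its conditional counterpart. The parameter values ($C = 2$, $\beta = 0.6$, $\sigma_\epsilon = 1$) and the function names `exact_loglik` and `conditional_loglik` are illustrative assumptions, not from the text.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (assumed, not from the text).
C, beta, sigma_eps = 2.0, 0.6, 1.0
N = 500

# Simulate the AR(1) model Y_i = C + beta * Y_{i-1} + eps_i, starting
# from the stationary distribution so that every Y_i has the marginal
# mean and variance given in equations 4.8 and 4.9.
mu = C / (1.0 - beta)                    # E[Y_i], eq. 4.8
var_Y = sigma_eps**2 / (1.0 - beta**2)   # Var[Y_i], eq. 4.9
Y = np.empty(N)
Y[0] = rng.normal(mu, np.sqrt(var_Y))
for i in range(1, N):
    Y[i] = C + beta * Y[i - 1] + rng.normal(0.0, sigma_eps)


def exact_loglik(Y, C, beta, sigma_eps):
    """Exact loglikelihood (eq. 4.12): Y_1 enters through its stationary
    marginal density; Y_2..Y_N enter through conditional densities."""
    n = len(Y)
    var1 = sigma_eps**2 / (1.0 - beta**2)
    ll_first = (-0.5 * np.log(2 * np.pi) - 0.5 * np.log(var1)
                - (Y[0] - C / (1.0 - beta))**2 / (2.0 * var1))
    resid = Y[1:] - C - beta * Y[:-1]
    ll_rest = (-(n - 1) / 2.0 * np.log(2 * np.pi)
               - (n - 1) / 2.0 * np.log(sigma_eps**2)
               - np.sum(resid**2) / (2.0 * sigma_eps**2))
    return ll_first + ll_rest


def conditional_loglik(Y, C, beta, sigma_eps):
    """Loglikelihood conditional on Y_1: only the last three terms of
    eq. 4.12 remain."""
    n = len(Y)
    resid = Y[1:] - C - beta * Y[:-1]
    return (-(n - 1) / 2.0 * np.log(2 * np.pi)
            - (n - 1) / 2.0 * np.log(sigma_eps**2)
            - np.sum(resid**2) / (2.0 * sigma_eps**2))


print("exact:      ", exact_loglik(Y, C, beta, sigma_eps))
print("conditional:", conditional_loglik(Y, C, beta, sigma_eps))
print("sample mean:", Y.mean(), " vs eq. 4.10:", mu)
print("sample var: ", Y.var(), " vs eq. 4.9: ", var_Y)
```

The difference between the two printed loglikelihoods is exactly the $Y_1$ term, which stays bounded while the conditional sum grows with $N$; this is the sense in which the first observation's share of the loglikelihood becomes negligible and conditioning on $Y_1$ costs little asymptotically.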
