
direct multi-step estimation and forecasting - OFCE - Sciences Po


Direct multi-step estimation and forecasting

it is possible, by iterated substitution, to find a set of non-linear functions $\beta_{h,i}$ of the set of $\{\alpha_{1,i}\}$ such that:
$$y_{T+h} = \sum_{i=0}^{k} \beta_{h,i} y_{T-i}.$$

Denote by $\hat\beta_{h,i}$ the functions of the estimated $\tilde\alpha_{1,i}$ so that $\hat y_{T+h,h} = \sum_{i=0}^{k} \hat\beta_{h,i} y_{T-i}$. The IMS MSFE is then defined as:
$$\hat V^{\mathrm{IMS}}_{h,k} = \mathsf{E}\left[\left(y_{T+h} - \hat y_{T+h,h}\right)^2\right].$$

Note that $\{y_t\}$ can be expressed as an autoregressive process of order $p$ according to the Wiener–Kolmogorov theorem, where $p$ may be infinite. Then, if $k \geq p$, the theoretical (known-parameter) MSFEs coincide for DMS ($V^{\mathrm{DMS}}_{h,k}$) and IMS ($V^{\mathrm{IMS}}_{h,k}$); but if $k < p$, the latter is larger than the former, which in turn is larger than the 'true' MSFE from the correct, potentially infinitely parameterized, model (see Bhansali, 1996). Define $\gamma_i$ as the $i$th autocorrelation of $\{y_t\}$ for $i \geq 1$ and $\gamma_0$ as its variance (for stationary processes); then, using an AR(1) as a forecasting model:

$$\frac{V^{\mathrm{IMS}}_{2,1}}{V^{\mathrm{DMS}}_{2,1}} = 1 + \frac{\left[\gamma_2 - (\gamma_1)^2\right]^2}{1 - (\gamma_2)^2} \geq 1,$$
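A quick numerical sanity check of this ratio, assuming $\gamma_1$, $\gamma_2$ are the autocorrelations defined above (the helper name `ims_dms_ratio` is illustrative):

```python
def ims_dms_ratio(g1, g2):
    """V^IMS_{2,1} / V^DMS_{2,1} when an AR(1) is used to forecast a
    stationary process with autocorrelations g1 and g2."""
    return 1.0 + (g2 - g1 ** 2) ** 2 / (1.0 - g2 ** 2)

# Well-specified case: for a true AR(1) with parameter a, g2 = g1^2 = a^2,
# so the ratio collapses to 1 and IMS loses nothing relative to DMS.
assert abs(ims_dms_ratio(0.8, 0.64) - 1.0) < 1e-12

# Misspecified case: autocorrelations of an AR(2) from the Yule-Walker
# relations g1 = phi1/(1 - phi2), g2 = phi1*g1 + phi2.
phi1, phi2 = 0.5, 0.3
g1 = phi1 / (1 - phi2)
g2 = phi1 * g1 + phi2
assert ims_dms_ratio(g1, g2) > 1.0
```

The second case illustrates the claim in the text: once the AR(1) forecasting model is misspecified, the ratio strictly exceeds one.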

where the equality arises when the model is well specified. Similarly, it can be shown that if the process follows an MA(1) with parameter $\theta$, then
$$\frac{V^{\mathrm{IMS}}_{2,1}}{V^{\mathrm{DMS}}_{2,1}} = 1 + \left(\frac{\theta}{1+\theta^2}\right)^4 > 1.$$
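The MA(1) expression is the special case of the AR(1)-model ratio above, since an MA(1) $y_t = \varepsilon_t - \theta\varepsilon_{t-1}$ has $\gamma_1 = -\theta/(1+\theta^2)$ and $\gamma_2 = 0$. A short check (the ratio formula is repeated so the snippet is self-contained):

```python
def ims_dms_ratio(g1, g2):
    # ratio V^IMS_{2,1}/V^DMS_{2,1} for an AR(1) forecasting model
    return 1.0 + (g2 - g1 ** 2) ** 2 / (1.0 - g2 ** 2)

theta = 0.6
g1 = -theta / (1 + theta ** 2)  # first autocorrelation of an MA(1)
g2 = 0.0                        # MA(1) autocorrelations vanish beyond lag 1
lhs = ims_dms_ratio(g1, g2)
rhs = 1 + (theta / (1 + theta ** 2)) ** 4
assert abs(lhs - rhs) < 1e-12 and lhs > 1
```

With $\gamma_2 = 0$, the numerator reduces to $\gamma_1^4 = (\theta/(1+\theta^2))^4$, giving the stated expression.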

Bhansali recalls the main asymptotic findings. For a well-specified model, the one-step (1S) estimation procedure is asymptotically equivalent to maximum likelihood and, in the case of Gaussian processes, achieves the Cramér–Rao bound; yet this is not the case when $k \neq p$. By contrast, DMS is asymptotically inefficient for a well-specified model. However, if $k(T) \to \infty$, the distributions of the DMS and IMS estimators coincide as $T \to \infty$, under some regularity conditions (see Bhansali, 1993, when $k(T) = o(T^{1/3})$). In analyzing the ARMA(1,1) model:

$$y_t - \rho y_{t-1} = \varepsilon_t - \theta \varepsilon_{t-1},$$

Bhansali notes that the two-step-ahead forecast is given by:
$$y_{t+2} = -\rho\,(\theta - \rho)\sum_{i=0}^{\infty} \theta^i y_{t-i} = \tau\,(1-\theta L)^{-1} y_t, \qquad (10)$$
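The closed form can be verified against the iterated two-step forecast $\rho^2 y_t - \rho\theta\varepsilon_t$ in a small simulation. This is a sketch, not from the text; it assumes zero pre-sample values, so the truncated geometric sum and the innovation recursion are exact rather than approximate.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, theta = 0.7, 0.4
T = 200

# Simulate y_t - rho*y_{t-1} = e_t - theta*e_{t-1} from zero initial
# conditions (y_{-1} = e_{-1} = 0), so the innovations are recoverable.
e = rng.standard_normal(T)
y = np.zeros(T)
y[0] = e[0]
for t in range(1, T):
    y[t] = rho * y[t - 1] + e[t] - theta * e[t - 1]

# Iterated (IMS) two-step forecast: E[y_{t+2} | t] = rho^2*y_t - rho*theta*e_t.
t = T - 1
f_ims = rho ** 2 * y[t] - rho * theta * e[t]

# Closed form: -rho*(theta - rho) * sum_i theta^i * y_{t-i}.
f_closed = -rho * (theta - rho) * sum(theta ** i * y[t - i] for i in range(t + 1))
assert abs(f_ims - f_closed) < 1e-8
```

The two expressions agree term by term because $\varepsilon_t = (1-\theta L)^{-1}(1-\rho L)\,y_t$ under the zero initial conditions.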

so he recommends minimizing the in-sample sum of squared forecast errors:
$$\sum \left(y_{t+2} - \tau\,(1-\theta L)^{-1} y_t\right)^2,$$
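A minimal sketch of this DMS criterion on simulated data: since the criterion is linear in $\tau$ given $\theta$, $\tau$ can be profiled out by OLS and $\theta$ found by grid search. The grid-plus-profiling scheme is an illustrative substitute for full nonlinear least squares, and the sample start is truncated at zero as above.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, theta = 0.7, 0.4
T = 500

# Simulate the ARMA(1,1) from zero initial conditions.
e = rng.standard_normal(T)
y = np.zeros(T)
y[0] = e[0]
for t in range(1, T):
    y[t] = rho * y[t - 1] + e[t] - theta * e[t - 1]

def dms_ssr(th):
    """In-sample sum of squared two-step errors, with tau profiled out
    by OLS because the criterion is linear in tau given theta."""
    x = np.zeros(T)            # x_t = (1 - th*L)^{-1} y_t, truncated
    x[0] = y[0]
    for t in range(1, T):
        x[t] = y[t] + th * x[t - 1]
    xx, yy = x[:-2], y[2:]     # pair the regressor x_t with y_{t+2}
    tau = xx @ yy / (xx @ xx)
    return np.sum((yy - tau * xx) ** 2)

grid = np.arange(-0.95, 0.96, 0.01)
theta_hat = min(grid, key=dms_ssr)
# The grid minimum cannot exceed the criterion evaluated at the truth.
assert dms_ssr(theta_hat) <= dms_ssr(theta) + 1e-9
```

In practice one would replace the grid by a nonlinear optimizer, but the profiled criterion above is exactly the objective Bhansali proposes.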

