
Chapter 6 Nonstationary and Seasonal Time Series Models

indicates that there is no appreciable measurement error due to sales and deliveries (i.e., σ²_V = 0), and hence testing for a unit root in this case is equivalent to testing that σ²_U = 0. Assuming that the mean is known, the unit root hypothesis is rejected at α = .05, since −.818 > −1 + 6.80/57 = −.881. The evidence against H0 is stronger using the likelihood ratio statistic. Using ITSM and entering the MA(1) model θ = −1 and σ² = 2203.12, we find that −2 ln L(−1, 2203.12) = 604.584, while −2 ln L(θ̂, σ̂²) = 597.267. Comparing the likelihood ratio statistic λ_n = 604.584 − 597.267 = 7.317 with the cutoff value c_LR,.01, we reject H0 at level α = .01 and conclude that the measurement error associated with sales and deliveries is nonzero.
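As an arithmetic check, both test statistics above can be reproduced directly from the numbers reported in the text (a minimal sketch; 6.80 is the .05-level cutoff constant for n(θ̂ + 1) quoted there):

```python
# MLE-based test: reject H0: theta = -1 if theta_hat > -1 + c/n
n = 57
theta_hat = -0.818
c_05 = 6.80                      # .05-level cutoff constant from the text
cutoff = -1 + c_05 / n
print(round(cutoff, 3))          # -0.881
assert theta_hat > cutoff        # reject H0 at level .05

# Likelihood ratio test: lambda_n = -2 ln L(restricted) + 2 ln L(MLE)
neg2loglik_restricted = 604.584  # -2 ln L(-1, 2203.12)
neg2loglik_mle = 597.267         # -2 ln L(theta_hat, sigma2_hat)
lam = neg2loglik_restricted - neg2loglik_mle
print(round(lam, 3))             # 7.317
```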

In the above example it was assumed that the mean was known. In practice, these tests should be adjusted for the fact that the mean is also being estimated. Tanaka (1990) proposed a locally best invariant unbiased (LBIU) test for the unit root hypothesis. It was found that the LBIU test has slightly greater power than the likelihood ratio test for alternatives close to θ = −1 but has less power for alternatives further away from −1 (see Davis et al., 1995). The LBIU test has been extended to cover more general models by Tanaka (1990) and Tam and Reinsel (1995). Similar extensions to tests based on the maximum likelihood estimator and the likelihood ratio statistic have been explored in Davis, Chen, and Dunsmuir (1996).

6.4 Forecasting ARIMA Models

In this section we demonstrate how the methods of Sections 3.3 and 5.4 can be adapted to forecast the future values of an ARIMA(p, d, q) process {Xt}. (The required numerical calculations can all be carried out using the program ITSM.)

If d ≥ 1, the first and second moments EXt and E(Xt+h Xt) are not determined by the difference equations (6.1.1). We cannot expect, therefore, to determine best linear predictors for {Xt} without further assumptions.

For example, suppose that {Yt} is a causal ARMA(p, q) process and that X0 is any random variable. Define

Xt = X0 + ∑_{j=1}^t Yj,   t = 1, 2, ....
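This construction is easy to check by simulation; a minimal sketch (the ARMA(1, 1) parameters and the value of X0 below are illustrative choices, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a causal ARMA(1,1): Y_t = phi*Y_{t-1} + Z_t + theta*Z_{t-1}
# (phi and theta are illustrative; |phi| < 1 ensures causality)
phi, theta, n = 0.5, 0.4, 200
z = rng.normal(size=n + 1)
y = np.zeros(n)
y[0] = z[1] + theta * z[0]
for t in range(1, n):
    y[t] = phi * y[t - 1] + z[t + 1] + theta * z[t]

# Integrate: X_t = X_0 + sum_{j=1}^t Y_j gives an ARIMA(1,1,1) process
x0 = 10.0                               # arbitrary starting variable X_0
x = np.concatenate(([x0], x0 + np.cumsum(y)))

# Differencing X recovers Y, as the definition requires
assert np.allclose(np.diff(x), y)
```

The starting value X0 can be any random variable; here a constant is used purely for illustration.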

Then {Xt, t ≥ 0} is an ARIMA(p, 1, q) process with mean EXt = EX0 and autocovariances E(Xt+h Xt) − (EX0)² that depend on Var(X0) and Cov(X0, Yj), j = 1, 2, .... The best linear predictor of Xn+1 based on {1, X0, X1, ..., Xn} is the same as the best linear predictor in terms of the set {1, X0, Y1, ..., Yn}, since each linear combination of the latter is a linear combination of the former and vice versa. Hence, using Pn to denote best linear predictor in terms of either set and using the linearity
