
6.6 Regression with ARMA Errors

In many applications of regression analysis, however, the standard assumption of uncorrelated errors is clearly violated, as can be seen by examination of the residuals from the fitted regression and their sample autocorrelations. It is often more appropriate to assume that the errors are observations of a zero-mean second-order stationary process. Since many autocorrelation functions can be well approximated by the autocorrelation function of a suitably chosen ARMA(p, q) process, it is of particular interest to consider the model

$$Y_t = x_t' \beta + W_t, \qquad t = 1, \ldots, n, \tag{6.6.1}$$

or in matrix notation,

$$Y = X\beta + W, \tag{6.6.2}$$

where $Y = (Y_1, \ldots, Y_n)'$ is the vector of observations at times $t = 1, \ldots, n$, $X$ is the design matrix whose $t$th row, $x_t' = (x_{t1}, \ldots, x_{tk})$, consists of the values of the explanatory variables at time $t$, $\beta = (\beta_1, \ldots, \beta_k)'$ is the vector of regression coefficients, and the components of $W = (W_1, \ldots, W_n)'$ are values of a causal zero-mean ARMA(p, q) process satisfying

$$\phi(B) W_t = \theta(B) Z_t, \qquad \{Z_t\} \sim \mathrm{WN}(0, \sigma^2). \tag{6.6.3}$$
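To make the model concrete, the following sketch simulates observations from (6.6.1)–(6.6.3) with a linear trend and ARMA(1,1) errors. All parameter values here (phi, theta, sigma, beta) are illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (not from the text)
n, burn = 200, 100
phi, theta, sigma = 0.7, 0.4, 1.0

# Linear-trend design matrix: t-th row is x_t' = (1, t)
t = np.arange(1, n + 1)
X = np.column_stack([np.ones(n), t])
beta = np.array([2.0, 0.05])

# ARMA(1,1) errors: W_t = phi * W_{t-1} + Z_t + theta * Z_{t-1},
# with a burn-in so the retained series is approximately stationary
Z = rng.normal(0.0, sigma, n + burn + 1)
W = np.zeros(n + burn)
for i in range(1, n + burn):
    W[i] = phi * W[i - 1] + Z[i + 1] + theta * Z[i]
W = W[burn:]

Y = X @ beta + W  # observations following (6.6.1)
```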

The model (6.6.1) arises naturally in trend estimation for time series data. For example, the explanatory variables $x_{t1} = 1$, $x_{t2} = t$, and $x_{t3} = t^2$ can be used to estimate a quadratic trend, and the variables $x_{t1} = 1$, $x_{t2} = \cos(\omega t)$, and $x_{t3} = \sin(\omega t)$ can be used to estimate a sinusoidal trend with frequency $\omega$. The columns of $X$ are not necessarily simple functions of $t$ as in these two examples. Any specified column of relevant variables, e.g., temperatures at times $t = 1, \ldots, n$, can be included in the design matrix $X$, in which case the regression is conditional on the observed values of the variables included in the matrix.
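The two design matrices just described are straightforward to assemble; here is a short numpy sketch, where the frequency omega is an arbitrary illustrative choice (e.g., an annual cycle in monthly data).

```python
import numpy as np

n = 100
t = np.arange(1, n + 1)

# Quadratic trend: x_t1 = 1, x_t2 = t, x_t3 = t^2
X_quad = np.column_stack([np.ones(n), t, t**2])

# Sinusoidal trend at frequency omega (illustrative value)
omega = 2 * np.pi / 12
X_sin = np.column_stack([np.ones(n), np.cos(omega * t), np.sin(omega * t)])
```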

The ordinary least squares (OLS) estimator of $\beta$ is the value, $\hat{\beta}_{OLS}$, which minimizes the sum of squares

$$(Y - X\beta)'(Y - X\beta) = \sum_{t=1}^{n} \left( Y_t - x_t' \beta \right)^2.$$

Equating to zero the partial derivatives with respect to each component of $\beta$ and assuming (as we shall) that $X'X$ is nonsingular, we find that

$$\hat{\beta}_{OLS} = (X'X)^{-1} X' Y. \tag{6.6.4}$$
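A direct implementation of (6.6.4) might look as follows; this is a minimal sketch in which the normal equations are solved as a linear system rather than by forming $(X'X)^{-1}$ explicitly, which is the numerically preferable route when $X'X$ is nonsingular.

```python
import numpy as np

def ols(X, Y):
    """OLS estimate (6.6.4) via the normal equations X'X beta = X'Y.

    Solving the linear system directly avoids forming (X'X)^{-1},
    which is numerically preferable when X'X is nonsingular.
    """
    return np.linalg.solve(X.T @ X, X.T @ Y)
```

With X and Y from the simulation sketch above, ols(X, Y) recovers beta up to sampling error.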

(If $X'X$ is singular, $\hat{\beta}_{OLS}$ is not uniquely determined but still satisfies (6.6.4) with $(X'X)^{-1}$ any generalized inverse of $X'X$.) The OLS estimate also maximizes the likelihood of the observations when the errors $W_1, \ldots, W_n$ are iid and Gaussian. If the design matrix $X$ is nonrandom, then even when the errors are non-Gaussian and dependent, the OLS estimator is unbiased (i.e., $E(\hat{\beta}_{OLS}) = \beta$) and its covariance matrix is

$$\mathrm{Cov}(\hat{\beta}_{OLS}) = (X'X)^{-1} X' \Gamma_n X (X'X)^{-1}, \tag{6.6.5}$$

where $\Gamma_n = E(WW')$ is the covariance matrix of $W$.
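Evaluating (6.6.5) requires the error covariance matrix $\Gamma_n$. The sketch below assembles $\Gamma_n$ for ARMA(1,1) errors from the standard closed-form autocovariances and forms the sandwich product; both helper functions (arma11_acvf, ols_cov) are our own illustrative names, not from the text or any library.

```python
import numpy as np
from scipy.linalg import toeplitz

def arma11_acvf(phi, theta, sigma2, n):
    """Autocovariances gamma(0), ..., gamma(n-1) of a causal ARMA(1,1)
    process, from the standard closed-form expressions."""
    g = np.empty(n)
    g[0] = sigma2 * (1 + 2 * phi * theta + theta**2) / (1 - phi**2)
    if n > 1:
        g[1] = sigma2 * (1 + phi * theta) * (phi + theta) / (1 - phi**2)
    for h in range(2, n):
        g[h] = phi * g[h - 1]  # gamma(h) = phi * gamma(h-1) for h >= 2
    return g

def ols_cov(X, phi, theta, sigma2):
    """Sandwich covariance (6.6.5): (X'X)^{-1} X' Gamma_n X (X'X)^{-1}."""
    n = X.shape[0]
    Gamma_n = toeplitz(arma11_acvf(phi, theta, sigma2, n))  # Gamma_n = E(WW')
    XtX_inv = np.linalg.inv(X.T @ X)
    return XtX_inv @ X.T @ Gamma_n @ X @ XtX_inv
```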
