
7.5 Best Linear Predictors of Second-Order Random Vectors

immediately from the properties of the prediction operator (Section 2.5) that

$$P_n(\mathbf{Y}) = \boldsymbol{\mu} + A_1(\mathbf{X}_n - \boldsymbol{\mu}_n) + \cdots + A_n(\mathbf{X}_1 - \boldsymbol{\mu}_1) \tag{7.5.2}$$

for some matrices $A_1, \ldots, A_n$, and that

$$\mathbf{Y} - P_n(\mathbf{Y}) \perp \mathbf{X}_{n+1-i}, \quad i = 1, \ldots, n, \tag{7.5.3}$$

where we say that two $m$-dimensional random vectors $\mathbf{X}$ and $\mathbf{Y}$ are orthogonal (written $\mathbf{X} \perp \mathbf{Y}$) if $E(\mathbf{X}\mathbf{Y}')$ is a matrix of zeros. The vector of best predictors (7.5.1) is uniquely determined by (7.5.2) and (7.5.3), although there may be more than one possible choice for $A_1, \ldots, A_n$.
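The orthogonality condition (7.5.3) can be checked numerically: for zero-mean vectors, the best linear predictor of $\mathbf{Y}$ given $\mathbf{X}$ is $A\mathbf{X}$ with $A = \Sigma_{YX}\Sigma_{XX}^{-1}$, and the error cross-covariance $\Sigma_{YX} - A\Sigma_{XX}$ then vanishes identically. The covariance matrices below are hypothetical values chosen only for illustration.

```python
import numpy as np

# Hypothetical covariances for zero-mean random vectors Y and X (both 2-dim).
Sigma_XX = np.array([[2.0, 0.5],
                     [0.5, 1.0]])   # Cov(X, X), nonsingular
Sigma_YX = np.array([[1.0, 0.3],
                     [0.2, 0.8]])   # Cov(Y, X)

# Best linear predictor of Y given X is A @ X with A = Sigma_YX @ inv(Sigma_XX).
A = Sigma_YX @ np.linalg.inv(Sigma_XX)

# Orthogonality (7.5.3): E[(Y - A X) X'] = Sigma_YX - A @ Sigma_XX is the zero matrix.
error_cross_cov = Sigma_YX - A @ Sigma_XX
```

Any other choice of $A$ leaves a nonzero cross-covariance, which is exactly why (7.5.3) pins down the predictor.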

As a special case of the above, if $\{\mathbf{X}_t\}$ is a zero-mean time series, the best linear predictor $\hat{\mathbf{X}}_{n+1}$ of $\mathbf{X}_{n+1}$ in terms of $\mathbf{X}_1, \ldots, \mathbf{X}_n$ is obtained on replacing $\mathbf{Y}$ by $\mathbf{X}_{n+1}$ in (7.5.1). Thus

$$\hat{\mathbf{X}}_{n+1} = \begin{cases} \mathbf{0}, & \text{if } n = 0, \\ P_n(\mathbf{X}_{n+1}), & \text{if } n \ge 1. \end{cases}$$

Hence, we can write

$$\hat{\mathbf{X}}_{n+1} = \Phi_{n1}\mathbf{X}_n + \cdots + \Phi_{nn}\mathbf{X}_1, \quad n = 1, 2, \ldots, \tag{7.5.4}$$

where, from (7.5.3), the coefficients $\Phi_{nj}$, $j = 1, \ldots, n$, are such that

$$E\left(\hat{\mathbf{X}}_{n+1}\mathbf{X}'_{n+1-i}\right) = E\left(\mathbf{X}_{n+1}\mathbf{X}'_{n+1-i}\right), \quad i = 1, \ldots, n, \tag{7.5.5}$$

i.e.,

$$\sum_{j=1}^{n} \Phi_{nj} K(n+1-j,\, n+1-i) = K(n+1,\, n+1-i), \quad i = 1, \ldots, n.$$

In the case where $\{\mathbf{X}_t\}$ is stationary with $K(i,j) = \Gamma(i-j)$, the prediction equations simplify to the $m$-dimensional analogues of (2.5.7), i.e.,

$$\sum_{j=1}^{n} \Phi_{nj} \Gamma(i-j) = \Gamma(i), \quad i = 1, \ldots, n. \tag{7.5.6}$$
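Equations (7.5.6) are an $nm \times nm$ linear system in the coefficient blocks $\Phi_{n1}, \ldots, \Phi_{nn}$, so for moderate $n$ they can be solved directly by stacking the blocks. The sketch below does this for a hypothetical bivariate MA(1) process $\mathbf{X}_t = \mathbf{Z}_t + \Theta\mathbf{Z}_{t-1}$, whose autocovariances are $\Gamma(0) = \Sigma + \Theta\Sigma\Theta'$, $\Gamma(1) = \Theta\Sigma$, $\Gamma(-1) = \Gamma(1)'$, and $\Gamma(h) = 0$ for $|h| \ge 2$; the matrices $\Theta$ and $\Sigma$ are illustrative assumptions.

```python
import numpy as np

m, n = 2, 3  # bivariate series, predict X_4 from X_1, X_2, X_3

# Assumed process: vector MA(1), X_t = Z_t + Theta Z_{t-1}, Cov(Z_t) = Sigma.
Theta = np.array([[0.5, 0.2],
                  [0.1, 0.4]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])

def Gamma(h):
    """Autocovariance Gamma(h) = E[X_{t+h} X_t'] of the assumed MA(1)."""
    if h == 0:
        return Sigma + Theta @ Sigma @ Theta.T
    if h == 1:
        return Theta @ Sigma
    if h == -1:
        return (Theta @ Sigma).T
    return np.zeros((m, m))

# Stack (7.5.6): sum_j Phi_{nj} Gamma(i-j) = Gamma(i), i = 1,...,n.
# With Phi = [Phi_{n1} ... Phi_{nn}] (m x nm), this reads Phi @ G = R,
# where block (j, i) of G is Gamma(i - j).
G = np.block([[Gamma(i - j) for i in range(1, n + 1)]
              for j in range(1, n + 1)])
R = np.hstack([Gamma(i) for i in range(1, n + 1)])
Phi = R @ np.linalg.inv(G)
Phi_blocks = [Phi[:, m * j:m * (j + 1)] for j in range(n)]  # Phi_{n1},...,Phi_{nn}

# Verify the prediction equations (7.5.6) hold for each i.
for i in range(1, n + 1):
    lhs = sum(Phi_blocks[j - 1] @ Gamma(i - j) for j in range(1, n + 1))
    assert np.allclose(lhs, Gamma(i))
```

The matrix $G$ is the covariance matrix of the stacked vector $(\mathbf{X}_n', \ldots, \mathbf{X}_1')'$, so the nonsingularity assumption below is exactly what makes the direct solve well posed.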

Provided that the covariance matrix of the $nm$ components of $\mathbf{X}_1, \ldots, \mathbf{X}_n$ is nonsingular for every $n \ge 1$, the coefficients $\{\Phi_{nj}\}$ can be determined recursively using a multivariate version of the Durbin–Levinson algorithm given by Whittle (1963) (for details see TSTM, Proposition 11.4.1). Whittle's recursions also determine the covariance matrices of the one-step prediction errors, namely, $V_0 = \Gamma(0)$ and, for $n \ge 1$,

$$V_n = E\left[(\mathbf{X}_{n+1} - \hat{\mathbf{X}}_{n+1})(\mathbf{X}_{n+1} - \hat{\mathbf{X}}_{n+1})'\right] = \Gamma(0) - \Phi_{n1}\Gamma(-1) - \cdots - \Phi_{nn}\Gamma(-n). \tag{7.5.7}$$
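Once the $\Phi_{nj}$ are in hand, (7.5.7) gives $V_n$ with no further work. The sketch below obtains the coefficients by solving (7.5.6) directly (rather than by Whittle's recursions) for an assumed bivariate MA(1), $\mathbf{X}_t = \mathbf{Z}_t + \Theta\mathbf{Z}_{t-1}$ with illustrative $\Theta$ and $\Sigma$, and then evaluates (7.5.7); for such a process $V_n$ should lie between the innovation covariance $\Sigma$ and $\Gamma(0)$ in the positive semidefinite ordering.

```python
import numpy as np

m = 2
Theta = np.array([[0.5, 0.2],
                  [0.1, 0.4]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])

def Gamma(h):
    """Autocovariances of the assumed vector MA(1): X_t = Z_t + Theta Z_{t-1}."""
    if h == 0:
        return Sigma + Theta @ Sigma @ Theta.T
    if abs(h) > 1:
        return np.zeros((m, m))
    return Theta @ Sigma if h == 1 else (Theta @ Sigma).T

def one_step_error_cov(n):
    """Solve (7.5.6) directly for Phi_{n1},...,Phi_{nn}, then apply (7.5.7)."""
    if n == 0:
        return Gamma(0)                                  # V_0 = Gamma(0)
    G = np.block([[Gamma(i - j) for i in range(1, n + 1)]
                  for j in range(1, n + 1)])
    R = np.hstack([Gamma(i) for i in range(1, n + 1)])
    Phi = R @ np.linalg.inv(G)
    V = Gamma(0)
    for j in range(1, n + 1):                            # subtract Phi_{nj} Gamma(-j)
        V = V - Phi[:, m * (j - 1):m * j] @ Gamma(-j)
    return V

V5 = one_step_error_cov(5)
```

Because the finite-past predictor cannot beat the predictor based on the infinite past, $V_n - \Sigma$ remains positive semidefinite, while $\Gamma(0) - V_n$, the covariance of $\hat{\mathbf{X}}_{n+1}$ itself, is positive semidefinite as well.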
