

Remark 5. The innovations algorithm also has a multivariate version that can be used for prediction in much the same way as the univariate version described in Section 2.5.2 (for details see TSTM, Proposition 11.4.2).
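To make the multivariate recursions concrete, here is a minimal sketch of the innovations algorithm of TSTM, Proposition 11.4.2. It assumes the autocovariances are supplied as a user-provided function `K(i, j)` returning the $m \times m$ matrix $E[X_i X_j']$; the function name and interface are illustrative, not the book's notation or its ITSM implementation.

```python
import numpy as np

def minnovations(K, N):
    """Coefficients Theta[(k, j)] and error covariances V[0..N-1] such that
    Xhat_{k+1} = sum_{j=1..k} Theta[(k, j)] @ (X_{k+1-j} - Xhat_{k+1-j})."""
    V = [np.atleast_2d(K(1, 1)).astype(float)]      # V_0 = K(1, 1)
    Theta = {}
    for k in range(1, N):
        for i in range(k):                          # compute Theta_{k, k-i}
            S = np.atleast_2d(K(k + 1, i + 1)).astype(float)
            for j in range(i):
                S -= Theta[(k, k - j)] @ V[j] @ Theta[(i, i - j)].T
            Theta[(k, k - i)] = S @ np.linalg.inv(V[i])
        Vk = np.atleast_2d(K(k + 1, k + 1)).astype(float)
        for j in range(k):
            Vk -= Theta[(k, k - j)] @ V[j] @ Theta[(k, k - j)].T
        V.append(Vk)
    return Theta, V
```

The matrices $V_0, V_1, \ldots$ produced by this recursion are the one-step prediction-error covariances that enter the Gaussian likelihood of Section 7.6 below.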

7.6 Modeling and Forecasting with Multivariate AR Processes

If $\{X_t\}$ is any zero-mean second-order multivariate time series, it is easy to show from the results of Section 7.5 (Problem 7.4) that the one-step prediction errors $X_j - \hat{X}_j$, $j = 1, \ldots, n$, have the property

$$E\left[\big(X_j - \hat{X}_j\big)\big(X_k - \hat{X}_k\big)'\right] = 0 \quad \text{for } j \neq k. \tag{7.6.1}$$
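Property (7.6.1) can be checked numerically. The sketch below simulates a bivariate Gaussian AR(1), $X_t = \Phi X_{t-1} + Z_t$ (the matrices $\Phi$ and $\Sigma$ are arbitrary illustrative choices, not from the text), and estimates $E[(X_1 - \hat{X}_1)(X_3 - \hat{X}_3)']$ by a sample average. For an AR(1) the one-step predictors are $\hat{X}_1 = 0$ and $\hat{X}_j = \Phi X_{j-1}$ for $j \geq 2$.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)
Phi = np.array([[0.5, 0.2], [0.1, 0.4]])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
Gamma0 = solve_discrete_lyapunov(Phi, Sigma)    # stationary Var(X_t)

reps, n = 200_000, 4
U = np.zeros((reps, n, 2))
x = rng.multivariate_normal(np.zeros(2), Gamma0, size=reps)
U[:, 0] = x                                     # U_1 = X_1 - Xhat_1 = X_1
for j in range(1, n):
    Z = rng.multivariate_normal(np.zeros(2), Sigma, size=reps)
    x = x @ Phi.T + Z
    U[:, j] = Z                                 # U_j = X_j - Phi X_{j-1} = Z_j

# Sample estimate of E[(X_1 - Xhat_1)(X_3 - Xhat_3)']: near the zero matrix
print(U[:, 0].T @ U[:, 2] / reps)
```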

Moreover, the matrix $M$ such that

$$\begin{bmatrix} X_1 - \hat{X}_1 \\ X_2 - \hat{X}_2 \\ X_3 - \hat{X}_3 \\ \vdots \\ X_n - \hat{X}_n \end{bmatrix} = M \begin{bmatrix} X_1 \\ X_2 \\ X_3 \\ \vdots \\ X_n \end{bmatrix} \tag{7.6.2}$$

is lower triangular with ones on the diagonal and therefore has determinant equal to 1.
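For a concrete instance of (7.6.2), take a scalar AR(1) ($m = 1$) with autocovariance $\gamma(h) = \sigma^2 \phi^{|h|}/(1 - \phi^2)$. Writing the covariance matrix of $(X_1, \ldots, X_n)'$ as $\Gamma_n = L D L'$ with $L$ unit lower triangular, the matrix $M = L^{-1}$ maps observations to prediction errors and satisfies $M \Gamma_n M' = \mathrm{diag}(v_0, \ldots, v_{n-1})$. A minimal numerical sketch (illustrative parameter values, not from the text):

```python
import numpy as np

phi, sigma2, n = 0.7, 1.0, 5
h = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Gamma = sigma2 * phi**h / (1 - phi**2)         # Cov of (X_1, ..., X_n)'

C = np.linalg.cholesky(Gamma)                  # Gamma = C C'
L = C / np.diag(C)                             # unit lower triangular factor
M = np.linalg.inv(L)                           # U = M X, as in (7.6.2)

print(np.allclose(np.diag(M), 1.0))            # ones on the diagonal: True
print(np.linalg.det(M))                        # determinant equal to 1
print(np.round(M @ Gamma @ M.T, 6))            # diag(v_0, v_1, ..., v_{n-1})
```

Here the diagonal entries come out as $v_0 = \gamma(0)$ and $v_j = \sigma^2$ for $j \geq 1$, as expected for an AR(1).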

If the series $\{X_t\}$ is also Gaussian, then (7.6.1) implies that the prediction errors $U_j = X_j - \hat{X}_j$, $j = 1, \ldots, n$, are independent with covariance matrices $V_0, \ldots, V_{n-1}$, respectively (as specified in (7.5.7)). Consequently, the joint density of the prediction errors is the product

$$f(u_1, \ldots, u_n) = (2\pi)^{-nm/2} \left( \prod_{j=1}^{n} \det V_{j-1} \right)^{-1/2} \exp\left[ -\frac{1}{2} \sum_{j=1}^{n} u_j' V_{j-1}^{-1} u_j \right].$$

Since the determinant of the matrix $M$ in (7.6.2) is equal to 1, the joint density of the observations $X_1, \ldots, X_n$ at $x_1, \ldots, x_n$ is obtained on replacing $u_1, \ldots, u_n$ in the last expression by the values of $X_j - \hat{X}_j$ corresponding to the observations $x_1, \ldots, x_n$.
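The determinant argument can be verified numerically: evaluating the product density above at the innovations $u_j = x_j - \hat{x}_j$ must reproduce the direct multivariate normal density of $(X_1, \ldots, X_n)$. A self-contained check for the same illustrative scalar AR(1) as above:

```python
import numpy as np
from scipy.stats import multivariate_normal

phi, sigma2, n = 0.7, 1.0, 5
h = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Gamma = sigma2 * phi**h / (1 - phi**2)

rng = np.random.default_rng(1)
x = rng.multivariate_normal(np.zeros(n), Gamma)

# Innovations u_j = x_j - xhat_j with xhat_1 = 0 and xhat_j = phi x_{j-1};
# error variances v_0 = gamma(0) and v_j = sigma2 for j >= 1 (exact for AR(1))
u = np.concatenate([x[:1], x[1:] - phi * x[:-1]])
v = np.concatenate([[sigma2 / (1 - phi**2)], np.full(n - 1, sigma2)])

log_f_innov = -0.5 * (n * np.log(2 * np.pi) + np.log(v).sum() + (u**2 / v).sum())
log_f_direct = multivariate_normal(np.zeros(n), Gamma).logpdf(x)
print(np.isclose(log_f_innov, log_f_direct))   # True
```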

If we suppose that $\{X_t\}$ is a zero-mean $m$-variate AR($p$) process with coefficient matrices $\{\Phi_1, \ldots, \Phi_p\}$ and white noise covariance matrix $\Sigma$, we can therefore express the likelihood of the observations $X_1, \ldots, X_n$ as

$$L(\Phi, \Sigma) = (2\pi)^{-nm/2} \left( \prod_{j=1}^{n} \det V_{j-1} \right)^{-1/2} \exp\left[ -\frac{1}{2} \sum_{j=1}^{n} U_j' V_{j-1}^{-1} U_j \right],$$

where $U_j = X_j - \hat{X}_j$, $j = 1, \ldots, n$, and $\hat{X}_j$ and $V_j$ are found from (7.5.4), (7.5.6), and (7.5.7).
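As a minimal sketch of how this likelihood can be evaluated, consider the $m$-variate AR(1) case, where $\hat{X}_1 = 0$ with $V_0 = \Gamma(0)$ (the solution of $\Gamma(0) = \Phi\,\Gamma(0)\,\Phi' + \Sigma$) and $\hat{X}_j = \Phi X_{j-1}$ with $V_{j-1} = \Sigma$ for $j \geq 2$. The code is illustrative, not the ITSM implementation:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def ar1_loglik(X, Phi, Sigma):
    """Exact Gaussian log-likelihood of a zero-mean m-variate AR(1);
    X is an n x m array of observations."""
    n, m = X.shape
    V0 = solve_discrete_lyapunov(Phi, Sigma)           # V_0 = Gamma(0)
    U = np.vstack([X[:1], X[1:] - X[:-1] @ Phi.T])     # U_j = X_j - Xhat_j
    ll = -0.5 * n * m * np.log(2 * np.pi)
    ll -= 0.5 * np.linalg.slogdet(V0)[1]               # log det V_0
    ll -= 0.5 * (n - 1) * np.linalg.slogdet(Sigma)[1]  # log det V_j, j >= 1
    ll -= 0.5 * U[0] @ np.linalg.solve(V0, U[0])       # U_1' V_0^{-1} U_1
    ll -= 0.5 * np.trace(U[1:].T @ U[1:] @ np.linalg.inv(Sigma))
    return ll
```

In practice $\Phi$ and $\Sigma$ would be chosen to maximize this function numerically, and for $p > 1$ the required $\hat{X}_j$ and $V_j$ come from the recursions (7.5.4), (7.5.6), and (7.5.7).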
