Chapter 8 State-Space Models

Definition 8.4.1. For the random vector $X = (X_1, \ldots, X_v)'$,

$$P_t(X) := (P_t(X_1), \ldots, P_t(X_v))',$$

where $P_t(X_i) := P(X_i \mid Y_0, Y_1, \ldots, Y_t)$ is the best linear predictor of $X_i$ in terms of all components of $Y_0, Y_1, \ldots, Y_t$.

Remark 1. By the definition of the best predictor of each component $X_i$ of $X$, $P_t(X)$ is the unique random vector of the form

$$P_t(X) = A_0 Y_0 + \cdots + A_t Y_t$$

with $v \times w$ matrices $A_0, \ldots, A_t$ such that

$$[X - P_t(X)] \perp Y_s, \qquad s = 0, \ldots, t$$

(cf. (7.5.2) and (7.5.3)). Recall that two random vectors $X$ and $Y$ are orthogonal (written $X \perp Y$) if $E(XY')$ is a matrix of zeros.
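To see how the orthogonality condition pins down the coefficient matrices, note that $[X - A_0 Y_0 - \cdots - A_t Y_t] \perp Y_r$ for each $r = 0, \ldots, t$ is equivalent to the normal equations

$$E(X Y_r') = \sum_{s=0}^{t} A_s E(Y_s Y_r'), \qquad r = 0, \ldots, t,$$

a linear system in $A_0, \ldots, A_t$; any solution of it yields the same (unique) predictor $P_t(X)$.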

Remark 2. If all the components of $X, Y_1, \ldots, Y_t$ are jointly normally distributed and $Y_0 = (1, \ldots, 1)'$, then

$$P_t(X) = E(X \mid Y_1, \ldots, Y_t), \qquad t \geq 1.$$

Remark 3. $P_t$ is linear in the sense that if $A$ is any $k \times v$ matrix and $X$, $V$ are two $v$-variate random vectors with finite second moments, then (Problem 8.10)

$$P_t(AX) = A P_t(X)$$

and

$$P_t(X + V) = P_t(X) + P_t(V).$$

Remark 4. If $X$ and $Y$ are random vectors with $v$ and $w$ components, respectively, each with finite second moments, then

$$P(X \mid Y) = MY,$$

where $M$ is a $v \times w$ matrix, $M = E(XY')[E(YY')]^{-1}$, with $[E(YY')]^{-1}$ any generalized inverse of $E(YY')$. (A generalized inverse of a matrix $S$ is a matrix $S^{-1}$ such that $SS^{-1}S = S$. Every matrix has at least one. See Problem 8.11.)
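As a numerical illustration of Remark 4 (a minimal sketch, not from the text): the second-moment matrices are estimated from simulated data, the Moore–Penrose pseudoinverse from NumPy is used as one choice of generalized inverse, and the simulated relation between $X$ and $Y$ (the matrix `B` and noise scale `0.1`) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate jointly distributed X (v components) and Y (w components);
# the linear relation B and the noise scale 0.1 are illustrative only.
v, w, n = 2, 3, 100_000
Y = rng.standard_normal((n, w))
B = rng.standard_normal((v, w))
X = Y @ B.T + 0.1 * rng.standard_normal((n, v))

# Sample estimates of the second-moment matrices E(XY') and E(YY').
EXY = X.T @ Y / n          # v x w
EYY = Y.T @ Y / n          # w x w

# M = E(XY') [E(YY')]^{-1}; the Moore-Penrose pseudoinverse is one
# valid generalized inverse (any choice yields the same predictor MY).
M = EXY @ np.linalg.pinv(EYY)

# Best linear predictor P(X|Y) = M Y, applied to each sample row.
PX = Y @ M.T

# Defining property: the residual X - P(X|Y) is orthogonal to Y,
# i.e. E[(X - P(X|Y)) Y'] is (numerically) a matrix of zeros.
print(np.max(np.abs((X - PX).T @ Y / n)))
```

The printed maximum cross-moment should be near machine precision, confirming the orthogonality characterization of Remark 1 for this predictor.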

In the notation just developed, the prediction, filtering, and smoothing problems (a), (b), and (c) formulated above reduce to the determination of $P_{t-1}(X_t)$, $P_t(X_t)$, and $P_n(X_t)$ ($n > t$), respectively. We deal first with the prediction problem.
