
STATISTICS 512 TECHNIQUES OF MATHEMATICS FOR ...



• Return to regression.

(i) Least squares estimation in terms of the hat matrix;
decomposition of the norm of the residuals. Note that

x ⊥ y ⇒ ‖x + y‖² = ‖x‖² + ‖y‖²; then

‖y − Xθ̂‖² = ‖H(y − Xθ̂)‖² + ‖(I − H)(y − Xθ̂)‖²
          = ‖Hy − Xθ̂‖² + ‖(I − H)y‖²
          ≥ ‖(I − H)y‖²

with equality iff (‘if and only if’) Hy = Xθ̂, iff

θ̂ = (X′X)⁻¹X′y

(how?), the LS estimator. The fitted values are

ŷ = Xθ̂ = Hy

and are orthogonal to the residuals

e = y − ŷ = (I − H)y
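As a numerical check on the algebra above, here is a minimal sketch; the design matrix X and response y are assumed toy data, not from the notes:

```python
import numpy as np

# Assumed toy data: n = 4 observations, intercept plus one covariate.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 2.0, 4.0])

# Hat matrix H = X (X'X)^{-1} X' and LS estimator from the normal equations.
H = X @ np.linalg.inv(X.T @ X) @ X.T
theta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Hy = X theta_hat: the projection of y onto the column space of X
# is exactly the vector of fitted values.
y_hat = H @ y
print(np.allclose(y_hat, X @ theta_hat))   # True

# Residuals e = (I - H)y are orthogonal to the fitted values.
e = y - y_hat
print(np.isclose(y_hat @ e, 0.0))          # True

# The decomposition holds for any candidate theta, with the bound
# ||y - X theta||^2 >= ||(I - H)y||^2 attained at theta = theta_hat.
theta = np.array([0.5, 0.5])               # arbitrary candidate
lhs = np.linalg.norm(y - X @ theta) ** 2
rhs = np.linalg.norm(H @ y - X @ theta) ** 2 + np.linalg.norm(e) ** 2
print(np.allclose(lhs, rhs))               # True
```

Replacing the arbitrary candidate θ with θ̂ makes the first term of the right-hand side vanish, which is exactly the equality case in the derivation.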

We say that H and I − H project the data (y)
onto the estimation space and error space, respectively,

and that these spaces are orthogonal.
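The projection language can be verified directly: H and I − H are symmetric and idempotent, and their product vanishes, so the two spaces are orthogonal. A sketch, using an assumed full-column-rank design matrix:

```python
import numpy as np

# Assumed toy design matrix; any X of full column rank works here.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
H = X @ np.linalg.inv(X.T @ X) @ X.T
I = np.eye(4)

# H and I - H are orthogonal projections: symmetric and idempotent.
print(np.allclose(H @ H, H))                       # True
print(np.allclose((I - H) @ (I - H), I - H))       # True

# The estimation space and error space are orthogonal: H(I - H) = 0.
print(np.allclose(H @ (I - H), np.zeros((4, 4))))  # True
```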
