
1. DISTRIBUTION THEORY

Define the mean vector by:
$$\mu = E(X) = \begin{pmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_r \end{pmatrix}$$

and the variance matrix (covariance matrix) by:
$$\Sigma = \mathrm{Var}(X) = [\sigma_{ij}]_{i,j=1,\ldots,r}$$

Finally, the correlation matrix is defined by:
$$R = \begin{pmatrix} 1 & \rho_{12} & \cdots & \rho_{1r} \\ & 1 & \ddots & \vdots \\ & & \ddots & \rho_{r-1,r} \\ & & & 1 \end{pmatrix}$$
where $\rho_{ij} = \dfrac{\sigma_{ij}}{\sqrt{\sigma_{ii}}\,\sqrt{\sigma_{jj}}}$.
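As a small illustration (this example is not from the original notes), suppose $r = 2$ with
$$\Sigma = \begin{pmatrix} 4 & 2 \\ 2 & 9 \end{pmatrix},$$
so that $\sigma_{11} = 4$, $\sigma_{22} = 9$ and $\sigma_{12} = 2$. Then
$$\rho_{12} = \frac{2}{\sqrt{4}\,\sqrt{9}} = \frac{2}{6} = \frac{1}{3}
\qquad \text{and} \qquad
R = \begin{pmatrix} 1 & \tfrac{1}{3} \\ \tfrac{1}{3} & 1 \end{pmatrix}.$$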

Now suppose that $a = (a_1, \ldots, a_r)^T$ is a vector of constants and observe that:
$$a^T x = a_1 x_1 + a_2 x_2 + \cdots + a_r x_r = \sum_{i=1}^{r} a_i x_i$$
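For example (an illustrative choice of $a$, not from the notes), taking $a = (1/r, \ldots, 1/r)^T$ gives $a^T x = \frac{1}{r}\sum_{i=1}^{r} x_i$, the average of the components, while taking $a = e_i$, the $i$-th standard basis vector, gives $a^T x = x_i$.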

Theorem. 1.9.8
Suppose $X$ has $E(X) = \mu$ and $\mathrm{Var}(X) = \Sigma$, and let $a, b \in \mathbb{R}^r$ be fixed vectors. Then,
1. $E(a^T X) = a^T \mu$
2. $\mathrm{Var}(a^T X) = a^T \Sigma a$
3. $\mathrm{Cov}(a^T X, b^T X) = a^T \Sigma b$

Remark
It is easy to check that this is just a re-statement of Theorem 1.9.3 using matrix notation.
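To see the connection concretely, here is a quick check of part 2 in the case $r = 2$ (an illustrative calculation, not part of the original notes). Writing $a^T X = a_1 X_1 + a_2 X_2$ and expanding,
$$a^T \Sigma a = \begin{pmatrix} a_1 & a_2 \end{pmatrix} \begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{12} & \sigma_{22} \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = a_1^2 \sigma_{11} + 2 a_1 a_2 \sigma_{12} + a_2^2 \sigma_{22},$$
which is the familiar formula $\mathrm{Var}(a_1 X_1 + a_2 X_2) = a_1^2 \mathrm{Var}(X_1) + 2 a_1 a_2 \mathrm{Cov}(X_1, X_2) + a_2^2 \mathrm{Var}(X_2)$.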

Theorem. 1.9.9
Suppose $X$ is a random vector with $E(X) = \mu$, $\mathrm{Var}(X) = \Sigma$, and let $A_{p \times r}$ and $b \in \mathbb{R}^p$ be fixed. If $Y = AX + b$, then
$$E(Y) = A\mu + b \qquad \text{and} \qquad \mathrm{Var}(Y) = A \Sigma A^T.$$
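As a quick sanity check (an illustrative special case, not stated in the notes), take $p = 1$, $A = a^T$ and $b = 0$: then $Y = a^T X$ and the theorem gives $E(Y) = a^T \mu$ and $\mathrm{Var}(Y) = a^T \Sigma a$, recovering parts 1 and 2 of Theorem 1.9.8.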

