
1. DISTRIBUTION THEORY

Remark

The same methods can be used to establish a similar result for block diagonal matrices. The simplest case is the following, which is most easily proved using moment generating functions: $X_1$ and $X_2$ are independent if and only if $\Sigma_{12} = 0$, i.e.,
$$\begin{pmatrix} X_1 \\ X_2 \end{pmatrix} \sim N_{r_1 + r_2}\!\left( \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \begin{pmatrix} \Sigma_{11} & 0 \\ 0 & \Sigma_{22} \end{pmatrix} \right).$$
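As a sketch of the moment generating function argument (using the standard MGF of the multivariate normal, $M_X(t) = \exp(t^T\mu + \frac{1}{2}t^T\Sigma t)$): partitioning $t = (t_1^T, t_2^T)^T$ conformably with $X$, the condition $\Sigma_{12} = 0$ gives
$$M_X(t) = \exp\!\left(t_1^T\mu_1 + \tfrac{1}{2}t_1^T\Sigma_{11}t_1\right)\exp\!\left(t_2^T\mu_2 + \tfrac{1}{2}t_2^T\Sigma_{22}t_2\right) = M_{X_1}(t_1)\,M_{X_2}(t_2),$$
so the joint MGF factorizes into the product of the marginal MGFs, which holds if and only if $X_1$ and $X_2$ are independent.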

Theorem 1.10.7.

Suppose $X_1, X_2, \ldots, X_n$ are i.i.d. $N(\mu, \sigma^2)$ RVs and let
$$\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i, \qquad S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2.$$
Then $\bar{X} \sim N(\mu, \sigma^2/n)$ and
$$\frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1},$$
independently.

(Note: $S^2$ here is a random variable, and is not to be confused with the sample covariance matrix, which is also often denoted by $S$. Hopefully, the meaning of the notation will be clear in the context in which it is used.)

Proof. Observe first that if $X = (X_1, \ldots, X_n)^T$ then the i.i.d. assumption may be written as
$$X \sim N_n(\mu \mathbf{1}_n, \sigma^2 I_{n \times n}).$$

1. $\bar{X} \sim N(\mu, \sigma^2/n)$:

Observe that $\bar{X} = BX$, where
$$B = \begin{bmatrix} \frac{1}{n} & \frac{1}{n} & \cdots & \frac{1}{n} \end{bmatrix}.$$
Hence, by Theorem 1.10.5,
$$\bar{X} \sim N(B\mu\mathbf{1}_n, \sigma^2 BB^T) \equiv N(\mu, \sigma^2/n),$$
as required.
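For completeness, the two facts used in the last step follow directly from the form of $B$ (a brief check, in the notation above):
$$B\mathbf{1}_n = \sum_{i=1}^n \frac{1}{n} = 1, \qquad BB^T = \sum_{i=1}^n \frac{1}{n^2} = \frac{1}{n},$$
so that $B\mu\mathbf{1}_n = \mu$ and $\sigma^2 BB^T = \sigma^2/n$.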

