PDF of Lecture Notes - School of Mathematical Sciences
1. DISTRIBUTION THEORY
*Proof.*
\[
M_{AX+b}(t) = E[e^{t^T (AX+b)}] = e^{t^T b} E[e^{t^T AX}]
= e^{t^T b} E[e^{(A^T t)^T X}] = e^{t^T b} M_X(A^T t).
\]
Now partition
\[
t_{r\times 1} = \begin{pmatrix} t_1 \\ t_2 \end{pmatrix}
\quad\text{and}\quad
A_{l\times r} = \begin{pmatrix} I_{l\times l} & 0_{l\times m} \end{pmatrix}.
\]
Note that
\[
AX = \begin{pmatrix} I_{l\times l} & 0_{l\times m} \end{pmatrix}
\begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = X_1
\]
and
\[
A^T t_1 = \begin{pmatrix} I_{l\times l} \\ 0_{m\times l} \end{pmatrix} t_1
= \begin{pmatrix} t_1 \\ 0 \end{pmatrix}.
\]
Hence,
\[
M_{X_1}(t_1) = M_{AX}(t_1) = M_X(A^T t_1)
= M_X\!\begin{pmatrix} t_1 \\ 0 \end{pmatrix},
\]
as required.
Note that similar results hold for more than two random subvectors.
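As an illustrative check (not part of the notes), the marginal-MGF result can be verified numerically for a multivariate normal, whose MGF has the closed form $M_X(t) = \exp(\mu^T t + \tfrac{1}{2} t^T \Sigma t)$: evaluating the full MGF at $t = (t_1^T, 0)^T$ should reproduce the MGF of the subvector $X_1 \sim N(\mu_1, \Sigma_{11})$. The specific numbers below are arbitrary choices for the sketch.

```python
import numpy as np

# Assumed example: X ~ N(mu, Sigma) in three dimensions, with
# X1 = (X_1, X_2) the leading subvector of length l = 2.
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, -0.2],
                  [0.1, -0.2, 1.5]])

def mgf_normal(t, mu, Sigma):
    """MGF of N(mu, Sigma) evaluated at t: exp(mu^T t + t^T Sigma t / 2)."""
    return np.exp(mu @ t + 0.5 * t @ Sigma @ t)

t1 = np.array([0.2, -0.4])               # argument for the subvector X1
t_padded = np.concatenate([t1, [0.0]])   # t = (t1, 0), as in the proof

full = mgf_normal(t_padded, mu, Sigma)           # M_X((t1, 0))
marginal = mgf_normal(t1, mu[:2], Sigma[:2, :2])  # M_{X1}(t1), X1 ~ N(mu_1, Sigma_11)
print(np.isclose(full, marginal))  # True: the two evaluations agree
```

The agreement is exact here because padding $t$ with zeros kills every term of $\mu^T t$ and $t^T \Sigma t$ that involves the second subvector.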
The major limitation of the MGF is that it may not exist. The characteristic function, on the other hand, is defined for all distributions. Its definition is similar to that of the MGF, with $it$ replacing $t$, where $i = \sqrt{-1}$; the properties of the characteristic function are similar to those of the MGF, but using it requires some familiarity with complex analysis.
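As a small sketch of why the characteristic function always exists (an assumed example, not from the notes): $|e^{itX}| = 1$, so $E[e^{itX}]$ is always finite. Below, a Monte Carlo estimate of $\varphi_X(t) = E[e^{itX}]$ for $X \sim N(\mu, \sigma^2)$ is compared with the closed form $\varphi(t) = \exp(i\mu t - \tfrac{1}{2}\sigma^2 t^2)$; the parameter values are arbitrary.

```python
import numpy as np

# Assumed example: X ~ N(mu, sigma^2); compare the empirical
# characteristic function with the known closed form.
rng = np.random.default_rng(0)
mu, sigma, t = 1.0, 2.0, 0.7

x = rng.normal(mu, sigma, size=1_000_000)
phi_mc = np.mean(np.exp(1j * t * x))                     # Monte Carlo estimate
phi_exact = np.exp(1j * mu * t - 0.5 * sigma**2 * t**2)  # closed form

print(abs(phi_mc - phi_exact))  # small Monte Carlo error
```

Note the estimate is complex-valued; NumPy handles the complex exponential directly via `1j`.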
1.9.3 Vector notation
Consider the random vector
\[
X = (X_1, X_2, \ldots, X_r)^T,
\]
with $E[X_i] = \mu_i$, $\mathrm{Var}(X_i) = \sigma_i^2 = \sigma_{ii}$ and $\mathrm{Cov}(X_i, X_j) = \sigma_{ij}$.
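As a hedged illustration of this notation (an assumed example with arbitrary numbers), the quantities $\mu_i$ and $\sigma_{ij}$ can be estimated from a sample: the mean vector collects the $\mu_i$, and the covariance matrix $\Sigma = (\sigma_{ij})$ has the variances $\sigma_{ii} = \sigma_i^2$ on its diagonal.

```python
import numpy as np

# Assumed example: draw correlated 3-dimensional data by linearly
# transforming independent standard normals, then estimate the mean
# vector (mu_i) and covariance matrix (sigma_ij).
rng = np.random.default_rng(1)
L = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.2, -0.3, 1.0]])
data = rng.normal(size=(100_000, 3)) @ L.T   # rows are observations of X

mu_hat = data.mean(axis=0)                   # estimates (mu_1, mu_2, mu_3)
Sigma_hat = np.cov(data, rowvar=False)       # estimates the matrix (sigma_ij)

# The diagonal entries sigma_ii coincide with the individual variances.
print(np.allclose(np.diag(Sigma_hat), data.var(axis=0, ddof=1)))  # True
```

`np.cov(..., rowvar=False)` treats rows as observations and uses the unbiased divisor $n-1$, matching `var(..., ddof=1)` on the diagonal.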