PDF of Lecture Notes - School of Mathematical Sciences
1. DISTRIBUTION THEORY
$$p(x) = \frac{\binom{M}{x}\binom{N}{n-x}}{\binom{M+N}{n}},$$
$$E(X) = \frac{nM}{M+N}, \qquad \operatorname{Var}(X) = \frac{M+N-n}{M+N-1}\cdot\frac{nMN}{(M+N)^2}.$$
The mgf exists, but there is no useful expression available.
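As a sanity check, the pmf and moments above can be verified numerically. This is an illustrative sketch using only Python's standard library; the values M = 7, N = 13, n = 5 are arbitrary choices, not taken from the notes.

```python
from math import comb

def hypergeom_pmf(x, M, N, n):
    """P(X = x) for an urn with M black and N white balls, sample size n."""
    return comb(M, x) * comb(N, n - x) / comb(M + N, n)

M, N, n = 7, 13, 5  # illustrative values
support = range(max(0, n - N), min(n, M) + 1)

mean = sum(x * hypergeom_pmf(x, M, N, n) for x in support)
var = sum((x - mean) ** 2 * hypergeom_pmf(x, M, N, n) for x in support)

# mean agrees with E(X) = nM/(M+N), and var agrees with
# Var(X) = (M+N-n)/(M+N-1) * nMN/(M+N)^2
```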
1. The hypergeometric distribution is simply
$$p(x) = \frac{\#\text{ selections with } x \text{ black balls}}{\#\text{ possible selections}} = \frac{\binom{M}{x}\binom{N}{n-x}}{\binom{M+N}{n}}.$$
2. To see how the limits arise, observe that we must have x ≤ n (no more black balls in the sample than the sample size) and x ≤ M, i.e. x ≤ min(n, M). Similarly, we must have x ≥ 0 (we cannot have fewer than 0 black balls in the sample) and n − x ≤ N (we cannot draw more white balls than the number in the urn), i.e. x ≥ n − N; hence x ≥ max(0, n − N).
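The support limits above can be checked numerically: the pmf should sum to 1 exactly over max(0, n − N) ≤ x ≤ min(n, M). A small sketch with illustrative values chosen so that both limits bite (n > N forces x ≥ n − N > 0):

```python
from math import comb

def hypergeom_pmf(x, M, N, n):
    # P(X = x): choose x of the M black balls and n - x of the N white balls
    return comb(M, x) * comb(N, n - x) / comb(M + N, n)

# Illustrative values with n > N, so the lower limit is n - N = 2, not 0
M, N, n = 3, 2, 4
lo, hi = max(0, n - N), min(n, M)   # support is {2, 3} here
total = sum(hypergeom_pmf(x, M, N, n) for x in range(lo, hi + 1))
```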
3. If we sample with replacement, we would get X ∼ B(n, p = M/(M+N)). It is interesting to compare moments:
$$\text{hypergeometric:}\quad E(X) = np, \qquad \operatorname{Var}(X) = \underbrace{\frac{M+N-n}{M+N-1}}_{\text{finite population correction}}\, np(1-p),$$
$$\text{binomial:}\quad E(X) = np, \qquad \operatorname{Var}(X) = np(1-p).$$
Note that when we sample all the balls in the urn (n = M + N), the finite population correction gives Var(X) = 0.
4. When M, N ≫ n, the difference between sampling with and without replacement should be small.
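This can be seen directly from the finite population correction: with n fixed, (M+N−n)/(M+N−1) → 1 as the urn grows, so the hypergeometric variance approaches the binomial variance np(1 − p). A short sketch (the scaling values are illustrative):

```python
# Grow the urn while keeping p = M/(M+N) = 0.35 and n fixed;
# the finite population correction tends to 1.
n = 5
for scale in (1, 10, 1000):
    M, N = 7 * scale, 13 * scale
    p = M / (M + N)
    fpc = (M + N - n) / (M + N - 1)   # finite population correction
    hyper_var = fpc * n * p * (1 - p)  # hypergeometric variance
    binom_var = n * p * (1 - p)        # binomial variance
    print(scale, round(fpc, 6), round(hyper_var, 6), round(binom_var, 6))
```

Sampling without replacement always has the smaller variance, since the correction factor is below 1 whenever n > 1.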