
The probability density function of a continuous random variable Y (or the probability mass function if Y is discrete) is referred to simply as a probability distribution and denoted by

f(y; θ)

where θ represents the parameters of the distribution.
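For example, the Poisson distribution with mean θ has probability mass function

\[
f(y; \theta) = \frac{\theta^{y} e^{-\theta}}{y!} , \qquad y = 0, 1, 2, \ldots ,
\]

so here θ is a single parameter, whereas for the Normal distribution of Section 1.4.1 the parameter θ stands for the pair (µ, σ²).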

We use dot (·) subscripts for summation and bars ( ¯ ) for means, thus

\[
\bar{y} = \frac{1}{N} \sum_{i=1}^{N} y_i = \frac{1}{N}\, y_{\cdot} \, .
\]

The expected value and variance of a random variable Y are denoted by E(Y) and var(Y) respectively. Suppose random variables Y₁, ..., Yₙ are independent with E(Yᵢ) = µᵢ and var(Yᵢ) = σᵢ² for i = 1, ..., n. Let the random variable W be a linear combination of the Yᵢ's

\[
W = a_1 Y_1 + a_2 Y_2 + \ldots + a_n Y_n , \tag{1.1}
\]

where the aᵢ's are constants. Then the expected value of W is

\[
E(W) = a_1 \mu_1 + a_2 \mu_2 + \ldots + a_n \mu_n \tag{1.2}
\]

and its variance is

\[
\operatorname{var}(W) = a_1^2 \sigma_1^2 + a_2^2 \sigma_2^2 + \ldots + a_n^2 \sigma_n^2 . \tag{1.3}
\]
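Results (1.2) and (1.3) are easy to check numerically. The following sketch, written in Python with NumPy (the constants aᵢ, means and variances are illustrative choices, not values from the text), simulates independent Yᵢ, forms W as in (1.1), and compares the sample mean and variance of W with the values given by (1.2) and (1.3).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants, means and variances (hypothetical values, not from the text)
a = np.array([2.0, -1.0, 0.5])        # a_1, ..., a_n
mu = np.array([1.0, 3.0, -2.0])       # mu_1, ..., mu_n
sigma2 = np.array([4.0, 1.0, 9.0])    # sigma_1^2, ..., sigma_n^2

# Values given by (1.2) and (1.3)
E_W = np.sum(a * mu)                  # E(W)   = sum_i a_i mu_i
var_W = np.sum(a**2 * sigma2)         # var(W) = sum_i a_i^2 sigma_i^2

# Simulate independent Y_i; the Normal distribution is used here only for convenience
n_samples = 200_000
Y = rng.normal(loc=mu, scale=np.sqrt(sigma2), size=(n_samples, len(a)))
W = Y @ a                             # W = a_1 Y_1 + ... + a_n Y_n, one value per row

print("E(W):   formula", E_W, "  simulation", W.mean())
print("var(W): formula", var_W, "  simulation", W.var())
```

The Normal distribution is used only as a convenient way to generate the Yᵢ: (1.2) follows from linearity of expectation alone, and (1.3) needs only that the Yᵢ are independent (or at least uncorrelated).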

1.4 Distributions related to the Normal distribution

The sampling distributions of many of the estimators and test statistics used in this book depend on the Normal distribution. They do so either directly, because they are derived from Normally distributed random variables, or asymptotically, via the Central Limit Theorem for large samples. In this section we give definitions and notation for these distributions and summarize the relationships between them. The exercises at the end of the chapter provide practice in using these results, which are employed extensively in subsequent chapters.

1.4.1 Normal distributions

1. If the random variable Y has the Normal distribution with mean µ and variance σ², its probability density function is

\[
f(y; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[ -\frac{1}{2} \, \frac{(y - \mu)^2}{\sigma^2} \right] .
\]

We denote this by Y ∼ N(µ, σ²); a numerical check of this density is sketched after the list below.

2. The Normal distribution with µ = 0 and σ² = 1, Y ∼ N(0, 1), is called the standard Normal distribution.
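As a small check of the density in item 1 (a sketch assuming Python with NumPy and SciPy are available; note that scipy.stats.norm is parameterized by the mean and the standard deviation σ rather than the variance σ²), the function below evaluates f(y; µ, σ²) directly from the formula and compares it with the library values, including the standard Normal case of item 2.

```python
import numpy as np
from scipy.stats import norm

def normal_pdf(y, mu, sigma2):
    """Density f(y; mu, sigma^2) of N(mu, sigma^2), coded from the formula in item 1."""
    return np.exp(-0.5 * (y - mu) ** 2 / sigma2) / np.sqrt(2.0 * np.pi * sigma2)

y = np.linspace(-3.0, 3.0, 7)

# A general Normal distribution, e.g. mu = 1 and sigma^2 = 4 (illustrative values)
print(normal_pdf(y, mu=1.0, sigma2=4.0))
print(norm.pdf(y, loc=1.0, scale=2.0))   # scale is the standard deviation, sqrt(sigma^2)

# The standard Normal distribution of item 2: mu = 0, sigma^2 = 1, i.e. Y ~ N(0, 1)
print(normal_pdf(y, mu=0.0, sigma2=1.0))
print(norm.pdf(y))                       # loc = 0 and scale = 1 are the defaults
```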

