
2. STATISTICAL INFERENCE

Definition 2.1.3

A statistic $T$ with the property $T(x) \in \Theta$ for all $x$ is called an estimator for $\theta$.

Example

For $x_1, x_2, \ldots, x_n$ i.i.d. $N(\mu, \sigma^2)$, we have $\theta = (\mu, \sigma^2)$. The quantity $(\bar{x}, s^2)$ is an estimator for $\theta$, where
$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad s^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})^2.$$
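As a quick illustration (a sketch, not part of the notes; the values of $\mu$, $\sigma^2$ and $n$ are hypothetical), computing this estimate from a simulated sample with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, n = 5.0, 4.0, 100           # hypothetical true values of theta = (mu, sigma^2)
x = rng.normal(mu, np.sqrt(sigma2), n)  # x_1, ..., x_n i.i.d. N(mu, sigma^2)

xbar = x.sum() / n                       # sample mean: estimate of mu
s2 = ((x - xbar) ** 2).sum() / (n - 1)   # sample variance: estimate of sigma^2
print((xbar, s2))                        # the estimate of theta = (mu, sigma^2)
```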

There are two important concepts here: the first is that estimators are random variables; the second is that you need to be able to distinguish between random variables and their realisations. In particular, an estimate is a realisation of a random variable.

For example, strictly speaking, $x_1, x_2, \ldots, x_n$ are realisations of random variables $X_1, X_2, \ldots, X_n$, and $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ is a realisation of $\bar{X}$; $\bar{X}$ is an estimator, and $\bar{x}$ is an estimate.
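To see the distinction concretely, here is a small simulation sketch (not from the notes; parameter values are hypothetical): the estimator $\bar{X}$ is a fixed rule, but each fresh sample produces a different realisation $\bar{x}$.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 5.0, 2.0, 100  # hypothetical values

# Each fresh sample yields a different realisation xbar of the estimator Xbar:
for _ in range(3):
    x = rng.normal(mu, sigma, n)
    print(x.mean())           # three distinct estimates of mu
```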

We will find from now on, however, that it is often more convenient, if less rigorous, to use the same symbol for both estimator and estimate. This arises especially in the use of $\hat{\theta}$ as both estimator and estimate, as we shall see.

An unsatisfactory aspect of Definition 2.1.3 is that it gives no guidance on how to recognize (or construct) good estimators.

Unless stated otherwise, we will assume that $\theta$ is a scalar parameter in the following.

In broad terms, we would like an estimator to be “as close as possible to” $\theta$ “with high probability”.

Definition 2.1.4

The mean squared error of the estimator $T$ of $\theta$ is defined by
$$\mathrm{MSE}_T(\theta) = E\{(T - \theta)^2\}.$$
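The expectation is taken over the sampling distribution of $T$, so the MSE can be approximated by simulation. A minimal sketch (not from the notes; parameter values are hypothetical), using $T = \bar{X}$ for a normal sample, where the exact MSE is $\sigma^2/n$:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 5.0, 2.0, 100, 10_000  # hypothetical values

# Average (T - theta)^2 over many independent replications, with T = Xbar:
t = rng.normal(mu, sigma, (reps, n)).mean(axis=1)
print(((t - mu) ** 2).mean())  # Monte Carlo approximation of MSE_T(mu)
print(sigma ** 2 / n)          # exact value here: Var(Xbar) = sigma^2 / n
```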

Example

Suppose $X_1, \ldots, X_n$ are i.i.d. Bernoulli($\theta$) RVs, and $T = \bar{X} = $ ‘proportion of successes’. Since $nT \sim B(n, \theta)$ we have
$$E(nT) = n\theta, \qquad \mathrm{Var}(nT) = n\theta(1-\theta)$$
$$\implies \quad E(T) = \theta, \qquad \mathrm{Var}(T) = \frac{\theta(1-\theta)}{n}.$$
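Since $E(T) = \theta$, the estimator is unbiased, and so $\mathrm{MSE}_T(\theta) = \mathrm{Var}(T) = \theta(1-\theta)/n$. A quick simulation check of these moments (a sketch, not from the notes; $\theta$, $n$ and the replication count are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 0.3, 50, 10_000  # hypothetical values

# nT ~ B(n, theta), so T = (number of successes) / n:
t = rng.binomial(n, theta, reps) / n
print(t.mean(), theta)                    # E(T) = theta
print(t.var(), theta * (1 - theta) / n)   # Var(T) = theta(1 - theta)/n
```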
