
5 Unbiased Point Estimation

where, for each $n$, $V_n(\theta)$ is a $k \times k$ positive definite matrix. From the definition of asymptotic expectation of $(\hat{\theta}_n - \theta)(\hat{\theta}_n - \theta)^{\mathrm{T}}$, $V_n(\theta)$ is the asymptotic variance-covariance matrix of $\hat{\theta}_n$. Note that this matrix may depend on $\theta$. We should note that for any fixed $n$, $V_n(\theta)$ is not necessarily the variance-covariance matrix of $\hat{\theta}_n$; that is, it is possible that $V_n(\theta) \neq \mathrm{V}(\hat{\theta}_n)$.

Just as we have defined Fisher efficiency for an unbiased estimator of fixed size, we define a sequence to be asymptotically Fisher efficient if the sequence is asymptotically unbiased, the Fisher information matrix $I_n(\theta)$ exists and is positive definite for each $n$, and $V_n(\theta) = (I_n(\theta))^{-1}$ for each $n$. The definition of asymptotic (Fisher) efficiency is often limited even further so as to apply only to estimators that are asymptotically normal. (MS2 uses the restricted definition.)

Being asymptotically efficient does not mean for any fixed $n$ that $\hat{\theta}_n$ is efficient. First of all, for fixed $n$, $\hat{\theta}_n$ may not even be unbiased; even if it is unbiased, however, it may not be efficient.
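For a simple case in which both properties do hold, consider the sample mean in the normal model (a standard check, stated here for illustration): for $X_1, \dots, X_n \stackrel{\mathrm{iid}}{\sim} \mathrm{N}_1(\mu, 1)$,
$$I_n(\mu) = n \qquad\text{and}\qquad \mathrm{V}(\bar{X}_n) = \frac{1}{n} = (I_n(\mu))^{-1} \quad\text{for every } n,$$
so $\bar{X}_n$ is unbiased and Fisher efficient for each fixed $n$, and the sequence is asymptotically Fisher efficient with $V_n(\mu) = 1/n$.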

As we have emphasized many times, asymptotic properties are different from limiting properties. As a striking illustration of this, consider a very simple example from Romano and Siegel (1986).

Example 5.25 Asymptotic and Limiting Properties
Let $X_1, \dots, X_n \stackrel{\mathrm{iid}}{\sim} \mathrm{N}_1(\mu, 1)$, and consider a randomized estimator $\hat{\mu}_n$ of $\mu$ defined by
$$\hat{\mu}_n = \begin{cases} n^2 & \text{with probability } \dfrac{1}{n} \\[6pt] \bar{X}_n & \text{with probability } 1 - \dfrac{1}{n}. \end{cases}$$

It is clear that $n^{1/2}(\hat{\mu}_n - \mu) \stackrel{\mathrm{d}}{\to} \mathrm{N}(0, 1)$, and furthermore, the Fisher information for $\mu$ is $n$. The estimator $\hat{\mu}_n$ is therefore asymptotically Fisher efficient.
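To spell out the convergence claim: $\hat{\mu}_n$ differs from $\bar{X}_n$ only on an event of probability $1/n$, so
$$\Pr(\hat{\mu}_n \neq \bar{X}_n) = \frac{1}{n} \to 0 \qquad\text{and}\qquad n^{1/2}(\bar{X}_n - \mu) \sim \mathrm{N}(0, 1) \ \text{for every } n,$$
and hence $n^{1/2}(\hat{\mu}_n - \mu)$ has the same limiting distribution as $n^{1/2}(\bar{X}_n - \mu)$.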

The bias of $\hat{\mu}_n$, however, is
$$\mathrm{E}(\hat{\mu}_n - \mu) = \mu\left(1 - \frac{1}{n}\right) + n - \mu = n - \frac{\mu}{n},$$
which tends to infinity, and the variance is
$$\begin{aligned} \mathrm{V}(\hat{\mu}_n) &= \mathrm{E}(\hat{\mu}_n^2) - (\mathrm{E}(\hat{\mu}_n))^2 \\ &= \left(1 - \frac{1}{n}\right)\left(\frac{1}{n} + \mu^2\right) + \frac{1}{n}\,n^4 - \left(\mu\left(1 - \frac{1}{n}\right) + n\right)^2 \\ &= n^3 + \mathrm{O}(n^2), \end{aligned}$$

which also tends to infinity. Hence, we have an asymptotically Fisher efficient estimator whose limiting bias and limiting variance are both infinite.
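A small simulation makes the contrast concrete. The sketch below (Python with NumPy; the values of $\mu$, $n$, and the replication count are illustrative choices, not from the text) draws the randomized estimator and compares the approximate $\mathrm{N}(0, 1)$ behavior of $n^{1/2}(\hat{\mu}_n - \mu)$ with the enormous finite-sample bias and variance.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, n, reps = 3.0, 200, 100_000   # illustrative choices, not from the text

# The sample mean of n iid N(mu, 1) observations is N(mu, 1/n); draw it directly.
xbar = rng.normal(mu, 1.0 / np.sqrt(n), size=reps)

# With probability 1/n the randomized estimator equals n^2, otherwise the sample mean.
takes_n2 = rng.random(reps) < 1.0 / n
muhat = np.where(takes_n2, float(n) ** 2, xbar)

# Asymptotic behavior: sqrt(n)(muhat - mu) is approximately N(0, 1),
# because the n^2 branch occurs with vanishing probability.
z = np.sqrt(n) * (muhat - mu)
print("P(|z| <= 1.96) ~", np.mean(np.abs(z) <= 1.96))   # roughly 0.95

# Finite-sample behavior: the rare n^2 outcome dominates the moments.
print("bias:    ", muhat.mean() - mu, "  (theory: n - mu/n =", n - mu / n, ")")
print("variance:", muhat.var(), "  (theory: ~ n^3 =", float(n) ** 3, ")")
```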

The example can be generalized to any estimator $T_n$ of $g(\theta)$ such that $\mathrm{V}(T_n) = 1/n$ and $n^{1/2}(T_n - g(\theta)) \stackrel{\mathrm{d}}{\to} \mathrm{N}(0, 1)$. From $T_n$ form the estimator

