Theory of Statistics - George Mason University

5 Unbiased Point Estimation

Note the meaning of this relationship in the multiparameter case: it says that the matrix
$$\mathrm{V}(T(X)) - \left(\frac{\partial}{\partial\theta}g(\theta)\right)^{\mathrm{T}} (I(\theta))^{-1}\, \frac{\partial}{\partial\theta}g(\theta) \qquad (5.14)$$
is nonnegative definite. (This includes the zero matrix; the zero matrix is nonnegative definite.)
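As a quick numerical illustration (not part of the text), nonnegative definiteness of a symmetric matrix such as (5.14) can be checked by inspecting its eigenvalues; `is_nonneg_definite` below is a hypothetical helper, sketched with NumPy.

```python
import numpy as np

def is_nonneg_definite(m, tol=1e-10):
    """Check that a symmetric matrix is nonnegative definite,
    i.e. all of its eigenvalues are >= 0 (up to numerical tolerance)."""
    return bool(np.all(np.linalg.eigvalsh(m) >= -tol))

# The zero matrix is nonnegative definite, as the text notes.
zero = np.zeros((2, 2))

# A diagonal matrix with nonnegative entries is nonnegative definite...
psd = np.diag([0.5, 0.0])

# ...while a matrix with a negative eigenvalue is not.
indef = np.diag([1.0, -0.1])

print(is_nonneg_definite(zero))   # True
print(is_nonneg_definite(psd))    # True
print(is_nonneg_definite(indef))  # False
```

`eigvalsh` is appropriate here because the difference matrix in (5.14) is symmetric; a small tolerance guards against spurious tiny negative eigenvalues from floating-point error.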

Example 5.11 Fisher efficiency in a normal distribution
Consider a random sample $X_1, X_2, \ldots, X_n$ from the N(µ, σ²) distribution. In Example 3.9, we used the parametrization θ = (µ, σ). Now we will use the parametrization θ = (µ, σ²). The joint log density is

$$\log p_{(\mu,\sigma^2)}(x) = c - \frac{n}{2}\log(\sigma^2) - \sum (x_i - \mu)^2/(2\sigma^2). \qquad (5.15)$$

The information matrix is diagonal, so the inverse of the information matrix is particularly simple:

$$I(\theta)^{-1} = \begin{pmatrix} \dfrac{\sigma^2}{n} & 0 \\[4pt] 0 & \dfrac{2\sigma^4}{n-1} \end{pmatrix}. \qquad (5.16)$$

For the simple case of g(θ) = (µ, σ²), we have the unbiased estimator
$$T(X) = \left(\bar{X},\ \sum_{i=1}^{n}(X_i - \bar{X})^2/(n-1)\right),$$
and
$$\mathrm{V}(T(X)) = \begin{pmatrix} \dfrac{\sigma^2}{n} & 0 \\[4pt] 0 & \dfrac{2\sigma^4}{n-1} \end{pmatrix}, \qquad (5.17)$$

which is the same as the inverse of the information matrix. The estimators are Fisher efficient.
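The diagonal of $\mathrm{V}(T(X))$ in (5.17) can be checked by simulation. The sketch below (with arbitrary illustration parameters, not from the text) draws many samples of size $n$, computes $\bar{X}$ and the unbiased sample variance $S^2$ for each, and compares their empirical variances to $\sigma^2/n$ and $2\sigma^4/(n-1)$.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical illustration parameters (not from the text).
mu, sigma2, n, reps = 0.0, 4.0, 10, 200_000

samples = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
xbar = samples.mean(axis=1)        # first component of T(X)
s2 = samples.var(axis=1, ddof=1)   # second component of T(X); ddof=1 gives the unbiased S^2

# Empirical variances vs. the diagonal entries of V(T(X)) in (5.17).
print(xbar.var(), sigma2 / n)                 # both near 0.4
print(s2.var(), 2 * sigma2**2 / (n - 1))      # both near 3.56
```

With 200,000 replications the Monte Carlo error is small, so the empirical variances land close to the stated values; `ddof=1` matters, since `var` defaults to the divisor $n$ rather than $n-1$.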

It is important to know in what situations an unbiased estimator can achieve the CRLB. Notice that this depends on both p(X, θ) and g(θ). Let us consider this question for the case of scalar θ and scalar function g. The necessary and sufficient condition for an estimator T of g(θ) to attain the CRLB is that (T − g(θ)) be proportional to ∂log(p(X, θ))/∂θ a.e.; that is, for some a(θ) that does not depend on X,
$$\frac{\partial \log(p(X,\theta))}{\partial\theta} = a(\theta)\,(T - g(\theta)) \quad \text{a.e.} \qquad (5.18)$$

This means that the CRLB can be attained by an unbiased estimator only in the one-parameter exponential family.
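Condition (5.18) is easy to verify concretely for a member of the one-parameter exponential family. For the Poisson(θ) family the score is $\sum x_i/\theta - n = (n/\theta)(\bar{x} - \theta)$, which has the form (5.18) with $a(\theta) = n/\theta$, $T = \bar{X}$, and $g(\theta) = \theta$. A minimal numerical sketch (the sample and θ are arbitrary illustration choices):

```python
import math

def loglik(theta, x):
    """Poisson log-likelihood, omitting the additive -log(x_i!) terms,
    which do not depend on theta and vanish on differentiation."""
    return sum(xi * math.log(theta) - theta for xi in x)

def score_numeric(theta, x, h=1e-6):
    """Central-difference approximation to d/dtheta log p(x; theta)."""
    return (loglik(theta + h, x) - loglik(theta - h, x)) / (2 * h)

x = [3, 0, 2, 5, 1, 4]               # an arbitrary Poisson-like sample
n, xbar = len(x), sum(x) / len(x)
theta = 2.0

lhs = score_numeric(theta, x)
rhs = (n / theta) * (xbar - theta)   # a(theta) * (T - g(theta))
print(lhs, rhs)                      # the two agree to numerical precision
```

The numerical derivative of the log-likelihood matches $a(\theta)(T - g(\theta))$, as (5.18) requires, so $\bar{X}$ attains the CRLB for estimating θ in the Poisson family.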

For example, there are unbiased estimators of the mean in the normal, Poisson, and binomial families that attain the CRLB. There is no unbiased

Theory of Statistics © 2000–2013 James E. Gentle
