
5.1 UMVUE

The risk of T∗ is less than the maximum risk of T; therefore, T is not minimax.

Now we might ask whether T∗ itself is minimax.

We first note that the risk (5.12) is constant, so T∗ is minimax if it is admissible or if it is a Bayesian estimator (in either case with respect to the squared-error loss). We can see that T∗ is a Bayesian estimator (with a beta prior). (You are asked to prove this in Exercise 4.10 on page 379.) As we show in Chapter 4, a Bayes estimator with a constant risk is a minimax estimator; hence, T∗ is minimax. (This example is due to Lehmann.)
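The text defines T∗ on an earlier page; as a hedged aside, in the classic Lehmann version of this example (which we assume is the one intended here), X ∼ Binomial(n, π) and

\[
T^* = \frac{X + \sqrt{n}/2}{n + \sqrt{n}},
\]

whose squared-error risk is

\[
\mathrm{E}\bigl[(T^* - \pi)^2\bigr]
  = \frac{n\pi(1-\pi) + n(1/2 - \pi)^2}{(n + \sqrt{n})^2}
  = \frac{1}{4(\sqrt{n} + 1)^2},
\]

which does not depend on π. Constant risk combined with the Bayes property (under a symmetric beta prior) is exactly what delivers minimaxity in the argument above.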

Although we may initially be led to UMVU estimators by consideration of a squared-error loss, which leads to a mean squared-error risk, the UMVUE may not minimize the MSE. It was the fact that we could not minimize the MSE uniformly that led us to impose the requirement of unbiasedness. There may, however, be estimators that have a uniformly smaller MSE than the UMVUE. An example of this is the estimation of the variance in a normal distribution. In Example 5.6 we have seen that the UMVUE of σ² in the normal distribution is S², while in Example 3.13 we have seen that the MLE of σ² is (n − 1)S²/n, and by equation (3.55) on page 239, we see that the MSE of the MLE is uniformly less than the MSE of the UMVUE.
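As a quick check (a sketch using the standard normal-sampling moments, not a reproduction of the text's equation (3.55)), for X₁, …, Xₙ iid N(μ, σ²),

\[
\mathrm{MSE}(S^2) = \mathrm{V}(S^2) = \frac{2\sigma^4}{n-1},
\qquad
\mathrm{MSE}\!\left(\frac{(n-1)S^2}{n}\right)
  = \frac{2(n-1)\sigma^4}{n^2} + \frac{\sigma^4}{n^2}
  = \frac{(2n-1)\sigma^4}{n^2},
\]

and

\[
\frac{2n-1}{n^2} < \frac{2}{n-1}
\iff 2n^2 - 3n + 1 < 2n^2,
\]

which holds for every n ≥ 2, so the (biased) MLE has uniformly smaller MSE than the UMVUE S².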

There are other ways in which UMVUEs may not be very good as estimators; see, for example, Exercise 5.2. A further undesirable property of UMVUEs is that they are not invariant to transformation.

5.1.5 Lower Bounds on the Variance of Unbiased Estimators

The three Fisher information regularity conditions (see page 168) play a major role in the theory of UMVUEs. In particular, these conditions allow us to develop a lower bound on the variance of any unbiased estimator.

The Information Inequality (CRLB) for Unbiased Estimators

What is the smallest variance an unbiased estimator can have? For an unbiased estimator T of g(θ) in a family of densities satisfying the regularity conditions and such that T has a finite second moment, the answer results from inequality (3.83) on page 252 for the scalar estimator T and estimand g(θ). (Note that θ itself may be a vector.) That is the information inequality or the Cramér-Rao lower bound (CRLB), and it results from the covariance inequality.

If g(θ) is a vector, then ∂g(θ)/∂θ is the Jacobian, and we have

\[
\mathrm{V}(T(X)) \succeq
  \left(\frac{\partial}{\partial\theta}\, g(\theta)\right)^{\mathrm{T}}
  \bigl(I(\theta)\bigr)^{-1}
  \frac{\partial}{\partial\theta}\, g(\theta),
\tag{5.13}
\]

where we assume the existence of all quantities in the expression.
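As a simple scalar illustration (our own, not an example from the text): for X₁, …, Xₙ iid N(μ, σ²) with σ² known and g(μ) = μ, the Fisher information is I(μ) = n/σ², so the bound in (5.13) reduces to

\[
\mathrm{V}(T(X)) \ge \frac{\sigma^2}{n} = \mathrm{V}(\bar{X}),
\]

and the sample mean X̄, being unbiased and attaining the bound, is the UMVUE of μ.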

