Theory of Statistics - George Mason University

3 Basic Statistical Theory

Example 3.21 Risk functions for estimators of the parameter in a binomial distribution

Suppose we have an observation X from a binomial distribution with parameters n and π. The PDF (wrt the counting measure) is

$$ p_X(x) = \binom{n}{x} \pi^x (1-\pi)^{n-x} \, I_{\{0,1,\ldots,n\}}(x). $$

We wish to estimate π. The MLE of π is

$$ T(X) = X/n. \qquad (3.107) $$

We also see that this is an unbiased estimator. Under squared-error loss, the risk is

$$ R_T(\pi) = \mathrm{E}\left( (X/n - \pi)^2 \right) = \pi(1-\pi)/n, $$

which, of course, is just the variance.
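This closed form for the risk can be checked directly by summing over the binomial pmf (a quick numerical sketch; the values n = 10 and π = 0.3 are arbitrary choices, not from the text):

```python
from math import comb

def binom_pmf(x, n, p):
    # P(X = x) for X ~ binomial(n, p)
    return comb(n, x) * p**x * (1 - p)**(n - x)

def risk_T(n, p):
    # R_T(pi) = E((X/n - pi)^2), computed exactly over the pmf
    return sum(binom_pmf(x, n, p) * (x / n - p) ** 2 for x in range(n + 1))

n, p = 10, 0.3
print(abs(risk_T(n, p) - p * (1 - p) / n) < 1e-12)  # True: the risk is pi(1-pi)/n
```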

Let us consider a randomized estimator: for some 0 ≤ α ≤ 1, let

$$ \delta_\alpha(X) = \begin{cases} T & \text{with probability } 1-\alpha \\ 1/2 & \text{with probability } \alpha. \end{cases} \qquad (3.108) $$

This is a type of shrunken estimator. The motivation to move T toward 1/2 is that the maximum of the risk of T occurs at π = 1/2. By increasing the probability of selecting that value, the risk at that point will be reduced, and so perhaps this will reduce the risk in some overall way.

Under squared-error loss, the risk of δ_α(X) is

$$ R_{\delta_\alpha}(\pi) = (1-\alpha)\,\mathrm{E}\left( (X/n-\pi)^2 \right) + \alpha\,\mathrm{E}\left( (1/2-\pi)^2 \right) = (1-\alpha)\,\pi(1-\pi)/n + \alpha\,(1/2-\pi)^2. $$
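This closed form can be corroborated by simulating the randomized estimator directly (a sketch under arbitrary choices n = 10, π = 0.3, α = 0.05; the Monte Carlo average of the squared error should agree with the formula up to sampling noise):

```python
import random

def risk_delta_formula(alpha, n, p):
    # closed form: (1 - alpha) pi (1 - pi) / n + alpha (1/2 - pi)^2
    return (1 - alpha) * p * (1 - p) / n + alpha * (0.5 - p) ** 2

def simulate_risk(alpha, n, p, reps=100_000, seed=1):
    # Monte Carlo estimate of E((delta_alpha(X) - pi)^2)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        x = sum(rng.random() < p for _ in range(n))   # draw X ~ binomial(n, p)
        est = 0.5 if rng.random() < alpha else x / n  # the randomization step
        total += (est - p) ** 2
    return total / reps

alpha, n, p = 0.05, 10, 0.3
print(abs(simulate_risk(alpha, n, p) - risk_delta_formula(alpha, n, p)) < 2e-3)
```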

The point mass at 1/2 has a spreading effect: it lowers the risk near π = 1/2 at the cost of raising it near the endpoints, and the risk function remains smooth. The risk of δ_α(X) also has a maximum at π = 1/2, but there it is (1 − α)/(4n), compared to R_T(1/2) = 1/(4n).

We see that for α = 1/(n + 1) the risk is constant with respect to π; it equals 1/(4(n + 1)) for all π. Therefore δ_{1/(n+1)}(X) is the minimax estimator within this family wrt squared-error loss.
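The constancy of the risk at α = 1/(n + 1) is easy to verify numerically (a sketch; n = 10 and the grid of π values are arbitrary choices):

```python
def risk_delta(alpha, n, p):
    # risk of delta_alpha: (1 - alpha) pi (1 - pi) / n + alpha (1/2 - pi)^2
    return (1 - alpha) * p * (1 - p) / n + alpha * (0.5 - p) ** 2

n = 10
alpha = 1 / (n + 1)
risks = [risk_delta(alpha, n, k / 100) for k in range(101)]
# for alpha = 1/(n+1) the risk equals 1/(4(n+1)) at every value of pi
print(max(risks) - min(risks) < 1e-12)             # True
print(abs(risks[50] - 1 / (4 * (n + 1))) < 1e-12)  # True
```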

Risk functions are shown in Figure 3.1 for T(X) and for δ_{0.05}(X) and δ_{1/(n+1)}(X). Notice that neither δ_α(X) nor T(X) dominates the other.
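The non-dominance can also be checked pointwise (a sketch; n = 10, α = 0.05, and the two π values are arbitrary choices):

```python
def risk_T(n, p):
    # risk of the MLE: pi (1 - pi) / n
    return p * (1 - p) / n

def risk_delta(alpha, n, p):
    # risk of the randomized estimator delta_alpha
    return (1 - alpha) * risk_T(n, p) + alpha * (0.5 - p) ** 2

n, alpha = 10, 0.05
# near pi = 1/2 the shrunken estimator has smaller risk ...
print(risk_delta(alpha, n, 0.5) < risk_T(n, 0.5))    # True
# ... but near the endpoints the MLE wins, so neither dominates
print(risk_delta(alpha, n, 0.01) > risk_T(n, 0.01))  # True
```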

3.3.5 Summary and Review<br />

We have discussed five general approaches to statistical inference, and we have identified certain desirable properties that a method of inference may have. A first objective in mathematical statistics is to characterize optimal properties of statistical methods. The setting for statistical inference includes the distribution families that are assumed a priori, the objectives of the statistical

Theory of Statistics ©2000–2013 James E. Gentle
