
The various pieces of this theorem will be considered in other places where the particular type of estimation is discussed.

If in a Bayesian setup, the prior distribution and the posterior distribution are in the same parametric family, that is, if Q in equations (3.3) and (3.4) represents a single parametric family, then a squared-error loss yields Bayes estimators for E(X) that are linear in X. (If a prior distribution on the parameters together with a conditional distribution of the observables yields a posterior in the same parametric family as the prior, the prior is said to be conjugate with respect to the conditional distribution of the observables. We will consider various types of priors more fully in Chapter 4.)
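For a standard illustration (the binomial/beta pair is chosen here only for concreteness; it is not part of this passage), suppose X | π has a binomial(n, π) distribution and π has a beta(α, β) prior. The posterior is beta(α + X, β + n − X), which is in the same parametric family as the prior, so the beta prior is conjugate. Under squared-error loss the Bayes estimator of π is the posterior mean (α + X)/(α + β + n), and hence the Bayes estimator of E(X) = nπ is n(α + X)/(α + β + n), which is linear in X.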

Because we use squared-error loss functions so often, we must be careful not to assume certain common properties hold. Other types of loss functions can provide useful counterexamples.

3.3.3 Admissibility

By Definition 3.13, a decision δ∗ is admissible if there does not exist a decision δ that dominates δ∗. Because this definition is given as a negative condition, it is often easier to show that a rule is inadmissible; all that is required is to exhibit another rule that dominates it. In this section we consider some properties of admissibility and ways of identifying admissible or inadmissible rules.
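As a simple illustration (not from this passage), suppose X₁ and X₂ are i.i.d. N(θ, 1) and θ is estimated under squared-error loss. The rule δ(X) = X₁ has constant risk 1, while δ∗(X) = (X₁ + X₂)/2 has constant risk 1/2, so exhibiting δ∗ shows that δ is inadmissible.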

Admissibility of Estimators under Squared-Error Loss

Any property defined in terms of the risk depends on the loss function. As we have seen above, the squared-error loss often results in estimators that have “nice” properties. Here is another one.

Under a squared-error loss function an unbiased estimator is always at least as good as a biased estimator unless the bias has a negative correlation with the unbiased estimator.

Theorem 3.10
Let E(T(X)) = g(θ), and let T̃(X) = T(X) + B, where Cov(T, B) ≥ 0. Then under squared-error loss, the risk of T(X) is uniformly less than the risk of T̃(X); that is, T̃(X) is inadmissible.

Proof.
Because E(T) = g(θ), expanding E((T + B − g(θ))²) gives

R(g(θ), T̃) = R(g(θ), T) + V(B) + (E(B))² + 2 Cov(T, B) ∀ θ.
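The following Python sketch gives a quick numerical check of this decomposition; the normal model, the value of θ, the sample size, and the bias term B = 0.5 T + 0.3 (a data-dependent term with Cov(T, B) > 0) are illustrative assumptions, not part of the text.

import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 200000

# T is the sample mean, an unbiased estimator of theta under a N(theta, 1) model
# (an assumed setup chosen for illustration).
X = rng.normal(theta, 1.0, size=(reps, n))
T = X.mean(axis=1)

# B is a hypothetical bias term that depends only on the data and is
# positively correlated with T: Cov(T, B) = 0.5 V(T) > 0.
B = 0.5 * T + 0.3
T_tilde = T + B

risk_T = np.mean((T - theta) ** 2)            # approx V(T) = 0.1
risk_T_tilde = np.mean((T_tilde - theta) ** 2)
print(risk_T, risk_T_tilde)
# risk_T_tilde exceeds risk_T by about V(B) + (E(B))^2 + 2 Cov(T, B),
# matching the decomposition in the proof.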

Also under a squared-error loss function, an unbiased estimator dominates a biased estimator unless the bias is a function of the parameter.
