Theory of Statistics - George Mason University

262 3 Basic Statistical Theory

the sample median is L-unbiased under a squared-error loss, but it is not admissible under that loss, while the sample mean is L-unbiased under an absolute-error loss, but it is not admissible under that loss.

This is the basis for defining unbiasedness for statistical tests and confidence sets.
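The claim about the sample median can be checked empirically. As an illustrative sketch (not part of the text; it assumes a normal family, in which the sample mean is known to dominate the sample median under squared-error loss), a small Monte Carlo comparison of the two risks:

```python
import random
import statistics

def mc_risk(estimator, theta=2.0, n=25, reps=20000, seed=42):
    """Monte Carlo estimate of the squared-error risk E_theta (T(X) - theta)^2
    for an estimator T of a normal mean theta, with samples of size n."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        x = [rng.gauss(theta, 1.0) for _ in range(n)]
        total += (estimator(x) - theta) ** 2
    return total / reps

risk_mean = mc_risk(statistics.fmean)     # close to sigma^2 / n = 0.04
risk_median = mc_risk(statistics.median)  # roughly (pi/2) * sigma^2 / n, larger
```

For normal data the median's risk is about pi/2 times the mean's, so under squared-error loss the median is dominated and hence inadmissible, as the text states.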

Unbiasedness for estimators has a simple definition. For a squared-error loss in estimating g(θ), T is L-unbiased if and only if T is an unbiased estimator of g(θ). Of course, in this case the loss function need not be considered, and the requirement is just Eθ(T(X)) = g(θ).
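This equivalence can be seen numerically. A minimal sketch (the setup — X ~ N(θ, 1), T the sample mean, g the identity — is an assumption for illustration, not from the text): under squared-error loss, the expected loss Eθ(T − m)², viewed as a function of a candidate target m, is minimized at m = Eθ(T), so the minimum sits at g(θ) exactly when T is unbiased:

```python
import random

rng = random.Random(0)
theta, n, reps = 1.0, 20, 40000

# Simulated values of T(X) = sample mean of X ~ N(theta, 1), sample size n.
t_values = [sum(rng.gauss(theta, 1.0) for _ in range(n)) / n
            for _ in range(reps)]

# E_theta (T - m)^2 as a function of a candidate target m; for squared-error
# loss this is minimized at m = E_theta(T), which equals g(theta) = theta
# exactly when T is unbiased.
grid = [theta + k * 0.05 for k in range(-10, 11)]
risk = {m: sum((t - m) ** 2 for t in t_values) / reps for m in grid}
best = min(risk, key=risk.get)  # the grid point nearest E_theta(T)
```

Since the sample mean is unbiased here, the minimizing grid point is θ itself.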

L-Invariance

On page 217 we referred to equivariant estimators in parametric transformation group families (see Section 2.6). We mentioned that associated with the group G of transformations of the random variable is a group, Ḡ, of transformations of the parameter and a group, G*, of transformations on the estimator.

In a decision-theoretic approach, the relevance of equivariance depends not only on the family of distributions, but also on the equivariance of the loss function. In the loss function L(P, T(X)), a transformation of the first argument can be thought of as a map P_X → P_g(X), or equivalently as a map P_θ → P_ḡ(θ). The statistical decision procedure T(X) is L-invariant for a given loss function L if for each g ∈ G there exists a unique g* ∈ G*, such that

L(P_X, T(X)) = L(P_g(X), g*(T(X))),   (3.99)

or equivalently, for each ḡ ∈ Ḡ,

L(P_θ, T(X)) = L(P_ḡ(θ), g*(T(X))).   (3.100)

The g* in these expressions is the same as in equation (3.22). We will often require that statistical procedures be equivariant, in the sense that the quantities involved (the estimators, the confidence sets, and so on) change in a manner that is consistent with changes in the parametrization. The main point of this requirement, however, is to ensure L-invariance, that is, invariance of the loss. We will discuss equivariance of statistical procedures in more detail in Section 3.4.
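A concrete instance of equation (3.99) can be sketched for a location family (the specific choices here — a normal location family, squared-error loss with g(θ) = θ, and the shift c — are illustrative assumptions, not from the text): g(x) = x + c acts on the data, ḡ(θ) = θ + c on the parameter, and g*(t) = t + c on the estimator, and the loss is unchanged under the simultaneous transformations.

```python
import random

def loss(theta, t):
    """Squared-error loss L(P_theta, t) = (t - theta)^2, estimating g(theta) = theta."""
    return (t - theta) ** 2

def sample_mean(x):
    return sum(x) / len(x)

rng = random.Random(1)
theta, c = 3.0, 7.5  # parameter, and an arbitrary shift defining g(x) = x + c
x = [rng.gauss(theta, 1.0) for _ in range(30)]

t = sample_mean(x)                         # T(X)
t_g = sample_mean([xi + c for xi in x])    # T(g(X)), equal to g*(T(X)) = T(X) + c

# Equation (3.99) for this group: transforming data, parameter, and
# estimator together leaves the loss unchanged.
lhs = loss(theta, t)
rhs = loss(theta + c, t + c)
```

That the sample mean satisfies T(g(X)) = g*(T(X)) is exactly its equivariance under the location group.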

Uniformly Minimizing the Risk

All discussions of statistical inference are in the context of some family of distributions, and when we speak of a “uniform” property, we mean a property that holds for all members of the family.
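A simple pair of risk functions shows why the qualifier “uniform” matters. As an illustrative sketch (the estimators and constants here are hypothetical, not from the text): for a sample mean X̄ of n observations with variance σ², the shrunken estimator aX̄ with 0 < a < 1 has smaller risk near θ = 0 but larger risk for θ far from 0, so neither estimator has uniformly smaller risk.

```python
def risk_sample_mean(theta, n=10, sigma2=1.0):
    """Squared-error risk of the sample mean: its variance sigma^2 / n (no bias)."""
    return sigma2 / n

def risk_shrunken(theta, a=0.5, n=10, sigma2=1.0):
    """Squared-error risk of a * Xbar: variance a^2 sigma^2 / n
    plus squared bias (1 - a)^2 theta^2."""
    return a * a * sigma2 / n + (1 - a) ** 2 * theta ** 2

# Near theta = 0 the shrunken estimator has the smaller risk; for theta
# far from 0 it has the larger risk, so neither dominates uniformly.
```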

If we have the problem of estimating g(θ) under some given loss function L, it is often the case that for some specific value of θ, say θ1, one particular

Theory of Statistics ©2000–2013 James E. Gentle
