

Important properties of estimators include bias, mean squared error, Pitman closeness, and equivariance under transformations. In this section we consider these properties only in the context of estimation, but the same properties have meaning in other types of statistical inference, although the specific definitions may differ. "Good" can be defined either in terms of various measures associated with an estimator (bias, mean squared error, Pitman closeness) or as a property that the estimator either possesses or does not (equivariance).

Bias

The bias of T(X) for estimating g(θ) is

E(T(X)) − g(θ).    (3.14)

One of the most commonly required properties of point estimators is unbiasedness.

Definition 3.1 (unbiased point estimator)
The estimator T(X) is unbiased for g(θ) if

E(T(X)) = g(θ)    ∀θ ∈ Θ.

Unbiasedness is a uniform property of the expected value.
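As a concrete check of Definition 3.1, here is a minimal simulation sketch (the NumPy setup, the normal data, and the sample size are illustrative choices, not from the text): the sample variance with divisor n − 1 is unbiased for σ², while the divisor-n version is not.

```python
import numpy as np

# Minimal sketch checking unbiasedness by simulation; the N(0, 4) data,
# n = 10, and replication count are illustrative assumptions.
rng = np.random.default_rng(42)
n, sigma2, reps = 10, 4.0, 200_000

samples = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(reps, n))

t1 = samples.var(axis=1, ddof=1)  # divisor n - 1: unbiased for sigma^2
t2 = samples.var(axis=1, ddof=0)  # divisor n: biased downward

print("mean of T1:", t1.mean())   # close to sigma^2 = 4.0
print("mean of T2:", t2.mean())   # close to (n - 1)/n * 4.0 = 3.6
```

Averaging over many replications approximates E(T(X)); only the first estimator's average stays at σ² for every parameter value, which is what the uniformity in Definition 3.1 demands.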

We can also define other types of unbiasedness in terms of other aspects of a probability distribution. For example, an estimator whose median is the estimand is said to be median-unbiased.

Unbiasedness has different definitions in other settings (estimating functions, for example; see page 251) and for other types of statistical inference (for example, testing, see page 292, and determining confidence sets, see page 296), but the meanings are similar.

If two estimators are unbiased, we would reasonably prefer one with smaller variance.
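For instance, for normal data both the sample mean and the sample median are unbiased for the mean μ (each has a distribution symmetric about μ), but the mean has the smaller variance. A quick simulation sketch, again with an illustrative setup of my own rather than one from the text:

```python
import numpy as np

# Sketch comparing two unbiased estimators of a normal mean by variance;
# mu, n, and the replication count are illustrative assumptions.
rng = np.random.default_rng(0)
mu, n, reps = 1.0, 25, 100_000

samples = rng.normal(loc=mu, scale=1.0, size=(reps, n))
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

print("E(mean)   ~", means.mean())    # both are close to mu = 1.0
print("E(median) ~", medians.mean())
print("V(mean)   ~", means.var())     # about 1/n = 0.04
print("V(median) ~", medians.var())   # about pi/(2n) ~ 0.063, larger
```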

Mean Squared Error

Another measure of the goodness of a scalar estimator is the mean squared error, or MSE,

MSE(T(X)) = E((T(X) − g(θ))²),    (3.15)

which is the square of the bias plus the variance:

MSE(T(X)) = (E(T(X)) − g(θ))² + V(T(X)).    (3.16)
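The decomposition (3.16) follows by adding and subtracting E(T(X)) inside the square in (3.15) and noting that the cross term has expectation zero. A short numerical check under illustrative assumptions (a biased variance estimator and normal data, both my choices):

```python
import numpy as np

# Sketch verifying (3.16), MSE = bias^2 + variance, for the biased
# divisor-n variance estimator; the setup is an illustrative assumption.
rng = np.random.default_rng(1)
n, sigma2, reps = 10, 4.0, 500_000

samples = rng.normal(scale=np.sqrt(sigma2), size=(reps, n))
t = samples.var(axis=1, ddof=0)      # T(X), a biased estimator of sigma^2

mse = np.mean((t - sigma2) ** 2)     # direct Monte Carlo estimate of (3.15)
bias = t.mean() - sigma2
rhs = bias ** 2 + t.var()            # right-hand side of (3.16)

print(mse, rhs)                      # agree up to Monte Carlo error
```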

An example due to C. R. Rao (Rao, 1981) makes us realize that we often face a dilemma in finding a good estimator: a "bad" estimator may have a smaller MSE than a "good" estimator.
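Rao's specific example is not reproduced here; a standard illustration of the same dilemma (my substitute, not Rao's construction) is estimation of a normal variance: the biased estimator that divides the centered sum of squares by n + 1 has uniformly smaller MSE than the unbiased divisor-(n − 1) estimator.

```python
import numpy as np

# Standard illustration of the bias/MSE dilemma (not Rao's 1981 example):
# for normal data, dividing the centered sum of squares by n + 1 gives a
# biased estimator of sigma^2 with uniformly smaller MSE than the
# unbiased divisor-(n - 1) estimator.
rng = np.random.default_rng(2)
n, sigma2, reps = 10, 4.0, 500_000

samples = rng.normal(scale=np.sqrt(sigma2), size=(reps, n))
ss = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

t_unbiased = ss / (n - 1)  # unbiased; theoretical MSE = 2 sigma^4 / (n - 1)
t_biased = ss / (n + 1)    # biased;   theoretical MSE = 2 sigma^4 / (n + 1)

print(np.mean((t_unbiased - sigma2) ** 2))  # ~ 2 * 16 / 9  = 3.56
print(np.mean((t_biased - sigma2) ** 2))    # ~ 2 * 16 / 11 = 2.91
```

The ratio of the two theoretical MSEs is (n − 1)/(n + 1) < 1 for every σ², so the biased estimator dominates the unbiased one in MSE.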
