

Example 5.4 An unbiased estimator with poor properties

Consider the problem of using a sample of size 1 for estimating g(θ) = e^{−3θ}, where θ is the parameter in a Poisson distribution. An unbiased estimator is

T(X) = (−2)^X,

as you are asked to show in Exercise 5.2.

The estimator is ridiculous. It can be negative, even though g(θ) > 0. Its magnitude increases over the positive integers, even though g(θ) decreases over positive values of θ; larger observed values of X suggest a larger θ and hence a smaller value of g(θ).
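As a quick numerical check of the unbiasedness claim (a minimal sketch, not part of Exercise 5.2; the function name expected_T and the truncation at 60 terms are ad hoc choices), one can sum the Poisson series for E_θ((−2)^X) and compare it with e^{−3θ}:

```python
import math

def expected_T(theta, n_terms=60):
    # E_theta[(-2)^X] for X ~ Poisson(theta), by truncating the series
    # sum_{x>=0} (-2)^x * e^{-theta} * theta^x / x!
    return sum((-2.0) ** x * math.exp(-theta) * theta ** x / math.factorial(x)
               for x in range(n_terms))

for theta in (0.5, 1.0, 2.0):
    print(theta, expected_T(theta), math.exp(-3 * theta))
```

The identity behind the check is Σ_x (−2)^x e^{−θ} θ^x / x! = e^{−θ} e^{−2θ} = e^{−3θ}, so the truncated sum matches e^{−3θ} essentially to machine precision, even though any single realization of T(X) is a wildly poor estimate.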

Degree of a Statistical Function

If a statistical function is estimable, we may ask how many observations are required to estimate it; that is, to estimate it unbiasedly. We refer to this number as the degree of the statistical function. Obviously, this depends on the distribution as well as the functional. A mean functional may not even exist, for example, in a Cauchy distribution, but if the mean functional exists, it is estimable and its degree is 1. The variance functional in a normal distribution is estimable and its degree is 2 (see page 401).
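The degree-2 claim for the variance can be illustrated with the symmetric kernel h(x1, x2) = (x1 − x2)²/2, which satisfies E(h(X1, X2)) = σ² whenever the variance exists. The following simulation is a minimal sketch of that fact for a normal distribution (the constants mu, sigma, and n_pairs are arbitrary choices for the illustration):

```python
import random

def h(x1, x2):
    # degree-2 kernel: E[(X1 - X2)^2 / 2] = sigma^2 when the variance exists
    return (x1 - x2) ** 2 / 2

random.seed(0)
mu, sigma, n_pairs = 3.0, 2.0, 200_000
avg = sum(h(random.gauss(mu, sigma), random.gauss(mu, sigma))
          for _ in range(n_pairs)) / n_pairs
print(avg, sigma ** 2)  # the Monte Carlo average should be close to sigma^2 = 4
```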

5.1 Uniformly Minimum Variance Unbiased Point Estimation

An unbiased estimator may not be unique. If there is more than one unbiased estimator, we will seek one that has certain optimal properties.

5.1.1 Unbiased Estimators of Zero

Unbiased estimators of 0 play a useful role in UMVUE problems.

If T(X) is unbiased for g(θ), then T(X) − U(X) is also unbiased for g(θ) for any U such that E(U(X)) = 0; in fact, all unbiased estimators of g(θ) belong to an equivalence class defined as

{T(X) − U(X)}, (5.4)

where E_θ(U(X)) = 0.
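To see the equivalence class (5.4) concretely, take a sample of size 2 from N(μ, 1): T(X) = (X1 + X2)/2 is unbiased for μ, U(X) = X1 − X2 is an unbiased estimator of zero, and T − cU is unbiased for every c, with variance 1/2 + 2c² that is smallest at c = 0. The following sketch checks this by simulation (the example, the constants, and the names are my own, not from the text):

```python
import random

random.seed(1)
mu, n_samples = 5.0, 200_000

def mean_and_var(c):
    # members of the class (5.4): T(X) - c*U(X), where T = (X1 + X2)/2 is
    # unbiased for mu and U = X1 - X2 is an unbiased estimator of zero
    vals = []
    for _ in range(n_samples):
        x1, x2 = random.gauss(mu, 1.0), random.gauss(mu, 1.0)
        vals.append((x1 + x2) / 2 - c * (x1 - x2))
    m = sum(vals) / n_samples
    v = sum((x - m) ** 2 for x in vals) / n_samples
    return m, v

for c in (0.0, 0.25, 0.5):
    m, v = mean_and_var(c)
    print(f"c={c}: mean~{m:.3f} (target {mu}), var~{v:.3f} (theory {0.5 + 2 * c * c:.3f})")
```

Every choice of c gives another unbiased estimator, but only c = 0 minimizes the variance; this is the sense in which unbiased estimators of zero point toward the optimal member of the class.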

In Theorem 5.2 and its corollary we will see ways that unbiased estimators of zero can be used to identify optimal unbiased estimators.

In some cases, there may be no nontrivial U(X) that yields a different unbiased estimator in (5.4). Consider, for example, a single Bernoulli trial with probability of success π, yielding the random variable X, and consider T(X) = X as an estimator of π. We immediately see that T(X) is unbiased for π. Now, let S be an estimator of π, and let S(0) = s0 and S(1) = s1. For S to be unbiased, we must have

E_π(S(X)) = (1 − π)s0 + πs1 = π for all π ∈ (0, 1),

which forces s0 = 0 and s1 = 1; that is, S = T, and the only unbiased estimator of zero here is the one that is identically zero.
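A symbolic check of these constraints is immediate (a minimal sketch assuming SymPy is available; the symbol names p, s0, s1 are ad hoc):

```python
import sympy as sp

p, s0, s1 = sp.symbols('p s0 s1')
# unbiasedness: E_p(S(X)) = (1 - p)*s0 + p*s1 must equal p for every p in (0, 1)
expr = sp.expand((1 - p) * s0 + p * s1 - p)
# the polynomial in p must vanish identically, so every coefficient is zero
solution = sp.solve([expr.coeff(p, 0), expr.coeff(p, 1)], [s0, s1])
print(solution)  # {s0: 0, s1: 1}: S must be T(X) = X itself
```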

