Student Notes To Accompany MS4214: STATISTICAL INFERENCE

Given any estimator θ̂ we can calculate its expected value for each possible value of θ ∈ Θ. An estimator is said to be unbiased if this expected value is identically equal to θ. If an estimator is unbiased, then if we repeat the experiment an infinite number of times with θ fixed and calculate the value of the estimator each time, the average of the estimator values will be exactly equal to θ. From the frequentist viewpoint this is a desirable property, and so, where possible, frequentists use unbiased estimators.
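The repeated-experiment interpretation above can be checked empirically. The following sketch (illustrative, not from the notes; the distribution and parameter values are assumptions) simulates many experiments, computes the sample mean in each, and verifies that the average of the estimator values settles near the true θ:

```python
import random

# Illustrative sketch: take theta = E[X] and estimate it by the sample
# mean.  Repeating the experiment many times with theta fixed, the
# average of the estimator values should be close to theta (unbiasedness).
random.seed(0)
theta = 2.0    # true parameter (assumed value, for illustration)
n = 10         # sample size per experiment
reps = 20000   # number of repeated experiments

def sample_mean(n, theta):
    # one experiment: draw n observations, return the estimator value
    xs = [random.gauss(theta, 1.0) for _ in range(n)]
    return sum(xs) / n

estimates = [sample_mean(n, theta) for _ in range(reps)]
avg = sum(estimates) / reps   # close to theta for large reps
```

With 20,000 repetitions the Monte Carlo error of `avg` is of order 1/√(n·reps), so the agreement with θ is tight.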

Definition 2.1 (The Frequentist philosophy). To evaluate the usefulness of an estimator θ̂ = θ̂(x) of θ, examine the properties of the random variable θ̂ = θ̂(X). □

Definition 2.2 (Unbiased estimators). An estimator θ̂ = θ̂(X) is said to be unbiased for a parameter θ if it equals θ in expectation:

E[θ̂(X)] = E(θ̂) = θ.

Intuitively, an unbiased estimator is ‘right on target’. □

Definition 2.3 (Bias of an estimator). The bias of an estimator θ̂ = θ̂(X) of θ is defined as bias(θ̂) = E[θ̂(X)] − θ. □

Definition 2.4 (Bias-corrected estimators). If bias(θ̂) is of the form cθ, then E(θ̂) = (1 + c)θ and so (obviously) θ̃ = θ̂/(1 + c) is unbiased for θ. Likewise, if bias(θ̂) = c, i.e. E(θ̂) = θ + c, then θ̃ = θ̂ − c is unbiased for θ. In such situations we say that θ̃ is a bias-corrected version of θ̂. □
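A classical instance of the multiplicative case is the naive variance estimator. The sketch below (illustrative; the Gaussian data and parameter values are assumptions) uses the standard fact that E[(1/n)Σ(Xᵢ − X̄)²] = ((n − 1)/n)σ², so bias = cσ² with c = −1/n, and dividing by (1 + c) = (n − 1)/n recovers the usual unbiased sample variance:

```python
import random

# Sketch: the naive variance estimator (1/n) * sum((x_i - xbar)^2) has
# bias c * sigma^2 with c = -1/n, since E[naive] = ((n-1)/n) * sigma^2.
# Dividing by (1 + c) = (n-1)/n gives a bias-corrected estimator.
random.seed(1)
sigma2 = 4.0   # true variance (assumed, for illustration)
n = 5
reps = 40000

def naive_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

naive, corrected = [], []
for _ in range(reps):
    xs = [random.gauss(0.0, sigma2 ** 0.5) for _ in range(n)]
    v = naive_var(xs)
    naive.append(v)
    corrected.append(v / (1 - 1 / n))   # divide by (1 + c), c = -1/n

avg_naive = sum(naive) / reps       # near ((n-1)/n) * sigma2 = 3.2
avg_corr = sum(corrected) / reps    # near sigma2 = 4.0
```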

Definition 2.5 (Unbiased functions). More generally, ĝ(X) is said to be unbiased for a function g(θ) if E[ĝ(X)] = g(θ). □

Note that even if θ̂ is an unbiased estimator of θ, g(θ̂) will generally not be an unbiased estimator of g(θ) unless g is linear or affine. This limits the importance of the notion of unbiasedness. It might be at least as important that an estimator is accurate in the sense that its distribution is highly concentrated around θ.

Is unbiasedness a good thing? Unbiasedness is important when combining estimates, as averages of unbiased estimators are unbiased (see the review exercises at the end of this chapter). For example, when combining standard deviations s₁, s₂, . . . , s_k with degrees of freedom df₁, . . . , df_k we always average their squares

s̄ = √[ (df₁s₁² + · · · + df_k s_k²) / (df₁ + · · · + df_k) ]

as the sᵢ² are unbiased estimators of the variance σ², whereas the sᵢ are not unbiased estimators of σ (see the review exercises). Be careful when averaging biased estimators! It may well be appropriate to make a bias-correction before averaging.
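The pooling rule above is straightforward to implement. This is a minimal sketch of the formula from the notes (the function name and example values are my own):

```python
import math

# Sketch of the pooling rule: combine standard deviations s_1, ..., s_k
# with degrees of freedom df_1, ..., df_k by averaging their squares
# (the unbiased variance estimates), weighted by df, then taking the root.
def pooled_sd(sds, dfs):
    num = sum(df * s ** 2 for s, df in zip(sds, dfs))
    return math.sqrt(num / sum(dfs))

# hypothetical example: s1 = 2.0 with df 9, s2 = 3.0 with df 19
sbar = pooled_sd([2.0, 3.0], [9, 19])
# (9*4 + 19*9) / (9 + 19) = 207/28, so sbar = sqrt(207/28)
```

Note that the pooling happens on the variance scale, consistent with the point that the sᵢ² (not the sᵢ) are the unbiased estimators.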

