Student Notes To Accompany MS4214: STATISTICAL INFERENCE

Problem 2.4. Consider $X_1, \dots, X_n$ where $X_i \sim N(\theta, \sigma^2)$ and $\sigma$ is known. Three estimators of $\theta$ are $\hat\theta_1 = \bar X = \frac{1}{n}\sum_{i=1}^n X_i$, $\hat\theta_2 = X_1$, and $\hat\theta_3 = (X_1 + \bar X)/2$. Pick one.

Solution. $E(\hat\theta_1) = \frac{1}{n}[E(X_1) + \cdots + E(X_n)] = \frac{1}{n}[\theta + \cdots + \theta] = \frac{1}{n}[n\theta] = \theta$ (unbiased). Next, $E(\hat\theta_2) = E(X_1) = \theta$ (unbiased). Finally,
$$E(\hat\theta_3) = \tfrac{1}{2}\,E\!\left[\tfrac{n+1}{n}X_1 + \tfrac{1}{n}(X_2 + \cdots + X_n)\right] = \tfrac{1}{2}\!\left[\tfrac{n+1}{n}E(X_1) + \tfrac{1}{n}\{E(X_2) + \cdots + E(X_n)\}\right] = \tfrac{1}{2}\!\left\{\tfrac{n+1}{n}\theta + \tfrac{n-1}{n}\theta\right\} = \theta$$
(unbiased). All three estimators are unbiased. Although desirable from a frequentist standpoint, unbiasedness is not a property that helps us choose between estimators. To do this we must examine some measure of loss such as the mean squared error. For a class of estimators that are unbiased, the mean squared error equals the estimation variance, since $\mathrm{MSE}(\hat\theta) = \mathrm{Var}(\hat\theta) + \mathrm{bias}(\hat\theta)^2$. Calculate
$$\mathrm{Var}(\hat\theta_1) = \tfrac{1}{n^2}[\mathrm{Var}(X_1) + \cdots + \mathrm{Var}(X_n)] = \tfrac{1}{n^2}[\sigma^2 + \cdots + \sigma^2] = \tfrac{1}{n^2}[n\sigma^2] = \tfrac{\sigma^2}{n}.$$
Trivially, $\mathrm{Var}(\hat\theta_2) = \mathrm{Var}(X_1) = \sigma^2$. Finally, since $\mathrm{Cov}(\bar X, X_1) = \sigma^2/n$,
$$\mathrm{Var}(\hat\theta_3) = \tfrac{1}{4}\left[\mathrm{Var}(X_1) + \mathrm{Var}(\bar X) + 2\,\mathrm{Cov}(\bar X, X_1)\right] = \tfrac{1}{4}\left[\sigma^2 + \tfrac{\sigma^2}{n} + \tfrac{2\sigma^2}{n}\right] = \tfrac{(n+3)\,\sigma^2}{4n}.$$
So $\bar X$ appears “best” in the sense that $\mathrm{Var}(\hat\theta)$ is smallest among these three unbiased estimators.
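
A quick way to sanity-check these calculations is a small simulation. The sketch below is not part of the original notes; the parameter values `theta`, `sigma`, `n`, and the use of NumPy are illustrative assumptions. It draws repeated samples and compares the empirical mean and variance of each estimator with the analytic values above.

```python
import numpy as np

# Illustrative Monte Carlo check of Problem 2.4 (parameter values are assumptions):
# simulate X_1, ..., X_n ~ N(theta, sigma^2) many times and compare the empirical
# means and variances of the three estimators with the analytic results.
rng = np.random.default_rng(0)
theta, sigma, n, reps = 2.0, 3.0, 10, 200_000

X = rng.normal(theta, sigma, size=(reps, n))
theta1 = X.mean(axis=1)                  # theta-hat_1 = sample mean
theta2 = X[:, 0]                         # theta-hat_2 = X_1
theta3 = (X[:, 0] + X.mean(axis=1)) / 2  # theta-hat_3 = (X_1 + X-bar)/2

for name, est, var_theory in [
    ("theta1", theta1, sigma**2 / n),
    ("theta2", theta2, sigma**2),
    ("theta3", theta3, (n + 3) * sigma**2 / (4 * n)),
]:
    print(f"{name}: mean={est.mean():.3f} (theta={theta}), "
          f"var={est.var():.3f} (theory={var_theory:.3f})")
```

For any $n > 1$ the reported variances should rank $\hat\theta_1 < \hat\theta_3 < \hat\theta_2$, matching the conclusion that $\bar X$ is preferred.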

Problem 2.5. Consider $X_1, \dots, X_n$ to be independent random variables with means $E(X_i) = \mu + \beta_i$ and variances $\mathrm{Var}(X_i) = \sigma_i^2$. Such a situation could arise when the $X_i$ are estimators of $\mu$ obtained from independent sources and $\beta_i$ is the bias of the estimator $X_i$. Consider pooling the estimators of $\mu$ into a common estimator using the linear combination $\hat\mu = w_1 X_1 + w_2 X_2 + \cdots + w_n X_n$.

(i) If the estimators are unbiased, show that $\hat\mu$ is unbiased if and only if $\sum_i w_i = 1$.

(ii) In the case when the estimators are unbiased, show that $\hat\mu$ has minimum variance when the weights are inversely proportional to the variances $\sigma_i^2$.

(iii) Show that the variance of $\hat\mu$ for optimal weights $w_i$ is $\mathrm{Var}(\hat\mu) = 1/\sum_i \sigma_i^{-2}$.

(iv) Consider the case when the estimators may be biased. Find the mean square error of the optimal linear combination obtained above, and compare its behaviour as $n \to \infty$ in the biased and unbiased cases, when $\sigma_i^2 = \sigma^2$, $i = 1, \dots, n$.

Solution. $E(\hat\mu) = E(w_1 X_1 + \cdots + w_n X_n) = \sum_i w_i E(X_i) = \sum_i w_i \mu = \mu \sum_i w_i$, so $\hat\mu$ is unbiased if and only if $\sum_i w_i = 1$. The variance of our estimator is $\mathrm{Var}(\hat\mu) = \sum_i w_i^2 \sigma_i^2$, which should be minimized subject to the constraint $\sum_i w_i = 1$. Differentiating the Lagrangian $L = \sum_i w_i^2 \sigma_i^2 - \lambda\,(\sum_i w_i - 1)$ with respect to $w_i$ and setting it equal to zero yields $2 w_i \sigma_i^2 = \lambda \Rightarrow w_i \propto \sigma_i^{-2}$, so that $w_i = \sigma_i^{-2} / \sum_j \sigma_j^{-2}$. Then, for the optimal weights, we get
$$\mathrm{Var}(\hat\mu) = \sum_i w_i^2 \sigma_i^2 = \frac{\sum_i \sigma_i^{-4}\sigma_i^2}{\left(\sum_i \sigma_i^{-2}\right)^2} = \frac{1}{\sum_i \sigma_i^{-2}}.$$
When $\sigma_i^2 = \sigma^2$ we have $\mathrm{Var}(\hat\mu) = \sigma^2/n$, which tends to zero as $n \to \infty$, whereas $\mathrm{bias}(\hat\mu) = \sum_i \beta_i/n = \bar\beta$ is the average bias, and $\mathrm{MSE}(\hat\mu) = \sigma^2/n + \bar\beta^2$. Therefore the bias tends to dominate the variance as $n$ gets larger, which is very unfortunate.
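
The same kind of check can be run for the inverse-variance weighting. The sketch below is again an illustrative addition rather than part of the notes: the vector of variances $\sigma_i^2$ is an assumed example and the biases $\beta_i$ are set to zero. It verifies numerically that the optimal weights give $\mathrm{Var}(\hat\mu) = 1/\sum_i \sigma_i^{-2}$ and beat equal weighting.

```python
import numpy as np

# Illustrative check of Problem 2.5 (assumed example variances, beta_i = 0):
# compare the inverse-variance-weighted combination with equal weighting and
# verify Var(mu-hat) = 1 / sum(sigma_i^-2) for the optimal weights.
rng = np.random.default_rng(1)
mu = 5.0
sigma2 = np.array([1.0, 4.0, 9.0, 0.25])      # assumed sigma_i^2 values
w_opt = (1 / sigma2) / np.sum(1 / sigma2)     # w_i = sigma_i^-2 / sum_j sigma_j^-2
w_eq = np.full(len(sigma2), 1 / len(sigma2))  # equal weights for comparison

reps = 500_000
X = rng.normal(mu, np.sqrt(sigma2), size=(reps, len(sigma2)))
mu_opt = X @ w_opt
mu_eq = X @ w_eq

print("optimal weights         :", w_opt)
print("Var (optimal, empirical):", mu_opt.var())
print("Var (optimal, theory)   :", 1 / np.sum(1 / sigma2))
print("Var (equal weights)     :", mu_eq.var())
```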

