8.2 The Bias and Mean Square Error of Point Estimators

DEFINITION 8.2  Let θ̂ be a point estimator for a parameter θ. Then θ̂ is an unbiased estimator if E(θ̂) = θ. If E(θ̂) ≠ θ, θ̂ is said to be biased.

DEFINITION 8.3  The bias of a point estimator θ̂ is given by B(θ̂) = E(θ̂) − θ.

Figure 8.3 shows two possible sampling distributions for unbiased point estimators of a target parameter θ. We would prefer that our estimator have the type of distribution indicated in Figure 8.3(b), because the smaller variance guarantees that in repeated sampling a higher fraction of values of θ̂₂ will be "close" to θ. Thus, in addition to preferring unbiasedness, we want the variance V(θ̂) of the distribution of the estimator to be as small as possible. Given two unbiased estimators of a parameter θ, and all other things being equal, we would select the estimator with the smaller variance.

Rather than using the bias and variance of a point estimator to characterize its goodness, we might employ E[(θ̂ − θ)²], the average of the square of the distance between the estimator and its target parameter.

DEFINITION 8.4  The mean square error of a point estimator θ̂ is

    MSE(θ̂) = E[(θ̂ − θ)²].

The mean square error of an estimator θ̂, MSE(θ̂), is a function of both its variance and its bias. If B(θ̂) denotes the bias of the estimator θ̂, it can be shown that

    MSE(θ̂) = V(θ̂) + [B(θ̂)]².

We will leave the proof of this result as Exercise 8.1.

In this section, we have defined properties of point estimators that are sometimes desirable. In particular, we often seek unbiased estimators with relatively small variances. In the next section, we consider some common and useful unbiased point estimators.

FIGURE 8.3  Sampling distributions for two unbiased estimators: (a) estimator with large variation, density f(θ̂₁); (b) estimator with small variation, density f(θ̂₂).
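The decomposition MSE(θ̂) = V(θ̂) + [B(θ̂)]² can also be seen numerically with a short simulation. The sketch below assumes Python with NumPy; the population (an exponential distribution with mean θ = 2) and the deliberately biased estimator (n/(n+1))·Ȳ are choices made only for this demonstration, not part of the text.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0          # true mean of the exponential population (demo assumption)
n, reps = 5, 200_000

# Draw many samples of size n and compute a deliberately biased estimator of
# theta: (n / (n + 1)) * Ybar, chosen only so the bias term is visible.
samples = rng.exponential(theta, size=(reps, n))
est = samples.mean(axis=1) * n / (n + 1)

bias = est.mean() - theta            # B(theta_hat) = E(theta_hat) - theta
var = est.var()                      # V(theta_hat), empirical variance
mse = ((est - theta) ** 2).mean()    # MSE(theta_hat) = E[(theta_hat - theta)^2]

# For the empirical distribution of est, the identity MSE = V + B^2 holds
# exactly (up to floating-point roundoff), mirroring the population result.
print(f"bias={bias:.4f}  var={var:.4f}  mse={mse:.4f}  var+bias^2={var + bias**2:.4f}")
```

Swapping in an unbiased estimator such as Ȳ itself makes the bias term vanish, so that MSE(θ̂) collapses to V(θ̂), which is the content of Exercise 8.4(a).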
Chapter 8 Estimation

Exercises

8.1  Using the identity

    (θ̂ − θ) = [θ̂ − E(θ̂)] + [E(θ̂) − θ] = [θ̂ − E(θ̂)] + B(θ̂),

show that

    MSE(θ̂) = E[(θ̂ − θ)²] = V(θ̂) + [B(θ̂)]².

8.2  a  If θ̂ is an unbiased estimator for θ, what is B(θ̂)?
     b  If B(θ̂) = 5, what is E(θ̂)?

8.3  Suppose that θ̂ is an estimator for a parameter θ and E(θ̂) = aθ + b for some nonzero constants a and b.
     a  In terms of a, b, and θ, what is B(θ̂)?
     b  Find a function of θ̂, say θ̂*, that is an unbiased estimator for θ.

8.4  Refer to Exercise 8.1.
     a  If θ̂ is an unbiased estimator for θ, how does MSE(θ̂) compare to V(θ̂)?
     b  If θ̂ is a biased estimator for θ, how does MSE(θ̂) compare to V(θ̂)?

8.5  Refer to Exercise 8.1 and consider the unbiased estimator θ̂* that you proposed in Exercise 8.3.
     a  Express MSE(θ̂*) as a function of V(θ̂).
     b  Give an example of a value of a for which MSE(θ̂*) < MSE(θ̂).
     c  Give an example of a value of a for which MSE(θ̂*) > MSE(θ̂).

8.6  Suppose that E(θ̂₁) = E(θ̂₂) = θ, V(θ̂₁) = σ₁², and V(θ̂₂) = σ₂². Consider the estimator

    θ̂₃ = aθ̂₁ + (1 − a)θ̂₂.

     a  Show that θ̂₃ is an unbiased estimator for θ.
     b  If θ̂₁ and θ̂₂ are independent, how should the constant a be chosen in order to minimize the variance of θ̂₃?

8.7  Consider the situation described in Exercise 8.6. How should the constant a be chosen to minimize the variance of θ̂₃ if θ̂₁ and θ̂₂ are not independent but are such that Cov(θ̂₁, θ̂₂) = c ≠ 0?

8.8  Suppose that Y₁, Y₂, Y₃ denote a random sample from an exponential distribution with density function

    f(y) = (1/θ)e^(−y/θ),  y > 0,
    f(y) = 0,              elsewhere.

Consider the following five estimators of θ:

    θ̂₁ = Y₁,    θ̂₂ = (Y₁ + Y₂)/2,    θ̂₃ = (Y₁ + 2Y₂)/3,    θ̂₄ = min(Y₁, Y₂, Y₃),    θ̂₅ = Ȳ.

     a  Which of these estimators are unbiased?
     b  Among the unbiased estimators, which has the smallest variance?
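Exercises like 8.8 can be sanity-checked empirically before working out the expectations by hand. The sketch below assumes Python with NumPy; the value θ = 3 and the random seed are arbitrary demo choices. It simulates many samples of size 3 from the exponential density above and prints the empirical mean and variance of each of the five estimators; it is a numerical illustration, not a substitute for the analytic derivation the exercise asks for.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 3.0              # true parameter, chosen arbitrarily for the demo
reps = 400_000

# Each row is one sample (Y1, Y2, Y3) from the exponential density f(y) = (1/theta) e^(-y/theta).
Y = rng.exponential(theta, size=(reps, 3))
Y1, Y2 = Y[:, 0], Y[:, 1]

estimators = {
    "theta1 = Y1":            Y1,
    "theta2 = (Y1+Y2)/2":     (Y1 + Y2) / 2,
    "theta3 = (Y1+2*Y2)/3":   (Y1 + 2 * Y2) / 3,
    "theta4 = min(Y1,Y2,Y3)": Y.min(axis=1),
    "theta5 = Ybar":          Y.mean(axis=1),
}

# An estimator looks unbiased when its empirical mean is close to theta;
# among those, the one with the smallest empirical variance is preferred.
for name, est in estimators.items():
    print(f"{name:25s} mean = {est.mean():.3f}   var = {est.var():.3f}")
```

The printed means show immediately which estimators warrant a formal unbiasedness proof, and the variances suggest which unbiased candidate to examine for the smallest variance in part (b).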