Supplementary Exercises

*8.133 Suppose that two independent random samples of $n_1$ and $n_2$ observations are selected from normal populations. Further, assume that the populations possess a common variance $\sigma^2$. Let
$$S_i^2 = \frac{\sum_{j=1}^{n_i} (Y_{ij} - \overline{Y}_i)^2}{n_i - 1}, \qquad i = 1, 2.$$
a Show that $S_p^2$, the pooled estimator of $\sigma^2$ (which follows), is unbiased:
$$S_p^2 = \frac{(n_1 - 1)S_1^2 + (n_2 - 1)S_2^2}{n_1 + n_2 - 2}.$$
b Find $V(S_p^2)$.

*8.134 The small-sample confidence interval for $\mu$, based on Student’s t (Section 8.8), possesses a random width—in contrast to the large-sample confidence interval (Section 8.6), where the width is not random if $\sigma^2$ is known. Find the expected value of the interval width in the small-sample case if $\sigma^2$ is unknown.

*8.135 A confidence interval is unbiased if the expected value of the interval midpoint is equal to the estimated parameter. The expected value of the midpoint of the large-sample confidence interval (Section 8.6) is equal to the estimated parameter, and the same is true for the small-sample confidence intervals for $\mu$ and $(\mu_1 - \mu_2)$ (Section 8.8). For example, the midpoint of the interval $\overline{y} \pm t s/\sqrt{n}$ is $\overline{y}$, and $E(\overline{Y}) = \mu$. Now consider the confidence interval for $\sigma^2$. Show that the expected value of the midpoint of this confidence interval is not equal to $\sigma^2$.

*8.136 The sample mean $\overline{Y}$ is a good point estimator of the population mean $\mu$. It can also be used to predict a future value of $Y$ independently selected from the population. Assume that you have a sample mean $\overline{Y}$ and variance $S^2$ based on a random sample of $n$ measurements from a normal population. Use Student’s t to form a pivotal quantity to find a prediction interval for some new value of $Y$—say, $Y_p$—to be observed in the future. [Hint: Start with the quantity $Y_p - \overline{Y}$.] Notice the terminology: Parameters are estimated; values of random variables are predicted.
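As a numerical companion to Exercise 8.133(a), the sketch below simulates many pairs of samples and averages the pooled estimator $S_p^2$; the average should sit close to the true $\sigma^2$, consistent with unbiasedness. This is an empirical check, not the proof the exercise asks for, and the sample sizes, means, and $\sigma$ are illustrative choices rather than values from the text.

```python
import numpy as np

# Empirical check of Exercise 8.133(a): the pooled estimator
#   S_p^2 = [(n1-1)S_1^2 + (n2-1)S_2^2] / (n1 + n2 - 2)
# should be unbiased for the common variance sigma^2.
# All parameter values below are illustrative assumptions.

rng = np.random.default_rng(seed=1)
n1, n2 = 8, 12
mu1, mu2, sigma = 5.0, -3.0, 2.0          # common variance sigma^2 = 4
n_reps = 200_000                          # number of simulated sample pairs

y1 = rng.normal(mu1, sigma, size=(n_reps, n1))
y2 = rng.normal(mu2, sigma, size=(n_reps, n2))

s1_sq = y1.var(axis=1, ddof=1)            # S_1^2, divisor n1 - 1
s2_sq = y2.var(axis=1, ddof=1)            # S_2^2, divisor n2 - 1
sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)

print(f"sigma^2 = {sigma**2:.4f},  average of S_p^2 = {sp_sq.mean():.4f}")
# The average of S_p^2 across replications lands near sigma^2 = 4,
# consistent with E(S_p^2) = sigma^2.
```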
CHAPTER 9
Properties of Point Estimators and Methods of Estimation

9.1 Introduction
9.2 Relative Efficiency
9.3 Consistency
9.4 Sufficiency
9.5 The Rao–Blackwell Theorem and Minimum-Variance Unbiased Estimation
9.6 The Method of Moments
9.7 The Method of Maximum Likelihood
9.8 Some Large-Sample Properties of Maximum-Likelihood Estimators (Optional)
9.9 Summary
References and Further Readings

9.1 Introduction

In Chapter 8, we presented some intuitive estimators for parameters often of interest in practical problems. An estimator $\hat{\theta}$ for a target parameter $\theta$ is a function of the random variables observed in a sample and therefore is itself a random variable. Consequently, an estimator has a probability distribution, the sampling distribution of the estimator. We noted in Section 8.2 that, if $E(\hat{\theta}) = \theta$, then the estimator has the (sometimes) desirable property of being unbiased.

In this chapter, we undertake a more formal and detailed examination of some of the mathematical properties of point estimators—particularly the notions of efficiency, consistency, and sufficiency. We present a result, the Rao–Blackwell theorem, that provides a link between sufficient statistics and unbiased estimators for parameters. Generally speaking, an unbiased estimator with small variance is or can be made to be
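To make the idea of a sampling distribution concrete before the formal development, here is a minimal simulation sketch: draw many samples, compute the estimator $\hat{\theta} = \overline{Y}$ on each, and inspect the resulting collection of estimates. The exponential population, sample size, and replication count are illustrative assumptions, not from the text.

```python
import numpy as np

# Illustration of a sampling distribution: each row of `samples` is one
# random sample, and applying the estimator to each row yields one draw
# from the estimator's sampling distribution.
# All parameter values below are illustrative assumptions.

rng = np.random.default_rng(seed=2)
theta = 3.0                        # target parameter: E(Y) for the exponential
n, n_reps = 25, 100_000            # sample size and number of replications

samples = rng.exponential(scale=theta, size=(n_reps, n))
theta_hat = samples.mean(axis=1)   # one realization of Ybar per sample

print(f"average of theta_hat = {theta_hat.mean():.4f}  (theta = {theta})")
print(f"variance of theta_hat = {theta_hat.var():.4f}  "
      f"(theta^2 / n = {theta**2 / n:.4f})")
# The average of theta_hat is near theta, consistent with unbiasedness of
# Ybar; its spread is exactly the sampling variability this chapter studies.
```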