
Exercises

4.14. Consider again the binomial$(n, \pi)$ family of distributions in Example 4.6. Let $P_{\alpha,\beta}$ be the beta$(\alpha, \beta)$ distribution.

a) Determine the gamma-minimax estimator of $\pi$ under squared-error loss within the class of priors $\Gamma = \{P_{\alpha,\beta} : 0 < \alpha, \beta\}$.

b) Determine the gamma-minimax estimator of $\pi$ under squared-error loss within the class of priors $\Gamma = \{P_{\alpha,\beta} : 0 < \alpha, \beta \le 1\}$.
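
As a numerical aid for this exercise: assuming the standard conjugate result that the Bayes estimator of $\pi$ under $P_{\alpha,\beta}$ and squared-error loss is $\delta_{\alpha,\beta}(x) = (x + \alpha)/(n + \alpha + \beta)$, the sketch below computes frequentist and Bayes risks by quadrature for hypothetical values of $n$ and the prior parameters. It illustrates the quantities involved rather than solving the exercise.

```python
import numpy as np
from scipy import integrate, stats

n = 10  # illustrative sample size for the binomial(n, pi) family

def risk(pi, a, b):
    """Frequentist risk of delta(x) = (x + a)/(n + a + b) under squared error:
    bias^2 + variance = [n*pi*(1-pi) + (a - (a+b)*pi)^2] / (n + a + b)^2."""
    m = n + a + b
    return (n * pi * (1 - pi) + (a - (a + b) * pi) ** 2) / m ** 2

def bayes_risk(a, b, a0, b0):
    """Bayes risk of the (a, b)-estimator when pi ~ beta(a0, b0)."""
    val, _ = integrate.quad(lambda p: risk(p, a, b) * stats.beta.pdf(p, a0, b0), 0, 1)
    return val

# With a = b = sqrt(n)/2 the risk is constant in pi, so the Bayes risk is the
# same under every beta prior -- a useful reference point for the exercise.
a_star = b_star = np.sqrt(n) / 2
for (a0, b0) in [(0.5, 0.5), (1, 1), (1, 5), (5, 1)]:
    print((a0, b0), bayes_risk(a_star, b_star, a0, b0))
```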

4.15. Consider a generalization of the absolute-error loss function, $|\theta - d|$:
$$L(\theta, d) = \begin{cases} c(d - \theta) & \text{for } d \ge \theta \\ (1 - c)(\theta - d) & \text{for } d < \theta \end{cases}$$
for $0 < c < 1$ (equation (3.87)). Given a random sample $X_1, \ldots, X_n$ on the random variable $X$, determine the Bayes estimator of $\theta = \mathrm{E}(X|\theta)$. (Assume whatever distributions are relevant.)
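
For this loss, differentiating the posterior expected loss in $d$ gives $c\,F(d) - (1-c)(1 - F(d)) = 0$, i.e. $F(d) = 1 - c$, where $F$ is the posterior cdf, so the minimizer is a posterior quantile. A minimal numerical check, using an arbitrary normal stand-in for the posterior (an assumption for illustration, not part of the exercise):

```python
import numpy as np
from scipy import optimize, stats

c = 0.3                                  # loss asymmetry parameter, 0 < c < 1
post = stats.norm(loc=2.0, scale=1.5)    # hypothetical stand-in posterior

def expected_loss(d, n_mc=200_000, seed=0):
    """Monte Carlo estimate of posterior expected loss E[L(theta, d)]."""
    theta = post.rvs(size=n_mc, random_state=seed)
    return np.mean(np.where(d >= theta, c * (d - theta), (1 - c) * (theta - d)))

d_num = optimize.minimize_scalar(expected_loss, bounds=(-5, 10), method="bounded").x
d_quant = post.ppf(1 - c)  # the (1 - c) posterior quantile
print(d_num, d_quant)      # the two agree to Monte Carlo accuracy
```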

4.16. Let $X \sim \mathrm{U}(0, \theta)$ and the prior density of $\Theta$ be $\theta^{-2} I_{[1,\infty)}(\theta)$. The posterior is therefore
$$f_{\Theta|x}(\theta|x) = \frac{2c^2}{\theta^3}\, I_{[c,\infty)}(\theta),$$
where $c = \max(1, x)$.

a) For squared-error loss, show that the Bayes estimator is the posterior mean. What is the posterior mean?

b) Consider a reparametrization: $\tilde\theta = \theta^2$, and let $\tilde\delta$ be the Bayes estimator of $\tilde\theta$. The prior density now is
$$\frac{1}{2\,\tilde\theta^{3/2}}\, I_{[1,\infty)}(\tilde\theta).$$
In order to preserve the connection, take the loss function to be $L(\tilde\theta, \tilde\delta) = (\sqrt{\tilde\delta} - \sqrt{\tilde\theta})^2$. What is the posterior mean? What is the Bayes estimator of $\tilde\theta$?

c) Compare the two estimators. Comment on the relevance of the loss functions and of the prior for the relationship between the two estimators.
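
A quick quadrature check of parts (a) and (b), for an arbitrary observed value $x$ (chosen here only for illustration): the posterior mean of $\theta$ integrates to $2c$, the integral defining the posterior mean of $\tilde\theta = \theta^2$ diverges, and minimizing the transformed loss in $u = \sqrt{\tilde\delta}$ reduces to squared error for $\theta$, suggesting $\tilde\delta = (2c)^2$.

```python
import numpy as np
from scipy import integrate

x = 1.7                 # arbitrary observed value (assumption for illustration)
c = max(1.0, x)

def post(theta):
    """Posterior density 2 c^2 / theta^3 on [c, inf)."""
    return 2 * c**2 / theta**3

# (a) posterior mean of theta: the integral evaluates to 2c
mean_theta, _ = integrate.quad(lambda t: t * post(t), c, np.inf)
print(mean_theta, 2 * c)

# (b) posterior mean of theta~ = theta^2: the integrand behaves like 2c^2/theta
# for large theta, so the integral diverges; quad would warn or blow up:
# integrate.quad(lambda t: t**2 * post(t), c, np.inf)

# Under L = (sqrt(delta~) - sqrt(theta~))^2, minimizing in u = sqrt(delta~)
# is squared-error estimation of theta, giving delta~ = (2c)^2:
print((2 * c) ** 2)
```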

4.17. Let $X_1$ depend on $\theta_1$ and $X_2$ be independent of $X_1$ and depend on $\theta_2$. Let $\theta_1$ and $\theta_2$ have independent prior distributions. Assume a squared-error loss. Let $\delta_1$ and $\delta_2$ be the Bayes estimators of $\theta_1$ and $\theta_2$ respectively.

a) Show that $\delta_1 - \delta_2$ is the Bayes estimator of $\theta_1 - \theta_2$ given $X = (X_1, X_2)$ and the setup described.

b) Now assume that $\theta_2 > 0$ (with probability 1), and let $\tilde\delta_2$ be the Bayes estimator of $1/\theta_2$ under the setup above. Show that $\delta_1 \tilde\delta_2$ is the Bayes estimator of $\theta_1/\theta_2$ given $X = (X_1, X_2)$.
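
The key fact is that, under the stated independence, the posterior also factors, so posterior expectations of products are products of posterior expectations. A Monte Carlo sketch with assumed conjugate models (normal-normal for $(\theta_1, X_1)$, gamma-exponential for $(\theta_2, X_2)$; these choices are illustrative, not from the exercise):

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = 1.2, 0.8          # hypothetical observations
a, b = 3.0, 2.0            # gamma(shape a, rate b) prior on theta2

# Assumed models (chosen for conjugacy, not specified by the exercise):
#   theta1 ~ N(0, 1),  X1 | theta1 ~ N(theta1, 1)   => posterior N(x1/2, 1/2)
#   theta2 ~ Gamma(a, b),  X2 | theta2 ~ Exp(theta2) => posterior Gamma(a+1, b+x2)
n = 10**6
theta1 = rng.normal(x1 / 2, np.sqrt(0.5), size=n)
theta2 = rng.gamma(a + 1, 1 / (b + x2), size=n)

# Posterior independence lets the expectation factor:
print(np.mean(theta1 / theta2))       # E[theta1/theta2 | x], Monte Carlo
print((x1 / 2) * ((b + x2) / a))      # E[theta1|x1] * E[1/theta2|x2], closed form
```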

4.18. In the problem of estimating $\pi$ given $X$ from a binomial$(10, \pi)$ with beta$(\alpha, \beta)$ prior and squared-error loss, as in Example 4.6, sketch the risk functions, as in Figure 3.1 on page 273, for the unbiased estimator, the minimax estimator, and the estimator resulting from Jeffreys's noninformative prior.
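
All three estimators have the linear form $\delta(x) = (x + a)/(n + a + b)$: $a = b = 0$ gives the unbiased estimator $X/n$, $a = b = \sqrt{n}/2$ the minimax estimator, and $a = b = 1/2$ the estimator from the Jeffreys beta$(1/2, 1/2)$ prior. Since the risk under squared-error loss is $R(\pi) = [n\pi(1-\pi) + (a - (a+b)\pi)^2]/(n + a + b)^2$, the requested sketch can be produced directly; a minimal matplotlib script:

```python
import numpy as np
import matplotlib.pyplot as plt

n = 10
pi = np.linspace(0, 1, 401)

def risk(pi, a, b):
    """Risk of delta(x) = (x + a)/(n + a + b) under squared-error loss."""
    m = n + a + b
    return (n * pi * (1 - pi) + (a - (a + b) * pi) ** 2) / m ** 2

estimators = {
    "unbiased X/n": (0.0, 0.0),
    "minimax (a = b = sqrt(n)/2)": (np.sqrt(n) / 2, np.sqrt(n) / 2),
    "Jeffreys prior beta(1/2, 1/2)": (0.5, 0.5),
}
for label, (a, b) in estimators.items():
    plt.plot(pi, risk(pi, a, b), label=label)
plt.xlabel(r"$\pi$"); plt.ylabel(r"$R(\pi, \delta)$"); plt.legend()
plt.show()
```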

Theory of Statistics © 2000–2013 James E. Gentle
