Theory of Statistics - George Mason University

Then we have

E(g(θ)T(X)) = E(T(X)E(g(θ)|X)) = E((T(X))²).

Likewise, conditioning on θ and using the unbiasedness of T, E(g(θ)T(X)) = E(g(θ)E(T(X)|θ)) = E((g(θ))²). Hence the Bayes risk is

rT(PΘ) = E((T(X) − g(θ))²)
       = E((T(X))²) + E((g(θ))²) − 2E(g(θ)T(X))
       = 0.

4.3 Bayes Rules

Hence, by the condition of equation (3.84) we have the following theorem.

Theorem 4.8
Suppose T is an unbiased estimator. Then T is not a Bayes estimator under squared-error loss.
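As a concrete illustration of Theorem 4.8 (not part of the text), the sketch below checks that the beta-prior Bayes estimator derived in Example 4.6 below, (α + X)/(α + β + n), is biased: its expectation given π is (α + nπ)/(α + β + n), which differs from π. The values n = 10, α = 2, β = 3, π = 1/2 are illustrative assumptions.

```python
from fractions import Fraction

def bayes_estimate(x, n, alpha, beta):
    # Bayes estimator under squared-error loss with a beta(alpha, beta) prior
    return Fraction(alpha + x, alpha + beta + n)

def expected_estimate(pi, n, alpha, beta):
    # E(delta(X) | pi), using E(X | pi) = n*pi for X ~ binomial(n, pi)
    return (alpha + n * pi) / (alpha + beta + n)

n, alpha, beta = 10, 2, 3          # illustrative values
pi = Fraction(1, 2)
print(expected_estimate(pi, n, alpha, beta))  # → 7/15, not 1/2: biased
```

The bias vanishes only as n grows, consistent with the theorem: a squared-error Bayes estimator with positive Bayes risk cannot be unbiased.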

Examples

There are two standard examples of Bayesian analyses that serve as models for Bayes estimation under squared-error loss. These examples, Examples 4.6 and 4.8, should be in the student's bag of easy pieces. In both of these examples, the prior is in a parametric conjugate family.

In this section, we also consider estimation using an improper prior (Example 4.9).

Example 4.6 (Continuation of Example 4.2) Estimation of the Binomial Parameter with Beta Prior and a Squared-Error Loss

We return to the problem in which we model the conditional distribution of an observable random variable X as a binomial(n, π) distribution, conditional on π, of course. Suppose we assume π comes from a beta(α, β) prior distribution; that is, we consider a random variable Π that has a beta distribution. We wish to estimate Π.

Let us choose the loss to be squared-error. In this case we know the risk is minimized by choosing the estimate as δ(x) = E(Π|x), where the expectation is taken wrt the distribution with density fΠ|x.
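The fact that the posterior mean minimizes the posterior expected squared-error loss can be seen numerically. The sketch below uses an assumed discrete posterior on a small grid (an illustrative stand-in for the continuous beta posterior of this example) and scans candidate estimates d, confirming that the minimizer of E((Π − d)²|x) coincides with the posterior mean.

```python
# Hypothetical discrete posterior over parameter values (illustrative only)
thetas = [0.1, 0.3, 0.5, 0.7, 0.9]
probs  = [0.05, 0.20, 0.40, 0.25, 0.10]  # sums to 1

def expected_sq_loss(d):
    # posterior expected squared error E((Pi - d)^2 | x) at the estimate d
    return sum(p * (t - d) ** 2 for t, p in zip(thetas, probs))

posterior_mean = sum(p * t for t, p in zip(thetas, probs))

# Scan a fine grid of candidate estimates; the minimizer matches the mean
candidates = [i / 1000 for i in range(1001)]
best = min(candidates, key=expected_sq_loss)
print(posterior_mean, best)  # both approximately 0.53
```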

We recognize the posterior conditional distribution as a beta(x + α, n − x + β), so we have the Bayes estimator for squared-error loss and beta prior

(α + X)/(α + β + n).    (4.45)
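As a numerical sanity check on the conjugate update (illustrative values n = 20, x = 7, α = 2, β = 3, not from the text), integrating the unnormalized posterior π^(x+α−1)(1 − π)^(n−x+β−1) on a grid reproduces the closed-form posterior mean (α + x)/(α + β + n) of equation (4.45).

```python
# Check that the posterior mean under a beta(alpha, beta) prior and a
# binomial(n, pi) likelihood matches (alpha + x)/(alpha + beta + n).
n, x, alpha, beta = 20, 7, 2.0, 3.0   # illustrative values

def unnorm_posterior(pi):
    # likelihood * prior, up to constants that cancel in the ratio below
    return pi ** (x + alpha - 1) * (1 - pi) ** (n - x + beta - 1)

# Midpoint rule on (0, 1)
m = 20000
grid = [(i + 0.5) / m for i in range(m)]
weights = [unnorm_posterior(p) for p in grid]
post_mean = sum(p * w for p, w in zip(grid, weights)) / sum(weights)

closed_form = (alpha + x) / (alpha + beta + n)  # equation (4.45)
print(post_mean, closed_form)  # both approximately 0.36
```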

We should study this estimator from various perspectives.

linear combination of expectations

First, we note that it is a weighted average of the mean of the prior and the standard UMVUE. (We discuss the UMVUE for this problem in Examples 5.1 and 5.5.)
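The weighted-average claim can be written out explicitly: (α + X)/(α + β + n) = w · α/(α + β) + (1 − w) · X/n, where w = (α + β)/(α + β + n), α/(α + β) is the prior mean, and X/n is the UMVUE. A sketch in exact rational arithmetic (illustrative values n = 10, α = 2, β = 3) verifies the identity:

```python
from fractions import Fraction

def bayes_estimate(x, n, alpha, beta):
    # equation (4.45)
    return Fraction(alpha + x, alpha + beta + n)

def weighted_form(x, n, alpha, beta):
    # weight w on the prior mean alpha/(alpha+beta); weight 1-w on x/n
    w = Fraction(alpha + beta, alpha + beta + n)
    return w * Fraction(alpha, alpha + beta) + (1 - w) * Fraction(x, n)

for x in range(11):
    assert bayes_estimate(x, 10, 2, 3) == weighted_form(x, 10, 2, 3)
print("decomposition verified")
```

As n grows, w shrinks toward 0, so the estimator moves from the prior mean toward the sample proportion.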

Theory of Statistics © 2000–2013 James E. Gentle
