Theory of Statistics - George Mason University

4 Bayesian Inference

    r(F_Θ, T) = ∫_Θ ∫_X L(θ, T(x)) dF_{X|θ}(x) dF_Θ(θ)        (4.39)

exists, we call a rule that minimizes it a generalized Bayes action.
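As a concrete check on the double integral in equation (4.39), a Monte Carlo sketch can approximate r(F_Θ, T) for a toy model. The normal-normal model, the estimator T(x) = x/2, and the sample size below are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Illustrative sketch of the Bayes risk r(F_Theta, T) in (4.39) by Monte
# Carlo: theta ~ N(0, 1) prior, X | theta ~ N(theta, 1), squared-error loss,
# and the rule T(x) = x/2 (which happens to be the posterior mean here).
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, size=200_000)   # draws from F_Theta
x = rng.normal(theta, 1.0)                   # draws from F_{X|theta}
risk = np.mean((theta - x / 2.0) ** 2)       # approximates r(F_Theta, T)
print(risk)
# For this model the exact Bayes risk is 1/2, so the estimate is near 0.5.
```

Integrating over both X and Θ, as the sampling above does, is exactly the structure of (4.39): an inner expectation over the data given θ, averaged over the prior.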

In Example 4.9 on page 355, we determine the Bayes estimator of the mean in a normal distribution when the prior is uniform over the real line, which obviously is an improper prior.

Limits of Bayes Actions

Another variation on Bayes actions is the limit of the Bayes action as some hyperparameters approach fixed limits. In this case, we have a sequence of prior PDFs, {F_Θ^{(k)} : k = 1, 2, ...}, and consider the sequence of Bayes actions, δ^{(k)}(X), that result from these priors. The limit lim_{k→∞} δ^{(k)}(X) is called a limiting Bayes action. The limiting prior, lim_{k→∞} F_Θ^{(k)}, may not be a PDF.

In Example 4.6 on page 352, we determine a limiting Bayes action, given in equation (4.47).
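The following sketch (not the text's Example 4.6; the normal-mean setup and all numbers are illustrative assumptions) shows a sequence of Bayes actions converging as the prior variance grows without bound, so the limiting prior is the improper uniform prior on the real line.

```python
# Hedged sketch of a limiting Bayes action: estimating a normal mean with
# known variance sigma2 from a sample of size n, under a N(0, tau2) prior.
# The Bayes action (posterior mean) shrinks xbar toward the prior mean 0:
#   delta_k(x) = (tau2 / (tau2 + sigma2/n)) * xbar.
# As tau2 -> infinity the prior flattens (and ceases to be a proper PDF),
# and delta_k(x) -> xbar, the limiting Bayes action.

def bayes_estimate(xbar, n, sigma2, tau2):
    """Posterior mean of a normal mean under a N(0, tau2) prior."""
    w = tau2 / (tau2 + sigma2 / n)
    return w * xbar

xbar, n, sigma2 = 2.5, 10, 4.0
for tau2 in [1.0, 10.0, 100.0, 1e6]:
    print(tau2, bayes_estimate(xbar, n, sigma2, tau2))
# The estimates increase toward xbar = 2.5 as tau2 grows.
```

Note that each δ^{(k)} is a proper Bayes action, while the limit coincides with the generalized Bayes action under the improper uniform prior mentioned above.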

4.2.5 Choosing Prior Distributions

It is important to choose a reasonable prior distribution that reflects prior information or beliefs about the phenomenon of interest.

Families of Prior Distributions

Various families of prior distributions can provide both flexibility in representing prior beliefs and computational simplicity. Conjugate priors, as in Examples 4.2 through 4.5, are often very easy to work with and to interpret. In many cases, a family of conjugate priors would seem to range over most reasonable possibilities, as shown in Figure 4.1 for priors of the binomial parameter.
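The computational simplicity of conjugacy can be made concrete for the binomial parameter: with a beta prior (the conjugate family shown in Figure 4.1), the posterior is again beta, so updating is mere parameter arithmetic. The function names and numbers below are illustrative, not from the text.

```python
# Sketch of conjugacy for the binomial parameter: a Beta(alpha, beta) prior
# combined with x successes in n Bernoulli trials yields a
# Beta(alpha + x, beta + n - x) posterior, so no integration is needed.

def update_beta_prior(alpha, beta, x, n):
    """Posterior Beta parameters after observing x successes in n trials."""
    return alpha + x, beta + n - x

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# A uniform Beta(1, 1) prior updated with 7 successes in 10 trials:
a, b = update_beta_prior(1.0, 1.0, 7, 10)    # gives Beta(8, 4)
print(beta_mean(a, b))                       # posterior mean 8/12
```

Varying (alpha, beta) over the positive quadrant traces out the range of prior shapes referred to above, from uniform to sharply peaked.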

Within a given family, it may be useful to consider priors that are optimal in some way. For example, in testing composite hypotheses we may seek a "worst case" for rejecting or accepting the hypotheses. This leads to consideration of a "least favorable prior distribution". We may also wish to use a prior that reflects an almost complete lack of prior information. This leads to consideration of "noninformative priors", such as the Jeffreys prior,

    f_Θ(θ) ∝ |I(θ)|^{1/2},        (4.40)

where I(θ) is the Fisher information, or priors with maximum entropy within a given class.

If the priors are restricted to a particular class of distributions, say Γ, we may seek an action whose worst risk with respect to any prior in Γ is minimized; that is, we may seek an action δ that yields

    inf_δ sup_{P∈Γ} r(P, δ).

Such an action is said to be Γ-minimax, usually written as gamma-minimax. Clearly, any minimax Bayes action is gamma-minimax Bayes with respect to the same loss function.
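A minimal numeric sketch of the gamma-minimax idea, assuming a finite parameter space, finitely many decision rules, and a finite class Γ of priors; all of the risk values and priors below are made up for illustration.

```python
import numpy as np

# Toy gamma-minimax computation over a finite class Gamma of priors.
# R[i, j] = frequentist risk of rule j under parameter value theta_i.
R = np.array([[0.2, 0.5, 0.45],
              [0.6, 0.1, 0.45]])     # 2 parameter values, 3 rules

# Each row is a candidate prior in Gamma (a distribution over the theta_i).
Gamma = np.array([[0.5, 0.5],
                  [0.9, 0.1]])

r = Gamma @ R                    # Bayes risks r(P, delta), shape (2, 3)
worst = r.max(axis=0)            # sup over P in Gamma, for each rule
best_rule = int(worst.argmin())  # gamma-minimax rule: minimize the worst risk
print(worst, best_rule)
```

Here rule 0 has the smallest worst-case Bayes risk over Γ, so it is the gamma-minimax choice; with Γ the set of all priors, this reduces to the ordinary minimax comparison.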

Theory of Statistics ©2000–2013 James E. Gentle
