Theory of Statistics - George Mason University

4 Bayesian Inference

4.2.2 Regularity Conditions for Bayesian Analyses

Many interesting properties of statistical procedures depend on common sets of assumptions, for example, the Fisher information regularity conditions. For some properties of a Bayesian procedure, or of the posterior distribution itself, there is a standard set of regularity conditions, often called the Walker regularity conditions, after Walker (1969), who assumed them in the proofs of various asymptotic properties of the posterior distribution. The regularity conditions apply to the parameter space $\Theta$, the prior PDF $f_\Theta(\theta)$, the conditional PDF of the observables $f_{X|\theta}(x)$, and to the support $\mathcal{X} = \{x : f_{X|\theta}(x) > 0\}$. All elements are real, and $\mu$ is Lebesgue measure.

The Walker regularity conditions are grouped into three sets:

A1. $\Theta$ is closed.
When a general family of distributions is assumed for the observables, this condition may allow for a distribution that is not in that family (for example, in a Bernoulli($\pi$) family the parameter is not allowed to take the values 0 and 1, yet the closure of $(0,1)$ includes them), but the convenience of this condition in certain situations more than pays for this anomaly, which occurs with 0 probability anyway.

A2. $\mathcal{X}$ does not depend on $\theta$.
This is also one of the FI regularity conditions.

A3. For $\theta_1 \neq \theta_2 \in \Theta$, $\mu(\{x : f_{X|\theta_1}(x) \neq f_{X|\theta_2}(x)\}) > 0$.
This is identifiability; see equation (1.24). Without it parametric inference does not make much sense.
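As a concrete check of A3, consider a hypothetical example not from the text: in the $N(\theta, 1)$ location family, two distinct means yield densities that agree at only a single point, so the set on which they differ has positive Lebesgue measure. A minimal numeric sketch:

```python
import math

def normal_pdf(x, theta):
    """Density of the N(theta, 1) distribution at x."""
    return math.exp(-0.5 * (x - theta) ** 2) / math.sqrt(2.0 * math.pi)

theta1, theta2 = 0.0, 1.0          # two distinct parameter values
step = 0.01
grid = [i * step for i in range(-500, 501)]   # grid on [-5, 5]

# Points where the two densities differ; condition A3 asks that this set
# have positive Lebesgue measure (approximated here by a Riemann sum).
differ = [x for x in grid
          if abs(normal_pdf(x, theta1) - normal_pdf(x, theta2)) > 1e-12]
measure_estimate = len(differ) * step
print(measure_estimate > 0)
```

The densities of $N(0,1)$ and $N(1,1)$ agree only at $x = 1/2$, so nearly the whole grid lands in the difference set.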

A4. Given $x \in \mathcal{X}$ and $\theta_1 \in \Theta$ and $\delta$ a sufficiently small real positive number, then $\forall\theta \ni \|\theta - \theta_1\| < \delta$,
$$|\log(f_{X|\theta}(x)) - \log(f_{X|\theta_1}(x))| < H_\delta(x, \theta_1),$$
where $H_\delta(x, \theta_1)$ is a measurable function of $x$ and $\theta_1$ such that
$$\lim_{\delta \to 0^+} H_\delta(x, \theta_1) = 0$$
and, $\forall\tilde\theta \in \Theta$,
$$\lim_{\delta \to 0^+} \int_{\mathcal{X}} H_\delta(x, \theta_1) f_{X|\tilde\theta}(x)\,\mathrm{d}x = 0.$$
This is a continuity condition.
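For the $N(\theta, 1)$ family one can exhibit such an $H_\delta$ explicitly (an illustrative choice, not from the text): the log-density difference equals $(\theta - \theta_1)(2x - \theta - \theta_1)/2$, which for $\|\theta - \theta_1\| < \delta$ is bounded by $H_\delta(x, \theta_1) = \delta|x - \theta_1| + \delta^2/2$. A numeric sketch that the integral in A4 vanishes as $\delta \to 0^+$:

```python
import math

def normal_pdf(x, theta):
    """Density of the N(theta, 1) distribution at x."""
    return math.exp(-0.5 * (x - theta) ** 2) / math.sqrt(2.0 * math.pi)

def H(delta, x, theta1):
    # Bound on |log f_{X|theta}(x) - log f_{X|theta1}(x)| over ||theta - theta1|| < delta,
    # valid for the N(theta, 1) location family (illustrative choice of H_delta).
    return delta * abs(x - theta1) + delta ** 2 / 2.0

theta1, theta_tilde, step = 0.0, 0.3, 0.01
integrals = []
for delta in (0.5, 0.1, 0.01):
    # Riemann approximation of  int_X H_delta(x, theta1) f_{X|theta_tilde}(x) dx
    total = sum(H(delta, i * step, theta1) * normal_pdf(i * step, theta_tilde) * step
                for i in range(-800, 801))
    integrals.append(total)
print(integrals)   # decreases toward 0 as delta -> 0+
```

The integral works out to roughly $\delta\,\mathrm{E}|X - \theta_1| + \delta^2/2$ with $X \sim N(\tilde\theta, 1)$, so it shrinks linearly in $\delta$.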

A5. If $\Theta$ is not bounded, then for any $\tilde\theta \in \Theta$ and a sufficiently large real number $\Delta$,
$$\|\theta\| > \Delta \implies \log(f_{X|\theta}(x)) - \log(f_{X|\tilde\theta}(x)) < K_\Delta(x, \tilde\theta),$$
where $K_\Delta(x, \tilde\theta)$ is a measurable function of $x$ and $\tilde\theta$ such that
$$\lim_{\Delta \to \infty} \int_{\mathcal{X}} K_\Delta(x, \tilde\theta) f_{X|\tilde\theta}(x)\,\mathrm{d}x < 0.$$
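Again for the $N(\theta, 1)$ family (an illustrative construction, not from the text), the choice $K_\Delta(x, \tilde\theta) = \big((x - \tilde\theta)^2 - \max(\Delta - |x|, 0)^2\big)/2$ dominates the log-density difference for all $\|\theta\| > \Delta$, and its integral against $f_{X|\tilde\theta}$ tends to $-\infty$, so it is eventually negative as A5 requires. A numeric sketch:

```python
import math

def normal_pdf(x, theta):
    """Density of the N(theta, 1) distribution at x."""
    return math.exp(-0.5 * (x - theta) ** 2) / math.sqrt(2.0 * math.pi)

def K(Delta, x, theta_tilde):
    # Upper bound on log f_{X|theta}(x) - log f_{X|theta_tilde}(x) over ||theta|| > Delta,
    # for the N(theta, 1) family: [(x - theta_tilde)^2 - max(Delta - |x|, 0)^2] / 2
    return ((x - theta_tilde) ** 2 - max(Delta - abs(x), 0.0) ** 2) / 2.0

theta_tilde, step = 0.0, 0.01
results = []
for Delta in (3.0, 5.0, 10.0):
    # Riemann approximation of  int_X K_Delta(x, theta_tilde) f_{X|theta_tilde}(x) dx
    total = sum(K(Delta, i * step, theta_tilde) * normal_pdf(i * step, theta_tilde) * step
                for i in range(-1500, 1501))
    results.append(total)
print(results)   # negative, and decreasing as Delta grows, as A5 requires
```

Intuitively, far out in the tail of $\Theta$ the likelihood at any fixed $x$ collapses, so the dominating function's mean under $f_{X|\tilde\theta}$ is driven below zero.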

Theory of Statistics ©2000–2013 James E. Gentle

