

“confidence” and “significance”. The formulation of the problem of statistical inference as in equations (3.3) and (3.4) avoids a direct confrontation of the question of how likely the decision is to be correct or incorrect. The conclusion in that kind of setup is the simple statement that the conditional distribution of Θ is QH, given that its marginal distribution is Q0 and the conditional distribution of the data is PΘ.

Examples

As an example of the approach indicated in equations (3.1) and (3.2), assume that a given sample y1, . . . , yn is taken in some prescribed manner from some member of a family of distributions

P = {N(µ, σ²) | µ ∈ IR, σ² ∈ IR+}.

Statistical inference in this situation may lead us to place the population giving rise to the observed sample in the family of distributions

PH = {N(µ, σ²) | µ ∈ [µ1, µ2], σ² ∈ IR+}

(think confidence intervals!). The process of identifying the subfamily may be associated with various auxiliary statements (such as level of “confidence”).
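As a concrete illustration (ours, not the text's), the interval [µ1, µ2] could be taken to be the usual t-based confidence interval for µ with σ² unknown; a minimal sketch in Python, with a hypothetical sample:

    # A minimal sketch (hypothetical data): identify the subfamily P_H by
    # computing the standard 95% t-interval [mu1, mu2] for mu, sigma^2 unknown.
    import numpy as np
    from scipy import stats

    y = np.array([4.8, 5.1, 5.6, 4.9, 5.3])   # hypothetical sample y1, ..., yn
    n = y.size
    ybar = y.mean()
    s = y.std(ddof=1)                          # sample standard deviation
    t = stats.t.ppf(0.975, df=n - 1)           # 97.5% quantile of t with n-1 df
    mu1 = ybar - t * s / np.sqrt(n)
    mu2 = ybar + t * s / np.sqrt(n)
    print(f"mu in [{mu1:.3f}, {mu2:.3f}] with 95% confidence")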

As another example, we assume that a given sample y1, . . . , yn is taken independently from some member of a family of distributions

P = {P | P ≪ λ},

where λ is the Lebesgue measure, and our inferential process may lead us to decide that the sample arose from the family

PH = {P | P ≪ λ and ∫_{−∞}^{t} dP = 0.5 ⇒ t ≥ 0}

(think hypothesis tests concerning the median!).
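As an illustration (ours, not the text's), deciding whether the sample is consistent with the subfamily PH amounts to a one-sided sign test of the median; a minimal sketch in Python, with hypothetical data:

    # A minimal sketch (hypothetical data): sign test of H0: median >= 0
    # against H1: median < 0, i.e. of membership in the family P_H above.
    import numpy as np
    from scipy import stats

    y = np.array([0.4, -0.2, 1.1, 0.3, -0.5, 0.8, 0.6])   # hypothetical sample
    k = int(np.sum(y > 0))            # number of positive observations
    n = y.size
    # With the median exactly 0, k ~ binomial(n, 0.5); few positives are
    # evidence against H0, so the p-value is P(K <= k).
    p_value = stats.binom.cdf(k, n, 0.5)
    print(f"sign-test p-value: {p_value:.3f}")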

Notice that “P” in the example above is used to denote both a population and the associated probability measure; this is a notational convention that we adopted in Chapter 1 and which we use throughout this book.

Statistical inference following the setup of equations (3.3) and (3.4) can be illustrated by observable data that follows a Bernoulli distribution with a parameter Π which, in turn, has a beta marginal distribution with parameters α and β. (That is, in equation (3.3), Θ is Π, PΠ is a Bernoulli distribution with parameter Π, and Q0 is a beta distribution with parameters α and β.) Given the single observation X = x, we can work out the conditional distribution of Π to be a beta with parameters x + α and 1 − x + β.
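This conditional distribution follows from a one-line application of Bayes's formula; the computation is standard, though not spelled out here: the conditional density of Π given X = x satisfies

q(π | x) ∝ π^x (1 − π)^{1−x} · π^{α−1} (1 − π)^{β−1} = π^{x+α−1} (1 − π)^{(1−x)+β−1},

which is the kernel of a beta density with parameters x + α and 1 − x + β.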

It is interesting to note the evolutionary nature of this latter example. Suppose that we began with Q0 (of equation (3.3)) being a beta with parameters x1 + α and 1 − x1 + β, where x1 is the observed value as described above.
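Carrying the computation one step further (an illustration of ours, not the text's own continuation): if a second observation x2 is then taken, the same computation yields a conditional distribution of Π that is beta with parameters x1 + x2 + α and 2 − x1 − x2 + β, so each posterior serves as the prior for the next observation. A minimal numerical sketch of this sequential updating, with hypothetical values:

    # A minimal sketch (hypothetical values): sequential beta-Bernoulli updating.
    # After each Bernoulli observation x, the beta(a, b) prior becomes the
    # beta(x + a, 1 - x + b) posterior, which then serves as the next prior.
    alpha, beta = 2.0, 3.0                # hypothetical prior parameters
    a, b = alpha, beta
    for x in [1, 0, 1, 1]:                # hypothetical Bernoulli observations
        a, b = x + a, 1 - x + b           # conjugate update at each step
    print(f"final posterior: beta({a}, {b})")   # beta(alpha + sum(x), beta + n - sum(x))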


