

In a sequential testing procedure, at any point, the question of whether or not to continue observing random variables depends on the inference that could be made at that point. If the hypothesis can be rejected, or if it is very unlikely that it can be rejected, the decision is made and the test is terminated; otherwise, the test continues. When we refer to a “sequential test”, this is the type of situation we have in mind.

7.6.1 Sequential Probability Ratio Tests

Let us again consider the test of a simple null hypothesis against a simple alternative. Thinking of the hypotheses in terms of a parameter θ that indexes these two PDFs by θ0 and θ1, for a sample of size n, we have the likelihoods associated with the two hypotheses as Ln(θ0; x) and Ln(θ1; x). The best test indicates that we should reject if

Ln(θ0; x) / Ln(θ1; x) ≤ k,

for some appropriately chosen k.

define and show optimality
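As a rough illustration of how the sequential version of this test proceeds, here is a minimal Python sketch of Wald's sequential probability ratio test in the Bernoulli case: observations accumulate one at a time, and sampling stops as soon as the likelihood ratio crosses one of two boundaries. The boundary constants A and B (Wald's approximations built from nominal error rates α and β) and all parameter values are illustrative assumptions, not taken from the text; note that rejecting when Ln(θ0; x)/Ln(θ1; x) ≤ k is the same as the reciprocal ratio reaching 1/k, and the sketch tracks the reciprocal.

```python
import math
import random

def sprt_bernoulli(theta0, theta1, alpha, beta, draw, max_n=10000):
    """Wald's SPRT for H0: theta = theta0 vs H1: theta = theta1.

    Samples one observation at a time via `draw` (returns 0 or 1) and
    tracks the log likelihood ratio log[Ln(theta1; x) / Ln(theta0; x)].
    Stops as soon as the ratio crosses a decision boundary.
    """
    # Wald's approximate boundaries: decide for H1 when the ratio reaches
    # A = (1 - beta)/alpha, for H0 when it falls to B = beta/(1 - alpha).
    log_A = math.log((1 - beta) / alpha)
    log_B = math.log(beta / (1 - alpha))
    log_ratio = 0.0
    for n in range(1, max_n + 1):
        x = draw()
        if x == 1:
            log_ratio += math.log(theta1 / theta0)
        else:
            log_ratio += math.log((1 - theta1) / (1 - theta0))
        if log_ratio >= log_A:
            return "reject H0", n
        if log_ratio <= log_B:
            return "accept H0", n
    return "no decision", max_n

# Hypothetical use: data generated with pi = 0.4, testing theta0 = 0.5
# against theta1 = 0.4.
random.seed(1)
print(sprt_bernoulli(0.5, 0.4, alpha=0.05, beta=0.10,
                     draw=lambda: 1 if random.random() < 0.4 else 0))
```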

7.6.2 Sequential Reliability Tests

7.7 The Likelihood Principle and Tests of Hypotheses

*** introduce; refer to likelihood in N-P

Tests of Hypotheses that Depend on the Data-Generating Process

***

Example 7.12 Sampling in a Bernoulli distribution; p-values and the likelihood principle revisited

In Examples 3.12 and 6.1, we have considered the family of Bernoulli distributions that is formed from the class of the probability measures Pπ({1}) = π and Pπ({0}) = 1 − π on the measurable space (Ω = {0,1}, F = 2^Ω). Suppose now we wish to test

H0 : π ≥ 0.5 versus H1 : π < 0.5.

As we indicated in Example 3.12, there are two ways we could set up an experiment to make inferences on π. One approach is to take a random sample of size n, X1, . . . , Xn, from the Bernoulli(π), and then use some function of that sample as an estimator. An obvious statistic to use is the number of 1’s in the sample, that is, T = ∑ Xi. To assess the performance of an estimator
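Anticipating the point of the example, the two experimental designs can give different p-values for the same observed data even though the likelihood functions for π are proportional, which is the tension with the likelihood principle that the example's title refers to. The sketch below assumes the second design mentioned in Example 3.12 is to sample until a fixed number of 1's has been observed (a negative binomial scheme); the observed counts, 3 ones in 12 trials, are an illustrative choice.

```python
from math import comb

def pvalue_binomial(t, n, pi0=0.5):
    """Design 1: n fixed in advance; T = number of 1's ~ Binomial(n, pi0).
    Small T is evidence against H0: pi >= 0.5, so p = P(T <= t | pi0)."""
    return sum(comb(n, k) * pi0**k * (1 - pi0)**(n - k) for k in range(t + 1))

def pvalue_negbinomial(t, n, pi0=0.5):
    """Design 2 (assumed): sample until t ones occur; N = total trials.
    N >= n iff the first n - 1 trials contain at most t - 1 ones,
    so p = P(N >= n | pi0)."""
    return sum(comb(n - 1, k) * pi0**k * (1 - pi0)**(n - 1 - k)
               for k in range(t))

# Same data, 3 ones in 12 trials, yet the two designs give different p-values:
print(pvalue_binomial(3, 12))     # 299/4096 ~ 0.0730
print(pvalue_negbinomial(3, 12))  # 67/2048  ~ 0.0327
```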

