

A test with a random component may be useful for establishing properties of tests or as counterexamples to some statement about a given test. (We often use randomized estimators in this way; see Example 5.25.)

Another use of tests with random components is in problems with countable sample spaces when a critical region within the sample space cannot be constructed so that the test has a specified size.

While randomized estimators rarely have application in practice, randomized test procedures can actually be used to increase the power of a conservative test. Use of a randomized test in this way would not make much sense in real-world data analysis, but if there are regulatory conditions to satisfy, it might be needed to achieve an exact size.
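As a concrete illustration (a minimal computational sketch, not part of the text; the function name and the use of scipy are choices made here), consider testing H0: p = p0 versus H1: p > p0 based on X ∼ binomial(n, p). Because X has a discrete distribution, a nonrandomized critical region of the form X > c generally cannot have size exactly α, but rejecting with some probability γ when X falls on the boundary point c achieves the exact size.

# Randomized test of H0: p = p0 vs H1: p > p0 for X ~ binomial(n, p).
# Reject if X > c; if X == c, reject with probability gamma.
from scipy.stats import binom

def randomized_binomial_test(n, p0, alpha):
    # smallest c with Pr(X > c) <= alpha under H0
    c = 0
    while binom.sf(c, n, p0) > alpha:
        c += 1
    # choose gamma so that Pr(X > c) + gamma * Pr(X = c) = alpha exactly
    gamma = (alpha - binom.sf(c, n, p0)) / binom.pmf(c, n, p0)
    return c, gamma

c, gamma = randomized_binomial_test(n=10, p0=0.5, alpha=0.05)
# c = 8 and gamma is about 0.89; the nonrandomized test that rejects only
# when X > 8 is conservative, with size about 0.011 rather than 0.05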

7.2 Optimal Tests

Testing statistical hypotheses involves making a decision whether or not to reject a null hypothesis. If the decision is not to reject we may possibly make a secondary decision as to whether or not to continue collecting data, as we discuss in Section 7.6. For the moment, we will ignore the sequential testing problem and address the more basic question of optimality in testing. We first need a measure or criterion.

A general approach to defining optimality is to define a loss function that increases in the “badness” of the statistical decision, and to formulate the risk as the expected value of that loss function within the context of the family of probability models being considered. Optimal procedures are those that minimize the risk. The decision-theoretic approach formalizes these concepts.

Decision-Theoretic Approach

The decision space in a testing problem is usually {0, 1}, which corresponds respectively to not rejecting and rejecting the hypothesis. (We may also allow for another alternative corresponding to “making no decision”.) As in the decision-theoretic setup, we seek to minimize the risk:

R(P, δ) = E(L(P, δ(X))).     (7.10)

In the case of the 0-1 loss function and the four possibilities, the risk is just the probability of either type of error.
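Spelled out under the 0-1 loss (a restatement added here, with P0 and P1 denoting the families of distributions corresponding to the null and alternative hypotheses):

R(P, δ) = Pr(δ(X) = 1) for P ∈ P0, the probability of a type I error;
R(P, δ) = Pr(δ(X) = 0) for P ∈ P1, the probability of a type II error.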

We want a test procedure that minimizes the risk, but rather than taking into account the total expected loss in the risk (7.10), we generally prefer to restrict the probability of a type I error as in inequality (7.6) and then, subject to that, minimize the probability of a type II error as in equation (7.7), which is equivalent to maximizing the power under the alternative hypothesis. This approach minimizes the risk subject to a restriction that the contribution to the risk from one type of loss is no greater than a specified amount.
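As a simple numerical illustration (a sketch under assumptions not in the text: a one-sided test about a normal mean with known variance), the size restriction fixes the critical value, and the power at a point in the alternative then follows directly.

# One-sided test of H0: mu = 0 vs H1: mu = mu1 > 0, with the sample mean
# distributed as N(mu, sigma^2/n) and sigma known.
from math import sqrt
from scipy.stats import norm

def power_one_sided_z(mu1, sigma, n, alpha):
    # the size restriction Pr_0(reject) = alpha fixes the critical value
    crit = norm.ppf(1 - alpha) * sigma / sqrt(n)
    # subject to that, the power at mu1 is Pr_1(sample mean > crit)
    return norm.sf(crit, loc=mu1, scale=sigma / sqrt(n))

print(power_one_sided_z(mu1=0.5, sigma=1.0, n=25, alpha=0.05))  # about 0.80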

The issue of a uniformly most powerful test is similar to the issue of a uniformly minimum risk test subject to a restriction.

