
7 Statistical Hypotheses and Confidence Sets

Definition 7.2 (Chernoff consistency)
The sequence of tests {δn} with power function β(δ(Xn), P) is Chernoff-consistent iff δn is consistent and, furthermore,

lim_{n→∞} β(δ(Xn), P) = 0   ∀ P ∈ P0.    (7.25)
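Chernoff consistency thus asks for more than consistency: in addition to the power tending to 1 at every alternative, the probability of rejection under every null distribution must tend to 0. As a purely illustrative numerical sketch (not from the text), consider the one-sided z-test of H0 : μ ≤ 0 versus H1 : μ > 0 for a normal mean with known unit variance, with the nominal level shrunk as αn = 1/√n (an arbitrary choice for the illustration):

```python
# Illustrative sketch (not from the text): Chernoff consistency of the one-sided
# z-test of H0: mu <= 0 vs H1: mu > 0, known variance 1, with a shrinking level.
import numpy as np
from scipy.stats import norm

mu1 = 0.5                                        # a fixed alternative in P1
for n in [10, 100, 1000, 10000]:
    alpha_n = 1.0 / np.sqrt(n)                   # alpha_n -> 0 as n -> infinity
    c = norm.ppf(1 - alpha_n)                    # critical value for sqrt(n)*Xbar
    size = 1 - norm.cdf(c)                       # rejection prob. at mu = 0 (boundary of P0)
    power = 1 - norm.cdf(c - np.sqrt(n) * mu1)   # rejection prob. at mu = mu1
    print(f"n={n:6d}  alpha_n={alpha_n:.4f}  size={size:.4f}  power={power:.4f}")
```

Because the critical value grows only on the order of √(log n), which is in o(√n), the size goes to 0 while the power at the fixed alternative goes to 1.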

7.3 Likelihood Ratio Tests, Wald Tests, and Score Tests

We see that the Neyman-Pearson Lemma leads directly to use of the ratio of the likelihoods in constructing tests. Now we want to generalize this approach and to study the properties of tests based on that ratio.

There are two types of tests that arise from likelihood ratio tests. These are called Wald tests and score tests. Score tests are also called Rao tests or Lagrange multiplier tests.

The Wald tests and score tests are asymptotically equivalent. They are consistent under the Le Cam regularity conditions, and they are Chernoff-consistent if αn is chosen so that, as n → ∞, αn → 0 and χ2_{r,αn} ∈ o(n).
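As a minimal sketch (not from the text) of how the three statistics compare, take an iid Bernoulli(p) sample with the point null H0 : p = p0; the formulas below are the standard ones, with the information evaluated at the MLE for the Wald statistic and at p0 for the score statistic, and the data x = 62 successes in n = 100 trials with p0 = 0.5 are arbitrary:

```python
# Minimal sketch (assumed setting): iid Bernoulli(p) data, point null H0: p = p0.
# The likelihood ratio, Wald, and score statistics are asymptotically equivalent,
# each approximately chi-squared with 1 degree of freedom under H0.
import numpy as np
from scipy.stats import chi2

def lr_wald_score(x, n, p0):
    phat = x / n                                          # MLE of p (assume 0 < phat < 1)
    lr = 2 * (x * np.log(phat / p0)                       # -2 log(likelihood ratio)
              + (n - x) * np.log((1 - phat) / (1 - p0)))
    wald = (phat - p0) ** 2 / (phat * (1 - phat) / n)     # information estimated at the MLE
    score = n * (phat - p0) ** 2 / (p0 * (1 - p0))        # score and information at p0
    return lr, wald, score

lr, wald, score = lr_wald_score(x=62, n=100, p0=0.5)
crit = chi2.ppf(0.95, df=1)                               # common critical value, alpha = 0.05
print(f"LR={lr:.3f}  Wald={wald:.3f}  score={score:.3f}  chi2(1) critical value={crit:.3f}")
```

All three statistics are compared with the same χ2 critical value with r = 1 degree of freedom, reflecting their common limiting distribution under H0.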

7.3.1 Likelihood Ratio Tests

Although, as we have emphasized, the likelihood is a function of the distribution rather than of the random variable, we want to study its properties under the distribution of the random variable. Using the idea of the ratio as in the test (7.12) of H0 : θ ∈ Θ0, but inverting that ratio and including both hypotheses in the denominator, we define the likelihood ratio as

λ(X) = sup_{θ∈Θ0} L(θ; X) / sup_{θ∈Θ} L(θ; X).    (7.26)

The test, similarly to (7.12), rejects H0 if λ(X) ≤ cα, where cα is some value in [0, 1]. Tests such as this are called likelihood ratio tests. (We should note that there are other definitions of a likelihood ratio; in particular, in TSH3 its denominator is the sup over the alternative hypothesis. If the alternative hypothesis does not specify Θ − Θ0, such a definition requires specification of both H0 and H1, whereas (7.26) requires specification only of H0. Also, the direction of the inequality depends on the ratio; it may be inverted: compare the ratios in (7.12) and (7.26).)

The likelihood ratio may not exist, but if it is well defined, clearly it is in the interval [0, 1]; values close to 1 provide evidence that the null hypothesis is true, and values close to 0 provide evidence that it is false.
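As a concrete instance of (7.26) (purely illustrative, not from the text), consider an N(μ, σ2) sample with σ known and the point null H0 : μ = μ0, so that the supremum over Θ0 is simply L(μ0; X); the simulated data, μ0 = 0, and the level 0.05 are assumptions of the sketch:

```python
# Minimal sketch of the likelihood ratio (7.26) for N(mu, sigma^2) data with sigma
# known and the point null H0: mu = mu0, so the sup over Theta_0 is L(mu0; X).  Here
# lambda(X) = exp(-n*(xbar - mu0)^2 / (2*sigma^2)), and -2*log(lambda(X)) is exactly
# chi-squared with 1 degree of freedom under H0.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
mu0, sigma, n = 0.0, 1.0, 50
x = rng.normal(loc=0.3, scale=sigma, size=n)             # data actually drawn under H1

xbar = x.mean()
lam = np.exp(-n * (xbar - mu0) ** 2 / (2 * sigma ** 2))  # lambda(X) in [0, 1]
c_alpha = np.exp(-chi2.ppf(0.95, df=1) / 2)              # reject H0 if lambda(X) <= c_alpha
print(f"lambda(X)={lam:.4f}  c_alpha={c_alpha:.4f}  reject H0: {lam <= c_alpha}")
```

Here cα comes from inverting the exact χ2 distribution of −2 log λ(X); in more general parametric problems the same recipe is typically used with the asymptotic χ2 approximation.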

If there is no cα such that

Pr(λ(X) ≤ cα | H0) = α,

