
3 Basic Statistical Theory

$$\frac{\partial l(\theta\,;\,x)}{\partial\theta} = 0. \tag{3.56}$$

This is called the likelihood equation. The derivative of the likelihood equated to zero, $\partial L(\theta\,;\,x)/\partial\theta = 0$, is also called the likelihood equation.

The roots of the likelihood equation are often of interest, even if these roots do not provide the maximum of the likelihood function.

Equation (3.56) is an estimating equation; that is, its solution, if it exists, is an estimator. Note that this estimator is not necessarily an MLE; it is a root of the likelihood equation, or RLE. We will see in Chapter 6 that RLEs have desirable asymptotic properties.

It is often useful to define an estimator as the solution of some estimating equation. We will see other examples of estimating equations in subsequent sections.
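For example, for $X_1,\ldots,X_n$ i.i.d. exponential with rate $\theta$, the log-likelihood is $l(\theta\,;\,x) = n\log\theta - \theta\sum_{i=1}^n x_i$, and the likelihood equation

$$\frac{n}{\theta} - \sum_{i=1}^n x_i = 0$$

has the single root $\hat{\theta} = 1/\bar{x}$, which here is also the MLE. In contrast, for a sample from a Cauchy distribution with location parameter $\theta$, the likelihood equation $\sum_{i=1}^n 2(x_i-\theta)/\bigl(1+(x_i-\theta)^2\bigr) = 0$ may have multiple roots, only one of which maximizes the likelihood.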

Score Function

The derivative of the log-likelihood on the left side of equation (3.56) plays an important role in statistical inference. It is called the score function, and often denoted as $s_n(\theta\,;\,x)$:

$$s_n(\theta\,;\,x) = \frac{\partial l(\theta\,;\,x)}{\partial\theta}. \tag{3.57}$$

(I should point out that I use the notation “$\nabla l(\theta\,;\,x)$” and the slightly more precise “$\partial l(\theta\,;\,x)/\partial\theta$” more-or-less synonymously.)
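For instance, for $n$ i.i.d. Bernoulli($\theta$) observations with $t = \sum_{i=1}^n x_i$ successes,

$$s_n(\theta\,;\,x) = \frac{\partial}{\partial\theta}\bigl(t\log\theta + (n-t)\log(1-\theta)\bigr) = \frac{t}{\theta} - \frac{n-t}{1-\theta},$$

and setting this score to zero gives the root $\hat{\theta} = t/n$.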

Finding an RLE is called scoring.

In statistical inference, it is important to know how the likelihood or log-likelihood would vary if $\theta$ were to change. For a likelihood function (and hence a log-likelihood function) that is differentiable with respect to the parameter, the score function represents this change.
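To illustrate scoring numerically, here is a minimal Python sketch (not from the text; it assumes a gamma model with unknown shape $\alpha$ and unit scale, and the function names are hypothetical). The score is $s_n(\alpha\,;\,x) = \sum_i \log x_i - n\,\psi(\alpha)$, where $\psi$ is the digamma function, and Newton's method is applied to find its root:

import numpy as np
from scipy.special import digamma, polygamma

def score(alpha, x):
    # s_n(alpha; x) = sum(log x_i) - n * psi(alpha) for Gamma(alpha, scale=1)
    return np.sum(np.log(x)) - len(x) * digamma(alpha)

def score_deriv(alpha, x):
    # Derivative of the score in alpha: -n * psi'(alpha) (trigamma)
    return -len(x) * polygamma(1, alpha)

def find_rle(x, alpha0=1.0, tol=1e-10, max_iter=100):
    # Newton's method on the score function ("scoring"):
    # alpha_{k+1} = alpha_k - s_n(alpha_k) / s_n'(alpha_k)
    alpha = alpha0
    for _ in range(max_iter):
        step = score(alpha, x) / score_deriv(alpha, x)
        alpha -= step
        if abs(step) < tol:
            break
    return alpha

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.5, scale=1.0, size=1000)
print(find_rle(x))  # close to the true shape 2.5

Because the log-likelihood in this model is concave in $\alpha$, the iteration converges to the single root of the likelihood equation, which here is also the MLE.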

Likelihood Ratio

When we consider two different distributions for a sample $x$, we have two different likelihoods, say $L_0$ and $L_1$. (Note the potential problems in interpreting the subscripts; here the subscripts refer to the two different distributions. For example, $L_0$ may refer to $L(\theta_0 \mid x)$ in a notation consistent with that used above.) In this case, it may be of interest to compare the two likelihoods in order to make an inference about the two possible distributions. A simple comparison, of course, is the ratio. The ratio

$$\frac{L(\theta_0\,;\,x)}{L(\theta_1\,;\,x)}, \tag{3.58}$$

or $L_0/L_1$ in the simpler notation, is called the likelihood ratio with respect to the two possible distributions.
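For example, for $n$ Bernoulli($\theta$) trials with $t$ successes, the likelihood ratio for $\theta_0$ versus $\theta_1$ is

$$\frac{L(\theta_0\,;\,x)}{L(\theta_1\,;\,x)} = \left(\frac{\theta_0}{\theta_1}\right)^{t} \left(\frac{1-\theta_0}{1-\theta_1}\right)^{n-t},$$

which exceeds 1 exactly when the observed data are more probable under $\theta_0$ than under $\theta_1$.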

