
6 Statistical Inference Based on Likelihood

For the Cauchy location family, differentiating the log-likelihood with respect to θ and setting the derivative to zero yields the likelihood equation

$$\sum_{i=1}^{n} \frac{2(x_i - \theta)}{1 + (x_i - \theta)^2} = 0.$$

This may have multiple roots (depending on the sample), and so the one yielding the maximum would be the MLE. Depending on the sample, however, multiple roots can yield the same value of the likelihood function.
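To see the multiple-root behavior numerically, here is a minimal Python sketch (not from the text; the sample, seed, and search grid are illustrative assumptions). It scans for sign changes of the score function and then compares the likelihood values at the candidate roots:

```python
import numpy as np

rng = np.random.default_rng(42)          # illustrative seed
x = rng.standard_cauchy(size=5) + 3.0    # small sample from Cauchy(theta = 3)

def log_likelihood(theta):
    # Cauchy(theta, 1) log-likelihood, dropping the constant -n*log(pi)
    d = x[:, None] - np.atleast_1d(theta)
    return -np.sum(np.log1p(d ** 2), axis=0)

def score(theta):
    # left-hand side of the likelihood equation above
    d = x[:, None] - np.atleast_1d(theta)
    return np.sum(2 * d / (1 + d ** 2), axis=0)

# Candidate roots are sign changes of the score on a fine grid;
# the MLE is the root with the largest log-likelihood.
grid = np.linspace(x.min() - 5.0, x.max() + 5.0, 20001)
s = score(grid)
idx = np.nonzero(np.sign(s[:-1]) != np.sign(s[1:]))[0]
roots = (grid[idx] + grid[idx + 1]) / 2
mle = roots[np.argmax(log_likelihood(roots))]
print("candidate roots:", roots, "MLE:", mle)
```

With heavy-tailed samples the score often changes sign more than once, so comparing likelihood values at all the roots, rather than stopping at the first root a solver finds, is essential.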

Another example in which the MLE is not unique is U(θ − 1/2, θ + 1/2).

Example 6.12 likelihood in a uniform family with fixed range
Given the sample x_1, ..., x_n, the likelihood function for U(θ − 1/2, θ + 1/2) is

$$\mathrm{I}_{[x_{(n)} - 1/2,\; x_{(1)} + 1/2]}(\theta).$$

It is maximized at any value between x_(n) − 1/2 and x_(1) + 1/2.
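A short Python sketch of this flat maximum (the seed, sample size, and true θ are arbitrary assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)   # illustrative seed
theta_true = 2.0                 # arbitrary true parameter
x = rng.uniform(theta_true - 0.5, theta_true + 0.5, size=10)

# The likelihood is the indicator of [x_(n) - 1/2, x_(1) + 1/2],
# so every theta in that interval is a maximum likelihood estimate.
lo, hi = x.max() - 0.5, x.min() + 0.5
print(f"MLE is any theta in [{lo:.4f}, {hi:.4f}] (width {hi - lo:.4f})")
```

The interval shrinks as n grows, since x_(1) and x_(n) approach θ − 1/2 and θ + 1/2, but for any finite sample the set of maximizers is a whole interval rather than a point.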

Nonexistence and Other Properties

We have already mentioned situations in which the likelihood approach does not seem to be the logical way to proceed, and we have seen that sometimes in nonparametric problems the MLE does not exist. This often happens when there are more "things to estimate" than there are observations. It can also happen in parametric problems. The maximum may fail to exist because the likelihood is unbounded from above; in that case the argmax does not exist, and so the maximum likelihood estimate does not exist.

Example 6.13 nonexistence of MLE
Consider the normal family of distributions with parameters µ and σ². Suppose we have one observation x. The log-likelihood is

$$l_L(\mu, \sigma^2\,;\, x) = -\frac{1}{2}\log(2\pi\sigma^2) - \frac{(x - \mu)^2}{2\sigma^2},$$

which is unbounded when µ = x and σ² approaches zero. It is therefore clear that no MLE of σ² exists. Strictly speaking, we could also say that no MLE of µ exists either; however, for any fixed value of σ² in the (open) parameter space, µ = x maximizes the likelihood, so it is reasonable to call x the MLE of µ.
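A quick numerical check of the unboundedness (a sketch; the single observation x = 1.3 is an arbitrary assumption): fixing µ = x eliminates the quadratic term, and letting σ² shrink drives the log-likelihood to +∞, so no maximizer exists over the open parameter space.

```python
import numpy as np

x = 1.3  # arbitrary single observation

def log_likelihood(mu, sigma2):
    # l_L(mu, sigma^2; x) from the example above
    return -0.5 * np.log(2 * np.pi * sigma2) - (x - mu) ** 2 / (2 * sigma2)

# With mu = x, only the -0.5*log(2*pi*sigma2) term remains, and it
# grows without bound as sigma2 -> 0+.
for sigma2 in (1e-2, 1e-4, 1e-8, 1e-16):
    print(f"sigma2 = {sigma2:.0e}  ->  lL = {log_likelihood(x, sigma2):.2f}")
```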

Recall from Example 5.14 that the degree of the variance functional is 2. In this case, some people prefer to say that the likelihood function does not exist; that is, they suggest that the definition of a likelihood function include boundedness.

