Theory of Statistics - George Mason University

460 6 Statistical Inference Based on Likelihood

The derivatives of the log-likelihood function relate directly to useful concepts in statistical inference. If it exists, the derivative of the log-likelihood is the relative rate of change, with respect to the parameter placeholder θ, of the probability density function at a fixed observation. If θ is a scalar, some positive function of the derivative, such as its square or its absolute value, is obviously a measure of the effect of change in the parameter, or of change in the estimate of the parameter. More generally, an outer product of the derivative with itself is a useful measure of the changes in the components of the parameter:

$$\nabla l_L(\theta\,;x)\,\big(\nabla l_L(\theta\,;x)\big)^{\mathrm{T}}.$$

Notice that the average of this quantity with respect to the probability density of the random variable X,

$$I(\theta_1\,;X) = \mathrm{E}_{\theta_1}\!\Big(\nabla l_L(\theta_1\,;X)\,\big(\nabla l_L(\theta_1\,;X)\big)^{\mathrm{T}}\Big), \tag{6.30}$$

is the information matrix for an observation on X about the parameter θ.
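As a numerical illustration of (6.30) (this example is mine, not the text's), the expectation defining the information for a single Bernoulli(p) observation can be computed as a two-term sum over the support and compared with the known closed form 1/(p(1−p)):

```python
import math

# Sketch (assumed example, not from the text): information for one
# Bernoulli(p) observation, where the score is d/dp log f(x; p).
def score(x, p):
    # derivative of x*log(p) + (1-x)*log(1-p) with respect to p
    return x / p - (1 - x) / (1 - p)

p = 0.3
# Average of the squared score with respect to the density of X
# (a two-term sum for a Bernoulli variable):
info = sum(f * score(x, p) ** 2 for x, f in [(1, p), (0, 1 - p)])

# Closed form for the Bernoulli information: 1 / (p (1 - p))
assert math.isclose(info, 1 / (p * (1 - p)))
```

The same two-term averaging extends to any finite-support family; for continuous families the sum becomes an integral.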

If θ is a scalar, the expected value of the square of the first derivative is the negative of the expected value of the second derivative,

$$\mathrm{E}_{\theta}\!\left(\left(\frac{\partial}{\partial\theta}\, l_L(\theta\,;X)\right)^{2}\right) = -\,\mathrm{E}_{\theta}\!\left(\frac{\partial^{2}}{\partial\theta^{2}}\, l_L(\theta\,;X)\right),$$

or, in general,

$$\mathrm{E}_{\theta}\!\Big(\nabla l_L(\theta\,;X)\,\big(\nabla l_L(\theta\,;X)\big)^{\mathrm{T}}\Big) = -\,\mathrm{E}_{\theta}\big(H_{l_L}(\theta\,;X)\big). \tag{6.31}$$
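The information equality in (6.31) can be checked numerically for a specific family (a sketch with an example of my choosing, not the text's): for a Poisson(λ) observation, both sides of the identity equal the Fisher information 1/λ, and the expectations can be approximated by summing over a truncated support.

```python
import math

# Assumed example: verify (6.31) for a Poisson(lam) observation by
# averaging over a truncated support (tail mass is negligible).
lam = 2.5

def pmf(x):
    return math.exp(-lam) * lam ** x / math.factorial(x)

# first and second derivatives of the log-likelihood in lam
d1 = lambda x: x / lam - 1
d2 = lambda x: -x / lam ** 2

support = range(100)
e_sq = sum(pmf(x) * d1(x) ** 2 for x in support)  # E[(score)^2]
e_h = sum(pmf(x) * d2(x) for x in support)        # E[second derivative]

# Both sides equal the Fisher information 1/lam
assert math.isclose(e_sq, -e_h, rel_tol=1e-9)
assert math.isclose(e_sq, 1 / lam, rel_tol=1e-9)
```

The truncation at 100 is far into the Poisson(2.5) tail, so the sums agree with the exact expectations to well within the tolerance used.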

MLEs in Exponential Families

If X has a distribution in the exponential class and we write its density in the natural or canonical form, the likelihood has the form

$$L(\eta\,;x) = \exp\big(\eta^{\mathrm{T}} T(x) - \zeta(\eta)\big)h(x). \tag{6.32}$$
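As a concrete instance of the canonical form (this example is mine, not the text's), the Poisson distribution fits (6.32) with η = log λ:

```latex
% Example: the Poisson distribution in canonical exponential-family form.
% With f(x;\lambda) = e^{-\lambda}\lambda^x / x!, set \eta = \log\lambda, so that
% T(x) = x, \quad \zeta(\eta) = e^{\eta}, \quad h(x) = 1/x!, and
L(\eta\,; x) = \exp\!\big(\eta\, x - e^{\eta}\big)\,\frac{1}{x!}.
```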

The log-likelihood equation is particularly simple:

$$T(x) - \frac{\partial\zeta(\eta)}{\partial\eta} = 0. \tag{6.33}$$

Newton's method for solving the likelihood equation is

$$\eta^{(k)} = \eta^{(k-1)} - \left(-\,\frac{\partial^{2}\zeta(\eta)}{\partial\eta(\partial\eta)^{\mathrm{T}}}\Bigg|_{\eta=\eta^{(k-1)}}\right)^{-1}\left(T(x) - \frac{\partial\zeta(\eta)}{\partial\eta}\Bigg|_{\eta=\eta^{(k-1)}}\right).$$
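The Newton iteration above can be sketched for the Poisson case in the canonical parameterization (an assumed example, not the text's): with T(x) = x and ζ(η) = e^η, the likelihood equation x − e^η = 0 has the exact root η = log x, so convergence is easy to check.

```python
import math

# Sketch of the Newton iteration for a single Poisson observation in
# canonical form: T(x) = x, zeta(eta) = exp(eta), so the likelihood
# equation is x - exp(eta) = 0 with exact root eta = log(x).
def newton_mle(t, eta=0.0, tol=1e-10, max_iter=100):
    for _ in range(max_iter):
        grad = t - math.exp(eta)   # T(x) - d zeta / d eta
        hess = -math.exp(eta)      # Hessian of the log-likelihood
        step = grad / hess         # Newton correction
        eta = eta - step
        if abs(step) < tol:
            break
    return eta

eta_hat = newton_mle(5.0)
assert math.isclose(eta_hat, math.log(5.0), rel_tol=1e-8)
```

For this family the Hessian of the log-likelihood is −e^η, which is everywhere negative, so the iteration is a proper ascent step toward the maximum.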

Theory of Statistics ©2000–2013 James E. Gentle
