Student Notes To Accompany MS4214: STATISTICAL INFERENCE

Reparameterization

Negative parameter estimates can be avoided by reparameterizing the profile log-likelihood in (2.11) using α = ln(a). Since a = e^α, we are guaranteed to obtain a > 0. The reparameterized profile log-likelihood becomes

\[
\ell_\alpha(\alpha) = m\alpha - m\ln\!\left(\frac{1}{m}\sum_{i=1}^{n} t_i^{e^{\alpha}}\right) + \left(e^{\alpha}-1\right)\sum_{i=1}^{m}\ln(t_i) - m,
\]

implying score function

\[
S(\alpha) = m - \frac{m e^{\alpha}\sum_{i=1}^{n} t_i^{e^{\alpha}}\ln(t_i)}{\sum_{i=1}^{n} t_i^{e^{\alpha}}} + e^{\alpha}\sum_{i=1}^{m}\ln(t_i),
\]

and information function

\[
I(\alpha) = \frac{m e^{\alpha}}{\sum_{i=1}^{n} t_i^{e^{\alpha}}}\left[\sum_{i=1}^{n} t_i^{e^{\alpha}}\ln(t_i) - e^{\alpha}\frac{\left(\sum_{i=1}^{n} t_i^{e^{\alpha}}\ln(t_i)\right)^{2}}{\sum_{i=1}^{n} t_i^{e^{\alpha}}} + e^{\alpha}\sum_{i=1}^{n} t_i^{e^{\alpha}}\left[\ln(t_i)\right]^{2}\right] - e^{\alpha}\sum_{i=1}^{m}\ln(t_i).
\]
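As a concreteness check, here is a minimal NumPy sketch of these three quantities. It assumes, as the sums above suggest, that t holds all n observation times with the m uncensored failure times listed first; the function name and data layout are illustrative choices, not part of the notes.

```python
import numpy as np

def profile_quantities(alpha, t, m):
    """Profile log-likelihood, score and information at alpha = ln(a).
    t: all n observation times, with the m uncensored times first."""
    a = np.exp(alpha)
    ta = t ** a                      # t_i^{e^alpha}
    lt = np.log(t)
    A = ta.sum()                     # sum_{i=1}^n t_i^{e^alpha}
    B = (ta * lt).sum()              # sum_{i=1}^n t_i^{e^alpha} ln(t_i)
    C = (ta * lt ** 2).sum()         # sum_{i=1}^n t_i^{e^alpha} [ln(t_i)]^2
    Lm = lt[:m].sum()                # sum_{i=1}^m ln(t_i)
    ell = m * alpha - m * np.log(A / m) + (a - 1.0) * Lm - m
    S = m - m * a * B / A + a * Lm
    I = (m * a / A) * (B - a * B ** 2 / A + a * C) - a * Lm
    return ell, S, I
```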

The estimates â = 1.924941 and b̂ = 78.12213 were obtained by applying this method to the Weibull data using starting values a0 = 0.07 and a0 = 76 in 103 and 105 iterations respectively. However, the starting values a0 = 0.06 and a0 = 77 failed due to division by computationally tiny (1.0e-300) values.
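A bare Newton-Raphson iteration on α then back-transforms to â = e^α̂. The driver below is a sketch only: the notes' Weibull data are not reproduced here, so it is exercised on a hypothetical synthetic sample (complete, so m = n), reusing the profile_quantities helper sketched above.

```python
def newton_alpha(a0, t, m, tol=1e-10, maxit=500):
    """Newton-Raphson on alpha = ln(a); returns a-hat = exp(alpha-hat)."""
    alpha = np.log(a0)
    for _ in range(maxit):
        _, S, I = profile_quantities(alpha, t, m)
        step = S / I                 # scalar I(alpha)^{-1} S(alpha)
        alpha += step
        if abs(step) < tol:
            break
    return np.exp(alpha)             # a = e^alpha > 0 by construction

# Hypothetical complete sample for illustration only (not the notes' data).
rng = np.random.default_rng(1)
t = rng.weibull(2.0, size=50) * 80.0
print(newton_alpha(1.0, t, m=len(t)))
```

Plausibly, the failures reported above arise because t_i^{e^α} under- or overflows for extreme α, driving the sums toward numerically zero (around 1.0e-300) or infinite values before the ratio in S(α) can be formed; a sketch like this one would fail in the same way for poor starting values.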

The step-halving scheme<br />

The Newton-Raphson method uses the (first and second) derivatives of ℓ(θ) to maximize the function ℓ(θ), but the function itself is not used in the algorithm. The log-likelihood can be incorporated into the Newton-Raphson method by modifying the updating step to

\[
\theta_{i+1} = \theta_i + \lambda_i I(\theta_i)^{-1} S(\theta_i), \tag{2.12}
\]

where the search direction has been multiplied by some λ_i ∈ (0, 1] chosen so that the inequality

\[
\ell\left(\theta_i + \lambda_i I(\theta_i)^{-1} S(\theta_i)\right) > \ell(\theta_i) \tag{2.13}
\]

holds. This requirement protects the algorithm from converging towards minima or saddle points. At each iteration the algorithm sets λ_i = 1, and if (2.13) does not hold, λ_i is replaced with λ_i/2. The process is repeated until the inequality in (2.13) is satisfied. At this point the parameter estimates are updated using (2.12) with the value of λ_i for which (2.13) holds. If the function ℓ(θ) is concave and unimodal, convergence is guaranteed. Finally, when Ī(θ) is used in place of I(θ), convergence to a (local) maximum is guaranteed even if ℓ(θ) is not concave.
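A minimal sketch of the scheme for a scalar parameter θ, assuming hypothetical callables loglik, score and info standing in for ℓ, S and I; the safeguard against λ shrinking indefinitely is an added practical detail, not part of the notes.

```python
def newton_step_halving(theta0, loglik, score, info, tol=1e-10, maxit=100):
    """Newton-Raphson with step halving, per (2.12)-(2.13)."""
    theta = theta0
    for _ in range(maxit):
        direction = score(theta) / info(theta)  # I(theta)^{-1} S(theta)
        lam = 1.0
        while loglik(theta + lam * direction) <= loglik(theta):
            lam /= 2.0               # (2.13) fails: halve lambda and retry
            if lam < 1e-12:          # safeguard: no uphill step found
                return theta
        theta += lam * direction     # accepted update (2.12)
        if abs(lam * direction) < tol:
            break
    return theta
```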

