

Taking the logarithm of $L_T$ gives the loglikelihood function as
$$\log L_T = \sum_{t=1}^{T} \log \frac{w_t}{w_{t-1}}. \qquad (4.24)$$

It can be shown that the ratio of the scaling factors can be obtained as $w_t / w_{t-1} = \phi_{t-1} B_t 1'$. An efficient algorithm for recursively evaluating the loglikelihood based on the scaled forward probabilities can be derived with starting equations
$$\log L_0 = 0, \qquad (4.25)$$
$$\phi_0 = \pi_s, \qquad (4.26)$$
and updating equations
$$v_t = \phi_{t-1} B_t, \qquad (4.27)$$
$$u_t = v_t 1', \qquad (4.28)$$
$$\log L_t = \log L_{t-1} + \log u_t, \qquad (4.29)$$
$$\phi_t = v_t u_t^{-1}. \qquad (4.30)$$
To obtain the loglikelihood function, the loop is repeated for $t = 1, \ldots, T$ (cf. Zucchini et al. 2006, § 4.3).
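A minimal R sketch of this recursion for a Gaussian HMM is given below; it assumes $B_t$ is formed as the transition matrix times the diagonal matrix of state-dependent normal densities at time $t$, and the names hmm_loglik, pi_s, Gamma, mu and sigma are illustrative, not taken from the thesis.

```r
# Sketch of the scaled forward recursion (4.25)-(4.30) for a Gaussian HMM.
# y: observed series; pi_s: initial distribution; Gamma: transition matrix;
# mu, sigma: state-dependent means and standard deviations (assumed setup).
hmm_loglik <- function(y, pi_s, Gamma, mu, sigma) {
  logL <- 0                        # log L_0 = 0                     (4.25)
  phi  <- pi_s                     # phi_0 = pi_s                    (4.26)
  for (t in seq_along(y)) {
    # B_t = Gamma %*% diag(state-dependent densities of y_t)
    B    <- Gamma %*% diag(dnorm(y[t], mean = mu, sd = sigma))
    v    <- phi %*% B              # v_t = phi_{t-1} B_t             (4.27)
    u    <- sum(v)                 # u_t = v_t 1'                    (4.28)
    logL <- logL + log(u)          # log L_t = log L_{t-1} + log u_t (4.29)
    phi  <- v / u                  # phi_t = v_t u_t^{-1}            (4.30)
  }
  logL
}
```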

Together with a statistical software package that offers functions for numerical maximization (or minimization), this algorithm can be employed to estimate the unknown parameters of a hidden Markov model by means of direct numerical maximization. In this thesis, the ML parameters of HMMs are estimated using the R functions nlm() and optim(), which can be employed to minimize a negative loglikelihood.
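A hedged sketch of such a direct numerical maximization for a two-state Gaussian HMM, building on the hmm_loglik() sketch above; the logit/log working parametrization, the use of the stationary distribution as initial distribution, and the data vector returns are illustrative assumptions, not the thesis's exact specification.

```r
# Negative loglikelihood in terms of unconstrained working parameters.
neg_loglik <- function(par, y) {
  Gamma <- diag(2)
  Gamma[1, 2] <- plogis(par[1]); Gamma[1, 1] <- 1 - Gamma[1, 2]
  Gamma[2, 1] <- plogis(par[2]); Gamma[2, 2] <- 1 - Gamma[2, 1]
  mu    <- par[3:4]
  sigma <- exp(par[5:6])                              # keep sd's positive
  pi_s  <- solve(t(diag(2) - Gamma + 1), rep(1, 2))   # stationary distribution
  -hmm_loglik(y, pi_s, Gamma, mu, sigma)
}

# Direct numerical minimization; 'returns' is a placeholder for the data.
fit <- nlm(neg_loglik, p = c(-2, -2, -0.5, 0.5, 0, 0), y = returns)
```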

4.3.3 Standard errors of ML estimates

In the context of HMMs it is not trivial to determine the accuracy of the ML parameters estimated by one of the methods described above. As the elements of the vector of estimated parameters can be shown to be correlated, it is not possible to compute the standard error of the overall model directly. One possibility to obtain distributional properties of the parameter estimates is to apply parametric bootstrap methods. As these procedures are beyond the scope of this thesis, in the following the quality of a hidden Markov model will be evaluated based on its forecast performance rather than using classical statistical fit statistics. For details on how parametric bootstrap procedures can be employed to analyze the distributional properties of parameter estimates, the reader is referred to Zucchini et al. (2006, § 4.4) and the references given therein.

4.4 Forecasting and decoding

Based on Zucchini et al. (2006, § 5), this section briefly summarizes how HMMs can be used to produce forecasts (§ 4.4.1), and how information with regard to the unobservable states of the Markov chain can be obtained (§ 4.4.2). The inference regarding the hidden states is commonly referred to as decoding.
