Applications of state space models in finance

4.4 Forecasting and decoding

4.4.1 Forecast distributions

Before a forecast distribution for HMMs can be derived, it is necessary to compute the corresponding conditional distribution. Let X^{(-u)} denote the sequence of random variables X_t for t = 1, \ldots, T with X_u excluded, and let x^{(-u)} denote the observations x_t for t = 1, \ldots, T with x_u excluded:

X^{(-u)} = \{X_1, \ldots, X_{u-1}, X_{u+1}, \ldots, X_T\},   (4.31)

x^{(-u)} = \{x_1, \ldots, x_{u-1}, x_{u+1}, \ldots, x_T\}.   (4.32)

Together with the likelihood function in (4.15) and the forward-backward probabilities defined in § 4.3.2.1, the conditional distribution of X_u for u = 1, \ldots, T, given all other observations, can be derived as

P\left(X_u = x \mid X^{(-u)} = x^{(-u)}\right) = \frac{\alpha_{u-1} \Gamma P(x) \beta_u'}{\alpha_{u-1} \Gamma \beta_u'}.   (4.33)

The numerator can be regarded as the likelihood of the observed series with x substituted for x_u. The denominator can be interpreted as the likelihood of the series with x_u treated as missing. Alternatively, the conditional probability can be interpreted as a mixture of state-dependent probability distributions.
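The conditional distribution (4.33) can be evaluated directly from the forward and backward vectors. The following is a minimal sketch in Python/NumPy for a hypothetical two-state HMM with a binary observation alphabet; the parameter values delta, Gamma and B are illustrative assumptions, not estimates from data.

```python
import numpy as np

# Hypothetical two-state HMM over the alphabet {0, 1}; values are assumptions.
delta = np.array([0.4, 0.6])                 # initial distribution
Gamma = np.array([[0.9, 0.1],
                  [0.2, 0.8]])               # transition probability matrix
B = np.array([[0.7, 0.3],                    # state-dependent probabilities:
              [0.1, 0.9]])                   # row i gives P(x | state i)

def P(x):
    # Diagonal matrix P(x) of state-dependent probabilities, as in (4.33)
    return np.diag(B[:, x])

def forward(obs):
    # Forward probabilities: alpha_t = delta P(x_1) Gamma P(x_2) ... Gamma P(x_t)
    a = delta @ P(obs[0])
    alphas = [a]
    for x in obs[1:]:
        a = a @ Gamma @ P(x)
        alphas.append(a)
    return alphas

def backward(obs):
    # Backward probabilities: beta_T = 1, beta_t = Gamma P(x_{t+1}) beta_{t+1}
    T = len(obs)
    betas = [None] * T
    b = np.ones(2)
    betas[-1] = b
    for t in range(T - 2, -1, -1):
        b = Gamma @ P(obs[t + 1]) @ b
        betas[t] = b
    return betas

def conditional(obs, u, x):
    # P(X_u = x | X^{(-u)} = x^{(-u)}) from (4.33), for 0 < u < T (0-indexed)
    alphas, betas = forward(obs), backward(obs)
    num = alphas[u - 1] @ Gamma @ P(x) @ betas[u]   # likelihood with x at u
    den = alphas[u - 1] @ Gamma @ betas[u]          # likelihood with x_u missing
    return num / den

obs = [0, 0, 1, 1, 0]
probs = [conditional(obs, 2, x) for x in (0, 1)]
print(probs)  # the two conditional probabilities sum to one
```

Because the diagonal matrices P(x) sum to the identity over all x, the numerators sum to the denominator, so the conditional probabilities sum to one, which provides a simple check of the implementation.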

The forecast distribution of a hidden Markov model, which is needed to compute the probability of an observation occurring l steps in the future, represents a special type of conditional distribution. It can be derived as

P\left(X_{T+l} = x \mid X^{(T)} = x^{(T)}\right) = \frac{\alpha_T \Gamma^l P(x) 1'}{\alpha_T 1'} = \phi_T \Gamma^l P(x) 1'.   (4.34)

The scaling algorithm introduced in § 4.3.2.2 can be employed to avoid problems of numerical underflow. For increasing l the forecast distribution converges to \pi_s P(x) 1', where \pi_s denotes the stationary distribution of the Markov chain.
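Equation (4.34) and the convergence to the stationary mixture can be sketched as follows, again assuming a hypothetical two-state HMM over a binary alphabet with illustrative parameter values (for this Gamma the stationary distribution can be computed by hand as (2/3, 1/3)).

```python
import numpy as np

# Illustrative two-state HMM over the alphabet {0, 1}; values are assumptions.
delta = np.array([0.4, 0.6])                 # initial distribution
Gamma = np.array([[0.9, 0.1],
                  [0.2, 0.8]])               # transition probability matrix
B = np.array([[0.7, 0.3],
              [0.1, 0.9]])                   # row i gives P(x | state i)

def P(x):
    return np.diag(B[:, x])                  # diagonal matrix P(x)

def forward_last(obs):
    # alpha_T = delta P(x_1) Gamma P(x_2) ... Gamma P(x_T)
    a = delta @ P(obs[0])
    for x in obs[1:]:
        a = a @ Gamma @ P(x)
    return a

def forecast(obs, l, x):
    # P(X_{T+l} = x | X^{(T)} = x^{(T)}) = phi_T Gamma^l P(x) 1', as in (4.34)
    alpha_T = forward_last(obs)
    phi_T = alpha_T / alpha_T.sum()          # normalised forward vector phi_T
    return phi_T @ np.linalg.matrix_power(Gamma, l) @ P(x) @ np.ones(2)

obs = [0, 0, 1, 1, 0]
one_step = forecast(obs, 1, 0) + forecast(obs, 1, 1)   # sums to one over x

# For large l the forecast approaches the stationary mixture pi_s P(x) 1'
pi_s = np.array([2 / 3, 1 / 3])              # stationary distribution of Gamma
limit0 = pi_s @ P(0) @ np.ones(2)
far0 = forecast(obs, 200, 0)                 # long-horizon forecast for x = 0
```

Normalising alpha_T to phi_T inside forecast is the same device as the scaling algorithm: it keeps the computation on a probability scale and so avoids underflow for long series.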

4.4.2 Decoding

Decoding refers to the determination of the most probable states of the Markov chain for an estimated hidden Markov model. One distinguishes between local decoding and global decoding. The former refers to the derivation of the most likely state at date t, which can also be used to generate state predictions, as will be shown in § 4.4.2.1. Global decoding looks for the most probable sequence of states (§ 4.4.2.3).

4.4.2.1 Local decoding

With the forward-backward probabilities introduced above, the joint probability of the observations X^{(T)} = x^{(T)} and the Markov chain S_t being in state i at date t can be shown to be equal to

\alpha_t(i) \beta_t(i) = P\left(X^{(T)} = x^{(T)}, S_t = i\right),   (4.35)
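Dividing (4.35) by the likelihood L_T = \alpha_T 1' gives the posterior state probabilities used for local decoding; the most probable state at date t is then the maximiser over i. A minimal sketch, once more assuming a hypothetical two-state HMM with illustrative parameters:

```python
import numpy as np

# Illustrative two-state HMM over the alphabet {0, 1}; values are assumptions.
delta = np.array([0.4, 0.6])                 # initial distribution
Gamma = np.array([[0.9, 0.1],
                  [0.2, 0.8]])               # transition probability matrix
B = np.array([[0.7, 0.3],
              [0.1, 0.9]])                   # row i gives P(x | state i)

def P(x):
    return np.diag(B[:, x])                  # diagonal matrix P(x)

def forward(obs):
    a = delta @ P(obs[0])
    alphas = [a]
    for x in obs[1:]:
        a = a @ Gamma @ P(x)
        alphas.append(a)
    return alphas

def backward(obs):
    T = len(obs)
    betas = [None] * T
    b = np.ones(2)
    betas[-1] = b
    for t in range(T - 2, -1, -1):
        b = Gamma @ P(obs[t + 1]) @ b
        betas[t] = b
    return betas

def local_decode(obs):
    # Posterior P(S_t = i | X^{(T)} = x^{(T)}) = alpha_t(i) beta_t(i) / L_T,
    # using the joint probability (4.35) and the likelihood L_T = alpha_T 1'
    alphas, betas = forward(obs), backward(obs)
    L = alphas[-1].sum()
    post = np.array([a * b / L for a, b in zip(alphas, betas)])
    return post, post.argmax(axis=1)         # most probable state at each t

obs = [0, 0, 1, 1, 0]
post, states = local_decode(obs)             # posterior matrix and decoded path
```

Each row of post sums to one, since (4.35) summed over the states i recovers the likelihood of the full series.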
