Applications of state space models in finance

4 Markov regime switching

based on X can be computed by a linear combination of the single components:

p(x) = Σ_{i=1}^{m} π_i p_i(x)   (discrete case),   (4.2)

f(x) = Σ_{i=1}^{m} π_i f_i(x)   (continuous case),   (4.3)
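As a numerical illustration of the continuous case (4.3), the following sketch evaluates the density of a normal mixture as the weighted sum of its component densities; the component parameters used in the example are made up for illustration:

```python
import numpy as np
from scipy.stats import norm

def mixture_pdf(x, weights, means, sds):
    """Density of a normal mixture, f(x) = sum_i pi_i f_i(x), as in (4.3)."""
    return sum(w * norm.pdf(x, m, s)
               for w, m, s in zip(weights, means, sds))

# Two-component example with illustrative parameters.
density = mixture_pdf(0.0, weights=[0.3, 0.7], means=[-1.0, 1.0], sds=[1.0, 2.0])
```

The discrete case (4.2) works identically with probability mass functions in place of densities.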

with the k-th moment of the mixture defined as a linear combination of the respective components' moments:

E(X^k) = Σ_{i=1}^{m} π_i E(X_i^k),   k = 1, 2, . . . .   (4.4)

As the variance of a mixture model cannot be computed simply as a linear combination of the respective components' variances, the standard equality

Var(X) = E(X²) − (E(X))²,   (4.5)

can be employed together with (4.4) to compute the variance of a mixture model (cf. Zucchini et al. 2006, Section 2).
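For a normal mixture, (4.4) and (4.5) combine into a short computation; the sketch below uses the standard fact that E(X_i²) = σ_i² + μ_i² for each normal component:

```python
import numpy as np

def mixture_mean_var(weights, means, sds):
    """Mean and variance of a normal mixture via (4.4) and (4.5):
    E(X^k) = sum_i pi_i E(X_i^k),  Var(X) = E(X^2) - (E(X))^2."""
    w, mu, s = (np.asarray(a, float) for a in (weights, means, sds))
    ex = np.sum(w * mu)                 # first moment of the mixture
    ex2 = np.sum(w * (s**2 + mu**2))    # second moment: E(X_i^2) = sigma_i^2 + mu_i^2
    return ex, ex2 - ex**2
```

For example, an equally weighted mixture of N(−1, 1) and N(1, 1) has mean 0 but variance 2, illustrating that the mixture variance exceeds the (unit) component variances.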

The parameters of a mixture distribution are usually estimated by ML. For the example of the continuous case, the likelihood of an m-component mixture model can generally be stated as

L(ψ_1, . . . , ψ_m; x_1, . . . , x_T) = Π_{j=1}^{T} Σ_{i=1}^{m} π_i f_i(x_j; ψ_i),   (4.6)

with observations x_1, . . . , x_T. The mixture weights π_1, . . . , π_m and the parameter vectors of the component distributions are included in ψ_1, . . . , ψ_m. As the ML estimates ψ̂_1, . . . , ψ̂_m represent a solution to a system of nonlinear equations, the likelihood can be maximized analytically only for rather trivial models. In most cases, the unknown parameters have to be estimated by direct numerical maximization or by the EM algorithm (cf. Section 3.4.3). Figure 4.2 shows the results of fitting mixtures of two (a) and three (b) normals to the weekly log-returns of the Technology sector. The components' weights have been obtained by a hidden Markov model, for which more details will be provided in the section below. Compared to the case where the returns are fitted by a single normal distribution, the fit is clearly improved by both mixture models.
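The EM approach mentioned above can be sketched for the simplest relevant case, a two-component normal mixture. This is a minimal illustration, not the fitting procedure used for Figure 4.2: the initialization is ad hoc and a real implementation would monitor the likelihood (4.6) for convergence:

```python
import numpy as np

def normal_pdf(x, m, s):
    """Standard normal density formula."""
    return np.exp(-0.5 * ((x - m) / s)**2) / (s * np.sqrt(2 * np.pi))

def em_two_normals(x, n_iter=200):
    """Minimal EM sketch for a two-component normal mixture (likelihood 4.6)."""
    x = np.asarray(x, float)
    pi1 = 0.5                                        # ad hoc starting values
    mu = np.array([x.mean() - x.std(), x.mean() + x.std()])
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: posterior probability that each observation came from component 1
        d1 = pi1 * normal_pdf(x, mu[0], sd[0])
        d2 = (1 - pi1) * normal_pdf(x, mu[1], sd[1])
        g = d1 / (d1 + d2)
        # M-step: weighted updates of mixture weight, means and standard deviations
        pi1 = g.mean()
        mu = np.array([np.average(x, weights=g), np.average(x, weights=1 - g)])
        sd = np.sqrt([np.average((x - mu[0])**2, weights=g),
                      np.average((x - mu[1])**2, weights=1 - g)])
    return pi1, mu, sd
```

Given well-separated components, the recovered means and weights typically come close to the true values after a few hundred iterations.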

4.1.2 Markov chains

Let {S_t : t = 1, . . . , T} be a stochastic process, i.e. a sequence of random variables that can assume an integer value in S = {1, . . . , m}, the state space. If for each date t the probability that S_{t+1} is equal to a particular value s_{t+1} ∈ S depends only on S_t, the current state of the process, such a process is called an m-state Markov process:

P(S_{t+1} = s_{t+1} | S_t = s_t, S_{t−1} = s_{t−1}, . . . , S_1 = s_1) = P(S_{t+1} = s_{t+1} | S_t = s_t).   (4.7)
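The defining property (4.7) means that simulating such a process only requires the current state and a matrix of one-step transition probabilities. A minimal sketch, with states coded 0, . . . , m−1 internally instead of 1, . . . , m:

```python
import numpy as np

def simulate_markov_chain(P, s0, T, rng):
    """Simulate an m-state Markov chain with transition matrix P, where
    P[i, j] = P(S_{t+1} = j | S_t = i), as in (4.7)."""
    P = np.asarray(P)
    states = np.empty(T, dtype=int)
    states[0] = s0
    for t in range(1, T):
        # By (4.7), the next state depends only on the current state.
        states[t] = rng.choice(P.shape[0], p=P[states[t - 1]])
    return states

# Two-state example with illustrative transition probabilities.
rng = np.random.default_rng(0)
path = simulate_markov_chain([[0.9, 0.1], [0.2, 0.8]], s0=0, T=100, rng=rng)
```

Each row of P must sum to one, since the process has to move to some state in S at every date.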
