
4 Markov regime switching

While γ_ij(l) describes the conditional probability of being in state j at time t + l, with the Markov chain starting from state i at time t, it does not provide the marginal probability of being in state i at a given time t. With the probability distribution of the initial state, π(1) := (π_1(1), . . . , π_m(1)) = (P(S_1 = 1), . . . , P(S_1 = m)), the probability function of the state at time t is given by

π(t) := (P(S_t = 1), . . . , P(S_t = m)) = π(1) Γ^(t−1).   (4.10)
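As a numerical illustration of (4.10), the following Python sketch propagates an assumed initial distribution through an assumed two-state transition matrix; both Γ and π(1) are hypothetical values chosen for the example, not parameters used elsewhere in this thesis.

    import numpy as np

    # Hypothetical two-state transition matrix Gamma and initial distribution pi(1);
    # the values are illustrative only.
    Gamma = np.array([[0.95, 0.05],
                      [0.10, 0.90]])
    pi_1 = np.array([1.0, 0.0])  # start in state 1 with probability one

    def marginal_distribution(t):
        """Equation (4.10): pi(t) = pi(1) Gamma^(t-1)."""
        return pi_1 @ np.linalg.matrix_power(Gamma, t - 1)

    print(marginal_distribution(1))    # [1. 0.]
    print(marginal_distribution(10))   # transient behaviour
    print(marginal_distribution(500))  # approximately the stationary vector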

For a homogeneous and irreducible Markov chain, π(t) can be shown to converge to a fixed vector π_s := (π_1, . . . , π_m) for large t. This unique vector of dimension m satisfies

π_s = π_s Γ,   (4.11)

and is called the vector of stationary transition probabilities. If π_s exists, the Markov chain is referred to as stationary if π_s describes the marginal distribution of the states for all t = 1, . . . , T.
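The stationary vector in (4.11) can be obtained numerically by solving π_s(I_m − Γ) = 0 together with the constraint that the entries of π_s sum to one. A minimal Python sketch, again using an assumed two-state transition matrix:

    import numpy as np

    Gamma = np.array([[0.95, 0.05],   # hypothetical transition matrix
                      [0.10, 0.90]])
    m = Gamma.shape[0]

    # Stack (I - Gamma') with a row of ones to impose the normalization,
    # then solve the over-determined system in the least-squares sense.
    A = np.vstack([np.eye(m) - Gamma.T, np.ones(m)])
    b = np.append(np.zeros(m), 1.0)
    pi_s, *_ = np.linalg.lstsq(A, b, rcond=None)

    print(pi_s)          # approximately [0.6667, 0.3333]
    print(pi_s @ Gamma)  # reproduces pi_s, i.e. equation (4.11)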

For more details on the well-developed theory of Markov chains and further references, see, for example, Hamilton (1994b, ch. 22).

4.2 The basic hidden Markov model<br />

In an independent mixture model, the sequences of observations and hidden states are, by definition, serially independent. Any serial correlation between the states therefore cannot be captured by an independent mixture, as it does not take this information into account. One method of modeling serially correlated time series is to use an unobserved Markov chain to select the parameters. This yields the hidden Markov model as a special dependent mixture model.

With {X_t} = {X_t, t = 1, . . . , T} denoting a sequence of observations and {S_t} = {S_t, t = 1, . . . , T} denoting a Markov chain on the state space {1, . . . , m}, their respective histories up to time t can be written as

X^(t) := {X_1, . . . , X_t},   (4.12)
S^(t) := {S_1, . . . , S_t}.   (4.13)

Consider a stochastic process that consists of two elements: (i) an underlying and unobserved parameter process {S_t} for which the Markov property (4.7) holds, and (ii) a state-dependent observation process {X_t}, which fulfills the conditional independence property

P(X_t = x_t | X^(t−1) = x^(t−1), S^(t) = s^(t)) = P(X_t = x_t | S_t = s_t).   (4.14)

This means that, given S_t, the observation X_t depends only on S_t and not on the history of states or observations. The pair of stochastic processes {X_t} and {S_t} is referred to as an m-state hidden Markov model; its basic structure is illustrated in Figure 4.3, which is taken from Bulla (2006, ch. 2).

Generally, different distributions are imposed for the various states. In this thesis, the Markov chain with transition probability matrix Γ will be assumed to be homogeneous.
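To make the two-layer structure concrete, the following sketch simulates an assumed two-state hidden Markov model in which the hidden chain selects between two Gaussian state-dependent distributions; all parameter values (Γ, the initial distribution δ, the means and standard deviations) are illustrative assumptions, not estimates from this thesis.

    import numpy as np

    rng = np.random.default_rng(seed=1)

    # Hypothetical two-state Gaussian HMM; all values are illustrative.
    Gamma = np.array([[0.95, 0.05],
                      [0.10, 0.90]])    # homogeneous transition matrix
    delta = np.array([0.5, 0.5])        # distribution of the initial state S_1
    mu    = np.array([0.001, -0.002])   # state-dependent means
    sigma = np.array([0.01, 0.03])      # state-dependent standard deviations

    T = 1000
    s = np.empty(T, dtype=int)
    x = np.empty(T)

    # Draw the hidden chain {S_t} and, given each state, draw X_t from the
    # corresponding distribution; X_t depends only on S_t, as in (4.14).
    s[0] = rng.choice(2, p=delta)
    x[0] = rng.normal(mu[s[0]], sigma[s[0]])
    for t in range(1, T):
        s[t] = rng.choice(2, p=Gamma[s[t - 1]])
        x[t] = rng.normal(mu[s[t]], sigma[s[t]])

The simulated series {x_t} is serially dependent only through the hidden states, which is exactly the dependence structure an independent mixture cannot reproduce.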
