

[Figure: an unobserved Markov chain S_1, S_2, S_3, ... emitting the observations X_1, X_2, X_3, ...]

Figure 4.3: Basic structure of a hidden Markov model (source: Bulla 2006, p. 17).

and irreducible and to have a unique stationary distribution π_s (cf. § 4.1.2). In practical applications, the underlying state process {S_t} is hidden and only the state-dependent sequence of observations {X_t} is known. Usually, the unobservable states permit a natural interpretation: for example, a hidden Markov model with two states could be employed for the weekly return series of the Technology sector considered in § 4.1.1, where one state would capture the observations in times of high volatility, while the other would correspond to calm, low-volatility markets.
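To make this two-state interpretation concrete, the following sketch (not part of the thesis) simulates weekly returns from such a two-state Gaussian hidden Markov model; the transition matrix, means, and volatilities are illustrative assumptions chosen only to mimic calm and turbulent regimes.

```python
# Minimal sketch (not from the thesis): simulating returns from a
# two-state Gaussian HMM. All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

Gamma = np.array([[0.95, 0.05],     # state 0: calm, persistent regime
                  [0.10, 0.90]])    # state 1: high-volatility regime
mu    = np.array([0.002, -0.001])   # state-dependent mean returns
sigma = np.array([0.010, 0.035])    # state-dependent volatilities

T = 500
states  = np.empty(T, dtype=int)
returns = np.empty(T)

states[0] = 0
returns[0] = rng.normal(mu[0], sigma[0])
for t in range(1, T):
    states[t] = rng.choice(2, p=Gamma[states[t - 1]])
    returns[t] = rng.normal(mu[states[t]], sigma[states[t]])

# Only {X_t} = returns would be observable in practice;
# {S_t} = states is the hidden chain.
```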

For more details on the basic hidden Markov model, including a derivation of its moments and marginal distributions, see, for example, MacDonald and Zucchini (1997, § 2).

4.3 Parameter estimation

The parameters of a hidden Markov model are generally estimated via the maximum likelihood principle. As the estimates represent a solution to a system of nonlinear equations, it is usually impossible to obtain them analytically. Instead, one has to resort either to direct numerical maximization procedures or to the EM algorithm, which was briefly mentioned in § 3.4.3. In this thesis, the focus lies on parameter estimation by means of direct numerical maximization. Provided that the initial values employed are sufficiently accurate, numerical maximization converges faster than the EM algorithm. Direct numerical maximization, which avoids deriving the formulae required by the EM algorithm, can be used both for non-stationary and for stationary Markov chains; cf. Bulla and Berzel (2006), who compare the competing estimation procedures in a simulation experiment. For details on the implementation of the EM algorithm for HMMs, the reader is referred to Baum et al. (1970) and MacDonald and Zucchini (1997, § 2).
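As an illustration of direct numerical maximization, the following sketch fits a two-state Gaussian hidden Markov model by numerically maximizing the log-likelihood, evaluated with a scaled forward recursion (the likelihood function itself is presented in § 4.3.1). It is a minimal example under assumed parameterizations, not the implementation used in the thesis: the working parameters are mapped to an unconstrained scale so that a standard optimizer can be applied, and the stationary distribution of the chain serves as the initial distribution.

```python
# Minimal sketch (assumed parameterization, not the thesis implementation):
# direct numerical maximization of a two-state Gaussian HMM log-likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def unpack(theta):
    """Map unconstrained working parameters to (Gamma, mu, sigma)."""
    g12 = 1.0 / (1.0 + np.exp(-theta[0]))  # off-diagonal transition probs
    g21 = 1.0 / (1.0 + np.exp(-theta[1]))
    Gamma = np.array([[1 - g12, g12],
                      [g21, 1 - g21]])
    mu = theta[2:4]
    sigma = np.exp(theta[4:6])             # keeps volatilities positive
    return Gamma, mu, sigma

def neg_log_likelihood(theta, x):
    Gamma, mu, sigma = unpack(theta)
    # stationary distribution of Gamma as initial distribution:
    # solve delta (I - Gamma + U) = 1' with U a matrix of ones
    delta = np.linalg.solve((np.eye(2) - Gamma + 1.0).T, np.ones(2))
    # scaled forward recursion to avoid numerical underflow
    alpha = delta * norm.pdf(x[0], mu, sigma)
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for xt in x[1:]:
        alpha = (alpha @ Gamma) * norm.pdf(xt, mu, sigma)
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return -ll

# Hypothetical starting values; as remarked above, convergence speed
# depends on their accuracy. `returns` is an observed series, e.g. the
# simulated one from the sketch earlier in this chapter.
theta0 = np.array([np.log(0.05 / 0.95), np.log(0.10 / 0.90),
                   0.002, -0.001, np.log(0.010), np.log(0.035)])
fit = minimize(neg_log_likelihood, theta0, args=(returns,), method="BFGS")
Gamma_hat, mu_hat, sigma_hat = unpack(fit.x)
```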

The likelihood function of the basic hidden Markov model is presented in § 4.3.1. Subsection 4.3.2 gives a brief overview of parameter estimation by numerical maximization of the likelihood.
