

Substituting $\tilde{\sigma}^2_*$ back into (3.60) gives the concentrated or profile loglikelihood function that has to be maximized with respect to $\psi_*$:

$$
\log L_c(y) = -\frac{T-d}{2}\log(2\pi) - \frac{1}{2}\sum_{t=d+1}^{T}\log f_t - \frac{T-d}{2}\left(\log\tilde{\sigma}^2_*(\psi_*) + 1\right). \tag{3.62}
$$

When (3.62) is maximized instead of (3.59), the dimension of the vector of parameters to be estimated is reduced by one. In addition to the resulting gains in computational efficiency, the results are likely to be more reliable. As no exact gradients are available for the concentrated loglikelihood, it has to be maximized numerically.
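For concreteness, the following sketch evaluates (3.62) from the output of one Kalman filter pass. It assumes that (3.60) defines $\tilde{\sigma}^2_*$ as the usual prediction-error variance estimator $\tilde{\sigma}^2_* = (T-d)^{-1}\sum_{t=d+1}^{T} v_t^2/f_t$ (a standard choice, not shown in this excerpt); the one-step prediction errors `v` and their variances `f` are taken as given.

```python
import numpy as np

def concentrated_loglik(v, f, d):
    """Concentrated loglikelihood (3.62), given one-step prediction errors
    v_t and prediction error variances f_t from a Kalman filter pass.

    Assumes (3.60) is the usual estimator
        sigma2 = (T - d)^{-1} * sum_{t=d+1}^{T} v_t^2 / f_t,
    which is standard but should be checked against the thesis.
    """
    T = len(v)
    v, f = np.asarray(v[d:]), np.asarray(f[d:])   # drop the first d diffuse terms
    sigma2 = np.sum(v**2 / f) / (T - d)           # concentrated-out scale estimate
    return (-(T - d) / 2 * np.log(2 * np.pi)
            - 0.5 * np.sum(np.log(f))
            - (T - d) / 2 * (np.log(sigma2) + 1))
```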

If the Kalman filter is initialized by employing the large-$\kappa$ approximation, rounding errors can lead to numerical problems. In this thesis, a feasible way to overcome this problem in the univariate case is to calculate starting values from the first observations, as explained by Harvey (1989, § 3.3.4). For a more general algorithm, see Ansley and Kohn (1985), who propose an analytical but complex and difficult-to-implement solution to the exact initialization problem. An alternative exact approach is discussed by de Jong (1991), who augments the Kalman filter to handle the diffuse initial conditions exactly. In more recent work, Koopman (1997) proposes an exact analytical approach based on a trivial initialization and develops a diffuse loglikelihood, $\log L_d(y|\psi)$. However, as it can be shown that the estimate $\hat{\psi}$, obtained by maximizing $\log L(y|\psi)$ for fixed $\kappa$, converges to the estimate obtained by maximizing the diffuse loglikelihood as $\kappa \to \infty$ (cf. Durbin and Koopman 2001, § 7.3.1), the approach taken here can be considered a valid procedure for univariate models.
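To illustrate the two initialization options, consider the univariate local level model $y_t = \mu_t + \varepsilon_t$, $\mu_{t+1} = \mu_t + \eta_t$ (a standard textbook example, not taken from this section). For this model, starting values computed from the first observation coincide with the exact diffuse solution $a_2 = y_1$, $P_2 = \sigma^2_\varepsilon + \sigma^2_\eta$ and avoid the rounding problems of a large but finite $\kappa$:

```python
def init_large_kappa(kappa=1e7):
    """Approximate diffuse prior a_1 = 0, P_1 = kappa. For very large kappa,
    filter updates of the form P = kappa - kappa**2 / (kappa + s2) lose
    precision, which is the rounding problem mentioned in the text."""
    return 0.0, kappa

def init_from_first_obs(y1, s2_eps, s2_eta):
    """Starting values from the first observation, in the spirit of
    Harvey (1989, sect. 3.3.4): for the local level model these equal the
    exact diffuse result a_2 = y_1, P_2 = s2_eps + s2_eta."""
    return y1, s2_eps + s2_eta
```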

3.4.2 Numerical maximization<br />

Given the sample observations, the loglikelihood can be maximized by means of direct numerical maximization. The basic idea behind this method is to find the value $\hat{\psi}$ at which the loglikelihood attains its maximum. An algorithm makes successive guesses for $\psi$ and compares the corresponding numerical values of the loglikelihood. To compute the ML estimates, the algorithm performs a series of steps, starting from an initial guess for the unknown parameters. In each step, it chooses the direction in which to search, determines how far to move in that direction, and computes and compares the value of the loglikelihood for the chosen values of $\psi$. If $\psi$ is sufficiently close to a maximum of the loglikelihood, the algorithm stops; otherwise the search continues. Numerical maximization methods generally differ with respect to the search direction, the step size and the stopping rule (cf. Davidson and MacKinnon 2004, § 6.4).
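As an illustration, a quasi-Newton search such as BFGS implements exactly this loop of direction choice, step length and stopping rule. The sketch below assumes a wrapper `concentrated_loglik_psi(psi, y)` (hypothetical, not defined in the thesis) that runs the Kalman filter for given parameters and returns (3.62); since `scipy.optimize.minimize` searches for a minimum, the loglikelihood is negated.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(psi, y):
    # concentrated_loglik_psi is a hypothetical wrapper that runs the
    # Kalman filter for the parameters psi and evaluates (3.62)
    return -concentrated_loglik_psi(psi, y)

psi0 = np.zeros(2)                       # initial guess for the unknown parameters
res = minimize(neg_loglik, psi0, args=(y,),
               method="BFGS",            # quasi-Newton: builds a Hessian estimate
               options={"gtol": 1e-6})   # stopping rule on the gradient norm
psi_hat = res.x                          # ML estimate of psi*
```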

Many numerical maximization techniques are based on Newton's method: for a given starting value of $\psi$, the direction of search is determined by the gradient or score vector, denoted $g(\psi)$; the step size is calculated from the Hessian matrix, denoted $H(\psi)$. Note that the loglikelihood has a unique maximum only if $H(\psi)$ is negative definite for all $\psi$. For a more detailed description of the different available algorithms, see, for example, Hamilton (1994b, § 5.7) or Judge et al. (1985, § B.2). In practical applications, it is often impossible or computationally expensive to calculate the gradient and the Hessian analytically. However, it is usually feasible to compute $g(\psi)$ numerically. For details on the calculation of the
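One common way to compute $g(\psi)$ numerically is a central-difference approximation (a generic sketch; the thesis does not specify which scheme it uses). Each component of the score costs two extra loglikelihood evaluations, i.e. two Kalman filter passes:

```python
import numpy as np

def numerical_score(loglik, psi, h=1e-5):
    """Central-difference approximation to the score vector g(psi);
    `loglik` is any callable returning the loglikelihood at psi."""
    g = np.zeros_like(psi, dtype=float)
    for i in range(len(psi)):
        e = np.zeros_like(psi, dtype=float)
        e[i] = h
        g[i] = (loglik(psi + e) - loglik(psi - e)) / (2 * h)
    return g

# Newton-type update, given a (numerical) Hessian H at the current psi:
# psi_new = psi - np.linalg.solve(H, numerical_score(loglik, psi))
```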
