Applications of state space models in finance

3 Linear Gaussian state space models and the Kalman filter

observations, y_1, …, y_T, which are assumed to be IID, the joint density is given by the product of the individual densities, denoted by p(·):

L(y, ψ) = p(y_1, …, y_T) = ∏_{t=1}^{T} p(y_t),   (3.52)

where L(y, ψ) is the joint probability density function of the T sets of observations.

When the joint density is evaluated at a given data set, L(y, ψ) is referred to as the likelihood function of the model. To avoid computational difficulties caused by the extremely small numbers that may result from the product above, it is generally simpler to work with the natural logarithm of the likelihood function:

log L(y, ψ) = ∑_{t=1}^{T} log p(y_t).   (3.53)

The likelihood function and its logarithm are often simply denoted as L(y) and log L(y), respectively. If the vector of parameters is identifiable, the ML estimate of the parameters (ψ̂) is found by maximizing the likelihood with respect to ψ.⁷ For a general introduction to the methodology of ML, see, for example, Greene (2003, ch. 17) or Davidson and MacKinnon (2004, ch. 10).
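The numerical motivation for working with (3.53) rather than (3.52) can be illustrated with a minimal sketch in Python; the distribution, parameter values, and sample size below are arbitrary and chosen only to trigger the underflow:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, T = 1.0, 2.0, 1000
y = rng.normal(mu, sigma, size=T)  # T IID Gaussian observations

# Gaussian density of each observation, evaluated at the true parameters.
dens = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

product = np.prod(dens)         # the product in (3.52): underflows to 0.0
log_lik = np.sum(np.log(dens))  # the sum in (3.53): numerically stable

print(product)   # 0.0 in double precision
print(log_lik)
```

Each density value is of order 0.1, so the product of a thousand of them is far below the smallest representable double (about 1e-308), while the log-likelihood remains a moderate negative number.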

3.4.1.1 Prediction error decomposition<br />

As the observations for time series models are not generally independent, the marginal densities in Equation (3.53) are replaced by conditional probability density functions: the distribution of y_t is conditioned on Y_{t−1}, the information set at time t − 1:

log L(y) = ∑_{t=1}^{T} log p(y_t | Y_{t−1}),   (3.54)

with Y_t = {y_1, …, y_t} and p(y_1 | Y_0) := p(y_1).

If the observation disturbances and the initial state vector in the general state space model (3.1)–(3.7) have a multivariate normal distribution, it can be shown that the conditional distribution of y_t itself is normal with conditional mean

E(y_t | Y_{t−1}) = Z_t a_t,   (3.55)

and conditional covariance

Var(y_t | Y_{t−1}) = F_t.   (3.56)

The variance of the one-step ahead forecast error, F_t, is defined as in (3.16). For Gaussian models, y_t is conditionally distributed as N(Z_t a_t, F_t) with conditional probability density function

p(y_t | Y_{t−1}) = (2π)^{−N/2} |F_t|^{−1/2} exp(−½ v_t′ F_t^{−1} v_t),   (3.57)

where N denotes the dimension of the observation vector.

7 Identifiability means that the estimation yields a unique determination of the parameter estimates for a given set of data. For more details on the identifiability of structural time series models, see Harvey (1989, 4.4).
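Putting (3.54)–(3.57) together, the Gaussian log-likelihood of a state space model can be accumulated inside the Kalman filter recursions from the one-step ahead forecast errors v_t and their variances F_t. The sketch below does this for a univariate local level model; the function name, the simulated data, the variance values, and the diffuse-style initialisation (a large P1) are all illustrative assumptions, not taken from the text:

```python
import numpy as np

def local_level_loglik(y, sigma2_eps, sigma2_eta, a1=0.0, P1=1e7):
    """Gaussian log-likelihood of a local level model via the
    prediction error decomposition, i.e. (3.54) with (3.57):

        y_t       = alpha_t + eps_t,    eps_t ~ N(0, sigma2_eps)
        alpha_t+1 = alpha_t + eta_t,    eta_t ~ N(0, sigma2_eta)

    a1 and P1 initialise the state; a diffuse prior is approximated
    here by a large P1 (illustrative choice)."""
    a, P = a1, P1
    log_lik = 0.0
    for yt in y:
        v = yt - a           # one-step ahead forecast error v_t (Z_t = 1)
        F = P + sigma2_eps   # its variance F_t, as in (3.16)
        log_lik += -0.5 * (np.log(2 * np.pi) + np.log(F) + v**2 / F)
        K = P / F            # Kalman gain
        a = a + K * v        # updated state prediction a_{t+1}
        P = P * (1 - K) + sigma2_eta  # its variance P_{t+1}
    return log_lik

# Simulated data: a random-walk state observed with noise.
rng = np.random.default_rng(2)
alpha = np.cumsum(rng.normal(0.0, 1.0, 100))
y = alpha + rng.normal(0.0, 0.5, 100)
print(local_level_loglik(y, sigma2_eps=0.25, sigma2_eta=1.0))
```

Maximizing this function over (sigma2_eps, sigma2_eta) with a numerical optimizer would yield the ML estimates discussed above; in practice the contribution of the diffuse initial terms is handled more carefully.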
