Diffusion Processes with Hidden States from ... - FU Berlin, FB MI

3 Theory

\[
\mu(t+\tau) = O_{t-1} - F_P^{(q)}\,\bigl(O_{t-1} - \mu_P^{(q)}\bigr)\,\tau, \qquad
\Sigma(t+\tau) = \tfrac{1}{2}\,\tau\, B^{(q)\,-1}, \qquad
A(t+\tau) = \frac{1}{\sqrt{\pi\,\Sigma(t+\tau)}}. \tag{3.75}
\]

The equations (3.75) are simplified by the assumption that we only want to know the evolution of the system within a short time interval (t, t + τ). This approach is called the "Euler discretization" [16].

3.10.2 The Likelihood

The considerations in the last subsection ultimately enable us to derive a joint likelihood function (see section 3.4 on page 21) for the model, given the complete data. To do so, we have to introduce a joint probability distribution for the observation and hidden state sequences. Consider a parameter tuple λ = {π, R, µ, Σ, A}, with the initial distribution π of the Markov chain, the transition matrix R between the states of this Markov chain, and additionally the three parameters µ, Σ, A obtained from (3.75). Now let the observed data (O_t) be given with constant time stepping τ, and let the probability of a transition from hidden state i to hidden state j be given by the (i, j)-th entry of the transition matrix T = exp(τR). With this new transition matrix T and the equations (3.75) we can replace the parameters Σ, A by the parameters F_P, B, and subsequently obtain a new parameter tuple λ = {π, T, µ, F_P, B} and the associated joint probability distribution of the observation sequence O and the hidden state sequence q:

\[
p(O, q \mid \lambda) = \pi(q_0)\,\rho(O_0 \mid q_0)\,\prod_{t=1}^{T} T(q_{t-1}, q_t)\,\rho(O_t \mid q_t, O_{t-1}). \tag{3.76}
\]

The joint likelihood function for the model λ, given the complete data, then is

\[
\mathcal{L}(\lambda) = \mathcal{L}(\lambda \mid O, q) = p(O, q \mid \lambda). \tag{3.77}
\]

3.10.3 Parameter Estimation

In sections 3.4, 3.6 and 3.7 we already introduced the methods we want to use in order to find the parameter tuple of a model that maximizes the likelihood function, given the complete data.
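Combining the discretized emission parameters of (3.75) with the product form (3.76), the complete-data log-likelihood (3.77) can be evaluated directly. The following sketch does this for scalar observations, assuming the Gaussian emission density has the form A exp(−(O_t − µ)²/Σ) with A = 1/√(πΣ); the function and parameter names (`log_joint_likelihood`, `mu_P`, `F_P`, `B`) and the treatment of ρ(O_0 | q_0) are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def log_joint_likelihood(O, q, pi, T, mu_P, F_P, B, tau):
    """Log of the complete-data likelihood p(O, q | lambda), cf. (3.76),
    for a scalar observation sequence O and hidden state sequence q.

    Assumption: rho(O_0 | q_0) is taken as a Gaussian around mu_P of the
    first state, with the same variance formula as the other emissions.
    """
    # Initial state and first observation
    S0 = 0.5 * tau / B[q[0]]
    logp = np.log(pi[q[0]])
    logp += -0.5 * np.log(np.pi * S0) - (O[0] - mu_P[q[0]]) ** 2 / S0
    for t in range(1, len(O)):
        i, j = q[t - 1], q[t]
        logp += np.log(T[i, j])                    # T(q_{t-1}, q_t)
        # Euler-discretized mean and variance, cf. (3.75), for state j
        mu = O[t - 1] - F_P[j] * (O[t - 1] - mu_P[j]) * tau
        Sigma = 0.5 * tau / B[j]
        # Gaussian A * exp(-(O_t - mu)^2 / Sigma), A = 1/sqrt(pi * Sigma)
        logp += -0.5 * np.log(np.pi * Sigma) - (O[t] - mu) ** 2 / Sigma
    return logp
```

Because everything is computed in log space, long observation sequences do not underflow, which matters once this quantity is maximized iteratively.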
For analytical and computational reasons it is convenient to maximize the log-likelihood function instead of the likelihood function itself. As we already pointed out in section 3.6 on page 24, the whole maximization will be carried out computationally within the framework of the Expectation-Maximization (EM) algorithm, whose key object is the expectation (3.30). The EM algorithm is then executed in two steps iteratively, until the log-likelihood is maximal for the final parameter tuple. The first step evaluates the expectation value Q based
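The two-step iteration described above can be made concrete with a structural sketch of one EM (Baum-Welch) iteration. To stay self-contained, the sketch uses a simplified HMM with discrete emissions rather than the diffusion emissions of this thesis; the E-step (scaled forward-backward recursions yielding expected state and transition counts) and the M-step (re-estimation from those counts) have the same shape in both settings. All names (`em_step`, `Emis`, `gamma`, `xi`) are ours.

```python
import numpy as np

def em_step(O, pi, T, Emis):
    """One EM (Baum-Welch) iteration for an HMM with discrete emissions.
    O: observation sequence (ints), pi: initial distribution,
    T: transition matrix, Emis[i, o]: emission probabilities.
    Returns updated (pi, T, Emis) and the current log-likelihood."""
    N, S = len(O), len(pi)
    # --- E-step: scaled forward recursion ---
    alpha = np.zeros((N, S)); c = np.zeros(N)
    alpha[0] = pi * Emis[:, O[0]]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, N):
        alpha[t] = (alpha[t - 1] @ T) * Emis[:, O[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    # --- E-step: scaled backward recursion ---
    beta = np.ones((N, S))
    for t in range(N - 2, -1, -1):
        beta[t] = (T @ (Emis[:, O[t + 1]] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta                      # P(q_t = i | O)
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((S, S))                     # summed P(q_t=i, q_{t+1}=j | O)
    for t in range(N - 1):
        xi += np.outer(alpha[t], Emis[:, O[t + 1]] * beta[t + 1]) * T / c[t + 1]
    # --- M-step: re-estimate parameters from expected counts ---
    pi_new = gamma[0]
    T_new = xi / xi.sum(axis=1, keepdims=True)
    Emis_new = np.zeros_like(Emis)
    for o in range(Emis.shape[1]):
        Emis_new[:, o] = gamma[np.array(O) == o].sum(axis=0)
    Emis_new /= Emis_new.sum(axis=1, keepdims=True)
    return pi_new, T_new, Emis_new, np.log(c).sum()
```

Iterating `em_step` yields a non-decreasing log-likelihood sequence, which is exactly the termination criterion mentioned in the text: stop once the log-likelihood no longer improves.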
