multivariate poisson hidden markov models for analysis of spatial ...

log-likelihood providing at least a rough indication of the relative goodness of fit. The same conclusion was reached from the preliminary loglinear analysis and the correlation matrix of the data.

In order to assess the quality of clustering, the entropy criterion was calculated based on the posterior probabilities (McLachlan et al., 2000; Brijs et al., 2004). A measure of the strength of clustering is implied by the maximum likelihood estimates in terms of the fitted posterior probabilities of component membership $w_{ij}$ for the finite mixture models and $u_j(i)$ for the hidden Markov models. For example, if the maximum of $w_{ij}$ or $u_j(i)$ is near 1 for most of the observations, this suggests that the clusters or states are well separated (McLachlan et al., 2000). The overall strength of clustering can be assessed by the average of the maximum of the component-posterior probabilities over the data. This average measure can be represented by the entropy criterion, given as

$$
I(k) = 1 - \frac{\sum_{i=1}^{n}\sum_{j=1}^{k} w_{ij}\ln(w_{ij})}{n\ln(1/k)}
$$

for the finite mixture model, with the convention that $w_{ij}\ln(w_{ij}) = 0$ if $w_{ij} = 0$, and

$$
I(m) = 1 - \frac{\sum_{i=1}^{n}\sum_{j=1}^{m} u_j(i)\ln(u_j(i))}{n\ln(1/m)}
$$

for the hidden Markov model, with the convention that $u_j(i)\ln(u_j(i)) = 0$ if $u_j(i) = 0$. In the case of perfect classification, for each $i$ there is exactly one $u_j(i) = 1$ and all the rest are 0 for the hidden Markov model; therefore, values near 1 indicate a good
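As a concrete illustration, the entropy criterion above can be computed directly from a matrix of fitted posterior probabilities. The sketch below is not from the source; the function name and the toy data are illustrative, but the formula implemented is exactly $I(k) = 1 - \sum_{i}\sum_{j} w_{ij}\ln(w_{ij}) / (n\ln(1/k))$ with the stated zero convention:

```python
import numpy as np

def entropy_criterion(post):
    """Entropy criterion I(k) for an (n, k) matrix of posterior
    membership probabilities, each row summing to 1.

    Values near 1 indicate well-separated clusters/states;
    values near 0 indicate heavily overlapping components.
    """
    n, k = post.shape
    # Convention: w * ln(w) = 0 when w = 0, so mask zeros before taking logs.
    safe = np.where(post > 0, post, 1.0)          # ln(1) = 0 for masked entries
    terms = post * np.log(safe)
    return 1.0 - terms.sum() / (n * np.log(1.0 / k))

# Perfect classification: each observation puts all mass on one component.
perfect = np.eye(3)[[0, 1, 2, 0, 1, 2]]           # 6 observations, 3 components
print(entropy_criterion(perfect))                 # 1.0

# Maximal uncertainty: uniform posteriors give a criterion of (about) 0.
uniform = np.full((6, 3), 1.0 / 3.0)
print(entropy_criterion(uniform))
```

The same function applies unchanged to the hidden Markov case by passing the matrix of state posteriors $u_j(i)$ with $k$ replaced by the number of states $m$.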

