41. Felsenstein, J. and Churchill, G.A. (1996). A hidden Markov Model approach to variation among sites in rate of evolution, Molecular Biology and Evolution, Vol. 13, pp. 93-104.

42. Fishman, G.S. (1996). Monte Carlo: Concepts, Algorithms, and Applications, New York: Springer-Verlag.

43. Fjørtoft, R., Boucher, J., Delignon, Y., Garello, R., Le Caillec, J., Maitre, H., Nicolas, J., Pieczynski, W., Sigelle, M. and Tupin, F. (2000). Unsupervised Classification of Radar Images Based on Hidden Markov Models and Generalized Mixture Estimation, Proc. EOS/SPIE Symposium on Remote Sensing, Conference on SAR Image Analysis, Modelling, and Techniques V, Vol. SPIE 4173, pp. 87-98.

44. Fjørtoft, R., Delignon, Y., Pieczynski, W., Sigelle, M. and Tupin, F. (2003). Unsupervised Classification of Radar Images Using Hidden Markov Chains and Hidden Markov Fields, IEEE Transactions on Geoscience and Remote Sensing, Vol. 41(3), pp. 675-686.

45. Fjørtoft, R., Pieczynski, W. and Delignon, Y. (2001). Generalised Mixture Estimation and Unsupervised Classification Based on Hidden Markov Chains and Hidden Markov Random Fields, Proc. Scandinavian Conference on Image Analysis (SCIA'01), pp. 733-740.

46. Forney, G.D., Jr. (1973). The Viterbi Algorithm, Proceedings of the IEEE, Vol. 61(3), pp. 263-278.

47. Fraley, C. and Raftery, A.E. (1998). How Many Clusters? Which Clustering Method? Answers via Model-Based Cluster Analysis, Technical Report No. 329, Department of Statistics, University of Washington, Box 354322, Seattle, WA, 98193-4322, USA.

48. Geman, S. and Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 6(6), pp. 721-741.

49. Hasselblad, V. (1969). Estimation of Finite Mixtures of Distributions from the Exponential Family, Journal of the American Statistical Association, Vol. 64(328), pp. 1459-1471.

50. Holgate, P. (1964). Estimation for the bivariate Poisson distribution, Biometrika, Vol. 51(1), pp. 241-245.

51. Johnson, N.L. and Kotz, S. (1969). Distributions in Statistics: Discrete Distributions, Boston: Houghton Mifflin.

52. Johnson, N.L., Kemp, A.W. and Kotz, S. (2005). Univariate Discrete Distributions, Hoboken, New Jersey: Wiley-Interscience.

53. Johnson, N.L., Kotz, S. and Balakrishnan, N. (1997). Discrete Multivariate Distributions, New York: John Wiley and Sons, pp. 1-30, 124-152.

54. Juang, B.H. and Rabiner, L.R. (1991). Hidden Markov Models for Speech Recognition, Technometrics, American Statistical Association and the American Society for Quality Control, Vol. 33(3), pp. 251-272.

55. Kano, K. and Kawamura, K. (1991). On Recurrence Relations for the Probability Function of Multivariate Generalized Poisson Distribution, Communications in Statistics: Theory and Methods, Vol. 20(1), pp. 165-178.