60 3 Algorithms and Techniques

Fig. 3.18. A second-order Markov chain of observations {x_n}

3.5.1 Regular Markov Models

For the stock prediction problem, one naive solution is to ignore the time aspect of the historic data and simply count the frequency of each instance (i.e., up or down) to predict the next possible instance. However, this approach does not consider the correlation between neighboring events. It is natural to think that the most recent event contributes more than the older ones when predicting a future instance. For example, if we know that the price of a stock often moves in a continuous trend, then given that today's price has gone up, we can conclude that tomorrow's price will most likely go up as well. This scenario leads us to consider Markov models, in which we assume that future predictions are independent of all but the most recent observations.

Specifically, if we assume that the n-th event is independent of all previous observations except the most recent one (i.e., the (n-1)-th event), then the distribution can be written as

p(x_n | x_1, ..., x_{n-1}) = p(x_n | x_{n-1}),  (3.6)

where p(x_i) denotes the probability that the event x_i happens. This is called a first-order Markov chain. If the distributions p(x_n | x_{n-1}) are assumed to be the same for all n, then the model is called a homogeneous Markov chain.

In a similar way, if we allow earlier events to contribute to the prediction, the model can be extended to a higher-order Markov chain. For example, we can obtain a second-order Markov chain by assuming that the n-th event is independent of all previous observations except the most recent two (i.e., the (n-2)-th and (n-1)-th events). It can be illustrated as in Figure 3.18.
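As a minimal illustration of a homogeneous first-order chain (not from the text), the transition probabilities p(x_n | x_{n-1}) for the up/down stock example can be estimated by counting adjacent pairs in the historic sequence; the data below is invented:

```python
from collections import Counter, defaultdict

def first_order_transitions(seq):
    """Estimate p(x_n | x_{n-1}) by counting adjacent pairs,
    assuming a homogeneous first-order Markov chain."""
    pair_counts = defaultdict(Counter)
    for prev, cur in zip(seq, seq[1:]):
        pair_counts[prev][cur] += 1
    # Normalize each row of counts into a conditional distribution.
    return {s: {t: c / sum(cnt.values()) for t, c in cnt.items()}
            for s, cnt in pair_counts.items()}

history = ["up", "up", "down", "up", "up", "up", "down", "down", "up"]
p = first_order_transitions(history)
# p["up"] is the distribution of tomorrow's movement given "up" today.
```

Prediction then amounts to looking up the row for today's observation and picking the most probable next state.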
The distribution is then

p(x_n | x_1, ..., x_{n-1}) = p(x_n | x_{n-2}, x_{n-1}).  (3.7)

We can extend the model to an M-th order Markov chain, in which the conditional distribution for an event depends on its previous M events. In this way, knowledge of the correlation between more neighboring events can be flexibly embedded into the model. On the other hand, this comes at the trade-off of a higher computational cost. For example, suppose the events are discrete with S states. Then the conditional distribution p(x_n | x_{n-M}, ..., x_{n-1}) in an M-th order Markov chain is used to generate the joint distribution, and the number of parameters becomes S^M(S-1): one distribution with S-1 free parameters for each of the S^M possible configurations of the M conditioning events. As M becomes larger, the number of parameters grows exponentially and the computational cost may become huge.

If the event variables lie in a continuous space, then linear-Gaussian conditional distributions are appropriate, or we can apply a neural network as a parametric model for the distribution. Refer to [211, 33, 182] for more detail. In the next section, we will introduce the hidden Markov model, or HMM [84], which deals with the situation in which the state variables are latent and discrete.
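The exponential growth in the parameter count can be made concrete with a short sketch (the function name is illustrative):

```python
def num_parameters(S, M):
    """Free parameters of the conditional distribution in an M-th order
    Markov chain over S discrete states: each of the S**M configurations
    of the M conditioning events needs an (S-1)-parameter distribution."""
    return S ** M * (S - 1)

# Even for binary events (S = 2), the count doubles with each added order.
for M in (1, 2, 3, 8):
    print(f"S=2, M={M}: {num_parameters(2, M)} parameters")
```

For M = 1 and S = 2 this recovers the two free parameters of a binary first-order chain, matching the S(S-1) count used for the transition matrix later in the chapter.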
3.5 Markov Models 61

Fig. 3.19. A hidden Markov model with observations {x_n} and latent states {y_n}

3.5.2 Hidden Markov Models

A hidden Markov model (HMM) is a statistical model in which the system being modeled is assumed to be a Markov process with unobserved states [2]. In a regular Markov model (as introduced in the last section), the state is directly visible to the observer, and thus the only parameters in the model are the state transition probabilities. In a hidden Markov model, however, the state is not directly visible (i.e., it is hidden), but the event (or observation, output), which depends on the state, is visible. In other words, the state behaves like a latent variable, and here the latent variable is discrete. A hidden Markov model can be illustrated as a graph, as shown in Figure 3.19.

The intuition behind an HMM is that, because every state has a probability distribution over the possible outputs, the output sequence generated by an HMM provides some information about the underlying state sequence.

Hidden Markov models are especially known for their applications in temporal pattern recognition such as speech [125], handwriting [187], natural language processing [179], musical score following, partial discharges, and bioinformatics [141].

Suppose that the hidden state behaves like a discrete multinomial variable y_n and that it controls how the corresponding event x_n is generated. The probability distribution of y_n depends on the previous latent variable y_{n-1}, so we have a conditional distribution p(y_n | y_{n-1}). We assume that the latent variables have S states, so that this conditional distribution corresponds to a set of numbers called transition probabilities.
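The generative view of an HMM can be sketched in a few lines: hidden states evolve by the transition probabilities while each observation is drawn from the current state's output distribution. The state names, output symbols, and all parameter values below are invented for illustration:

```python
import random

# Hypothetical two-state HMM for the stock example (toy numbers).
states = ["bull", "bear"]
outputs = ["up", "down"]
pi = [0.5, 0.5]                  # initial state distribution
T = [[0.8, 0.2], [0.3, 0.7]]     # T[j][k] = p(y_n = k | y_{n-1} = j)
E = [[0.9, 0.1], [0.2, 0.8]]     # E[k][m] = p(x_n = m | y_n = k)

def generate(n, seed=0):
    """Generate n (hidden state, observation) pairs from the HMM."""
    rng = random.Random(seed)
    seq = []
    y = rng.choices(range(len(states)), weights=pi)[0]
    for _ in range(n):
        x = rng.choices(range(len(outputs)), weights=E[y])[0]
        seq.append((states[y], outputs[x]))
        y = rng.choices(range(len(states)), weights=T[y])[0]
    return seq
```

An observer sees only the second element of each pair, yet because "bull" mostly emits "up" and "bear" mostly emits "down", the observed sequence carries information about the hidden states, which is exactly the intuition described above.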
All of these numbers together are denoted by T and can be represented as T_{jk} ≡ p(y_{nk} = 1 | y_{n-1,j} = 1), where 0 ≤ T_{jk} ≤ 1 and ∑_k T_{jk} = 1. Therefore, the matrix T has S(S-1) independent parameters. The conditional distribution can be written as

p(y_n | y_{n-1}, T) = ∏_{k=1}^{S} ∏_{j=1}^{S} T_{jk}^{y_{n-1,j} y_{nk}}.  (3.8)

Note that the initial latent variable y_1 has no previous variable, so the above equation is adapted as p(y_1 | π) = ∏_{k=1}^{S} π_k^{y_{1k}}, where π denotes a vector of probabilities with π_k ≡ p(y_{1k} = 1) and ∑_k π_k = 1.

Emission probabilities are defined as the conditional distributions of the output, p(x_n | y_n, φ), where φ is a set of parameters controlling the distribution. These probabilities can be modeled by conditional probability tables if x is discrete, or by Gaussians if the elements of x are continuous variables. The emission probabilities can be represented as

p(x_n | y_n, φ) = ∏_{k=1}^{S} p(x_n | φ_k)^{y_{nk}}.  (3.9)
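Equations (3.8) and (3.9) use one-hot vectors so that the product picks out a single factor. A direct sketch of this mechanism (function names are illustrative; the emission here uses a discrete table):

```python
def cond_prob_transition(y_prev, y_cur, T):
    """Evaluate Eq. (3.8): prod over j, k of T[j][k] ** (y_prev[j] * y_cur[k]).
    For one-hot y_prev, y_cur every exponent is 0 except one, so the
    product reduces to the single entry T[j][k]."""
    p = 1.0
    for j, yj in enumerate(y_prev):
        for k, yk in enumerate(y_cur):
            p *= T[j][k] ** (yj * yk)
    return p

def cond_prob_emission(y_cur, x, E):
    """Evaluate Eq. (3.9) with a discrete emission table
    E[k][x] = p(x | phi_k); again the one-hot exponent selects one row."""
    p = 1.0
    for k, yk in enumerate(y_cur):
        p *= E[k][x] ** yk
    return p

T = [[0.7, 0.3], [0.4, 0.6]]
E = [[0.9, 0.1], [0.2, 0.8]]
# y_{n-1} = state 0 and y_n = state 1, in one-hot encoding:
prob = cond_prob_transition([1, 0], [0, 1], T)   # selects T[0][1]
```

In practice one would simply index into T and E; the product form matters because it makes the likelihood differentiable in the parameters and convenient for the EM derivations that follow in HMM training.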