3 Algorithms and Techniques

Here we discuss a special model, named the homogeneous model. The word "homogeneous" means that the transition matrix T and the emission parameters φ are shared across all time steps, regardless of the particular forms of the conditional and emission distributions. As such, we can obtain the joint probability distribution over both latent variables and observations as

p(X, Y \mid \theta) = p(y_1 \mid \pi) \left[ \prod_{n=2}^{N} p(y_n \mid y_{n-1}, T) \right] \prod_{m=1}^{N} p(x_m \mid y_m, \phi),    (3.10)

where X = {x_1, ..., x_N}, Y = {y_1, ..., y_N}, and θ = {π, T, φ} denotes the set of parameters controlling the model. Note that the emission probabilities can be of any type, e.g., Gaussians, mixtures of Gaussians, or neural networks, since the emission density p(x|y) can be obtained either directly or indirectly via Bayes' theorem, depending on the model.

There are many variants of the standard HMM, e.g., the left-to-right HMM [211], the generalized HMM [142], the generalized pair HMM [195], and so forth. To estimate the parameters of an HMM, the most commonly used algorithm is Expectation Maximization (EM) [72]. Several algorithms have since been proposed to further improve the efficiency of training HMMs, such as the forward-backward algorithm [211] and the sum-product approach. Readers can refer to [211, 33, 182] for more details.

3.6 K-Nearest-Neighboring

The K-Nearest-Neighbor (kNN) approach is the most commonly used recommendation scoring algorithm in recommender systems. It compares the current user's activity with the historic records of other users to find the top k users who share the most similar behaviors with the current one. In conventional recommender systems, finding the k nearest neighbors is usually accomplished by measuring the similarity between the current user and others in terms of item ratings or web page visits.
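To make Eq. (3.10) concrete, the following sketch evaluates the joint probability for a discrete-emission HMM. The parameter values (pi, T, phi) and the state/observation sequences are hypothetical illustrative numbers, not taken from the text:

```python
import numpy as np

# Hypothetical example parameters for a 2-state, 2-symbol HMM:
# pi[k]     = p(y_1 = k)            (initial distribution)
# T[j, k]   = p(y_n = k | y_{n-1} = j)   (transition matrix)
# phi[k, v] = p(x = v | y = k)      (discrete emission probabilities)
pi = np.array([0.6, 0.4])
T = np.array([[0.7, 0.3],
              [0.4, 0.6]])
phi = np.array([[0.9, 0.1],
                [0.2, 0.8]])

def joint_probability(states, observations, pi, T, phi):
    """Eq. (3.10): p(X,Y|theta) = p(y1|pi) * prod p(yn|yn-1,T) * prod p(xm|ym,phi)."""
    p = pi[states[0]]
    for n in range(1, len(states)):          # transition factors, n = 2..N
        p *= T[states[n - 1], states[n]]
    for m, x in enumerate(observations):     # emission factors, m = 1..N
        p *= phi[states[m], x]
    return float(p)

# Joint probability of hidden path (0, 0, 1) emitting observations (0, 0, 1).
p = joint_probability([0, 0, 1], [0, 0, 1], pi, T, phi)
# p = 0.6 * 0.7 * 0.3 * 0.9 * 0.9 * 0.8 = 0.081648
```

The same factorization underlies the forward-backward algorithm, which sums this product efficiently over all hidden paths rather than enumerating them.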
The neighboring users found are then used to predict items that the current active user is likely to rate or visit but has not done so yet, via collaborative filtering approaches. Therefore, the core component of the kNN algorithm is the similarity function used to measure the similarity or correlation between users in terms of attribute vectors, in which each user's activity is characterized as a sequence of attributes with corresponding weights.

3.7 Content-based Recommendation

Content-based recommendation is a textual information filtering approach based on a user's historic ratings on items. In content-based recommendation, a user is associated with the attributes of the items that he or she has rated, and a user profile is learned from these attributes to model the user's interests. The recommendation score is computed by measuring the similarity between the attributes the user has rated and those not yet rated, to determine which attributes might be rated by the same user. As a result of this attribute similarity comparison, the method is essentially a conventional information processing approach applied to recommendation. The learned user profile reflects the long-term preference of a user within a period, and can be updated as more rated attributes representing the user's interests are observed. Content-based recommendation is helpful for predicting an individual's preference since it relies on the individual's own historic rating data rather than taking others' preferences into consideration.
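The neighbor-finding step above can be sketched as follows. The rating matrix is a hypothetical example, and cosine similarity is used as one common choice of similarity function (the text does not fix a particular one):

```python
import numpy as np

# Hypothetical user-item rating matrix: rows = users, columns = items.
ratings = np.array([
    [5.0, 3.0, 0.0, 1.0],   # target user (index 0)
    [4.0, 2.0, 0.0, 1.0],
    [1.0, 0.0, 5.0, 4.0],
    [5.0, 3.0, 1.0, 1.0],
])

def cosine_sim(a, b):
    # Cosine of the angle between two attribute (rating) vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def k_nearest_neighbors(ratings, target, k):
    # Rank every other user by similarity to the target, keep the top k.
    sims = [(j, cosine_sim(ratings[target], ratings[j]))
            for j in range(len(ratings)) if j != target]
    sims.sort(key=lambda pair: pair[1], reverse=True)
    return sims[:k]

neighbors = k_nearest_neighbors(ratings, target=0, k=2)
# Users 1 and 3 rate similarly to user 0, so they come out on top.
```

Other similarity measures (e.g. Pearson correlation) drop into `cosine_sim`'s place without changing the rest of the procedure.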
3.8 Collaborative Filtering Recommendation

Collaborative filtering recommendation is probably the most commonly and widely used technique, and it has been well developed for recommender systems. As the name indicates, collaborative recommender systems work in a collaborative referring way: they aggregate ratings or preferences on items, discover user profiles/patterns by learning from users' historic rating records, and generate new recommendations on the basis of inter-pattern comparison. A typical user profile in a recommender system is expressed as a vector of ratings on different items. The rating values can be either binary (like/dislike) or analogous-valued, indicating the degree of preference, depending on the application scenario. In the context of collaborative filtering recommendation, two major kinds of collaborative filtering algorithms are discussed in the literature, namely memory-based and model-based collaborative filtering algorithms [117, 193, 218].

3.8.1 Memory-based collaborative recommendation

Memory-based algorithms use all the ratings of users in the training database when computing a recommendation. These systems can be further classified into two sub-categories: user-based and item-based algorithms [218]. For example, the user-based kNN algorithm, which is based on calculating the similarity between two users, discovers a set of users who have tastes similar to the target user, i.e. neighboring users, using the kNN algorithm. After the k nearest neighboring users are found, the system uses a collaborative filtering algorithm to produce a prediction of the top-N recommendations that the user may be interested in.
Given a target user u, the prediction on item i is then calculated by:

p_{u,i} = \frac{\sum_{j=1}^{k} R_{j,i}\, \mathrm{sim}(u,j)}{\sum_{j=1}^{k} \mathrm{sim}(u,j)}    (3.11)

Here R_{j,i} denotes the rating on item i voted by user j, and only the k most similar users (i.e. the k nearest neighbors of user u) are considered in making the recommendation.

In contrast to user-based kNN, the item-based kNN algorithm [218, 126] is a different collaborative filtering algorithm, based on computing the similarity between two columns, i.e. two items. In item-based kNN systems, a mutual item-item similarity table is constructed first by comparing the item vectors, in which each item is modeled as the set of ratings given by all users. To produce the prediction on an item i for user u, it computes the ratio of the sum of the ratings given by the user on the items similar to i to the sum of the involved item similarities, as follows:

p_{u,i} = \frac{\sum_{j=1}^{k} R_{u,j}\, \mathrm{sim}(i,j)}{\sum_{j=1}^{k} \mathrm{sim}(i,j)}    (3.12)

Here R_{u,j} denotes the rating given by user u on item j, and only the k most similar items (the k nearest neighbors of item i) are used to generate the prediction.
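Both prediction formulas share the same weighted-average shape. The sketch below implements the user-based case, Eq. (3.11); the neighbor ratings and similarity values are hypothetical illustrative numbers:

```python
# Eq. (3.11): user-based kNN prediction of user u's rating on item i,
# as a similarity-weighted average over the k nearest neighbors.

def predict_user_based(neighbor_ratings, similarities):
    """p_{u,i} = sum_j R_{j,i} * sim(u,j) / sum_j sim(u,j), j = 1..k."""
    num = sum(r * s for r, s in zip(neighbor_ratings, similarities))
    den = sum(similarities)
    return num / den

# Ratings on item i by the k = 3 nearest neighbors of u,
# and those neighbors' similarities to u.
ratings_on_i = [4.0, 5.0, 3.0]
sims_to_u = [0.9, 0.8, 0.5]

p_ui = predict_user_based(ratings_on_i, sims_to_u)
# (4*0.9 + 5*0.8 + 3*0.5) / (0.9 + 0.8 + 0.5) = 9.1 / 2.2 ≈ 4.14
```

Eq. (3.12) is the same computation with the roles swapped: the sum runs over the k items most similar to i, weighted by item-item similarities sim(i, j), using the target user's own ratings R_{u,j}.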