3 Algorithms and Techniques

Method: call FP-growth(FP-tree, null).

Procedure FP-growth(Tree, α):
  if Tree contains a single prefix path then
    let P be the single prefix-path part of Tree;
    let Q be the multipath part, with the top branching node replaced by a null root;
    for each combination β of the nodes in P do
      generate pattern β ∪ α with support = minimum support of the nodes in β;
    end
    let freq_pattern_set(P) be the set of patterns so generated;
  else
    let Q be Tree;
  end
  for each item a_i ∈ Q do
    generate pattern β = {a_i} ∪ α with support = a_i.support;
    construct β's conditional pattern base and then β's conditional FP-tree Tree_β;
    if Tree_β ≠ ∅ then call FP-growth(Tree_β, β);
  end
  let freq_pattern_set(Q) be the set of patterns so generated;
  return freq_pattern_set(P) ∪ freq_pattern_set(Q) ∪ (freq_pattern_set(P) × freq_pattern_set(Q));

3.1.3 Sequential Pattern Mining

The sequential pattern mining problem was first introduced in [11]; two example sequential patterns are: "80% of the people who buy a television also buy a video camera within a day", and "every time Microsoft stock drops by 5%, IBM stock will also drop by at least 4% within three days". Such patterns can be used to determine the efficient use of shelf space for customer convenience, or to properly plan the next step during an economic crisis. Sequential pattern mining is also very important for analyzing biological data [18, 86], which features a very small alphabet (i.e., 4 for DNA sequences and 20 for protein sequences) and long patterns, with typical lengths of a few hundred or even a few thousand items, that appear frequently.

Sequence discovery can be thought of as essentially association discovery over a temporal database. 
While association rules [9, 138] discern only intra-event patterns (itemsets), sequential pattern mining discerns inter-event patterns (sequences). There are many other important tasks related to association rule mining, such as correlations [42], causality [228], episodes [176], multi-dimensional patterns [154, 132], max-patterns [24], partial periodicity [105], and emerging patterns [78]. Incisive exploration of the sequential pattern mining problem will certainly help in finding efficient solutions to the other research problems listed above.

Efficient sequential pattern mining methodologies have been studied extensively for many related problems, including general sequential pattern mining [11, 232, 269, 202, 14], constraint-based sequential pattern mining [95], incremental sequential pattern mining [200], frequent episode mining [175], approximate sequential pattern mining [143], partial periodic pattern mining [105], temporal pattern mining in data streams [242], and maximal and closed sequential pattern mining [169, 261, 247]. In this section, due to space limitations, we focus on introducing the general sequential pattern mining algorithms, which are the most basic ones because all the others can benefit from the strategies they employ, i.e., the Apriori heuristic and
projection-based pattern growth. More details and surveys on sequential pattern mining can be found in [249, 172].

Sequential Pattern Mining Problem

Let I = {i_1, i_2, ..., i_k} be a set of items. A subset of I is called an itemset or an element. A sequence, s, is denoted 〈t_1, t_2, ..., t_l〉, where each t_j is an itemset, i.e., t_j ⊆ I for 1 ≤ j ≤ l. The itemset t_j is denoted (x_1 x_2 ... x_m), where each x_k is an item, i.e., x_k ∈ I for 1 ≤ k ≤ m. For brevity, the brackets are omitted if an itemset has only one item; that is, itemset (x) is written as x. The number of items in a sequence is called the length of the sequence, and a sequence of length l is called an l-sequence. A sequence s_a = 〈a_1, a_2, ..., a_n〉 is contained in another sequence s_b = 〈b_1, b_2, ..., b_m〉 if there exist integers 1 ≤ i_1 < i_2 < ... < i_n ≤ m such that a_1 ⊆ b_{i_1}, a_2 ⊆ b_{i_2}, ..., a_n ⊆ b_{i_n}. We then call s_a a subsequence of s_b, and s_b a supersequence of s_a. Given a sequence s = 〈s_1, s_2, ..., s_l〉 and an item α, s ⋄ α denotes the concatenation of s with α, which takes one of two forms: Itemset Extension (IE), s ⋄ α = 〈s_1, s_2, ..., s_l ∪ {α}〉, or Sequence Extension (SE), s ⋄ α = 〈s_1, s_2, ..., s_l, {α}〉. If s′ = p ⋄ s, then p is a prefix of s′ and s is a suffix of s′.

A sequence database, S, is a set of tuples 〈sid, s〉, where sid is a sequence id and s is a sequence. A tuple 〈sid, s〉 is said to contain a sequence β if β is a subsequence of s. The support of a sequence β in a sequence database S is the number of tuples in the database containing β, denoted support(β). Given a user-specified positive integer, ε, a sequence β is called a frequent sequential pattern if support(β) ≥ ε.

Existing Sequential Pattern Mining Algorithms

Sequential pattern mining algorithms can be grouped into two categories. 
One category is the Apriori-like algorithms, such as AprioriAll [11], GSP [232], SPADE [269], and SPAM [14]; the other is projection-based pattern growth, such as PrefixSpan [202].

AprioriAll

Sequential pattern mining was first introduced by Agrawal in [11], where three Apriori-based algorithms were proposed. Given a transaction database with the three attributes customer-id, transaction-time, and purchased-items, the mining process was decomposed into five phases:

• Sort Phase: The original transaction database is sorted on customer-id and transaction-time. Figure 3.3 shows the sorted transaction data.
• L-itemsets Phase: The sorted database is scanned to obtain the frequent (or large) 1-itemsets under the user-specified support threshold. Suppose the minimal support is 70%; in this case the minimal support count is 2, and the resulting large 1-itemsets are listed in Figure 3.4.
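The containment and support definitions above, together with the L-itemsets phase, can be sketched in a few lines of Python. This is a minimal illustration, not the book's code: the toy sequence database, the item names, and the absolute support count of 2 are all hypothetical stand-ins for the data in Figures 3.3 and 3.4.

```python
def contains(sub, seq):
    """True if sequence `sub` is contained in sequence `seq`.

    Both are lists of itemsets (frozensets). Containment requires an
    order-preserving mapping with a_j a subset of b_{i_j}, i_1 < ... < i_n.
    """
    i = 0
    for element in seq:
        if i < len(sub) and sub[i] <= element:  # a_j ⊆ b_{i_j}
            i += 1
    return i == len(sub)

def support(pattern, db):
    """Number of data sequences in `db` containing `pattern`."""
    return sum(1 for s in db if contains(pattern, s))

# Hypothetical sequence database: one entry per customer, each a list of
# itemsets ordered by transaction time (cf. the Sort Phase above).
db = [
    [frozenset({'tv'}), frozenset({'camera'})],
    [frozenset({'tv'}), frozenset({'camera', 'tripod'})],
    [frozenset({'camera'})],
]

min_sup = 2  # assumed absolute support count for this toy database

# L-itemsets phase: scan the database to find the large 1-itemsets.
items = {x for s in db for t in s for x in t}
large_1 = {x for x in items if support([frozenset({x})], db) >= min_sup}
```

With this data, the pattern 〈(tv)(camera)〉 has support 2, and the large 1-itemsets are {tv} and {camera}; {tripod} is pruned because its support count is 1.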