Web Mining and Social Networking: Techniques and ... - tud.ttu.ee

7.2 Temporal Analysis on Semantic Graph using Three-Way Tensor Decomposition

Fig. 7.8. Three-way DEDICOM model [15]

7.2.2 Algorithms

Notations

To formulate the mathematical problems in the three-way model precisely, the following notations, similar to those discussed in Chapter 2, are introduced. Scalars are denoted by lowercase letters, e.g. a. Vectors are denoted by boldface lowercase letters, e.g. a; the ith element of a is denoted by a_i. Matrices are denoted by boldface capital letters, e.g. A. The jth column of A is denoted by a_j and the entry (i, j) by a_ij. A three-way array is denoted by a tensor symbol, e.g. X; the element (i, j, k) of a three-way array X is denoted by x_ijk, and the kth frontal slice of X is denoted by X_k [15].

The symbol ⊗ denotes the matrix Kronecker product, and the symbol ∗ denotes the Hadamard (element-wise) matrix product. The Frobenius norm of a matrix, ‖Y‖_F, is the square root of the sum of squares of all its elements [15].

Algorithm

As discussed before, the proposed algorithm seeks the best approximation of the original data X_k. In this sense, fitting the three-way DEDICOM model is equivalent to an optimization problem, i.e. finding the best approximation to X. To fit the three-way DEDICOM model, we need to solve the following minimization problem:

    min_{A,R,D} ∑_{k=1}^{m} ‖X_k − A D_k R D_k A^T‖²_F        (7.8)

Several algorithms exist for solving Eq. 7.8. Bader et al. [15] proposed an alternating optimization algorithm called ASALSAN (Alternating Simultaneous Approximation, Least Squares, and Newton).
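The objective in Eq. 7.8 can be evaluated directly slice by slice. The following is a minimal NumPy sketch (the function name and the convention of storing the diagonals of all D_k as columns of one array are illustrative choices, not from [15]):

```python
import numpy as np

def dedicom_loss(X, A, R, D):
    """Objective of Eq. 7.8: sum over frontal slices k of
    ||X_k - A D_k R D_k A^T||_F^2.

    X : (n, n, m) array; X[:, :, k] is the kth frontal slice X_k
    A : (n, r) loading matrix
    R : (r, r) asymmetric interaction matrix
    D : (r, m) array; column k holds the diagonal of D_k
    """
    loss = 0.0
    for k in range(X.shape[2]):
        Dk = np.diag(D[:, k])                    # build diagonal D_k
        approx = A @ Dk @ R @ Dk @ A.T           # A D_k R D_k A^T
        loss += np.linalg.norm(X[:, :, k] - approx, "fro") ** 2
    return loss
```

By construction, the loss is exactly zero when every slice X_k equals A D_k R D_k A^T, which is a convenient sanity check for any fitting routine.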
The improvement claimed by the authors is the algorithm's ability to handle large and sparse arrays. The algorithm consists of one initialization step and three updating steps. The initialization step starts by randomly selecting A and R and setting D_k = I. The algorithm then updates A, R, and D in an alternating manner as follows.

• Updating A: First obtain all frontal slices of X by stacking the data side by side, i.e. X_k, k = 1, ···, m; then update A with the following formula:

    A ← [ ∑_{k=1}^{m} ( X_k A D_k R^T D_k + X_k^T A D_k R D_k ) ]_normalized        (7.9)
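The A-update of Eq. 7.9 can be sketched as below. This follows the formula as printed; the "normalized" step is taken here to mean scaling each column of A to unit length, which is a common convention but an assumption on our part (the exact normalization used in [15] is not specified in this excerpt):

```python
import numpy as np

def update_A(X, A, R, D):
    """One update of A following Eq. 7.9:
    A <- [ sum_k (X_k A D_k R^T D_k + X_k^T A D_k R D_k) ]_normalized.

    X : (n, n, m) array of frontal slices; A : (n, r); R : (r, r);
    D : (r, m), column k holding the diagonal of D_k.
    """
    acc = np.zeros_like(A)
    for k in range(X.shape[2]):
        Dk = np.diag(D[:, k])
        acc += X[:, :, k] @ A @ Dk @ R.T @ Dk    # X_k A D_k R^T D_k
        acc += X[:, :, k].T @ A @ Dk @ R @ Dk    # X_k^T A D_k R D_k
    # Assumed normalization: scale each column to unit Euclidean norm.
    return acc / np.linalg.norm(acc, axis=0, keepdims=True)
```

The two terms in the sum account for A appearing on both the left and the right of the product A D_k R D_k A^T, which is why both X_k and its transpose enter the update.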
