Information Network Analysis and Discovery
Cuiping Li, Guoming He
Information School, Renmin University of China
Related Work
1. Whole-graph Level
–Macro properties (laws, generators)
–Summary/Visualization
–Index
2. Sub-graph Level
–Frequent Pattern Mining
–Clustering (community/group detection)
–Connected Sub-graph, Central Piece
–Pattern Match
3. Node or Link Level
–Ranking
–Proximity/Similarity
–Node Classification
–Outlier Detection (abnormal nodes/links)
Node Proximity/Similarity: Why?
–Link prediction [Liben-Nowell+], [Tong+]
–Ranking [Haveliwala], [Chakrabarti+]
–Management [Minkov+]
–Image caption [Pan+]
–Neighborhood Formulation [Sun+]
–Conn. subgraph [Faloutsos+], [Tong+], [Koren+]
–Pattern match [Tong+]
–Collaborative Filtering [Fouss+]
–Many more…
Node Similarity: Related Work (1)
–Computer Networks'99: Finding Related Pages in the World Wide Web, Jeffrey Dean, Monika R. Henzinger (adapted from HITS)
–KDD'02: SimRank: A Measure of Structural-Context Similarity, Glen Jeh, Jennifer Widom (adapted from PageRank)
–TOIS'03: Exploiting Hierarchical Domain Structure to Compute Similarity, P. Ganesan, H. Garcia-Molina, J. Widom, ACM Transactions on Information Systems, 21(1): 64-93, January 2003
–Phys. Rev. E 73 (2006): Vertex Similarity in Networks
Optimization of SimRank:
–WWW'05: Scaling Link-based Similarity Search, D. Fogaras, B. Racz (approximate)
–VLDB'08: Accuracy Estimate and Optimization Techniques for SimRank Computation, Dmitry Lizorkin, Pavel Velikhov, Maxim Grinev, Denis Turdakov
Node Similarity: Related Work (2)
Domain-specific adaptations of SimRank:
–VLDB'08: SimRank++: Query Rewriting through Link Analysis of the Click Graph, Ioannis Antonellis (Stanford University), Hector Garcia-Molina (Stanford University), Chi-Chao Chang (Yahoo!) (keywords, ads)
Clustering using SimRank:
–SIGIR'03: ReCom: Reinforcement Clustering of Multi-type Interrelated Data Objects, J. Wang, H. J. Zeng, Z. Chen, H. J. Lu, L. Tao
–VLDB'06: LinkClus: Efficient Clustering via Heterogeneous Semantic Links, Xiaoxin Yin, Jiawei Han, Philip Yu
Existing Research: Limitation 1
Not Dynamic
–Static algorithm: iterative
–Challenge of dynamic networks: re-computation is needed even when a single node or edge changes
–Our solution: non-iterative, incremental computation
Cuiping Li, Jiawei Han, Guoming He, Xin Jin, Yizhou Sun, Yintao Yu, Tianyi Wu, "Fast Computation of SimRank for Static and Dynamic Information Networks", Int. Conf. on Extending Data Base Technology (EDBT'10), Lausanne, Switzerland, March 2010
Existing Research: Limitation 2
Not Efficient
–Our solution: employ modern hardware resources, i.e., the GPU (Graphics Processing Unit) and multi-processors
Compute Node Similarity for Dynamic Networks
SimRank formula: s(a,b) = c / (|I(a)| |I(b)|) · Σ_{u∈I(a)} Σ_{v∈I(b)} s(u,v), with s(a,a) = 1
or, in matrix form (as used on the later slides): X = cAᵀXA + (1−c)e
Intuition
–Two objects are similar if they are referenced by similar objects.
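To make the matrix form concrete, the following is a minimal NumPy sketch of the iterative computation X ← cAᵀXA + (1−c)I; the toy adjacency matrix, decay factor c = 0.6, and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def simrank_iterative(adj, c=0.6, iters=10):
    """Iterative SimRank in matrix form: X <- c * A^T X A + (1 - c) I,
    where A is the column-normalized adjacency matrix (slide notation)."""
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1            # guard against empty columns
    A = adj / col_sums                     # column-normalized adjacency matrix
    X = np.eye(n)                          # similarities start at the identity
    for _ in range(iters):
        X = c * (A.T @ X @ A) + (1 - c) * np.eye(n)
    return X

# Toy 4-node directed graph (made up for illustration).
adj = np.array([[0, 1, 0, 1],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 1, 0, 0]], dtype=float)
print(simrank_iterative(adj))
```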
How to Compute SimRank Incrementally
First glance at the SimRank formula
–It is iterative; at first sight there seems to be no way to compute it incrementally
Key Observation
–The SimRank iteration formula has the same form as the well-known Sylvester equation; based on this, we can compute SimRank without iteration.
Vec-Operator and Kronecker Product
Vec-operator
–vec flattens an n × n matrix A into an n² × 1 vector
–It stacks the columns of the matrix on top of each other, from left to right
Kronecker product
–Product of two matrices A and B
–Each element of A is multiplied by the full matrix B: A ⊗ B = [a_ij · B]
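A small NumPy illustration of both operators; the column-stacking vec corresponds to flattening in column-major (Fortran) order, and np.kron computes the Kronecker product. The matrices are arbitrary examples.

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
B = np.array([[0., 5.],
              [6., 7.]])

# vec(A): stack the columns of A on top of each other (column-major flatten).
vec_A = A.flatten(order='F')       # [1. 3. 2. 4.], an n^2-vector

# Kronecker product: every element a_ij of A multiplies the whole matrix B.
K = np.kron(A, B)                  # 4 x 4 block matrix [[1*B, 2*B], [3*B, 4*B]]

print(vec_A)
print(K)
```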
Sylvester Equations
Sylvester equation: X = SXT + X0
–Given three n × n matrices S, T, and X0
–We want to determine X
–Solvable in O(n³)
Sylvester Equations
Rewrite the Sylvester equation as vec(X) = vec(SXT) + vec(X0)
Exploit the well-known fact vec(SXT) = (Tᵀ ⊗ S) vec(X)
We get vec(X) = (Tᵀ ⊗ S) vec(X) + vec(X0)
That is, (I − Tᵀ ⊗ S) vec(X) = vec(X0)
Now we only have to solve vec(X) = (I − Tᵀ ⊗ S)⁻¹ vec(X0)
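The following sketch (NumPy, with small random matrices scaled so the system is well-conditioned) numerically checks the identity vec(SXT) = (Tᵀ ⊗ S) vec(X) and then solves X = SXT + X0 through the Kronecker rewriting above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Small random S, T (scaled so that I - T^T kron S is comfortably invertible) and X0.
S  = 0.3 * rng.random((n, n))
T  = 0.3 * rng.random((n, n))
X0 = rng.random((n, n))

def vec(M):
    return M.flatten(order='F')                    # column-stacking vec operator

# The well-known identity: vec(S X T) = (T^T kron S) vec(X).
X = rng.random((n, n))
assert np.allclose(vec(S @ X @ T), np.kron(T.T, S) @ vec(X))

# Solve X = S X T + X0 via (I - T^T kron S) vec(X) = vec(X0).
vec_X = np.linalg.solve(np.eye(n * n) - np.kron(T.T, S), vec(X0))
X_sol = vec_X.reshape((n, n), order='F')
print(np.allclose(X_sol, S @ X_sol @ T + X0))      # True
```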
SimRank
SimRank has the same form as the Sylvester equation: X = cAᵀXA + (1−c)e (A is the normalized adjacency matrix, e is the identity matrix)
Similarly, for SimRank we have to solve vec(X) = (I − cAᵀ ⊗ Aᵀ)⁻¹ vec((1−c)e), i.e.
vec(X) = (1−c) (I − cAᵀ ⊗ Aᵀ)⁻¹ vec(e)
–Like the Sylvester equation, this can be solved in O(n³)
–More importantly, when A is sparse/skewed, we can improve the efficiency further.
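A sketch of the non-iterative computation on a toy graph, assuming NumPy; the adjacency matrix and c = 0.6 are illustrative. The result is cross-checked against the fixed-point equation X = cAᵀXA + (1−c)e.

```python
import numpy as np

c = 0.6
adj = np.array([[0, 1, 0, 1],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 1, 0, 0]], dtype=float)
n = adj.shape[0]
A = adj / np.maximum(adj.sum(axis=0), 1)     # column-normalized adjacency matrix

# Closed form: vec(X) = (1 - c) (I - c A^T kron A^T)^{-1} vec(I).
L = np.eye(n * n) - c * np.kron(A.T, A.T)
vec_X = (1 - c) * np.linalg.solve(L, np.eye(n).flatten(order='F'))
X = vec_X.reshape((n, n), order='F')

# Cross-check against the fixed point of X = c A^T X A + (1 - c) I.
print(np.allclose(X, c * (A.T @ X @ A) + (1 - c) * np.eye(n)))   # True
```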
Advantages of the Non-iterative Method
vec(X) = (1−c) (I − cAᵀ ⊗ Aᵀ)⁻¹ vec(e)
–It can be solved approximately
–It can be computed incrementally
–It can be computed pair-wise
Approximation
vec(X) = (1−c) (I − cW ⊗ W)⁻¹ vec(e)
–Use singular value decomposition (SVD) and the Sherman-Morrison formula to compute the inverse of L
–W =
Approximation
Low-rank SVD decomposition of W
The choice of k
–The larger k is, the longer the computation time and the higher the accuracy
Error bound
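Below is a sketch of the SVD-plus-Woodbury approximation, under the assumption that W denotes Aᵀ (the transposed column-normalized adjacency matrix from the earlier slide); the exact factorization used in the EDBT'10 paper may differ in detail. Only a k² × k² system has to be solved, and the error shrinks as k approaches rank(W).

```python
import numpy as np

c, k, n = 0.6, 3, 6
rng = np.random.default_rng(1)
adj = (rng.random((n, n)) < 0.5).astype(float)     # random toy graph (assumption)
np.fill_diagonal(adj, 0)
A = adj / np.maximum(adj.sum(axis=0), 1)           # column-normalized adjacency
W = A.T                                            # assumed meaning of W

# Rank-k truncated SVD: W ~= Uk diag(sk) Vk^T.
U, s, Vt = np.linalg.svd(W)
Uk, sk, Vkt = U[:, :k], s[:k], Vt[:k, :]

# Mixed-product property: W kron W ~= (Uk kron Uk) @ St @ (Vk^T kron Vk^T),
# where St = kron(diag(sk), diag(sk)).
Ut  = np.kron(Uk, Uk)                              # n^2 x k^2
Vtt = np.kron(Vkt, Vkt)                            # k^2 x n^2

# Woodbury: (I - c Ut St Vtt)^{-1} = I - Ut (Vtt Ut - (1/c) St^{-1})^{-1} Vtt,
# so only a k^2 x k^2 linear system needs to be solved.
M = Vtt @ Ut - np.diag(1.0 / (c * np.kron(sk, sk)))
vec_e = np.eye(n).flatten(order='F')
vec_X_approx = (1 - c) * (vec_e - Ut @ np.linalg.solve(M, Vtt @ vec_e))

# Compare against the exact inversion (feasible here because n is tiny).
vec_X_exact = (1 - c) * np.linalg.solve(np.eye(n * n) - c * np.kron(W, W), vec_e)
print(np.max(np.abs(vec_X_approx - vec_X_exact)))  # shrinks as k -> rank(W)
```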
Approximation
Pre-computation
Computing the SimRank of a given node pair (i, j)
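A sketch of how pre-computation and per-pair queries can be split, under the same assumptions as the previous block (W = Aᵀ, rank-k SVD, Woodbury form): the stored objects are only k- and k²-dimensional, and a single pair (i, j) is evaluated without materializing the full n²-dimensional vec(X). Variable names are illustrative, not the paper's notation.

```python
import numpy as np

c, k, n = 0.6, 3, 6
rng = np.random.default_rng(1)
adj = (rng.random((n, n)) < 0.5).astype(float)
np.fill_diagonal(adj, 0)
W = (adj / np.maximum(adj.sum(axis=0), 1)).T       # W = A^T (assumption)

# --- Pre-computation (done once; only k- and k^2-sized objects are kept) ---
U, s, Vt = np.linalg.svd(W)
Uk, sk, Vkt = U[:, :k], s[:k], Vt[:k, :]
VU = Vkt @ Uk                                      # k x k
M = np.kron(VU, VU) - np.diag(1.0 / (c * np.kron(sk, sk)))
rhs = (Vkt @ np.eye(n) @ Vkt.T).flatten(order='F') # = (Vk^T kron Vk^T) vec(e)
y = np.linalg.solve(M, rhs)                        # k^2-vector, stored

# --- Query: approximate SimRank of a single pair (i, j) ---
def simrank_pair(i, j):
    # Row (i + j*n) of (Uk kron Uk) is kron(Uk[j, :], Uk[i, :]).
    row = np.kron(Uk[j, :], Uk[i, :])
    return (1 - c) * ((1.0 if i == j else 0.0) - row @ y)

print(simrank_pair(0, 2))
```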
Incremental Computation
Only the SVD factors U, Σ, and V need to be maintained.
Applications
Similarity Tracking: return the N nodes most similar to node i at each time step t.
Centrality Tracking: return the N most central nodes at each time step t.
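Assuming the similarity matrix X has been maintained up to time step t, the two queries could look like the sketch below; the aggregate-similarity notion of centrality used here is an illustrative assumption, not necessarily the paper's definition.

```python
import numpy as np

def top_n_similar(X, i, N):
    """Return the N nodes most similar to node i under similarity matrix X
    (excluding i itself)."""
    order = np.argsort(-X[i])            # descending similarity
    return [j for j in order if j != i][:N]

def top_n_central(X, N):
    """Illustrative centrality: rank nodes by their total similarity to all
    other nodes (an assumed aggregate, not necessarily the paper's measure)."""
    scores = X.sum(axis=1) - np.diag(X)  # exclude self-similarity
    return list(np.argsort(-scores)[:N])

# X would be the SimRank matrix maintained at time step t.
X = np.array([[1.0, 0.2, 0.5],
              [0.2, 1.0, 0.1],
              [0.5, 0.1, 1.0]])
print(top_n_similar(X, 0, 2))   # [2, 1]
print(top_n_central(X, 2))
```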
Experimental Results on DBLP
Top-10 Most Similar Terms for 'Prof. Jennifer Widom' up to Each Time Step
Experimental Results
Top-10 Most Similar Authors for 'Prof. Jennifer Widom' up to Each Time Step
Pre-computation Time
Computation Time for Different Numbers of Node Pairs
Wikipedia Data
We set the threshold T to 1.0e-6. For k = 15:
–the pre-computation time on the Wikipedia dataset is approx. hours
–the query time per 1000 node pairs is seconds