Slide 1: Analysis of Social Media (MLD 10-802, LTI 11-772), William Cohen, 2-15-11
Slide 2: The "force" on nodes in a graph
Suppose every node has a value (IQ, income, ...) y(i):
– Each node i has value y_i, neighbors N(i), and degree d_i
– If i and j are connected, then j exerts a force -K[y_i - y_j] on i
– Total force on i: F_i = -K Σ_{j ∈ N(i)} [y_i - y_j]
– Matrix notation: F = -K(D-A)y, where D-A is the Laplacian
– Interesting (?) goal: set y so that (D-A)y = c·y, i.e., y is an eigenvector of the Laplacian
– Picture: neighbors pull i up or down, but the net force doesn't change the relative positions of the nodes
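The force computation above can be sketched in a few lines; the graph, the constant K, and the node values y are made up for illustration:

```python
import numpy as np

# Hypothetical 5-node path graph
A = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))               # diagonal degree matrix
L = D - A                                # unnormalized Laplacian
K = 1.0                                  # assumed "spring constant"
y = np.array([1.0, 2.0, 0.5, 3.0, 1.5])  # arbitrary node values

# Force on each node: F_i = -K * sum over neighbors of (y_i - y_j),
# which in matrix form is F = -K (D - A) y
F = -K * (L @ y)
```

Because each pairwise force appears once with each sign, the forces sum to zero over the whole graph: nodes get pulled toward their neighbors, but nothing moves the graph as a whole.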
Slide 3: Spectral Clustering: Graph = Matrix
How do I pick y to be an eigenvector of a block-stochastic matrix?
Slide 4: Spectral Clustering: Graph = Matrix
W·v_1 = v_2 "propagates weights from neighbors" [Shi & Meila, 2002]
[Figure: points from three clusters (x, y, z) plotted in the coordinates of eigenvectors e_2 and e_3]
Slide 6: Another way the Laplacian comes up: it defines a cost function for y, where y assigns nodes to the + or - class so as to keep connected nodes in the same class.
It turns out that to minimize y^T X y / (y^T y), you find the smallest eigenvector of X.
But this eigenvector will not be +1/-1-valued, so it is a "relaxed" solution.
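A sketch of the relaxation on a toy graph (the graph and all names are made up). One detail worth adding: for a connected graph the very smallest eigenvector of the Laplacian is the trivial constant vector with eigenvalue 0, so the useful relaxed solution is the next-smallest ("Fiedler") eigenvector, whose signs give the class assignment:

```python
import numpy as np

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the edge 2-3
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A      # unnormalized Laplacian D - A

vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
# vals[0] is ~0 with a constant eigenvector; vecs[:, 1] is the
# relaxed solution, real-valued rather than +1/-1
relaxed = vecs[:, 1]
classes = np.sign(relaxed)          # round the relaxation to +/- classes
```

On this graph the sign pattern recovers the two triangles, since the minimum-cost split cuts only the single bridge edge.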
Slide 7: Some more terms
If A is an adjacency matrix (maybe weighted) and D is a (diagonal) matrix giving the degree of each node:
– D - A is the (unnormalized) Laplacian
– W = AD^-1 is a probabilistic adjacency matrix
– I - W is the (normalized or random-walk) Laplacian
– etc.
The largest eigenvectors of W correspond to the smallest eigenvectors of I - W, so sometimes people talk about the "bottom eigenvectors of the Laplacian".
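The correspondence can be checked numerically on a toy graph (graph assumed for illustration): if Wv = λv then (I - W)v = (1 - λ)v, so large eigenvalues of W map to small eigenvalues of I - W with the same eigenvectors.

```python
import numpy as np

# Toy 4-node path graph
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))

W = A @ np.linalg.inv(D)    # probabilistic adjacency, W = A D^-1
L_rw = np.eye(4) - W        # random-walk Laplacian, I - W

# W is column-stochastic, and the spectra satisfy eig(I - W) = 1 - eig(W),
# so the eigenvalue ordering simply reverses between the two matrices.
w_eigs = np.sort(np.linalg.eigvals(W).real)
l_eigs = np.sort(np.linalg.eigvals(L_rw).real)
```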
Slide 8: [Figure: two ways to build a graph from point data, each shown with its adjacency matrix A and weight matrix W: a k-nn graph (easy), and a fully connected graph weighted by distance]
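The two constructions can be sketched as follows; the point set, the kernel width sigma, and k are made-up illustration values:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2))        # 10 hypothetical 2-D points

# Pairwise squared distances between all points
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

# Fully connected graph, weighted by a Gaussian kernel of distance
sigma = 1.0
W_full = np.exp(-d2 / (2 * sigma ** 2))
np.fill_diagonal(W_full, 0.0)       # no self-loops

# k-nn graph: connect each point to its k nearest neighbors
k = 3
A = np.zeros_like(d2)
for i in range(len(X)):
    nn = np.argsort(d2[i])[1:k + 1]  # skip position 0 (the point itself)
    A[i, nn] = 1.0
A = np.maximum(A, A.T)               # symmetrize the adjacency matrix
```

The k-nn graph stays sparse as the data grows, while the fully connected version keeps all pairwise information but at quadratic cost.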
Slide 9 (repeat of Slide 4): Spectral Clustering: Graph = Matrix
W·v_1 = v_2 "propagates weights from neighbors" [Shi & Meila, 2002]
[Figure: points from three clusters (x, y, z) plotted in the coordinates of eigenvectors e_2 and e_3]
Slide 10: Spectral Clustering: Graph = Matrix
W·v_1 = v_2 "propagates weights from neighbors"
If W is connected but roughly block diagonal with k blocks, then the top eigenvector is a constant vector, and the next k eigenvectors are roughly piecewise constant, with "pieces" corresponding to the blocks.
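This claim can be checked numerically. A sketch, using the row-stochastic random-walk convention W = D^-1 A (rather than the AD^-1 form above, so that the top eigenvector comes out exactly constant) on a made-up two-block graph:

```python
import numpy as np

# Two 4-cliques joined by a single edge: roughly block diagonal
A = np.ones((8, 8)) - np.eye(8)
A[:4, 4:] = A[4:, :4] = 0.0
A[3, 4] = A[4, 3] = 1.0
d = A.sum(axis=1)

# Row-stochastic random-walk matrix: W = D^-1 A
W = A / d[:, None]

vals, vecs = np.linalg.eig(W)
order = np.argsort(vals.real)
v1 = vecs[:, order[-1]].real        # top eigenvector: constant
v2 = vecs[:, order[-2]].real        # next one: roughly piecewise constant
```

Here v1 is constant because W has row sums of 1 (the all-ones vector has eigenvalue 1), and the signs of v2 are constant within each clique, flipping only across the bridge.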
Slide 11: Spectral Clustering: Graph = Matrix
W·v_1 = v_2 "propagates weights from neighbors"
If W is connected but roughly block diagonal with k blocks, then the "top" eigenvector is a constant vector, and the next k eigenvectors are roughly piecewise constant, with "pieces" corresponding to the blocks.
Spectral clustering:
– Find the top k+1 eigenvectors v_1, ..., v_{k+1}
– Discard the "top" one
– Replace every node a with the k-dimensional vector x_a = <v_2(a), ..., v_{k+1}(a)>
– Cluster with k-means
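A minimal end-to-end sketch of the steps above on a toy two-block graph. Two assumptions beyond the slide: a symmetric variant of W (D^-1/2 A D^-1/2) is used so that np.linalg.eigh applies, and k-means is a bare-bones Lloyd's loop with a fixed initialization rather than a library call.

```python
import numpy as np

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the edge 2-3
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
d = A.sum(axis=1)

# Symmetric variant of W: D^-1/2 A D^-1/2
M = A / np.sqrt(np.outer(d, d))

k = 2
vals, vecs = np.linalg.eigh(M)      # eigenvalues in ascending order
emb = vecs[:, -(k + 1):-1]          # top k+1 eigenvectors, drop the very top
# emb[a] is the k-dimensional vector x_a for node a

# Minimal k-means (Lloyd's algorithm), initialized at nodes 0 and 5
centers = emb[[0, 5]].copy()
for _ in range(20):
    dists = ((emb[:, None, :] - centers[None]) ** 2).sum(-1)
    labels = np.argmin(dists, axis=1)
    for c in range(k):
        centers[c] = emb[labels == c].mean(axis=0)
```

On this graph the embedded points form two well-separated groups, so k-means recovers the two triangles.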
Slide 12: Experimental results: best-case assignment of class labels to clusters
[Figure: two panels comparing results using eigenvectors of W vs. eigenvectors of a variant of W]