1
Laplacian Matrices of Graphs: Algorithms and Applications
Daniel A. Spielman
ICML, June 21, 2016
2
Outline
Laplacians: interpolation on graphs, spring networks, clustering, isotonic regression
Sparsification
Solving Laplacian equations: best results, the simplest algorithm
3
Interpolation on Graphs (Zhu, Ghahramani, Lafferty '03)
Interpolate values of a function at all vertices from given values at a few vertices.
Minimize $\sum_{(a,b) \in E} (x(a) - x(b))^2$ subject to the given values.
[Figure: a gene-interaction graph on ANAPC10, CDC27, ANAPC5, UBE2C, ANAPC2, and CDC20, with two vertices fixed to the values 1 and 0.]
4
Interpolation on Graphs (Zhu, Ghahramani, Lafferty '03)
Interpolate values of a function at all vertices from given values at a few vertices.
Minimize $\sum_{(a,b) \in E} (x(a) - x(b))^2$ subject to the given values.
[Figure: the same graph with the interpolated values 0.51, 0.61, 0.53, and 0.30 at the remaining vertices.]
5
Interpolation on Graphs (Zhu, Ghahramani, Lafferty '03)
Interpolate values of a function at all vertices from given values at a few vertices.
Minimize $\sum_{(a,b) \in E} (x(a) - x(b))^2$ subject to the given values.
Take derivatives: setting them to zero shows the minimizer is found by solving a Laplacian linear system.
6
Interpolation on Graphs (Zhu, Ghahramani, Lafferty '03)
Interpolate values of a function at all vertices from given values at a few vertices.
Minimize $\sum_{(a,b) \in E} (x(a) - x(b))^2$ subject to the given values.
In the solution, each free variable is the average of its neighbors.
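To make this concrete, here is a minimal sketch of the interpolation in Python/NumPy. The graph, the labeled vertices, and their values are invented for illustration (they are not the gene network in the figure); the linear system is the one obtained by setting the derivatives of the quadratic form to zero.

```python
import numpy as np

# Adjacency matrix of a small undirected graph (6 vertices); invented example.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 1, 1, 0],
    [0, 1, 1, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
L = np.diag(A.sum(axis=1)) - A              # Laplacian L = D - A

fixed = {0: 1.0, 5: 0.0}                    # given values at a few vertices
free = [v for v in range(6) if v not in fixed]

# Minimizing the sum over edges of (x(a) - x(b))^2 subject to the given
# values gives the system  L[free, free] x_free = -L[free, fixed] x_fixed.
x_fixed = np.array(list(fixed.values()))
x_free = np.linalg.solve(L[np.ix_(free, free)],
                         -L[np.ix_(free, list(fixed))] @ x_fixed)

x = np.zeros(6)
x[list(fixed)] = x_fixed
x[free] = x_free
print(np.round(x, 2))       # each free entry is the average of its neighbors
```

Restricting the Laplacian to the free vertices of a connected graph with a nonempty boundary gives a positive definite system, so the solve is well posed.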
7
The Laplacian Quadratic Form
$x^\top L_G x = \sum_{(a,b) \in E} w_{a,b}\,(x(a) - x(b))^2$, where $w_{a,b}$ is the weight of edge $(a,b)$.
8
The Laplacian Matrix of a Graph
$L_G = D_G - A_G$: the diagonal matrix of weighted degrees minus the weighted adjacency matrix. It is the symmetric matrix whose quadratic form is the one above.
9
Spring Networks
View edges as rubber bands or ideal linear springs.
Nail down some vertices; let the rest settle.
In equilibrium, each free node is the average of its neighbors.
10
Spring Networks
View edges as rubber bands or ideal linear springs.
Nail down some vertices; let the rest settle.
When a spring of constant $w$ is stretched to length $\ell$, its potential energy is $\tfrac{1}{2} w \ell^2$.
11
Spring Networks
Nail down some vertices; let the rest settle.
Physics: the resting position minimizes the total potential energy subject to the boundary constraints (the nails).
12
Spring Networks
Interpolate values of a function at all vertices from given values at a few vertices: minimize the total potential energy $\tfrac{1}{2}\sum_{(a,b) \in E} w_{a,b}(x(a) - x(b))^2$ subject to the given values.
[Figure: the gene-interaction graph as a spring network, with two vertices nailed at the values 1 and 0.]
14
Spring Networks
Interpolate values of a function at all vertices from given values at a few vertices: minimize the potential energy subject to the given values.
In equilibrium, each free node is the average of its neighbors.
[Figure: the spring network settles at the values 0.51, 0.61, 0.53, and 0.30.]
15
Drawing by Spring Networks (Tutte '63)
Nail the vertices of one face to a convex polygon; let the remaining vertices settle into equilibrium.
[Figure sequence: a planar graph being drawn by the spring method.]
20
Drawing by Spring Networks (Tutte '63)
If the graph is planar (and 3-connected, with one face nailed to a convex polygon), then the spring drawing has no crossing edges!
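A hedged sketch of the drawing procedure, assuming Tutte's recipe of nailing one face to a convex polygon. The cube graph below is an invented example, not one of the graphs drawn in the slides.

```python
import numpy as np

# The cube graph: planar and 3-connected, so Tutte's theorem applies.
edges = [(0, 1), (1, 2), (2, 3), (3, 0),     # outer face
         (4, 5), (5, 6), (6, 7), (7, 4),     # inner face
         (0, 4), (1, 5), (2, 6), (3, 7)]     # spokes
n = 8
A = np.zeros((n, n))
for a, b in edges:
    A[a, b] = A[b, a] = 1.0
L = np.diag(A.sum(axis=1)) - A

outer = [0, 1, 2, 3]                         # nail these to a convex polygon
inner = [4, 5, 6, 7]
pos = np.zeros((n, 2))
pos[outer] = [(0, 0), (1, 0), (1, 1), (0, 1)]

# The spring equilibrium is the harmonic extension of the nailed positions,
# found by solving one Laplacian system per coordinate.
pos[inner] = np.linalg.solve(L[np.ix_(inner, inner)],
                             -L[np.ix_(inner, outer)] @ pos[outer])
print(np.round(pos, 3))     # the inner square sits strictly inside the outer
```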
26
Measuring Boundaries of Sets
Boundary: the edges leaving a set S.
[Figure: a graph with a vertex set S circled; the boundary edges cross the circle.]
27
Measuring Boundaries of Sets
Characteristic vector of S: $\chi_S(a) = 1$ for $a \in S$ and $0$ otherwise.
[Figure: the graph with vertices labeled 1 inside S and 0 outside.]
28
Measuring Boundaries of Sets
Boundary: the edges leaving a set S. Each boundary edge $(a,b)$ contributes $(\chi_S(a) - \chi_S(b))^2 = 1$ to the quadratic form, so $\chi_S^\top L_G \chi_S$ is the number of edges leaving S.
[Figure: the same 0/1 labeling, with the boundary edges highlighted.]
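A tiny sketch checking this identity numerically; the graph and the set S are made up for illustration.

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
n = 6
L = np.zeros((n, n))
for a, b in edges:
    L[a, a] += 1; L[b, b] += 1
    L[a, b] -= 1; L[b, a] -= 1

S = [0, 1, 2]
chi = np.zeros(n)
chi[S] = 1.0
print(chi @ L @ chi)    # 1.0: exactly one edge, (2, 3), leaves S
```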
29
Spectral Clustering and Partitioning
Goal: find large sets with small boundary.
Heuristic: to find an x with small $x^\top L_G x$, compute an eigenvector of a small eigenvalue of $L_G$, then consider the level sets $S_t = \{a : x(a) \ge t\}$.
[Figure: the graph with real-valued eigenvector entries at the vertices; one level set S is circled.]
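The following sketch implements the heuristic on an invented example, two cliques joined by a single edge: compute the eigenvector of the second-smallest eigenvalue and sweep over its level sets. The crude score cut/min(|S|, |V minus S|) is my stand-in for whichever cut-quality measure one prefers.

```python
import numpy as np

# Invented example: two 5-cliques joined by a single bridge edge.
edges = [(a, b) for a in range(5) for b in range(a + 1, 5)]
edges += [(a, b) for a in range(5, 10) for b in range(a + 1, 10)]
edges += [(4, 5)]
n = 10
L = np.zeros((n, n))
for a, b in edges:
    L[a, a] += 1; L[b, b] += 1
    L[a, b] -= 1; L[b, a] -= 1

# Eigenvector of the second-smallest eigenvalue (the Fiedler vector).
vals, vecs = np.linalg.eigh(L)
x = vecs[:, 1]

# Sweep over the level sets of x and keep the best cut.
order = np.argsort(x)
best = None
for k in range(1, n):
    S = set(order[:k].tolist())
    cut = sum(1 for a, b in edges if (a in S) != (b in S))
    score = cut / min(k, n - k)          # crude sparsity measure of the cut
    if best is None or score < best[0]:
        best = (score, k)
print(best)     # (0.2, 5): the best level set splits the two cliques
```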
30
The Laplacian Matrix of a Graph
Symmetric; non-positive off-diagonal entries; diagonally dominant.
[Figure: an example weighted graph and its Laplacian matrix.]
32
Laplacian Matrices of Weighted Graphs
$L_G = \sum_{(a,b) \in E} w_{a,b}\, L_{a,b}$, where $L_{a,b} = (\chi_a - \chi_b)(\chi_a - \chi_b)^\top$ is the Laplacian of the single edge $(a,b)$.
33
Laplacian Matrices of Weighted Graphs
$L_G = B^\top W B$, where B is the signed edge-vertex incidence matrix, with one row for each edge, and W is the diagonal matrix of edge weights.
34
Laplacian Matrices of Weighted Graphs
[Figure: an example weighted graph with its matrices B, W, and $L_G = B^\top W B$.]
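A short sketch verifying $L_G = B^\top W B$ against the $D - A$ definition on an invented weighted graph; the weights below are arbitrary, not those of the figure.

```python
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
weights = [2.0, 1.0, 4.0, 3.0, 5.0]          # arbitrary positive weights

B = np.zeros((len(edges), n))                # one row per edge
for i, (a, b) in enumerate(edges):
    B[i, a], B[i, b] = 1.0, -1.0             # arbitrary orientation per edge
W = np.diag(weights)

A = np.zeros((n, n))
for (a, b), w in zip(edges, weights):
    A[a, b] = A[b, a] = w
L = np.diag(A.sum(axis=1)) - A               # the D - A definition

assert np.allclose(B.T @ W @ B, L)           # the two definitions agree
```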
35
Quickly Solving Laplacian Equations
S, Teng '04: using low-stretch trees and sparsifiers, in time $O(m \log^{O(1)} n \, \log(1/\epsilon))$, where m is the number of non-zeros and n is the dimension.
36
Quickly Solving Laplacian Equations
S, Teng '04: using low-stretch trees and sparsifiers, $O(m \log^{O(1)} n \, \log(1/\epsilon))$.
Koutis, Miller, Peng '11: low-stretch trees and sampling, $\widetilde{O}(m \log n \, \log(1/\epsilon))$.
(m is the number of non-zeros, n the dimension.)
37
Quickly Solving Laplacian Equations
S, Teng '04: using low-stretch trees and sparsifiers, $O(m \log^{O(1)} n \, \log(1/\epsilon))$.
Koutis, Miller, Peng '11: low-stretch trees and sampling, $\widetilde{O}(m \log n \, \log(1/\epsilon))$.
Cohen, Kyng, Pachocki, Peng, Rao '14: $\widetilde{O}(m \sqrt{\log n} \, \log(1/\epsilon))$.
(m is the number of non-zeros, n the dimension.)
38
Quickly Solving Laplacian Equations
S, Teng '04: using low-stretch trees and sparsifiers.
Koutis, Miller, Peng '11: low-stretch trees and sampling.
Cohen, Kyng, Pachocki, Peng, Rao '14.
Good code:
LAMG (lean algebraic multigrid), Livne & Brandt
CMG (combinatorial multigrid), Koutis
39
Quickly Solving Laplacian Equations
An $\epsilon$-accurate solution to $Lx = b$ is an x satisfying $\|x - L^{+}b\|_{L} \le \epsilon \|L^{+}b\|_{L}$, where $\|y\|_{L} = \sqrt{y^\top L y}$.
S, Teng '04: using low-stretch trees and sparsifiers, in time $O(m \log^{O(1)} n \, \log(1/\epsilon))$, where m is the number of non-zeros and n is the dimension.
40
Quickly Solving Laplacian Equations
An $\epsilon$-accurate solution to $Lx = b$ is an x satisfying $\|x - L^{+}b\|_{L} \le \epsilon \|L^{+}b\|_{L}$.
S, Teng '04: using low-stretch trees and sparsifiers.
This also allows fast computation of eigenvectors corresponding to small eigenvalues.
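For scale, here is a sketch of solving a Laplacian system iteratively. Plain conjugate gradient from SciPy stands in for the fast solvers cited above; it has no low-stretch-tree, sparsifier, or multigrid preconditioner, so it does not achieve their running times. The path graph is an invented example.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 500
# Laplacian of a path graph, as a sparse matrix.
main = np.full(n, 2.0)
main[0] = main[-1] = 1.0
L = diags([main, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csr")

rng = np.random.default_rng(0)
b = rng.standard_normal(n)
b -= b.mean()                    # L is singular; put b in its range

x, info = cg(L, b, atol=1e-8)    # unpreconditioned conjugate gradient
assert info == 0
print(np.linalg.norm(L @ x - b) / np.linalg.norm(b))   # small relative residual
```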
41
Laplacians in Linear Programming
Laplacians appear when solving linear programs on graphs by interior point methods:
Maximum and min-cost flow (Daitch, S '08; Mądry '13)
Shortest paths (Cohen, Mądry, Sankowski, Vladu '16)
Lipschitz learning: regularized interpolation on graphs (Kyng, Rao, Sachdeva, S '15)
Isotonic regression (Kyng, Rao, Sachdeva '15)
42
Isotonic Regression (Ayer et al. '55)
A function x is isotonic with respect to a directed acyclic graph if x increases along every edge.
[Figure: a DAG whose vertices carry the values 2.5, 3.2, 4.0, 3.7, 3.6, and 3.9.]
43
Isotonic Regression (Ayer et al. '55)
[Figure: the same values plotted against high-school GPA and SAT; the numbers are college GPAs.]
44
Isotonic Regression (Ayer et al. '55)
Estimate by nearest neighbor?
[Figure: the college-GPA values plotted against high-school GPA and SAT.]
45
Isotonic Regression (Ayer et al. '55)
Estimate by nearest neighbor? We want the estimate to be monotonically increasing.
[Figure: the college-GPA values plotted against high-school GPA and SAT.]
46
Isotonic Regression (Ayer et al. '55)
Given y, find the isotonic x minimizing $\|x - y\|_2$.
[Figure: the same plot of the values 2.5, 3.2, 4.0, 3.7, 3.6, 3.9.]
47
Isotonic Regression (Ayer et al. '55)
Given y, find the isotonic x minimizing $\|x - y\|_2$.
[Figure: the solution; the out-of-order values are pooled to their average 3.866, while 2.5, 3.2, and 3.6 are unchanged.]
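The classical algorithm of Ayer et al. is pool adjacent violators. Here is a sketch for the special case where the DAG is a path, i.e. a total order; the input values are illustrative, not the partial order of the figure.

```python
def isotonic_l2(y):
    """Nondecreasing x minimizing sum of (x_i - y_i)^2, by pooling adjacent
    violators: merge neighboring blocks that are out of order into their mean."""
    blocks = []                              # each block is [mean, size]
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2 = blocks.pop()
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    return [b[0] for b in blocks for _ in range(b[1])]

print(isotonic_l2([2.5, 3.2, 4.0, 3.7, 3.9, 3.6]))
# [2.5, 3.2, 3.8, 3.8, 3.8, 3.8]: the out-of-order tail pools to its average
```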
48
Fast IPM for Isotonic Regression (Kyng, Rao, Sachdeva '15)
Given y, find the isotonic x minimizing $\|x - y\|_2$.
49
Fast IPM for Isotonic Regression (Kyng, Rao, Sachdeva '15)
Given y, find the isotonic x minimizing $\|x - y\|_2$, or $\|x - y\|_p$ for any p, in time about $m^{1.5}$, up to logarithmic factors.
50
Linear Program for Isotonic Regression
Let B be the signed edge-vertex incidence matrix; x is isotonic if $Bx \ge 0$ entrywise.
51
Linear Program for Isotonic Regression
Given y, minimize $\|x - y\|$ subject to $Bx \ge 0$, rewritten step by step as a linear program.
56
Linear Program for Isotonic Regression
Minimize the objective subject to the isotonic constraints. The IPM solves a sequence of equations of the form $(B^\top D_1 B + D_2)\,z = c$ with positive diagonal matrices $D_1$ and $D_2$.
57
Linear Program for Isotonic Regression
$D_1$ and $D_2$ are positive diagonal, so $B^\top D_1 B + D_2$ is symmetric and diagonally dominant: it is a Laplacian plus a positive diagonal!
58
Linear Program for Isotonic Regression
$D_1$ and $D_2$ are positive diagonal, so the system matrix is essentially a Laplacian.
Kyng, Rao, Sachdeva '15: reduce to solving Laplacian systems to constant accuracy.
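The transcript does not preserve the exact system solved by the IPM, so the sketch below only illustrates the stated structure: with made-up positive diagonals $D_1, D_2$, the matrix $B^\top D_1 B$ is the Laplacian of a reweighted graph, and adding $D_2$ keeps everything symmetric and diagonally dominant.

```python
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
B = np.zeros((len(edges), n))
for i, (a, b) in enumerate(edges):
    B[i, a], B[i, b] = 1.0, -1.0

D1 = np.diag([0.7, 2.0, 1.5, 0.3])      # made-up positive diagonals
D2 = np.diag([0.1, 0.4, 0.2, 0.9])
M = B.T @ D1 @ B + D2

# B^T D1 B is the Laplacian of the graph reweighted by D1 (rows sum to 0),
# so adding the positive diagonal D2 leaves M symmetric, diagonally dominant.
assert np.allclose(M, M.T)
assert all(M[i, i] >= np.abs(M[i]).sum() - M[i, i] for i in range(n))
```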
59
Spectral Sparsification Every graph can be approximated by a sparse graph with a similar Laplacian
60
Approximating Graphs
A graph H is an $\epsilon$-approximation of G if $\frac{1}{1+\epsilon}\, x^\top L_H x \le x^\top L_G x \le (1+\epsilon)\, x^\top L_H x$ for all x.
61
Approximating Graphs
A graph H is an $\epsilon$-approximation of G if $\frac{1}{1+\epsilon}\, x^\top L_H x \le x^\top L_G x \le (1+\epsilon)\, x^\top L_H x$ for all x.
That is, $L_G \preceq (1+\epsilon) L_H$ and $L_H \preceq (1+\epsilon) L_G$, where $A \preceq B$ iff $x^\top A x \le x^\top B x$ for all x.
62
Approximating Graphs
A graph H is an $\epsilon$-approximation of G if the quadratic forms agree up to a factor $1+\epsilon$ for all x.
This preserves the boundary of every set: take $x = \chi_S$.
63
Approximating Graphs
A graph H is an $\epsilon$-approximation of G if the quadratic forms agree up to a factor $1+\epsilon$ for all x.
Solutions to the corresponding linear equations are then similar.
64
Spectral Sparsification (Batson, S, Srivastava '09)
Every graph G has an $\epsilon$-approximation H with $O(n/\epsilon^2)$ edges.
65
Spectral Sparsification (Batson, S, Srivastava '09)
Every graph G has an $\epsilon$-approximation H with $O(n/\epsilon^2)$ edges.
Example: random regular graphs approximate complete graphs.
66
Fast Spectral Sparsification
(S & Srivastava '08) If we sample each edge with probability inversely proportional to its effective spring constant (that is, proportional to its effective resistance), only $O(n \log n / \epsilon^2)$ samples are needed.
(Lee & Sun '15) An $\epsilon$-approximation with $O(n/\epsilon^2)$ edges can be found in nearly-linear time for every $\epsilon$.
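A hedged sketch of sampling by effective resistance in the spirit of the result above: the dense pseudoinverse computation and the oversampling constant are illustrative choices for a toy complete graph, not the fast algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
edges = [(a, b) for a in range(n) for b in range(a + 1, n)]   # complete graph
w = np.ones(len(edges))

B = np.zeros((len(edges), n))
for i, (a, b) in enumerate(edges):
    B[i, a], B[i, b] = 1.0, -1.0
L = B.T @ (w[:, None] * B)
Lpinv = np.linalg.pinv(L)               # dense pseudoinverse: toy scale only

# Effective resistance of edge (a,b): (chi_a - chi_b)^T L^+ (chi_a - chi_b).
Reff = np.einsum('ij,jk,ik->i', B, Lpinv, B)
p = np.minimum(1.0, 2.0 * np.log(n) * w * Reff)   # ad-hoc oversampling factor

keep = rng.random(len(edges)) < p
w_H = np.where(keep, w / p, 0.0)        # reweight kept edges to stay unbiased
L_H = B.T @ (w_H[:, None] * B)

for _ in range(3):                      # compare quadratic forms
    x = rng.standard_normal(n)
    print(x @ L_H @ x / (x @ L @ x))    # ratios roughly 1
print(int(keep.sum()), 'of', len(edges), 'edges kept')
```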
67
Approximate Gaussian Elimination (Kyng & Sachdeva '16)
Gaussian elimination: compute an upper triangular U so that $L = U^\top U$.
Approximate Gaussian elimination: compute a sparse upper triangular U so that $L \approx U^\top U$.
68
Gaussian Elimination
1. Find the rank-1 matrix that agrees with the matrix on the first row and column.
2. Subtract it.
69
Gaussian Elimination
1. Find the rank-1 matrix that agrees with the matrix on the first row and column.
2. Subtract it.
3. Repeat.
70
Gaussian Elimination
1. Find the rank-1 matrix that agrees with the remaining matrix on the next row and column.
2. Subtract it.
72
Gaussian Elimination
75
Computation time is proportional to the sum of the squares of the numbers of nonzeros in these vectors (the rows of U).
76
Gaussian Elimination of Laplacians
If the matrix is a Laplacian before an elimination step, then the matrix that remains is a Laplacian too.
77
Gaussian Elimination of Laplacians
If this is a Laplacian, then so is this: when we eliminate a node, we add a weighted clique on its neighbors.
[Figure: eliminating a vertex of a weighted graph replaces its edges by a clique on its neighbors.]
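A small numeric check of this fact: eliminate the center of a weighted star and observe that the Schur complement is the Laplacian of a clique on the neighbors. The weights are invented.

```python
import numpy as np

# Star with center 0 joined to 1, 2, 3 by weights 2, 1, 1.
L = np.array([
    [ 4., -2., -1., -1.],
    [-2.,  2.,  0.,  0.],
    [-1.,  0.,  1.,  0.],
    [-1.,  0.,  0.,  1.],
])

# One elimination step: subtract the rank-1 matrix that agrees with L on the
# first row and column.  What remains is the Schur complement on {1, 2, 3}.
S = L[1:, 1:] - np.outer(L[1:, 0], L[0, 1:]) / L[0, 0]
print(S)

# S is again a Laplacian: rows sum to zero, off-diagonals are <= 0.  It is
# the triangle on {1, 2, 3} with weights w_ij = w_i0 * w_j0 / (weight at 0).
assert np.allclose(S.sum(axis=1), 0)
```

The next slides' algorithm avoids ever building this clique: it samples a few of its edges instead, which is what keeps U sparse.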
78
Approximate Gaussian Elimination (Kyng & Sachdeva '16)
1. When we eliminate a node, add a clique on its neighbors.
2. Sparsify that clique, without ever constructing it.
[Figure: the elimination step from the previous slide.]
79
Approximate Gaussian Elimination
1. When we eliminate a node of degree d, add d edges at random between its neighbors, sampled with probability proportional to the weight of the edge to the eliminated node.
80
Approximate Gaussian Elimination (Kyng & Sachdeva '16)
0. Initialize by randomly permuting the vertices and making $O(\log^2 n)$ copies of every edge.
1. When we eliminate a node of degree d, add d edges at random between its neighbors, sampled with probability proportional to the weight of the edge to the eliminated node.
Total time is $O(m \log^3 n)$.
81
Approximate Gaussian Elimination (Kyng & Sachdeva '16)
Analysis by random matrix theory:
Write $U^\top U$ as a sum of random matrices.
The random permutation and the edge copying control the variances of those random matrices.
Apply the matrix Freedman inequality (Tropp '11).
82
Approximate Gaussian Elimination (Kyng & Sachdeva '16)
The same algorithm as above, with total time $O(m \log^3 n)$.
This can be improved by sacrificing some simplicity.
83
Recent Developments (Kyng, Lee, Peng, Sachdeva, S '16)
Other families of linear systems: complex-weighted Laplacians, connection Laplacians.
Other optimization problems.
84
To Learn More
My web page on Laplacian linear equations, sparsification, local graph clustering, low-stretch spanning trees, and so on.
My class notes from "Graphs and Networks" and "Spectral Graph Theory".
Lx = b, by Nisheeth Vishnoi.
Laplacians.jl