1
Structural Properties of Low Threshold Rank Graphs
Shayan Oveis Gharan, University of Washington. I am going to talk about new analyses of spectral graph algorithms using higher eigenvalues.
2
Spectral Graph Theory
Combinatorial properties of a graph ⟷ spectrum of (normalized) adjacency/Laplacian matrices. Spectral graph algorithms are like a hammer: you can apply them everywhere, and here is just a tiny list of their applications. There is a huge literature on spectral graph algorithms, but we still don't really know why they work. As I said, these algorithms are very simple to implement, yet they do something very complicated. Let me give you one example so you can see for yourself.
3
Setting: Let $G$ be an unweighted $d$-regular graph with $n = |V|$ vertices. $G$ is an $\epsilon$-expander if $\lambda_2 \le 1-\epsilon$.
Normalized adjacency $A/d$: eigenvalues $-1 \le \lambda_n \le \dots \le \lambda_1 = 1$.
Normalized Laplacian $I - A/d$: eigenvalues $0 = 1-\lambda_1 \le \dots \le 1-\lambda_n \le 2$.
Example: for the 4-cycle, $\lambda_1 = 1$, $\lambda_2 = \lambda_3 = 0$, $\lambda_4 = -1$.
So let me start by defining some notation. Let $L$ be a normalization of the adjacency matrix, known as the normalized Laplacian matrix. Let me not define it here; it is just the Laplacian matrix with each entry normalized by the degrees of its endpoints. We choose $L$ so that the eigenvalues sit between 0 and 2. There is a basic fact in algebraic graph theory which says the second Laplacian eigenvalue is 0 iff $G$ is disconnected. Cheeger's inequality provides a robust version of this fact: as you can imagine, it says the second eigenvalue is very close to 0 iff $G$ is barely connected.
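As a quick numerical check of the 4-cycle example (a minimal numpy sketch, not part of the original slides):

```python
import numpy as np

# Adjacency matrix of the 4-cycle; every vertex has degree d = 2.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
d = 2

# Eigenvalues of the normalized adjacency matrix A/d, in decreasing order.
print(np.sort(np.linalg.eigvalsh(A / d))[::-1])
# ~ [1, 0, 0, -1], i.e. lambda_1 = 1, lambda_2 = lambda_3 = 0, lambda_4 = -1
```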
4
Spectral Characterizations of Graphs
- $\lambda_2 \approx 1$: $G$ is almost disconnected (Cheeger's inequality [Alon-Milman'85, Alon'86])
- $\lambda_2 \ll 1$: $G$ is an expander (Cheeger's inequality)
- $\lambda_2, \lambda_n \approx 0$: $G \approx$ random (Expander Mixing Lemma)
- $\lambda_n \approx -1$: $G$ is almost bipartite [Trevisan'08]
Now let me tell you what we know about these algorithms in theory. Classical analyses of spectral graph algorithms exploit only the first or last two eigenvalues. Here are some examples: bounds on the chromatic number of a graph, Cheeger's inequality, finding an edge separator of a graph, or approximating the maximum cut problem. I should mention that for some random and semi-random instances there are results that use matrix perturbation theory and multiple eigenvectors, but here I make no assumption on the structure of the graph. I can tell you more details offline.
5
Generalizations to Higher Eigenvalues
6
Low Threshold Rank Graphs
The $\epsilon$-threshold rank of a $d$-regular graph $G$ is $\mathrm{rank}_G(\epsilon) = |\{\lambda_i : |\lambda_i| > \epsilon\}|$. Think $\mathrm{rank}(\epsilon) \le O(1)$.
Examples:
- Ramanujan expanders: $\mathrm{rank}(2/\sqrt{d}) = 1$.
- Dense graphs: $\epsilon^2 \cdot \mathrm{rank}(\epsilon) \le \sum_i \lambda_i^2 = \left\|\frac{A}{d}\right\|_F^2 = \frac{1}{d^2}\|A\|_F^2 = \frac{n}{d}$, so $\mathrm{rank}(\epsilon) \le \frac{n}{d\epsilon^2}$.
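A direct translation of the definition into numpy (a minimal sketch, not from the original slides):

```python
import numpy as np

def threshold_rank(A: np.ndarray, d: int, eps: float) -> int:
    """eps-threshold rank: the number of eigenvalues of the normalized
    adjacency matrix A/d with absolute value greater than eps."""
    eigvals = np.linalg.eigvalsh(A / d)
    return int(np.sum(np.abs(eigvals) > eps))
```

For the 4-cycle from the earlier example, `threshold_rank(A, 2, 0.5)` returns 2, since only the eigenvalues 1 and -1 exceed 1/2 in absolute value.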
7
Structure of Low Threshold Rank Graphs
Thm [Tanaka'11, O-Trevisan'14]: For any graph $G$ and any $k$, there is a partitioning of $G$ into at most $k$ sets, each inducing an $\Omega\!\left(\frac{(1-\lambda_{k+1})^2}{k^4}\right)$-expander.
So low threshold rank graphs are unions of at most $\mathrm{rank}(1/2)$ expander graphs.
8
Diameter of Low Threshold Rank Graphs
Thm [O-Trevisan'13]: For any graph $G$ and any $k$, the diameter of $G$ is at most $\lesssim \frac{k \cdot \log n}{1-\lambda_k}$.
So the diameter of a low threshold rank graph is at most $\lesssim \mathrm{rank}(1/2) \cdot \log n$.
9
Eigenvectors of Low Threshold Rank Graphs
Thm [Kwok-Lau-Lee-O-Trevisan'13]: For any graph $G$ and any $1 \le i < k$, the $i$-th eigenvector $v_i$ can be approximated by a $2k$-step function $g$ s.t.
$\|v_i - g\|^2 \le \frac{4(1-\lambda_i)}{1-\lambda_k}$.
If $G$ is low threshold rank, each of the first few eigenvectors is approximately constant on each expander.
[Figure: the eigenvector $v_i$ takes roughly constant values $v_i(0), \dots, v_i(9)$ across ten expander parts.]
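For intuition about step functions (a minimal sketch of mine, not from the talk): given a candidate partition of the vertices, the best $\ell_2$-approximation of $v_i$ that is constant on each part simply averages $v_i$ over each part:

```python
import numpy as np

def step_approximation(v: np.ndarray, parts) -> np.ndarray:
    """Best l2-approximation of v that is constant on each part of a vertex
    partition: the orthogonal projection onto span of the part indicators,
    i.e., the mean of v on each part."""
    g = np.zeros_like(v)
    for P in parts:          # P is a list/array of vertex indices
        g[P] = v[P].mean()
    return g
```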
10
Cheeger’s Inequality for Low Threshold Rank
Thm [Kwok-Lau-Lee-O-Trevisan'13]: For any graph $G$,
$\frac{1-\lambda_2}{2} \le \phi(G) \le O(k)\,\frac{1-\lambda_2}{\sqrt{1-\lambda_k}}$,
where $\phi(G) = \min_{|S| \le n/2} \phi(S) = \min_{|S| \le n/2} \frac{|E(S, \bar{S})|}{d \cdot |S|}$.
If $\lambda_k \ll 1$, then $v_2$ encodes expanders; also, the cut achieving $\phi(G)$ does not cut through an expander.
11
Low Threshold Rank Graphs in Optimization
[Kolla-Tulsiani'10, Arora-Barak-Steurer'10] (subspace enumeration): Unique Games, Sparsest Cut, and Small-Set Expansion (SSE) admit constant-factor approximations on low threshold rank graphs.
[Barak-Raghavendra-Steurer'11, Guruswami-Sinop'11,'12] (SDP rounding): 2-CSP problems, Quadratic Programming, … are "easy" on low threshold rank graphs.
12
Third Approach: Weak Regularity Lemma for Low Threshold Rank Graphs
13
Weak Regularity Lemma [Frieze-Kannan’98]
For any graph $G=(V,E)$ and $\epsilon > 0$, there are $s \lesssim 1/\epsilon^2$ cut matrices $C_1, C_2, \dots, C_s$ s.t.
$A \approx \alpha_1 C_1 + \dots + \alpha_s C_s$,
where $C_i = CUT(S_i, T_i)$ is the 0/1 matrix whose $(u,v)$ entry is 1 iff $u \in S_i$ and $v \in T_i$.
14
Weak Regularity Lemma [Frieze-Kannan’98]
For any graph $G=(V,E)$ and $\epsilon > 0$, there are $s \lesssim 1/\epsilon^2$ cut matrices $C_1, \dots, C_s$ s.t. for $C = \alpha_1 C_1 + \dots + \alpha_s C_s$,
$\|A - C\|_{CUT} \le \epsilon n^2$, i.e., $\max_{S,T \subseteq V} |A(S,T) - C(S,T)| = \max_{S,T \subseteq V} |\langle \mathbf{1}_S, (A - C)\mathbf{1}_T \rangle| \le \epsilon n^2$.
This gives a PTAS for max-cut in dense graphs: it is enough to guess the intersections $|S_i \cap OPT|$ and $|T_i \cap OPT|$ for all $i$.
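To make the objects concrete, here is a minimal numpy sketch (my illustration, not from the talk) that builds a cut matrix and estimates the cut-norm discrepancy by sampling random vertex subsets. Computing the cut norm exactly is NP-hard, so the sampled value is only a lower bound; [Alon-Naor'05], used later in the talk, gives a constant-factor approximation.

```python
import numpy as np

def cut_matrix(S, T, n):
    """CUT(S, T): the 0/1 matrix whose (u, v) entry is 1 iff u in S, v in T."""
    C = np.zeros((n, n))
    C[np.ix_(S, T)] = 1.0
    return C

def sampled_cut_norm(M, trials=2000, seed=0):
    """Lower bound on ||M||_CUT = max_{S,T} |<1_S, M 1_T>| via random subsets."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    best = 0.0
    for _ in range(trials):
        s = (rng.random(n) < 0.5).astype(float)  # indicator of a random S
        t = (rng.random(n) < 0.5).astype(float)  # indicator of a random T
        best = max(best, abs(s @ M @ t))
    return best
```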
15
Our Result. Theorem (O-Trevisan'13): For any graph $G=(V,E)$ and $\epsilon > 0$, there are $s \lesssim \mathrm{rank}(\epsilon)/\epsilon^2$ cut matrices $C_1, \dots, C_s$ s.t.
$\|A - (\alpha_1 C_1 + \dots + \alpha_s C_s)\|_{CUT} \lesssim \epsilon \cdot |E|$.
Furthermore, the decomposition can be computed in polynomial time. This gives a PTAS for max-cut and max-bisection in low threshold rank graphs.
16
Proof Ideas. In the next 5-10 minutes I will give you an overview of the proof ideas. If you don't want to get into the details of the proof, you may rest here and come back afterwards.
17
Low Rank Approximation of A
Let $A = d \sum_i \lambda_i v_i v_i^T$. Define $B := d \sum_{i : |\lambda_i| > \epsilon} \lambda_i v_i v_i^T$. Then, for any $S, T \subseteq V$,
$\langle \mathbf{1}_S, (A - B)\mathbf{1}_T \rangle \le \|\mathbf{1}_S\| \cdot \|(A - B)\mathbf{1}_T\| \le \|\mathbf{1}_S\| \cdot \|A - B\|_2 \cdot \|\mathbf{1}_T\| \le \sqrt{n} \cdot d\epsilon \cdot \sqrt{n} = 2\epsilon|E|$,
using $dn = 2|E|$. So $\|A - B\|_{CUT} \le 2\epsilon|E|$.
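This truncation is a one-liner given the eigendecomposition; a minimal sketch (mine, not from the talk):

```python
import numpy as np

def low_rank_part(A: np.ndarray, d: int, eps: float) -> np.ndarray:
    """B = d * sum over {i : |lambda_i| > eps} of lambda_i v_i v_i^T,
    where (lambda_i, v_i) are eigenpairs of the normalized adjacency A/d."""
    lam, V = np.linalg.eigh(A / d)   # columns of V are orthonormal eigenvectors
    keep = np.abs(lam) > eps         # keep only the high part of the spectrum
    return d * (V[:, keep] * lam[keep]) @ V[:, keep].T
```

By construction, every eigenvalue of $(A-B)/d$ has absolute value at most $\epsilon$, which is exactly the spectral-norm bound $\|A - B\|_2 \le d\epsilon$ used in the chain above.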
18
Main Lemma: There are $s \lesssim \frac{1}{\epsilon^2}\left\|\frac{B}{d}\right\|_F^2$ cut matrices $C_1, \dots, C_s$ s.t.
$\|B - (\alpha_1 C_1 + \dots + \alpha_s C_s)\|_{CUT} \le \epsilon|E|$.
This completes the proof of the weak regularity lemma, because
$\left\|\frac{B}{d}\right\|_F^2 = \sum_{i : |\lambda_i| > \epsilon} \lambda_i^2 \le \mathrm{rank}(\epsilon)$.
19
Pf by Induction. Suppose $C = \alpha_1 C_1 + \dots + \alpha_i C_i$, and suppose there is a bad cut: $\exists S, T$ with $|B(S,T) - C(S,T)| > \epsilon \cdot |E|$.
Define $C_{i+1} = CUT(S,T)$ and $\alpha_{i+1} := \frac{B(S,T) - C(S,T)}{|S| \cdot |T|}$.
We show convergence using the potential function $h(M) = \left\|\frac{M}{d}\right\|_F^2$: each update decreases it,
$h(B - C - \alpha_{i+1} C_{i+1}) - h(B - C) \lesssim -\epsilon^2$,
and since $h \ge 0$ always and $h(B) = \left\|\frac{B}{d}\right\|_F^2$ initially, the process stops after $s \lesssim \frac{1}{\epsilon^2}\left\|\frac{B}{d}\right\|_F^2$ steps.
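The induction is effectively a greedy algorithm. Below is a minimal Python sketch (mine, not from the talk); `find_bad_cut` is a hypothetical oracle returning a violating pair $(S,T)$, since finding one exactly is hard in general and the algorithmic version in the talk relies on [Alon-Naor'05] instead:

```python
import numpy as np

def greedy_cut_decomposition(B, find_bad_cut):
    """Peel off cut matrices until no pair (S, T) has discrepancy
    |B(S,T) - C(S,T)| > eps * |E|. find_bad_cut(D) returns index lists
    (S, T) witnessing such a discrepancy for the residual D, or None."""
    cuts, coeffs = [], []
    D = B.astype(float).copy()          # residual B - C
    while (bad := find_bad_cut(D)) is not None:
        S, T = bad
        alpha = D[np.ix_(S, T)].sum() / (len(S) * len(T))
        D[np.ix_(S, T)] -= alpha        # subtract alpha * CUT(S, T)
        cuts.append((S, T))
        coeffs.append(alpha)
        # each step drops h(D) = ||D/d||_F^2 by Omega(eps^2), so at most
        # O(||B/d||_F^2 / eps^2) iterations occur
    return coeffs, cuts
```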
20
Few Remarks:
- The proof naturally generalizes to weighted, non-regular graphs.
- To make it algorithmic, we use [Alon-Naor'05], which for any matrix $A \in \mathbb{R}^{n \times n}$ finds $S, T$ s.t. $|\langle \mathbf{1}_S, A \mathbf{1}_T \rangle| \gtrsim \|A\|_{CUT}$.
- We use an LP to estimate $|S_i \cap OPT|$ and $|T_i \cap OPT|$ for all $i$. In time $2^{O(\mathrm{rank}(\epsilon)^{1.5}/\epsilon^3)} + \mathrm{poly}(n, \mathrm{rank}(\epsilon), 1/\epsilon)$ this finds an $8\epsilon|E|$ additive approximation to max-cut and max-bisection (see the sketch below).
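To see why guessing intersections suffices: up to the cut-norm error, the value of any cut $(X, \bar{X})$ depends on $X$ only through the numbers $|S_i \cap X|$ and $|T_i \cap X|$, so enumerating their discretized values costs time exponential in the number of cuts $s$, not in $n$. A minimal sketch (mine, not from the talk):

```python
def approx_cut_value(coeffs, cuts, X):
    """Approximate <1_X, A 1_{V minus X}> using A ~ sum_i alpha_i CUT(S_i, T_i):
    each term contributes alpha_i * |S_i intersect X| * |T_i minus X|."""
    Xset = set(X)
    return sum(alpha * len(Xset & set(S)) * len(set(T) - Xset)
               for alpha, (S, T) in zip(coeffs, cuts))
```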
21
Spectral Characterizations: Higher Eigenvalues
- $\lambda_k \ll 1$: eigenvectors of $G$ are $2k$-step functions [Kwok-Lau-Lee-O-Trevisan'13]
- $\lambda_k \ll 1$: $G$ is a union of expanders [O-Trevisan'14]
- $\mathrm{rank}(\epsilon) \le k$: $G \approx O(k/\epsilon^2)$ cuts [O-Trevisan'13]
- $\lambda_k \approx 1$: $G$ has a natural $k$-partitioning [Lee-O-Trevisan'12, Louis-Raghavendra-Tetali-Vempala'12]
22
Conclusion and Future Directions
- Low threshold rank graphs are a natural generalization of expanders
- A generalization of the weak regularity lemma to low threshold rank graphs
Future Directions:
- Generalizations to hypergraphs and r-CSPs
- Stronger forms of the regularity lemma for low threshold rank graphs
Let me conclude this part of the talk. Cheeger's inequality is one of the fundamental results in spectral graph theory; it is a 30-year-old inequality with many applications in various fields of computer science. We generalize this inequality to k-way partitionings. Toward this, we employ recent tools from high-dimensional geometry and develop new techniques. Our proof provides a rigorous justification for spectral clustering algorithms without any assumption on the graph. Furthermore, we introduce new components that can be used to improve spectral clustering algorithms. Here is an example: for a given set of data points, applying the classical spectral clustering algorithm gives one clustering, but using the dimension-reduction technique significantly improves its quality. Note that I do not claim dimension reduction always improves the quality; I am just saying it might help in getting better answers.