Expanders Eliyahu Kiperwasser

What is it?
Expanders are graphs with no small cuts. This gives such graphs several useful properties, such as:
– High connectivity.
– No “bottlenecks”.

What is it? (First definition)
A graph G=(V,E) is an expander if and only if, for every subset S containing at most half of the vertices, the ratio |E(S, V∖S)| / |S| is at least a constant larger than 1. In other words, G is an expander if the number of edges leaving every such subset of vertices exceeds the number of vertices in the subset by at least a constant factor.

That’s Easy…
We are all familiar with graphs which are in fact expanders with much more than a constant factor, e.g. cliques. The challenge is to find sparse graphs which are still expanders.

Construction of Explicit Expanders
In this section we describe two ways to build such marvelous objects. The following two constructions show that constant-degree regular graphs with good expansion exist. We will show:
– The Margulis/Gabber-Galil expander.
– The Lubotzky-Phillips-Sarnak expander.

Lubotzky-Phillips-Sarnak Expander
A graph on p+1 nodes, where p is a prime. Let the vertex set be V = Z_p ∪ {∞}; in addition to the field Z_p, V also contains 0⁻¹, meaning ∞. Given a vertex x, connect it to:
– x+1
– x−1
– 1/x
The proof of expansion is beyond the scope of this lecture.

Lubotzky-Phillips-Sarnak Example
Given a vertex x, connect it to:
– x+1
– x−1
– 1/x
(Figure: an example of the resulting graph, including the vertex ∞.)
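As an illustration (not from the slides), here is a minimal Python sketch of this neighbour rule for a concrete prime; the choice p = 17 and the helper names are ours, and the conventions 0⁻¹ = ∞, ∞⁻¹ = 0 and ∞ ± 1 = ∞ are our assumptions for handling the extra vertex:

# Minimal sketch: the x -> {x+1, x-1, 1/x} graph on Z_p U {inf}.
# p = 17 and the name INF are illustrative choices, not from the slides.
p = 17
INF = "inf"  # the extra vertex "infinity"

def neighbors(x):
    if x == INF:
        # assumed conventions: inf + 1 = inf - 1 = inf, and 1/inf = 0
        return [INF, INF, 0]
    inverse = INF if x == 0 else pow(x, p - 2, p)  # modular inverse (Fermat), with 1/0 = inf
    return [(x + 1) % p, (x - 1) % p, inverse]

graph = {v: neighbors(v) for v in list(range(p)) + [INF]}
print(graph[0])    # [1, 16, 'inf']
print(graph[INF])  # ['inf', 'inf', 0]

Each vertex has degree 3, so the graph is sparse; the (omitted) proof shows it nevertheless has good expansion.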

Margulis/Gabber-Galil Expander
A graph on m² nodes. Every node is a pair (x,y) where x,y ∈ Z_m. (x,y) is connected to:
– (x+y, y), (x−y, y)
– (x+y+1, y), (x−y−1, y)
– (x, y+x), (x, y−x)
– (x, y+x+1), (x, y−x−1)
(all operations are modulo m)
The proof of expansion is beyond the scope of this lecture.
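A similarly minimal sketch of this construction (the value m = 10 and the function name are our own choices):

def gg_neighbors(x, y, m):
    # The eight neighbours of (x, y) in the Margulis/Gabber-Galil graph, all mod m.
    return [((x + y) % m, y), ((x - y) % m, y),
            ((x + y + 1) % m, y), ((x - y - 1) % m, y),
            (x, (y + x) % m), (x, (y - x) % m),
            (x, (y + x + 1) % m), (x, (y - x - 1) % m)]

m = 10
gg_graph = {(x, y): gg_neighbors(x, y, m) for x in range(m) for y in range(m)}
print(len(gg_graph), len(gg_graph[(0, 0)]))  # 100 vertices, 8 neighbours each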

Spectral gap
From now on we discuss only d-regular undirected graphs, where A(G) is the adjacency matrix. Since A is a real symmetric matrix, it has n real eigenvalues λ_1 ≥ λ_2 ≥ … ≥ λ_n. Let λ = max{ |λ_i(G)| : i > 1 }. We define the spectral gap as d − λ.

A second definition of expanders
We can define an expander graph by looking at the adjacency matrix A: a graph is an expander if and only if A's spectral gap d − λ is larger than zero.
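To make this concrete, here is a small numpy sketch (our own illustration) that computes the spectral gap d − λ of a d-regular graph from its adjacency lists; the 4-regular example K_5 is an arbitrary choice:

import numpy as np

def spectral_gap(adj_lists, d):
    # adj_lists maps each vertex to its list of d neighbours (parallel edges and loops allowed).
    verts = list(adj_lists)
    index = {v: i for i, v in enumerate(verts)}
    A = np.zeros((len(verts), len(verts)))
    for v, nbrs in adj_lists.items():
        for u in nbrs:
            A[index[v], index[u]] += 1
    eigs = sorted(np.linalg.eigvalsh(A), reverse=True)  # real eigenvalues, descending
    lam = max(abs(e) for e in eigs[1:])                  # lambda = max |lambda_i| over i > 1
    return d - lam

k5 = {v: [u for u in range(5) if u != v] for v in range(5)}  # the clique K_5 (4-regular)
print(round(spectral_gap(k5, 4), 6))  # 3.0, since the eigenvalues of K_5 are 4, -1, -1, -1, -1

A positive value certifies expansion; the larger the gap, the better the expander.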

Expansion vs. Spectral gap
Theorem:
– If G is an (n, d, λ)-expander, then its edge expansion is bounded in terms of the spectral gap; a standard form of the inequality is given below.
– We prove only one direction.
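A standard form of the theorem's inequality, relating the spectral gap to the edge expansion h(G) = min over |S| ≤ n/2 of |E(S, V∖S)| / |S| of a d-regular graph, is (this is the usual textbook statement and may differ in constants from the slide's exact formula):

$$\frac{d-\lambda_2}{2} \;\le\; h(G) \;\le\; \sqrt{2d(d-\lambda_2)},$$

where λ_2 is the second-largest eigenvalue of A(G).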

Expanders vs. Spectral gap
It is easily seen from the previous theorem that if the spectral gap (d − λ) is zero, the graph we are looking at is not an expander. The larger the spectral gap, the better the “qualities” of the expander (i.e. higher connectivity, larger cuts, etc.).

Rayleigh Quotient
The following is a lemma which will prove useful in the future.
Lemma: (a standard form is stated below)
Proof:
– For A(G), λ_1 = d is easily seen to be the largest eigenvalue, with the all-ones vector as an eigenvector.
– There exists an orthonormal basis v_1, v_2, …, v_n where each v_i is an eigenvector of A(G).
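A standard way to state this lemma (our reconstruction, in the notation above) is: for every vector x orthogonal to the all-ones vector,

$$|\langle A(G)\,x,\; x\rangle| \;\le\; \lambda\,\|x\|^{2}, \qquad\text{equivalently}\qquad \lambda \;=\; \max_{x \perp \mathbf{1},\, x \neq 0} \frac{|\langle A(G)\,x,\; x\rangle|}{\langle x, x\rangle}.$$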

Rayleigh Quotient Proof cont.

Lower Bound Theorem
In this section we prove the correctness of the lower bound suggested by the previous theorem, i.e. that the expansion is at least on the order of the spectral gap.
Proof:
– Let

Lower Bound Theorem
Proof cont.
– By combining the Rayleigh quotient with the Cauchy-Schwarz fact that ⟨Ax, x⟩ ≤ ‖Ax‖·‖x‖, we get:
– We develop this inner product further:

Lower Bound Theorem
Proof cont.
– After the previous calculations we now have all that is needed to apply the Rayleigh lemma:
– Since S contains at most half of the graph’s vertices, we conclude:

Markov Chains
Definition: a finite state machine with probabilities for each transition, that is, a probability that the next state is j given that the current state is i.
Named after Andrei Andreyevich Markov, who studied poetry and other texts as stochastic sequences of characters.
We will use Markov chains in the proof of our final lemma, in order to analyze a random walk on an expander graph.

In directed graphs…
There can be an exponentially small probability of reaching a distant vertex.

In undirected graphs
In an undirected graph this probability decreases at most polynomially. Expanders guarantee an almost uniform chance to “hit” each vertex. For example, a clique gives a perfectly uniform distribution.

Random walks
(Figure: on the left, the adjacency matrix associated with the graph; on the right, the probability of transition from vertex i to vertex j, viewed as a Markov chain.)

Random walks - Explanation
Hence, the probability of hitting an arbitrary vertex v on the i-th step equals the sum, over all neighbors of v, of the probability of hitting that neighbor on the previous step multiplied by the probability of the transition to v.

Random walks – Algebraic notation
We can re-write the expression in a compact manner: the distribution after i steps is (A/d)^i x, where x is the initial distribution.

Random walks - Example
Suppose x is the uniform distribution on the vertices; then after one step on the graph we obtain the following distribution:
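A short numpy sketch (our own illustration, not from the slides) of this evolution: repeatedly applying A/d to a starting distribution on a connected, non-bipartite d-regular graph drives it toward the uniform distribution; the 4-regular circulant graph below is an arbitrary example:

import numpy as np

n, d = 12, 4
# 4-regular circulant graph on 12 vertices: i is joined to i+-1 and i+-2 (mod n).
A = np.zeros((n, n))
for i in range(n):
    for step in (1, -1, 2, -2):
        A[i, (i + step) % n] += 1

x = np.zeros(n)
x[0] = 1.0                      # start concentrated on vertex 0
for _ in range(30):
    x = (A / d) @ x             # one step of the walk: x_{t+1} = (A/d) x_t
print(np.round(x, 3))           # every entry close to 1/12 ~ 0.083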

An Expander Lemma
Let G be an (n, d, λ)-expander and F a subset of E. Then the probability that a random walk, starting in the zero-th step from a random edge in F, passes through F on its t-th step is bounded by the expression given below.
This lemma is later used to prove the PCP theorem.
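A standard form of this bound (our reconstruction; the slide's exact expression may differ) is

$$\frac{|F|}{|E|} \;+\; \left(\frac{\lambda}{d}\right)^{t-1}.$$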

A random walk as a Markov chain
Proof:
– Let x be a vector containing the current distribution on the vertices.
– Let x′ be the next distribution vector, meaning the probability of visiting vertex u is ((A/d)x)_u.
– In algebraic notation,

Expressing the success probability
Proof cont.
– Let x be the initial distribution.
– Observation: the distribution we reach after the i-th step is (A/d)^i x.
– Let P be the probability we are interested in, namely that of traversing an edge of F on the t-th step.
– For each vertex w, consider the number of edges of F incident on w, divided by d.
– Then:

Plugging in the initial x
Proof cont.
– To calculate x, we pick a random edge in F, then pick one of the endpoints of that edge to start on. This gives:
– Using the previous results we get:

Decomposing x
Proof cont.
– Observation: the sum of each row of A/d equals one.
– Hence, if u is the uniform distribution on the vertices of the graph, then (A/d)u = u.
– We decompose any vector x into the uniform distribution plus the remaining orthogonal components.
– More intuitively, we separate x into its v_1 component and the rest of the orthogonal basis.

Final Expander Lemma Proof cont. – By linearity,

Final Expander Lemma Proof cont. – Hence,

Final Expander Lemma
Proof cont.
– Since the entries of x are positive,
– The maximum is achieved when all edges incident to v are in F, therefore:

Final Expander Lemma
Proof cont.
– We continue the previous calculation:

Final Expander Lemma
Proof cont.
– Let’s see what we have so far:
– By combining these results, and using the fact that the graph is d-regular, we finish the proof.

Additional Lemma
The following lemma will be useful to us when proving the PCP theorem.
If G is a d-regular graph on the vertex set V and H is a d′-regular graph on V, then G′ = G ∪ H = (V, E(G) ∪ E(H)) is a (d+d′)-regular graph such that λ(G′) ≤ λ(G) + λ(H).

Lemma Proof
Choose x which satisfies the following:
– In words, x is the second eigenvector, normalized.
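A sketch of the calculation (our reconstruction, using the Rayleigh-quotient lemma from earlier): let x be a unit vector orthogonal to the all-ones vector with A(G′)x = μx and |μ| = λ(G′). Since A(G′) = A(G) + A(H),

$$\lambda(G') \;=\; |\langle A(G')\,x,\; x\rangle| \;\le\; |\langle A(G)\,x,\; x\rangle| + |\langle A(H)\,x,\; x\rangle| \;\le\; \lambda(G) + \lambda(H),$$

where the last step applies the Rayleigh-quotient lemma to G and to H separately (the all-ones vector is the top eigenvector of both).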

The End Questions?