Random Walks on Distributed Networks Masafumi Yamashita (Kyushu Univ., Japan)



Table of Contents
- What is a random walk?
- Applications
  - Markov chain Monte Carlo
  - Searching a distributed network for information
  - Self-stabilizing mutual exclusion
- Random walks on distributed networks
  - Random walks using local information
  - Random walks on dynamic graphs
- Open problems

What is a random walk? (1)

Given a graph G = (V, E) and a transition probability matrix P = (P_ij), a random walk starting at u is a sequence x = x_0, x_1, ..., where Pr(x_0 = u) = 1 and Pr(x_k = j | x_{k-1} = i) = P_ij.

[Figure: example transition matrices on a small graph, with edge probabilities such as 1/2, 1/3, 2/3, 1/6. A symmetric matrix S (S^T = S, doubly stochastic) gives the uniform stationary distribution; the simple random walk P_0 is the simplest choice; other matrices are tuned for quick traversal.]
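The definition above translates directly into a simulation; a minimal sketch (the function name and the triangle example are illustrative, not from the talk):

```python
import random

def random_walk(P, u, steps, seed=0):
    """Simulate a random walk: start at u, move from i to j with probability P[i][j]."""
    rng = random.Random(seed)
    x, path = u, [u]
    for _ in range(steps):
        r, acc = rng.random(), 0.0
        for j, p in enumerate(P[x]):
            acc += p
            if r < acc:       # inverse-CDF sampling over row x of P
                x = j
                break
        path.append(x)
    return path

# Simple random walk P0 on a triangle: from each node, each neighbor with prob 1/2.
P0 = [[0, 0.5, 0.5], [0.5, 0, 0.5], [0.5, 0.5, 0]]
path = random_walk(P0, 0, 10)
```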

What is a random walk? (2)

Motivation: random walks with different properties emerge from different transition probability matrices P.
Goal: design a good matrix P meeting the requirements.
- Simplicity --- the simple random walk P_0
- Uniform distribution --- a symmetric matrix S
- Quick traversal --- global info on G is necessary

Goodness measures:
- Hitting time --- ave # of steps to reach a given vertex
- Cover time --- ave # of steps to visit all vertices
- Target stationary distrib --- prob to visit each vertex
- Mixing time --- ave # of steps to converge to the stationary distrib
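The cover time, for instance, is easy to estimate empirically; a rough sketch assuming adjacency-list input (names are illustrative):

```python
import random

def cover_time(adj, start, rng):
    """One trial: steps of a simple random walk until every vertex is visited."""
    seen, x, steps = {start}, start, 0
    while len(seen) < len(adj):
        x = rng.choice(adj[x])   # simple random walk: uniform neighbor
        seen.add(x)
        steps += 1
    return steps

def avg_cover_time(adj, start, trials=1000, seed=0):
    rng = random.Random(seed)
    return sum(cover_time(adj, start, rng) for _ in range(trials)) / trials

# 4-cycle: the known cover time of the cycle C_n is n(n-1)/2, i.e., 6 for n = 4.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
est = avg_cover_time(cycle4, 0)
```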

Known about simple random walks (1)

Hitting time = Θ(n^3):
- For any G, H(G) = O(n^3).
- There exists G such that H(G) = Ω(n^3); in fact H(G) = (1 - o(1))(4/27) n^3 (Brightwell + Winkler 1990).

Cover time = Θ(n^3):
- C(G) = (1 + o(1))(4/27) n^3 (Feige 1995).

[Figure: the lollipop graph L_n --- a clique K_{2n/3} attached to a path X_{n/3} --- which achieves these bounds.]
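Exact hitting times follow from the standard first-step equations H(i, t) = 1 + Σ_j P_ij H(j, t); a small sketch (not from the talk) solving them as a linear system:

```python
import numpy as np

def hitting_times(P, target):
    """Solve H(i) = 1 + sum_j P[i][j] * H(j) for i != target, with H(target) = 0."""
    n = len(P)
    idx = [i for i in range(n) if i != target]
    # (I - P restricted to non-target states) H = 1
    A = np.eye(n - 1) - np.array([[P[i][j] for j in idx] for i in idx])
    h = np.linalg.solve(A, np.ones(n - 1))
    H = np.zeros(n)
    for k, i in enumerate(idx):
        H[i] = h[k]
    return H

# Simple random walk on the path 0-1-2: H(0 -> 2) = (n-1)^2 = 4 is the classic answer.
P = [[0, 1, 0], [0.5, 0, 0.5], [0, 1, 0]]
H = hitting_times(P, 2)
```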

Known about simple random walks (2)

Stationary distribution π = (π_i): the vector satisfying πP = π. The probability that the token stays at i converges to π_i.
- π_i = d(i) / 2m (m = |E|) --- not uniform.

Uniform distribution requested? P is symmetric => π is uniform. (Why?) π_i P_ij = π_j P_ji = π_j P_ij (detailed balance).

Any distribution π requested? Metropolis-Hastings algorithm (MCMC).

[Figure: detailed balance across edge (i, j): π_j P_ji = π_i P_ij.]
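That π_i = d(i)/2m is stationary for the simple random walk is easy to check numerically; a sketch on an illustrative star graph:

```python
import numpy as np

# Simple random walk on a star: center 0, leaves 1, 2, 3.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
n = len(adj)
m = sum(len(v) for v in adj.values()) // 2            # m = |E|
P = np.zeros((n, n))
for i, nbrs in adj.items():
    for j in nbrs:
        P[i][j] = 1 / len(nbrs)                       # P0: uniform over neighbors

pi = np.array([len(adj[i]) / (2 * m) for i in range(n)])  # pi_i = d(i) / 2m
```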

Table of Contents
- What is a random walk?
- Applications
  - Markov chain Monte Carlo
  - Searching a distributed network for information
  - Self-stabilizing mutual exclusion
- Random walks on distributed networks
  - Random walks using local information
  - Random walks on dynamic graphs
- Open problems

Appl 1: Markov Chain Monte Carlo

Given a set S and a probability distribution π, pick elements from S following π:
1. Design G = (S, E) and P such that πP = π.
2. Run a random walk x: x_0, x_1, ..., x_i, ... and pick x_i (i must be larger than the mixing time).

Challenge: design P with a small mixing time.
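A minimal instance of this recipe on a toy state space (the target π, neighbor structure, and names here are illustrative, not from the talk); step 1 uses the Metropolis rule to make πP = π, step 2 discards a burn-in before sampling:

```python
import random

# Target distribution pi on S = {0, 1, 2}; the three states form a triangle.
pi = [0.2, 0.3, 0.5]
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

def metropolis_step(x, rng):
    """Propose a uniform neighbor y; accept with probability min(1, pi[y]/pi[x])."""
    y = rng.choice(nbrs[x])
    return y if rng.random() < min(1.0, pi[y] / pi[x]) else x

def sample(burn_in, n_samples, seed=0):
    rng, x = random.Random(seed), 0
    for _ in range(burn_in):             # discard steps until past the mixing time
        x = metropolis_step(x, rng)
    out = []
    for _ in range(n_samples):
        x = metropolis_step(x, rng)
        out.append(x)
    return out

xs = sample(burn_in=100, n_samples=20000)
```

With enough samples the empirical frequencies approach π, e.g. state 2 appears about half the time.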

Appl 2: Searching a network for a file

Naive algorithm (for node j, search initiated by i):
1. Upon receiving message search(i, f),
2. if j has f then send back message found(j, f) to i;
3. otherwise select a neighbor k and forward search(i, f) to k.

How to determine neighbor k?
- Depth-first search (deterministic selection):
  Fast: H(G) = Θ(n + m).
  May not work correctly on a dynamic network.
- Simple random walk (select uniformly at random):
  Slow: H(G) = Θ(n^3).

Challenge: design P based on local info that is good on dynamic networks.
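The random-walk variant of the search can be sketched with the message forwarding collapsed into a loop (function and field names are hypothetical):

```python
import random

def rw_search(adj, files, start, f, max_steps=10_000, seed=0):
    """Forward search(i, f) along a simple random walk until some node holds f."""
    rng, j, steps = random.Random(seed), start, 0
    while f not in files.get(j, set()):
        if steps >= max_steps:
            return None                  # give up: f may not exist in the network
        j = rng.choice(adj[j])           # forward to a uniformly random neighbor
        steps += 1
    return j                             # the node that would answer found(j, f)

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
files = {3: {"paper.pdf"}}
hit = rw_search(adj, files, start=0, f="paper.pdf")
```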

Appl 3: Self-stabilizing mutual exclusion (1)

Token circulation in a unidirectional ring. Token ring algorithm A:
  Upon receiving message token, (critical section) forward it to the next node.

How robust is A? A cannot recognize the bad situations below.

[Figure: rings in illegal configurations, e.g., with no token (privilege) or with more than one.]

Appl 3: Self-stabilizing mutual exclusion (2)

Self-stabilizing systems (Dijkstra 1974): tolerate a finite number of transient failures; equivalently, work correctly from any initial configuration.

Idea for a self-stabilizing algorithm A (for node u; assume u can read the state s of its predecessor, while u's own state is s'):
- Set of states: Σ = {0, 1, ..., n-2}.
- If s' ≠ (s+1) mod (n-1), then s' := (s+1) mod (n-1).
- u has a token iff s' ≠ (s+1) mod (n-1).

Appl 3: Self-stabilizing mutual exclusion (3)

Correctness of A?
1. Every configuration contains a token.
2. The number of tokens never increases.
3. The number of tokens may not decrease.

Theorem 1. For an anonymous unidirectional ring, there is a deterministic self-stabilizing mutual exclusion algorithm only if n is prime. The problem is thus unsolvable deterministically in general.

Appl 3: Self-stabilizing mutual exclusion (4)

Israeli-Jalfon algorithm B for general graphs (1990) --- idea:
- Each node has a register s.
- u has a token iff s_0 ≥ max { s_i of u's neighbors }.
- If u has a token, select a neighbor u' at random and transfer the token to u'.
- Tokens thus randomly walk in the network.

[Figure: node u with s_0 = 2 and neighbors with s_1 = 1, s_2 = 0 -> 3, s_3 = 1, s_4 = 2, s_5 = 3; the token is transferred to u'.]
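The key dynamics --- tokens performing random walks until they collide and merge --- can be simulated abstractly; a sketch that models only the token movement, not the register mechanism:

```python
import random

def step_tokens(adj, tokens, rng):
    """Move each token to a random neighbor; tokens landing on one node merge."""
    return {rng.choice(adj[t]) for t in tokens}   # set() collapses collisions

# Two spurious tokens on a 5-cycle eventually merge into a single token.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
rng = random.Random(1)
tokens, steps = {0, 2}, 0
while len(tokens) > 1 and steps < 10_000:
    tokens = step_tokens(adj, tokens, rng)
    steps += 1
```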

Appl 3: Self-stabilizing mutual exclusion (5)

Correctness of B:
1. Every configuration contains a token.
2. The number of tokens never increases.
3. Since two simple random walks eventually meet, the number of tokens eventually decreases to 1.
4. Since a simple random walk eventually visits every node, all nodes enjoy the privilege.

Theorem 2. The problem is solvable probabilistically.

Performance (under static networks):
1. Waiting time: hitting or cover time = Θ(n^3).
2. Convergence time: hitting time = Θ(n^3).
3. Fairness: the stationary distribution is not uniform.

Challenge: design P with good hitting/cover times.

Table of Contents
- What is a random walk?
- Applications
  - Markov chain Monte Carlo
  - Searching a distributed network for information
  - Self-stabilizing mutual exclusion
- Random walks on distributed networks
  - Random walks using local information
  - Random walks on dynamic graphs
- Open problems

Random walks on distributed networks

Design issues in sequential and distributed applications (slide symbols, roughly: ◎ = crucial, △ = moderately important, ☓ = unimportant):

  appl.       | dynamic networks | hitting time | cover time | local info | stationary distrib | mixing time
  MCMC        |        ☓         |      △       |     ☓      |     △      |         ◎          |     ◎
  distr comp  |        ◎         |      ◎       |     ◎      |     ◎      |         △          |     △

Random walks using local info on static graphs (1)

Impact of using degree info of neighbors:
- Guarantee the uniform distribution? -> symmetric transition probability matrix S with S_ij = 1 / max{d(i), d(j)}.
- Guarantee any distribution π? -> Metropolis walks (Metropolis et al. 1953, Hastings 1970):
    M_ij = 1/d(i)            if d(i) ≥ d(j)(π_i/π_j),
    M_ij = π_j / (π_i d(j))  otherwise.

Theorem 3. The stationary distribution of M is π. (Why?) If d(i) ≥ d(j)(π_i/π_j), then π_i M_ij = π_i/d(i) = π_j M_ji; the other case is symmetrical. By the detailed balance condition.
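The construction is easy to verify numerically; a sketch with an illustrative graph and target π (the two branches above collapse to a single min, with leftover mass as a self-loop):

```python
import numpy as np

def metropolis_matrix(adj, pi):
    """M_ij = min(1/d(i), pi_j / (pi_i * d(j))) for each neighbor j of i;
    the remaining probability mass stays at i as a self-loop M_ii."""
    n = len(adj)
    M = np.zeros((n, n))
    for i in range(n):
        for j in adj[i]:
            M[i][j] = min(1 / len(adj[i]), pi[j] / (pi[i] * len(adj[j])))
        M[i][i] = 1 - M[i].sum()
    return M

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
pi = np.array([0.1, 0.2, 0.3, 0.4])
M = metropolis_matrix(adj, pi)
```

Detailed balance holds because π_i M_ij = min(π_i/d(i), π_j/d(j)) is symmetric in i and j, so πM = π.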

Random walks using local info (2)

Impact of using global info:
- The cover time of simple random walks on trees is O(n^2) (Aleliunas et al. 1979).
- Given G, extract a spanning tree T and have the token circulate T.

Theorem 4. For any G, H(G, P_T) = C(G, P_T) = O(n^2). (Epple, c.f. Ikeda et al. 2009)
Theorem 5. For any P, H(X_n, P) = C(X_n, P) = Ω(n^2). (Ikeda et al. 2009)
Corollary 1. P_T is best possible.

Recall H(G, P_0) = C(G, P_0) = Θ(n^3) for the simple random walk P_0.

[Figure: a graph G, a spanning tree T, and the circulation probabilities on T.]
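Circulating a spanning tree deterministically visits all n vertices in 2(n-1) steps, traversing each tree edge once in each direction; a sketch of this Euler-tour idea (helper names are illustrative):

```python
def euler_tour(tree_adj, root):
    """Walk the tree depth-first, traversing every edge once in each direction."""
    tour = [root]
    def dfs(u, parent):
        for v in tree_adj[u]:
            if v != parent:
                tour.append(v)     # go down edge (u, v)
                dfs(v, u)
                tour.append(u)     # come back up
    dfs(root, None)
    return tour

# An example spanning tree on 5 vertices.
T = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
tour = euler_tour(T, 0)
```

The tour has 2(n-1) + 1 = 9 entries, covers every vertex, and returns to the root, so repeating it circulates the token forever in O(n) steps per round.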

Random walks using local info (3)

What is the essential local info? Given G, let D_ij = d(j)^{-1/2} / Σ_{k ∈ N(i)} d(k)^{-1/2}. (D is a Gibbs distribution.)

Theorem 6. For any G, H(G, D) = O(n^2) and C(G, D) = O(n^2 log n). (Ikeda et al. 2009)
(Why?) If j ∈ N(i), then H(i, j) ≤ max{d(i), d(j)} n. For all i, j, there is a path X connecting i and j such that the sum of the degrees of the nodes in X is less than 3n; thus H(i, j) ≤ 6n^2. The cover time bound follows by Matthews' theorem.

Corollary 2. With respect to hitting time, D is best possible.
Open: close the gap on cover time.
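The matrix D uses only the degrees of i's neighbors, biasing the walk toward low-degree nodes; a small sketch on an illustrative graph:

```python
import numpy as np

def degree_biased_walk(adj):
    """D_ij = d(j)^(-1/2) / sum over k in N(i) of d(k)^(-1/2)."""
    n = len(adj)
    D = np.zeros((n, n))
    for i in range(n):
        w = {j: len(adj[j]) ** -0.5 for j in adj[i]}   # weight neighbors by d(j)^(-1/2)
        z = sum(w.values())
        for j, wj in w.items():
            D[i][j] = wj / z
    return D

adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
D = degree_biased_walk(adj)
```

From node 1, the lower-degree neighbor 2 gets more probability than the degree-3 hub 0, which is exactly the bias that tames high-degree traps.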

Random walks using local info (4)

Impact of the stationary distribution. Let f = max{π_i} / min{π_i}.

Theorem 7. For any f, there are G (resp. G') and π (resp. π') such that H(G, P) = Ω(f n^2) (resp. C(G', P) = Ω(f n^2 log n)) for any P. (Nonaka et al. 2010)

Performance of Metropolis walks M:
Theorem 8. For any G and π, H(G, M) = O(f n^2) and C(G, M) = O(f n^2 log n). (Nonaka et al. 2010)
Corollary 3. For any G and π, M is best possible.

Random walks using local info (5)

Are simple random walks really bad?
Yes: H(G, P_T) = C(G, P_T) = O(n^2) for all G, while H(L_n, P_0) = C(L_n, P_0) = Θ(n^3).
But they are not so bad as long as we stay on trees:
Theorem 9. For all trees T, H(T, P_0) / H(T, P*) = O(n^{1/2}) and C(T, P_0) / C(T, P*) = O((n log n)^{1/2}), where P* is the best P (with global info available) for T. (Nonaka et al. 2011)

Simple random walks on dynamic graphs (1)

Let G_t = (V_t, E_t) be the graph at time t ≥ 0. In general, both V_t and E_t may change; assume V_t = V for all t ≥ 0. Consider simple random walks on {G_t}:
1. Choosing before checking E_t (CBC): if the chosen node is not adjacent in G_t, the token takes a self-loop.
2. Choosing after checking E_t (CAC).

Let G = (V, E) be connected. At any time t, each edge in E is up (i.e., in E_t) with probability p.

Theorem 10. (CBC) When G is K_n, H({G_t}, P_0) = n/p and C({G_t}, P_0) = n H_n / p, where H_n is the harmonic number.
(Why?) Identify it as a simple random walk on K_n with a self-loop taken with probability 1 - p.
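The CBC process on K_n reduces to a coupon-collector-style computation; a sketch comparing the theorem's asymptotic formula with a direct simulation (illustrative code, not from the talk):

```python
import random

def cbc_cover_time_kn(n, p, rng):
    """CBC walk on dynamic K_n: pick a uniform other node, move only if the edge is up."""
    x, seen, steps = 0, {0}, 0
    while len(seen) < n:
        y = rng.choice([v for v in range(n) if v != x])
        if rng.random() < p:          # edge (x, y) is up in G_t
            x = y
            seen.add(x)
        steps += 1                    # a failed check costs a step (self-loop)
    return steps

n, p = 10, 0.5
H_n = sum(1 / k for k in range(1, n + 1))
predicted = n * H_n / p               # Theorem 10: C({G_t}, P0) = n * H_n / p
rng = random.Random(0)
estimate = sum(cbc_cover_time_kn(n, p, rng) for _ in range(2000)) / 2000
```

For small n the asymptotic formula is only approximate (the exact value is closer to (n-1)H_{n-1}/p), but the simulated average lands in the right range.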

Random walks on dynamic graphs (2)

Theorem 11. (CAC) When G is K_n, H({G_t}, P_0) = n / (1 - q^{n-1}) and C({G_t}, P_0) = n H_n / (1 - q^{n-1}), where q = 1 - p.
(Why?) Roughly the same as Theorem 10, but the success probability per step is 1 - (1-p)^{n-1}.

Theorem 12. (CBC) For general G, H({G_t}, P_0) = H(G, P_0) / p and C({G_t}, P_0) = C(G, P_0) / p. The order of the performance of the simple random walk on {G_t} is exactly the same as on G if p is constant.

Theorem 13. (CAC) For any general G,
  F(G, P_0) / (1 - q^Δ) ≤ F({G_t}, P_0) ≤ F(G, P_0) / (1 - q^δ),
where F ∈ {H, C}, q = 1 - p, Δ is the max degree, and δ is the min degree. (Koba et al. 2010)

Random walks on dynamic graphs (3)

Edge-Markovian graph: {E_t} is a Markov chain following a probability transition matrix Q. A Markovian graph is Bernoulli if all rows of Q are identical. Assume a self-loop at every node.

Theorem 14. (CBC) For any connected Bernoulli graph {G_t}, H({G_t}, P_0) = O(n^3) and C({G_t}, P_0) = O(n^3 log n). (Avin et al. 2008) Simple random walks are still OK.

Theorem 15. (CBC) There is a Markovian (but not Bernoulli) graph {G_t} such that H({G_t}, P_0) = Ω(2^n). (Avin et al. 2008) Simple random walks may not perform well.

Theorem 16. (CBC) For any connected dynamic graph {G_t} of max degree Δ, C({G_t}, P_0) = O(Δ^2 n^3 ln^2 n), where P_0 treats G as K_{Δ+1} (i.e., the max lazy chain). (Avin et al. 2008) Observe why this does not contradict Theorem 15.

Table of Contents
- What is a random walk?
- Applications
  - Markov chain Monte Carlo
  - Searching a distributed network for information
  - Self-stabilizing mutual exclusion
- Random walks on distributed networks
  - Random walks using local information
  - Random walks on dynamic graphs
- Open problems

We are interested in and working on:
1. Understanding the impact of local info --- closing the gap in the cover time bounds between Ω(n^2) and O(n^2 log n).
2. Analyzing the performance of random walks using local info on dynamic graphs.
3. Analyzing the cover ratio of random walks on dynamic graphs with a variable vertex set.
4. Analyzing the performance of multiple random walks on dynamic graphs.