
Markov Chains and Random Walks

Def: A stochastic process X = {X(t), t ∈ T} is a collection of random variables. If T is a countable set, say T = {0, 1, 2, …}, we say that X is a discrete-time stochastic process; otherwise it is called a continuous-time stochastic process. Here we consider a discrete-time stochastic process X_n, n = 0, 1, 2, …

If X_n = i, then the process is said to be in state i at time n. Pr[X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, …, X_0 = i_0] = P_{i,j} for all states i_0, i_1, …, i_{n-1}, i, j and all n ≥ 0. That is, X_{n+1} depends only on X_n. Such a stochastic process is known as a Markov chain. The transition probabilities satisfy P_{i,j} ≥ 0 and Σ_j P_{i,j} = 1.
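The definition can be made concrete by simulation: since the next state depends only on the current one, a step just samples from row X_n of P. A minimal sketch (the two-state matrix and function name are illustrative, not from the slides):

```python
import random

def simulate_chain(P, start, steps, seed=0):
    """Simulate `steps` transitions of a Markov chain with transition
    matrix P (a list of rows, each row summing to 1), from `start`.

    By the Markov property, the next state is drawn using only the
    current state's row of P.
    """
    rng = random.Random(seed)
    state = start
    path = [state]
    for _ in range(steps):
        # Choose next state j with probability P[state][j].
        state = rng.choices(range(len(P)), weights=P[state])[0]
        path.append(state)
    return path

# A hypothetical two-state chain.
P = [[0.9, 0.1],
     [0.5, 0.5]]
path = simulate_chain(P, start=0, steps=10)
```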

The n-step transition probability of the Markov chain is defined as the conditional probability, given that the chain is currently in state i, that it will be in state j after n additional transitions. That is, P^(n)_{i,j} = Pr[X_{n+m} = j | X_m = i], n ≥ 0. Chapman-Kolmogorov equation: P^(m+n)_{i,j} = Σ_k P^(m)_{i,k} P^(n)_{k,j}.

Proof: Let P^(n) denote the matrix of n-step transition probabilities. The Chapman-Kolmogorov equations say P^(m+n) = P^(m) P^(n), so by induction P^(n) = P^n.
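A quick numerical check of P^(n) = P^n (the two-state matrix here is arbitrary, chosen for illustration):

```python
import numpy as np

# An arbitrary two-state transition matrix (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Chapman-Kolmogorov: P^(m+n) = P^(m) P^(n), hence P^(n) is the
# ordinary n-th matrix power of P.
P5 = np.linalg.matrix_power(P, 5)
assert np.allclose(P5, np.linalg.matrix_power(P, 2) @ np.linalg.matrix_power(P, 3))
assert np.allclose(P5.sum(axis=1), 1.0)  # each row is still a distribution
```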

Example.

Classification of states. Def: State j is said to be accessible from state i if P^(n)_{i,j} > 0 for some n ≥ 0. We say states i and j communicate (i ↔ j) if each is accessible from the other. The Markov chain is said to be "irreducible" if there is only one communicating class, i.e., if all states communicate with each other. For any state i, let f_i denote the probability that, starting in state i, the process will ever reenter that state. State i is said to be "recurrent" if f_i = 1, and "transient" if f_i < 1. Starting in state i, the probability that the process is in state i for exactly n time periods equals f_i^(n-1) (1 - f_i), n ≥ 1.
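Accessibility and irreducibility can be checked by graph search, since j is accessible from i exactly when there is a directed path from i to j in the graph with an edge wherever P[i][j] > 0. A sketch (function names and example matrices are my own):

```python
from collections import deque

def accessible_from(P, i):
    """Set of states j with P^(n)_{i,j} > 0 for some n >= 0:
    a BFS over the directed edges u -> v with P[u][v] > 0."""
    seen, queue = {i}, deque([i])
    while queue:
        u = queue.popleft()
        for v, p in enumerate(P[u]):
            if p > 0 and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def is_irreducible(P):
    """All states communicate iff every state can reach every state."""
    n = len(P)
    return all(accessible_from(P, i) == set(range(n)) for i in range(n))

# A chain with an absorbing state 2 has more than one class.
P_abs = [[0.5, 0.5, 0.0],
         [0.2, 0.3, 0.5],
         [0.0, 0.0, 1.0]]
P_irr = [[0.0, 1.0],
         [0.5, 0.5]]
```

Here `is_irreducible(P_irr)` holds, while `is_irreducible(P_abs)` fails because the absorbing state 2 cannot reach states 0 or 1.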

More definitions

Let N_i = Σ_{n≥1} 1[X_n = i]; N_i represents the number of periods that the process is in state i. Then E[N_i | X_0 = i] = Σ_{n≥1} P^(n)_{i,i}.

Prop: State i is recurrent if Σ_{n≥1} P^(n)_{i,i} = ∞, and transient if Σ_{n≥1} P^(n)_{i,i} < ∞. Cor: If state i is recurrent, and state i communicates with state j, then state j is recurrent.

Proof: Since i communicates with j, there exist k, m such that P^(k)_{j,i} > 0 and P^(m)_{i,j} > 0. For any integer n, P^(k+n+m)_{j,j} ≥ P^(k)_{j,i} P^(n)_{i,i} P^(m)_{i,j}. Summing over n, Σ_n P^(k+n+m)_{j,j} ≥ P^(k)_{j,i} P^(m)_{i,j} Σ_n P^(n)_{i,i} = ∞. Thus, j is also recurrent.

Stationary Distributions

Proof of theorem:

Recall r_{j,i}^t is the probability that, starting at j, the chain first visits i at time t. By irreducibility (of a finite chain), we have Σ_{t≥1} r_{j,i}^t = 1, and h_{j,i} = Σ_{t≥1} t · r_{j,i}^t < ∞.

Proof of Theorem:

Random Walks on Undirected Graphs. Lemma: A random walk on an undirected graph G is aperiodic iff G is not bipartite. Proof: A graph is bipartite iff it has no odd cycle. In an undirected graph, there is always a closed walk of length 2 from a vertex to itself (step to a neighbor and back).

Thus, if the graph is bipartite then the random walk is periodic with period 2. If it is not bipartite, then it has an odd cycle, giving a closed walk of odd length, and gcd(2, odd number) = 1. Thus, the Markov chain is aperiodic.
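The bipartiteness condition in the lemma can be tested by attempting a 2-coloring, which fails exactly when an odd cycle exists. A sketch (adjacency representation is my own choice):

```python
from collections import deque

def is_bipartite(adj):
    """2-color a graph given as {vertex: set of neighbors}.
    A graph is bipartite iff it contains no odd cycle."""
    color = {}
    for s in adj:                      # handle every component
        if s in color:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False       # odd cycle: walk would be periodic
    return True
```

For instance, a 4-cycle is bipartite (the walk on it is periodic), while a triangle is not (the walk on it is aperiodic).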

Thm: Let G be a finite, undirected, connected graph that is not bipartite. A random walk on G converges to a stationary distribution π, where π_v = d(v)/(2|E|). Proof: Since Σ_v d(v) = 2|E|, we have Σ_v π_v = 1, so π is a distribution. Let P be the transition probability matrix, and let N(v) be the set of neighbors of v.

Thus, (πP)_v = Σ_{u ∈ N(v)} π_u P_{u,v} = Σ_{u ∈ N(v)} (d(u)/(2|E|)) · (1/d(u)) = d(v)/(2|E|) = π_v, so π is stationary. Cor: Let h_{v,u} be the expected number of steps to reach u from v. For any u ∈ G, h_{u,u} = 2|E|/d(u).
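The identity π_v = d(v)/(2|E|) is easy to verify numerically; the graph below (a triangle plus a pendant edge, connected and non-bipartite, chosen only for illustration) has degree sequence (2, 2, 3, 1) and 2|E| = 8:

```python
import numpy as np

# Adjacency matrix of a triangle {0,1,2} plus the edge (2,3).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

deg = A.sum(axis=1)
P = A / deg[:, None]       # random-walk transition matrix: P[u][v] = 1/d(u)
pi = deg / deg.sum()       # d(v) / 2|E|, since the degrees sum to 2|E|
assert np.allclose(pi @ P, pi)   # pi is indeed stationary
```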

Lemma: If (u,v) ∈ E, then h_{v,u} < 2|E|. Proof: Let N(u) be the set of neighbors of u. Computing h_{u,u} by conditioning on the first step of the walk, 2|E|/d(u) = h_{u,u} = (1/d(u)) Σ_{w ∈ N(u)} (1 + h_{w,u}). Thus 2|E| = Σ_{w ∈ N(u)} (1 + h_{w,u}) ≥ 1 + h_{v,u}, so h_{v,u} < 2|E|.

Def: The cover time of a graph G=(V,E) is the maximum over all v ∈ V of the expected time for a random walk starting from v to visit all of the nodes in the graph. Lemma: The cover time of G=(V,E) is bounded by 4|V||E|.

Proof: Choose a spanning tree of G. There exists a cyclic tour on this tree in which each edge is traversed once in each direction; such a tour can be found by a DFS. Let v_0, v_1, …, v_{2|V|-2} = v_0 be the sequence of vertices in the tour, starting from v_0.

Clearly, the expected time to go through the vertices of the tour in order is an upper bound on the cover time. Hence, since each (v_i, v_{i+1}) is an edge, the previous lemma gives that the cover time is bounded above by Σ_{i=0}^{2|V|-3} h_{v_i, v_{i+1}} < (2|V|-2) · 2|E| < 4|V||E|.
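A small Monte Carlo check of the bound; the toy graph (a path on 4 vertices, so the bound is 4 · 4 · 3 = 48) and the trial count are my own choices:

```python
import random

def cover_time_one_run(adj, start, rng):
    """Walk randomly from `start` until every vertex has been visited;
    return the number of steps taken."""
    seen = {start}
    v, steps = start, 0
    while len(seen) < len(adj):
        v = rng.choice(adj[v])   # move to a uniform random neighbor
        seen.add(v)
        steps += 1
    return steps

# Path graph on 4 vertices: |V| = 4, |E| = 3, bound = 4*|V|*|E| = 48.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
rng = random.Random(0)
avg = sum(cover_time_one_run(adj, 0, rng) for _ in range(200)) / 200
```

The empirical average sits well below the 4|V||E| bound, as the lemma predicts.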

Application: s-t connectivity. This can be decided with BFS or DFS using Θ(n) space. The following randomized algorithm works with only O(log n) bits. s-t Connectivity Algorithm: Input: G. 1. Start a random walk from s. 2. If the walk reaches t within 4n^3 steps, return that there is a path. Otherwise, return that there is no path.
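A direct sketch of this algorithm (graph representation and function name are my own); note the only mutable state is the current vertex and the loop counter:

```python
import random

def st_connectivity(adj, s, t, seed=0):
    """Randomized s-t connectivity: walk randomly from s for 4*n^3
    steps and answer 'path' iff t is reached.  Only the current vertex
    and a step counter are stored, i.e. O(log n) bits of state."""
    n = len(adj)
    rng = random.Random(seed)
    v = s
    for _ in range(4 * n ** 3):
        if v == t:
            return True
        if not adj[v]:                  # isolated vertex: walk is stuck
            return v == t
        v = rng.choice(sorted(adj[v]))  # uniform random neighbor
    return v == t
```

The error is one-sided: a `True` answer is always correct, while a `False` answer may miss an existing path.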

Assume G has no bipartite connected component. (The result can be made to apply to bipartite graphs with some additional work.) Thm: The s-t connectivity algorithm returns the correct answer with probability at least 1/2, and it errs only by returning that there is no path from s to t when there is such a path.

Proof: The algorithm gives the correct answer whenever G has no s-t path. If G has an s-t path, the algorithm errs if it does not find the path within 4n^3 steps. The expected time to reach t from s is bounded by the cover time of their common component, which is at most 4|V||E| < 2n^3. By Markov's inequality, Pr[the walk takes more than 4n^3 steps] ≤ 2n^3/(4n^3) = 1/2. Why O(log n) bits? The walk needs to remember only the current vertex and a step counter up to 4n^3, each of which fits in O(log n) bits.