1
Markov Chains and Random Walks
2
Def: A stochastic process X = {X(t), t ∈ T} is a collection of random variables. If T is a countable set, say T = {0, 1, 2, …}, we say that X is a discrete-time stochastic process; otherwise it is called a continuous-time stochastic process. Here we consider a discrete-time stochastic process X_n, n = 0, 1, 2, …
3
If X_n = i, then the process is said to be in state i at time n. Suppose
Pr[X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, …, X_0 = i_0] = P_{i,j}
for all states i_0, i_1, …, i_{n-1}, i, j and all n ≥ 0; that is, X_{n+1} depends only on X_n. Such a stochastic process is known as a Markov chain. From state i the chain moves to state j with probability P_{i,j}, where P_{i,j} ≥ 0 and Σ_j P_{i,j} = 1.
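For concreteness, here is a minimal Python sketch of such a chain; the 3-state matrix P is a made-up example, not from the slides. Each new state is drawn using only the current state:

import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1); not taken from the slides.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

def simulate(P, x0, steps, rng=np.random.default_rng(0)):
    # Each X_{n+1} is sampled using only the current state X_n (Markov property).
    path = [x0]
    for _ in range(steps):
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

print(simulate(P, x0=0, steps=10))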
4
The n-step transition probability of the Markov chain is defined as the conditional probability, given that the chain is currently in state i, that it will be in state j after n additional transitions, i.e., P^{(n)}_{i,j} = Pr[X_{m+n} = j | X_m = i], n ≥ 0. Chapman-Kolmogorov equation: P^{(n+m)}_{i,j} = Σ_k P^{(n)}_{i,k} P^{(m)}_{k,j} for all n, m ≥ 0 and all states i, j.
5
Proof: Let P^{(n)} denote the matrix of n-step transition probabilities. Then the Chapman-Kolmogorov equations say P^{(n+m)} = P^{(n)} P^{(m)}, and by induction P^{(n)} = P^n, the n-th power of the one-step transition matrix P.
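A quick numpy check of this fact on a small made-up transition matrix (np.linalg.matrix_power computes the matrix power P^n):

import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# n-step transition matrix = n-th power of the one-step matrix.
P3 = np.linalg.matrix_power(P, 3)
P2 = np.linalg.matrix_power(P, 2)

# Chapman-Kolmogorov: P^(5) = P^(3) P^(2).
assert np.allclose(np.linalg.matrix_power(P, 5), P3 @ P2)
print(P3)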
6
Eg.: a four-state Markov chain on states 0, 1, 2, 3, with the transition probabilities given in the slide's transition diagram.
7
Classification of states
Def: State j is said to be accessible from state i if P^{(n)}_{i,j} > 0 for some n ≥ 0. We say states i and j communicate if they are both accessible from each other (i ↔ j). The Markov chain is said to be "irreducible" if there is only one class, i.e., if all states communicate with each other. For any state i, let f_i denote the probability that, starting in state i, the process will ever reenter that state. State i is said to be "recurrent" if f_i = 1, and "transient" if f_i < 1. Starting in state i, the probability that the process will be in state i for exactly n time periods equals f_i^{n-1}(1 - f_i), n ≥ 1.
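To make accessibility and irreducibility concrete, here is a small sketch (the reachability-based check is my own illustration, not from the slides): state j is accessible from i exactly when j can be reached from i along edges with P_{u,v} > 0.

import numpy as np

def accessible(P, i, j):
    # j is accessible from i iff there is a path i -> ... -> j along entries P[u, v] > 0
    # (n = 0 is allowed, so every state is accessible from itself).
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v in np.nonzero(P[u] > 0)[0]:
            if int(v) not in seen:
                seen.add(int(v))
                stack.append(int(v))
    return j in seen

def irreducible(P):
    # Irreducible: every pair of states communicates.
    n = len(P)
    return all(accessible(P, i, j) for i in range(n) for j in range(n))

P = np.array([[0.0, 1.0],
              [0.5, 0.5]])
print(irreducible(P))   # True: both states communicate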
8
More definitions
9
Let I_n = 1 if X_n = i and I_n = 0 otherwise. Then Σ_{n=0}^{∞} I_n represents the number of periods that the process is in state i, and, starting from state i, its expectation is Σ_{n=0}^{∞} P^{(n)}_{i,i}.
10
Prop: State i is recurrent if Σ_{n=1}^{∞} P^{(n)}_{i,i} = ∞, and transient if Σ_{n=1}^{∞} P^{(n)}_{i,i} < ∞.
Cor: If state i is recurrent, and state i communicates with state j, then state j is recurrent.
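As a numeric illustration of the proposition (the two-state absorbing chain below is my own toy example, not from the slides): the partial sums of P^{(n)}_{i,i} stay bounded for a transient state and grow without bound for a recurrent one.

import numpy as np

# Toy chain: state 0 leaks into the absorbing state 1,
# so state 0 is transient and state 1 is recurrent.
P = np.array([[0.5, 0.5],
              [0.0, 1.0]])

partial_sums = np.zeros(2)
Pn = np.eye(2)
for n in range(1, 200):
    Pn = Pn @ P
    partial_sums += np.diag(Pn)   # accumulate P^(n)_{i,i}

print(partial_sums)   # entry 0 stays near 1 (finite sum), entry 1 keeps growing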
11
Proof: Since i communicates with j, there exist k, m such that P^{(k)}_{i,j} > 0 and P^{(m)}_{j,i} > 0. For any integer n, P^{(m+n+k)}_{j,j} ≥ P^{(m)}_{j,i} P^{(n)}_{i,i} P^{(k)}_{i,j}. Summing over n and using the recurrence of i, Σ_n P^{(n)}_{j,j} ≥ P^{(m)}_{j,i} P^{(k)}_{i,j} Σ_n P^{(n)}_{i,i} = ∞. Thus, j is also recurrent.
12
Stationary Distributions
13
Proof of theorem:
14
Recall that r^{t}_{j,i} is the probability that, starting at j, the chain first visits i at time t. By irreducibility, we have Σ_{t ≥ 1} r^{t}_{j,i} = 1, and the expected hitting time h_{j,i} = Σ_{t ≥ 1} t · r^{t}_{j,i} is finite.
15
Proof of Theorem:
18
Random Walks on Undirected Graphs
Lemma: A random walk on an undirected graph G is aperiodic iff G is not bipartite.
Pf: A graph is bipartite iff it has no odd cycle. In an undirected graph, there is always a path of length 2 from a vertex to itself (step to a neighbor and back).
19
Thus, if the graph is bipartite then the random walk is periodic with period d = 2. If G is not bipartite, then it has an odd cycle, and since GCD(2, length of the odd cycle) = 1, the Markov chain is aperiodic.
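Since aperiodicity of the walk reduces to G not being bipartite, a standard BFS 2-coloring test decides it. A sketch, assuming the graph is given as adjacency lists:

from collections import deque

def is_bipartite(adj):
    # Try to 2-color each connected component by BFS; an odd cycle forces a conflict.
    color = {}
    for s in adj:
        if s in color:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False   # odd cycle found
    return True

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(is_bipartite(triangle))   # False, so the walk on a triangle is aperiodic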
20
Thm: Let G be finite, undirected, connected, and not bipartite. A random walk on G converges to a stationary distribution π with π_v = d(v)/(2|E|).
Pf: Since Σ_v d(v) = 2|E|, we have Σ_v π_v = Σ_v d(v)/(2|E|) = 1, so π is a distribution. Let P be the transition probability matrix, and let N(v) be the set of neighbors of v.
21
Then (πP)_v = Σ_{u ∈ N(v)} π_u / d(u) = Σ_{u ∈ N(v)} 1/(2|E|) = d(v)/(2|E|) = π_v. Thus, π is the stationary distribution.
Cor: Let h_{v,u} denote the expected number of steps to reach u from v. For any u ∈ V, h_{u,u} = 2|E|/d(u), since the expected return time to u equals 1/π_u.
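A small numpy check of the theorem and the corollary; the 4-vertex graph below is just an example of mine, not from the slides. It builds the walk's transition matrix, sets π_v = d(v)/(2|E|), and verifies πP = π:

import numpy as np

# Example non-bipartite graph: triangle 0-1-2 plus the pendant edge 2-3.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

deg = A.sum(axis=1)
P = A / deg[:, None]              # random-walk transition matrix
pi = deg / (2 * len(edges))       # claimed stationary distribution pi_v = d(v)/2|E|

assert np.allclose(pi @ P, pi)    # pi P = pi, so pi is stationary
print(2 * len(edges) / deg)       # h_{u,u} = 2|E|/d(u) for each vertex u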
22
Lemma: If (u,v) ∈ E, then h_{v,u} < 2|E|.
Pf: Let N(u) be the set of neighbors of u. Conditioning on the first step of a walk started at u,
2|E|/d(u) = h_{u,u} = (1/d(u)) Σ_{w ∈ N(u)} (1 + h_{w,u}),
so Σ_{w ∈ N(u)} (1 + h_{w,u}) = 2|E|. Since all terms are positive, 1 + h_{v,u} ≤ 2|E| for every neighbor v of u. Thus, h_{v,u} < 2|E|.
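The lemma can also be seen numerically by estimating h_{v,u} for an edge (u,v) by simulation. A Monte Carlo sketch of mine, reusing the example graph from above, where (0,1) ∈ E and 2|E| = 8:

import numpy as np

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # same example graph as above
num_edges = 4
rng = np.random.default_rng(0)

def estimate_hitting_time(v, u, trials=20000):
    # Average number of steps for a walk started at v to first reach u.
    total = 0
    for _ in range(trials):
        x, steps = v, 0
        while x != u:
            x = int(rng.choice(adj[x]))
            steps += 1
        total += steps
    return total / trials

print(estimate_hitting_time(0, 1), "vs. bound 2|E| =", 2 * num_edges)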
23
Def: The cover time of a graph G = (V,E) is the maximum over all v ∈ V of the expected time to visit all of the nodes in the graph by a random walk starting from v.
Lemma: The cover time of G = (V,E) is bounded above by 4|V||E|.
24
Pf: Choose a spanning tree of G. Then there exists a cyclic tour on this tree in which each tree edge is traversed once in each direction; such a tour can be found by doing a DFS. Let v_0, v_1, …, v_{2|V|-2} = v_0 be the sequence of vertices in the tour, starting from v_0.
25
Clearly, the expected time to go through the vertices in the tour, in order, is an upper bound on the cover time. Hence, using the previous lemma for each pair of adjacent vertices in the tour, the cover time is bounded above by Σ_{i=0}^{2|V|-3} h_{v_i, v_{i+1}} < (2|V| - 2) · 2|E| < 4|V||E|.
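The tour used in the proof can be generated directly by a DFS of a spanning tree, recording a vertex each time the walk moves down or back up a tree edge. A sketch of my own, assuming adjacency-list input:

def spanning_tree_tour(adj, root=0):
    # DFS of a spanning tree; every tree edge is traversed once in each direction,
    # so the tour makes exactly 2|V| - 2 edge traversals.
    seen, tour = {root}, [root]

    def dfs(u):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                tour.append(v)   # walk down the tree edge (u, v)
                dfs(v)
                tour.append(u)   # walk back up the same edge
    dfs(root)
    return tour

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
tour = spanning_tree_tour(adj)
print(tour, len(tour) - 1 == 2 * len(adj) - 2)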
26
Application: s-t connectivity
It can be done with BFS or DFS using Θ(n) space. The following randomized algorithm works with only O(log n) bits of memory.
s-t Connectivity Algorithm:
Input: G and vertices s, t (n = |V|)
1. Start a random walk from s.
2. If the walk reaches t within 4n³ steps, return that there is a path. Otherwise, return that there is no path.
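A direct sketch of this algorithm; the adjacency-list input and vertex labels are my own choices, while the 4n³ step budget is from the slide:

import random

def st_connected(adj, s, t, seed=None):
    # Random-walk s-t connectivity test; only the current vertex and a step
    # counter are stored, i.e., O(log n) bits of working memory.
    if s == t:
        return True
    rng = random.Random(seed)
    n = len(adj)
    x = s
    for _ in range(4 * n ** 3):
        x = rng.choice(adj[x])
        if x == t:
            return True            # t was reached, so a path certainly exists
    return False                   # report "no path" (may be wrong; see below)

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(st_connected(adj, 0, 3))     # True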
27
Assume G has no bipartite connected component. (The results can be made to apply to bipartite graphs with some additional work.)
Thm: The s-t connectivity algorithm returns the correct answer with probability at least ½, and it only errs by returning that there is no path from s to t when there is such a path.
28
Pf: The algorithm always gives the correct answer when G has no s-t path. If G has an s-t path, the algorithm errs only if it does not find the path within 4n³ steps. The expected time to reach t from s is bounded by the cover time of their common component, which is at most 4|V||E| < 2n³. By Markov's inequality, Pr[the walk needs more than 4n³ steps to reach t] ≤ 2n³/4n³ = ½.
Why O(log n) bits? The walk only has to store the current vertex and a step counter up to 4n³, each of which fits in O(log n) bits.