Theory of Computational Complexity: Probability and Computing 7.3-7.5 (2012. 1. 23). Lee Minseon, Iwama and Ito lab, M1.



Chapter 7: Markov Chains and Random Walks. 7.3 Stationary Distributions (Example: A Simple Queue); 7.4 Random Walks on Undirected Graphs (Application: An s-t Connectivity Algorithm); 7.5 Parrondo's Paradox.

7.3 Stationary Distributions. Definition 7.8: A stationary distribution of a Markov chain is a probability distribution π̄ such that π̄ = π̄P, where P is the one-step transition probability matrix of the chain. If a chain ever reaches a stationary distribution, then it maintains that distribution for all future time. → Stationary distributions play a key role in analyzing Markov chains!

7.3 Stationary Distributions. Theorem 7.7: Any finite, irreducible, and ergodic Markov chain has the following properties: 1. the chain has a unique stationary distribution π̄ = (π_0, π_1, ..., π_n); 2. for all j and i, the limit lim_{t→∞} P^t_{j,i} exists and is independent of j; 3. π_i = lim_{t→∞} P^t_{j,i} = 1/h_{i,i}, where h_{i,i} is the expected time to return to state i.

7.3 Stationary Distributions. Theorem 7.7, part 2: for all j and i, the limit lim_{t→∞} P^t_{j,i} exists and is independent of j. Proof: By Lemma 7.8, lim_{t→∞} P^t_{i,i} = 1/h_{i,i}. Using the fact that this limit exists, one can show that for any j and i the limits lim_{t→∞} P^t_{j,i} exist and are independent of the starting state j.

7.3 Stationary Distributions. Theorem 7.7, part 3: π_i = 1/h_{i,i}. Proof: Recall that r^t_{j,i} is the probability that, starting at j, the chain first visits i at time t. Since the chain is irreducible, Σ_{t≥1} r^t_{j,i} = 1, so for any ε > 0 there exists t_1 = t_1(ε) such that Σ_{t=1}^{t_1} r^t_{j,i} ≥ 1 − ε.

7.3 Stationary Distributions. Proof (continued): For t ≥ t_1, conditioning on the time of the first visit to i,
P^t_{j,i} ≥ Σ_{k=1}^{t_1} r^k_{j,i} P^{t−k}_{i,i}
and
P^t_{j,i} ≤ Σ_{k=1}^{t_1} r^k_{j,i} P^{t−k}_{i,i} + ε.
We can deduce that, taking t → ∞,
(1 − ε)(1/h_{i,i}) ≤ lim_{t→∞} P^t_{j,i} ≤ 1/h_{i,i} + ε.
Letting ε approach 0 gives, for any pair i and j, lim_{t→∞} P^t_{j,i} = 1/h_{i,i}. Now let π_i = 1/h_{i,i}. Does π̄ form a stationary distribution? We must check whether π̄ is a proper distribution, and whether π̄ is a stationary distribution.

7.3 Stationary Distributions. Proof (continued): Check that π̄ is a proper distribution. For every t we have Σ_i P^t_{j,i} = 1; since the chain is finite we may exchange limit and sum, so Σ_i π_i = lim_{t→∞} Σ_i P^t_{j,i} = 1.

7.3 Stationary Distributions. Proof (continued): Check that π̄ is a stationary distribution. For every t, P^{t+1}_{j,i} = Σ_k P^t_{j,k} P_{k,i}; letting t → ∞ on both sides gives π_i = Σ_k π_k P_{k,i}, that is, π̄ = π̄P.

7.3 Stationary Distributions. Theorem 7.7, part 1: the chain has a unique stationary distribution. Proof: Suppose there were another stationary distribution φ̄. Then φ_i = Σ_j φ_j P^t_{j,i} for every t, and letting t → ∞ gives φ_i = Σ_j φ_j π_i = π_i Σ_j φ_j = π_i. Hence φ̄ = π̄.

7.3 Stationary Distributions. Remarks about Theorem 7.7: The requirement that the Markov chain be aperiodic is not necessary for the existence of a stationary distribution. Also, any finite chain has at least one component that is recurrent.

7.3 Stationary Distributions. Ways to compute the stationary distribution of a finite Markov chain: 1. solve the system of linear equations π̄ = π̄P together with Σ_i π_i = 1; 2. use the cut-sets of the Markov chain. ← Smart!

7.3 Stationary Distributions. Solving the system of linear equations. Ex) For a four-state chain, the four equations π_i = Σ_j π_j P_{j,i} together with π_0 + π_1 + π_2 + π_3 = 1 give five equations for the four unknowns π_0, π_1, π_2, and π_3, and the equations have a unique solution.
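To make this concrete, here is a small sketch in Python (the 4-state transition matrix below is an arbitrary illustration, not the chain from the book's example) that solves π̄ = π̄P together with Σ_i π_i = 1:

    import numpy as np

    # An arbitrary 4-state transition matrix for illustration (rows sum to 1).
    P = np.array([[0.0, 0.25, 0.00, 0.75],
                  [0.5, 0.00, 0.50, 0.00],
                  [0.3, 0.30, 0.20, 0.20],
                  [0.1, 0.40, 0.40, 0.10]])

    n = P.shape[0]
    # pi = pi P  <=>  (P^T - I) pi = 0; append the normalization sum(pi) = 1.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]   # least squares on the 5x4 system
    print(pi, np.allclose(pi @ P, pi))          # stationary distribution, True

The overdetermined system is consistent (Theorem 7.7 guarantees a unique solution), so least squares recovers it exactly.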

7.3 Stationary Distributions. Using the cut-sets of the Markov chain. Theorem 7.9: Let S be a set of states of a finite, irreducible, aperiodic Markov chain. In the stationary distribution, the probability that the chain leaves the set S equals the probability that it enters S. In other words, in the stationary distribution, the probability of crossing a cut-set in one direction equals the probability of crossing it in the other direction.

7.3 Stationary Distributions. Using the cut-sets of the Markov chain. Ex) A simple two-state Markov chain used to represent bursty behavior: say the chain moves from state 0 to state 1 with probability p and from state 1 to state 0 with probability q (the slide's figure gave concrete values). Solving π̄ = π̄P directly gives three equations for the two unknowns π_0 and π_1. Do we get the same answer from the cut-set formulation?

7.3 Stationary Distributions. Using the cut-set formulation: in the stationary distribution, the probability of leaving state 0 must equal the probability of entering state 0, i.e. π_0 p = π_1 q. Together with π_0 + π_1 = 1 this gives π_0 = q/(p + q) and π_1 = p/(p + q), the same solution as before.
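A quick numeric check of both formulations in Python, with illustrative values p = 0.2 and q = 0.3 assumed for the two transition probabilities (the slide's concrete numbers did not survive):

    # Two-state bursty chain: 0 -> 1 with prob. p, 1 -> 0 with prob. q (assumed values).
    p, q = 0.2, 0.3
    pi0, pi1 = q / (p + q), p / (p + q)    # cut-set: pi0 * p = pi1 * q, pi0 + pi1 = 1
    # Verify against the direct equations pi = pi P:
    assert abs(pi0 * (1 - p) + pi1 * q - pi0) < 1e-12
    assert abs(pi0 * p + pi1 * (1 - q) - pi1) < 1e-12
    print(pi0, pi1)                        # 0.6 0.4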

7.3 Stationary Distributions. Theorem 7.10: Consider a finite, irreducible, and ergodic Markov chain with transition matrix P. If there are nonnegative numbers π̄ = (π_0, ..., π_n) such that Σ_{i=0}^{n} π_i = 1 and if, for any pair of states i, j, π_i P_{i,j} = π_j P_{j,i}, then π̄ is the stationary distribution corresponding to P.

7.3 Stationary Distributions. Proof: We check that π̄ is a stationary distribution, i.e. that π̄ = π̄P. Consider the i-th entry of π̄P: (π̄P)_i = Σ_j π_j P_{j,i} = Σ_j π_i P_{i,j} = π_i. ← using the assumption of Theorem 7.10 and the fact that the rows of P sum to 1. Since π̄ is also a proper distribution, it is the stationary distribution.

7.3 Stationary Distributions. Theorem 7.11: Any irreducible aperiodic Markov chain belongs to one of the following two categories: 1. the chain is ergodic: for any pair of states i and j, the limit lim_{t→∞} P^t_{j,i} exists and is independent of j, and the chain has a unique stationary distribution π_i = lim_{t→∞} P^t_{j,i} > 0; or 2. no state is positive recurrent: for all i and j, lim_{t→∞} P^t_{j,i} = 0, and the chain has no stationary distribution.

Chapter 7: Markov Chains and Random Walks. 7.3 Stationary Distributions (Example: A Simple Queue); 7.4 Random Walks on Undirected Graphs (Application: An s-t Connectivity Algorithm); 7.5 Parrondo's Paradox.

7.3.1 Example: A Simple Queue. A queue is a line where customers wait for service; we can model a queue with a Markov chain. We examine a model for a bounded queue in which time is divided into steps of equal length, and at each time step exactly one of the following occurs.

7.3.1 Example: A Simple Queue. If the queue has fewer than n customers, then with probability λ a new customer joins the queue. If the queue is not empty, then with probability μ the customer at the head of the line is served and leaves the queue. With the remaining probability, the queue is unchanged. Let X_t be the number of customers in the queue at time t; then the X_t yield a finite-state Markov chain.

7.3.1 Example: A Simple Queue. The nonzero entries of the transition matrix are:
P_{i,i+1} = λ for i < n;
P_{i,i−1} = μ for i > 0;
P_{0,0} = 1 − λ;
P_{i,i} = 1 − λ − μ for 1 ≤ i ≤ n − 1;
P_{n,n} = 1 − μ.

7.3.1 Example: A Simple Queue. The transition matrix is therefore tridiagonal:

    P =
    [ 1−λ     λ       0     ...     0  ]
    [  μ    1−λ−μ     λ     ...     0  ]
    [  0      μ     1−λ−μ   ...     0  ]
    [ ...    ...     ...    ...     λ  ]
    [  0      0      ...     μ    1−μ  ]
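A sketch in Python that builds this matrix and checks the stationary distribution derived below (λ = 0.3, μ = 0.5, and n = 10 are assumed example values):

    import numpy as np

    lam, mu, n = 0.3, 0.5, 10            # assumed arrival rate, service rate, bound

    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        if i < n:
            P[i, i + 1] = lam            # a new customer arrives
        if i > 0:
            P[i, i - 1] = mu             # the head of the line is served
        P[i, i] = 1 - P[i].sum()         # otherwise the queue is unchanged

    pi = np.full(n + 1, 1 / (n + 1))
    for _ in range(20000):               # power iteration to the stationary dist.
        pi = pi @ P

    r = lam / mu                         # compare with pi_i proportional to r^i
    closed = r ** np.arange(n + 1)
    closed /= closed.sum()
    print(np.allclose(pi, closed, atol=1e-8))   # True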

7.3.1 Example: A Simple Queue. A unique stationary distribution π̄ exists, and we can write
π_i = λπ_{i−1} + (1 − λ − μ)π_i + μπ_{i+1} for 1 ≤ i ≤ n − 1,
π_0 = (1 − λ)π_0 + μπ_1,
π_n = λπ_{n−1} + (1 − μ)π_n.

7.3.1 Example: A Simple Queue. A solution to the preceding system of equations is π_i = π_0 (λ/μ)^i. Another way to compute the stationary probabilities in this case is to use cut-sets: in the stationary distribution, the probability of moving from state i to state i + 1 must equal the probability of moving from state i + 1 to state i, so π_i λ = π_{i+1} μ. A simple induction then gives π_i = π_0 (λ/μ)^i.

7.3.1 Example: A Simple Queue. Adding the requirement Σ_{i=0}^{n} π_i = 1, we have 1 = Σ_{i=0}^{n} π_0 (λ/μ)^i, so
π_i = (λ/μ)^i / Σ_{j=0}^{n} (λ/μ)^j.

7.3.1 Example: A Simple Queue. We now examine the case where there is no upper limit n on the number of customers in the queue. The Markov chain is no longer finite and has a countably infinite state space. By Theorem 7.11, the Markov chain has a stationary distribution if and only if the system of equations above has a solution with all π_i > 0.

7.3.1 Example: A Simple Queue. For the solution π_i = π_0 (λ/μ)^i to normalize, the sum Σ_{i≥0} (λ/μ)^i should converge, and we can verify that it does exactly when λ < μ. A solution of the system of equations is then
π_i = (λ/μ)^i (1 − λ/μ).

7.3.1 Example: A Simple Queue. All of the π_i are greater than 0 if and only if λ < μ, i.e. the rate at which customers arrive is lower than the rate at which they are served. If λ > μ, the rate at which customers arrive is higher than the rate at which they are served: there is no stationary distribution, and the queue length will become arbitrarily long. If λ = μ, the rate at which customers arrive is equal to the rate at which they are served: again there is no stationary distribution, and the queue length will become arbitrarily long.

Chapter 7: Markov Chains and Random Walks. 7.3 Stationary Distributions (Example: A Simple Queue); 7.4 Random Walks on Undirected Graphs (Application: An s-t Connectivity Algorithm); 7.5 Parrondo's Paradox.

7.4 Random Walks on Undirected Graphs. A random walk on an undirected graph is a special type of Markov chain that is often used in analyzing algorithms. Let G = (V, E) be a finite, undirected, and connected graph.

7.4 Random Walks on Undirected Graphs. Definition 7.9: A random walk on G is a Markov chain defined by the sequence of moves of a particle between vertices of G. In this process, the location of the particle at a given time step is the state of the system. If the particle is at vertex i and i has d(i) incident edges, then the probability that the particle follows the edge (i, j) and moves to a neighbor j is 1/d(i).
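A minimal sketch of this process in Python; the adjacency list is an assumed toy graph, not one from the text:

    import random

    G = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}   # assumed toy graph

    def walk(G, start, steps):
        # At each step, move to a neighbor chosen uniformly at random.
        v = start
        for _ in range(steps):
            v = random.choice(G[v])
        return v

    print(walk(G, 0, 10))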

7.4 Random Walks on Undirected Graphs. Lemma 7.12: A random walk on an undirected graph G is aperiodic if and only if G is not bipartite. Proof: A graph is bipartite if and only if it has no cycles with an odd number of edges. In an undirected graph, there is always a path of length 2 from a vertex to itself (cross an edge and come back). If the graph is bipartite, then the random walk is periodic with period d = 2. If the graph is not bipartite, then it has an odd cycle, and by traversing that cycle we obtain an odd-length path from any vertex to itself; together with the even-length paths, this makes the period 1. It follows that the Markov chain is aperiodic.

7.4 Random Walks on Undirected Graphs. A random walk on a finite, undirected, connected, and non-bipartite graph G satisfies the conditions of Theorem 7.7, so the random walk converges to a stationary distribution. We will show that this distribution depends only on the degree sequence of the graph.

7.4 Random Walks on Undirected Graphs. Theorem 7.13: A random walk on G converges to a stationary distribution π̄, where π_v = d(v)/(2|E|). Proof: Since Σ_{v∈V} d(v) = 2|E|, we have Σ_{v∈V} π_v = Σ_{v∈V} d(v)/(2|E|) = 1, so π̄ is a proper distribution over V.

7.4 Random Walks on Undirected Graphs. Proof (continued): Let P be the transition probability matrix of the Markov chain, and let N(v) denote the neighbors of v. The relation π̄ = π̄P is equivalent to
π_v = Σ_{u∈N(v)} (d(u)/(2|E|)) · (1/d(u)) = d(v)/(2|E|),
and the theorem follows.
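A numeric check of Theorem 7.13 in Python on the same assumed toy graph (it contains a triangle, so it is non-bipartite and the walk converges):

    import numpy as np

    G = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}   # assumed toy graph
    n = len(G)
    m = sum(len(nbrs) for nbrs in G.values()) // 2            # |E|

    P = np.zeros((n, n))
    for i, nbrs in G.items():
        for j in nbrs:
            P[i, j] = 1 / len(nbrs)      # move to each neighbor with prob. 1/d(i)

    pi = np.full(n, 1 / n)
    for _ in range(10000):
        pi = pi @ P                      # power iteration

    deg = np.array([len(G[i]) for i in range(n)])
    print(pi, np.allclose(pi, deg / (2 * m)))   # matches d(v)/2|E|: True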

7.4 Random Walks on Undirected Graphs. Corollary 7.14: For any vertex u in G, h_{u,u} = 2|E|/d(u), where h_{v,u} denotes the expected number of steps to reach u from v. Proof: By Theorems 7.7 and 7.13, h_{u,u} = 1/π_u = 2|E|/d(u).

7.4 Random Walks on Undirected Graphs. Lemma 7.15: For any pair of vertices u and v with (u, v) ∈ E, h_{v,u} < 2|E|. Proof: Let N(u) be the set of neighbors of vertex u in G. We compute h_{u,u} = 2|E|/d(u) in two different ways:
2|E|/d(u) = h_{u,u} = (1/d(u)) Σ_{w∈N(u)} (1 + h_{w,u}).
Therefore 2|E| = Σ_{w∈N(u)} (1 + h_{w,u}), and since every term is positive we conclude that h_{v,u} < 2|E|.

7.4 Random Walks on Undirected Graphs. Definition 7.10: The cover time of a graph G = (V, E) is the maximum over all vertices v ∈ V of the expected time for a random walk starting from v to visit all of the nodes in the graph.

7.4 Random Walks on Undirected Graphs. Lemma 7.16: The cover time of G = (V, E) is bounded above by 4|V||E|. Proof: Choose a spanning tree of G; that is, choose any subset of the edges that gives an acyclic subgraph connecting all of the vertices of G. There exists a cyclic tour on this spanning tree in which every edge is traversed once in each direction; for example, such a tour can be found by considering the sequence of vertices passed through when doing a depth-first search.

7.4 Random Walks on Undirected Graphs. Proof (continued): Let v_0, v_1, ..., v_{2|V|−2} = v_0 be the sequence of vertices in the tour, starting from v_0. Clearly the expected time to go through the vertices of the tour, in order, is an upper bound on the cover time. Hence the cover time is bounded above by
Σ_{i=0}^{2|V|−3} h_{v_i, v_{i+1}} < (2|V| − 2) · 2|E| < 4|V||E|,
where the first inequality comes from Lemma 7.15.
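A small Monte Carlo sketch in Python that estimates the cover time of the same assumed toy graph and compares it with the 4|V||E| bound:

    import random

    G = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}   # assumed toy graph
    n = len(G)
    m = sum(len(nbrs) for nbrs in G.values()) // 2

    def time_to_cover(start):
        # Walk until every vertex has been visited at least once.
        seen, v, steps = {start}, start, 0
        while len(seen) < n:
            v = random.choice(G[v])
            seen.add(v)
            steps += 1
        return steps

    trials = 5000
    est = max(sum(time_to_cover(s) for _ in range(trials)) / trials for s in G)
    print(est, "<=", 4 * n * m)          # empirical cover time vs. the 4|V||E| bound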

Chapter 7: Markov Chains and Random Walks. 7.3 Stationary Distributions (Example: A Simple Queue); 7.4 Random Walks on Undirected Graphs (Application: An s-t Connectivity Algorithm); 7.5 Parrondo's Paradox.

7.4.1 Application: An s-t Connectivity Algorithm. Suppose we are given an undirected graph G = (V, E) and two vertices s and t in G. Is there a path connecting s and t? This is easily answered in linear time using a standard breadth-first search or depth-first search; however, those searches require a linear amount of memory. Here we develop a randomized algorithm that works with only O(log n) bits of memory.

7.4.1 Application: An s-t Connectivity Algorithm. s-t connectivity algorithm: 1. start a random walk from s; 2. if the walk reaches t within 4n³ steps, return that there is a path; otherwise, return that there is no path. We use the cover time result (Lemma 7.16) to bound the number of steps that the random walk has to run.
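A sketch of the algorithm in Python (the two-component graph is an assumed example; the walk is capped at 4n³ steps as in the algorithm above):

    import random

    def st_connectivity(G, s, t):
        n = len(G)
        v = s
        for _ in range(4 * n ** 3):      # run the walk for at most 4n^3 steps
            if v == t:
                return True              # reached t: there is a path
            v = random.choice(G[v])
        return v == t                    # give up: report no path

    # Assumed example: component {0, 1, 2} and component {3, 4}.
    G = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
    print(st_connectivity(G, 0, 2))      # True (one-sided error, prob. >= 1/2)
    print(st_connectivity(G, 0, 3))      # False, and this answer is never wrong

Running the algorithm k times and reporting a path if any run finds one drives the error probability down to 2^(-k).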

7.4.1 Application: An s-t Connectivity Algorithm. Theorem 7.17: The s-t connectivity algorithm returns the correct answer with probability at least 1/2, and it only errs by returning that there is no path from s to t when there is such a path. Proof: If there is no path, then the algorithm returns the correct answer. If there is a path, the algorithm errs only if it does not find the path within 4n³ steps of the walk.

7.4.1 Application: An s-t Connectivity Algorithm. Proof (continued): The expected time to reach t from s (if there is a path) is bounded from above by the cover time of their shared component, which by Lemma 7.16 is at most 4|V||E| < 2n³. By Markov's inequality, the probability that the walk takes more than 4n³ steps to reach t from s is at most 1/2.

7.4.1 Application: An s-t Connectivity Algorithm. The algorithm must keep track of its current position, which takes O(log n) bits, as well as the number of steps taken in the random walk, which also takes only O(log n) bits (since we count only up to 4n³).

Chapter 7: Markov Chains and Random Walks. 7.3 Stationary Distributions (Example: A Simple Queue); 7.4 Random Walks on Undirected Graphs (Application: An s-t Connectivity Algorithm); 7.5 Parrondo's Paradox.

7.5 Parrondo's Paradox. The paradox appears to contradict the old saying that two wrongs don't make a right: it shows that two losing games can be combined to make a winning game.

7.5 Parrondo's Paradox. Game A: we repeatedly flip a biased coin (call it coin a) that comes up heads with probability 1/2 − ε and tails with probability 1/2 + ε, for some small ε > 0. You win a dollar if the coin comes up heads and lose a dollar if it comes up tails. Clearly, this is a losing game for you. Ex) if ε = 0.01, then your expected loss is 2 cents per game.

7.5 Parrondo's Paradox. Game B: we again repeatedly flip biased coins, but now the coin that is flipped depends on how you have been doing so far in the game. Let w be the number of your wins so far and l the number of your losses so far, so that w − l is your winnings; if it is negative, you have lost money.

7.5 Parrondo's Paradox. Game B uses two biased coins, coin b and coin c. Coin b: if your winnings in dollars are a multiple of 3, then you flip coin b, which comes up heads with probability 1/10 − ε and tails with probability 9/10 + ε. Coin c: otherwise you flip coin c, which comes up heads with probability 3/4 − ε and tails with probability 1/4 + ε. As before, you win a dollar if the coin comes up heads and lose a dollar if it comes up tails.

7.5 Parrondo's Paradox. Example of Game B: coin b comes up heads with probability 1/10 − ε, and coin c comes up heads with probability 3/4 − ε. If we used coin b for the 1/3 of the time that your winnings are a multiple of 3 and coin c for the other 2/3 of the time, then the probability w of winning would be
w = (1/3)(1/10 − ε) + (2/3)(3/4 − ε) = 8/15 − ε > 1/2 for small ε.
So game B is in your favor? But coin b is not necessarily used 1/3 of the time!

7.5 Parrondo's Paradox. Consider what happens when you first start the game, when your winnings are 0. You flip coin b and most likely lose, after which you flip coin c and most likely win. You may spend a great deal of time going back and forth between having lost one dollar and breaking even before either winning one dollar or losing two dollars, and while doing so you may use coin b more than 1/3 of the time.
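The discussion above is easy to see in simulation. A sketch in Python with ε = 0.01 (games A and B both lose; randomly mixing them, which is the combined game the paradox refers to, wins):

    import random

    EPS = 0.01

    def play_a(w):
        return 1 if random.random() < 0.5 - EPS else -1

    def play_b(w):
        # Coin b when winnings are a multiple of 3, coin c otherwise.
        # (Python's % gives nonnegative remainders, so negative w works too.)
        p = (0.10 - EPS) if w % 3 == 0 else (0.75 - EPS)
        return 1 if random.random() < p else -1

    def play_mix(w):
        return play_a(w) if random.random() < 0.5 else play_b(w)

    def drift(game, steps=1_000_000):
        # Average winnings per play over a long run.
        w = 0
        for _ in range(steps):
            w += game(w)
        return w / steps

    for name, game in (("A", play_a), ("B", play_b), ("mix", play_mix)):
        print(name, drift(game))         # A and B negative, mix positive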

7.5 Parrondo's Paradox. How to determine if you are more likely to lose than win:
1. using the absorbing states, either (i) by solving equations directly or (ii) by considering sequences of moves;
2. using the stationary distribution.

7.5 Parrondo's Paradox. Analyzing the absorbing states: suppose that we start playing game B when your winnings are 0, continuing until you either lose 3 dollars or win 3 dollars. Consider the Markov chain on the state space consisting of the integers {−3, −2, −1, 0, 1, 2, 3}, where the states represent your winnings and ±3 are absorbing. We want to know whether, starting at 0, you are more likely to reach −3 before reaching 3.

7.5 Parrondo's Paradox. Analyzing the absorbing states: let q_i be the probability that you end up having lost 3 dollars before having won 3 dollars when your current winnings are i dollars. If q_0 > 1/2, we are more likely to lose 3 dollars than win 3 dollars starting from 0. The boundary conditions are q_{−3} = 1 and q_3 = 0.

7.5 Parrondo's Paradox. Analyzing the absorbing states: for the interior states we obtain
q_0 = (1/10 − ε) q_1 + (9/10 + ε) q_{−1} (coin b),
q_j = (3/4 − ε) q_{j+1} + (1/4 + ε) q_{j−1} for j ∈ {−2, −1, 1, 2} (coin c),
a system of five equations with five unknowns q_{−2}, q_{−1}, q_0, q_1, q_2. Hence it can be solved easily.

7.5 Parrondo's Paradox. Analyzing the absorbing states: in our example of game B (solving numerically with ε = 0.01, the value from game A's example), the solution gives q_0 ≈ 0.555 > 1/2. It shows that one is much more likely to lose than win playing this game over the long run.
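A sketch in Python that sets up and solves these five equations (ε = 0.01 assumed, matching game A's example):

    import numpy as np

    EPS = 0.01
    b, c = 0.10 - EPS, 0.75 - EPS        # heads probabilities of coins b and c

    # Unknowns q_{-2}..q_2; boundary values q_{-3} = 1, q_3 = 0.
    idx = {i: k for k, i in enumerate(range(-2, 3))}
    A, rhs = np.eye(5), np.zeros(5)
    for i in range(-2, 3):
        p = b if i % 3 == 0 else c       # coin b only when winnings are 0 mod 3
        for j, w in ((i + 1, p), (i - 1, 1 - p)):
            if j == -3:
                rhs[idx[i]] += w         # absorbed: q_{-3} = 1
            elif j != 3:                 # q_3 = 0 contributes nothing
                A[idx[i], idx[j]] -= w
    q = np.linalg.solve(A, rhs)
    print(q[idx[0]])                     # q_0 ~ 0.555 > 1/2: a losing game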

7.5 Parrondo's Paradox. How to determine if you are more likely to lose than win:
1. using the absorbing states, either (i) by solving equations directly or (ii) by considering sequences of moves;
2. using the stationary distribution.

7.5 Parrondo's Paradox. Considering sequences of moves: consider any sequence of moves that starts at 0 and ends at 3 before reaching −3. Ex) s = 0,1,2,1,2,1,0,−1,−2,−1,0,1,2,1,2,3. We create a one-to-one and onto mapping between such sequences and the sequences that start at 0 and end at −3 before reaching 3, by negating every number starting from the last 0 in the sequence. Ex) f(s) = 0,1,2,1,2,1,0,−1,−2,−1,0,−1,−2,−1,−2,−3.

7.5 Parrondo's Paradox. Lemma 7.18: For any sequence s of moves that starts at 0 and ends at 3 before reaching −3, we have
Pr(s occurs) / Pr(f(s) occurs) = ((1/10 − ε)(3/4 − ε)²) / ((9/10 + ε)(1/4 + ε)²).
Proof: Let t_1 be the number of transitions from 0 to 1; t_2 the number of transitions from 0 to −1; t_3 the sum of the numbers of transitions from −2 to −1, −1 to 0, 1 to 2, and 2 to 3; and t_4 the sum of the numbers of transitions from 2 to 1, 1 to 0, −1 to −2, and −2 to −3.

7.5 Parrondo's Paradox. Proof (continued): The probability that the sequence s occurs is
Pr(s occurs) = (1/10 − ε)^{t_1} (9/10 + ε)^{t_2} (3/4 − ε)^{t_3} (1/4 + ε)^{t_4}.

7.5 Parrondo's Paradox. Proof (continued): What happens when we transform s into f(s)? We change one transition from 0 to 1 into a transition from 0 to −1. After this point the suffix never returns to 0, and since the sequence s ends at 3, its suffix has 2 more up-transitions than down-transitions; negating it turns those into down-transitions, so t_3 decreases by 2 and t_4 increases by 2. The probability that the sequence f(s) occurs is therefore
Pr(f(s) occurs) = (1/10 − ε)^{t_1 − 1} (9/10 + ε)^{t_2 + 1} (3/4 − ε)^{t_3 − 2} (1/4 + ε)^{t_4 + 2}.

7.5 Parrondo's Paradox. Proof (continued): Dividing the two expressions, we obtain
Pr(s occurs) / Pr(f(s) occurs) = ((1/10 − ε)(3/4 − ε)²) / ((9/10 + ε)(1/4 + ε)²),
which proves the lemma.

7.5 Parrondo's Paradox. Considering sequences of moves: let S be the set of all sequences of moves that start at 0 and end at 3 before reaching −3. Then
Pr(win 3 dollars before losing 3) / Pr(lose 3 dollars before winning 3) = Σ_{s∈S} Pr(s) / Σ_{s∈S} Pr(f(s)).
If this ratio is less than 1, then you are more likely to lose than win. By Lemma 7.18 each term satisfies Pr(s)/Pr(f(s)) < 1 when ε > 0, so the ratio is less than 1, again showing that one is much more likely to lose than win playing this game.

7.5 Parrondo's Paradox. How to determine if you are more likely to lose than win:
1. using the absorbing states, either (i) by solving equations directly or (ii) by considering sequences of moves;
2. using the stationary distribution.

7.5 Parrondo's Paradox. Using the stationary distribution: consider the Markov chain on the states {0, 1, 2}, where the states represent the remainder when your winnings are divided by 3, i.e. (w − l) mod 3. The probability that we win a dollar in the stationary distribution is
π_0 (1/10 − ε) + (π_1 + π_2)(3/4 − ε);
we check whether this is greater than or less than 1/2.

7.5 Parrondo's Paradox. Using the stationary distribution: the equations for the stationary distribution are
π_0 = (1/4 + ε) π_1 + (3/4 − ε) π_2,
π_1 = (1/10 − ε) π_0 + (1/4 + ε) π_2,
π_2 = (9/10 + ε) π_0 + (3/4 − ε) π_1,
π_0 + π_1 + π_2 = 1.

7.5 Parrondo's Paradox. Using the stationary distribution: there are four equations and only three unknowns, but one of the first three equations is redundant, so the system has a unique solution and can be solved easily.
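A sketch in Python solving these four equations and evaluating the winning probability (ε = 0.01 assumed):

    import numpy as np

    EPS = 0.01
    b, c = 0.10 - EPS, 0.75 - EPS        # heads probabilities of coins b and c

    # Chain on remainders {0,1,2}: heads moves w -> w+1 (mod 3), tails w -> w-1.
    P = np.array([[0.0,   b,     1 - b],
                  [1 - c, 0.0,   c    ],
                  [c,     1 - c, 0.0  ]])

    A = np.vstack([P.T - np.eye(3), np.ones(3)])   # pi = pi P plus sum(pi) = 1
    pi = np.linalg.lstsq(A, np.array([0, 0, 0, 1.0]), rcond=None)[0]

    win = pi[0] * b + (pi[1] + pi[2]) * c
    print(pi, win)                       # win < 1/2, so game B loses in the long run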

7.5 Parrondo's Paradox. Using the stationary distribution: if the probability of winning in the stationary distribution is less than 1/2, you lose. In our example, for ε = 0 the solution is π̄ = (5/13, 2/13, 6/13), and the winning probability is (5/13)(1/10) + (8/13)(3/4) = 1/2 exactly; for any ε > 0 it drops below 1/2. Therefore game B is a losing game in the long run.