
1 CS433 Modeling and Simulation Lecture 06 – Part 03 Discrete Markov Chains Dr. Anis Koubâa 12 Apr 2009 Al-Imam Mohammad Ibn Saud University

2 Classification of States: 1
- A path is a sequence of states in which each transition has a positive probability of occurring.
- State j is reachable (or accessible) from state i (i → j) if there is a path from i to j; equivalently, p_ij^(n) > 0 for some n ≥ 0, i.e., the probability of going from i to j in n steps is greater than zero.
- States i and j communicate (i ↔ j) if i is reachable from j and j is reachable from i. (Note: a state i always communicates with itself.)
- A set of states C is a communicating class if every pair of states in C communicates, and no state in C communicates with any state outside C.
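
To make these definitions concrete, here is a small Python sketch (not from the lecture) that finds which states are reachable from each state and groups the states into communicating classes. The 4-state transition matrix P and the function names are made up for illustration.

```python
# Hypothetical 4-state transition matrix, used only for illustration.
P = [
    [0.5, 0.5, 0.0, 0.0],
    [0.3, 0.7, 0.0, 0.0],
    [0.2, 0.0, 0.5, 0.3],
    [0.0, 0.0, 0.0, 1.0],
]

def reachable(P, i):
    """Set of states j with p_ij^(n) > 0 for some n >= 0 (search over positive entries)."""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for j, p in enumerate(P[s]):
            if p > 0 and j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

def communicating_classes(P):
    """Group together states i and j that are reachable from each other."""
    n = len(P)
    reach = [reachable(P, i) for i in range(n)]
    classes, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        cls = {j for j in range(n) if j in reach[i] and i in reach[j]}
        classes.append(cls)
        assigned |= cls
    return classes

print(communicating_classes(P))   # [{0, 1}, {2}, {3}]
```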

3 Classification of States: 1
- A state i is said to be an absorbing state if p_ii = 1.
- A subset S of the state space X is a closed set if no state outside of S is reachable from any state in S (like an absorbing state, but with multiple states); that is, p_ij = 0 for every i ∈ S and j ∉ S.
- A closed set S of states is irreducible if every state j ∈ S is reachable from every state i ∈ S.
- A Markov chain is said to be irreducible if the state space X is irreducible.
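
A similar sketch (again illustrative only, with an assumed 3-state matrix) checks for absorbing states, whether a given set of states is closed, and whether the whole chain is irreducible.

```python
# Minimal checks for absorbing states, closed sets, and irreducibility,
# on an illustrative transition matrix (not from the slides).
P = [
    [0.5, 0.5, 0.0],
    [0.2, 0.8, 0.0],
    [0.1, 0.0, 0.9],
]

def absorbing_states(P):
    return [i for i in range(len(P)) if P[i][i] == 1.0]

def is_closed(P, S):
    """S is closed if no probability leaks from S to states outside S."""
    return all(P[i][j] == 0 for i in S for j in range(len(P)) if j not in S)

def is_irreducible(P):
    """Irreducible iff every state is reachable from every other state."""
    n = len(P)
    def reachable(i):
        seen, stack = {i}, [i]
        while stack:
            s = stack.pop()
            for j, p in enumerate(P[s]):
                if p > 0 and j not in seen:
                    seen.add(j); stack.append(j)
        return seen
    return all(len(reachable(i)) == n for i in range(n))

print(absorbing_states(P))     # []
print(is_closed(P, {0, 1}))    # True: states 0 and 1 never leave {0, 1}
print(is_irreducible(P))       # False: state 2 cannot be reached from states 0 or 1
```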

4 Example
[Diagrams: an irreducible Markov chain on states 0, 1, 2; and a reducible Markov chain on states 0 to 4 containing an absorbing state and a closed irreducible set.]

5 Classification of States: 2
- State i is a transient state if there exists a state j such that j is reachable from i but i is not reachable from j.
- A state that is not transient is recurrent. There are two types of recurrent states:
  1. Positive recurrent: the expected time to return to the state is finite.
  2. Null recurrent (less common): the expected time to return to the state is infinite (this requires an infinite number of states).
- A state i is periodic with period k > 1 if k is the largest integer such that the length of every path leading from state i back to state i is a multiple of k transitions.
- A state is aperiodic if it has period k = 1.
- A state is ergodic if it is positive recurrent and aperiodic.
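
For a finite chain these definitions give a simple test: a state is recurrent exactly when its communicating class is closed, and transient otherwise. Below is a rough Python sketch of that test on a made-up 4-state matrix; note that this classification rule applies to finite chains only.

```python
# Illustrative 4-state chain: {0, 1} is a closed class, 2 is transient, 3 is absorbing.
P = [
    [0.2, 0.8, 0.0, 0.0],
    [0.6, 0.4, 0.0, 0.0],
    [0.3, 0.0, 0.4, 0.3],
    [0.0, 0.0, 0.0, 1.0],
]

def reachable(P, i):
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for j, p in enumerate(P[s]):
            if p > 0 and j not in seen:
                seen.add(j); stack.append(j)
    return seen

def classify(P):
    """For a FINITE chain: a state is recurrent iff its communicating class is closed."""
    n = len(P)
    reach = [reachable(P, i) for i in range(n)]
    labels = {}
    for i in range(n):
        cls = {j for j in range(n) if j in reach[i] and i in reach[j]}
        closed = all(P[a][b] == 0 for a in cls for b in range(n) if b not in cls)
        labels[i] = "recurrent" if closed else "transient"
    return labels

print(classify(P))  # {0: 'recurrent', 1: 'recurrent', 2: 'transient', 3: 'recurrent'}
```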

6 Classification of States: 2. Example from the book Introduction to Probability: Lecture Notes, D. Bertsekas and J. Tsitsiklis, Fall 2000.

7 Transient and Recurrent States
- We define the hitting time T_ij as the random variable that represents the time (number of transitions) needed to go from state i to state j:
  T_ij = min{ k > 0 : X_k = j, given X_0 = i },
  i.e., the minimum number of transitions in a path from i to j.
- We define the recurrence time T_i = T_ii as the first time that the Markov chain returns to state i, given X_0 = i.
- The probability that the first recurrence to state i occurs at the n-th step is
  f_i^(n) = Pr{ T_i = n } = Pr{ X_n = i, X_k ≠ i for k = 1, …, n-1 | X_0 = i }.
- The probability of recurrence to state i is
  f_i = Σ_{n=1..∞} f_i^(n).
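
One way to get a feel for these quantities is simulation. The sketch below (illustrative 3-state chain and parameters, not from the lecture) estimates the distribution of the recurrence time T_i for state 0 by repeatedly running the chain until it returns.

```python
import random

# Illustrative 3-state chain (not from the slides).
P = [
    [0.0, 1.0, 0.0],
    [0.5, 0.0, 0.5],
    [0.4, 0.6, 0.0],
]

def step(state):
    return random.choices(range(len(P)), weights=P[state])[0]

def recurrence_time(i, max_steps=10_000):
    """First return time T_i to state i, starting from X_0 = i (None if never seen)."""
    state, t = i, 0
    while t < max_steps:
        state, t = step(state), t + 1
        if state == i:
            return t
    return None

random.seed(0)
samples = [recurrence_time(0) for _ in range(20_000)]
# Empirical f_0^(n) = Pr{first return to state 0 takes exactly n steps}
for n in range(1, 6):
    print(n, samples.count(n) / len(samples))
```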

8 Transient and Recurrent States
- The mean recurrence time is  M_i = E[T_i] = Σ_{n=1..∞} n f_i^(n).
- A state is recurrent if f_i = 1.
  - If M_i < ∞ it is said to be positive recurrent.
  - If M_i = ∞ it is said to be null recurrent.
- A state is transient if f_i < 1. In that case, 1 - f_i is the probability of never returning to state i.
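
The first-return probabilities can also be computed exactly from the renewal identity p_ii^(n) = Σ_{k=1..n} f_i^(k) p_ii^(n-k). A minimal numpy sketch, on an assumed 3-state chain whose state 2 is transient, sums them to approximate f_i and M_i (the series are truncated at N terms, so the printed values are approximations).

```python
import numpy as np

# Illustrative chain: states 0 and 1 form a closed class, state 2 is transient.
P = np.array([
    [0.0, 1.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.3, 0.3, 0.4],
])

def first_return_probs(P, i, N=200):
    """f_i^(n) from the identity  p_ii^(n) = sum_{k=1..n} f_i^(k) * p_ii^(n-k)."""
    p_ii = [1.0]                        # p_ii^(0) = 1
    Pn = np.eye(len(P))
    for _ in range(N):
        Pn = Pn @ P
        p_ii.append(Pn[i, i])
    f = [0.0]
    for n in range(1, N + 1):
        f.append(p_ii[n] - sum(f[k] * p_ii[n - k] for k in range(1, n)))
    return f

for i in range(3):
    f = first_return_probs(P, i)
    f_i = sum(f)                                   # probability of ever returning
    M_i = sum(n * fn for n, fn in enumerate(f))    # mean recurrence time (meaningful when f_i = 1)
    print(i, round(f_i, 4), round(M_i, 4))         # states 0, 1: f_i = 1, M_i = 2; state 2: f_i = 0.4
```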

9 Transient and Recurrent States
- We define N_i as the number of visits to state i given X_0 = i.
- Theorem: If N_i is the number of visits to state i given X_0 = i, then
  E[N_i] = Σ_{n=0..∞} p_ii^(n),
  where p_ii^(n) is the transition probability from state i to state i after n steps. (The proof writes N_i as a sum of indicator variables, one per step, and takes expectations.)
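
A quick numerical illustration of the theorem (the chain and the truncation length are assumptions): summing the diagonal of P^n shows the partial sums growing without bound for the recurrent states and converging for the transient one.

```python
import numpy as np

# Illustrative chain: states 0 and 1 are recurrent, state 2 is transient.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.5, 0.5, 0.0],
    [0.2, 0.2, 0.6],
])

# Partial sums of sum_n p_ii^(n): they diverge for recurrent states
# and converge (here to 1/(1 - f_2) = 1/(1 - 0.6) = 2.5) for the transient state.
total = np.zeros(3)
Pn = np.eye(3)
for n in range(500):
    total += np.diag(Pn)   # adds p_ii^(n), starting with n = 0
    Pn = Pn @ P

print(total)   # large entries for states 0 and 1; entry for state 2 is close to 2.5
```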

10 Transient and Recurrent States
- The probability of reaching state j for the first time in n steps, starting from X_0 = i, is
  f_ij^(n) = Pr{ X_n = j, X_k ≠ j for k = 1, …, n-1 | X_0 = i }.
- The probability of ever reaching j starting from state i is
  f_ij = Σ_{n=1..∞} f_ij^(n).

11 Three Theorems
- If a Markov chain has a finite state space, then at least one of the states is recurrent.
- If state i is recurrent and state j is reachable from state i, then state j is also recurrent.
- If S is a finite closed irreducible set of states, then every state in S is recurrent.

12 Positive and Null Recurrent States
- Let M_i be the mean recurrence time of state i.
- A state is said to be positive recurrent if M_i < ∞. If M_i = ∞ then the state is said to be null recurrent.

Three Theorems
- If state i is positive recurrent and state j is reachable from state i, then state j is also positive recurrent.
- If S is a closed irreducible set of states, then either every state in S is positive recurrent, or every state in S is null recurrent, or every state in S is transient.
- If S is a finite closed irreducible set of states, then every state in S is positive recurrent.

13 Example
[Diagram: the reducible Markov chain on states 0 to 4 from the earlier example, with its recurrent state, transient states, and positive recurrent states labeled.]

14 Periodic and Aperiodic States
- Suppose that the structure of the Markov chain is such that state i can be revisited only after a number of steps that is an integer multiple of an integer d > 1. Then the state is called periodic with period d.
- If no such integer exists (i.e., d = 1), then the state is called aperiodic.

Example 10.5: [Diagram: a three-state chain on states 0, 1, 2 in which the states are periodic with period d = 2.]
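
Since the period is the greatest common divisor of all return-path lengths, it can be estimated numerically by taking the gcd over the first N powers of P. This is a heuristic sketch with made-up matrices; N must be large enough to capture the return structure.

```python
from math import gcd
import numpy as np

# Two-state chain that alternates deterministically: every return to a state
# takes an even number of steps, so the period is d = 2.
P = np.array([
    [0.0, 1.0],
    [1.0, 0.0],
])

def period(P, i, N=50):
    """gcd of all n <= N with p_ii^(n) > 0 (d = 1 means aperiodic)."""
    d = 0
    Pn = np.eye(len(P))
    for n in range(1, N + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:
            d = gcd(d, n)
    return d

print(period(P, 0))   # 2

# Adding a self-loop makes the state aperiodic.
Q = np.array([
    [0.1, 0.9],
    [1.0, 0.0],
])
print(period(Q, 0))   # 1
```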

15 Steady State Analysis
Recall that the state probability, i.e., the probability of finding the Markov chain in state j after the k-th step, is given by
  π_j(k) = Pr{ X_k = j },  with  π(k) = [π_0(k), π_1(k), …].
An interesting question is what happens in the "long run":
  π_j = lim_{k→∞} π_j(k).
This is referred to as the steady state, equilibrium, or stationary state probability.

Questions:
- Do these limits exist?
- If they exist, do they converge to a legitimate probability distribution, i.e., Σ_j π_j = 1?
- How do we evaluate π_j for all j?
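
A direct way to explore the "long run" question is to iterate π(k+1) = π(k) P and watch whether π(k) settles. Below is a small sketch with an assumed irreducible aperiodic chain.

```python
import numpy as np

# Illustrative irreducible aperiodic chain (not from the slides).
P = np.array([
    [0.5, 0.4, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
])

pi = np.array([1.0, 0.0, 0.0])   # start in state 0: pi(0)
for k in range(50):
    pi = pi @ P                  # pi(k+1) = pi(k) P
print(pi)                         # converges to the steady-state vector
print(pi.sum())                   # ~1.0, a legitimate probability distribution
```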

16 Steady State Analysis
- Recall the recursive probability  π(k+1) = π(k) P.
- If a steady state exists, then π(k+1) → π(k), and therefore the steady state probabilities are given by the solution to the equations
  π = π P  and  Σ_j π_j = 1.
- Even in an irreducible Markov chain, the presence of periodic states prevents the existence of a steady state probability (example: periodic.m).

17 Steady State Analysis
THEOREM: In an irreducible aperiodic Markov chain consisting of positive recurrent states (an ergodic Markov chain), a unique stationary state probability vector π exists such that π_j > 0 and
  π_j = lim_{k→∞} π_j(k) = 1 / M_j,
where M_j is the mean recurrence time of state j. The steady state vector π is determined by solving
  π = π P  and  Σ_j π_j = 1.
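
In practice the stationary vector is found by solving the linear system π = πP together with Σ_j π_j = 1, for example by replacing one redundant balance equation with the normalization constraint. A numpy sketch, with the same assumed matrix as above:

```python
import numpy as np

P = np.array([
    [0.5, 0.4, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
])
n = len(P)

# Solve pi (P - I) = 0 together with sum(pi) = 1 by replacing one
# redundant balance equation with the normalization constraint.
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n); b[-1] = 1.0
pi = np.linalg.solve(A, b)

print(pi)   # stationary vector; by the theorem, 1/pi_j equals the mean recurrence time M_j
```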

18 Discrete Birth-Death Example
[Diagram: a chain on states 0, 1, 2, …, i, … in which each state moves one step toward 0 with probability p and one step up with probability 1-p; state 0 returns to itself with probability p.]
Thus, to find the steady state vector π we need to solve
  π = π P  and  Σ_j π_j = 1.

19 Discrete Birth-Death Example
In other words, the balance equations are
  π_0 = p π_0 + p π_1
  π_j = (1-p) π_{j-1} + p π_{j+1},  j = 1, 2, …
Solving these equations we get
  π_1 = ((1-p)/p) π_0.
In general,
  π_j = ((1-p)/p)^j π_0,  j = 0, 1, 2, …
Summing all terms we get
  Σ_{j=0..∞} π_j = π_0 Σ_{j=0..∞} ((1-p)/p)^j = 1.

20 Discrete Birth-Death Example
Therefore, for all states j we get
  π_j = ((1-p)/p)^j π_0,  with  π_0 = 1 / Σ_{j=0..∞} ((1-p)/p)^j.
- If p < 1/2, the sum diverges and no valid π_0 > 0 exists: all states are transient.
- If p > 1/2, the geometric series converges, π_0 = (2p-1)/p, and π_j = ((2p-1)/p) ((1-p)/p)^j: all states are positive recurrent.
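
A quick check of the closed form for p > 1/2, using the expressions derived above; the value p = 0.7 and the truncation at 50 states are arbitrary choices for illustration.

```python
# Stationary probabilities of the birth-death chain for p > 1/2,
# using the closed form pi_j = ((1-p)/p)^j * pi_0 derived above.
p = 0.7
rho = (1 - p) / p          # ratio of successive probabilities, < 1 when p > 1/2
pi0 = 1 - rho              # = (2p - 1)/p, from the geometric-series normalization
pi = [pi0 * rho**j for j in range(50)]

print(pi[:5])              # [0.5714..., 0.2449..., 0.1049..., ...]
print(sum(pi))             # ~1.0 (the tail beyond j = 49 is negligible)
```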

21 Discrete Birth-Death Example
If p = 1/2, the ratio (1-p)/p equals 1, so the normalizing sum diverges and no stationary distribution exists, yet every state is still revisited with probability 1: all states are null recurrent.

22 Reducible Markov Chains
In steady state, the Markov chain will eventually end up either in an irreducible set (where the previous analysis still holds) or in an absorbing state. The only question that arises, when there are two or more irreducible sets, is the probability that the chain ends up in each set.
[Diagram: a transient set T with transitions into two irreducible sets S1 and S2.]

23 Reducible Markov Chains
Suppose we start from a transient state i. Then, there are two ways to reach the irreducible set S:
- in one step, or
- by going to some state r ∈ T after k steps, and then from r to S.
Define ρ_i(S) as the probability that, starting from state i, the chain eventually ends up in the irreducible set S.
[Diagram: transient set T containing states i and r, with transitions into the irreducible set S = {s_1, …, s_n}.]

24 Reducible Markov Chains
First consider the one-step transition: the chain enters S directly with probability Σ_{j∈S} p_ij. Next consider the general case for k = 2, 3, …: the chain first moves to some state r ∈ T and then reaches S from r. Combining the two cases, the absorption probabilities satisfy
  ρ_i(S) = Σ_{j∈S} p_ij + Σ_{r∈T} p_ir ρ_r(S),  for i ∈ T.
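
These equations form a linear system over the transient states: writing Q for the T-to-T block of P and R for the T-to-S block, ρ(S) = (I - Q)^{-1} R 1. Below is a numpy sketch on a made-up reducible chain with two absorbing sets; the matrix and set labels are assumptions for illustration.

```python
import numpy as np

# Illustrative reducible chain (not from the slides):
# transient set T = {0, 1}; irreducible sets S1 = {2} and S2 = {3} (both absorbing here).
P = np.array([
    [0.4, 0.3, 0.2, 0.1],
    [0.2, 0.5, 0.1, 0.2],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

T = [0, 1]
Q = P[np.ix_(T, T)]                 # transitions within the transient set
for S, name in ([2], "S1"), ([3], "S2"):
    R = P[np.ix_(T, S)]             # one-step transitions from T into S
    rho = np.linalg.solve(np.eye(len(T)) - Q, R.sum(axis=1))
    print(name, rho)                # rho[i] = Pr{chain ends up in S | start in transient state T[i]}
```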

