CS433 Modeling and Simulation
Lecture 06 – Part 03: Discrete Markov Chains
Dr. Anis Koubâa
12 Apr 2009
Al-Imam Mohammad Ibn Saud University

Classification of States: 1

- A path is a sequence of states in which each transition has a positive probability of occurring.
- State j is reachable (or accessible) from state i (i → j) if there is a path from i to j; equivalently, P_ij(n) > 0 for some n ≥ 0, i.e., the probability of going from i to j in n steps is positive.
- States i and j communicate (i ↔ j) if i is reachable from j and j is reachable from i. (Note: a state always communicates with itself.)
- A set of states C is a communicating class if every pair of states in C communicates, and no state in C communicates with any state outside C.

Classification of States: 1

- A state i is said to be an absorbing state if p_ii = 1.
- A subset S of the state space X is a closed set if no state outside S is reachable from any state in S (like an absorbing state, but with multiple states); that is, p_ij = 0 for every i ∈ S and j ∉ S.
- A closed set S of states is irreducible if every state j ∈ S is reachable from every state i ∈ S.
- A Markov chain is said to be irreducible if its state space X is irreducible.
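These definitions are easy to check numerically. Below is a minimal Python sketch (the transition matrix and function names are illustrative, not part of the lecture) that computes reachability, groups states into communicating classes, and tests whether a chain is irreducible.

```python
import numpy as np

def reachable(P):
    """R[i, j] = True iff state j is reachable from state i in n >= 0 steps."""
    n = P.shape[0]
    A = (P > 0) | np.eye(n, dtype=bool)  # one-step moves, plus the trivial 0-step path
    # Boolean squaring: after ceil(log2(n)) rounds, A covers paths of every needed length.
    for _ in range(int(np.ceil(np.log2(max(n, 2))))):
        A = (A.astype(int) @ A.astype(int)) > 0
    return A

def communicating_classes(P):
    """Partition states into classes of mutually reachable (communicating) states."""
    R = reachable(P)
    comm = R & R.T                       # comm[i, j] = True iff i <-> j
    classes, seen = [], set()
    for i in range(P.shape[0]):
        if i not in seen:
            cls = sorted(np.flatnonzero(comm[i]).tolist())
            classes.append(cls)
            seen.update(cls)
    return classes

def is_irreducible(P):
    return len(communicating_classes(P)) == 1

# Hypothetical 3-state chain in the spirit of the irreducible example below.
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.0, 0.7],
              [0.0, 0.4, 0.6]])
print(communicating_classes(P))  # [[0, 1, 2]]
print(is_irreducible(P))         # True
```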

Example

[Figure: two state-transition diagrams. The first shows an irreducible Markov chain on states {0, 1, 2} with transitions p_00, p_01, p_10, p_12, p_21, p_22. The second shows a reducible Markov chain on states {0, 1, 2, 3, 4} in which state 4 is an absorbing state and states {2, 3} form a closed irreducible set (transitions p_00, p_01, p_10, p_12, p_14, p_22, p_23, p_32).]

Classification of States: 2

- State i is a transient state if there exists a state j such that j is reachable from i but i is not reachable from j.
- A state that is not transient is recurrent. There are two types of recurrent states:
  1. Positive recurrent: the expected time to return to the state is finite.
  2. Null recurrent (less common): the expected time to return to the state is infinite (this requires an infinite number of states).
- A state i is periodic with period k > 1 if k is the largest number such that every path leading from state i back to state i has a number of transitions that is a multiple of k.
- A state is aperiodic if it has period k = 1.
- A state is ergodic if it is positive recurrent and aperiodic.

Classification of States: 2

Example from the book Introduction to Probability: Lecture Notes, D. Bertsekas and J. Tsitsiklis, Fall 200.

Transient and Recurrent States

- We define the hitting time T_ij as the random variable representing the time to go from state i to state j:
  T_ij = min{ k ≥ 1 : X_k = j given X_0 = i }
  where k is the number of transitions in a path from i to j; T_ij is thus the minimum number of transitions in a path from i to j.
- We define the recurrence time T_ii as the first time the Markov chain returns to state i (the time of the first visit back to i given X_0 = i).
- The probability that the first recurrence to state i occurs at the n-th step is
  f_i(n) = Pr{ T_ii = n }
- The probability of recurrence to state i is
  f_i = Pr{ T_ii < ∞ } = Σ_{n=1}^∞ f_i(n)

Transient and Recurrent States

- The mean recurrence time of state i is
  M_i = E[T_ii] = Σ_{n=1}^∞ n f_i(n)
- A state is recurrent if f_i = 1.
  - If M_i < ∞, it is said to be positive recurrent.
  - If M_i = ∞, it is said to be null recurrent.
- A state is transient if f_i < 1.
  - In that case, 1 − f_i is the probability of never returning to state i.
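Both quantities lend themselves to Monte Carlo estimation. The sketch below (the chain, seed, and sample sizes are arbitrary choices, not from the lecture) estimates f_0 and M_0 for state 0 by simulating return times, using a finite horizon as a practical stand-in for "never returns".

```python
import numpy as np

rng = np.random.default_rng(0)

def return_time(P, i, max_steps=10_000):
    """Simulate from state i; return the step of first return to i,
    or None if no return is seen within max_steps."""
    state = i
    for n in range(1, max_steps + 1):
        state = rng.choice(P.shape[0], p=P[state])
        if state == i:
            return n
    return None

# Hypothetical finite chain: every state here is positive recurrent.
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.0, 0.7],
              [0.0, 0.4, 0.6]])

times = [return_time(P, 0) for _ in range(5_000)]
returned = [t for t in times if t is not None]
print(f"f_0 ~ {len(returned) / len(times):.3f}")  # recurrence probability estimate
print(f"M_0 ~ {np.mean(returned):.2f}")           # mean recurrence time estimate
```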

Transient and Recurrent States

- We define N_i as the number of visits to state i given X_0 = i.
- Theorem: If N_i is the number of visits to state i given X_0 = i, then
  E[N_i] = Σ_{n=0}^∞ P_ii(n)
  where P_ii(n) is the transition probability from state i back to state i after n steps. Consequently, state i is recurrent if and only if this sum diverges, and transient if and only if it converges.
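The theorem gives a concrete recurrence test: the partial sums of P_ii(n) grow without bound for a recurrent state but converge for a transient one. A quick numeric check, using two small hypothetical chains:

```python
import numpy as np

def partial_return_sum(P, i, N):
    """Sum of P_ii(n) for n = 0..N, accumulated by iterating matrix powers."""
    total, Pn = 0.0, np.eye(P.shape[0])
    for _ in range(N + 1):
        total += Pn[i, i]
        Pn = Pn @ P
    return total

P_rec = np.array([[0.5, 0.5],   # finite irreducible chain: state 0 is recurrent
                  [0.2, 0.8]])
P_tr = np.array([[0.5, 0.5],    # state 1 is absorbing, so state 0 is transient
                 [0.0, 1.0]])

for N in (10, 100, 1000):
    print(N, partial_return_sum(P_rec, 0, N), partial_return_sum(P_tr, 0, N))
# The recurrent column keeps growing (roughly linearly in N);
# the transient column converges to 1 / (1 - f_0) = 2.
```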

Transient and Recurrent States

- Let f_ij(n) denote the probability of reaching state j for the first time in n steps, starting from X_0 = i.
- The probability of ever reaching j starting from state i is
  f_ij = Σ_{n=1}^∞ f_ij(n)

Three Theorems

- If a Markov chain has a finite state space, then at least one of its states is recurrent.
- If state i is recurrent and state j is reachable from state i, then state j is also recurrent.
- If S is a finite closed irreducible set of states, then every state in S is recurrent.

Positive and Null Recurrent States

- Let M_i be the mean recurrence time of state i.
- A state is said to be positive recurrent if M_i < ∞. If M_i = ∞, the state is said to be null recurrent.

Three Theorems

- If state i is positive recurrent and state j is reachable from state i, then state j is also positive recurrent.
- If S is a closed irreducible set of states, then either every state in S is positive recurrent, or every state in S is null recurrent, or every state in S is transient.
- If S is a finite closed irreducible set of states, then every state in S is positive recurrent.

Example

[Figure: a reducible Markov chain on states {0, 1, 2, 3, 4} with transitions p_00, p_01, p_10, p_12, p_14, p_22, p_23, p_32. State 4 is a recurrent (absorbing) state, states 0 and 1 are transient states, and states 2 and 3 are positive recurrent states.]

Periodic and Aperiodic States

- Suppose the structure of the Markov chain is such that state i is visited only after a number of steps that is an integer multiple of an integer d > 1. Then the state is called periodic with period d.
- If no such integer exists (i.e., d = 1), the state is called aperiodic.

Example: [Figure: a chain that returns to each state only every other step, so each state is periodic with period d = 2.]
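Equivalently, the period of state i is d(i) = gcd{ n ≥ 1 : P_ii(n) > 0 }, which is easy to compute for a small chain by scanning which return lengths are possible. A sketch (the finite horizon and the two-state example are assumptions for illustration):

```python
import numpy as np
from math import gcd

def period(P, i, horizon=None):
    """gcd of the path lengths n <= horizon for which a return i -> i is possible."""
    n_states = P.shape[0]
    horizon = horizon or 2 * n_states * n_states  # enough for small chains
    A = (P > 0).astype(int)                       # adjacency of possible one-step moves
    An, d = np.eye(n_states, dtype=int), 0
    for n in range(1, horizon + 1):
        An = (An @ A > 0).astype(int)             # which n-step moves are possible
        if An[i, i]:
            d = gcd(d, n)                         # gcd(0, n) == n on the first hit
    return d

# Two states that deterministically swap: returns occur only at even n.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(period(P, 0))  # 2
```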

Steady State Analysis

Recall the state probability, i.e., the probability of finding the Markov chain in state j after the k-th step:
  π_j(k) = Pr{ X_k = j }
An interesting question is what happens in the "long run":
  π_j = lim_{k→∞} π_j(k)
This is referred to as the steady state (or equilibrium, or stationary) probability.

Questions:
- Do these limits exist?
- If they exist, do they converge to a legitimate probability distribution, i.e., Σ_j π_j = 1?
- How do we evaluate π_j for all j?

Steady State Analysis

Recall the recursive probability
  π(k+1) = π(k) P
If a steady state exists, then π(k+1) → π(k), and therefore the steady state probabilities are given by the solution to the equations
  π = π P
Even in an irreducible Markov chain, the presence of periodic states prevents the existence of a steady state probability (see the power-iteration sketch below; the lecture's periodic.m demo illustrates the same point).
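A power-iteration sketch of the recursion π(k+1) = π(k)P makes the contrast visible (reimplemented here in Python with hypothetical matrices, in the spirit of the periodic.m demo): an aperiodic chain settles to a limit, while a period-2 chain oscillates forever.

```python
import numpy as np

def iterate(P, pi0, steps):
    """Return pi(k) = pi0 @ P^k for k = 0..steps."""
    out, pi = [pi0], pi0
    for _ in range(steps):
        pi = pi @ P
        out.append(pi)
    return np.array(out)

pi0 = np.array([1.0, 0.0])              # start in state 0

P_aperiodic = np.array([[0.5, 0.5],
                        [0.2, 0.8]])
P_periodic = np.array([[0.0, 1.0],      # deterministic swap: period 2
                       [1.0, 0.0]])

print(iterate(P_aperiodic, pi0, 20)[-3:])  # rows converge to ~[0.286, 0.714]
print(iterate(P_periodic, pi0, 20)[-3:])   # rows keep alternating [1, 0], [0, 1]
```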

Steady State Analysis

THEOREM: In an irreducible, aperiodic Markov chain consisting of positive recurrent states (an ergodic Markov chain), a unique stationary state probability vector π exists such that π_j > 0 and
  π_j = lim_{k→∞} π_j(k) = 1 / M_j
where M_j is the mean recurrence time of state j. The steady state vector π is determined by solving
  π = π P  and  Σ_j π_j = 1
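In practice π = πP with Σ_j π_j = 1 is solved as a linear system: transpose the balance equations and replace one of them with the normalization constraint. A minimal sketch under an assumed 3-state ergodic chain; the last line prints 1/π_j, which by the theorem equals the mean recurrence times M_j.

```python
import numpy as np

def stationary(P):
    """Solve pi = pi P together with sum(pi) = 1 as a linear system."""
    n = P.shape[0]
    A = P.T - np.eye(n)   # (P^T - I) pi^T = 0 are the balance equations
    A[-1, :] = 1.0        # replace one equation with the normalization sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.0, 0.7],
              [0.0, 0.4, 0.6]])

pi = stationary(P)
print(pi)        # stationary probabilities pi_j
print(1.0 / pi)  # implied mean recurrence times M_j = 1 / pi_j
```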

Discrete Birth-Death Example

[Figure: a birth-death chain on states 0, 1, …, i, …, in which each state i moves up to i+1 with probability 1−p and down to i−1 with probability p, and state 0 returns to itself with probability p.]

Thus, to find the steady state vector π we need to solve
  π = π P  and  Σ_j π_j = 1

Discrete Birth-Death Example

In other words:
  π_0 = p π_0 + p π_1
  π_j = (1−p) π_{j−1} + p π_{j+1},  for j ≥ 1
Solving these equations we get
  π_1 = ((1−p)/p) π_0
In general,
  π_j = ((1−p)/p)^j π_0
Summing all terms we get
  Σ_{j=0}^∞ π_j = π_0 Σ_{j=0}^∞ ((1−p)/p)^j = 1

Discrete Birth-Death Example

Therefore, for all states j we get
  π_j = (1 − (1−p)/p) ((1−p)/p)^j = ((2p−1)/p) ((1−p)/p)^j,  provided (1−p)/p < 1
- If p < 1/2, the chain drifts toward infinity and all states are transient.
- If p > 1/2, all states are positive recurrent.

Discrete Birth-Death Example

- If p = 1/2, all states are null recurrent.
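The closed form can be sanity-checked numerically by truncating the infinite chain at a large state N and solving the finite system; the truncation level and the choice p = 0.7 below are assumptions for the illustration. The computed π_j should match ((2p−1)/p)((1−p)/p)^j for small j.

```python
import numpy as np

def truncated_birth_death(p, N):
    """Birth-death chain on {0..N}: up with prob 1-p, down with prob p,
    with the upward mass at the boundary state N folded into a self-loop."""
    P = np.zeros((N + 1, N + 1))
    P[0, 0], P[0, 1] = p, 1 - p
    for i in range(1, N):
        P[i, i - 1], P[i, i + 1] = p, 1 - p
    P[N, N - 1], P[N, N] = p, 1 - p
    return P

def stationary(P):
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

p, N = 0.7, 200
pi = stationary(truncated_birth_death(p, N))
r = (1 - p) / p
print(pi[:5])                       # numerical solution
print((1 - r) * r ** np.arange(5))  # closed form (2p-1)/p * ((1-p)/p)^j
```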

Reducible Markov Chains

In steady state, the Markov chain will eventually end up either in an irreducible set, where the previous analysis still holds, or in an absorbing state. The only question that arises, in case there are two or more irreducible sets, is the probability of ending up in each set.

[Figure: a transient set T with transitions into two irreducible sets S_1 and S_2.]

Reducible Markov Chains

Suppose we start from state i in the transient set T. Then there are two ways to enter the irreducible set S:
- in one step, or
- by moving to some state r ∈ T after k steps, and then from r to S.

Define f_i(k) as the probability that, starting from state i, the chain enters S for the first time after exactly k steps.

[Figure: a transient state i with a direct transition into the irreducible set S = {s_1, …, s_n} and an indirect route through another transient state r.]

Reducible Markov Chains

First consider the one-step transition:
  f_i(1) = Σ_{s∈S} p_is
Next consider the general case for k = 2, 3, …: to enter S for the first time after k steps, the chain must first move to some transient state r and then enter S from r in k−1 steps:
  f_i(k) = Σ_{r∈T} p_ir f_r(k−1)
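Rather than summing the f_i(k) terms over k, the total absorption probabilities can be obtained from one linear system: collapse each irreducible set into a single absorbing state, split P into the transient-to-transient block Q and the transient-to-absorbing block R, and solve B = (I − Q)^(-1) R. The 4-state chain below is a hypothetical illustration with transient states {0, 1} and two absorbing targets standing in for S_1 and S_2.

```python
import numpy as np

# Hypothetical chain: states 0, 1 transient; states 2, 3 absorbing
# (each absorbing state stands in for a whole irreducible set).
P = np.array([[0.2, 0.3, 0.5, 0.0],
              [0.4, 0.1, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

transient, absorbing = [0, 1], [2, 3]
Q = P[np.ix_(transient, transient)]  # transitions within the transient set T
R = P[np.ix_(transient, absorbing)]  # one-step transitions from T into each set

# B[i, s] = probability of eventually being absorbed in set s starting from i.
# Solving (I - Q) B = R sums the k-step first-passage contributions over all k.
B = np.linalg.solve(np.eye(len(transient)) - Q, R)
print(B)              # [[0.75, 0.25], [0.333..., 0.666...]]
print(B.sum(axis=1))  # each row sums to 1
```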