Discrete Time Markov Chains
Discrete Time Markov Chain (DTMC)
Consider a system where transitions take place at discrete time instants, and where the state of the system at time k is denoted X_k. The Markov property states that the future depends only on the present: P[X_{k+1} = j | X_k = i, X_{k-1}, ..., X_0] = P[X_{k+1} = j | X_k = i] = p_ij. The p_ij's are the state transition probabilities. The system evolution is fully specified by its state transition probability matrix P, with [P]_{ij} = p_ij.
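A minimal sketch of this in Python/numpy (the 2-state matrix P below is a made-up toy example, not from the slides): the next state is sampled from the row of P indexed by the current state, which is exactly the Markov property.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state chain, for illustration only.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def simulate(P, x0, n_steps, rng):
    """Simulate a DTMC: X_{k+1} is drawn from row X_k of P."""
    x = x0
    path = [x]
    for _ in range(n_steps):
        x = rng.choice(len(P), p=P[x])  # the future depends only on the present state
        path.append(x)
    return path

print(simulate(P, 0, 10, rng))
```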
State Transition Probability Matrix
P is a stochastic matrix: all entries are nonnegative and the entries in each row sum to 1. The n-step transition probability matrix is P^n; its entry P_ij^n is the probability of being in state j after n steps when starting in state i. Limiting probabilities: the odds of being in a given state after a very large (infinite) number of transitions.
Chapman-Kolmogorov Equations
Track state evolution based on the transition probabilities between all states: P_ij^n is the probability of being in state j after n transitions when starting in state i. The Chapman-Kolmogorov (C-K) equations, P_ij^{n+m} = \sum_k P_ik^n P_kj^m, provide a simple recursion to compute higher-order transition probabilities.
More on C-K Equations
In matrix form, where P is the one-step transition probability matrix, the Chapman-Kolmogorov equations state P^n = P^m P^{n-m}, 0 ≤ m ≤ n.
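The matrix form is easy to check numerically; a short sketch (reusing the illustrative 2-state P from before):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])  # illustrative 2-state chain

n, m = 5, 2
lhs = np.linalg.matrix_power(P, n)
rhs = np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n - m)
print(np.allclose(lhs, rhs))  # True: P^n = P^m P^(n-m)
```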
Limiting Distribution
From P we know that p_ij is the probability of being in state j after step 1, given that we start in state i. In general, let \pi^{(0)} be the initial state probability vector. The state probability vector after the first step is then \pi^{(1)} = \pi^{(0)}P, and in general, after n steps, \pi^{(n)} = \pi^{(0)}P^n. Does \pi^{(n)} converge to a limit as n → ∞? (the limiting distribution)
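A sketch of this iteration in Python/numpy (same illustrative 2-state P as above; the starting vector is an assumption for the example):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = np.array([1.0, 0.0])  # pi^(0): start in state 0

for n in range(1, 51):
    pi = pi @ P  # pi^(n) = pi^(n-1) P
print(pi)  # approaches the limiting distribution, if it exists
```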
A Simple 3-State Example
(State-transition diagram: states 0, 1, 2 with transition probabilities 1-p, p², p(1-p), p — figure not reproduced.) Note that the relation \pi^{(n)} = \pi^{(0)}P^n implies that, conditioned on starting in state i, \pi^{(n)} is simply the ith row of P^n.
A Simple 3-State Example
For large n, the rows of P^n become (approximately) identical: the limit appears independent of the starting state i.
Stationary Distribution
A Markov chain with M states admits a stationary probability distribution \pi = [\pi_0, \pi_1, ..., \pi_{M-1}] if \pi P = \pi and \sum_{i=0}^{M-1} \pi_i = 1. In other words, \sum_{i=0}^{M-1} \pi_i p_ij = \pi_j for all j, and \sum_{i=0}^{M-1} \pi_i = 1.
Stationary vs. Limiting Distributions
Assume that the limiting distribution \pi = [\pi_0, \pi_1, ..., \pi_{M-1}], where \pi_j = lim_{n→∞} P_ij^n > 0, exists and is a distribution, i.e., \sum_{i=0}^{M-1} \pi_i = 1. Then \pi is also a stationary distribution, i.e., \pi = \pi P, and no other stationary distribution exists. We can therefore find the limiting distribution of a DTMC either by solving the stationary equations or by raising the matrix P to some large power.
Summary: Computing DTMC Probabilities
- Use the stationary equations: \pi = \pi P and \sum_i \pi_i = 1.
- Numerically compute successive values of P^n (and stop when all rows are approximately equal). The limit converges to a matrix with identical rows, where each row is equal to \pi.
- Guess a solution to the recurrence relation.
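The first two methods can be sketched in a few lines of Python/numpy (the 2-state P is again the illustrative toy chain, not from the slides):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
M = len(P)

# Method 1: solve the stationary equations pi P = pi, sum(pi) = 1.
# Rewrite as (P^T - I) pi^T = 0 and replace one redundant equation
# by the normalization condition.
A = np.vstack([(P.T - np.eye(M))[:-1], np.ones(M)])
b = np.zeros(M); b[-1] = 1.0
pi = np.linalg.solve(A, b)

# Method 2: raise P to a large power; every row converges to pi.
Pn = np.linalg.matrix_power(P, 100)

print(pi, Pn[0], Pn[1])  # all three agree
```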
Back to Our Simple 3-State Example
(Same state-transition diagram as before: states 0, 1, 2 with transition probabilities 1-p, p², p(1-p), p — figure not reproduced.)
The Umbrella Example
(State-transition diagram: states 0, 1, 2 = number of umbrellas at the professor's current location, with transition probabilities 1, 1-p, p — figure not reproduced.) For p = 0.6, this gives \pi = [1/6, 5/12, 5/12]. The probability P_wet that the professor gets wet is Prob[zero umbrellas and rain]. The two events are independent, so that P_wet = \pi_0 p = (1/6)(0.6) = 0.1. Average number of umbrellas at a location: E[U] = \pi_1 + 2\pi_2 = 1.25.
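A numeric check in Python/numpy. The transition matrix below is my reconstruction from the diagram (state = umbrellas at the current location, 2 umbrellas total: with no umbrella you walk unprotected and arrive where both umbrellas are; otherwise you carry one over only if it rains):

```python
import numpy as np

p = 0.6  # probability of rain
P = np.array([[0.0,   0.0,   1.0],   # 0 umbrellas here -> 2 at the other end
              [0.0, 1 - p,     p],   # rain: carry one over; else stay at 1
              [1 - p,   p,   0.0]])  # rain: other end gets a 1st umbrella

pi = np.linalg.matrix_power(P, 200)[0]
print(pi)                 # ~ [1/6, 5/12, 5/12]
print(pi[0] * p)          # P_wet = pi_0 * p = 0.1
print(pi[1] + 2 * pi[2])  # E[U] = 1.25
```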
Infinite DTMC: handling chains with an infinite state space
We can no longer use matrix multiplication. But the result that if the limiting distribution exists, it is the only stationary distribution, still holds. So we can still use the stationary equations, provided they have "some structure" we can exploit.
A Simple (Birth-Death) Example
(State-transition diagram: states 0, 1, 2, 3, 4, ... with birth probability r, death probability s, and self-loop probability 1-r-s — figure not reproduced.) The (infinite) transition probability matrix, and correspondingly the stationary equations, are of the form

P = [ 1-r    r     0     0   ... ]
    [  s   1-r-s   r     0   ... ]
    [  0     s   1-r-s   r   ... ]
    [ ...              ...       ]
A Simple (Birth-Death) Example
(Same birth-death chain as above.) The stationary equations can be rewritten as follows:
\pi_0 = \pi_0(1-r) + \pi_1 s  ⇒  \pi_1 = (r/s)\pi_0
\pi_1 = \pi_0 r + \pi_1(1-r-s) + \pi_2 s  ⇒  \pi_2 = (r/s)\pi_1 = (r/s)²\pi_0
\pi_2 = \pi_1 r + \pi_2(1-r-s) + \pi_3 s  ⇒  \pi_3 = (r/s)\pi_2 = (r/s)³\pi_0
We can then show by induction that \pi_i = (r/s)^i \pi_0, i ≥ 0. The normalization condition \sum_{i=0}^∞ \pi_i = 1 gives (for r < s, so that the geometric series converges) \pi_0 = 1 - r/s, and hence \pi_i = (r/s)^i (1 - r/s).
A Simple (Birth-Death) Example
(Same birth-death chain as above.) Defining ρ = r/s (a natural definition of "utilization", consistent with Little's Law), the stationary probabilities take the form \pi_0 = 1 - ρ and \pi_i = ρ^i(1 - ρ). From those, we readily obtain E[N] = \sum_i i\pi_i = ρ/(1 - ρ).
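A numerical sanity check: truncate the infinite chain at a large state N (the values r = 0.3, s = 0.5 are assumptions for illustration) and compare against the geometric closed form.

```python
import numpy as np

r, s = 0.3, 0.5   # birth and death probabilities; r < s so the chain is stable
rho = r / s
N = 60            # truncation level: the tail mass rho^N is negligible

P = np.zeros((N + 1, N + 1))
P[0, 0], P[0, 1] = 1 - r, r
for i in range(1, N):
    P[i, i - 1], P[i, i], P[i, i + 1] = s, 1 - r - s, r
P[N, N - 1], P[N, N] = s, 1 - s   # reflecting boundary closes the truncation

pi = np.linalg.matrix_power(P, 5000)[0]
print(pi[:4])                         # ~ rho^i (1 - rho) = [0.4, 0.24, 0.144, ...]
print((np.arange(N + 1) * pi).sum())  # E[N] ~ rho / (1 - rho) = 1.5
```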
Putting Things on a Firmer Footing
When does the limiting distribution exist? How does the limiting distribution compare to time averages (the fraction of time spent in state j)? The limiting distribution is an ensemble average. How does the average time between successive visits to state j compare to \pi_j (the probability of being in state j)?
Some Definitions & Properties
- Period (of state j): the GCD of the integers n such that P_jj^n > 0. A state is aperiodic if its period is 1. A Markov chain is aperiodic if all its states are aperiodic.
- A Markov chain is irreducible if all states communicate: for all i and j, P_ij^n > 0 for some n.
- State j is recurrent (transient) if the probability f_j of starting at j and ever returning to j is = 1 (< 1). The number of visits to a recurrent (transient) state is infinite (finite) with probability 1. If state j is recurrent (transient), then \sum_n P_jj^n = ∞ (< ∞).
- In an irreducible Markov chain, states are either all transient or all recurrent. A transient Markov chain does not have a limiting distribution.
- Positive recurrence and null recurrence: a Markov chain is positive recurrent (null recurrent) if the mean time between returns to a state is finite (infinite).
- An ergodic Markov chain is aperiodic, irreducible, and positive recurrent.
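For a small finite chain, the period definition can be checked by brute force; a minimal sketch (the deterministic 2-cycle below is an assumed toy example):

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, j, n_max=50):
    """Period of state j: gcd of all n <= n_max with (P^n)_jj > 0 (brute force)."""
    returns = [n for n in range(1, n_max + 1)
               if np.linalg.matrix_power(P, n)[j, j] > 0]
    return reduce(gcd, returns) if returns else None

# Deterministic 2-cycle: every return to state 0 takes an even number
# of steps, so the period is 2 and the chain is not aperiodic.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(period(P, 0))  # 2
```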
Existence of the Limiting Distribution
An irreducible, aperiodic DTMC (finite or infinite) falls in one of the following two classes:
- The states are all transient or all null recurrent, in which case \pi_j = lim_{n→∞} P_ij^n = 0 for all j, and the stationary distribution does NOT exist.
- The states are all positive recurrent; the limiting distribution exists and is equal to the stationary distribution, with a positive probability for each state. In addition, \pi_j = 1/m_jj, where m_jj is the average time between successive visits to state j.
Existence of Limiting Distribution
(Figure: an 11-state chain illustrating absorbing, periodic, transient, and positive recurrent/irreducible/aperiodic (ergodic) groups of states — not reproduced.) The existence of the limiting distribution depends on the structure of the underlying Markov chain. Stationary state probabilities exist for ergodic Markov chains, and many practical systems of interest give rise to ergodic Markov chains. The stationary probabilities are independent of the initial distribution \pi^{(0)}.
Time Averages
For a positive recurrent, irreducible Markov chain, with probability 1, p_j = lim_{t→∞} N_j(t)/t = 1/m_jj > 0, where N_j(t) is the number of transitions to state j by time t, and m_jj is the average time between visits to state j. p_j is the time-average fraction of time spent in state j. For an ergodic DTMC, with probability 1, p_j = \pi_j = 1/m_jj: time-average and stationary probabilities are equal.
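A quick simulation illustrating the equality for an ergodic chain (same illustrative 2-state P, whose stationary distribution is [5/6, 1/6]):

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

T = 100_000
counts = np.zeros(len(P))  # N_j(t): visits to each state
x = 0
for _ in range(T):
    counts[x] += 1
    x = rng.choice(len(P), p=P[x])

print(counts / T)  # ~ [5/6, 1/6]: time averages match the stationary probabilities
```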
Probabilities and Rates
For an ergodic DTMC, \pi_j is the limiting probability of being in state j as well as the long-run fraction of time the chain is in state j. \pi_i P_ij can then be interpreted as the rate of transitions from state i to state j. The stationary equation for state i gives us \pi_i = \sum_j \pi_j P_ji, but since \sum_j P_ij = 1 we also know \pi_i = \pi_i \sum_j P_ij = \sum_j \pi_i P_ij. So we have \sum_j \pi_j P_ji = \sum_j \pi_i P_ij; in other words, the total rate leaving state i equals the total rate entering state i.
Balance Equations
For an ergodic Markov chain, the rate leaving any set of states equals the rate entering that set of states. Applying this to our earlier birth-death example with the set {0, 1, ..., n}, we immediately get r\pi_n = s\pi_{n+1} (e.g., r\pi_1 = s\pi_2). (State-transition diagram: states 0, 1, 2, 3, 4 with transition probabilities 1-r, r, s, 1-r-s — figure not reproduced.)
Conservation Law – Single State
(Figure: conservation law at a single state j — incoming probabilities p_{k1 j}, ..., p_{k8 j} from neighboring states k1–k8 and outgoing probabilities p_{j k1}, ..., p_{j k8} to the same states; the probability flow into j equals the flow out of j.)
Conservation Law – Set of States
(Figure: conservation law for a set of states S = {j1, j2, j3} — incoming probabilities p_{k1 j1}, ..., p_{k8 j2} from outside states k1–k8 and outgoing probabilities p_{j1 k1}, ..., p_{j2 k8}; the probability flow into S equals the flow out of S.)
Back to our Simple Queue
(State-transition diagram of the simple queue: transition probabilities p, 1-p, p(1-q), q(1-p), and (1-p)(1-q)+qp, with a cut around the set S of states up to n — figure not reproduced.) Applying the machinery we have just developed to the "right" set of states S = {0, 1, ..., n}, we get α\pi_n = β\pi_{n+1}, where, as before, α = p(1-q) and β = q(1-p). Basically, it directly identifies the right recursive expression for us.
Extending To Simple Chains
Birth-death process: a one-dimensional Markov chain with transitions only between neighbors. The main difference is that we now allow arbitrary transition probabilities. The balance equation for S = {0, 1, ..., n} gives us \pi_n p_{n,n+1} = \pi_{n+1} p_{n+1,n}, for n ≥ 0. You could actually derive this directly by solving the balance equations progressively from the "left", i.e., other terms would eventually cancel out, but only after quite a bit of work. (State-transition diagram: states 0, 1, 2, ..., n, n+1, ... with transition probabilities p_{n,n+1}, p_{n+1,n} and self-loops p_{00}, p_{11}, ..., p_{nn} — figure not reproduced.) This generalizes our simple queue by allowing the transition probabilities between states to vary.
Solving Birth-Death Chains
By induction on \pi_n p_{n,n+1} = \pi_{n+1} p_{n+1,n} we get
\pi_n = \pi_0 \prod_{i=0}^{n-1} (p_{i,i+1} / p_{i+1,i}), n ≥ 1
The unknown \pi_0 can be determined from \sum_i \pi_i = 1, so that we finally obtain
\pi_0 = [1 + \sum_{n=1}^∞ \prod_{i=0}^{n-1} (p_{i,i+1} / p_{i+1,i})]^{-1}
What is induction? The induction hypothesis is what you want to establish, e.g., the above expression for \pi_n:
Step 1: verify it is true initially, i.e., for n = 0 or n = 1.
Step 2: assume it is true for a given value of n.
Step 3: prove it is then true for n+1 using known properties, e.g., the relationship between \pi_n and \pi_{n+1}.
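A sketch of the product form in Python/numpy; the function and its arguments (`up(i)` = p_{i,i+1}, `down(i)` = p_{i,i-1}) are hypothetical names introduced here for illustration.

```python
import numpy as np

def birth_death_pi(up, down, tol=1e-12, n_max=10_000):
    """Stationary probabilities of a birth-death DTMC via the product form:
    pi_n = pi_0 * prod_{i<n} p_{i,i+1} / p_{i+1,i}; normalization fixes pi_0."""
    terms = [1.0]  # unnormalized pi_n, starting from pi_0 = 1
    while terms[-1] > tol and len(terms) < n_max:
        n = len(terms)
        terms.append(terms[-1] * up(n - 1) / down(n))
    terms = np.array(terms)
    return terms / terms.sum()

# Our earlier chain: constant birth probability r and death probability s.
r, s = 0.3, 0.5
pi = birth_death_pi(lambda i: r, lambda i: s)
print(pi[:4])  # ~ (r/s)^n (1 - r/s)
```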
Time Reversibility
Another option for computing state probabilities exists (though it does not always work) for aperiodic, irreducible Markov chains: if x_1, x_2, x_3, ... exist such that \sum_i x_i = 1 and x_i P_ij = x_j P_ji for all i and j, then \pi_i = x_i (the x_i's are the limiting probabilities) and the Markov chain is called time-reversible. To compute the \pi_i's, we first assume time-reversibility and check whether we can find x_i's that work. If yes, we are done. If no, we fall back on the stationary and/or balance equations.
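Birth-death chains are a classic case where this works; a sketch that checks the detailed-balance condition x_i P_ij = x_j P_ji numerically on the truncated chain from before (r, s, and N are assumed illustrative values):

```python
import numpy as np

r, s, N = 0.3, 0.5, 40
P = np.zeros((N + 1, N + 1))
P[0, 0], P[0, 1] = 1 - r, r
for i in range(1, N):
    P[i, i - 1], P[i, i], P[i, i + 1] = s, 1 - r - s, r
P[N, N - 1], P[N, N] = s, 1 - s

x = np.linalg.matrix_power(P, 5000)[0]  # limiting distribution
F = x[:, None] * P                      # F[i, j] = x_i * P_ij
print(np.allclose(F, F.T))              # True: x_i P_ij = x_j P_ji for all i, j
```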
Periodic Chains
In an irreducible, positive recurrent, periodic chain, the limiting distribution does not exist, but the stationary distribution does: \pi P = \pi and \sum_i \pi_i = 1, and \pi represents the time-average fraction of time spent in each state. Conversely, if the stationary distribution of an irreducible periodic DTMC exists, then the chain is positive recurrent.