1
Ergodicity, Balance Equations, and Time Reversibility
2
Putting Things on a Firmer Footing
- When does the limiting distribution exist?
- How does the limiting distribution compare to time averages (the fraction of time spent in state j)? The limiting distribution is an ensemble average.
- How does the average time between successive visits to state j compare to π_j (the probability of being in state j)?
3
Some Definitions & Properties (Recall our Earlier Notion of Ergodicity)
- Period (of state j): GCD of the integers n such that P_jj^n > 0. A state is aperiodic if its period is 1. A Markov chain is aperiodic if all its states are aperiodic.
- A Markov chain is irreducible if all states communicate: for all i and j, P_ij^n > 0 for some n.
- State j is recurrent (transient) if the probability f_j of starting at j and ever returning to j is = 1 (< 1).
- The number of visits to a recurrent (transient) state is infinite (finite) with probability 1.
- If state j is recurrent (transient), then Σ_n P_jj^n = ∞ (< ∞).
- In an irreducible Markov chain, the states are either all transient or all recurrent.
- A transient Markov chain does not have a limiting distribution.
- Positive recurrence and null recurrence: a Markov chain is positive recurrent (null recurrent) if the mean time between returns to a state is finite (infinite).
- An ergodic Markov chain is aperiodic, irreducible, and positive recurrent.
4
Discrete Time Markov Chains
- For a recurrent, aperiodic, and irreducible DTMC, π_j = lim_{n→∞} P_ij^n = 1/m_jj for all j, where m_jj is the mean number of steps between successive visits to state j.
- If the chain is positive recurrent, then π_j > 0 for all j.
- For a positive recurrent, irreducible Markov chain, p_j = lim_{t→∞} N_j(t)/t = 1/m_jj > 0 for all j, with probability 1, where p_j is the time-average fraction of time spent in state j and N_j(t) is the number of visits to j by time t. (Note: aperiodicity is not required.)
- An irreducible and aperiodic DTMC satisfies exactly one of the following:
  - All states are transient or null recurrent, so that π_j = lim_{n→∞} P_ij^n = 0 and a stationary distribution does not exist; or
  - All states are positive recurrent, in which case the limiting distribution π = (π_0, π_1, π_2, …) exists with π_j = lim_{n→∞} P_ij^n = 1/m_jj > 0 for all j. Furthermore, π is also a stationary distribution, and no other stationary distribution exists. Finally, π_j = p_j, the time-average fraction of time in state j.
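As a quick numerical check of the identity π_j = 1/m_jj, the sketch below uses a two-state chain with made-up transition probabilities a = P[0][1] and b = P[1][0]. The limiting distribution comes from power iteration, and m_00 from first-step analysis.

```python
# Illustrative (made-up) transition probabilities for a two-state DTMC.
a, b = 0.3, 0.2
P = [[1 - a, a], [b, 1 - b]]

# Limiting distribution via power iteration (the chain is ergodic).
pi = [1.0, 0.0]
for _ in range(2000):
    pi = [pi[0] * P[0][0] + pi[1] * P[1][0],
          pi[0] * P[0][1] + pi[1] * P[1][1]]

# First-step analysis: h1 = expected steps to hit 0 from 1 satisfies
# h1 = 1 + (1 - b) * h1, so h1 = 1/b, and the mean return time to 0 is
# m00 = 1 + a * h1 + (1 - a) * 0.
h1 = 1.0 / b
m00 = 1.0 + a * h1

print(pi[0], 1.0 / m00)  # both ~0.4 = b / (a + b)
```

For the two-state chain the exact answer is π_0 = b/(a+b) and m_00 = (a+b)/b, so the two printed values agree.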
5
Null Recurrence?
[Diagram: states 0, 1, 2, 3, 4, … with probability 0.5 of stepping right or left at each state (a symmetric random walk on the nonnegative integers).]
The above chain is null recurrent: the expected number of steps to return to a given state is infinite. Let's see why.
Proof by contradiction. Assume the expected number of steps to return to state 0 from state 0 is finite, i.e., m_00 is finite.
- First-step analysis gives m_00 = 1 + ½·0 + ½·m_10, so m_10 = 2·m_00 − 2, and m_10 is also finite.
- Similarly, (1) m_10 = 1 + ½·0 + ½·m_20, and by location invariance (the expected time to take one step to the left does not depend on where we are), we also have (2) m_20 = 2·m_10.
- Combining (1) and (2) yields m_10 = 1 + m_10, a contradiction.
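The divergence can also be seen numerically: truncate the walk at state N (assuming, for illustration, a reflecting boundary so that state N always steps back to N−1), solve the first-step equations m_i = 1 + ½·m_{i−1} + ½·m_{i+1} as a tridiagonal linear system, and watch m_10 grow without bound as N grows.

```python
def tridiag_solve(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal linear system."""
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / denom
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def m10(N):
    # Unknowns m_1 .. m_N, with m_0 = 0.
    # Interior rows: -0.5*m_{i-1} + m_i - 0.5*m_{i+1} = 1.
    # Reflecting boundary at N: m_N = 1 + m_{N-1}.
    sub = [0.0] + [-0.5] * (N - 2) + [-1.0]
    diag = [1.0] * N
    sup = [-0.5] * (N - 1) + [0.0]
    rhs = [1.0] * N
    return tridiag_solve(sub, diag, sup, rhs)[0]

# m_10 grows linearly with the truncation point (in fact m_10 = 2N - 1),
# so the expected return time of the untruncated walk is infinite.
```

Solving the system for increasing N (say 50, 200, 1000) shows m_10 scaling like 2N, consistent with null recurrence.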
6
Implications for DTMCs
We don’t need to check for positive recurrence; we only need to check that the chain is irreducible (all states communicate) and aperiodic (the GCD of the possible numbers of steps for returning to each state is 1), and then solve the stationary equations to see if they yield a distribution.
7
Probabilities and Rates
For an ergodic DTMC, π_j is the limiting probability of being in state j as well as the long-run fraction of time the chain spends in state j.
- π_i P_ij can also be interpreted as the rate of transitions from state i to state j: the chain is in state i for a fraction π_i of time steps, and P_ij is the fraction of such steps that result in a move to state j. Over a large number t of steps, π_i P_ij t is the number of steps starting in i and ending in j; dividing by t gives the rate of transitions from i to j.
- The stationary equation for state i gives us π_i = Σ_j π_j P_ji, but since Σ_j P_ij = 1 we also know π_i = π_i Σ_j P_ij = Σ_j π_i P_ij. Hence
  Σ_j π_j P_ji = Σ_j π_i P_ij
- This is the balance equation: the total rate leaving state i equals the total rate entering state i.
8
General Balance Equations
For ergodic Markov chains, balance equations apply to any set S of states: the rate of leaving S equals the rate of entering S.
Applying this to our earlier example [a four-state chain with transition probabilities r, s, 1−r, and 1−r−s between neighboring states], we immediately get r·π_1 = s·π_2.
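A sketch of the set-level balance property, using a made-up four-state chain: after computing π, the total probability flow across the cut from S to its complement matches the flow back in.

```python
# Made-up ergodic 4-state chain (rows sum to 1, self-loops keep it aperiodic).
P = [[0.2, 0.8, 0.0, 0.0],
     [0.3, 0.2, 0.5, 0.0],
     [0.0, 0.4, 0.2, 0.4],
     [0.0, 0.0, 0.6, 0.4]]
n = len(P)

pi = [1.0 / n] * n
for _ in range(5000):
    pi = [sum(pi[j] * P[j][i] for j in range(n)) for i in range(n)]

# Flow out of S must equal flow into S, for any subset S.
S = {0, 1}
flow_out = sum(pi[i] * P[i][j] for i in S for j in range(n) if j not in S)
flow_in = sum(pi[j] * P[j][i] for i in S for j in range(n) if j not in S)
assert abs(flow_out - flow_in) < 1e-9
```

The per-state balance equation is the special case where S is a single state; cuts with several states are what make birth-death chains easy to solve below.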
9
Conservation Law – Single State
[Diagram: conservation law for a single state j. Neighboring states k1 through k8 surround j; the left panel shows the outgoing probabilities p_jk1, …, p_jk8 and the right panel the incoming probabilities p_k1j, …, p_k8j. Total flow out of j equals total flow into j.]
10
Conservation Law – Set of States
[Diagram: conservation law for a set of states S = {j1, j2, j3}. The left panel shows the outgoing probabilities p_j1k1, …, p_j2k8 from S to the outside states k1 through k8; the right panel shows the corresponding incoming probabilities p_k1j1, …, p_k8j2. Total flow out of S equals total flow into S.]
11
Back to our Simple Queue
[Diagram: the simple queue chain with states 0, 1, 2, …; from state 0 the chain moves up with probability p, and from each state n ≥ 1 it moves up with probability p(1−q), down with probability q(1−p), and stays put with probability (1−p)(1−q) + pq. The set S = {0, 1, …, n} cuts the chain between states n and n+1.]
Applying the machinery we have just developed to the “right” set of states S = {0, …, n}, we get α·π_n = β·π_{n+1}, where as before α = p(1−q) and β = q(1−p). Basically, the cut directly identifies the right recursive expression for us.
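The cut equation immediately yields the geometric solution π_n = π_0·ρ^n with ρ = α/β, and (assuming ρ < 1 for stability) normalization gives π_0 = 1 − ρ. A quick check with illustrative values of p and q:

```python
# Illustrative arrival/departure probabilities (made up).
p, q = 0.3, 0.5
alpha = p * (1 - q)   # probability of moving up
beta = q * (1 - p)    # probability of moving down
rho = alpha / beta    # must be < 1 for stability

# Geometric solution: pi_n = (1 - rho) * rho**n.
pi = [(1 - rho) * rho**n for n in range(200)]

# Every cut equation alpha*pi_n = beta*pi_{n+1} holds,
# and the probabilities (nearly) sum to 1.
assert all(abs(alpha * pi[n] - beta * pi[n + 1]) < 1e-12 for n in range(199))
assert abs(sum(pi) - 1.0) < 1e-9
```

With p = 0.3 and q = 0.5 we get ρ = 0.15/0.35 = 3/7, comfortably stable.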
12
Extending To Simple Chains
Birth-death process: a one-dimensional Markov chain with transitions only between neighboring states. The main difference from the simple queue is that we now allow arbitrary transition probabilities; this generalizes our simple queue by letting the transition rates between states vary.
The balance equation for S = {0, …, n} gives us
  π_n p_{n,n+1} = π_{n+1} p_{n+1,n}, for n ≥ 0
You could actually derive this directly by solving the balance equations progressively from the “left,” i.e., other terms would eventually cancel out, but only after quite a bit of work.
[Diagram: states 0, 1, 2, …, n, n+1, … with up-probabilities p_{n,n+1}, down-probabilities p_{n+1,n}, and self-loops p_{nn}; the set S cuts the chain between states n and n+1.]
13
Solving Birth-Death Chains
By induction on π_n p_{n,n+1} = π_{n+1} p_{n+1,n} we get
  π_n = π_0 · Π_{i=0}^{n−1} (p_{i,i+1} / p_{i+1,i})
The unknown π_0 can be determined from Σ_i π_i = 1, so that we finally obtain
  π_0 = 1 / (1 + Σ_{n≥1} Π_{i=0}^{n−1} (p_{i,i+1} / p_{i+1,i}))
What is induction? The induction hypothesis is what you want to establish, e.g., the above expression for π_n.
- Step 1: verify it is true initially, i.e., for n = 0 or n = 1.
- Step 2: assume it is true for a given value of n.
- Step 3: prove it is then true for n+1 using known properties, e.g., the relationship between π_n and π_{n+1}.
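The product-form solution is easy to verify on a small finite birth-death chain. The sketch below uses made-up birth probabilities b[i] = p_{i,i+1} and death probabilities d[i] = p_{i+1,i}, builds π from the product formula, and checks that it solves the stationary equations.

```python
# Made-up birth/death probabilities for a 5-state birth-death chain.
b = [0.4, 0.3, 0.3, 0.2]   # b[i] = p_{i,i+1} for i = 0..3
d = [0.2, 0.2, 0.3, 0.4]   # d[i] = p_{i+1,i} for i = 0..3
n = 5

# Product-form solution, then normalize.
w = [1.0]
for i in range(n - 1):
    w.append(w[-1] * b[i] / d[i])
total = sum(w)
pi = [x / total for x in w]

# Build the full transition matrix; self-loops soak up leftover probability.
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    if i < n - 1:
        P[i][i + 1] = b[i]
    if i > 0:
        P[i][i - 1] = d[i - 1]
    P[i][i] = 1.0 - sum(P[i])

# pi solves the stationary equations pi = pi P.
for j in range(n):
    assert abs(sum(pi[i] * P[i][j] for i in range(n)) - pi[j]) < 1e-12
```

The check succeeds because the product form enforces the cut equations π_i b_i = π_{i+1} d_i, which in turn imply stationarity.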
14
Truncating Infinite Chains
Consider again a one-dimensional birth-death Markov chain with transitions only between neighbors. Truncate the chain at state n and update p_{nn} to p'_{nn} to ensure that the transition probabilities out of state n add up to 1.
The balance equations for all states remain identical:
  π_i p_{i,i+1} = π_{i+1} p_{i+1,i}, for 0 ≤ i < n
The only change is in the normalization equation, which now runs over the states 0, …, n.
[Diagram: states 0, 1, 2, …, n with the same transition probabilities as before, except that state n has self-loop probability p'_{nn}.]
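Because the interior balance equations are unchanged, the truncated chain's distribution is just the infinite chain's solution renormalized over 0..n. A sketch with made-up constant up/down probabilities b and d, where the untruncated solution is geometric in ρ = b/d:

```python
# Made-up constant birth/death probabilities and truncation point.
b, d, N = 0.3, 0.5, 10
rho = b / d

# Truncated chain on states 0..N; state N keeps only the downward move,
# with its self-loop p'_NN adjusted so the row sums to 1.
P = [[0.0] * (N + 1) for _ in range(N + 1)]
P[0][1], P[0][0] = b, 1 - b
for k in range(1, N):
    P[k][k + 1], P[k][k - 1] = b, d
    P[k][k] = 1 - b - d
P[N][N - 1], P[N][N] = d, 1 - d

# Geometric distribution renormalized over 0..N.
total = sum(rho**k for k in range(N + 1))
pi = [rho**k / total for k in range(N + 1)]

# pi satisfies the truncated chain's stationary equations.
for j in range(N + 1):
    assert abs(sum(pi[i] * P[i][j] for i in range(N + 1)) - pi[j]) < 1e-12
```

Only the normalizing constant changes when N changes; the shape ρ^k is exactly the infinite chain's.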
15
Time Reversibility
Another option for computing the state probabilities exists (though it does not always work) for aperiodic, irreducible Markov chains:
- If x_1, x_2, x_3, … exist such that Σ_i x_i = 1 and x_i P_ij = x_j P_ji for all i and j, then π_i = x_i (the x_i's are the limiting probabilities) and the Markov chain is called time-reversible.
- To compute the π_i's, we first assume time reversibility and check whether we can find x_i's that work. If yes, we are done. If no, we fall back on the stationary and/or balance equations.
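The detailed-balance condition x_i P_ij = x_j P_ji is easy to test mechanically. The sketch below checks it for two made-up chains: a birth-death chain (reversible, as all birth-death chains are) and a chain whose probability flow circulates one way around a loop (not reversible).

```python
def limiting(P, iters=5000):
    """Limiting distribution of an ergodic chain via power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[j] * P[j][i] for j in range(n)) for i in range(n)]
    return pi

def reversible(P, tol=1e-9):
    """True iff detailed balance pi_i P_ij == pi_j P_ji holds for all i, j."""
    pi = limiting(P)
    n = len(P)
    return all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < tol
               for i in range(n) for j in range(n))

birth_death = [[0.7, 0.3, 0.0],   # made-up tridiagonal (birth-death) chain
               [0.2, 0.5, 0.3],
               [0.0, 0.4, 0.6]]
rotating = [[0.1, 0.8, 0.1],      # made-up chain with one-way circulation
            [0.1, 0.1, 0.8],
            [0.8, 0.1, 0.1]]

assert reversible(birth_death)
assert not reversible(rotating)
```

For the rotating chain the limiting distribution is uniform (the matrix is doubly stochastic), yet π_0 P_01 ≠ π_1 P_10, so no x_i's satisfying detailed balance exist and we must fall back on the stationary equations.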
16
Periodic Chains
In an irreducible, positive recurrent, periodic chain, the limiting distribution does not exist, but the stationary distribution does:
  π P = π and Σ_i π_i = 1
and π_j represents the time-average fraction of time spent in state j. Conversely, if the stationary distribution of an irreducible periodic DTMC exists, then the chain is positive recurrent.
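The simplest illustration is the two-state "flip" chain, which has period 2: π = (½, ½) is stationary and matches the time-average occupancy, but P^n alternates between P and the identity, so lim_{n→∞} P_ij^n does not exist.

```python
# Two-state flip chain: deterministic alternation, period 2.
P = [[0.0, 1.0],
     [1.0, 0.0]]
pi = [0.5, 0.5]

# pi is stationary: pi P = pi.
assert [pi[0] * P[0][j] + pi[1] * P[1][j] for j in range(2)] == pi

# P^n alternates (P, I, P, I, ...), so there is no limiting distribution,
# yet the time-average fraction of steps spent in state 0 is still 1/2.
state, visits, T = 0, 0, 1001
for _ in range(T):
    visits += (state == 0)
    state = 1 - state
assert abs(visits / T - 0.5) < 0.01
```

This matches the slide's claim: the stationary distribution exists and gives the time-average fractions even though the limiting probabilities do not exist.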
17
Summary
Stationary probability of state j: π_j = Σ_i π_i P_ij
Limiting probability of being in state j: π_j = lim_{n→∞} P_ij^n
Reciprocal of the mean time between visits to state j: π_j = 1/m_jj
Time-average fraction of time spent in state j: π_j = lim_{t→∞} N_j(t)/t = p_j