Discrete Time Markov Chains (cont’d)
A More Advanced Example: The Aloha Protocol
[Network] m nodes on a common wire or wireless channel; time is slotted.
[New packets] In each slot, every node transmits a new packet with probability p (p < 1/m).
[Success] A slot succeeds if exactly 1 packet is transmitted.
[Collision] If k > 1 (re-)transmissions occur during a slot, they all collide; every collided message is stored, to be resent later.
[Retransmission] Each collided (backlogged) message is retransmitted with probability q.
Q: Does Aloha work?
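The slot dynamics above can be sketched in a short simulation. This is a hedged sketch: the function name, the per-slot mechanics, and the parameter values (taken from the example later in the deck) are assumptions, not code from the lecture.

```python
import random

def simulate_aloha(m=10, p=0.05, q=0.005, slots=10_000, seed=1):
    """Simulate slotted Aloha; return the backlog trace (one entry per slot).

    Model assumed from the slides: each of the m nodes sends a fresh packet
    w.p. p, each backlogged packet is retransmitted w.p. q, and a slot
    succeeds iff exactly one packet (new or old) is transmitted.
    """
    rng = random.Random(seed)
    backlog = 0
    trace = []
    for _ in range(slots):
        new = sum(rng.random() < p for _ in range(m))
        old = sum(rng.random() < q for _ in range(backlog))
        total = new + old
        if total == 1:
            if old == 1:          # a backlogged packet got through
                backlog -= 1
            # a single new packet succeeds: backlog unchanged
        elif total > 1:
            backlog += new        # new packets caught in a collision join the backlog
        trace.append(backlog)
    return trace
```

Running this with the example parameters typically shows the backlog creeping upward over time, which previews the instability argument made later in the deck.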
Slotted Aloha: A Markov Chain Model
Q: What should we take as the state of the chain Xn?
A: The number of backlogged messages (at slot n).
Transition probabilities from state 0 (no backlogged messages):
P00 = (1-p)^m + mp(1-p)^(m-1)    (0 or 1 node transmits)
P01 = 0    (not possible: a lone transmission succeeds, and a collision backlogs at least 2)
P0k = C(m,k) p^k (1-p)^(m-k)    if 1 < k ≤ m (any k of the m nodes transmit and collide)
P0k = 0    if k > m
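The state-0 transition probabilities are just binomial terms, which makes them easy to sanity-check numerically. A sketch (the function name and default parameters are assumptions; the formulas are the reconstructed ones above):

```python
from math import comb

def p0k(k, m=10, p=0.05):
    """Transition probability of the Aloha chain from state 0 to state k."""
    if k == 0:
        # 0 or 1 of the m nodes transmits: idle slot or a success
        return (1 - p) ** m + m * p * (1 - p) ** (m - 1)
    if k == 1 or k > m:
        return 0.0    # state 1 is unreachable from 0; at most m packets can collide
    # exactly k of the m nodes transmit and collide
    return comb(m, k) * p ** k * (1 - p) ** (m - k)
```

A quick check that the row sums to 1 (it must, since the k ≥ 2 terms plus P00 cover the whole Binomial(m, p) distribution) confirms the reconstruction is consistent.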
Slotted Aloha: Transitions From State k
Assume we are now at state k (i.e., k messages backlogged):
Pk,k-1 = (1-p)^m · kq(1-q)^(k-1)    (0 new, exactly 1 old) (better)
Pk,k = (1-p)^m · [1 − kq(1-q)^(k-1)] + mp(1-p)^(m-1) · (1-q)^k    (0 new with 0 or ≥2 old; or 1 new with 0 old)
Pk,k+1 = mp(1-p)^(m-1) · [1 − (1-q)^k]    (1 new, >0 old) (worse)
Pk,k+r = C(m,r) p^r (1-p)^(m-r)    (r new, any old), for 1 < r ≤ m (worse)
Pk,k+r = 0    (r > m)
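The whole row for a generic state k can be checked the same way. A sketch under the assumed standard slotted-Aloha model (the function name and the case decomposition into 0/1/many new and old transmissions are as in the slide's labels):

```python
from math import comb

def row_k(k, m=10, p=0.05, q=0.005):
    """One row of the slotted-Aloha chain: maps destination state -> probability.

    Cases (matching the slide's labels):
      k-1 : 0 new, exactly 1 backlogged packet retransmits (success)
      k   : 0 new & not exactly 1 old; or 1 new & 0 old (the new one succeeds)
      k+1 : 1 new packet colliding with >=1 old
      k+r : r >= 2 new packets collide (old transmissions irrelevant)
    """
    no_new = (1 - p) ** m
    one_new = m * p * (1 - p) ** (m - 1)
    one_old = k * q * (1 - q) ** (k - 1)
    no_old = (1 - q) ** k
    row = {
        k - 1: no_new * one_old,
        k: no_new * (1 - one_old) + one_new * no_old,
        k + 1: one_new * (1 - no_old),
    }
    for r in range(2, m + 1):
        row[k + r] = comb(m, r) * p ** r * (1 - p) ** (m - r)
    return row
```

Each row must sum to 1: the "0 new" cases together contribute (1-p)^m, the "1 new" cases mp(1-p)^(m-1), and the r ≥ 2 cases the remaining binomial mass.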
Slotted Aloha: The Complete Markov Chain
[Diagram: the complete chain on states 0, 1, 2, …, with backward transitions pk,k-1 and forward transitions pk,k+1, …, pk,k+m]
How can we tell if "Aloha works"?
Assume 10 nodes and transmission probability p = 0.05 (offered load = 0.5 of capacity).
Intuition (necessary): the retransmission probability q should be small, to ensure retransmissions don't overload the medium.
Let's assume q = p/10 = 0.005 (10 times smaller than p).
Q: Is Aloha stable? If the backlog becomes infinite, delay goes to infinity!
Slotted Aloha: Stability
[Diagram: the same chain, with Pback(k) the total backward and Pfwd(k) the total forward transition probability at state k]
When at state k:
Pback(k) = Pk,k-1 : reduce backlog
Pfwd(k) = Σr≥1 Pk,k+r : increase backlog
As k → ∞, Pback(k) = (1-p)^m · kq(1-q)^(k-1) → 0 (the factor (1-q)^(k-1) decays geometrically), while Pfwd(k) stays bounded away from 0. (MHB, Ch. 10)
So for large backlogs the chain almost always drifts forward. So???
Slotted Aloha: Stability Conclusion
[Diagram: the same chain; for large k the forward drift Pfwd(k) dominates Pback(k)]
For large enough k, the forward drift dominates => states k+1, k+2, … are transient.
The Markov chain is transient => the Aloha protocol is unstable!
Q: Would Aloha work if we made q really (REALLY) small?
A: No! Intuition: let E[N] be the expected number of transmissions at state k.
If E[N] ≥ 1, then the situation either stays the same or gets worse.
But E[N] = mp + kq → ∞ as k → ∞, for any fixed q > 0.
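The drift argument can be made concrete by evaluating Pback(k) and Pfwd(k) for growing k. A sketch using the reconstructed transition formulas (function name and parameter values are assumptions):

```python
from math import comb

def drift_probs(k, m=10, p=0.05, q=0.005):
    """Return (Pback(k), Pfwd(k)) for the slotted-Aloha chain."""
    # backward: 0 new packets and exactly 1 backlogged retransmission
    p_back = (1 - p) ** m * k * q * (1 - q) ** (k - 1)
    # forward: >=2 new packets collide, in any case
    multi_new = sum(comb(m, r) * p ** r * (1 - p) ** (m - r) for r in range(2, m + 1))
    # ... or 1 new packet colliding with at least one old one
    p_fwd = m * p * (1 - p) ** (m - 1) * (1 - (1 - q) ** k) + multi_new
    return p_back, p_fwd

# Pback(k) -> 0 as k grows, while Pfwd(k) approaches a positive constant:
for k in (10, 100, 1000, 10_000):
    print(k, drift_probs(k))
```

This is exactly why the chain is transient: beyond some backlog level, every state pushes probability mass forward faster than backward.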
Improving Aloha: q = f(backlog)
Q: How can we fix the problem and make the chain ergodic (and the system stable)?
A1: Require E[N] = mp + kq < 1 => q < (1-mp)/k, i.e., q = f(backlog).
A2: Or be more aggressive: geometric backoff q = a/k, with a < 1-mp.
A3: Or even exponential backoff: q = β^(-k), β > 1.
Exponential backoff is the basis behind Ethernet.
Q: Why should q not be too small?
A: Retransmission delay goes to infinity.
Ergodicity: Time Vs. Ensemble Average
Time averages:
Ni(t) = number of times in state i by time t
pi = lim t→∞ Ni(t)/t (percentage of time in state i)
Ensemble averages:
mij = expected time to reach j (for the 1st time), starting from i
mii = expected time between successive visits to i
πi = lim n→∞ P(Xn = i) (probability of being at state i after many steps)
Theorem: for an ergodic DTMC, pi = πi = 1/mii (proof based on Renewal Theory).
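The ergodic identity pi = πi = 1/mii can be checked empirically on a tiny chain. A sketch: the two-state chain below and its numbers are illustrative assumptions (its stationary distribution is π = (0.25, 0.75), since π1 = p01/(p01+p10) = 0.3/0.4).

```python
import random

# Two-state chain: P = [[0.7, 0.3], [0.1, 0.9]]; stationary pi = (0.25, 0.75).
P = [[0.7, 0.3], [0.1, 0.9]]

rng = random.Random(0)
state, steps = 0, 200_000
visits = [0, 0]
last_visit_1, returns_to_1, gap_sum = None, 0, 0
for t in range(steps):
    visits[state] += 1
    if state == 1:
        if last_visit_1 is not None:
            gap_sum += t - last_visit_1      # time since the previous visit to 1
            returns_to_1 += 1
        last_visit_1 = t
    state = 0 if rng.random() < P[state][0] else 1

p_i = [v / steps for v in visits]            # time averages
m_11 = gap_sum / returns_to_1                # empirical mean recurrence time of state 1
print(p_i, m_11)                             # expect p_1 ~ 0.75, m_11 ~ 1/0.75
```

With a long enough run, the time-average fraction of slots in state 1 and the reciprocal of the mean recurrence time both approach π1, as the theorem states.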
(Global) Balance Equations
πi · pij : rate of transitions from state i to state j
(πi = percentage of time in state i; pij = percentage of those times the chain moves next to j)
Q: What is Σj πj pji? A: the rate into state i.
Example (3-state chain with transitions p01, p10, p12, p21):
rate into state 1 = π0p01 + π2p21; rate out of state 1 = π1p10 + π1p12
From the stationarity equation: πi = Σj πj pji (rate into i).
But also πi = Σj πi pij (why? because Σj pij = 1).
Theorem: Σj πj pji = Σj πi pij (rate in = rate out).
Q: Why is this reasonable?
A: The chain cannot make a transition out of i without a transition into i before it (the two counts differ by at most 1).
More generally, for any subset of states S: rate into S = rate out of S.
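Global balance is easy to verify numerically once π is known. A sketch (the 3-state matrix below is illustrative, not the chain drawn on the slide): solve the stationarity equations πP = π with Σπi = 1, then check rate-in = rate-out at every state.

```python
import numpy as np

# An illustrative 3-state chain (each row sums to 1).
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.3, 0.3, 0.4]])

# Solve pi = pi P together with sum(pi) = 1 (least squares on the stacked system).
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

for i in range(3):
    rate_in = sum(pi[j] * P[j, i] for j in range(3) if j != i)
    rate_out = sum(pi[i] * P[i, j] for j in range(3) if j != i)
    assert abs(rate_in - rate_out) < 1e-10   # global balance at state i
print(pi)
```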
Local Balance Equations and Time-Reversibility
Assume there exist πi with πi pij = πj pji (for all i and j) and Σi πi = 1.
Then πi is the stationary distribution for the chain.
The above equations are called local balance equations, and the Markov chain is called time-reversible.
Solving for the stationary distribution of a DTMC:
Stationary equations: πi = Σj πj pji (not always easy)
Global balance: Σj≠i πi pij = Σj≠i πj pji (a bit easier)
Local balance: πi pij = πj pji (easiest! try first!)
Absorbing Markov Chains
A Very Simple Maze Example
[Diagram: rooms 1 and 3 each have a single door to room 2; room 2 has three doors: to room 1, to room 3, and the exit]
A mouse is trapped in the above maze with 3 rooms and 1 exit.
When inside a room with x doors, it chooses any of them with equal probability (1/x).
Q: How long will it take, on average, to exit the maze if it starts at room i?
Q: How long if it starts from a random room?
First Step Analysis
Def: Ti = expected time to leave the maze, starting from room i.
T2 = (1/3)·1 + (1/3)(1+T1) + (1/3)(1+T3) = 1 + (1/3)(T1+T3)
T1 = 1 + T2
T3 = 1 + T2
Solving: T2 = 5, T1 = T3 = 6.
Q: Could you have guessed it directly?
A: The number of visits to room 2 before exiting is Geometric(1/3); on average, a wrong door is taken twice (each time costing two steps: out and back) and on the 3rd visit the mouse exits, so T2 = 2·2 + 1 = 5.
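The first-step equations form a small linear system that any solver handles directly. A minimal sketch (the matrix encodes exactly the three equations above, rearranged to A·[T1,T2,T3] = [1,1,1]):

```python
import numpy as np

# First-step analysis for the maze:
#   T1 - T2 = 1
#   T2 - (1/3) T1 - (1/3) T3 = 1    (with prob 1/3 the mouse exits directly)
#   T3 - T2 = 1
A = np.array([[1.0, -1.0, 0.0],
              [-1/3, 1.0, -1/3],
              [0.0, -1.0, 1.0]])
T = np.linalg.solve(A, np.ones(3))
print(T)   # [6. 5. 6.]
```

Starting from a uniformly random room, the answer is simply the average of the three entries.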
“Hot Potato” Routing
[Diagram: a network of routers; one node is marked as the destination]
A packet must be routed towards the destination over the above network.
“Hot potato routing” works as follows: when a router receives a packet, it picks one of its outgoing links at random (including the incoming link) and sends the packet immediately.
Q: How long does it take to deliver the packet?
“Hot Potato” Routing (2)
First Step Analysis: we can still apply it!
But it's a bit more complicated: a 9×9 system of linear equations, and the solution is not easy to guess either.
We'll try to model this with a Markov chain.
An Absorbing Markov Chain
[Diagram: the 9 routers (states 1–9) with transition probabilities 1/2, 1/3, or 1/4 depending on each node's degree, plus the destination as absorbing state A]
9 transient states: 1–9
1 absorbing state: A
Q: Is this chain irreducible? A: No!
Q: Hot potato routing delay = expected time to absorption?
Hot Potato Routing: An Absorbing Markov Chain
[Diagram: the 10×10 transition matrix over states 1–9 and A]
We can define the transition matrix P (10×10).
Q: What is P^(n) as n → ∞?
A: Every row converges to [0, 0, …, 1] (absorption in A is certain).
Q: How can we get E[TiA] (expected time to absorption starting from i)?
Q: How about …? A: No, the sum goes to infinity!
Absorbing Markov Chain Theory
The transition matrix can be written in canonical form, with transient states written first, followed by absorbing ones:
P = [ Q  R ]
    [ 0  I ]
(Q: t×t, transitions among transient states; R: t×r, transient → absorbing; I: r×r identity.)
Calculate P^(n) using the canonical form:
P^(n) = [ Q^n  * ]
        [ 0    I ]
Q: Q^n as n → ∞? A: It goes to the zero matrix O.
Q: Where does the (*) part of the matrix converge to, if there is only one absorbing state?
A: To a vector of all 1s.
The Fundamental Matrix
Theorem: The matrix I − Q has an inverse.
N = (I−Q)^(-1) is called the fundamental matrix, and
N = I + Q + Q^2 + …
nik = the expected number of times the chain is in state k, starting from state i, before being absorbed.
Proof sketch: since Q^n → 0, the series Σn Q^n converges, and (I − Q)(I + Q + Q^2 + …) = I.
Time to Absorption (using the fundamental matrix)
Theorem: Let Ti be the expected number of steps before the chain is absorbed, given that it starts in state i, and let T be the column vector whose ith entry is Ti. Then T = Nc, where c is a column vector all of whose entries are 1.
Proof: Σk nik adds all entries in the ith row of N, i.e., the expected number of visits to any of the transient states when starting from i — which is exactly the expected time before absorption: Ti = Σk nik.
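The theorem can be exercised on the maze example from earlier in the deck, where the answer is already known from first-step analysis. A sketch (Q below is the transient part of the maze chain over rooms 1, 2, 3):

```python
import numpy as np

# Transient part of the maze chain: from room 1 or 3 go to room 2 w.p. 1;
# from room 2 go to room 1 or room 3 w.p. 1/3 each (and exit w.p. 1/3).
Q = np.array([[0.0, 1.0, 0.0],
              [1/3, 0.0, 1/3],
              [0.0, 1.0, 0.0]])

N = np.linalg.inv(np.eye(3) - Q)    # fundamental matrix
T = N @ np.ones(3)                  # expected time to absorption: T = N c
print(T)   # [6. 5. 6.]
```

The row sums of N reproduce exactly the first-step-analysis answer T = (6, 5, 6), and N agrees with the series I + Q + Q² + … term by term.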
Absorption Probabilities
Theorem: Let bij be the probability that an absorbing chain is absorbed in (absorbing) state j, given that it starts in (transient) state i, and let B be the t×r matrix with entries bij. Then B = NR, with R as in the canonical form.
Proof:
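B = NR is most interesting with more than one absorbing state. A sketch on an illustrative chain (not from the slides): a symmetric random walk on {0, 1, 2, 3} with absorbing barriers 0 and 3 and transient states 1, 2.

```python
import numpy as np

# Transient-to-transient part Q and transient-to-absorbing part R.
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])
R = np.array([[0.5, 0.0],     # from state 1: absorbed at 0 w.p. 1/2
              [0.0, 0.5]])    # from state 2: absorbed at 3 w.p. 1/2

N = np.linalg.inv(np.eye(2) - Q)
B = N @ R                     # b_ij = P(absorbed at j | start at i)
print(B)   # [[2/3, 1/3], [1/3, 2/3]]
```

Each row of B sums to 1, as it must: starting from any transient state, absorption somewhere is certain.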
Back to Hot Potato Routing
Use Matlab to get the matrices:
Matrix N = …
Vector T = …
Example: ARQ and End-to-End Retransmission
A wireless path consists of H hops (links), each with link success probability p.
A packet is (re-)transmitted up to M times on each link.
If it fails on all M attempts at some link, it gets retransmitted from the source (end-to-end).
Q: How many transmissions until end-to-end success?
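Before setting up the chain, the question can be explored by simulation. A hedged sketch: the function name, the parameter values (H = 3, p = 0.8, M = 4), and the restart-from-source interpretation of "fails" are assumptions based on the slide's description.

```python
import random

def total_transmissions(H=3, p=0.8, M=4, rng=None):
    """Count transmissions for one end-to-end delivery under per-link ARQ.

    The packet crosses the H links in order; on each link it is tried up
    to M times; if all M attempts on some link fail, delivery restarts
    from the source.
    """
    rng = rng or random.Random()
    count = 0
    while True:
        delivered = True
        for _ in range(H):                # traverse the links hop by hop
            for _ in range(M):
                count += 1
                if rng.random() < p:      # this hop attempt succeeded
                    break
            else:
                delivered = False         # M failures on this link: restart
                break
        if delivered:
            return count

rng = random.Random(42)
samples = [total_transmissions(rng=rng) for _ in range(20_000)]
print(sum(samples) / len(samples))        # Monte Carlo estimate of E[transmissions]
```

With these parameters the estimate lands a little under 4: roughly H times the expected attempts per link, (1 − (1−p)^M)/p ≈ 1.25, inflated slightly by the rare end-to-end restarts.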