Discrete Time Markov Chains (cont’d)

A More Advanced Example: The Aloha Protocol

- Network: m nodes on a common wire or wireless channel; time is slotted.
- New packets: each node transmits a new packet with probability p (p < 1/m).
- Success: exactly 1 packet is transmitted during a slot.
- Collision: k > 1 (re-)transmissions during a slot; every collided message is stored, to be resent later.
- Retransmission: each collided (backlogged) message is retransmitted with probability q.

Does Aloha work?

Slotted Aloha: A Markov Chain Model

Q: What should we take as the state of the chain Xn?
A: The number of backlogged messages (at slot n).

Transition probabilities from state 0 (no backlogged messages):
- P00 = (1-p)^m + m·p·(1-p)^(m-1)   (0 or 1 node transmits)
- P01 = 0   (not possible: a single transmission succeeds, so nothing gets backlogged)
- P0k = C(m,k)·p^k·(1-p)^(m-k)   if 1 < k ≤ m   (any k of the m nodes transmit and collide)
- P0k = 0   if k > m

Slotted Aloha: Transitions From State k

Assume we are now at state k (i.e. k messages backlogged):
- Pk,k-1 = (1-p)^m · k·q·(1-q)^(k-1)   (0 new, exactly 1 old: a backlogged message gets through — better)
- Pk,k = (1-p)^m·[1 - k·q·(1-q)^(k-1)] + m·p·(1-p)^(m-1)·(1-q)^k   (0 new and 0 or ≥2 old; or 1 new and 0 old)
- Pk,k+1 = m·p·(1-p)^(m-1)·[1 - (1-q)^k]   (1 new, >0 old: the new packet collides and joins the backlog — worse)
- Pk,k+r = C(m,r)·p^r·(1-p)^(m-r)   for 1 < r ≤ m   (r ≥ 2 new packets always collide — worse)
- Pk,k+r = 0   for r > m
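These transition probabilities can be checked numerically. A minimal sketch, assuming the standard slotted-Aloha model described above (the function names are mine):

```python
from math import comb

def binom(n, k, prob):
    """P(exactly k successes in n independent trials of probability prob)."""
    return comb(n, k) * prob**k * (1 - prob)**(n - k)

def transition(k, j, m, p, q):
    """P(backlog k -> backlog j) in one slot of slotted Aloha."""
    if j == k - 1:                  # 0 new packets, exactly 1 retransmission
        return binom(m, 0, p) * binom(k, 1, q)
    if j == k:                      # 0 new and (0 or >=2) old, or 1 new and 0 old
        return (binom(m, 0, p) * (1 - binom(k, 1, q))
                + binom(m, 1, p) * binom(k, 0, q))
    if j == k + 1:                  # 1 new packet, >=1 old: the new one collides
        return binom(m, 1, p) * (1 - binom(k, 0, q))
    if k + 2 <= j <= k + m:         # r >= 2 new packets always collide
        return binom(m, j - k, p)
    return 0.0

# Sanity check: every row of the chain sums to 1.
for k in (0, 1, 5, 50):
    row = sum(transition(k, j, 10, 0.05, 0.005)
              for j in range(max(k - 1, 0), k + 11))
    assert abs(row - 1) < 1e-12
```

The row-sum check is a quick way to catch a mis-remembered case split before using the matrix for anything else.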

Slotted Aloha: The Complete Markov Chain

[Chain diagram: states 0, 1, 2, …, k, …; transitions p00, p01, …, p0m, pk,k-1, pk,k+1, …, pk,k+m]

How can we tell if "Aloha works"?
- Assume 10 nodes and transmission probability p = 0.05 (load mp = 0.5, i.e. 50% of capacity).
- Intuition (necessary): q should be small, to ensure retransmissions don't overload the medium.
- Let's assume q = 0.005 (10 times smaller than p).
Q: Is Aloha stable? If the backlog grows to infinity, the delay goes to infinity!

Slotted Aloha: Stability

[Same chain diagram, with Pback(k) and Pfwd(k) marked at state k]

When at state k:
- Pback(k) = Pk,k-1: probability the backlog is reduced
- Pfwd(k) = Σr≥1 Pk,k+r: probability the backlog increases
As k grows, Pback(k) = (1-p)^m·k·q·(1-q)^(k-1) → 0, while Pfwd(k) stays bounded away from 0 (MHB, Ch. 10). (Why?) So what does that mean for the chain?

Slotted Aloha: Stability Conclusion

- For large enough k, states k+1, k+2, … are transient.
- The Markov chain is transient => the Aloha protocol is unstable!
Q: Would Aloha work if we made q really (REALLY) small?
A: No! Intuition: let E[N] be the expected number of transmission attempts at state k.
- If E[N] ≥ 1, the situation either stays the same or gets worse.
- But E[N] = mp + kq: what happens as k → ∞?
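The instability argument can be checked numerically with the slide's parameters (m = 10, p = 0.05, q = 0.005); the helper names are mine:

```python
m, p, q = 10, 0.05, 0.005

def expected_attempts(k):
    """E[N] = mp + kq: expected transmission attempts per slot at backlog k."""
    return m * p + k * q

def p_back(k):
    """P(backlog decreases): no new arrivals and exactly one retransmission."""
    return (1 - p)**m * k * q * (1 - q)**(k - 1)

assert abs(expected_attempts(0) - 0.5) < 1e-12   # load mp = 0.5
assert expected_attempts(200) > 1                # E[N] crosses 1 at k = 100
assert p_back(10) > p_back(10_000)               # escape probability vanishes
```

However small q is, kq eventually dominates, so E[N] crosses 1 for some finite backlog and the chain drifts forward from there.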

Improving Aloha: q = f(backlog)

Q: How can we fix the problem and make the chain ergodic (and the system stable)?
A1: Require E[N] = mp + kq < 1 => q < (1-mp)/k, i.e. q = f(backlog).
A2: Or be more aggressive: geometric backoff, q = a/k^n with a < (1-mp).
A3: Or even exponential backoff: q = β^(-k), β > 1.
Exponential backoff is the basis behind Ethernet.
Q: Why should q not be too small?
A: The retransmission delay goes to infinity.
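A sketch of fix A1: choose q as a function of the backlog so that E[N] = mp + k·q(k) stays below 1 for every k (the 0.9 safety margin is my choice, not the slide's):

```python
m, p = 10, 0.05

def q_adaptive(k):
    """Backlog-dependent retransmission probability, q(k) < (1 - mp) / k."""
    return 0.9 * (1 - m * p) / k if k > 0 else 0.0

# The expected number of attempts per slot stays subcritical at every backlog.
for k in (1, 10, 1_000, 10**6):
    assert m * p + k * q_adaptive(k) < 1
```

In practice the backlog k is not known exactly, which is why schemes like Ethernet's exponential backoff estimate it implicitly from the observed collisions.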

Ergodicity: Time vs. Ensemble Averages

Time averages:
- Ni(t) = number of visits to state i by time t
- pi = lim Ni(t)/t (percentage of time spent in state i)
Ensemble averages:
- mij = expected time to reach j (for the 1st time), starting from i
- mii = expected time between successive visits to i
- πi = probability of being at state i after many steps
Theorem: for an ergodic DTMC, pi = πi = 1/mii (proof based on Renewal Theory).
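The theorem can be illustrated by simulation: for an ergodic chain, the fraction of time spent in a state converges to its stationary probability. A sketch on a 2-state chain of my choosing:

```python
import random

P = [[0.9, 0.1],
     [0.2, 0.8]]                    # ergodic 2-state chain (example numbers mine)
rng = random.Random(0)

state, count0, steps = 0, 0, 200_000
for _ in range(steps):
    count0 += (state == 0)          # N_0(t): visits to state 0
    state = 0 if rng.random() < P[state][0] else 1

p0_time = count0 / steps            # time average p_0
pi0 = 2 / 3                         # ensemble average: pi0 * 0.1 = pi1 * 0.2
assert abs(p0_time - pi0) < 0.02    # the two averages agree
```

A single long sample path suffices precisely because the chain is ergodic; for a chain with transient or periodic structure the two averages need not coincide.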

(Global) Balance Equations

- πi · pij: rate of transitions from state i to state j
  (πi = percentage of time in state i; pij = percentage of those times the chain moves next to j)
Q: What is Σj πj pji?
A: The rate into state i.
Example (3-state chain 0–1–2): rate into state 1 = π0p01 + π2p21; rate out of state 1 = π1p12 + π1p10.
From the stationary equation πi = Σj πj pji => πi is the rate into i.
But also πi = Σj πi pij (why? because Σj pij = 1).
Theorem: Σj πj pji = Σj πi pij (rate in = rate out).
Q: Why is this reasonable?
A: The chain cannot make a transition out of i without a transition into i before it (the two counts differ by at most 1).
More generally, for any subset of states S: rate into S = rate out of S.
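The rate-in = rate-out identity can be verified numerically on a small example chain (the matrix below is mine, chosen to match a 3-state birth-death picture):

```python
P = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]

# Stationary distribution by power iteration: pi <- pi P.
pi = [1 / 3] * 3
for _ in range(200):
    pi = [sum(pi[j] * P[j][i] for j in range(3)) for i in range(3)]

# Global balance: at every state, rate in equals rate out.
for i in range(3):
    rate_in = sum(pi[j] * P[j][i] for j in range(3) if j != i)
    rate_out = sum(pi[i] * P[i][j] for j in range(3) if j != i)
    assert abs(rate_in - rate_out) < 1e-9
```

Power iteration is the brute-force route; the next slide's local balance equations give the same π with far less work when they apply.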

Local Balance Equations and Time-Reversibility

Assume there exist πi such that πi pij = πj pji (for all i and j) and Σi πi = 1. Then:
- πi is the stationary distribution for the chain.
- The above equations are called the local balance equations.
- The Markov chain is called time-reversible.

Solving for the stationary distribution of a DTMC:
- Stationary equations: πi = Σj πj pji (not always easy)
- Global balance: Σj≠i πi pij = Σj≠i πj pji (a bit easier)
- Local balance: πi pij = πj pji (easiest: try first!)
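For a birth-death chain, local balance yields the stationary distribution directly by a forward recursion; a sketch with example transition probabilities of my choosing:

```python
up = [0.5, 0.25]      # p01, p12: transitions one state up
down = [0.25, 0.5]    # p10, p21: transitions one state down

# Local balance: pi_{i+1} = pi_i * p_{i,i+1} / p_{i+1,i}, then normalize.
pi = [1.0]
for u, d in zip(up, down):
    pi.append(pi[-1] * u / d)
total = sum(pi)
pi = [x / total for x in pi]

assert all(abs(a - b) < 1e-12 for a, b in zip(pi, [0.25, 0.5, 0.25]))
```

This works because in a birth-death chain every transition between adjacent states must be balanced by its reverse; for chains with cycles of length > 2, local balance may have no solution and global balance must be used instead.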

Absorbing Markov Chains

A Very Simple Maze Example

[Maze diagram: rooms 1, 2, 3 in a row; the exit door is in room 2]

A mouse is trapped in the above maze, with 3 rooms and 1 exit.
When inside a room with x doors, it chooses any of them with equal probability (1/x).
Q: How long will it take, on average, to exit the maze if it starts at room i?
Q: How long if it starts from a random room?

First Step Analysis

Def: Ti = expected time to leave the maze, starting from room i.
- T2 = 1/3·1 + 1/3·(1+T1) + 1/3·(1+T3) = 1 + 1/3·(T1+T3)
- T1 = 1 + T2
- T3 = 1 + T2
Solving: T2 = 5, T1 = 6, T3 = 6.
Q: Could you have guessed it directly?
A: The number of visits to room 2 before exiting is geometric(1/3): on average, a wrong door is taken twice (each time costing two steps, out and back), and on the 3rd visit the mouse exits.
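The three first-step equations can be solved exactly with rational arithmetic:

```python
from fractions import Fraction as F

# Substituting T1 = 1 + T2 and T3 = 1 + T2 into T2 = 1 + (T1 + T3)/3
# gives T2 * (1 - 2/3) = 1 + 2/3.
T2 = (F(1) + F(2, 3)) / (F(1) - F(2, 3))
T1 = 1 + T2
T3 = 1 + T2

assert (T1, T2, T3) == (6, 5, 6)
```

Starting from a random room, the answer is just the average (T1 + T2 + T3)/3 = 17/3.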

“Hot Potato” Routing

[Network diagram: 9 routers; the destination is attached to one of them]

A packet must be routed towards the destination over the above network.
“Hot potato” routing works as follows: when a router receives a packet, it picks any of its outgoing links at random (including the incoming link) and sends the packet immediately.
Q: How long does it take to deliver the packet?

“Hot Potato” Routing (2)

First-step analysis: we can still apply it!
But it's a bit more complicated: a 9x9 system of linear equations, and the solution is not easy to guess either.
We'll model this with a Markov chain instead.

An Absorbing Markov Chain

[Chain diagram: states 1–9 (the routers), with transition probabilities 1/2, 1/3, or 1/4 depending on each router's number of links, plus the absorbing state A (the destination)]

- 9 transient states: 1–9
- 1 absorbing state: A
Q: Is this chain irreducible?
A: No!
Q: Hot potato routing delay = expected time to absorption?

Hot Potato Routing: An Absorbing Markov Chain

We can define the transition matrix P (10x10), with rows/columns 1, 2, …, 9, A.
Q: What is P(n) as n → ∞?
A: Every row converges to [0, 0, …, 1].
Q: How can we get E[TiA] (the expected time to absorption starting from i)?
Q: How about summing the matrices P(n) over n?
A: No, the sum goes to infinity!

Absorbing Markov Chain Theory

The transition matrix can be written in canonical form, with the transient states written first, followed by the absorbing ones:

  P = | Q  R |
      | 0  I |

where Q holds transient-to-transient transitions, R holds transient-to-absorbing transitions, 0 is a zero matrix, and I is an identity matrix. Calculating P(n) using the canonical form:

  P(n) = | Q^n  (*) |
         | 0     I  |

Q: What is Q^n as n → ∞?
A: It goes to O (the zero matrix): from any transient state, the chain is eventually absorbed.
Q: Where does the (*) part of the matrix converge, if there is only one absorbing state?
A: To a column vector of all 1s.

Fundamental Matrix

Theorem: the matrix (I - Q) has an inverse. N = (I - Q)^(-1) is called the fundamental matrix, and

  N = I + Q + Q^2 + …

nik = the expected number of times the chain is in state k, starting from state i, before being absorbed.
Proof sketch: (I - Q)(I + Q + … + Q^n) = I - Q^(n+1) → I since Q^n → O, so the series sums to (I - Q)^(-1). The (i,k) entry of Q^n is the probability of being in transient state k after n steps; summing over all n gives the expected number of visits.

Time to Absorption (using the fundamental matrix)

Theorem: let Ti be the expected number of steps before the chain is absorbed, given that the chain starts in state i, and let T be the column vector whose ith entry is Ti. Then T = Nc, where c is a column vector all of whose entries are 1.
Proof: Σk nik adds all the entries in the ith row of N
  => the expected number of times in any of the transient states, for a given starting state i
  => the expected time required before being absorbed
  => Ti = Σk nik.
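Since T = Nc is equivalent to solving (I - Q)T = c, no explicit inverse is needed. A sketch on the maze example, with exact rational arithmetic (the elimination code is mine):

```python
from fractions import Fraction as F

# Q over the transient states (rooms 1, 2, 3); room 2 exits with prob 1/3.
Q = [[F(0), F(1), F(0)],
     [F(1, 3), F(0), F(1, 3)],
     [F(0), F(1), F(0)]]
n = len(Q)

# Augmented system (I - Q) T = c, with c = (1, 1, 1).
A = [[(F(1) if i == j else F(0)) - Q[i][j] for j in range(n)] + [F(1)]
     for i in range(n)]

# Gauss-Jordan elimination, exact thanks to Fractions.
for col in range(n):
    piv = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]       # pivot row to the top
    A[col] = [x / A[col][col] for x in A[col]]
    for r in range(n):
        if r != col and A[r][col] != 0:
            A[r] = [a - A[r][col] * b for a, b in zip(A[r], A[col])]

T = [row[n] for row in A]
assert T == [6, 5, 6]   # matches the first-step analysis
```

For larger chains the same computation is one call to a linear solver; the point is that Nc never requires forming N itself.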

Absorption Probabilities

Theorem: let bij be the probability that an absorbing chain will be absorbed in (absorbing) state j, if it starts in (transient) state i, and let B be the t-by-r matrix with entries bij. Then B = NR, with R as in the canonical form.
Proof sketch: absorption into j at step n+1 requires being in some transient state k at step n and then jumping to j, so bij = Σn Σk (Q^n)ik rkj = (NR)ij.

Back to Hot Potato Routing

Using Matlab to compute the matrices:

N =
  3.2174  2.6957  2.3478  6.6522  4.5652  4.0000  3.8261  3.2174  2.6087
  1.3478  3.9130  2.9565  4.0435  4.3043  4.0000  2.5217  2.3478  2.1739
  1.1739  2.9565  3.4783  3.5217  3.6522  4.0000  2.2609  2.1739  2.0870
  2.2174  2.6957  2.3478  6.6522  4.5652  4.0000  3.8261  3.2174  2.6087
  1.5217  2.8696  2.4348  4.5652  4.9565  4.0000  2.7826  2.5217  2.2609
  1.0000  2.0000  2.0000  3.0000  3.0000  4.0000  2.0000  2.0000  2.0000
  1.9130  2.5217  2.2609  5.7391  4.1739  4.0000  4.8696  3.9130  2.9565
  1.6087  2.3478  2.1739  4.8261  3.7826  4.0000  3.9130  4.6087  3.3043
  1.3043  2.1739  2.0870  3.9130  3.3913  4.0000  2.9565  3.3043  3.6522

T =
  33.1304  27.6087  25.3043  32.1304  27.9130  21.0000  32.3478  30.5652  26.7826
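As a sanity check, T = Nc means every entry of T must equal the corresponding row sum of N; this holds for the Matlab output above (up to the printed 4-decimal precision):

```python
# N and T copied from the Matlab output on the slide.
N = [
 [3.2174, 2.6957, 2.3478, 6.6522, 4.5652, 4.0000, 3.8261, 3.2174, 2.6087],
 [1.3478, 3.9130, 2.9565, 4.0435, 4.3043, 4.0000, 2.5217, 2.3478, 2.1739],
 [1.1739, 2.9565, 3.4783, 3.5217, 3.6522, 4.0000, 2.2609, 2.1739, 2.0870],
 [2.2174, 2.6957, 2.3478, 6.6522, 4.5652, 4.0000, 3.8261, 3.2174, 2.6087],
 [1.5217, 2.8696, 2.4348, 4.5652, 4.9565, 4.0000, 2.7826, 2.5217, 2.2609],
 [1.0000, 2.0000, 2.0000, 3.0000, 3.0000, 4.0000, 2.0000, 2.0000, 2.0000],
 [1.9130, 2.5217, 2.2609, 5.7391, 4.1739, 4.0000, 4.8696, 3.9130, 2.9565],
 [1.6087, 2.3478, 2.1739, 4.8261, 3.7826, 4.0000, 3.9130, 4.6087, 3.3043],
 [1.3043, 2.1739, 2.0870, 3.9130, 3.3913, 4.0000, 2.9565, 3.3043, 3.6522],
]
T = [33.1304, 27.6087, 25.3043, 32.1304, 27.9130, 21.0000,
     32.3478, 30.5652, 26.7826]

# T_i = sum of row i of N, up to rounding in the printed output.
for row, t in zip(N, T):
    assert abs(sum(row) - t) < 0.01
```

Row 6 is the easiest to eyeball: its entries are small integers and sum to exactly 21, consistent with that router being directly attached to the destination.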

Example: ARQ and End-to-End Retransmission

- A wireless path consists of H hops (links), each with link success probability p.
- A packet is (re-)transmitted up to M times on each link.
- If all M attempts on a link fail, the packet is retransmitted from the source (end-to-end).
Q: How many transmissions until end-to-end success?
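The slide leaves this as a question. One way to approach it is Monte-Carlo simulation; the sketch below also includes a closed form obtained from a renewal-style argument (both the parameter values and the derivation are mine, so treat it as a plausibility check rather than the course's answer):

```python
import random

def end_to_end_transmissions(H, p, M, rng):
    """Total transmissions until the packet crosses all H hops:
    up to M attempts per hop; after M failures, restart from the source."""
    total = 0
    while True:
        for _hop in range(H):
            for _attempt in range(M):
                total += 1
                if rng.random() < p:
                    break               # hop crossed
            else:
                break                   # M failures on this hop -> restart
        else:
            return total                # all H hops crossed

H, p, M = 3, 0.8, 2
rng = random.Random(42)
est = sum(end_to_end_transmissions(H, p, M, rng)
          for _ in range(20_000)) / 20_000

# Closed form: s = P(hop crossed within M tries), e = E[tx per hop try].
s = 1 - (1 - p)**M
e = sum(k * p * (1 - p)**(k - 1) for k in range(1, M + 1)) + M * (1 - p)**M
G = (1 - s**H) / (1 - s)
analytic = e * G / (1 - (1 - s) * G)    # about 3.91 for these parameters

assert abs(est - analytic) < 0.15
```

Two sanity checks on the closed form: with M → ∞ it reduces to H/p, and with H = 1, M = 1 it reduces to the geometric mean 1/p.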