Discrete Time Markov Chains

1 Discrete Time Markov Chains
TELE4642: Week 10. Material from Dr. Scott Midkiff is gratefully acknowledged.

2 Motivation
Discrete-time queuing systems operate on a slotted time basis: arrival and/or departure events are constrained to take place at integer multiples of a slot time Ts (= 1, typically).
Applications:
- Slotted communication systems (slotted ALOHA)
- Fixed-size packets (ATM – asynchronous transfer mode)
- Imbedded Markov chains (e.g. M/G/1)
[Figure: a time axis divided into slots of width Ts, with an arrival and a departure aligned to slot boundaries]
Network Performance

3 Definitions
A discrete time Markov chain (DTMC) is a stochastic process {Xt, t = 0, 1, 2, …}, where Xt denotes the state at time t, such that for all n ≥ 0:
P[X(n+1) = j | Xn = i, X(n−1) = i(n−1), …, X0 = i0] = P[X(n+1) = j | Xn = i] = Pij
where Pij is independent of time and of past history.
Transition probability matrix P associated with the DTMC: the (i,j)-th entry Pij represents the probability of moving to state j on the next transition, given that the current state is i.
Note: P is square, with dimension equal to the number of states (which can be infinite).
By definition: Σj Pij = 1 for every state i (each row of P sums to 1).
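A transition probability matrix maps naturally onto a 2-D array. As a minimal sketch (the 3-state matrix below is a made-up example, not from the slides), checking the row-sum property:

```python
import numpy as np

# Hypothetical 3-state DTMC: entry P[i, j] is the probability of
# moving from state i to state j on the next transition.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.8, 0.1],
    [0.4, 0.4, 0.2],
])

# By definition, each row of a transition probability matrix sums to 1.
assert np.allclose(P.sum(axis=1), 1.0)
```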

4 Example 1: Repair Facility
A machine is either "working" or "broken".
- If working yesterday, there is a β = 5% chance of breaking down today.
- If broken yesterday, there is an α = 40% chance that the repair centre has fixed it today.
State transition diagram and transition probability matrix:
- States: "on" (working) and "off" (broken).
- A transition occurs at the end of each slot (possibly back to the same state).
- Arc labels are probabilities (not rates) of transitions between states.
[Figure: two-state diagram; "on" loops with probability 1−β and goes to "off" with β; "off" goes to "on" with α and loops with 1−α]
P = [ 1−β   β  ]  =  [ 0.95  0.05 ]
    [  α   1−α ]     [ 0.40  0.60 ]
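The two-state repair chain is easy to check numerically. A sketch using the slide's parameters (β = 0.05, α = 0.40; state 0 = working, state 1 = broken):

```python
import numpy as np

beta, alpha = 0.05, 0.40            # P(break | working), P(fixed | broken)

# State 0 = "working" (on), state 1 = "broken" (off).
P = np.array([[1 - beta, beta],
              [alpha,    1 - alpha]])

# For large n, every row of P^n approaches the steady-state
# distribution [alpha/(alpha+beta), beta/(alpha+beta)] = [8/9, 1/9].
Pn = np.linalg.matrix_power(P, 100)
print(Pn[0])   # ≈ [0.8889, 0.1111]: the machine works 8/9 of the time
```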

5 Example 2: Drawing balls
An urn initially contains 5 black balls and 5 white balls. The following experiment is repeated indefinitely: a ball is drawn from the urn. If the ball is white, it is put back in the urn; if the ball is black, it is left out.
X(n) is the number of black balls in the urn after n draws.

6 Example 2 (contd.)
Markov chain: the states are 5, 4, 3, 2, 1, 0 (the number of black balls remaining). From state k > 0, a black ball is drawn with probability k/(k+5) (move to state k−1) and a white ball with probability 5/(k+5) (stay in state k); state 0 is absorbing.
Transition probabilities:
P(5→4) = 5/10,  P(5→5) = 5/10
P(4→3) = 4/9,   P(4→4) = 5/9
P(3→2) = 3/8,   P(3→3) = 5/8
P(2→1) = 2/7,   P(2→2) = 5/7
P(1→0) = 1/6,   P(1→1) = 5/6
P(0→0) = 1
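The urn chain can be built programmatically: from state k > 0 the urn holds k black balls plus the 5 white ones, so a black ball is drawn with probability k/(k+5). A sketch:

```python
import numpy as np

# State k = number of black balls still in the urn (the 5 white balls remain).
P = np.zeros((6, 6))
P[0, 0] = 1.0                       # no black balls left: absorbing state
for k in range(1, 6):
    P[k, k]     = 5 / (5 + k)       # draw white: put it back, stay in k
    P[k, k - 1] = k / (5 + k)       # draw black: leave it out, move to k-1

assert np.allclose(P.sum(axis=1), 1.0)

# Distribution of X(n) after 20 draws, starting with 5 black balls:
start = np.zeros(6)
start[5] = 1.0
print(start @ np.linalg.matrix_power(P, 20))  # mass drifts toward state 0
```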

7 Example 3: Umbrella problem
An absent-minded professor has 2 umbrellas that she uses when commuting from home to work and back.
- If it is raining and an umbrella is available at her location, she takes it.
- If it is not raining, she forgets to take an umbrella.
Assume it rains with probability p each time she commutes, independent of other times.
State diagram and transition probability matrix: what should we choose as the state?

8 Powers of P: n-step transition probabilities
Let P^n = P · P · … · P (n factors). The (i,j)-th entry of P^n is the probability of being in state j after n steps, given that the chain is in state i now.
For the umbrella problem with p = 0.4:
Note:
- Each row of P^n sums to 1.
- All rows become identical in P^30 – why? (The chain forgets its initial state, so every row converges to the same limiting distribution.)
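The claim about P^30 can be checked numerically. One standard formulation (an assumption here, since the slides leave the state choice as an exercise) takes the state to be the number of umbrellas at the professor's current location; with p = 0.4:

```python
import numpy as np

p = 0.4  # probability of rain on any one commute
# State = number of umbrellas at the professor's current location.
P = np.array([[0.0,     0.0, 1.0],   # 0 here -> all 2 are at the destination
              [0.0,   1 - p, p  ],   # 1 here: she takes it only if raining
              [1 - p,     p, 0.0]])  # 2 here: she takes one only if raining

P30 = np.linalg.matrix_power(P, 30)
print(np.round(P30, 2))
# Every row is (nearly) the same, close to the limiting distribution
# [(1-p)/(3-p), 1/(3-p), 1/(3-p)] ≈ [0.23, 0.38, 0.38]
```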

9 Limiting State Probabilities
The limiting (steady-state) probability that the chain is in state j is
πj = lim(n→∞) P[Xn = j]
For a DTMC with M states, the state probability vector is
π = [π0, π1, …, π(M−1)]

10 Does Steady-State Exist?
Only if the Markov chain is irreducible and aperiodic.
- Irreducible: every state is reachable from every other state in a finite number of steps.
- Aperiodic: the probability of returning to a state in any given (sufficiently large) number of time steps is non-zero.
An ergodic Markov chain is irreducible, aperiodic, and recurrent non-null, and is guaranteed to exhibit steady-state (stationary) behaviour.

11 Balance Equations
The stationary state probability vector π of a Markov chain satisfies:
π = π P, with Σj πj = 1
Find the steady-state probabilities for the umbrella problem. Solving yields:
π0 = (1−p)/(3−p),  π1 = π2 = 1/(3−p)
The probability of getting wet on a given commute is π0 · p = p(1−p)/(3−p).
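The balance equations can also be solved directly as a linear system: stack (Pᵀ − I)π = 0 with the normalisation Σπj = 1 and solve the over-determined but consistent system by least squares. A sketch for the umbrella chain (the matrix below assumes the state is the number of umbrellas at the current location):

```python
import numpy as np

p = 0.4
P = np.array([[0.0,     0.0, 1.0],
              [0.0,   1 - p, p  ],
              [1 - p,     p, 0.0]])

# Stack (P^T - I) with a row of ones, so the system encodes
# pi = pi @ P together with sum(pi) = 1, then solve by least squares.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(pi, 4))       # ≈ [0.2308, 0.3846, 0.3846]
print(round(pi[0] * p, 4))   # P(get wet) = pi_0 * p ≈ 0.0923
```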

