Discrete Time Markov Chains (A Brief Overview)


1 Discrete Time Markov Chains (A Brief Overview)

2 Discrete Time Markov Chain (DTMC)
Consider a system where transitions take place at discrete time instants, and where the state of the system at time k is denoted X_k. The Markov property states that the future depends only on the present:
P(X_{k+1} = j | X_k = i, X_{k-1} = i_{k-1}, …, X_0 = i_0) = P(X_{k+1} = j | X_k = i) = p_ij
The p_ij's are the state transition probabilities. The system's evolution is fully specified by its state transition probability matrix, [P]_ij = p_ij.
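
As a concrete illustration (not part of the original slides), here is a minimal sketch of simulating a DTMC trajectory; the two-state matrix P is a made-up example reused in the sketches that follow:

```python
import numpy as np

# Hypothetical 2-state chain used throughout these sketches.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

rng = np.random.default_rng(seed=0)

def simulate(P, x0, n_steps):
    """Sample X_0, X_1, ..., X_n; the next state depends only on the current one."""
    states = [x0]
    for _ in range(n_steps):
        # Row P[x] is the distribution of X_{k+1} given X_k = x (Markov property).
        states.append(int(rng.choice(len(P), p=P[states[-1]])))
    return states

print(simulate(P, x0=0, n_steps=10))
```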

3 State Transition Probability Matrix
P is a stochastic matrix: all entries are non-negative, and the entries in each row sum to 1.
n-step transition probability matrix: p_ij^(n) is the probability of being in state j after n steps when starting in state i.
Limiting probabilities: the probability of being in a given state after a very large (infinite) number of transitions.
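
A quick numerical check of these properties (a sketch using the same made-up matrix as above):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Stochastic-matrix check: non-negative entries, each row sums to 1.
assert (P >= 0).all() and np.allclose(P.sum(axis=1), 1.0)

# n-step transition probabilities: entry (i, j) of P^n is p_ij^(n).
print(np.linalg.matrix_power(P, 10))
```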

4 Chapman-Kolmogorov Equations
Track state evolution via the transition probabilities between all states: p_ij^(n) is the probability of being in state j after n transitions/steps when starting in state i.
The Chapman-Kolmogorov (C-K) equations,
p_ij^(n) = Σ_k p_ik^(m) p_kj^(n-m), 0 ≤ m ≤ n,
provide a simple recursion for computing higher-order transition probabilities.

5 More on C-K Equations
In matrix form, where P is the one-step transition probability matrix, the Chapman-Kolmogorov equations state P^n = P^m P^(n-m), 0 ≤ m ≤ n.
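
A direct check of the matrix form (sketch, same illustrative matrix):

```python
import numpy as np
from numpy.linalg import matrix_power

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# C-K in matrix form: P^n = P^m P^(n-m) for any 0 <= m <= n.
n, m = 7, 3
assert np.allclose(matrix_power(P, n),
                   matrix_power(P, m) @ matrix_power(P, n - m))
```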

6 Limiting Distribution
From P we know that p_ij is the probability of being in state j after step 1, given that we start in state i.
In general, let π_0 be the initial state probability vector. The state probability vector π_1 after the first step is then given by π_1 = π_0 P, and in general after n steps we have π_n = π_0 P^n.
Does π_n converge to a limit as n → ∞? (the limiting distribution)
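
One way to probe convergence numerically is to iterate π_{n+1} = π_n P until the vector stops changing (a sketch; the starting vector and tolerance are arbitrary choices):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

pi = np.array([1.0, 0.0])       # pi_0: start in state 0 with probability 1
for _ in range(10_000):
    nxt = pi @ P                # pi_{n+1} = pi_n P
    if np.allclose(nxt, pi, atol=1e-12):
        break
    pi = nxt
print(pi)                       # ≈ [0.8333, 0.1667] for this chain
```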

7 A Simple 3-State Example
[State-transition diagram: a 3-state chain on states 0, 1, 2, with arc labels p, 1-p, p(1-p), and p².]
Note that the relation π_n = π_0 P^n implies that, conditioned on starting in state i, π_n is simply the ith row of P^n.

8 A Simple 3-State Example
[Numerical values of P^n omitted.] As n grows, the rows of P^n become approximately identical: the limit appears independent of the starting state i.

9 Stationary Distribution
A Markov chain with M states admits a stationary probability distribution π = [π_0, π_1, …, π_{M-1}] if π P = π and Σ_{i=0}^{M-1} π_i = 1.
In other words, Σ_{i=0}^{M-1} π_i p_ij = π_j for all j, and Σ_{i=0}^{M-1} π_i = 1.
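
Checking stationarity of a candidate vector is a one-liner (sketch; the vector below is the stationary distribution of the made-up matrix used earlier):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = np.array([5/6, 1/6])   # candidate stationary distribution for this P

# Stationarity: pi P = pi, and the entries form a probability distribution.
assert np.allclose(pi @ P, pi) and np.isclose(pi.sum(), 1.0)
```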

10 Stationary vs. Limiting Distributions
Assume that the limiting distribution π = [π_0, π_1, …, π_{M-1}], where π_j = lim_{n→∞} p_ij^(n) > 0, exists and is a distribution, i.e., Σ_{i=0}^{M-1} π_i = 1.
Then π is also a stationary distribution, i.e., π = π P, and no other stationary distribution exists.
When this holds, we can find the limiting distribution of a DTMC either by solving the stationary equations π = π P, or by raising the matrix P to some large power.

11 Summary: Computing DTMC Probabilities
Use the stationary equations: π = π P and Σ_i π_i = 1.
Or numerically compute successive values of P^n, stopping when all rows are approximately equal; the limit is a matrix with identical rows, each row equal to π.
Or guess a solution to the recurrence relation.
The first two routes are sketched below.
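
A sketch of the first two approaches side by side (illustrative matrix; np.linalg.lstsq solves the stationary equations stacked with the normalization condition):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
M = len(P)

# Method 1: solve pi P = pi with sum(pi) = 1, i.e., (P^T - I) pi = 0
# stacked with a normalization row of ones.
A = np.vstack([P.T - np.eye(M), np.ones(M)])
b = np.zeros(M + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Method 2: raise P to a large power; every row converges to pi.
P_big = np.linalg.matrix_power(P, 100)

print(pi)        # ≈ [0.8333, 0.1667]
print(P_big[0])  # ≈ the same vector
```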

12 Back to Our Simple 3-State Example
[State-transition diagram repeated: the 3-state chain on states 0, 1, 2, with arc labels p, 1-p, p(1-p), and p².]

13 The Umbrella Example

[State-transition diagram: states 0, 1, 2 = number of umbrellas at the professor's current location, with arc labels 1, p, and 1-p.]
For probability of rain p = 0.6, this gives π = [π_0, π_1, π_2] = [1/6, 5/12, 5/12].
Probability P_wet that the professor gets wet = Prob[zero umbrellas and rain]. The two events are independent, so P_wet = π_0 p = (1/6)(0.6) = 0.1.
Average number of umbrellas at a location: E[U] = 1·π_1 + 2·π_2 = 1.25.
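
A numerical sketch of this example; the transition matrix below is inferred from the diagram description (state = umbrellas at the current location, out of 2 total):

```python
import numpy as np

p = 0.6                                   # probability of rain
# From state 0 the professor walks to where both umbrellas are (-> 2 w.p. 1);
# from states 1 and 2, rain (prob p) carries one umbrella to the other location.
P = np.array([[0.0,   0.0,   1.0],
              [0.0, 1 - p,     p],
              [1 - p,   p,   0.0]])

pi = np.linalg.matrix_power(P, 200)[0]    # limiting distribution via P^n
print(pi)                                 # ≈ [1/6, 5/12, 5/12]
print("P_wet =", pi[0] * p)               # ≈ 0.1
print("E[U]  =", pi[1] + 2 * pi[2])       # ≈ 1.25
```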

14 Infinite DTMC
Handling chains with an infinite state space: we can no longer use matrix multiplication.
But the result that if the limiting distribution exists, it is the only stationary distribution, still holds.
So we can still use the stationary equations, provided they have "some structure" we can exploit.

15 A Simple (Birth-Death) Example
[State-transition diagram: birth-death chain on states 0, 1, 2, 3, 4, …, with forward probability r, backward probability s, and self-loop probabilities 1-r at state 0 and 1-r-s at states i ≥ 1.]
The (infinite) transition probability matrix, and correspondingly the stationary equations, are of the form

    [ 1-r     r      0      0    … ]
P = [  s    1-r-s    r      0    … ]
    [  0      s    1-r-s    r    … ]
    [  ⋮                       ⋱   ]

16 A Simple (Birth-Death) Example
[Same birth-death chain as above.]
The stationary equations can be rewritten as follows:
π_0 = π_0(1-r) + π_1 s  ⟹  π_1 = (r/s) π_0
π_1 = π_0 r + π_1(1-r-s) + π_2 s  ⟹  π_2 = (r/s) π_1 = (r/s)² π_0
π_2 = π_1 r + π_2(1-r-s) + π_3 s  ⟹  π_3 = (r/s) π_2 = (r/s)³ π_0
We can then show by induction that π_i = (r/s)^i π_0, i ≥ 0.
The normalization condition Σ_{i=0}^{∞} π_i = 1 gives π_0 = 1 - r/s (assuming r < s), and hence π_i = (r/s)^i (1 - r/s).

17 A Simple (Birth-Death) Example
[Same birth-death chain as above.]
Defining ρ = r/s (a natural definition of "utilization", consistent with Little's Law), the stationary probabilities take the form π_0 = 1 - ρ and π_i = ρ^i (1 - ρ).
From these we readily obtain E[N] = Σ_i i π_i = ρ/(1 - ρ).
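
These closed forms can be checked numerically by truncating the chain at a large but finite size (a sketch; r, s and the truncation level are arbitrary illustrative choices):

```python
import numpy as np

r, s = 0.3, 0.5                   # illustrative values with r < s (stable chain)
N = 100                           # truncation level; the tail (r/s)^N is negligible
P = np.zeros((N, N))
P[0, 0], P[0, 1] = 1 - r, r
for i in range(1, N - 1):
    P[i, i - 1], P[i, i], P[i, i + 1] = s, 1 - r - s, r
P[N - 1, N - 2], P[N - 1, N - 1] = s, 1 - s   # reflecting truncation boundary

pi = np.linalg.matrix_power(P, 10**6)[0]      # limiting distribution via P^n

rho = r / s
i = np.arange(N)
assert np.allclose(pi, rho**i * (1 - rho), atol=1e-8)   # pi_i = rho^i (1 - rho)
print("E[N] =", (i * pi).sum(), "vs rho/(1-rho) =", rho / (1 - rho))
```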

18 On the Order of Arrivals & Departures
Assume that our system state denotes the number of jobs in the system. At every time step a job can arrive with probability p, i.e., λ = p, and a job in service can depart with probability q, i.e., E[S] = 1/q. This is our previous birth-death example with r = p(1-q) and s = q(1-p).
Applying Little's Law to the server: P(server is idle) = 1 - E[# jobs in server] = 1 - λE[S] = 1 - p/q.
But our analysis gives π_0 = 1 - ρ = 1 - r/s = 1 - p(1-q)/(q(1-p)) ≠ 1 - p/q ???
The resolution: the system state is based on sampling the system at time-slot boundaries, i.e., π_0 is the fraction of time the system is empty at the end/start of a time slot, not the fraction of time the server is idle. The system can be in state 0 at both the start and end of a slot while the server is busy during that slot (an arrival and a departure in the same slot).
For the server to be idle, we need the system in state 0 at the beginning of the slot (probability π_0) AND no arrival in that slot (probability 1-p). This implies:
P(server is idle) = π_0(1-p) = [1 - p(1-q)/(q(1-p))](1-p) = (1-p) - p(1-q)/q = [q - pq - p + pq]/q = 1 - p/q ✓
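
A short Monte Carlo sketch of this distinction; the within-slot ordering (arrival first, then a possible departure of any job present, including the new one) is inferred from the r = p(1-q), s = q(1-p) transition probabilities above:

```python
import numpy as np

p, q = 0.3, 0.5                     # illustrative arrival/departure probabilities
rng = np.random.default_rng(seed=1)

n_slots = 500_000
jobs, empty_at_boundary, idle_slots = 0, 0, 0
for _ in range(n_slots):
    if jobs == 0:
        empty_at_boundary += 1      # state sampled at the slot boundary
    if rng.random() < p:
        jobs += 1                   # arrival during the slot
    if jobs == 0:
        idle_slots += 1             # server idle: empty AND no arrival this slot
    elif rng.random() < q:
        jobs -= 1                   # departure (possibly of the new arrival)

rho = p * (1 - q) / (q * (1 - p))
print("empty at boundary:", empty_at_boundary / n_slots, "vs pi_0    =", 1 - rho)
print("server idle      :", idle_slots / n_slots, "vs 1 - p/q =", 1 - p / q)
```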

