
Discrete time Markov Chain


1 Discrete time Markov Chain
Chapter 4: Discrete time Markov Chain
Learning objectives:
Introduce the discrete time Markov chain model
Model manufacturing systems using Markov chains
Evaluate steady-state performances
Textbook: C. Cassandras and S. Lafortune, Introduction to Discrete Event Systems, Springer, 2007

2 An example
A company uses a machine with the following states of wear: new, slightly degraded, degraded, and unusable. The degrading process is modeled as a DTMC with the day as time unit and the following transition probabilities (each row sums to 1):

from \ to           new    slightly degraded   degraded   unusable
new                 0.6    0.2                 0.1        0.1
slightly degraded   0      0.7                 0.2        0.1
degraded            0      0                   0.8        0.2
unusable            0      0                   0          1

Xn: state of the machine at the beginning of day n. How does Xn evolve over time?
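As a quick illustration, the chain can be simulated by sampling each day's state from the row of the matrix indexed by the current state. This is a minimal sketch, assuming the transition matrix below (read off the slide's probabilities; entries not legible on the slide are filled so each row sums to 1):

```python
import numpy as np

# Transition matrix of the machine-wear chain
# (rows/columns 0..3 = new, slightly degraded, degraded, unusable).
P = np.array([
    [0.6, 0.2, 0.1, 0.1],
    [0.0, 0.7, 0.2, 0.1],
    [0.0, 0.0, 0.8, 0.2],
    [0.0, 0.0, 0.0, 1.0],
])

rng = np.random.default_rng(0)

def simulate(P, start=0, days=30):
    """Sample one trajectory X_0, ..., X_days by drawing each day's state
    from the row of P indexed by the current state."""
    path = [start]
    for _ in range(days):
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

path = simulate(P)  # wear can only increase, so the path is non-decreasing
```

Because the matrix is upper triangular, every sampled trajectory is monotone and eventually absorbed in the "unusable" state.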

3 Plan
Basic definitions of discrete time Markov chains
Classification of discrete time Markov chains
Analysis of discrete time Markov chains

4 Basic definitions of discrete time Markov chains

5 Discrete Time Markov Chain (DTMC)
Definition: a stochastic process with discrete state space and discrete time {Xn, n ≥ 0} is a discrete time Markov chain (DTMC) iff
P[Xn+1 = j | Xn = in, ..., X0 = i0] = P[Xn+1 = j | Xn = in] = pij(n), with in = i
In a DTMC, the past history impacts the future evolution of the system only via the current state of the system. pij(n) is called the transition probability from state i to state j at time n.

6 Discrete Time Markov Chain (DTMC)
Stochastic processes can be classified by their time base (continuous or discrete) and their events (continuous or discrete). A DTMC is a discrete time, memoryless, discrete event stochastic process.

7 Example: a mouse in a maze
[Diagram: maze with rooms 1–5, a start cell, and an exit]
Which stochastic process can be used to represent the position of the mouse at time t? Under which assumptions can the system be represented by a discrete time Markov chain?

8 Example: a mouse in a maze
[Diagram: maze with rooms 1–5, start, and exit; each corridor is chosen with probability 1/2 or 1/4 depending on the room]
Let {Xn}, n = 0, 1, 2, ... be the position of the mouse after n rooms visited. Assume that the mouse has no memory of the rooms visited previously and chooses each corridor equiprobably.

9 Homogeneous DTMC
A DTMC is said to be homogeneous iff its transition probabilities do not depend on the time n, i.e.
P[Xn+1 = j | Xn = i] = P[X1 = j | X0 = i] = pij
A homogeneous DTMC is then defined by its transition matrix P = [pij], i, j ∈ E

10 What is the transition matrix of the process?
[Diagram: the maze of slide 8, with rooms 1–5, start, and exit]
Let {Xn}, n = 0, 1, 2, ... be the position of the mouse after n rooms visited. Assume that the mouse has no memory of the rooms visited previously and chooses each corridor equiprobably.

11 Stochastic Matrix
A square matrix is said to be stochastic iff
all entries are non-negative
each row sums to 1
Properties:
A transition matrix is a stochastic matrix.
If P is stochastic, then Pn is stochastic.
The eigenvalues of P all have modulus at most 1, i.e. |λ| ≤ 1.
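These properties are easy to check numerically. A minimal sketch, using an arbitrary example matrix:

```python
import numpy as np

def is_stochastic(M, tol=1e-12):
    """A square matrix is stochastic iff its entries are >= 0
    and each row sums to 1."""
    M = np.asarray(M, dtype=float)
    return (M.ndim == 2 and M.shape[0] == M.shape[1]
            and (M >= 0).all()
            and np.allclose(M.sum(axis=1), 1.0, atol=tol))

# A small example chain (values chosen arbitrarily for illustration).
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])

# If P is stochastic, so is every power P^n.
powers_ok = all(is_stochastic(np.linalg.matrix_power(P, n))
                for n in range(1, 6))

# Every eigenvalue of a stochastic matrix has modulus <= 1.
moduli = np.abs(np.linalg.eigvals(P))
```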

12 Assumptions
In the remainder of the chapter, we limit ourselves to discrete time Markov chains that are defined on a finite state space E and homogeneous in time. Note that most results extend to countable state spaces.

13 Graphic representation of a DTMC

14 Classification of Discrete Time Markov Chains

15 Classification of states
Let fjj be the probability of returning to state j after leaving j.
A state j is said to be transient if fjj < 1.
A state j is said to be recurrent if fjj = 1.
A state j is said to be absorbing if pjj = 1.
Let Tjj be the recurrence time, i.e. the time to return to j.
A recurrent state j is positive recurrent if E[Tjj] is finite.
A recurrent state j is null recurrent if E[Tjj] = ∞.

16 Classify the states of the example
[Diagram: the maze chain of slide 8]
Starting at 3, the probability of returning to 3 is f33 = 0.5. Let In = 1 if the chain returns an n-th time. A transient state can only be reached a finite number of times.

17 Classify the states of the example
[Diagram: the maze chain of slide 8]
States 1, 2, 3, 4 are transient. States 0 (exit) and 5 are absorbing and hence recurrent. The steady state (the behavior after a long enough time) is either 0 or 5.

18 Classification of states
[Diagram: chain on states 0, 1, 2, 3, ... with transition probabilities 1/2, 1/3, 1/4, ...]
State 0 is recurrent but null recurrent.

19 Classification of states
[Diagram: random walk with parameter p on states 0, 1, 2, ...]
p > 0.5: transient; unstable, arrival rate p > departure rate 1-p
p = 0.5: recurrent but null recurrent; borderline case
p < 0.5: positive recurrent; stable, arrival rate p < departure rate 1-p
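The three regimes can be illustrated by simulating the walk. The sketch below assumes a reflecting barrier at 0 (the queue interpretation: up with probability p, down with probability 1-p when non-empty) and uses arbitrary values of p:

```python
import random

def simulate_walk(p, steps, seed=1):
    """Random walk on {0, 1, 2, ...}: +1 with prob p, -1 with prob 1-p
    (reflected at 0). Models a queue with arrival rate p, departure rate 1-p."""
    rng = random.Random(seed)
    x = 0
    for _ in range(steps):
        if rng.random() < p:
            x += 1
        elif x > 0:
            x -= 1
    return x

# p > 0.5: the walk drifts away and never settles (transient).
far = simulate_walk(0.8, 10_000)   # expected position around 0.6 * 10000
# p < 0.5: the walk keeps returning near 0 (positive recurrent).
near = simulate_walk(0.2, 10_000)
```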

20 Irreducible Markov chain
A DTMC is said to be irreducible iff any state j can be reached in a finite number of steps from any other state i. The transition graph of an irreducible DTMC is strongly connected.

21 Irreducible Markov chain

22 Periodic Markov chain
A state j is said to be periodic if it is visited only in numbers of steps that are multiples of an integer d > 1, called the period. A state j is said to be aperiodic otherwise. A state with a self-loop transition (i.e. pii > 0) is always aperiodic. All states of an irreducible Markov chain have the same period.

23 Partitioning a DTMC into irreducible sub-chains
A DTMC can be partitioned into strongly connected components, each corresponding to an irreducible sub-chain.

24 Classification of irreducible sub-chains
A sub-chain is said to be absorbing or closed if there is no arc going out of it. Otherwise, the sub-chain is transient. The absorbing (closed) sub-chains determine the steady state.

25 Canonical form of the transition matrix
Q: transitions among the transient states
Pi: transitions between states of absorbing sub-chain i
Ri: transitions toward absorbing sub-chain i
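With the transient states listed first, the canonical form can be sketched as follows (shown here for two absorbing sub-chains; the number and sizes of the blocks depend on the chain):

```latex
P =
\begin{pmatrix}
Q & R_1 & R_2 \\
0 & P_1 & 0 \\
0 & 0   & P_2
\end{pmatrix}
```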

26 Formal definitions
A state j is said to be reachable from a state i if there is a path from i to j in the state transition diagram. Two states that are mutually reachable are said to be communicating. A subset S of states is said to be closed if there is no transition leaving S. A strongly connected component is a maximal subset of mutually reachable states; it is also called an irreducible set.

27 Verification of irreducibility
A Markov chain is irreducible if and only if its transition graph is strongly connected.
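Irreducibility can therefore be verified with a graph search. A minimal sketch, using a breadth-first search over the positive-probability edges:

```python
def reachable(adj, start):
    """Set of states reachable from `start` over positive-probability edges."""
    seen, frontier = {start}, [start]
    while frontier:
        i = frontier.pop()
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                frontier.append(j)
    return seen

def is_irreducible(P):
    """A finite chain is irreducible iff every state reaches every state."""
    n = len(P)
    adj = [[j for j in range(n) if P[i][j] > 0] for i in range(n)]
    return all(reachable(adj, i) == set(range(n)) for i in range(n))

# Two states that flip back and forth: irreducible.
ring = [[0.0, 1.0], [1.0, 0.0]]
# A chain with an absorbing state: not irreducible.
absorbing = [[0.5, 0.5], [0.0, 1.0]]
```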

28 Verification of transient and recurrent states
State i is recurrent if Σn pii(n) = ∞. State i is transient if Σn pii(n) < ∞.
Implications:
State i recurrent ⇒ infinitely many returns to i once it is visited.
State i transient ⇒ finite number of visits to i.

29 Verification of transient and recurrent states
A Markov chain with a finite state space has at least one positive recurrent state. A state reachable from a (positive) recurrent state is (positive) recurrent. States in a finite closed irreducible set are all positive recurrent. States not belonging to a closed irreducible set are transient. States in a closed irreducible set are (i) all positive recurrent, or (ii) all null recurrent, or (iii) all transient.

30 Verification of periodicity
Two communicating states have the same period. A state with a self-loop transition (i.e. pii > 0) is aperiodic.

31 Analysis of DTMC

32 Sojourn time in a state
Let Ti be the time spent in state i before jumping to another state. Ti is a random variable with a geometric distribution, where 1 - pii is the rate of leaving the state. Recall discrete random variables.

33 Properties of the geometric distribution
Let X be a random variable with geometric distribution of parameter p, i.e. P{X = n} = (1-p)^(n-1) p.
E[X] = 1/p: p = event rate, 1/p = mean time
Var(X) = (1-p)/p^2
Memoryless (the only discrete distribution with this property): P{X > n + m | X > m} = P{X > n}
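Both properties can be checked numerically. The sketch below truncates the infinite sum for the mean and verifies memorylessness through the tail formula P{X > k} = (1-p)^k:

```python
# Exact checks for the geometric distribution P{X = n} = (1-p)^(n-1) * p.
p = 0.3

# Mean: a long truncation of sum n * P{X = n} approximates 1/p.
mean = sum(n * (1 - p) ** (n - 1) * p for n in range(1, 2000))

def tail(k):
    """P{X > k} = (1-p)^k for the geometric distribution."""
    return (1 - p) ** k

# Memorylessness: P{X > n + m | X > m} = P{X > n}.
n, m = 4, 7
lhs = tail(n + m) / tail(m)
rhs = tail(n)
```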

34 m-step transition probabilities
The probability of going from i to j in m steps is pij(m) = P{Xn+m = j | Xn = i} = P{Xm = j | X0 = i}. Let P(m) = [pij(m)] be the m-step transition matrix.
Properties (to prove):
P(m) = P^m
Chapman-Kolmogorov equation: P(l+m) = P(l)P(m), i.e. pij(l+m) = Σk pik(l) pkj(m)
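Both properties are straightforward to verify numerically; the matrix below is an arbitrary example chain:

```python
import numpy as np

# Illustrative transition matrix (values chosen arbitrarily).
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

def P_m(P, m):
    """m-step transition matrix P(m) = P^m."""
    return np.linalg.matrix_power(P, m)

# Chapman-Kolmogorov: P(l+m) = P(l) P(m).
l, m = 3, 5
lhs = P_m(P, l + m)
rhs = P_m(P, l) @ P_m(P, m)
```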

35 Example
[Diagram: the maze chain of slide 8]
What is the probability that the mouse is still in room 2 at time 4, i.e. p22(4)?

36 Probability of going from i to j in exactly n steps
fij(n): probability of going from i to j in exactly n steps (without passing through j before)
fij: probability of going from i to j in a finite number of steps, fij = Σn fij(n)
A similar approach can be used to determine the average time Tij it takes to go from i to j.

37 Probability distribution of states
πi(n): probability of being in state i at time n, πi(n) = P{Xn = i}
π(n) = (π1(n), π2(n), ...): vector of the probability distribution over the state space at time n
The probability distribution π(n) depends on the transition matrix P and the initial distribution π(0).
Remark: if the system is in state i with certainty, then πi(0) = 1 and πj(0) = 0 for j ≠ i.
What is the relation between π(n), π(0), and P?

38 Transient state equations
By conditioning on the state at time n: π(n+1) = π(n)P.
Property: Let P be the transition matrix of a Markov chain and π(0) the initial distribution; then the distribution over the state space at time n is π(n) = π(0)P^n.
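The recursion and the closed form can be cross-checked numerically (the two-state matrix below is an arbitrary example):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])    # example chain (arbitrary values)
pi0 = np.array([1.0, 0.0])    # start in state 0 with certainty

# Distribution at time 10 via the recursion pi(n+1) = pi(n) P ...
pi = pi0.copy()
for _ in range(10):
    pi = pi @ P

# ... must equal the closed form pi(n) = pi(0) P^n.
pi_closed = pi0 @ np.linalg.matrix_power(P, 10)
```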

39 Transient state equations
Z-transform of an integer-variable function x(n): X(z) = Σn x(n) z^n
Z-transform of the transient distribution:
π(n+1) = π(n)P ⇒ z^-1(Π(z) - π(0)) = Π(z)P ⇒ Π(z) = π(0)(I - zP)^-1
Example (two-state chain: state 1 stays put with probability 1-p and moves to state 0 with probability p; state 0 stays put with probability 1-r and moves to state 1 with probability r):
π1(n+1) = (1-p) π1(n) + r π0(n)
π0(n+1) = (1-r) π0(n) + p π1(n)
z^-1(Π1(z) - π1(0)) = (1-p) Π1(z) + r Π0(z)
z^-1(Π0(z) - π0(0)) = (1-r) Π0(z) + p Π1(z)

40 Transient state equations
[Plots: convergence of π(n) in the two-state chain for p = 0.9, r = 0.8 and for p = 0.1, r = 0.8]
The speed of convergence depends on the value of the eigenvalue (1-p-r).
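A quick numerical check of this claim, for the two parameter settings shown on the slide:

```python
import numpy as np

def two_state(p, r):
    """Two-state chain: state 0 -> 1 with prob r, state 1 -> 0 with prob p."""
    return np.array([[1 - r, r],
                     [p, 1 - p]])

# The eigenvalues of this chain are 1 and (1 - p - r); the closer
# |1 - p - r| is to 0, the faster pi(n) converges to the steady state.
eigs_fast = np.sort(np.real(np.linalg.eigvals(two_state(0.9, 0.8))))
eigs_slow = np.sort(np.real(np.linalg.eigvals(two_state(0.1, 0.8))))
```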

41 Steady-state distribution
Key questions:
Does the distribution π(n) converge when n goes to infinity?
If the distribution converges, does its limit π = (π1, π2, ...) depend on the initial distribution π(0)?
If a state is recurrent, what is the percentage of time spent in this state, and what is the mean number of transitions between two successive visits to the state?
If a state is absorbing, what is the probability of ending in this state? What is the average time to reach it?

42 Steady state distribution
Theorem: For an irreducible and aperiodic DTMC with positive recurrent states, the distribution π(n) converges to a limit vector π which is independent of π(0) and is the unique solution of the system:
πj = Σi πi pij, for all j (balance or equilibrium equations)
Σi πi = 1 (normalization equation)
The πi are also called stationary probabilities (the steady state or equilibrium distribution). For an irreducible and periodic DTMC, πi is the percentage of time spent in state i.

43 Flow balance equation
The equation πj = Σi πi pij can be interpreted as a balance equation of probability flow. A probability flow πi pij is associated to each transition (i, j). Σi≠j πi pij is the sum of probability flow into node j, and πj Σk≠j pjk is the sum of flow out of node j. The flow balance equation: outgoing flow = incoming flow.
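For a finite irreducible chain, the balance equations plus normalization can be solved directly as a linear system. A minimal sketch, using an arbitrary example chain:

```python
import numpy as np

def steady_state(P):
    """Solve pi = pi P together with sum(pi) = 1 by replacing one
    redundant balance equation with the normalization equation."""
    n = len(P)
    A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Example irreducible aperiodic chain (arbitrary values).
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
pi = steady_state(P)
```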

44 More technical results
Theorem 1: In an irreducible aperiodic Markov chain, the limits limn→∞ π(n) always exist and are independent of the initial distribution.
Theorem 2: In an irreducible Markov chain with transient states or null recurrent states, limn→∞ π(n) = 0 and no stationary probability exists.

45 A manufacturing system
Consider a machine which can be either UP or DOWN. The state of the machine is checked every day. The average time to failure of an UP machine is 10 days. The average time to repair a DOWN machine is 1.5 days.
Determine the conditions for the state of the machine {Xn} at the beginning of each day to be a Markov chain. Draw the Markov chain model. Find the transient distribution starting from state UP and from state DOWN. Check whether the Markov chain is recurrent and aperiodic. Determine the steady state distribution. Determine the availability of the machine.
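A sketch of the steady-state computation, under the assumption that up and down times are geometrically distributed (so that {Xn} is indeed a Markov chain), with daily failure probability p = 1/MTTF and daily repair-completion probability r = 1/MTTR:

```python
import numpy as np

# Assumed model: geometric up and down times, so the daily chain on
# {UP, DOWN} is Markov with
#   p = P(fail in a day)   = 1 / 10    (MTTF = 10 days)
#   r = P(repair in a day) = 1 / 1.5   (MTTR = 1.5 days)
p, r = 1 / 10, 1 / 1.5
P = np.array([[1 - p, p],
              [r, 1 - r]])    # states: 0 = UP, 1 = DOWN

# Steady state of a two-state chain in closed form: pi_UP = r / (p + r).
availability = r / (p + r)
pi = np.array([availability, 1 - availability])
```

The availability comes out to about 0.87, i.e. the machine is up roughly 87% of the time.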

46 A telephone call process
Discrete time model with time slots indexed by k = 0, 1, 2, ...
At most one telephone call can occur in a single time slot, and a call occurs in any slot with probability a.
If the phone is busy, the call is lost; otherwise, the call is processed.
A call in process completes in any time slot with probability b.
If both a call arrival and a call completion occur in the same time slot, the new call is processed.
Issues to solve: Markov chain model; loss probability.
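A sketch of one plausible Markov chain model. The slot conventions below (in particular, that a slot with both an arrival and a completion leaves the phone busy with the new call) are assumptions consistent with the description above, and the values of a and b are arbitrary:

```python
import numpy as np

a, b = 0.3, 0.5   # example values: P(arrival per slot), P(completion per slot)

# States: 0 = phone free, 1 = phone busy.
# From free: a call arrives with prob a and is processed -> busy.
# From busy: the call completes with prob b, but if a new call arrives in
# the same slot it takes over, so busy -> free only with prob b * (1 - a).
P = np.array([[1 - a, a],
              [b * (1 - a), 1 - b * (1 - a)]])

# Steady state of the two-state chain in closed form.
pi_free = b * (1 - a) / (a + b * (1 - a))
pi_busy = 1 - pi_free
pi = np.array([pi_free, pi_busy])

# Under these conventions, an arriving call is lost when the phone is busy
# and no completion frees the line in that slot.
loss_probability = pi_busy * (1 - b)
```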

