Lecture 2: Algorithmic Methods for Transient Analysis of Continuous Time Markov Chains
Dr. Ahmad Al Hanbali, Department of Industrial Engineering, University of Twente
Lecture 2: transient analysis of continuous time Markov chains
This lecture deals with continuous time Markov chains, as opposed to the discrete time Markov chains of Lecture 1.
Objectives:
- Find the equilibrium distribution
- Find the transient probabilities
  - Matrix decomposition
  - Uniformization method
- Find transient measures
Background (1)
Let $\{X(t) : t \ge 0\}$ denote a continuous time stochastic process with state space $\{0, 1, \ldots, N\}$.
$X(t)$ is a Markov chain if the conditional transition probability satisfies, for every $t, s \ge 0$ and $j$,
$P(X(s+t) = j \mid X(u),\, u \le s) = P(X(s+t) = j \mid X(s))$
$X(t)$ is homogeneous (or stationary) if
$P(X(s+t) = j \mid X(s) = i) = P(X(t) = j \mid X(0) = i) = p_{ij}(t)$
$X(t)$ is irreducible if all states can communicate.
Background (2)
Define the (infinitesimal) transition rate from state $i$ to $j$ of the Markov process as
$q_{ij} = \lim_{t \to 0} p_{ij}(t)/t, \quad i \ne j$
Let $\{S_n : n = 0, 1, \ldots\}$ denote the epochs of transition of the CTMC. Then for $n \ge 1$ (by convention $S_0 = 0$),
$P(S_n - S_{n-1} \le x \mid X(S_{n-1}) = i,\, X(S_n) = j) = 1 - \exp(-q_i x)$,
where $q_i \ (= \sum_{j \ne i} q_{ij})$ is the total outgoing rate of state $i$,
$q_i = \lim_{t \to 0} \frac{1 - p_{ii}(t)}{t}$
Background (3)
Let $q_{ii} = -q_i$. The matrix $Q = [q_{ij}]_{0 \le i, j \le N}$ is called the generator of the continuous time Markov chain (CTMC). Note: $\sum_j q_{ij} = 0$.
Let $\pi = (\pi_0, \ldots, \pi_N)$ denote the equilibrium probabilities. The equilibrium equations of the CTMC give
$\pi_i \sum_{j \ne i} q_{ij} = \sum_{j \ne i} \pi_j q_{ji}$,
or in matrix form, $\pi Q = 0$, $\pi e = 1$.
Idea: take advantage of the methods developed for discrete time Markov chains (in Lecture 1).
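As a concrete illustration (not from the slides), a minimal NumPy sketch with an arbitrary 3-state generator chosen for the example: it checks that the rows of $Q$ sum to zero and solves $\pi Q = 0$, $\pi e = 1$ by appending the normalization condition.

```python
import numpy as np

# Arbitrary 3-state generator chosen only for illustration:
# 0 -> 1 at rate 2; 1 -> 0 at rate 3, 1 -> 2 at rate 1; 2 -> 1 at rate 4.
Q = np.array([[-2.0,  2.0,  0.0],
              [ 3.0, -4.0,  1.0],
              [ 0.0,  4.0, -4.0]])

assert np.allclose(Q.sum(axis=1), 0.0)        # rows of a generator sum to 0

# Solve pi Q = 0 together with pi e = 1 (append the normalization equation).
A = np.vstack([Q.T, np.ones(len(Q))])
b = np.zeros(len(Q) + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)                                     # equilibrium distribution of the CTMC
```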
An equivalent discrete time Markov chain
The equilibrium distribution $\pi$ can be obtained from an equivalent discrete time Markov chain via an elementary transformation. Let $\Delta$ be a real number such that $0 < \Delta \le \min_i (-1/q_{ii})$, and let $P = I + \Delta Q$.
$P$ is a stochastic matrix, i.e., its entries lie between 0 and 1 and its rows sum to 1. Further, $\pi P = \pi \iff \pi Q = 0$.
The Markov chain with transition matrix $P$ is a discretization of the Markov process with generator $Q$ using time step $\Delta$.
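Continuing the same illustrative example, a sketch of the transformation $P = I + \Delta Q$ with the largest admissible step $\Delta = \min_i(-1/q_{ii})$:

```python
import numpy as np

Q = np.array([[-2.0,  2.0,  0.0],
              [ 3.0, -4.0,  1.0],
              [ 0.0,  4.0, -4.0]])

delta = 1.0 / np.max(-np.diag(Q))             # Delta = min_i(-1/q_ii)
P = np.eye(len(Q)) + delta * Q                # equivalent DTMC transition matrix

# P is stochastic: entries in [0, 1], rows summing to 1.
assert np.all(P >= 0) and np.all(P <= 1) and np.allclose(P.sum(axis=1), 1.0)
```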
Uniformization of CTMC
To have the same mean sojourn time in all states per visit, the uniformization of the CTMC introduces fictitious transitions from states to themselves.
Let $0 < \Delta \le \min_i (-1/q_{ii})$ and introduce a fictitious transition from state $i$ to itself with rate $q_{ii} + 1/\Delta$. This yields:
- The equilibrium distribution of $Q$ does not change
- The outgoing rate of state $i$ becomes $q_i + (q_{ii} + 1/\Delta) = 1/\Delta$, the same for all states
- The equilibrium distribution of the uniformized Markov process of $Q$ is the same as that of the Markov chain with transition matrix $P\ (= I + \Delta Q)$ embedded at the epochs of transitions (jumps)
- The transitions of the uniformized process take place according to a Poisson process with rate $1/\Delta$
Equilibrium distribution
All methods developed for solving the equilibrium equations of a discrete time Markov chain can be applied to the uniformized Markov chain with transition matrix $P$.
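For instance (an illustration, not prescribed by the slides), power iteration on the uniformized chain's transition matrix $P$ recovers the equilibrium distribution of $Q$:

```python
import numpy as np

Q = np.array([[-2.0,  2.0,  0.0],
              [ 3.0, -4.0,  1.0],
              [ 0.0,  4.0, -4.0]])
delta = 1.0 / np.max(-np.diag(Q))
P = np.eye(len(Q)) + delta * Q

pi = np.full(len(P), 1.0 / len(P))            # arbitrary starting distribution
for _ in range(500):                          # power iteration on the DTMC
    pi = pi @ P
print(pi)                                     # approximates the equilibrium distribution of Q
```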
Transient Behavior of CTMC
Kolmogorov's equations are needed for the transient analysis. Define the transient probability
$p_{ij}(t) = P(X(t) = j \mid X(0) = i)$
Then, for $0 \le s < t$,
$p_{ij}(t) = \sum_k p_{ik}(s)\, p_{kj}(t - s)$
Kolmogorov's equations are a set of differential equations for $p_{ij}(t)$.
Background (1)
Let $P(t)$ denote the matrix with $(i,j)$ entry $p_{ij}(t)$.
Kolmogorov's forward equations are derived by letting $s$ approach $t$ from below (the backward equations by letting $s$ approach 0):
$P'(t) = P(t)\, Q, \qquad P(0) = I$
Hence,
$P(t) = P(0) \sum_{n \ge 0} \frac{(Qt)^n}{n!} = \exp(Qt)$
Truncating the infinite sum is inefficient since $Q$ has both positive and negative elements, which leads to cancellation.
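For small state spaces, $P(t)$ can be computed directly with a library matrix exponential; a quick check with the illustrative generator (SciPy's expm uses scaling-and-squaring with a Padé approximation rather than the truncated series):

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0,  2.0,  0.0],
              [ 3.0, -4.0,  1.0],
              [ 0.0,  4.0, -4.0]])

t = 0.7
P_t = expm(Q * t)                             # P(t) = exp(Qt)
assert np.allclose(P_t.sum(axis=1), 1.0)      # each row of P(t) is a probability distribution
print(P_t)
```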
Matrix decomposition method
Let $l_i$, $i = 0, \ldots, N$, be the $(N+1)$ eigenvalues of $Q$. Let $y_i$ and $x_i$ be the left and right eigenvectors corresponding to $l_i$, normalized such that $y_i x_i = 1$ and $y_i x_j = 0$ for $i \ne j$. The generator then decomposes as
$Q = V L V^{-1}$,
where $V^{-1}$ is the matrix whose rows are the $y_i$, $L$ is the diagonal matrix with entries $l_i$, and $V$ is the matrix whose columns are the $x_i$.
Matrix decomposition method (cont'd)
The transient probability matrix then reads
$P(t) = \sum_{n \ge 0} \frac{(Qt)^n}{n!} = V \exp(Lt)\, V^{-1} = \sum_i x_i \exp(l_i t)\, y_i$
Questions: What is the interpretation of $P(\infty)$? What conditions should the $l_i$ satisfy as $t$ tends to infinity? What is the disadvantage of matrix decomposition?
By Gershgorin's eigenvalue (circle) theorem, all eigenvalues of $Q$ have a non-positive real part.
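A sketch of the decomposition approach for the same illustrative $Q$ (assuming $Q$ is diagonalizable; NumPy returns the right eigenvectors as columns of $V$, and the rows of $V^{-1}$ then play the role of the left eigenvectors $y_i$):

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0,  2.0,  0.0],
              [ 3.0, -4.0,  1.0],
              [ 0.0,  4.0, -4.0]])

lam, V = np.linalg.eig(Q)                     # eigenvalues l_i, right eigenvectors x_i (columns)
V_inv = np.linalg.inv(V)                      # rows are the left eigenvectors y_i

t = 0.7
P_t = (V @ np.diag(np.exp(lam * t)) @ V_inv).real   # P(t) = V exp(Lt) V^{-1}
assert np.allclose(P_t, expm(Q * t))          # agrees with the direct matrix exponential
```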
Uniformization method
Let $0 < \Delta \le \min_i (-1/q_{ii})$ and $P = I + \Delta Q$. Conditioning on $N$, the number of transitions in $(0, t)$, which is Poisson distributed with mean $t/\Delta$, gives
$P(t) = \sum_{n \ge 0} P(N = n)\, P^n = \sum_{n \ge 0} \exp(-t/\Delta) \frac{(t/\Delta)^n}{n!}\, P^n$
Truncating the latter sum at the first $K$ terms gives a good approximation, with
$K = \max\{20,\ t/\Delta + 5\sqrt{t/\Delta}\}$
It is best to take the largest possible value $\Delta = \min_i (-1/q_{ii})$.
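A sketch of the uniformization computation of $P(t)$ for the illustrative $Q$, using the truncation heuristic above:

```python
import numpy as np
from math import ceil, exp, sqrt

Q = np.array([[-2.0,  2.0,  0.0],
              [ 3.0, -4.0,  1.0],
              [ 0.0,  4.0, -4.0]])

t = 0.7
delta = 1.0 / np.max(-np.diag(Q))             # largest admissible Delta
P = np.eye(len(Q)) + delta * Q
rho = t / delta                               # Poisson mean number of jumps in (0, t)

K = max(20, ceil(rho + 5 * sqrt(rho)))        # truncation level K
P_t = np.zeros_like(Q)
weight = exp(-rho)                            # P(N = 0)
P_power = np.eye(len(Q))                      # P^0
for n in range(K + 1):
    P_t += weight * P_power
    weight *= rho / (n + 1)                   # P(N = n+1) from P(N = n)
    P_power = P_power @ P
print(P_t)                                    # approximates exp(Qt)
```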
Occupancy time: mean
The occupancy time of a state is the sojourn time in that state during $(0, T)$. Note that it depends on the state at time 0.
Let $m_{ij}(T)$ denote the mean occupancy time in state $j$ during $(0, T)$ given initial state $i$. Then,
$m_{ij}(T) = E\left[\int_0^T 1_{\{X(t) = j\}}\, dt \,\middle|\, X(0) = i\right] = \int_0^T E\left[1_{\{X(t) = j\}} \,\middle|\, X(0) = i\right] dt = \int_0^T p_{ij}(t)\, dt$
In matrix form,
$M(T) = [m_{ij}(T)] = \left[\int_0^T p_{ij}(t)\, dt\right] = \int_0^T P(t)\, dt$
Mean occupancy time (cont'd)
Using the uniformized process $(\Delta, P)$,
$M(T) = \int_0^T \sum_{n \ge 0} \exp(-t/\Delta) \frac{(t/\Delta)^n}{n!}\, P^n\, dt = \sum_{n \ge 0} \left[\int_0^T \exp(-t/\Delta) \frac{(t/\Delta)^n}{n!}\, dt\right] P^n$
Note that
$\int_0^T \exp(-t/\Delta) \frac{(t/\Delta)^n}{n!}\, dt = \Delta\, (1 - P(Y \le n))$,
where $Y$ is a Poisson random variable with mean $T/\Delta$. We find that
$M(T) = \Delta \sum_{n \ge 0} (1 - P(Y \le n))\, P^n$
Note that $\sum_{n \ge 0} P^n$ does not converge, so do not split up the latter sum when computing $M(T)$.
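A sketch of this computation for the illustrative $Q$ (the weights $1 - P(Y \le n)$ are Poisson tail probabilities, so the sum can be truncated once they become negligible):

```python
import numpy as np
from scipy.stats import poisson

Q = np.array([[-2.0,  2.0,  0.0],
              [ 3.0, -4.0,  1.0],
              [ 0.0,  4.0, -4.0]])

T = 2.0
delta = 1.0 / np.max(-np.diag(Q))
P = np.eye(len(Q)) + delta * Q
rho = T / delta                               # Y ~ Poisson(T / Delta)

M = np.zeros_like(Q)
P_power = np.eye(len(Q))
n = 0
while True:
    tail = 1.0 - poisson.cdf(n, rho)          # 1 - P(Y <= n)
    if n > rho and tail < 1e-12:
        break
    M += tail * P_power
    P_power = P_power @ P
    n += 1
M *= delta                                    # M(T) = Delta * sum_n (1 - P(Y <= n)) P^n

print(M)
print(M.sum(axis=1))                          # each row sums to T: total time over all states
```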
Cumulative distribution of occupancy time
Let $O(T)$ denote the total sojourn time during $[0, T]$ in a subset of states $\Omega_0$. Then, for $0 \le x < T$,
$P(O(T) \le x) = \sum_{n=0}^{\infty} e^{-T/\Delta} \frac{(T/\Delta)^n}{n!} \sum_{k=0}^{n} \alpha(k, n) \sum_{j=k}^{n} \binom{n}{j} \left(\frac{x}{T}\right)^j \left(1 - \frac{x}{T}\right)^{n-j}$,
$P(O(T) = T) = \sum_{n=0}^{\infty} e^{-T/\Delta} \frac{(T/\Delta)^n}{n!}\, \alpha(n+1, n)$,
where $\alpha(k, n)$ is the probability that the uniformized process visits $\Omega_0$ $k$ times during $[0, T]$ given that it makes $n$ transitions.
Proof (for details see Tijms 2003):
- Condition on the Poisson number of transitions of the uniformized process being $n$.
- The occupancy time is smaller than $x$ if the uniformized process visits $\Omega_0$ $k$ times out of the $n+1$ visits and at least $k$ of the $n$ (uniformly distributed) transition epochs occur before $x$. The former probability is $\alpha(k, n)$; the latter is a cumulative binomial probability.
- $\alpha(k, n)$ can be computed recursively. Note that the $\alpha(k, n)$ depend on the initial state of the chain at time 0.
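A minimal sketch of evaluating the outer sums, assuming the table `alpha[n][k]` (= $\alpha(k,n)$ for $k = 0, \ldots, n$) has already been computed by the recursion mentioned above, which is not reproduced on the slides; `alpha` and `n_max` are hypothetical inputs:

```python
import numpy as np
from scipy.stats import binom, poisson

def occupancy_cdf(x, T, delta, alpha, n_max):
    """P(O(T) <= x) for 0 <= x < T.

    alpha[n][k] is a hypothetical precomputed table: the probability of
    k visits to Omega_0 given n transitions of the uniformized process."""
    rho = T / delta
    total = 0.0
    for n in range(n_max + 1):
        inner = 0.0
        for k in range(n + 1):
            # P(at least k of the n uniform transition epochs fall before x)
            inner += alpha[n][k] * binom.sf(k - 1, n, x / T)
        total += poisson.pmf(n, rho) * inner
    return total
```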
Moments of occupancy time
Proposition: The $m$-th moment of $O(T)$ is given by
$E\left[\left(\frac{O(T)}{T}\right)^m\right] = \sum_{n=0}^{\infty} e^{-T/\Delta} \frac{(T/\Delta)^n}{(n+m)!} \sum_{k=1}^{n+1} \alpha(k, n) \prod_{j=k}^{k+m-1} j$
Proposition: Given that the chain starts in equilibrium, the second moment of the occupancy time in the subset $\Omega_0$ during $[0, T]$ is given by
$E\left[\left(\frac{O(T)}{T}\right)^2\right] = 2 \sum_{n=1}^{\infty} e^{-T/\Delta} \frac{(T/\Delta)^n}{(n+2)!}\, \pi_0^{\mathsf T} \left[\sum_{j=1}^{n} (n-j+1)\, P^j\right] e_0 + 2\, \frac{e^{-T/\Delta} + T/\Delta - 1}{(T/\Delta)^2} \sum_{i \in \Omega_0} \pi_i$,
where $\pi_i$ is the steady-state probability of the Markov chain in state $i$, $\pi_0$ is the column vector with $i$-th entry equal to $\pi_i$ if $i \in \Omega_0$ and zero otherwise, and $e_0$ is the column vector with $i$-th entry equal to 1 if $i \in \Omega_0$ and zero otherwise.
For the proofs see: A. Al Hanbali, M.C. van der Heijden. Interval availability analysis of a two-echelon, multi-item system. European Journal of Operational Research (EJOR), vol. 228, issue 3, 2013.
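A sketch of evaluating the second proposition (as reconstructed above) for the illustrative $Q$, with $\Omega_0 = \{0\}$ chosen arbitrarily for the example and the chain assumed to start in equilibrium:

```python
import numpy as np

Q = np.array([[-2.0,  2.0,  0.0],
              [ 3.0, -4.0,  1.0],
              [ 0.0,  4.0, -4.0]])
T = 2.0
omega0 = [0]                                  # assumed subset Omega_0 for the example

delta = 1.0 / np.max(-np.diag(Q))
P = np.eye(len(Q)) + delta * Q
rho = T / delta

# Equilibrium distribution: pi Q = 0, pi e = 1.
A = np.vstack([Q.T, np.ones(len(Q))])
b = np.zeros(len(Q) + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

e0 = np.zeros(len(Q))
e0[omega0] = 1.0                              # i-th entry 1 if i in Omega_0
pi0 = pi * e0                                 # i-th entry pi_i if i in Omega_0

N = 60                                        # truncation of the Poisson sum
second = 2.0 * (np.exp(-rho) + rho - 1.0) / rho**2 * pi0.sum()
S = np.zeros_like(Q)                          # S_n = sum_{j=1}^{n} P^j
I = np.zeros_like(Q)                          # I_n = sum_{j=1}^{n} (n - j + 1) P^j
P_power = np.eye(len(Q))
w = np.exp(-rho) * rho / 6.0                  # e^{-rho} rho^n / (n+2)!  at n = 1
for n in range(1, N + 1):
    P_power = P_power @ P
    S += P_power
    I += S                                    # I_n = I_{n-1} + S_n
    second += 2.0 * w * (pi0 @ I @ e0)
    w *= rho / (n + 3)                        # advance the weight to n + 1
print(second)                                 # E[(O(T)/T)^2]
```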
References
- V.G. Kulkarni. Modeling, Analysis, Design, and Control of Stochastic Systems. Springer, New York, 1999.
- H.C. Tijms. A First Course in Stochastic Models. Wiley, New York, 2003.