Lecture 2: Algorithmic Methods for Transient Analysis of Continuous Time Markov Chains
Dr. Ahmad Al Hanbali
Industrial Engineering Department, University of Twente
a.alhanbali@utwente.nl

Lecture 2: Transient Analysis of Continuous Time Markov Chains

This lecture deals with continuous time Markov processes, as opposed to the discrete time Markov chains of Lecture 1.

Objectives:
- Find the equilibrium distribution
- Find the transient probabilities
  - Matrix decomposition
  - Uniformization method
- Find transient measures

Background (1)

Let {X(t) : t ≥ 0} denote a continuous time stochastic process with state space {0, 1, …, N}.

X(t) is a Markov chain if the conditional transition probability satisfies, for every t, s ≥ 0 and every state j,

  P(X(s+t) = j | X(u), u ≤ s) = P(X(s+t) = j | X(s)).

X(t) is homogeneous (or stationary) if

  P(X(s+t) = j | X(s) = i) = P(X(t) = j | X(0) = i) = p_ij(t).

X(t) is irreducible if all states can communicate.

Background (2)

Define the (infinitesimal) transition rate from state i to state j of a Markov process as

  q_ij = lim_{t→0} p_ij(t)/t, for i ≠ j.

Let {T_n : n = 0, 1, …} denote the epochs of transition of the CTMC. Then for n ≥ 1 (by convention T_0 = 0)

  P(T_n − T_{n−1} ≤ x | X(T_{n−1}) = i, X(T_n) = j) = 1 − exp(−a_i x),

where a_i = Σ_{j≠i} q_ij is the total outgoing rate of state i, and

  a_i = lim_{t→0} (1 − p_ii(t))/t.

Background (3)

Let q_ii = −a_i. The matrix Q = [q_ij], 0 ≤ i, j ≤ N, is called the generator of the continuous time Markov chain (CTMC). Note that Σ_j q_ij = 0 for every row i.

Let p = (p_0, …, p_N) denote the equilibrium probabilities. The equilibrium equations of the CTMC read

  p_i Σ_{j≠i} q_ij = Σ_{j≠i} p_j q_ji,

or, in matrix form, pQ = 0 with pe = 1, where e is the column vector of ones.

Idea: take advantage of the methods developed for discrete time Markov chains (in Lecture 1).
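As a concrete illustration, the equilibrium equations pQ = 0, pe = 1 can be solved numerically. The sketch below uses a hypothetical two-state generator (the matrix Q and its rates are made up purely for illustration):

```python
import numpy as np

# Hypothetical two-state generator: rows sum to 0, off-diagonal
# entries are the transition rates q_ij.
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])

# Solve p Q = 0 together with the normalization p e = 1 by stacking
# Q^T with a row of ones and solving the overdetermined system.
N = Q.shape[0]
A = np.vstack([Q.T, np.ones(N)])
b = np.zeros(N + 1)
b[-1] = 1.0
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p)  # equilibrium distribution, here [1/3, 2/3]
```

Replacing one balance equation by the normalization constraint is needed because pQ = 0 alone only determines p up to a scalar.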

An equivalent discrete time Markov chain

The equilibrium distribution p can be obtained from an equivalent Markov chain via an elementary transformation. Let Δ be a real number such that 0 < Δ ≤ min_i(−1/q_ii), and let P = I + ΔQ.

P is a stochastic matrix, i.e., its entries are between 0 and 1 and its rows sum to 1. Further, pP = p ⟺ pQ = 0.

The Markov chain with transition matrix P is a discretization of the Markov process with generator Q, using time step Δ.
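A minimal sketch of this transformation on a hypothetical two-state generator (values chosen only for illustration): building P = I + ΔQ and running power iteration recovers the same equilibrium distribution as pQ = 0.

```python
import numpy as np

Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])

# Largest admissible step: Delta <= min_i(-1/q_ii) = min(1/2, 1) = 1/2.
Delta = min(-1.0 / np.diag(Q))
P = np.eye(2) + Delta * Q   # stochastic: entries in [0, 1], rows sum to 1

# Power iteration: p P = p has the same fixed point as p Q = 0.
p = np.full(2, 0.5)
for _ in range(200):
    p = p @ P
print(p)  # converges to the CTMC equilibrium [1/3, 2/3]
```

The convergence rate is governed by the second-largest eigenvalue modulus of P, which is why the largest admissible Δ is usually preferred.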

Uniformization of a CTMC

To obtain the same mean sojourn time per visit in all states, the uniformization of a CTMC introduces fictitious transitions from states to themselves.

Let 0 < Δ ≤ min_i(−1/q_ii), and introduce a fictitious transition from state i to itself with rate q_ii + 1/Δ. This yields:
- The equilibrium distribution of Q does not change.
- The outgoing rate of state i becomes −q_ii + (q_ii + 1/Δ) = 1/Δ, the same for all states.
- The equilibrium distribution of the uniformized Markov process of Q is the same as that of the Markov chain with transition matrix P = I + ΔQ, embedded at the epochs of transitions (jumps).
- The transitions of the uniformized process take place according to a Poisson process with rate 1/Δ.

Equilibrium distribution

All methods developed for solving the equilibrium equations of a discrete time Markov chain can be applied to the uniformized Markov chain with transition matrix P.

Transient Behavior of a CTMC

Kolmogorov's equations are needed for the transient analysis. Define the transient probability

  p_ij(t) = P(X(t) = j | X(0) = i).

Then, for 0 ≤ s < t (the Chapman–Kolmogorov relation),

  p_ij(t) = Σ_k p_ik(s) p_kj(t−s).

Kolmogorov's equations are a set of differential equations for p_ij(t).
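The Chapman–Kolmogorov relation can be checked numerically. For the hypothetical two-state generator Q = [[−2, 2], [1, −1]] (eigenvalues 0 and −3, values made up for illustration), P(t) = exp(Qt) has the closed form used below:

```python
import numpy as np

# Closed-form P(t) = exp(Qt) for the hypothetical generator
# Q = [[-2, 2], [1, -1]], which has eigenvalues 0 and -3.
def P(t):
    e = np.exp(-3.0 * t)
    return np.array([[1/3 + 2/3 * e, 2/3 - 2/3 * e],
                     [1/3 - 1/3 * e, 2/3 + 1/3 * e]])

# Chapman-Kolmogorov: p_ij(t) = sum_k p_ik(s) p_kj(t - s).
s, t = 0.4, 1.1
print(np.allclose(P(t), P(s) @ P(t - s)))  # True
```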

Kolmogorov's equations

Let P(t) denote the matrix with (i, j) entry p_ij(t). Kolmogorov's forward equations are obtained by letting s approach t from below (letting s approach 0 instead yields the backward equations):

  P'(t) = P(t) Q, with P(0) = I.

Hence,

  P(t) = Σ_{n≥0} (Qt)^n / n! = exp(Qt).

Truncating this infinite sum is numerically inefficient, since Q has both positive and negative entries, so the partial sums suffer from cancellation.

Matrix decomposition method

Let l_i, i = 0, …, N, be the (N+1) eigenvalues of Q (assumed diagonalizable). Let y_i and x_i be the left and right eigenvectors corresponding to l_i, normalized such that y_i x_i = 1 and y_i x_j = 0 for i ≠ j. The generator then reads

  Q = X L X^{−1},

where X is the matrix whose columns are the x_i, L is the diagonal matrix with entries l_i, and X^{−1} is the matrix whose rows are the y_i.

Matrix decomposition method (cnt'd)

The transient probability matrix then reads

  P(t) = Σ_{n≥0} (Qt)^n / n! = X exp(Lt) X^{−1} = Σ_i x_i exp(l_i t) y_i.

Questions:
- What is the interpretation of P(∞)?
- What conditions should the l_i satisfy when t tends to infinity?
- What is the disadvantage of matrix decomposition?

By Gershgorin's eigenvalue theorem, all eigenvalues of Q have a non-positive real part.
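A sketch of the decomposition method with numpy, again on a hypothetical two-state generator (assumed diagonalizable; all values are for illustration only). The columns of X returned by np.linalg.eig are right eigenvectors, and the rows of its inverse are the matching left eigenvectors:

```python
import numpy as np

Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])
t = 0.7

# Q = X L X^{-1}: columns of X are right eigenvectors x_i,
# rows of X^{-1} are the corresponding left eigenvectors y_i.
L, X = np.linalg.eig(Q)
P_t = np.real((X * np.exp(L * t)) @ np.linalg.inv(X))  # X exp(Lt) X^{-1}
print(P_t)   # transient probabilities; rows sum to 1

# As t grows, exp(l_i t) -> 0 for eigenvalues with negative real part,
# so P(inf) has identical rows equal to the equilibrium distribution.
P_inf = np.real((X * np.exp(L * 100.0)) @ np.linalg.inv(X))
print(P_inf)
```

This illustrates the answers to the questions above: P(∞) stacks copies of the equilibrium distribution, and this limit exists because every non-zero eigenvalue has a strictly negative real part.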

Uniformization method

Let 0 < Δ ≤ min_i(−1/q_ii) and P = I + ΔQ. Conditioning on Y, the number of transitions in (0, t), which is Poisson distributed with mean t/Δ, gives

  P(t) = Σ_{n≥0} P(Y = n) P^n = Σ_{n≥0} exp(−t/Δ) (t/Δ)^n / n! · P^n.

Truncating this sum after the first K terms gives a good approximation, with

  K = max{20, t/Δ + 5 √(t/Δ)}.

It is best to take the largest possible value Δ = min_i(−1/q_ii).
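The recipe above translates directly into code. A sketch on a hypothetical two-state generator (rates made up for illustration), accumulating the truncated Poisson mixture term by term:

```python
import numpy as np

Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])
t = 0.7

Delta = min(-1.0 / np.diag(Q))   # largest admissible step, here 1/2
P = np.eye(2) + Delta * Q
rate = t / Delta                 # Poisson mean of the number of jumps

# Truncate at K = max(20, t/Delta + 5*sqrt(t/Delta)).
K = max(20, int(rate + 5 * np.sqrt(rate)))
P_t = np.zeros_like(Q)
Pn = np.eye(2)                   # running power P^n
w = np.exp(-rate)                # Poisson weight P(Y = 0)
for n in range(K + 1):
    P_t += w * Pn                # add P(Y = n) * P^n
    Pn = Pn @ P
    w *= rate / (n + 1)          # P(Y = n+1) from P(Y = n)
print(P_t)                       # approximates exp(Qt); rows sum to 1
```

Unlike the raw series in Qt, every term here is non-negative, so there is no cancellation.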

Occupancy time: mean

The occupancy time of a state is the sojourn time in that state during (0, T). Note that it depends on the state at time 0. Let m_ij(T) denote the mean occupancy time in state j during (0, T) given initial state i. Then,

  m_ij(T) = E[ ∫_0^T 1{X(t) = j} dt | X(0) = i ]
          = ∫_0^T E[ 1{X(t) = j} | X(0) = i ] dt
          = ∫_0^T p_ij(t) dt.

In matrix form,

  M(T) = [m_ij(T)] = ∫_0^T P(t) dt.

Mean occupancy time (cnt'd)

Using the uniformized process (Δ, P),

  M(T) = ∫_0^T Σ_{n≥0} exp(−t/Δ) (t/Δ)^n / n! · P^n dt
       = Σ_{n≥0} [ ∫_0^T exp(−t/Δ) (t/Δ)^n / n! dt ] P^n.

Note that

  ∫_0^T exp(−t/Δ) (t/Δ)^n / n! dt = Δ (1 − P(Y ≤ n)),

where Y is a Poisson random variable with mean T/Δ. We find that

  M(T) = Δ Σ_{n≥0} (1 − P(Y ≤ n)) P^n.

Note that Σ_{n≥0} P^n does not converge, so do not split up the latter sum when computing M(T).
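Using the same kind of hypothetical two-state generator (values for illustration only), the formula M(T) = Δ Σ_n (1 − P(Y ≤ n)) P^n can be evaluated term by term, keeping the factor (1 − P(Y ≤ n)) attached to each power of P as the slide warns:

```python
import numpy as np

Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])
T = 2.0

Delta = min(-1.0 / np.diag(Q))
P = np.eye(2) + Delta * Q
rate = T / Delta                 # Poisson mean of Y, here 4.0

# M(T) = Delta * sum_n (1 - P(Y <= n)) P^n; truncate once the
# Poisson tail 1 - P(Y <= n) is negligible.
K = int(rate + 10 * np.sqrt(rate)) + 20
M = np.zeros_like(Q)
Pn = np.eye(2)
pmf = np.exp(-rate)              # P(Y = 0)
cdf = pmf                        # P(Y <= 0)
for n in range(K + 1):
    M += (1.0 - cdf) * Pn
    Pn = Pn @ P
    pmf *= rate / (n + 1)
    cdf += pmf                   # P(Y <= n + 1)
M *= Delta
print(M)              # mean occupancy times m_ij(T)
print(M.sum(axis=1))  # each row sums to T, since rows of P(t) sum to 1
```

The row-sum check is a useful sanity test: the total occupancy over all states must equal the horizon T regardless of the initial state.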

Cumulative distribution of the occupancy time

Let O(T) denote the total sojourn time during [0, T] in a subset of states Ω_o. Then, for 0 ≤ x < T,

  P(O(T) ≤ x) = Σ_{n=0}^∞ e^{−T/Δ} (T/Δ)^n / n! · Σ_{k=0}^n α(n, k) Σ_{j=k}^n C(n, j) (x/T)^j (1 − x/T)^{n−j},

  P(O(T) = T) = Σ_{n=0}^∞ e^{−T/Δ} (T/Δ)^n / n! · α(n, n+1),

where C(n, j) is the binomial coefficient and α(n, k) is the probability that the uniformized process visits Ω_o k times during [0, T] given that it makes n transitions.

Proof sketch (see Tijms 2003 for details):
- Condition on the Poisson number of transitions of the uniformized process being n.
- The occupancy time is smaller than x if the uniformized process visits Ω_o k times out of the n epochs and at least k of these transitions happen before x. The former probability is α(n, k); the latter follows a binomial distribution, since given n Poisson events in [0, T] the event times are i.i.d. uniform on [0, T].
- The α(n, k) can be computed recursively. Note that they depend on the initial position of the chain at time 0.

Moments of the occupancy time

Proposition: the m-th moment of O(T) is given by

  E[O(T)^m] / T^m = Σ_{n=0}^∞ e^{−T/Δ} (T/Δ)^n / (n+m)! · Σ_{k=1}^{n+1} α(n, k) Π_{l=k}^{k+m−1} l.

Proposition: given that the chain starts in equilibrium, the second moment of the occupancy time in the subset Ω_o during [0, T] satisfies

  E[O(T)^2] / (2 T^2) = Σ_{n=1}^∞ e^{−T/Δ} (T/Δ)^n / (n+2)! · p_0 [ Σ_{i=1}^n (n−i+1) P^i ] e_0 + (Σ_{l∈Ω_o} p_l) · (e^{−T/Δ} + T/Δ − 1) / (T/Δ)^2,

where p_i is the steady-state probability of the Markov chain in state i, p_0 is the row vector with i-th entry equal to p_i if i ∈ Ω_o and zero otherwise, and e_0 is the column vector with i-th entry equal to 1 if i ∈ Ω_o and zero otherwise.

For proofs see: A. Al Hanbali, M.C. van der Heijden. Interval availability analysis of a two-echelon, multi-item system. European Journal of Operational Research (EJOR), vol. 228, issue 3, pp. 494–503, 2013.

References

- V.G. Kulkarni. Modeling, Analysis, Design, and Control of Stochastic Systems. Springer, New York, 1999.
- H.C. Tijms. A First Course in Stochastic Models. Wiley, New York, 2003.
- http://www.win.tue.nl/~iadan/algoritme/