
1 CS433 Modeling and Simulation Lecture 07 – Part 01 Continuous Markov Chains Dr. Anis Koubâa http://10.2.230.10:4040/akoubaa/cs433/ 14 Dec 2008 Al-Imam Mohammad Ibn Saud University

2 Goals for Today
- Understand the Markov property in the continuous case
- Understand the difference between continuous-time and discrete-time Markov Chains
- Learn how to use Continuous-Time Markov Chains for modelling stochastic processes

3 "Discrete Time" versus "Continuous Time"
[Timeline figures: in discrete time, events occur at known points in time, with fixed steps of length 1 between times 0, 1, 2, 3, 4; in continuous time, events occur at any point in time, with variable intervals τ1 = u − s, τ2 = v − u, τ3 = t − v between event times s, u, v, t.]

4 Definition (Wikipedia): Continuous-Time Markov Chains
- In probability theory, a Continuous-Time Markov Chain (CTMC) is a stochastic process { X(t) : t ≥ 0 } that satisfies the Markov property and takes values from a set called the state space.
- The Markov property states that at any times s > t > 0, the conditional probability distribution of the process at time s, given the whole history of the process up to and including time t, depends only on the state of the process at time t.
- In effect, the state of the process at time s is conditionally independent of the history of the process before time t, given the state of the process at time t.

5 Definition 1: Continuous-Time Markov Chains
A stochastic process {X(t), t ≥ 0} is a Continuous-Time Markov Chain (CTMC) if, for all 0 ≤ s ≤ t and non-negative integers i, j, x(u) with 0 ≤ u < s,

P[X(t) = j | X(s) = i, X(u) = x(u) for 0 ≤ u < s] = P[X(t) = j | X(s) = i]

In addition, if this probability is independent of s and t (it depends only on the duration t − s), then the CTMC has stationary transition probabilities:

P[X(t) = j | X(s) = i] = P[X(t − s) = j | X(0) = i]

[Timeline: u — the past, X(u) = x(u); s — the present, X(s) = i; t — the future, X(t) = j; t − s is the time duration.]

6 Differences between Continuous-Time and Discrete-Time Markov Chains
- Time: discrete — t_k with k ∈ ℕ; continuous — s, t ∈ ℝ+.
- Transient transition probability: discrete — P_ij(k), for the time interval [k, k+1]; continuous — P_ij(s,t), for the time interval [s,t].
- Stationary transition probability: discrete — P_ij(1) = P_ij, with the time unit fixed to 1; continuous — P_ij(τ) for the duration τ = t − s, dependent on the duration τ.
- Transition probability to the same state: discrete — P_ii can be different from 0; continuous — p_ii = 0 in the embedded jump chain (a transition always changes the state).
[Timeline figures as on slide 3: fixed time steps of length 1 in discrete time; variable intervals τ1 = u − s, τ2 = v − u, τ3 = t − v in continuous time.]

7 Definition 2: Continuous-Time Markov Chains
A stochastic process {X(t), t ≥ 0} is a Continuous-Time Markov Chain (CTMC) if
- The amount of time spent in state i before making a transition to a different state is exponentially distributed with parameter ν_i;
- When the process leaves state i, it enters state j with probability p_ij, where p_ii = 0 and Σ_j p_ij = 1;
- All transitions and times are independent (in particular, the transition probability out of a state is independent of the time spent in the state).
Summary: the CTMC process moves from state to state according to a DTMC, and the time spent in each state is exponentially distributed.
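A minimal simulation sketch of this two-part description (not from the slides; the rates `nu`, the embedded chain `P`, and the two-state example are assumed for illustration):

```python
import numpy as np

def simulate_ctmc(nu, P, x0, t_end, seed=0):
    """Simulate a CTMC given the leaving rates nu[i] and the embedded
    DTMC transition probabilities P[i][j] (with P[i][i] == 0)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while t < t_end:
        t += rng.exponential(scale=1.0 / nu[x])  # holding time ~ Exp(nu_i)
        x = int(rng.choice(len(nu), p=P[x]))     # next state from the DTMC
        path.append((t, x))
    return path

# Assumed two-state example: leave state 0 at rate 1, state 1 at rate 2.
nu = np.array([1.0, 2.0])
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(simulate_ctmc(nu, P, x0=0, t_end=5.0)[:5])
```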

8 Differences between DISCRETE and CONTINUOUS
[Diagrams contrasting a DTMC sample path with a CTMC sample path.]
Summary: the CTMC process moves from state to state according to a DTMC, and the time spent in each state is exponentially distributed.

9 Five Minutes Break
You are free to discuss the previous slides with your classmates, refresh a bit, or ask questions.

10 Chapman-Kolmogorov: Transition Function
Define the transition function (the analogue of the transition probability in a DTMC):

p_ij(s,t) = P[X(t) = j | X(s) = i], for s ≤ t

Using the Markov (memoryless) property, the continuous-time analogue of the Chapman-Kolmogorov equation is

p_ij(s,t) = Σ_k p_ik(s,u) p_kj(u,t), for any s ≤ u ≤ t

11 τ-Time Transition Probability
Define the transition matrix between s and t as H(s,t) = [p_ij(s,t)], for i, j = 1, 2, … Note that H(s,s) = I.
Homogeneous case: when the CTMC has stationary transition probabilities, the transition function depends only on the duration and is called the τ-time transition probability:

p_ij(s,t) = p_ij(t − s) = p_ij(τ)

p_ij(τ) is the probability that the transition from i to j occurs during a time interval of length τ. We must have Σ_j p_ij(τ) = 1.

12 Transition Rate Matrix
In matrix form, the Chapman-Kolmogorov equation for s ≤ t ≤ t + Δt reads

H(s, t + Δt) = H(s, t) H(t, t + Δt)

In scalar form, the Chapman-Kolmogorov equation for s ≤ t ≤ t + Δt reads

p_ij(s, t + Δt) = Σ_k p_ik(s, t) p_kj(t, t + Δt)

13 Transition Rate Matrix
Consider the Chapman-Kolmogorov equation for s ≤ t ≤ t + Δt:

H(s, t + Δt) = H(s, t) H(t, t + Δt)

Subtracting H(s,t) from both sides and dividing by Δt:

[H(s, t + Δt) − H(s, t)] / Δt = H(s, t) [H(t, t + Δt) − I] / Δt

Taking the limit as Δt → 0:

∂H(s, t)/∂t = H(s, t) Q(t)

where the transition rate matrix Q(t) (the continuous-time equivalent of a one-step transition) is given by

Q(t) = lim_{Δt→0} [H(t, t + Δt) − I] / Δt

q_ij(t) is the rate at which the chain enters state j from state i at time t.
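This limit can be checked numerically in the homogeneous case, where H(t, t + Δt) = e^{QΔt} (a sketch using scipy; the 2-state rate matrix is an assumed example):

```python
import numpy as np
from scipy.linalg import expm

# Assumed 2x2 rate matrix: rows sum to 0, off-diagonal entries are rates q_ij.
Q = np.array([[-3.0,  3.0],
              [ 2.0, -2.0]])

for dt in (1e-1, 1e-2, 1e-3):
    approx = (expm(Q * dt) - np.eye(2)) / dt  # (H(dt) - I) / dt
    print(dt, np.max(np.abs(approx - Q)))     # error shrinks as dt -> 0
```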

14 [Concept map relating the Transition Matrix, State Holding Time, Transition Rate, and Transition Probability in the Time-Homogeneous Case.]

15 Homogeneous Case
In the homogeneous case, the transition functions do not depend on s and t, but only on the difference τ = t − s; thus

p_ij(s,t) = p_ij(t − s) = p_ij(τ) and H(s,t) = H(τ)

It follows that

dH(τ)/dτ = H(τ) Q

and the transition rate matrix Q is constant:

Q = lim_{Δt→0} [H(Δt) − I] / Δt

q_ij is the rate at which the chain enters state j from state i; ν_i = −q_ii is the rate at which the chain leaves state i.

16 Homogeneous Case
The transition rate matrix: q_ij is the rate at which the chain enters state j from state i; ν_i = −q_ii is the rate at which the chain leaves state i.
- Discrete Markov Chain: transitions are governed by the transition probabilities P_ij; the transition time is deterministic (one transition per slot).
- Continuous Markov Chain: the rates satisfy q_ij = ν_i · P_ij, where P_ij is the (embedded) transition probability, q_ij is the input rate from state i to state j, and ν_i is the output rate from state i toward all other neighbor states; the transition time is random.
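A short sketch of this decomposition, recovering the leaving rates ν_i and the embedded DTMC from an assumed rate matrix Q via q_ij = ν_i P_ij:

```python
import numpy as np

# Assumed 3-state rate matrix; each row sums to 0.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

nu = -np.diag(Q)          # nu_i = -q_ii, total rate out of state i
P = Q / nu[:, None]       # P_ij = q_ij / nu_i for i != j
np.fill_diagonal(P, 0.0)  # no self-transitions in the embedded jump chain
print(nu)                 # [3. 4. 4.]
print(P.sum(axis=1))      # each row of the embedded chain sums to 1
```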


18 Homogeneous Case
Thus, if P(τ) is the transition matrix after a time period τ, the rate equation dP(τ)/dτ = P(τ) Q = Q P(τ) gives

P(τ) = e^{Qτ}

In scalar form, it is possible to write

Forward equation: p'_ij(τ) = Σ_k p_ik(τ) q_kj
Backward equation: p'_ij(τ) = Σ_k q_ik p_kj(τ)

with the initial condition p_ij(0) = 1 if i = j and 0 otherwise (p_ij(0) is the instantaneous transition function from i to j).
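A quick numerical sanity check (with an assumed 2-state Q) that P(τ) = e^{Qτ} satisfies the semigroup (Chapman-Kolmogorov) property and the forward equation:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])  # assumed rate matrix

t1, t2 = 0.4, 0.9
# Semigroup / Chapman-Kolmogorov: P(t1 + t2) = P(t1) P(t2)
print(np.allclose(expm(Q * (t1 + t2)), expm(Q * t1) @ expm(Q * t2)))  # True

# Forward equation dP/dt = P(t) Q, checked with a finite difference
t, h = 0.7, 1e-6
dP = (expm(Q * (t + h)) - expm(Q * t)) / h
print(np.allclose(dP, expm(Q * t) @ Q, atol=1e-4))                    # True
```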

19 Two Minutes Break
You are free to discuss the previous slides with your classmates, refresh a bit, or ask questions. Next: State Holding Time

20 State Holding and Transition Time
In a CTMC, the process makes a transition from one state to another after it has spent an amount of time in the state it starts from. This amount of time is defined as the state holding time.
Theorem (State Holding Time of a CTMC): the state holding time T_i := inf {t : X_t ≠ i | X_0 = i} in a state i of a Continuous-Time Markov Chain
- satisfies the memoryless property;
- is exponentially distributed with parameter ν_i.
Theorem (Transition Time in a CTMC): the time T_ij := inf {t : X_t = j | X_0 = i} spent in state i before a transition to state j is exponentially distributed with parameter q_ij.

21 State Holding Time: Proofs
Suppose our continuous-time Markov Chain has just arrived in state i. Define the random variable T_i to be the length of time the process spends in state i before moving to a different state. We call T_i the holding time in state i. The Markov property implies that the distribution of how much longer you will be in a given state i is independent of how long you have already been there.
Proof (1) (by contradiction): Suppose it is time s, you are in state i, and

P[T_i > s + t | T_i > s] ≠ P[T_i > t]

i.e., the amount of time you have already been in state i is relevant in predicting how much longer you will be there. Then for any time r < s, whether or not you were in state i at time r is relevant in predicting whether you will be in state i or a different state j at some future time s + t. Thus

P[X(s + t) = j | X(s) = i, X(r) = x(r)] ≠ P[X(s + t) = j | X(s) = i]

which violates the Markov property.
Proof (2): The only continuous distribution satisfying the memoryless property is the exponential distribution; hence the result in (2).

22 Example: Computer System
Assume a computer system where jobs arrive according to a Poisson process with rate λ. Each job is processed using a First In First Out (FIFO) policy. The processing time of each job is exponential with rate μ. The computer has a buffer to store up to two jobs that wait for processing. Jobs that find the buffer full are lost.

23 Example: Computer System — Questions
- Draw the state transition diagram.
- Find the rate transition matrix Q.
- Find the state transition matrix P.

24 Example
[State transition diagram: states 0, 1, 2, 3 (the number of jobs in the system); arrivals move the chain up at rate λ, departures move it down at rate μ.]
The rate transition matrix is given by

Q = [ −λ       λ        0        0
       μ    −(λ+μ)     λ        0
       0       μ     −(λ+μ)    λ
       0       0        μ      −μ ]

The state transition matrix (of the embedded chain) is given by

P = [    0         1          0          0
      μ/(λ+μ)      0       λ/(λ+μ)      0
         0      μ/(λ+μ)       0      λ/(λ+μ)
         0         0          1          0  ]
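These matrices can also be built programmatically (a sketch; the numeric values lam = λ = 1 and mu = μ = 2 are assumed for illustration):

```python
import numpy as np

lam, mu = 1.0, 2.0  # assumed arrival and service rates

# Rate matrix Q for states 0..3 (number of jobs in the system)
Q = np.array([[-lam,         lam,          0.0,  0.0],
              [  mu, -(lam + mu),          lam,  0.0],
              [ 0.0,          mu,  -(lam + mu),  lam],
              [ 0.0,         0.0,           mu,  -mu]])

# Embedded DTMC: P_ij = q_ij / nu_i with nu_i = -q_ii
nu = -np.diag(Q)
P = Q / nu[:, None]
np.fill_diagonal(P, 0.0)
print(P)  # e.g. P[0,1] = 1, P[1,0] = mu/(lam+mu), P[1,2] = lam/(lam+mu), ...
```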

25 Transient State Probabilities

26 State Probabilities and Transient Analysis
Similar to the discrete-time case, we define the state probabilities

π_j(t) = P[X(t) = j]

and, in vector form, π(t) = [π_0(t), π_1(t), …], with initial probabilities π(0). Using our previous notation (for a homogeneous MC),

π(t) = π(0) P(t) = π(0) e^{Qt}

Obtaining a general solution is not easy! Differentiating with respect to t gives us more insight:

dπ(t)/dt = π(t) Q
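A sketch of the transient solution π(t) = π(0) e^{Qt} for the queue example above (assumed rates, starting from an empty system):

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 1.0, 2.0  # assumed rates, as in the example
Q = np.array([[-lam,         lam,          0.0,  0.0],
              [  mu, -(lam + mu),          lam,  0.0],
              [ 0.0,          mu,  -(lam + mu),  lam],
              [ 0.0,         0.0,           mu,  -mu]])

pi0 = np.array([1.0, 0.0, 0.0, 0.0])  # start with an empty system
for t in (0.1, 1.0, 10.0):
    pi_t = pi0 @ expm(Q * t)           # pi(t) = pi(0) exp(Qt)
    print(t, np.round(pi_t, 4))        # approaches the steady state as t grows
```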

27 "Probability Fluid" View
We view π_j(t) as the level of a "probability fluid" that is stored at each node j (0 = empty, 1 = full). The change in the probability fluid at node j is inflow minus outflow:

dπ_j(t)/dt = Σ_{i≠j} π_i(t) q_ij − π_j(t) Σ_{r≠j} q_jr
                 (inflow)              (outflow)

28 Steady State Probabilities

29 Steady State Analysis
Often we are interested in the "long-run" probabilistic behavior of the Markov chain, i.e.,

π_j = lim_{t→∞} π_j(t)

As in the discrete-time case, we need to address the following questions:
- Under what conditions do the limits exist?
- If they exist, do they form legitimate probabilities?
- How can we evaluate these limits?
These limits are referred to as steady state probabilities, equilibrium state probabilities, or stationary state probabilities.

30 Steady State Analysis
Theorem: in an irreducible continuous-time Markov Chain consisting of positive recurrent states, a unique stationary state probability vector π with π_j = lim_{t→∞} π_j(t) exists. These probabilities are independent of the initial state probability and can be obtained by solving

π Q = 0 and Σ_j π_j = 1

Using the "probability fluid" view, at steady state the change is zero: inflow equals outflow at every node j,

0 = Σ_{i≠j} π_i q_ij − π_j Σ_{r≠j} q_jr

31 Example
[State transition diagram as on slide 24: states 0, 1, 2, 3 with arrival rate λ and departure rate μ.]
For the previous example, with the above transition rates, what are the steady state probabilities? Solve

π Q = 0, Σ_j π_j = 1

32 Example
The solution is obtained from the balance equations λ π_j = μ π_{j+1} of this birth-death chain: with ρ = λ/μ,

π_j = ρ^j π_0, j = 0, 1, 2, 3, and π_0 = 1 / (1 + ρ + ρ² + ρ³)
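A numerical check of this closed form against π Q = 0 with Σ_j π_j = 1 (assumed rates as before):

```python
import numpy as np

lam, mu = 1.0, 2.0  # assumed rates
rho = lam / mu
Q = np.array([[-lam,         lam,          0.0,  0.0],
              [  mu, -(lam + mu),          lam,  0.0],
              [ 0.0,          mu,  -(lam + mu),  lam],
              [ 0.0,         0.0,           mu,  -mu]])

# Solve pi Q = 0 with sum(pi) = 1: replace one (redundant) balance
# equation by the normalization condition.
A = np.vstack([Q.T[:-1], np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

closed_form = rho ** np.arange(4) / np.sum(rho ** np.arange(4))
print(np.allclose(pi, closed_form))  # True
```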

33 Uniformization of Markov Chains

34 Uniformization of Markov Chains
In general, discrete-time models are easier to work with, and the computers needed to solve such models operate in discrete time. Thus, we need a way to turn a continuous-time Markov Chain into a discrete-time one.
Uniformization procedure:
- Recall that the total rate out of state i is ν_i = −q_ii.
- Pick a uniform rate γ such that γ ≥ ν_i for all states i.
- The difference γ − ν_i corresponds to a "fictitious" event that returns the MC back to state i (a self loop).

35 Uniformization of Markov Chains
Uniformization procedure (continued): let P^U_ij be the transition probability from state i to state j of the discrete-time uniformized Markov Chain. Then

P^U_ij = q_ij / γ for i ≠ j, and P^U_ii = 1 − ν_i / γ

i.e., in matrix form, P^U = I + Q/γ.
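A sketch of this uniformization step, P^U = I + Q/γ, applied to the same assumed example:

```python
import numpy as np

lam, mu = 1.0, 2.0  # assumed rates
Q = np.array([[-lam,         lam,          0.0,  0.0],
              [  mu, -(lam + mu),          lam,  0.0],
              [ 0.0,          mu,  -(lam + mu),  lam],
              [ 0.0,         0.0,           mu,  -mu]])

gamma = np.max(-np.diag(Q))  # any gamma >= nu_i works; take the largest rate
P_U = np.eye(4) + Q / gamma  # self-loop at i absorbs the slack gamma - nu_i
print(P_U)
print(P_U.sum(axis=1))       # rows sum to 1, a proper DTMC
```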

36 End of Chapter

