CS433 Modeling and Simulation Lecture 11 Continuous Markov Chains Dr. Anis Koubâa 01 May 2009 Al-Imam Mohammad Ibn Saud University

Goals for Today
- Understand the Markov property in the continuous case
- Understand the difference between continuous-time and discrete-time Markov chains
- Learn how to use continuous-time Markov chains for modelling stochastic processes

3 "Discrete Time" versus "Continuous Time"
Discrete time: events occur only at known, evenly spaced points in time (fixed steps, each of length 1).
Continuous time: events occur at arbitrary points in time s < u < v < t, so the intervals between events are variable: τ1 = u - s, τ2 = v - u, τ3 = t - v.

4 Definition (Wikipedia): Continuous-Time Markov Chains
In probability theory, a continuous-time Markov chain (CTMC) is a stochastic process { X(t) : t ≥ 0 } that satisfies the Markov property and takes values in a set called the state space.
The Markov property states that for any times s > t > 0, the conditional probability distribution of the process at time s, given the whole history of the process up to and including time t, depends only on the state of the process at time t.
In effect, the state of the process at time s is conditionally independent of the history of the process before time t, given the state of the process at time t.

5 Definition 1: Continuous-Time Markov Chains
A stochastic process {X(t), t ≥ 0} is a continuous-time Markov chain (CTMC) if, for all 0 ≤ s ≤ t and non-negative integers i, j, x(u) with 0 ≤ u < s,

  Pr[X(t) = j | X(s) = i, X(u) = x(u) for 0 ≤ u < s] = Pr[X(t) = j | X(s) = i]

Here X(u) = x(u) is the past, X(s) = i is the present, and X(t) = j is the future; only the present matters.
In addition, if this probability depends on s and t only through the duration t - s, then the CTMC has stationary transition probabilities:

  Pr[X(t) = j | X(s) = i] = Pr[X(t - s) = j | X(0) = i] = P_ij(t - s)

6 Differences between Continuous-Time and Discrete-Time Markov Chains
- Time: a DTMC evolves at discrete steps t_k, k ∈ ℕ; a CTMC evolves over real times s, t ∈ ℝ+.
- Transient transition probability: a DTMC uses P_ij(k) for the time interval [k, k+1]; a CTMC uses P_ij(s, t) for the time interval [s, t].
- Stationary transition probability: a DTMC uses P_ij(1) = P_ij over a fixed time unit equal to 1; a CTMC uses P_ij(τ), which depends on the duration τ = t - s.
- Transition probability to the same state: in a DTMC, P_ii can be different from 0; in a CTMC, P_ii = 0 for the embedded jump chain, i.e. every recorded transition leads to a different state.

7 Definition 2: Continuous-Time Markov Chains
A stochastic process {X(t), t ≥ 0} is a continuous-time Markov chain (CTMC) if
- the amount of time spent in state i before making a transition to a different state is exponentially distributed with rate parameter ν_i,
- when the process leaves state i, it enters state j with probability p_ij, where p_ii = 0 and Σ_j p_ij = 1, and
- all transition choices and holding times are independent (in particular, the transition probability out of a state is independent of the time spent in the state).
Summary: the CTMC moves from state to state according to a DTMC (the embedded jump chain), and the time spent in each state is exponentially distributed.
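A minimal sketch of this construction in Python, assuming a finite state space indexed 0..n-1. The names `simulate_ctmc`, `jump_probs`, and `rates` are hypothetical, introduced here for illustration only:

```python
import random

def simulate_ctmc(jump_probs, rates, start, t_end):
    """Simulate one CTMC sample path up to time t_end.

    jump_probs[i][j] : embedded DTMC jump probability (p_ii = 0)
    rates[i]         : rate v_i of the exponential holding time in state i
    Returns the list of (jump_time, state) pairs, starting at (0.0, start).
    """
    t, state = 0.0, start
    path = [(t, state)]
    while True:
        t += random.expovariate(rates[state])   # exponential holding time in `state`
        if t >= t_end:
            return path
        r, acc = random.random(), 0.0           # sample the next state from the jump chain
        for j, p in enumerate(jump_probs[state]):
            acc += p
            if r < acc:
                state = j
                break
        path.append((t, state))

# Example: a two-state chain alternating 0 -> 1 -> 0 with holding rates 1 and 2
print(simulate_ctmc([[0.0, 1.0], [1.0, 0.0]], [1.0, 2.0], start=0, t_end=5.0))
```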

8 Differences between DISCRETE and CONTINUOUS
Summary: the CTMC process moves from state to state according to a DTMC, and the time spent in each state is exponentially distributed, whereas the DTMC process changes state only at fixed unit time steps.

Five Minutes Break
You are free to discuss the previous slides with your classmates, to take a short rest, or to ask questions.

10 Chapman-Kolmogorov: Transition Function
Define the transition function (the analogue of the transition probability in a DTMC):

  p_ij(s, t) = Pr[X(t) = j | X(s) = i],   s ≤ t

Using the Markov (memoryless) property, the Chapman-Kolmogorov equation holds for any intermediate time s ≤ u ≤ t:

  p_ij(s, t) = Σ_k p_ik(s, u) p_kj(u, t)

Outline:
- Transition Matrix
- State Holding Time
- Transition Rate
- Transition Probability
- Time-Homogeneous Case

12 Homogeneous Case
In the homogeneous case, the transition rates are collected in the transition rate matrix Q:
- q_ij (for i ≠ j) is the transition rate at which the chain enters state j from state i;
- Λ_i = -q_ii is the transition rate at which the chain leaves state i.

Discrete Markov chain: the chain moves from state i to state j with transition probability P_ij, and the transition time is deterministic (one transition per slot).
Continuous Markov chain: the chain moves from state i to state j at rate q_ij = Λ_i · P_ij, where P_ij is the jump probability of the embedded chain and Λ_i is the total output rate from state i toward all its neighboring states; the transition time is random.


14 Transition Probability Matrix in the Homogeneous Case
In the homogeneous case, p_ij(s, t) depends only on the elapsed time τ = t - s, so we write p_ij(τ) and collect these into P(τ), the transition matrix AFTER a time period τ. The derivative at τ = 0,

  q_ij = dp_ij(τ)/dτ |_{τ=0}   (i ≠ j),

is the instantaneous transition rate from i to j.
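A small numerical check of these relations, assuming a hypothetical three-state rate matrix `Q`: in the homogeneous case P(τ) = e^{Qτ}, the Chapman-Kolmogorov (semigroup) property P(τ1 + τ2) = P(τ1)P(τ2) holds, and (P(h) - I)/h recovers Q for small h:

```python
import numpy as np
from scipy.linalg import expm

# hypothetical three-state rate matrix (each row sums to 0)
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

P = lambda tau: expm(Q * tau)                        # P(tau) = exp(Q tau)

# Chapman-Kolmogorov / semigroup property: P(t1 + t2) = P(t1) P(t2)
print(np.allclose(P(0.3 + 0.7), P(0.3) @ P(0.7)))    # True

# instantaneous rates: q_ij = dp_ij(tau)/dtau at tau = 0
h = 1e-6
print(np.round((P(h) - np.eye(3)) / h, 3))           # approximately Q
```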

Two Minutes Break
You are free to discuss the previous slides with your classmates, to take a short rest, or to ask questions.
Next: State Holding Time

16 State Holding and Transition Time
In a CTMC, the process makes a transition from one state to another after it has spent an amount of time in the state it starts from. This amount of time is defined as the state holding time.

Theorem (State Holding Time of a CTMC): The state holding time T_i := inf {t : X(t) ≠ i | X(0) = i} in a state i of a continuous-time Markov chain
- satisfies the memoryless property, and
- is exponentially distributed with parameter Λ_i.

Theorem (Transition Time in a CTMC): The time T_ij := inf {t : X(t) = j | X(0) = i} spent in state i before a transition to state j is exponentially distributed with parameter q_ij.

17 State Holding Time: Proofs
Suppose our continuous-time Markov chain has just arrived in state i. Define the random variable T_i to be the length of time the process spends in state i before moving to a different state. We call T_i the holding time in state i. The Markov property implies that the distribution of how much longer you will be in a given state i is independent of how long you have already been there.

Proof (1) (by contradiction): Suppose it is time s, you are in state i, and

  Pr[T_i > s + t | T_i > s] ≠ Pr[T_i > t],

i.e., the amount of time you have already been in state i is relevant in predicting how much longer you will be there. Then for any time r < s, whether or not you were in state i at time r is relevant in predicting whether you will be in state i or a different state j at some future time s + t. Thus

  Pr[X(s + t) = j | X(s) = i, X(r) = i] ≠ Pr[X(s + t) = j | X(s) = i],

which violates the Markov property.

Proof (2): The only continuous distribution satisfying the memoryless property is the exponential distribution; hence T_i is exponentially distributed, which gives the result in (2).

Example: Computer System Assume a computer system where jobs arrive according to a Poisson process with rate λ. Each job is processed using a First In First Out (FIFO) policy. The processing time of each job is exponential with rate μ. The computer has a buffer to store up to two jobs that wait for processing. Jobs that find the buffer full are lost.

Example: Computer System (Questions)
1. Draw the state transition diagram.
2. Find the rate transition matrix Q.
3. Find the state transition matrix P.

20 Example
Let the state be the number of jobs in the system (0, 1, 2, or 3: one in service plus up to two in the buffer). The rate transition matrix is given by

  Q = [ -λ       λ        0        0
         μ    -(λ+μ)      λ        0
         0       μ     -(λ+μ)      λ
         0       0        μ       -μ ]

The state transition matrix of the embedded jump chain (P_ij = q_ij / Λ_i) is given by

  P = [    0         1         0         0
        μ/(λ+μ)      0      λ/(λ+μ)      0
           0      μ/(λ+μ)      0      λ/(λ+μ)
           0         0         1         0   ]
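A sketch of these matrices in Python; λ = 1 and μ = 2 are placeholder values chosen only for illustration:

```python
import numpy as np

lam, mu = 1.0, 2.0   # placeholder arrival and service rates

# Rate transition matrix Q; state = number of jobs in the system (0..3)
Q = np.array([[-lam,        lam,        0.0,  0.0],
              [  mu, -(lam + mu),       lam,  0.0],
              [ 0.0,         mu, -(lam + mu), lam],
              [ 0.0,        0.0,         mu,  -mu]])

# Embedded jump-chain matrix: P_ij = q_ij / Lambda_i for i != j, P_ii = 0
Lambda = -np.diag(Q)                 # output rates Lambda_i = -q_ii
P = Q / Lambda[:, None]              # divide each row by its Lambda_i
np.fill_diagonal(P, 0.0)             # no self-transitions in the jump chain

print(P)                             # each row sums to 1
```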

Transient State Probabilities

22 State Probabilities and Transient Analysis
Similar to the discrete-time case, we define the state probabilities

  π_j(t) = Pr[X(t) = j]

or, in vector form, π(t) = [π_0(t), π_1(t), ...], with initial probabilities π(0). Using our previous notation (for a homogeneous MC),

  π(t) = π(0) P(t),   where P(t) = e^{Qt}.

Obtaining a general closed-form solution is not easy!
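A numerical sketch of the transient solution π(t) = π(0) e^{Qt}, reusing the Q of the computer-system example (λ = 1 and μ = 2 remain placeholders) and scipy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 1.0, 2.0
Q = np.array([[-lam, lam, 0.0, 0.0],
              [mu, -(lam + mu), lam, 0.0],
              [0.0, mu, -(lam + mu), lam],
              [0.0, 0.0, mu, -mu]])

pi0 = np.array([1.0, 0.0, 0.0, 0.0])       # start with an empty system
for t in (0.1, 1.0, 10.0):
    pi_t = pi0 @ expm(Q * t)               # pi(t) = pi(0) exp(Q t)
    print(f"t={t:5.1f}  pi(t)={np.round(pi_t, 4)}")
```

For large t the printed vector stops changing, which previews the steady-state analysis below.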

Steady State Probabilities

24 Steady State Analysis
Often we are interested in the "long-run" probabilistic behavior of the Markov chain, i.e., in the limits

  π_j = lim_{t→∞} Pr[X(t) = j]

As with the discrete-time case, we need to address the following questions:
- Under what conditions do the limits exist?
- If they exist, do they form legitimate probabilities?
- How can we evaluate these limits?
These limits are referred to as steady-state probabilities, equilibrium state probabilities, or stationary state probabilities.

25 Steady State Analysis
Theorem: In an irreducible continuous-time Markov chain consisting of positive recurrent states, a unique stationary state probability vector π exists, with

  π_j = lim_{t→∞} π_j(t)

These probabilities are independent of the initial state probabilities and can be obtained by solving

  π Q = 0   and   Σ_j π_j = 1

26 Example
For the previous example, with the rate transition matrix Q given above, what are the steady state probabilities? Solve

  π Q = 0   and   π_0 + π_1 + π_2 + π_3 = 1

27 Example
Writing ρ = λ/μ, the balance equations of this birth-death chain give π_j = ρ^j π_0, and the normalization condition yields the solution

  π_0 = 1 / (1 + ρ + ρ² + ρ³),   π_j = ρ^j π_0 for j = 1, 2, 3.
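A numerical check of this solution: solve π Q = 0 with the normalization Σ_j π_j = 1 by replacing one balance equation with the normalization row, then compare against the closed form π_j = ρ^j π_0 (λ = 1 and μ = 2 are still placeholders):

```python
import numpy as np

lam, mu = 1.0, 2.0
rho = lam / mu
Q = np.array([[-lam, lam, 0.0, 0.0],
              [mu, -(lam + mu), lam, 0.0],
              [0.0, mu, -(lam + mu), lam],
              [0.0, 0.0, mu, -mu]])

# pi Q = 0  <=>  Q^T pi^T = 0; replace one equation with sum(pi) = 1
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(4)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

pi_closed = rho ** np.arange(4)
pi_closed /= pi_closed.sum()          # pi_j = rho^j / (1 + rho + rho^2 + rho^3)
print(np.allclose(pi, pi_closed))     # True
```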

Uniformization of Markov Chains

29 Uniformization of Markov Chains
In general, discrete-time models are easier to work with, and the computers needed to solve such models operate in discrete time. Thus, we need a way to turn a continuous-time Markov chain into a discrete-time one.
Uniformization procedure:
- Recall that the total rate out of state i is -q_ii = Λ(i).
- Pick a uniform rate γ such that γ ≥ Λ(i) for all states i.
- The leftover rate γ - Λ(i) corresponds to a "fictitious" event that returns the MC back to state i (a self-loop).

30 Uniformization of Markov Chains
Uniformization procedure (continued): Let P^U_ij be the transition probability from state i to state j for the discrete-time uniformized Markov chain. Then

  P^U_ij = q_ij / γ   for i ≠ j,
  P^U_ii = 1 - Λ(i)/γ,

or, in matrix form, P^U = I + Q/γ. In the transition diagram, the rate arrows q_ij, q_ik out of state i become probability arrows q_ij/γ, q_ik/γ, plus a self-loop with probability 1 - Λ(i)/γ.
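A sketch of the uniformization step in Python, reusing the same placeholder Q; the single expression P^U = I + Q/γ implements both formulas above:

```python
import numpy as np

lam, mu = 1.0, 2.0
Q = np.array([[-lam, lam, 0.0, 0.0],
              [mu, -(lam + mu), lam, 0.0],
              [0.0, mu, -(lam + mu), lam],
              [0.0, 0.0, mu, -mu]])

gamma = max(-np.diag(Q))              # uniform rate: gamma >= Lambda(i) for all i
PU = np.eye(len(Q)) + Q / gamma       # P^U_ij = q_ij/gamma, P^U_ii = 1 - Lambda(i)/gamma

print(PU)                             # rows sum to 1; diagonal entries are the
                                      # probabilities of the fictitious self-loops
```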

End of Chapter