Discrete Time Markov Chains (A Brief Overview)

Discrete Time Markov Chain (DTMC)
Consider a system where transitions take place at discrete time instants, and where the state of the system at time k is denoted X_k. The Markov property states that the future depends only on the present:
P(X_{k+1} = j | X_k = i, X_{k-1}, …, X_0) = P(X_{k+1} = j | X_k = i) = p_ij
The p_ij's are the state transition probabilities. The system evolution is fully specified by its state transition probability matrix P, with [P]_ij = p_ij.
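
As a concrete illustration (a minimal sketch: the matrix values below are made up for this example, not taken from the slides), a DTMC can be represented as a NumPy array whose row i gives the distribution of the next state given current state i, and simulated step by step:

```python
import numpy as np

# Hypothetical 3-state transition matrix: P[i, j] is the
# probability of moving from state i to state j in one step.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

rng = np.random.default_rng(0)

def simulate(P, x0, n_steps):
    """Simulate a DTMC trajectory: the next state depends only on
    the current state (the Markov property)."""
    states = [x0]
    for _ in range(n_steps):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return states

print(simulate(P, x0=0, n_steps=10))
```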

State Transition Probability Matrix
P is a stochastic matrix: all entries are nonnegative and the entries in each row sum to 1. The n-step transition probability p_ij(n) is the probability of being in state j after n steps when starting in state i; together these form the n-step transition probability matrix. Limiting probabilities: the odds of being in a given state after a very large (infinite) number of transitions.

Chapman-Kolmogorov Equations
Track the state evolution based on the transition probabilities between all states:
p_ij(n) = Σ_k p_ik(m) p_kj(n-m), 0 ≤ m ≤ n
where p_ij(n) is the probability of being in state j after n transitions/steps when starting in state i. This provides a simple recursion to compute higher-order transition probabilities (C-K).

More on the C-K Equations
In matrix form, where P is the one-step transition probability matrix, the Chapman-Kolmogorov equations state
P^n = P^m P^(n-m), 0 ≤ m ≤ n
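
A quick numerical check of this identity (a sketch, reusing the hypothetical 3-state matrix from the earlier example):

```python
import numpy as np

# The same hypothetical 3-state matrix as before.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

# Chapman-Kolmogorov in matrix form: P^5 = P^2 P^3.
lhs = np.linalg.matrix_power(P, 5)
rhs = np.linalg.matrix_power(P, 2) @ np.linalg.matrix_power(P, 3)
print(np.allclose(lhs, rhs))  # True
```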

Limiting Distribution
From P we know that p_ij is the probability of being in state j after step 1, given that we start in state i. In general, let π_0 denote the initial state probability vector. The state probability vector π_1 after the first step is then given by π_1 = π_0 P, and in general, after n steps we have π_n = π_0 P^n. Does π_n converge to a limit as n → ∞? (the limiting distribution)
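
The iteration π_n = π_{n-1} P is easy to carry out numerically (a sketch, again with the hypothetical matrix; the initial distribution is arbitrary):

```python
import numpy as np

P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

pi = np.array([1.0, 0.0, 0.0])  # pi_0: start in state 0 with certainty
for n in range(50):
    pi = pi @ P                 # pi_{n+1} = pi_n P
print(pi)                       # the iterates settle to a fixed vector
```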

A Simple 3-State Example
[State transition diagram: three states with transition probabilities expressed in terms of p, e.g., 1-p, p(1-p), p².]
Note that the relation π_n = π_0 P^n implies that, conditioned on starting in state i (i.e., π_0 has a 1 in position i), π_n is simply the i-th row of P^n.

A Simple 3-State Example (cont'd)
As P is raised to higher powers, the rows of P^n become (approximately) identical: the limit appears independent of the starting state i.
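
The same behavior shows up with the hypothetical matrix used above: for large n, every row of P^n is (nearly) the same vector, so the starting state no longer matters.

```python
import numpy as np

P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

# Each row of P^100 approximates the limiting distribution,
# regardless of which state the chain started in.
print(np.linalg.matrix_power(P, 100))
```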

Stationary Distribution
A Markov chain with M states admits a stationary probability distribution π = [π_0, π_1, …, π_{M-1}] if
π P = π and Σ_{i=0}^{M-1} π_i = 1
In other words,
Σ_{i=0}^{M-1} π_i p_ij = π_j for all j, and Σ_{i=0}^{M-1} π_i = 1

Stationary vs. Limiting Distributions
Assume that the limiting distribution π = [π_0, π_1, …, π_{M-1}], where π_j = lim_{n→∞} p_ij(n) > 0, exists and is a probability distribution, i.e., Σ_{i=0}^{M-1} π_i = 1. Then π is also a stationary distribution, i.e., π = π P, and no other stationary distribution exists. When this holds, we can find the limiting distribution of a DTMC either by solving the stationary equations π = π P, or by raising the matrix P to some large power.

Summary: Computing DTMC Probabilities
Three approaches: (1) use the stationary equations π = π P together with Σ_i π_i = 1; (2) numerically compute successive values of P^n, stopping when all rows are approximately equal, since P^n converges to a matrix with identical rows, each equal to π; (3) guess a solution to the recurrence relation.
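
A standard numerical route for method (1) (a sketch, using the hypothetical matrix from earlier) is to transpose the balance equations, replace one of them with the normalization constraint, and solve the resulting linear system:

```python
import numpy as np

P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])
M = len(P)

# pi P = pi is equivalent to (P^T - I) pi^T = 0; the system has
# rank M-1, so overwrite one equation with the normalization sum(pi) = 1.
A = P.T - np.eye(M)
A[-1, :] = 1.0
b = np.zeros(M)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)                       # stationary distribution
print(np.allclose(pi @ P, pi))  # True: pi P = pi
```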

Back to Our Simple 3-State Example
[Same three-state transition diagram as before, with probabilities in terms of p.]

The Umbrella Example
[State transition diagram: states 0, 1, 2 with transition probabilities 1, 1-p, and p.]
For probability of rain p = 0.6, this gives π = [0.16667, 0.41667, 0.41667]. The probability P_wet that the professor gets wet is Prob[zero umbrellas and rain]. The two events are independent, so P_wet = π_0 · p = 0.16667 × 0.6 = 0.1. Average number of umbrellas at a location: E[U] = 1·π_1 + 2·π_2 = 1.25.
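
These numbers are easy to verify (a sketch assuming the standard two-umbrella formulation, where the state is the number of umbrellas at the professor's current location; the transition matrix below is my reconstruction, not taken verbatim from the slide):

```python
import numpy as np

p = 0.6  # probability of rain

# State i = umbrellas at the current location (0, 1, or 2).
# With 0 umbrellas here, both are at the other end, so the next state is 2.
# Otherwise an umbrella is carried over only if it rains (probability p).
P = np.array([
    [0.0,   0.0,   1.0],
    [0.0,   1 - p, p  ],
    [1 - p, p,     0.0],
])

A = P.T - np.eye(3)
A[-1, :] = 1.0
pi = np.linalg.solve(A, [0.0, 0.0, 1.0])
print(pi)                     # [0.16667, 0.41667, 0.41667]
print(pi[0] * p)              # P_wet = 0.1
print(1 * pi[1] + 2 * pi[2])  # E[U] = 1.25
```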

Infinite DTMCs
Handling chains with an infinite state space: we can no longer use matrix multiplication. But the result that the limiting distribution, if it exists, is the only stationary distribution still holds. So we can still use the stationary equations, provided they have "some structure" we can exploit.

A Simple (Birth-Death) Example
[State transition diagram: states 0, 1, 2, 3, 4, … with forward (birth) probability r, backward (death) probability s, and self-loop probabilities 1-r at state 0 and 1-r-s elsewhere.]
The (infinite) transition probability matrix has first row p_00 = 1-r, p_01 = r, and for every i ≥ 1, p_{i,i-1} = s, p_{i,i} = 1-r-s, p_{i,i+1} = r; the stationary equations have the corresponding form.

A Simple (Birth-Death) Example (cont'd)
The stationary equations can be rewritten as follows:
π_0 = π_0(1-r) + π_1 s ⟹ π_1 = (r/s) π_0
π_1 = π_0 r + π_1(1-r-s) + π_2 s ⟹ π_2 = (r/s) π_1 = (r/s)² π_0
π_2 = π_1 r + π_2(1-r-s) + π_3 s ⟹ π_3 = (r/s) π_2 = (r/s)³ π_0
We can then show by induction that π_i = (r/s)^i π_0 for all i ≥ 0. The normalization condition Σ_{i=0}^∞ π_i = 1 gives π_0 = 1 - r/s (the geometric series converges provided r < s), and hence π_i = (r/s)^i (1 - r/s).
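
A numerical sanity check (a sketch: the chain is truncated at a finite size so the finite-matrix machinery applies, and the values of r and s are arbitrary with r < s):

```python
import numpy as np

r, s = 0.3, 0.5  # birth and death probabilities, r < s
N = 200          # truncation level, large enough that the tail is negligible

# Build the truncated birth-death transition matrix.
P = np.zeros((N, N))
P[0, 0], P[0, 1] = 1 - r, r
for i in range(1, N - 1):
    P[i, i - 1], P[i, i], P[i, i + 1] = s, 1 - r - s, r
P[N - 1, N - 2], P[N - 1, N - 1] = s, 1 - s

A = P.T - np.eye(N)
A[-1, :] = 1.0
b = np.zeros(N)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

rho = r / s
print(pi[:4])                           # numerical stationary probabilities
print((1 - rho) * rho ** np.arange(4))  # match the closed form (r/s)^i (1 - r/s)
```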

A Simple (Birth-Death) Example (cont'd)
Defining ρ = r/s (a natural definition of "utilization," consistent with Little's Law), the stationary probabilities take the form π_0 = 1 - ρ and π_i = ρ^i (1 - ρ). From these, we readily obtain E[N] = Σ_i i·π_i = ρ/(1 - ρ).

On the Order of Arrivals & Departures
Assume that our system state denotes the number of jobs in the system. In every time slot a job can arrive with probability p, i.e., λ = p, and a job in service can depart with probability q, i.e., E[S] = 1/q. This is our previous birth-death example with r = p(1-q) and s = q(1-p). Applying Little's Law to the server alone: P(server is idle) = 1 - E[# jobs in server] = 1 - λ·E[S] = 1 - p/q. But our analysis gives π_0 = 1 - ρ = 1 - r/s = 1 - p(1-q)/[q(1-p)] ≠ 1 - p/q ???
The resolution: the system state is based on sampling the system at time slot boundaries, i.e., π_0 is the fraction of time the system is empty at the end/start of a slot, not the fraction of time the server is idle. The system can be in state 0 at both the start and the end of a slot while the server is busy during that slot (an arrival and a departure in the same slot). For the server to be idle, we need the system in state 0 at the beginning of the slot (probability π_0) AND no arrival in that slot (probability 1 - p). This implies:
P(server is idle) = π_0(1-p) = [1 - p(1-q)/(q(1-p))](1-p) = (1-p) - p(1-q)/q = [q - pq - p + pq]/q = 1 - p/q ✓
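
A small slotted simulation (a sketch; the parameter values are arbitrary, with p < q) confirms the distinction: the fraction of slot boundaries at which the system is empty matches π_0 = 1 - p(1-q)/[q(1-p)], while the fraction of slots during which the server is actually idle matches 1 - p/q:

```python
import numpy as np

p, q = 0.3, 0.6        # per-slot arrival and departure probabilities
rng = np.random.default_rng(1)
n_slots = 200_000

jobs = 0               # system state sampled at slot boundaries
empty_at_boundary = 0  # slots that begin with an empty system
idle_slots = 0         # slots in which the server does no work

for _ in range(n_slots):
    if jobs == 0:
        empty_at_boundary += 1
    arrival = rng.random() < p
    # A departure requires a job to serve: either one already present
    # or the one that just arrived (arrival and departure in one slot).
    departure = (jobs > 0 or arrival) and rng.random() < q
    if jobs == 0 and not arrival:
        idle_slots += 1  # idle = empty at slot start AND no arrival
    jobs += int(arrival) - int(departure)

r, s = p * (1 - q), q * (1 - p)
print(empty_at_boundary / n_slots, 1 - r / s)  # ~ pi_0
print(idle_slots / n_slots, 1 - p / q)         # ~ P(server is idle)
```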