Much More About Markov Chains


Much More About Markov Chains

- Calculating $P^n$: how do we raise a matrix to the $n$th power?
- Ergodicity in Markov chains: when does a chain have equilibrium probabilities?
- Balance equations: calculating equilibrium probabilities without the fuss.
- The leaky bucket queue: finally, an example which is to do with networks.

For more information: Norris, Markov Chains (Chapter 1); Bertsekas, Appendix A and Section 6.3.

How to calculate $P^n$

If $P$ is diagonalisable (say $3 \times 3$) then we can find some invertible matrix $U$ such that

$$P = U \,\mathrm{diag}(\lambda_1, \lambda_2, \lambda_3)\, U^{-1},$$

where the $\lambda_i$ are the eigenvalues. Then $P^n = U \,\mathrm{diag}(\lambda_1^n, \lambda_2^n, \lambda_3^n)\, U^{-1}$, and therefore

$$p_{ij}(n) = A\lambda_1^n + B\lambda_2^n + C\lambda_3^n$$

for constants $A$, $B$, $C$, assuming the eigenvalues are distinct.
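
A minimal numpy sketch of this diagonalisation trick; the $3 \times 3$ matrix `P` below is a made-up stochastic matrix for illustration, not one from the lecture:

```python
import numpy as np

# A made-up 3x3 stochastic matrix (not from the lecture).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

def matrix_power_eig(P, n):
    """Return P^n computed as U diag(lambda_i^n) U^{-1}."""
    lam, U = np.linalg.eig(P)              # columns of U are eigenvectors
    return U @ np.diag(lam**n) @ np.linalg.inv(U)

# Agrees with repeated multiplication, up to floating-point noise.
assert np.allclose(matrix_power_eig(P, 10).real, np.linalg.matrix_power(P, 10))
```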

General Procedure

For an $M$-state chain:

- Compute the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_M$.
- If the eigenvalues are distinct then $p_{ij}(n)$ has the general form $p_{ij}(n) = a_1\lambda_1^n + a_2\lambda_2^n + \cdots + a_M\lambda_M^n$.
- If an eigenvalue $\lambda$ is repeated once then the general form instead includes a term $(an + b)\lambda^n$.
- As roots of a polynomial with real coefficients, complex eigenvalues come in conjugate pairs and can be rewritten as sine and cosine terms.
- The coefficients of the general form can be found by calculating $p_{ij}(n)$ by hand for $n = 0, \ldots, M-1$ and solving the resulting simultaneous equations.

Example of $P^n$ (where the states are numbered 1, 2 and 3): find $p_{11}(n)$.

The eigenvalues are $1$, $i/2$ and $-i/2$. Therefore $p_{11}(n)$ has the form

$$p_{11}(n) = \alpha + \left(\tfrac{1}{2}\right)^n \left(\beta \cos\tfrac{n\pi}{2} + \gamma \sin\tfrac{n\pi}{2}\right),$$

where the substitution $(\pm i/2)^n = (1/2)^n \left(\cos\tfrac{n\pi}{2} \pm i \sin\tfrac{n\pi}{2}\right)$ can be made since $p_{11}(n)$ must be real. We can calculate that $p_{11}(0) = 1$, $p_{11}(1) = 0$ and $p_{11}(2) = 0$.

Example of $P^n$ (2)

Evaluating the general form at $n = 0, 1, 2$ gives three simultaneous equations in $\alpha$, $\beta$ and $\gamma$:

$$\alpha + \beta = 1, \qquad \alpha + \tfrac{1}{2}\gamma = 0, \qquad \alpha - \tfrac{1}{4}\beta = 0.$$

Solving, we get $\alpha = 1/5$, $\beta = 4/5$ and $\gamma = -2/5$, so

$$p_{11}(n) = \tfrac{1}{5} + \left(\tfrac{1}{2}\right)^n \left(\tfrac{4}{5}\cos\tfrac{n\pi}{2} - \tfrac{2}{5}\sin\tfrac{n\pi}{2}\right).$$
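
The transcript does not show the transition matrix itself, so the sketch below assumes the three-state example from Norris (Chapter 1), whose eigenvalues are $1, \pm i/2$ and which matches the quoted values $p_{11}(0)=1$, $p_{11}(1)=0$, $p_{11}(2)=0$. It solves the three simultaneous equations numerically and checks the closed form against direct matrix powers:

```python
import numpy as np

# Assumed matrix (Norris, Ch. 1): eigenvalues are 1, i/2, -i/2.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])

# Three equations in (alpha, beta, gamma) from n = 0, 1, 2:
# p11(n) = alpha + (1/2)^n (beta*cos(n*pi/2) + gamma*sin(n*pi/2)).
ns = np.arange(3)
A = np.column_stack([np.ones(3),
                     0.5**ns * np.cos(ns * np.pi / 2),
                     0.5**ns * np.sin(ns * np.pi / 2)])
b = np.array([np.linalg.matrix_power(P, n)[0, 0] for n in ns])
alpha, beta, gamma = np.linalg.solve(A, b)   # -> 0.2, 0.8, -0.4

# The closed form then agrees with P^n for all n, not just n = 0, 1, 2.
for n in range(10):
    closed = alpha + 0.5**n * (beta*np.cos(n*np.pi/2) + gamma*np.sin(n*np.pi/2))
    assert np.isclose(closed, np.linalg.matrix_power(P, n)[0, 0])
```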

Equilibrium Probabilities

Recall the vector $\pi$ of equilibrium probabilities. If $\pi_n$ is the distribution vector after $n$ steps, $\pi$ is given by

$$\pi = \lim_{n \to \infty} \pi_n = \lim_{n \to \infty} \pi_0 P^n.$$

This is also the distribution which solves $\pi = \pi P$. When does this limit exist? When is there a unique solution to the equation? This is when the chain is ergodic:

- Irreducible
- Recurrent non-null (also called positive recurrent)
- Aperiodic
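
A quick numerical illustration of the limit, reusing the made-up chain from the earlier sketch: for an ergodic chain, $\pi_0 P^n$ approaches the same vector whatever the starting distribution.

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],     # same made-up chain as before
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
Pn = np.linalg.matrix_power(P, 50)
print(np.array([1.0, 0.0, 0.0]) @ Pn)   # started in state 1
print(np.array([0.0, 0.0, 1.0]) @ Pn)   # started in state 3: same limit
```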

Irreducible

A chain is irreducible if any state can be reached from any other. More formally, for all $i$ and $j$ there is some $n$ with $p_{ij}(n) > 0$.

[Figure: a two-state chain on states 1 and 2, with $p_{12} = \alpha$, $p_{21} = \beta$ and self-loops $1-\alpha$, $1-\beta$.] For what values of $\alpha$ and $\beta$ is this chain irreducible?
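
A small sketch of the reachability test, using breadth-first search over the positive entries of $P$. For the two-state chain above it confirms the answer to the slide's question: irreducible precisely when $\alpha > 0$ and $\beta > 0$.

```python
from collections import deque

def is_irreducible(P):
    """True if every state can reach every other (P given as a list of rows)."""
    n = len(P)
    for start in range(n):
        seen, queue = {start}, deque([start])
        while queue:                      # BFS over edges with p_ij > 0
            i = queue.popleft()
            for j in range(n):
                if P[i][j] > 0 and j not in seen:
                    seen.add(j)
                    queue.append(j)
        if len(seen) < n:
            return False
    return True

alpha, beta = 0.3, 0.7
print(is_irreducible([[1 - alpha, alpha], [beta, 1 - beta]]))  # True
print(is_irreducible([[1.0, 0.0], [0.7, 0.3]]))                # alpha = 0: False
```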

Aperiodic chains

A state $i$ is periodic if it is only returned to at intervals greater than 1. Formally, it is periodic if there exists an integer $k > 1$ such that, for every $n$ with $p_{ii}(n) > 0$, we can find an integer $j$ with $n = jk$. Equivalently, a state is aperiodic if there is a sufficiently large $n$ such that for all $m > n$: $p_{ii}(m) > 0$.
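
One standard equivalent of this definition takes the gcd of the return times; a gcd of 1 means aperiodic. The sketch below uses that formulation, with a pragmatic cutoff `max_n` that is an assumption of the sketch rather than part of the definition.

```python
from math import gcd
import numpy as np

def period(P, i, max_n=50):
    """gcd of the times n <= max_n with p_ii(n) > 0; 1 means state i is aperiodic."""
    d, Pn = 0, np.eye(len(P))
    for n in range(1, max_n + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:     # a return at time n is possible
            d = gcd(d, n)
    return d

# A deterministic 2-cycle returns to state 0 only at even times: period 2.
print(period(np.array([[0.0, 1.0], [1.0, 0.0]]), 0))  # 2
```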

A useful aperiodicity lemma

If $P$ is irreducible and has one aperiodic state $i$ then all states are aperiodic.

Proof: By irreducibility, for any states $j$ and $k$ there exist $r, s \geq 0$ with $p_{ji}(r), p_{ik}(s) > 0$. Therefore there is an $n$ such that for all $m > n$:

$$p_{jk}(r + m + s) \;\geq\; p_{ji}(r)\, p_{ii}(m)\, p_{ik}(s) \;>\; 0.$$

And therefore all the states are aperiodic (consider $j = k$ in the above equation).

Return (Recurrence) Time

If a chain is in state $i$, when will it next return to state $i$? This is known as the "return time". First we must define the probability that the first return to state $i$ is after $n$ steps: $f_i(n)$. The probability that we ever return is

$$f_i = \sum_{n=1}^{\infty} f_i(n).$$

A state where $f_i = 1$ is recurrent; one where $f_i < 1$ is called transient. The expectation of the return time is the "mean recurrence time" or "mean return time":

$$M_i = \sum_{n=1}^{\infty} n\, f_i(n).$$

If $M_i = \infty$ the state is recurrent null; if $M_i < \infty$ it is recurrent non-null.

Return (Recurrence) Time (2)

A finite irreducible chain is always recurrent non-null. In an irreducible, aperiodic Markov chain the limiting probabilities $\pi_j = \lim_{n\to\infty} p_{ij}(n)$ always exist and are independent of the starting distribution. Either:

- All states are transient or recurrent null, in which case $\pi_j = 0$ for all states and no stationary distribution exists; or
- All states are recurrent non-null, and a unique stationary distribution exists with $\pi_j = 1/M_j$.

Ergodicity (summary)

A chain which is irreducible, aperiodic and recurrent non-null is ergodic. If a chain is ergodic, then there is a unique invariant distribution, which is equivalent to the limit

$$\pi_j = \lim_{n \to \infty} p_{ij}(n).$$

In Markov chain theory, the terms invariant, equilibrium and stationary are often used interchangeably.

Invariant Density in Periodic Chains

It is worth noting that an irreducible, recurrent non-null chain which is periodic has a solution to the invariant density equation, but the limit distribution does not exist. Consider the two-state chain which flips deterministically between states 1 and 2 ($p_{12} = p_{21} = 1$). Here $\pi = (\tfrac{1}{2}, \tfrac{1}{2})$ solves $\pi = \pi P$. However, it should be clear that $\lim_{n\to\infty} p_{ij}(n)$ does not exist in general, though it may for specific starting distributions.
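
A two-line check of both halves of this claim for the flip chain:

```python
import numpy as np

P = np.array([[0.0, 1.0], [1.0, 0.0]])   # the deterministic flip chain
pi = np.array([0.5, 0.5])
print(pi @ P)                            # [0.5 0.5]: solves pi = pi P
print(np.linalg.matrix_power(P, 6))      # identity matrix
print(np.linalg.matrix_power(P, 7))      # P again -- P^n has no limit
# Started *from* pi itself the distribution never changes: one specific
# starting distribution for which the limit does exist.
```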

Balance Equations

Sometimes it is not practical to calculate the equilibrium probabilities using the limit. If a distribution is invariant then, at every iteration, the inputs to a state must add up to its starting probability. The inputs to a state $i$ are the probabilities $\pi_j$ of each state $j$ which leads into it, multiplied by the transition probability $p_{ji}$.

Balance Equations (2)

More formally, if $\pi_i$ is the probability of state $i$:

$$\pi_i = \sum_j \pi_j\, p_{ji}.$$

And to ensure it is a distribution:

$$\sum_i \pi_i = 1.$$

For an $n$-state chain this gives us $n + 1$ equations for $n$ unknowns (one of the balance equations is always redundant).
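
Numerically this is just a linear solve. A sketch: write the balance equations as $(P^T - I)\pi = 0$, drop one redundant row in favour of the normalisation condition, and solve the resulting square system.

```python
import numpy as np

def equilibrium(P):
    """Solve the balance equations pi = pi P together with sum(pi) = 1."""
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0            # replace the redundant last equation by sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.5, 0.3, 0.2],   # the same made-up chain as before
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
pi = equilibrium(P)
assert np.allclose(pi @ P, pi) and np.isclose(pi.sum(), 1.0)
```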

Queuing Analysis of the Leaky Bucket

A "leaky bucket" is a mechanism for managing buffers to smooth the downstream flow. What is described here is sometimes called a "token bucket". A queue holds a stock of "permits", which arrive at a rate $r$ (one every $1/r$ seconds); up to $W$ permits may be held. A packet cannot leave the queue if there is no permit stored; each departing packet consumes one permit. The idea is that the scheme limits the downstream flow but can deal with bursts of traffic.

Modelling the Leaky Bucket

Let us assume that the packet arrival process is a Poisson process with rate $\lambda$. Consider how many packets arrive in $1/r$ seconds. The probability $a_k$ that $k$ packets arrive is

$$a_k = e^{-\lambda/r}\, \frac{(\lambda/r)^k}{k!}.$$

[Diagram: a queue of permits (one arriving every $1/r$ seconds) and a queue of packets (Poisson arrivals) feed the exit queue; packets leave the buffer paired with a permit.]

A Markov Model

Model this as a Markov chain which changes state every $1/r$ seconds. States $0 \leq i \leq W$ represent no packets waiting and $W - i$ permits available. States $W + i$ (where $i \geq 1$) represent 0 permits and $i$ packets waiting. From state $i$, if $k$ packets arrive the next state is $\max(0, i + k - 1)$: one permit is added per step (unless the pool is already full) and each arriving packet consumes one.

[Transition diagram: states $0, 1, 2, \ldots, W, W+1, \ldots$; each state $i \geq 1$ moves left with probability $a_0$, self-loops with $a_1$, and moves $k - 1$ steps right with $a_k$; state 0 self-loops with probability $a_0 + a_1$.]
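
A sketch of this transition matrix, truncated at an assumed cutoff `N` since the real chain is infinite (the tail mass is folded into the last state). It can be fed straight into the `equilibrium()` solver from the balance-equations sketch.

```python
import numpy as np
from math import exp, factorial

def leaky_bucket_P(lam, r, N=200):
    """Truncated transition matrix: from state i, k arrivals lead to
    max(0, i + k - 1). lam is the packet rate, r the permit rate."""
    a = [exp(-lam/r) * (lam/r)**k / factorial(k) for k in range(N + 1)]
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for k, ak in enumerate(a):
            j = max(0, i + k - 1)
            if j <= N:
                P[i, j] += ak
    P[:, N] += 1.0 - P.sum(axis=1)    # fold the truncated tail into state N
    return P

assert np.allclose(leaky_bucket_P(0.5, 1.0).sum(axis=1), 1.0)  # rows sum to 1
```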

Solving the Markov Model

By solving the balance equations, starting at state 0, we get

$$\pi_0 = (a_0 + a_1)\,\pi_0 + a_0\,\pi_1 \quad\Longrightarrow\quad \pi_1 = \pi_0\, \frac{1 - a_0 - a_1}{a_0}.$$

The balance equation at state 1 then gives $\pi_2$ in terms of $\pi_1$ and $\pi_0$; similarly, we can get an expression for $\pi_3$ in terms of $\pi_2$, $\pi_1$ and $\pi_0$. And so on...

Solving the Markov Model (2)

Normally we would solve this using the remaining balance equation $\sum_i \pi_i = 1$, but that is difficult analytically in this case. Instead we note that a permit is generated at every step except when we are in state 0 and no packets arrive ($W$ permits held, none used). This means permits are generated at a rate $(1 - \pi_0 a_0)\, r$. This must be equal to $\lambda$, since each packet gets a permit (assume none are dropped while waiting). Hence $\pi_0 = (1 - \lambda/r)/a_0$.

And Finally

The average delay for a packet to get a permit is given by

$$T = \frac{1}{r} \sum_{j > W} (j - W)\, \pi_j,$$

where $j - W$ is the number of iterations taken to get out of the queue from state $j$, $\pi_j$ is the amount of time spent in the given state, $1/r$ is the time taken for each iteration of the chain, and the sum runs over those states with a queue. Of course this is not a closed-form expression. To complete this analysis, look at Bertsekas p. 515.
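
An end-to-end numerical sketch under assumed example parameters (`lam`, `r`, `W` are illustrative values; `N` truncates the infinite chain): $\pi_0$ comes from the permit-rate argument, the remaining $\pi_j$ from the balance-equation recursion, and then $T$ is evaluated directly.

```python
import numpy as np
from math import exp, factorial

lam, r, W, N = 0.5, 1.0, 5, 200
a = [exp(-lam/r) * (lam/r)**k / factorial(k) for k in range(N + 1)]

pi = np.zeros(N + 1)
pi[0] = (1 - lam/r) / a[0]                 # from (1 - pi_0 a_0) r = lam
pi[1] = pi[0] * (1 - a[0] - a[1]) / a[0]   # balance at state 0
for j in range(1, N):                      # balance at state j gives pi_{j+1}
    pi[j+1] = (pi[j] - pi[0]*a[j+1]
               - sum(pi[i]*a[j+1-i] for i in range(1, j+1))) / a[0]

T = sum((j - W) * pi[j] for j in range(W + 1, N + 1)) / r
print(round(pi.sum(), 6), T)   # pi sums to ~1 when N is large enough
```

As a sanity check, the same $\pi$ (up to truncation error) comes out of solving the truncated matrix from the previous sketch with the `equilibrium()` solver, which is a useful cross-check on the recursion.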