

048866: Packet Switch Architectures
Review #3: Discrete-Time Markov Chains
Dr. Isaac Keslassy, Electrical Engineering, Technion

Slide 2: Simple DTMCs
- "States" can be labeled 0, 1, 2, 3, …
- At every time slot, a "jump" decision is made randomly based on the current state.
- [Figure: a two-state chain with transition probabilities p (0 to 1) and q (1 to 0), self-loops 1-p and 1-q; and a larger chain with transitions a, b, c, d, e, f.]
- Sometimes the arrow pointing back to the same state is omitted.
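As a minimal sketch of how such a chain evolves, the two-state chain above can be simulated one slot at a time (the values of p and q below are illustrative, not from the slides):

```python
import random

def step(state, p=0.3, q=0.4):
    """One slot of the two-state chain: from state 0 jump to 1 w.p. p,
    from state 1 jump to 0 w.p. q; otherwise stay put."""
    u = random.random()
    if state == 0:
        return 1 if u < p else 0
    return 0 if u < q else 1

random.seed(0)
path = [0]
for _ in range(10):
    path.append(step(path[-1]))
print(path)  # a random trajectory of states 0/1
```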

Slide 3: 1-D Random Walk
- Time is slotted.
- The walker flips a coin every time slot to decide which way to go: X(t) moves +1 with probability p and -1 with probability 1-p.
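A sketch of this walk in a few lines (the horizon T and the starting point are arbitrary choices for illustration):

```python
import random

def random_walk(T, p=0.5, x0=0):
    """Simulate X(t) for t = 0..T: each slot move +1 w.p. p, else -1."""
    x = x0
    traj = [x]
    for _ in range(T):
        x += 1 if random.random() < p else -1
        traj.append(x)
    return traj

random.seed(1)
traj = random_walk(1000, p=0.5)
```

Every consecutive pair of positions differs by exactly 1, which is the defining feature of the walk.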

Slide 4: Single-Server Queue
- Consider a queue at a supermarket.
- In every time slot:
  - A customer arrives with probability p (Bernoulli(p) arrivals).
  - The head-of-line (HoL) customer leaves with probability q (Geom(q) service).

Slide 5: Birth-Death Chain
- The queue can be modeled by a birth-death chain (a.k.a. the Geom/Geom/1 queue).
- We want to know:
  - the queue-size distribution,
  - the average waiting time, etc.
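A rough simulation of the queue described on the previous slide, assuming one particular slot convention (arrival processed before the departure attempt in each slot; p and q are illustrative and chosen with p < q so the queue is stable):

```python
import random

def simulate_queue(T, p=0.3, q=0.5, seed=2):
    """Slotted single-server queue: each slot, one arrival w.p. p;
    if the queue is nonempty, the HoL customer departs w.p. q.
    Returns the queue size observed at the end of each slot."""
    random.seed(seed)
    n, sizes = 0, []
    for _ in range(T):
        if random.random() < p:
            n += 1
        if n > 0 and random.random() < q:
            n -= 1
        sizes.append(n)
    return sizes

sizes = simulate_queue(200000)
avg = sum(sizes) / len(sizes)  # empirical average queue size
```

With p < q the empirical average stays small, consistent with a stable birth-death chain.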

Slide 6: Discrete-Time Markov Chains
- Markov property (memorylessness): the "future" is independent of the "past" given the "present".
- A sequence of random variables {X_n} is called a Markov chain if it has the Markov property:
  P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, …, X_0 = i_0) = P(X_{n+1} = j | X_n = i)
- States are usually labeled {0, 1, 2, …}.
- The state space can be finite or infinite.

Slide 7: Transition Probability
- p_ij is the probability to jump from state i to state j: p_ij = P(X_{n+1} = j | X_n = i).
- Assume the chain is stationary (time-homogeneous): p_ij is independent of the time n.
- Transition probability matrix: P = (p_ij).
- Two-state MC (from Slide 2):
  P = [ 1-p   p  ]
      [  q   1-q ]
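The n-step transition probabilities are the entries of P^n. A small pure-Python sketch for the two-state matrix above (p and q are the illustrative values used earlier):

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, n):
    """Compute P^n by repeated multiplication (n >= 1)."""
    R = P
    for _ in range(n - 1):
        R = mat_mul(R, P)
    return R

p, q = 0.3, 0.4
P = [[1 - p, p],
     [q, 1 - q]]
P10 = mat_pow(P, 10)  # 10-step transition probabilities
```

Each row of P^n still sums to 1, and for this chain the two rows become nearly identical as n grows: the chain forgets its starting state.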

Slide 8: Stationary Distribution
- Define the row vector π_k by π_k(j) = P(X_k = j). Then π_{k+1} = π_k P.
- Stationary distribution: π = lim_{k→∞} π_k, if the limit exists.
- If π exists, we can solve for it via
  π = π P,  with Σ_j π(j) = 1.
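For the two-state chain the fixed-point equations π = πP can be solved by hand, giving π = (q/(p+q), p/(p+q)); a quick numerical check (with the same illustrative p, q as before):

```python
p, q = 0.3, 0.4
P = [[1 - p, p],
     [q, 1 - q]]

# Closed-form stationary distribution of the two-state chain
pi = [q / (p + q), p / (p + q)]

# Verify the fixed point: (pi P)(j) = sum_i pi(i) * P[i][j]
piP = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
```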

Slide 9: Balance Equations
- For each state i:
  π(i) Σ_{j≠i} p_ij = Σ_{j≠i} π(j) p_ji
- These are called balance equations: transitions into and out of state i are balanced.

Slide 10: In General
- If we partition all the states into two sets, then transitions between the two sets must be "balanced": the probability flow across the cut is equal in both directions.
- This is equivalent to a bisection (cut) in the state-transition graph.
- It can be easily derived from the balance equations.
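For a birth-death chain, cutting between {0, …, k} and {k+1, …} crosses only one edge in each direction, so cut balance reads π(k)·a = π(k+1)·b, which yields the geometric form π(k) ∝ (a/b)^k. A sketch on a truncated state space (a, b, N are illustrative assumptions, with a < b):

```python
# Birth-death chain on {0, ..., N}: up w.p. a, down w.p. b (a < b), truncated at N.
a, b, N = 0.2, 0.5, 30
r = a / b

# Geometric stationary distribution, normalized over the truncated space
pi = [r ** k for k in range(N + 1)]
Z = sum(pi)
pi = [x / Z for x in pi]

# Cut between {0..k} and {k+1..N}: upward flow pi[k]*a must equal downward flow pi[k+1]*b
cut_balanced = all(abs(pi[k] * a - pi[k + 1] * b) < 1e-12 for k in range(N))
```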

Slide 11: Conditions for π to Exist (I)
- Definitions:
  - State j is reachable from state i if p_ij^(n) > 0 for some n ≥ 0.
  - States i and j communicate if each is reachable from the other.
  - The Markov chain is irreducible if all states communicate.

Slide 12: Conditions for π to Exist (I) (cont'd)
- Condition: the Markov chain is irreducible.
- [Figure: counterexamples, e.g. a chain with an absorbing state (a p=1 self-loop), where not all states communicate.]
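Irreducibility can be checked mechanically: run a graph search along positive-probability transitions from every state and test whether all states are reached. A sketch (the example matrices are illustrative):

```python
from collections import deque

def reachable(P, i):
    """Set of states reachable from i via positive-probability transitions (BFS)."""
    seen, dq = {i}, deque([i])
    while dq:
        u = dq.popleft()
        for v, pv in enumerate(P[u]):
            if pv > 0 and v not in seen:
                seen.add(v)
                dq.append(v)
    return seen

def irreducible(P):
    """All states communicate iff every state reaches every other state."""
    n = len(P)
    return all(reachable(P, i) == set(range(n)) for i in range(n))

P_irr = [[0.5, 0.5], [0.3, 0.7]]   # both states communicate
P_red = [[1.0, 0.0], [0.3, 0.7]]   # state 0 is absorbing: reducible
```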

Slide 13: Conditions for π to Exist (II)
- The Markov chain is aperiodic: the period of every state, d(i) = gcd{n ≥ 1 : p_ii^(n) > 0}, equals 1.
- [Figure: counterexample, e.g. a chain that deterministically alternates between two states, which has period 2.]
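The period of a state can be approximated numerically as the gcd of the return lengths n with (P^n)_ii > 0, scanned up to some bound (a heuristic sketch, not an exact algorithm; the bound nmax and the example matrices are illustrative):

```python
from math import gcd

def period(P, i, nmax=50):
    """gcd of all n <= nmax with (P^n)[i][i] > 0; approximates the period of state i."""
    n = len(P)
    g, Pn = 0, P
    for step in range(1, nmax + 1):
        if Pn[i][i] > 0:
            g = gcd(g, step)
        # Pn <- Pn * P
        Pn = [[sum(Pn[r][k] * P[k][c] for k in range(n)) for c in range(n)]
              for r in range(n)]
    return g

cycle2 = [[0.0, 1.0], [1.0, 0.0]]   # deterministic 2-cycle: period 2
lazy   = [[0.5, 0.5], [1.0, 0.0]]   # self-loop at state 0: aperiodic (period 1)
```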

Slide 14: Conditions for π to Exist (III)
- The Markov chain is positive recurrent:
  - State i is recurrent if it will be re-entered infinitely often; otherwise it is transient.
  - A recurrent state i is positive recurrent if E(T_i) < ∞, where T_i is the time between visits to state i; otherwise it is null recurrent.

Slide 15: Irreducible Ergodic Markov Chain
- The Markov chain is ergodic if it is positive recurrent and aperiodic.
- In an irreducible ergodic Markov chain, the iterates π_{k+1} = π_k P converge to a unique π:
  - π is independent of the initial conditions.
  - π(j) is the limiting probability that the process will be in state j at time n (as n → ∞).
  - It is also equal to the long-run proportion of time that the process is in state j (ergodicity).
  - It is called the stationary probability.
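The "long-run proportion of time" interpretation can be checked by simulation: run the two-state chain for many slots and compare the empirical fraction of time in state 0 with the stationary value q/(p+q) (parameters and horizon are the illustrative choices used earlier):

```python
import random

random.seed(3)
p, q = 0.3, 0.4
state, visits = 0, [0, 0]
T = 200000
for _ in range(T):
    visits[state] += 1
    u = random.random()
    # from 0: jump to 1 w.p. p; from 1: jump to 0 w.p. q
    state = (1 if u < p else 0) if state == 0 else (0 if u < q else 1)

frac0 = visits[0] / T
# stationary probability of state 0: q / (p + q) = 4/7 ≈ 0.571
```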

Slide 16: Irreducible Ergodic Markov Chain (cont'd)
- If f is a bounded function on the state space, the time average converges:
  (1/N) Σ_{n=1}^{N} f(X_n) → Σ_j π(j) f(j)  as N → ∞.
- Let m_jj be the expected number of transitions until the Markov chain, starting in state j, returns to state j. Then m_jj = 1/π(j).
- References: books on stochastic processes (e.g., Ross).
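The relation m_jj = 1/π(j) can also be checked by simulation: for the two-state chain with π(0) = q/(p+q), the mean return time to state 0 should be (p+q)/q = 1.75 with the illustrative values p = 0.3, q = 0.4:

```python
import random

random.seed(4)
p, q = 0.3, 0.4

def step(s):
    u = random.random()
    return (1 if u < p else 0) if s == 0 else (0 if u < q else 1)

total, count = 0, 0
s, t = 0, 0          # start in state 0; t counts slots since the last visit to 0
for _ in range(300000):
    s = step(s)
    t += 1
    if s == 0:       # returned to state 0: record the return time
        total += t
        count += 1
        t = 0

mean_return = total / count
# theory: m_00 = 1 / pi(0) = (p + q) / q = 1.75
```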