1 Markov chains and processes: motivations
Random walk.
One-dimensional walk: you can only move one step right or left every time unit (right with probability p, left with probability q).
Two-dimensional walk: at each step you move north, south, east, or west.
(Figures: a line of states around 0 with step probabilities p and q; a 2-D walk around a house with compass directions N, S, E, W.)

2 One-dimensional random walk with reflective barriers
Hypothesis:
Probability(object moves to the right) = p
Probability(object moves to the left) = q
Rule: an object at position 2 (resp. -2) that takes a step to the right (resp. left) hits the reflective wall and bounces back to 2 (resp. -2).
(Figure: the five states -2, -1, 0, 1, 2 with step probabilities p and q.)
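The bounce rule can be illustrated with a short simulation (a minimal sketch, not from the slides; the function name `step` and the value p = 0.5 are assumptions):

```python
import random

def step(x, p):
    """One move of the reflective walk on {-2, ..., 2}: right with
    probability p, left with probability q = 1 - p. A right step at 2
    (or a left step at -2) hits the wall and bounces back."""
    nxt = x + (1 if random.random() < p else -1)
    return max(-2, min(2, nxt))  # clamping encodes the bounce-back rule

random.seed(0)
x, path = 0, [0]
for _ in range(10):
    x = step(x, 0.5)
    path.append(x)
print(path)  # a sample trajectory that never leaves {-2, ..., 2}
```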

3 Discrete state space and discrete time
Let X_t = random variable indicating the position of the object at time t,
where t takes on multiples of the time unit, i.e., t = 0, 1, 2, 3, … => discrete time,
and X_t belongs to {-2, -1, 0, 1, 2} => discrete state space.
Discrete time + discrete state space + other conditions => Markov chain.
Example: the random walk.

4 Discrete state space and continuous time
In this case the state space is discrete, but shifts from one value to another occur continuously in time.
Example: the number of packets in the output buffer of a router. The number of packets is discrete and changes whenever there is an arrival or a departure.
All the queues studied so far fall under this category.
Discrete state space + continuous time + other conditions => Markov process (e.g., M/M queues).

5 Random walk: one-step transition probability
X_t = i => X_{t+1} = i ± 1.
The one-step probability P_ij(1) indicates where the object will be in one step.
Examples:
P[X_{t+1} = 1 | X_t = 0] = p
P[X_{t+1} = -1 | X_t = 0] = q
P[X_{t+1} = 0 | X_t = 1] = q
P[X_{t+1} = 2 | X_t = 1] = p

6 One-step transition matrix
The one-step transition matrix P(1) collects the probabilities P_ij(1): rows are indexed by the state at time t, columns by the state at time t+1.
(The slide shows the 5x5 matrix for the states -2, -1, 0, 1, 2.)

7 2-step transition probability
P_ij(2) = 2-step transition probability: given that at time t the object is in state i, the probability that it reaches j in exactly 2 steps.
P_ij(2) = P[X_{t+2} = j | X_t = i]
Examples:
P[X_{t+2} = 2 | X_t = 0] = p^2
P[X_{t+2} = 2 | X_t = 1] = p^2
P[X_{t+2} = 0 | X_t = -1] = 0
Next, we will populate the 2-step transition matrix P(2).

8 2-step transition matrix
Observation: the 2-step transition matrix P(2) can be obtained by multiplying two 1-step transition matrices: P(2) = P(1) · P(1).
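As a sketch (p = 0.6 is an assumed example value, states ordered -2, -1, 0, 1, 2), the observation P(2) = P(1) · P(1) can be checked numerically against the entries computed on the previous slide:

```python
import numpy as np

p = 0.6
q = 1 - p
# One-step transition matrix of the reflective walk; rows/columns follow
# the state order (-2, -1, 0, 1, 2). The corner entries encode the bounce:
# from 2 a right step returns to 2, from -2 a left step returns to -2.
P = np.array([
    [q, p, 0, 0, 0],
    [q, 0, p, 0, 0],
    [0, q, 0, p, 0],
    [0, 0, q, 0, p],
    [0, 0, 0, q, p],
])

P2 = P @ P  # 2-step matrix = product of two 1-step matrices
s = {-2: 0, -1: 1, 0: 2, 1: 3, 2: 4}  # state -> row/column index
print(P2[s[0], s[2]])   # P[X_{t+2}=2 | X_t=0] = p^2
print(P2[s[1], s[2]])   # P[X_{t+2}=2 | X_t=1] = p^2
print(P2[s[-1], s[0]])  # P[X_{t+2}=0 | X_t=-1] = 0
```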

9 3-step transition probability
P_ij(3) may be derived as follows:
P_ij(3) = Σ_k P_ik(1) · P_kj(2), i.e., P(3) = P(1) · P(2).

10 3-step transition probability: example
(The slide works out one entry of P(3) as an example, combining terms p, q, and p^2. Once every entry is computed, you have constructed the 3-step transition matrix P(3).)

11 Chapman-Kolmogorov equation
Let P_ij(n) be the n-step transition probability. It can be decomposed through an intermediate state k: jump from i to some state k in v steps, and then in the remaining n-v steps go from k to j:
P_ij(n) = Σ_k P_ik(v) · P_kj(n-v)
In matrix form, the n-step transition matrix is P(n) = P(v) · P(n-v).
(Figure: a path i -> k -> j, split into v steps and n-v steps.)
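A quick numerical check of the Chapman-Kolmogorov equation on the reflective-walk matrix (p = 0.6 is an assumed example value): the choice of split point v does not matter.

```python
import numpy as np

p, q = 0.6, 0.4
P = np.array([  # one-step matrix of the reflective walk, states (-2, ..., 2)
    [q, p, 0, 0, 0],
    [q, 0, p, 0, 0],
    [0, q, 0, p, 0],
    [0, 0, q, 0, p],
    [0, 0, 0, q, p],
])

n = 7
Pn = np.linalg.matrix_power(P, n)
for v in range(1, n):
    # Chapman-Kolmogorov: P(n) = P(v) P(n-v) for every intermediate v
    lhs = np.linalg.matrix_power(P, v) @ np.linalg.matrix_power(P, n - v)
    assert np.allclose(lhs, Pn)
print("Chapman-Kolmogorov holds for every split v of", n, "steps")
```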

12 Markov chain: main feature
A Markov chain has a discrete state space and a discrete time structure.
Assumption (the Markov property):
P[X_{t+1} = j | X_t = i] = P[X_{t+1} = j | X_0 = k, X_1 = k', …, X_t = i]
In other words, the probability that the object is in position j at time t+1, given that it was in position i at time t, is independent of the earlier history.

13 Markov chain: main objective
Objective: obtain the long-term probabilities, also called equilibrium or stationary probabilities. In the case of the random walk, this is the probability of being at position i in the long run.
P_ij(n) becomes less dependent on i when n is very large; it ends up depending only on the destination state j.

14 n-step transition matrix: long run
π_j = Prob[system will be in state j in the long run, i.e., after a large number of transitions]

15 Random walk: application
Prob[at time 0, the object is in state i] = ? We need to specify an initial distribution over the states -2, -1, 0, 1, 2.

16 Initial states are equiprobable
If all states are equiprobable at time 0, then Prob[X_0 = i] = 1/5 for each of the five states.

17 Object initially at a specific position
As n grows, the distribution of X_n moves away from the original initial vector: the long-run behavior is independent of the initial position.

18 The power method
Assume a Markov chain with m+1 states 0, 1, 2, …, m. The power method obtains the long-term distribution by repeatedly multiplying an initial distribution by the one-step transition matrix, i.e., by raising P to a large power.
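A sketch of the power method on the reflective walk (p = 0.6 and the uniform starting vector are assumed example choices): start from any distribution and multiply repeatedly by P.

```python
import numpy as np

p, q = 0.6, 0.4
P = np.array([  # one-step matrix of the reflective walk, states (-2, ..., 2)
    [q, p, 0, 0, 0],
    [q, 0, p, 0, 0],
    [0, q, 0, p, 0],
    [0, 0, q, 0, p],
    [0, 0, 0, q, p],
])

pi = np.full(5, 1 / 5)          # any starting distribution works; uniform here
for _ in range(200):
    pi = pi @ P                 # one iteration: pi^(t+1) = pi^(t) P
print(np.round(pi, 4))          # long-term probabilities of -2, -1, 0, 1, 2
print(np.allclose(pi, pi @ P))  # stationarity: a further step changes nothing
```

Starting instead from a point mass on any single state gives the same limit, which is the "independent of the initial position" claim above.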

19 Long-term probabilities: system of equations
In equilibrium the distribution satisfies the balance equations π_j = Σ_i π_i P_ij for every state j (in matrix form, π = π P), together with the normalizing equation Σ_j π_j = 1.

20 Solving the system of equations
With m+1 states, the balance equations plus the normalizing equation give m+2 equations for m+1 unknowns. One of the balance equations is redundant, so you get rid of one of them while keeping the normalizing equation.

21 The long-term probabilities: solution
Application to the random walk: try to find the long-term probabilities of the states -2, -1, 0, 1, 2.
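Following the recipe above (drop one redundant balance equation, keep the normalization), the long-term probabilities of the reflective walk can be computed as a linear system; p = 0.6 is an assumed example value.

```python
import numpy as np

p, q = 0.6, 0.4
P = np.array([  # one-step matrix of the reflective walk, states (-2, ..., 2)
    [q, p, 0, 0, 0],
    [q, 0, p, 0, 0],
    [0, q, 0, p, 0],
    [0, 0, q, 0, p],
    [0, 0, 0, q, p],
])
m = P.shape[0]

# Balance equations pi P = pi, i.e. (P^T - I) pi = 0. One of them is
# redundant, so overwrite one row with the normalizing equation sum(pi) = 1.
A = P.T - np.eye(m)
A[-1, :] = 1.0
b = np.zeros(m)
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(np.round(pi, 4))  # geometric profile: each ratio pi[i+1]/pi[i] = p/q
```

When p = q = 1/2 the same computation returns the uniform vector (1/5, …, 1/5).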

22 Markov process
Discrete state space, continuous time structure.
Example: the M/M/1 queue. X_t = number of customers in the queue at time t, taking values in {0, 1, …}.
p_ij(s, t) = P[X_t = j | X_s = i]; for a time-homogeneous process this depends only on the elapsed time: p_ij(ζ) = P[X_{t+ζ} = j | X_t = i].

23 Rate matrix
In continuous time, the one-step transition matrix is replaced by a rate matrix Q, and the stationary probability vector π = (π_0, π_1, π_2, …) solves π Q = 0 together with Σ_j π_j = 1.
For the M/M/1 queue with arrival rate λ and service rate μ, the solution is π_n = (1 - ρ) ρ^n, where ρ = λ/μ < 1.
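A numeric sketch of the M/M/1 solution (λ = 2 and μ = 5 are assumed example rates): the closed form π_n = (1-ρ)ρ^n, checked against the balance condition π Q = 0 on a truncated rate matrix.

```python
import numpy as np

lam, mu = 2.0, 5.0         # assumed arrival and service rates
rho = lam / mu             # utilization; stability requires rho < 1

N = 60                     # truncate the infinite state space at N states
n = np.arange(N)
pi = (1 - rho) * rho ** n  # M/M/1 stationary probabilities pi_n = (1-rho) rho^n

# Rate matrix Q of the birth-death chain: up-rate lam, down-rate mu,
# diagonal chosen so that every row sums to zero.
Q = np.zeros((N, N))
for i in range(N - 1):
    Q[i, i + 1] = lam
    Q[i + 1, i] = mu
Q -= np.diag(Q.sum(axis=1))

print(pi[:3])                  # pi_0 = 0.6, pi_1 = 0.24, pi_2 = 0.096
print(np.max(np.abs(pi @ Q)))  # numerically ~0: pi balances the rates
```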