IENG 362 Markov Chains.

Presentation transcript:

IENG 362 Markov Chains

Steady State Conditions

Proposition: For any irreducible ergodic Markov chain, $\lim_{n \to \infty} P_{ij}^{(n)}$ exists and is independent of the starting state $i$:

$$\lim_{n \to \infty} P_{ij}^{(n)} = \pi_j > 0$$

where the $\pi_j$ satisfy the steady state equations

$$\pi_j = \sum_{i=0}^{M} \pi_i P_{ij}, \qquad \sum_{j=0}^{M} \pi_j = 1.$$

Steady State Conditions

Heuristic: for large $n$ the chain "forgets" its starting state, so

$$P\{X_n = j\} = \sum_{i=0}^{M} P\{X_0 = i\}\, P_{ij}^{(n)} \approx \pi_j,$$

regardless of the initial distribution.

Steady State Conditions

Now, by the Chapman-Kolmogorov equations,

$$P_{ij}^{(n+1)} = \sum_{k=0}^{M} P_{ik}^{(n)} P_{kj}.$$

Steady State Conditions

Now,

$$P_{ij}^{(n+1)} = \sum_{k=0}^{M} P_{ik}^{(n)} P_{kj}.$$

Taking the limit as $n \to \infty$,

$$\pi_j = \sum_{k=0}^{M} \pi_k P_{kj}.$$
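The steady state equations are just a linear system, so they can also be solved numerically. A minimal Python/NumPy sketch (not part of the original slides), replacing one redundant balance equation with the normalization condition:

```python
import numpy as np

def steady_state(P):
    """Solve pi = pi P together with sum(pi) = 1."""
    M = P.shape[0]
    A = np.eye(M) - P.T   # rows encode the balance equations (I - P^T) pi = 0
    A[-1, :] = 1.0        # replace one redundant equation with sum(pi) = 1
    b = np.zeros(M)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```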

Example; Inventory

For the inventory chain with transition matrix

$$P = \begin{bmatrix} .080 & .184 & .368 & .368 \\ .632 & .368 & 0 & 0 \\ .264 & .368 & .368 & 0 \\ .080 & .184 & .368 & .368 \end{bmatrix}$$

the steady state equations $\pi_j = \sum_{k=0}^{M} \pi_k P_{kj}$ become

$$\begin{aligned}
\pi_0 &= .080\pi_0 + .632\pi_1 + .264\pi_2 + .080\pi_3 \\
\pi_1 &= .184\pi_0 + .368\pi_1 + .368\pi_2 + .184\pi_3 \\
\pi_2 &= .368\pi_0 + .368\pi_2 + .368\pi_3 \\
\pi_3 &= .368\pi_0 + .368\pi_3
\end{aligned}$$

Example; Inventory

One of the balance equations is redundant, so we solve the system together with the normalization condition

$$\pi_0 + \pi_1 + \pi_2 + \pi_3 = 1.$$

Example; Inventory

Choose $\pi_0 = 1$. The last balance equation then gives

$$(1 - .368)\pi_3 = .368(1) \quad \Rightarrow \quad \pi_3 = .582.$$

Example; Inventory

With $\pi_0 = 1$ and $\pi_3 = .582$, the $\pi_2$ equation gives

$$(1 - .368)\pi_2 = .368(1) + .368(.582) \quad \Rightarrow \quad \pi_2 = .9212.$$

Example; Inventory

With $\pi_0 = 1$, $\pi_3 = .582$, and $\pi_2 = .9212$, the $\pi_1$ equation gives

$$(1 - .368)\pi_1 = .184(1) + .368(.9212) + .184(.582) \quad \Rightarrow \quad \pi_1 \approx 1.0.$$

Example; Inventory

Summing the unnormalized values:

$$1.000 + 1.000 + .9212 + .582 = 3.503.$$

Example; Inventory

Dividing each value by 3.503 (and carrying the unrounded $\pi_1 \approx .997$) gives the steady state distribution

$$(\pi_0, \pi_1, \pi_2, \pi_3) \approx (.286,\ .285,\ .263,\ .166).$$
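A short Python sketch reproducing the slides' hand calculation: fix $\pi_0 = 1$, back-substitute through the balance equations, then normalize. Numbers match the slides up to rounding.

```python
p0 = 1.0
p3 = 0.368 * p0 / (1 - 0.368)                               # .582
p2 = (0.368 * p0 + 0.368 * p3) / (1 - 0.368)                # .9212
p1 = (0.184 * p0 + 0.368 * p2 + 0.184 * p3) / (1 - 0.368)   # ~ 1.0
total = p0 + p1 + p2 + p3                                   # ~ 3.50 (slides: 3.503)
pi = [p / total for p in (p0, p1, p2, p3)]                  # ~ (.286, .285, .263, .166)
```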

Example; Inventory

As a check, $\pi = (.286, .285, .263, .166)$ should satisfy

$$\pi_j = \sum_{i=0}^{M} \pi_i P_{ij}, \qquad \sum_{j=0}^{M} \pi_j = 1.$$

Example; Inventory

Indeed, multiplying $\pi = (.286, .285, .263, .166)$ by $P$ returns $\pi$, and raising the matrix to a high power shows the convergence directly: every row of

$$P^{(16)} \approx \begin{bmatrix} .286 & .285 & .263 & .166 \\ .286 & .285 & .263 & .166 \\ .286 & .285 & .263 & .166 \\ .286 & .285 & .263 & .166 \end{bmatrix}$$

equals $\pi$, illustrating $\lim_{n \to \infty} P_{ij}^{(n)} = \pi_j > 0$.
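The convergence can be checked numerically by raising the matrix to a power, e.g. with NumPy (a sketch, not from the slides):

```python
import numpy as np

P = np.array([[.080, .184, .368, .368],
              [.632, .368, .000, .000],
              [.264, .368, .368, .000],
              [.080, .184, .368, .368]])

# After 16 steps the starting state no longer matters:
print(np.linalg.matrix_power(P, 16).round(3))
# every row is approximately [0.286, 0.285, 0.263, 0.166]
```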

Special Cases

For an irreducible, positive recurrent, periodic Markov chain with period $d$, the limit exists only along multiples of the period:

$$\lim_{n \to \infty} P_{jj}^{(nd)} = \frac{d}{\mu_{jj}} = d\,\pi_j,$$

where $\mu_{jj}$ is the expected recurrence time of state $j$ and $\pi_j = 1/\mu_{jj}$.

Special Cases

Ex:

$$P = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$

If we start in state 0, then $P_{00}^{(n)} = 0, 1, 0, 1, \ldots$ for $n = 1, 2, 3, 4, \ldots$, so $\lim_{n \to \infty} P_{00}^{(n)}$ does not exist.

Special Cases

Ex: However, this chain has period $d = 2$, and

$$\lim_{n \to \infty} P_{jj}^{(2n)} = 1 = d\,\pi_j \quad \Rightarrow \quad \pi_j = \frac{1}{2}, \quad j = 0, 1.$$
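The alternating behavior is easy to see numerically; a quick sketch (assumed, not from the slides):

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(np.linalg.matrix_power(P, 5))  # odd power:  P_00^(5) = 0
print(np.linalg.matrix_power(P, 6))  # even power: P_00^(6) = 1 (identity)
# lim P^n does not exist, but along multiples of d = 2 the power is the
# identity, so P_jj^(nd) -> 1 = d * pi_j with pi_j = 1/2.
```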

Cost Functions in M.C.

Suppose $C(X_t)$ = cost of being in state $X_t$. In our inventory example, suppose it costs \$2/unit for each time period inventory is carried over. Then

$$C(X_t) = \begin{cases} 0, & X_t = 0 \\ 2, & X_t = 1 \\ 4, & X_t = 2 \\ 6, & X_t = 3 \end{cases}$$

Cost Functions

In general, the expected average cost over all states and $n$ time periods is given by

$$E\left[\frac{1}{n}\sum_{t=1}^{n} C(X_t)\right].$$

Cost Functions

Consider the first 4 time periods, with some arbitrary path.

[Figure: a sample path through states 1-3 over t = 0, 1, 2, 3]

For the path given,

$$\text{Cost} = C(X_1) + C(X_2) + C(X_3).$$

Cost Functions

But the probability of incurring this cost is the probability associated with this path, i.e., the product of the one-step transition probabilities along it:

$$P\{\text{path}\} = P_{X_0 X_1}\, P_{X_1 X_2}\, P_{X_2 X_3}.$$

Cost Functions

The expected cost is the cost associated with each path times the probability associated with that path, summed over all possible paths:

$$E[\text{Cost}] = \sum_{\text{paths}} P\{\text{path} \mid X_0 = i\} \times \left[C(X_1) + C(X_2) + C(X_3)\right].$$

Cost Functions

But we know how to consider all possible paths to a state. Consider cost × probability at time $t = 2$:

$$E[C(X_2)] = \sum_{j=0}^{M} P_{ij}^{(2)}\, C(j).$$

Cost Functions

Applying this at every time period,

$$E\left[\frac{1}{n}\sum_{t=1}^{n} C(X_t)\right] = \frac{1}{n}\sum_{t=1}^{n}\sum_{j=0}^{M} P_{ij}^{(t)}\, C(j).$$

Cost Functions

The long run average cost then may be given by

$$E[\text{Cost}] = \lim_{n \to \infty} \frac{1}{n}\sum_{t=1}^{n}\sum_{j=0}^{M} P_{ij}^{(t)}\, C(j) = \sum_{j=0}^{M} \pi_j\, C(j),$$

since $P_{ij}^{(t)} \to \pi_j$.
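In code the final formula is a one-liner; a sketch (not from the slides) using the inventory chain's $\pi$ and the \$2/unit carrying cost defined above:

```python
pi = [0.286, 0.285, 0.263, 0.166]
C = [0, 2, 4, 6]                              # C(j) = 2j dollars per period
avg_cost = sum(p * c for p, c in zip(pi, C))
print(avg_cost)                               # ~ 2.62 dollars per period
```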

Example; Inventory

Recall $C(X_t)$ = cost of being in state $X_t$. In our inventory example, it costs \$2/unit for each time period inventory is carried over, so

$$C(X_t) = \begin{cases} 0, & X_t = 0 \\ 2, & X_t = 1 \\ 4, & X_t = 2 \\ 6, & X_t = 3 \end{cases}$$

Example; Inventory

Then the long run expected average cost is

$$E[\text{Cost}] = \sum_{j=0}^{M} \pi_j\, C(j) = .286(0) + .285(2) + .263(4) + .166(6) \approx \$2.62 \text{ per time period}.$$
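As a cross-check, the chain can be simulated and the observed time-average cost compared with the analytic \$2.62. A Monte Carlo sketch, assuming NumPy (not part of the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[.080, .184, .368, .368],
              [.632, .368, .000, .000],
              [.264, .368, .368, .000],
              [.080, .184, .368, .368]])
C = np.array([0.0, 2.0, 4.0, 6.0])

x, total, n = 3, 0.0, 200_000
for _ in range(n):
    x = rng.choice(4, p=P[x])   # one step of the chain
    total += C[x]
print(total / n)                # ~ 2.62
```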

Complex Costs

Inventory: suppose an order for $x$ units is placed at a cost of \$25 + \$10x, and each lost sale (unsatisfied demand) costs \$50. If the $(s, S) = (0, 3)$ inventory policy is used, the one-period cost depends on both the starting stock $X_{t-1}$ and the demand $D_t$:

$$C(X_{t-1}, D_t) = \begin{cases} 25 + 10(3) + 50\max(D_t - 3,\, 0), & X_{t-1} = 0 \\ 50\max(D_t - X_{t-1},\, 0), & 1 \le X_{t-1} \le 3 \end{cases}$$

Complex Costs

The long run expected average cost is then

$$E[\text{Cost}] = \lim_{n \to \infty} E\left[\frac{1}{n}\sum_{t=1}^{n} C(X_{t-1}, D_t)\right].$$

Complex Costs

$$\lim_{n \to \infty} E\left[\frac{1}{n}\sum_{t=1}^{n} C(X_{t-1}, D_t)\right] \;=\; \ldots \text{a miracle occurs} \ldots \;=\; \sum_{j=0}^{M} k(j)\,\pi_j,$$

where $k(j) = E[C(j, D_t)]$ is the expected one-period cost when the period starts in state $j$.

Example; Inventory

For state 0 an order is placed ($25 + 10(3) = \$55$) and sales are lost only if demand exceeds the 3 units ordered:

$$k(0) = E[C(0, D_t)] = 55 + 50P\{D_t = 4\} + 100P\{D_t = 5\} + \cdots \approx 56.2.$$

(Throughout, $\pi_0 = .286$, $\pi_1 = .285$, $\pi_2 = .263$, $\pi_3 = .166$ from the steady state calculation.)

Example; Inventory

$$k(1) = E[C(1, D_t)] = 50P\{D_t = 2\} + 100P\{D_t = 3\} + 150P\{D_t = 4\} + \cdots \approx 18.4,$$

where demand is Poisson with mean 1, $P\{D_t = d\} = e^{-1}/d!$.

Example; Inventory

$$k(2) = E[C(2, D_t)] = 50P\{D_t = 3\} + 100P\{D_t = 4\} + 150P\{D_t = 5\} + \cdots \approx 5.2.$$

Example; Inventory

$$k(3) = E[C(3, D_t)] = 50P\{D_t = 4\} + 100P\{D_t = 5\} + 150P\{D_t = 6\} + \cdots \approx 1.2.$$

Example; Inventory

Combining,

$$E[\text{Cost}] = \sum_{j=0}^{M} k(j)\,\pi_j = 56.2(.286) + 18.4(.285) + 5.2(.263) + 1.2(.166) \approx \$22.9 \text{ per time period}.$$
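The whole complex-cost calculation fits in a few lines of Python; a sketch assuming Poisson(1) demand and the (0, 3) policy as above (the helper names are mine, not the slides'):

```python
from math import exp, factorial

def pois(d, lam=1.0):
    """P{D = d} for Poisson demand with mean lam."""
    return exp(-lam) * lam**d / factorial(d)

def k(j, S=3, unit_lost=50.0, order_cost=55.0, max_d=25):
    """Expected one-period cost k(j) = E[C(j, D)] under the (0, S) policy."""
    on_hand = S if j == 0 else j            # an order refills stock to S
    fixed = order_cost if j == 0 else 0.0   # 25 + 10*S = 55 when ordering
    lost = sum(unit_lost * (d - on_hand) * pois(d)
               for d in range(on_hand + 1, max_d))
    return fixed + lost

pi = [0.286, 0.285, 0.263, 0.166]
ks = [k(j) for j in range(4)]                 # ~ [56.2, 18.4, 5.2, 1.2]
print(sum(p * kj for p, kj in zip(pi, ks)))   # ~ 22.9 dollars per period
```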