

1 Introduction to Stochastic Models GSLM 54100

2 Outline
- limiting distribution
- connectivity
- types of states and of irreducible DTMCs
  - transient, recurrent, positive recurrent, null recurrent
- periodicity
- limiting behavior of irreducible chains

3 Connectivity

4 Connectivity of a DTMC
- connectivity: one factor that determines the limiting behavior of a DTMC
[transition diagram: states A and B]

5 Connectivity of a DTMC
[transition diagram: state A]

6 Connectivity of a DTMC
[transition diagram: state B]

7 Connectivity of a DTMC
[transition diagram: states A and B]

8 Connectivity of a DTMC
[transition diagram: state C]

9 Connectivity of a DTMC
[transition diagram: states A and D]

10 Connectivity of a DTMC
- rows of P^(∞) may not be the same as in the previous examples

11 Connectivity of a DTMC
- the limit of P^(m) may not exist
[transition diagram]
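This non-existence is easy to see numerically. The sketch below (plain Python, not from the slides) raises the deterministic two-state flip chain to even and odd powers: P^(m) alternates between the identity and P itself, so it has no limit.

```python
def mat_mul(A, B):
    """Multiply two square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, m):
    """Raise matrix P to the m-th power (m >= 1)."""
    R = P
    for _ in range(m - 1):
        R = mat_mul(R, P)
    return R

# Two-state periodic chain: from each state the chain jumps to the other
# with probability 1, so P^(2k) = I and P^(2k+1) = P for every k.
P = [[0.0, 1.0],
     [1.0, 0.0]]

print(mat_pow(P, 10))  # identity matrix [[1, 0], [0, 1]]
print(mat_pow(P, 11))  # back to P: [[0, 1], [1, 0]]
```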

12 Connectivity of a DTMC

13 Types of States and of Irreducible DTMCs

14 Limiting Results for a DTMC
- depending on the type of states and of the chain
  - type: transient, positive recurrent, null recurrent
  - connectivity and periodicity

15 Transient State
- state i is transient if P(return to i | X_0 = i) < 1
[transition diagram: state A]

16 Recurrent State
- state i is recurrent if P(return to i | X_0 = i) = 1

17 Recurrent State
- two types of recurrent states
  - positive recurrent: E(# of transitions to return to i | X_0 = i) < ∞
  - null recurrent: E(# of transitions to return to i | X_0 = i) = ∞

18 Periodicity

19 Periodicity
- state i is of period d if X_n can return to state i only in numbers of steps that are multiples of d
- example: states 1, 2, 3 are of period 2
[transition diagram: a state i of period d]

20 Periodicity
- period of states 1, 2, 3, and 4 = ?
- state 4 is of period 2: state 4 can return to itself in 2 steps
[transition diagram]
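The period of a state can be computed mechanically as the gcd of the return times, i.e., of all n with (P^n)_ii > 0. A minimal sketch (the example chains below are illustrative, not the ones drawn on the slides):

```python
from math import gcd

def period(P, i, max_n=50):
    """Period of state i: gcd of all n <= max_n with (P^n)_{ii} > 0."""
    size = len(P)
    A = [row[:] for row in P]  # A holds P^n, starting at n = 1
    d = 0
    for n in range(1, max_n + 1):
        if A[i][i] > 0:
            d = gcd(d, n)
        # advance A from P^n to P^(n+1)
        A = [[sum(A[r][k] * P[k][c] for k in range(size))
              for c in range(size)] for r in range(size)]
    return d

# Deterministic 3-cycle 1 -> 2 -> 3 -> 1: every state has period 3.
P_cycle = [[0, 1, 0],
           [0, 0, 1],
           [1, 0, 0]]
print(period(P_cycle, 0))  # 3

# A state with a self-loop can return in 1 step, so its period is 1.
P_loop = [[0.5, 0.5],
          [1.0, 0.0]]
print(period(P_loop, 0))  # 1
```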

21 Communicating States
- communicating states are of the same type: transient, positive recurrent, or null recurrent at the same time, and of the same period
- hence all states in an irreducible chain are of the same type and of the same period

22 Limiting Behavior of Irreducible Chains

23 Limiting Behavior of a Positive Irreducible Chain
- π_j = fraction of time at state j
- N: a very large positive integer
- # of periods at state j ≈ π_j N
- balance of flow: π_j N ≈ Σ_i (π_i N) p_ij, i.e., π_j = Σ_i π_i p_ij

24 Limiting Behavior of a Positive Irreducible Chain
- π_j = fraction of time at state j
- π_j = Σ_i π_i p_ij
  - π_1 = 0.9π_1 + 0.2π_2
  - π_2 = 0.1π_1 + 0.8π_2
- the two balance equations are linearly dependent
- normalization equation: π_1 + π_2 = 1
- solving: π_1 = 2/3, π_2 = 1/3
[two-state transition diagram with transition probabilities 0.1 and 0.2]
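The interpretation of π_j as a long-run fraction of time can be checked by simulation. A sketch, assuming the two-state chain of this slide (p_11 = 0.9, p_12 = 0.1, p_21 = 0.2, p_22 = 0.8):

```python
import random

# Simulate the two-state chain and record the fraction of time in state 1;
# by the limiting result it should settle near pi_1 = 2/3.
random.seed(1)
n_steps = 200_000
state = 1
time_in_1 = 0
for _ in range(n_steps):
    if state == 1:
        time_in_1 += 1
        state = 1 if random.random() < 0.9 else 2  # p_11 = 0.9
    else:
        state = 1 if random.random() < 0.2 else 2  # p_21 = 0.2

print(round(time_in_1 / n_steps, 3))  # close to 2/3
```

The exact values follow directly from 0.1π_1 = 0.2π_2 together with π_1 + π_2 = 1.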

25 Limiting Behavior of a Positive Irreducible Chain
- balance equations from the transition diagram: π_1 = 0.75…π_3 and π_3 = 0.25π_2
- normalization: π_1 + π_2 + π_3 = 1
- solving: π_1 = 301/801, π_2 = 400/801, π_3 = 100/801
[three-state transition diagram]

26 Limiting Behavior of a Positive Irreducible Chain
- an irreducible DTMC {X_n} is positive recurrent ⟺ there exists a unique nonnegative solution to π_j = Σ_i π_i p_ij with Σ_j π_j = 1
- π_j: stationary (steady-state) distribution of {X_n}
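One simple numerical way to find that unique solution is power iteration: for an aperiodic positive irreducible chain, every row of P^m converges to π. The 3-state matrix below is a made-up illustration, not a chain from the slides.

```python
def mat_mul(A, B):
    """Multiply two square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical irreducible, aperiodic 3-state chain (illustration only).
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

M = P
for _ in range(200):
    M = mat_mul(M, P)  # M becomes a high power of P; its rows converge to pi

pi = M[0]
print([round(x, 4) for x in pi])  # approximate stationary distribution
```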

27 Limiting Behavior of a Positive Irreducible Chain
- π_j = fraction of time at state j
- π_j = fraction of expected time at state j
- average cost: a cost c_j for each visit to state j gives long-run average cost Σ_j c_j π_j per period
- random i.i.d. costs C_j for each visit to state j give long-run average cost Σ_j E(C_j) π_j per period
- for an aperiodic chain: lim_{n→∞} P(X_n = j) = π_j

28 Limiting Behavior of a Positive Irreducible Chain
- π_1 = 301/801, π_2 = 400/801, π_3 = 100/801
- profit per state: c_1 = 4, c_2 = 8, c_3 = −2
- average profit = Σ_j c_j π_j = (4·301 + 8·400 − 2·100)/801 = 4204/801 ≈ 5.25
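The average-profit computation Σ_j c_j π_j can be checked with exact fractions (π values and profits as given on this slide):

```python
from fractions import Fraction as F

# Stationary distribution and per-visit profits from the slide.
pi = {1: F(301, 801), 2: F(400, 801), 3: F(100, 801)}
c = {1: 4, 2: 8, 3: -2}

# Long-run average profit per period.
avg_profit = sum(c[j] * pi[j] for j in pi)
print(avg_profit)         # 4204/801
print(float(avg_profit))  # about 5.25
```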

29 Limiting Behavior of a Positive Irreducible Chain
- π_1 = 301/801, π_2 = 400/801, π_3 = 100/801
- C_1 ~ unif[0, 8], C_2 ~ Geo(1/8), C_3 = −4 w.p. 0.5 and 0 w.p. 0.5
- E(C_1) = 4, E(C_2) = 8, E(C_3) = −2
- average profit = Σ_j E(C_j) π_j = 4204/801 ≈ 5.25

30 Different Interpretations of the Balance Equations
- balance of rates: total rate into a group of states = total rate out of that group
  - {0}: π_0 = qπ_1
  - {0, 1}: pπ_1 = qπ_2
  - {0, 1, …, n}: pπ_n = qπ_{n+1}, n ≥ 1
[birth-death transition diagram: states 0, 1, 2, 3, … with up probability p and down probability q]

31 Example: Condition for the Following Chain to be Positive
- {0}: π_0 = qπ_1
- {0, 1}: pπ_1 = qπ_2
- {0, 1, …, n}: pπ_n = qπ_{n+1}, n ≥ 1
- positive ⟺ {π_j} exists ⟺ π_0 + π_1 + π_2 + … = 1 has a solution ⟺ p < q
  - since π_{n+1} = (p/q)^n π_1, the sum converges iff p/q < 1
[same birth-death transition diagram as slide 30]
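The cut equations above can be iterated numerically: when p < q the unnormalized π_n form a summable geometric sequence, so normalization succeeds. A sketch, with p = 0.3 and q = 0.7 as illustrative values (not from the slide):

```python
# Cut (balance) equations for the birth-death chain:
#   pi_0 = q * pi_1   and   p * pi_n = q * pi_{n+1} for n >= 1.
p, q = 0.3, 0.7
assert p < q  # the positivity condition derived on the slide

N = 200                  # truncation level; (p/q)^N is negligible here
pi = [1.0, 1.0 / q]      # pi_0 = 1 (unnormalized); pi_1 from pi_0 = q*pi_1
for n in range(1, N):
    pi.append(pi[n] * p / q)  # from p*pi_n = q*pi_{n+1}

total = sum(pi)
pi = [x / total for x in pi]
print(round(pi[0], 6))  # closed form: 1/(1 + 1/(q - p)) = 2/7 ≈ 0.285714
```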

32 Example 4.24 of Ross
- four-state production process with states {1, 2, 3, 4}
- up states: {3, 4}; down states: {1, 2}
- find E(up time) & E(down time)
[timeline figure: the state alternating between up and down periods]

33 Example 4.24 of Ross
- π_1 = 3/16, π_2 = 1/4, π_3 = 7/24, π_4 = 13/48
- how to find E(up time) and E(down time)?

34 Example 4.24 of Ross
- fraction of up time = π_3 + π_4 = 9/16
- rate of turning from up to down = (p_31 + p_32)π_3 + (p_41 + p_42)π_4
  = rate of turning from down to up = p_13 π_1 + (p_23 + p_24)π_2
- E(up time) = (fraction of up time)/(rate of turning from up to down), and similarly for E(down time)
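The up-time fraction can be verified with exact fractions, using the π values from slide 33:

```python
from fractions import Fraction as F

# Stationary probabilities from the example; up states are 3 and 4.
pi = {1: F(3, 16), 2: F(1, 4), 3: F(7, 24), 4: F(13, 48)}
assert sum(pi.values()) == 1  # sanity check: pi is a distribution

up_fraction = pi[3] + pi[4]
print(up_fraction)  # 9/16
```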