Assignment 3
Chapter 3: Problems 7, 11, 14
Chapter 4: Problems 5, 6, 14
Due date: Monday, March 15, 2004
Example: Inventory System
Inventory at a store is reviewed daily. If the inventory drops below 3 units, an order is placed with the supplier and delivered the next day; the order size should bring the inventory position up to 6 units. Daily demand D is i.i.d. with distribution P(D = 0) = P(D = 1) = P(D = 2) = 1/3. Let X_n denote the inventory level on the nth day. Is the process {X_n} a Markov chain? Assume we start with 6 units.
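A quick numerical sketch of this system, under one particular reading of the policy: an order placed when X_n < 3 arrives before the next day's demand, restoring the level to 6 (the timing is an assumption, since the problem statement leaves it implicit). Under this reading the next state depends only on the current state and a fresh demand, which is exactly the Markov property:

```python
import numpy as np

rng = np.random.default_rng(0)

def next_state(x):
    # Assumed timing: an order placed when x < 3 arrives before
    # the next day's demand, restoring the level to 6.
    level = 6 if x < 3 else x
    demand = rng.integers(0, 3)      # D is uniform on {0, 1, 2}
    return level - demand

# Simulate a long path from X_0 = 6 and estimate the transition matrix.
counts = np.zeros((7, 7))
x = 6
for _ in range(200_000):
    y = next_state(x)
    counts[x, y] += 1
    x = y

row_sums = counts.sum(axis=1, keepdims=True)
P_hat = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(np.round(P_hat[1:, 1:], 2))   # state 0 is never visited under this policy
```

Each visited row of the estimated matrix puts probability about 1/3 on each of three successor states, matching the uniform demand.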
Markov Chains
{X_n : n = 0, 1, 2, ...} is a discrete-time stochastic process.
If X_n = i, the process is said to be in state i at time n.
{i : i = 0, 1, 2, ...} is the state space.
If P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i) = P_ij, the process is said to be a discrete-time Markov chain (DTMC).
P_ij is the transition probability from state i to state j.
P = [P_ij] : the transition matrix. Its entries are nonnegative and each row sums to 1: Σ_j P_ij = 1 for every state i.
Example 1: The probability that it will rain tomorrow depends only on whether or not it rains today:
P(rain tomorrow | rain today) = α
P(rain tomorrow | no rain today) = β
State 0 = rain; State 1 = no rain.
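As a small illustration, the transition matrix of this two-state chain can be written down directly (the values α = 0.7 and β = 0.4 are the ones used later in the lecture):

```python
import numpy as np

# State 0 = rain, state 1 = no rain.
alpha = 0.7   # P(rain tomorrow | rain today)
beta = 0.4    # P(rain tomorrow | no rain today)

P = np.array([[alpha, 1 - alpha],
              [beta,  1 - beta]])

assert np.allclose(P.sum(axis=1), 1.0)  # each row of a transition matrix sums to 1
print(P)
```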
Example 4: A gambler wins $1 with probability p and loses $1 with probability 1 - p on each play. She starts with $N and quits if she reaches either $M or $0. X_n is the amount of money the gambler has after playing n rounds.
P(X_n = i+1 | X_{n-1} = i, X_{n-2} = i_{n-2}, ..., X_0 = N) = P(X_n = i+1 | X_{n-1} = i) = p (i ≠ 0, M)
P(X_n = i-1 | X_{n-1} = i, X_{n-2} = i_{n-2}, ..., X_0 = N) = P(X_n = i-1 | X_{n-1} = i) = 1 - p (i ≠ 0, M)
P_{i,i+1} = P(X_n = i+1 | X_{n-1} = i); P_{i,i-1} = P(X_n = i-1 | X_{n-1} = i)
P_{i,i+1} = p; P_{i,i-1} = 1 - p for i ≠ 0, M
P_{0,0} = 1; P_{M,M} = 1 (0 and M are called absorbing states)
P_{i,j} = 0 otherwise.
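A minimal sketch of this transition matrix in code (the helper name and the choices M = 4, p = 0.5 are illustrative only):

```python
import numpy as np

def gamblers_ruin_matrix(M, p):
    # States are 0, 1, ..., M (the gambler's current fortune).
    P = np.zeros((M + 1, M + 1))
    P[0, 0] = 1.0            # $0 is absorbing
    P[M, M] = 1.0            # $M is absorbing
    for i in range(1, M):
        P[i, i + 1] = p      # win $1
        P[i, i - 1] = 1 - p  # lose $1
    return P

print(gamblers_ruin_matrix(M=4, p=0.5))
```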
Random walk: A Markov chain whose state space is the set of all integers i = 0, ±1, ±2, ..., and whose transition probabilities satisfy P_{i,i+1} = p = 1 - P_{i,i-1} for all i, where 0 < p < 1, is said to be a random walk.
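A short simulation sketch of a random walk (the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

p, n_steps = 0.5, 1000
steps = rng.choice([1, -1], size=n_steps, p=[p, 1 - p])  # +1 w.p. p, -1 w.p. 1 - p
path = np.concatenate(([0], np.cumsum(steps)))           # X_0 = 0
print(path[-1])                                          # position after n_steps plays
```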
Chapman-Kolmogorov Equations
Let P^n_{ij} = P(X_{n+m} = j | X_m = i) denote the n-step transition probability. The Chapman-Kolmogorov equations state that
P^{n+m}_{ij} = Σ_k P^n_{ik} P^m_{kj} for all n, m ≥ 0 and all states i, j.
In matrix form, P^(n+m) = P^(n) P^(m), so the n-step transition matrix is the nth matrix power: P^(n) = P^n.
Example 1 (continued): Let α = 0.7 and β = 0.4. What is the probability that it will rain four days from today, given that it is raining today? (State 0 = rain; State 1 = no rain.)
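By the Chapman-Kolmogorov equations, the answer is the (0, 0) entry of P^4. A quick check with numpy:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

P4 = np.linalg.matrix_power(P, 4)   # four-step transition matrix P^(4) = P^4
print(P4[0, 0])                     # P(rain in four days | rain today) = 0.5749
```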
Unconditional probabilities
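The standard statement of this idea: if α_i = P(X_0 = i) is the initial distribution, then the unconditional distribution at time n is P(X_n = j) = Σ_i α_i P^n_{ij}. Continuing the rain example with an assumed initial distribution (0.4, 0.6), chosen only for illustration:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

a0 = np.array([0.4, 0.6])   # assumed initial distribution: P(X_0 = 0) = 0.4
a4 = a0 @ np.linalg.matrix_power(P, 4)
print(a4[0])                # unconditional probability of rain on day 4 ≈ 0.57
```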
Classification of States
Communicating states
State j is accessible from state i if P^n_{ij} > 0 for some n ≥ 0. States i and j communicate (written i ↔ j) if each is accessible from the other. Communication is an equivalence relation, so it partitions the state space into classes.
Proof
Classification of States (continued)
A Markov chain is irreducible if it has only one class, that is, if all states communicate with one another. The Markov chain with transition probability matrix P is irreducible.
The classes of this Markov chain are {0, 1}, {2}, and {3}.
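The matrix for this example is not reproduced here; the one below is an assumed stand-in chosen to produce exactly these classes. Communicating classes are the strongly connected components of the directed graph with an edge i → j whenever P_ij > 0, which scipy can extract directly:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Assumed stand-in matrix with classes {0, 1}, {2}, and {3}.
P = np.array([[0.5,  0.5,  0.0,  0.0 ],
              [0.5,  0.5,  0.0,  0.0 ],
              [0.25, 0.25, 0.25, 0.25],
              [0.0,  0.0,  0.0,  1.0 ]])

n, labels = connected_components(csr_matrix(P > 0), directed=True,
                                 connection='strong')
for c in range(n):
    print(sorted(np.flatnonzero(labels == c)))   # [0, 1], [2], [3] in some order
```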
Recurrent and transient states
f_i : the probability that, starting in state i, the process will eventually re-enter state i.
State i is recurrent if f_i = 1.
State i is transient if f_i < 1.
If state i is transient then, starting in state i, the probability that the process is in state i for exactly n time periods is f_i^{n-1}(1 - f_i), n ≥ 1: a geometric distribution with mean 1/(1 - f_i).
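A standard companion criterion (following the usual textbook treatment): state i is recurrent if and only if Σ_{n=1}^∞ P^n_{ii} = ∞, i.e., the expected number of returns to i is infinite; for a transient state the sum is finite, with value f_i/(1 - f_i). A rough numerical illustration, with parameters chosen only for the demo:

```python
import numpy as np

def return_sum(P, i, N=500):
    # Partial sum of the n-step return probabilities sum_{n=1}^{N} P^n_{ii};
    # it grows without bound iff state i is recurrent.
    total, Pn = 0.0, np.eye(len(P))
    for _ in range(N):
        Pn = Pn @ P
        total += Pn[i, i]
    return total

rain = np.array([[0.7, 0.3], [0.4, 0.6]])
print(return_sum(rain, 0))   # grows roughly linearly in N: state 0 is recurrent

# Gambler's ruin with M = 4, p = 0.5: the interior states are transient.
G = np.zeros((5, 5))
G[0, 0] = G[4, 4] = 1.0
for i in range(1, 4):
    G[i, i + 1] = G[i, i - 1] = 0.5
print(return_sum(G, 2))      # converges to a finite limit: state 2 is transient
```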
Proof
Not all states of a finite Markov chain can be transient.
If state i is recurrent, and state i communicates with state j, then state j is recurrent.
Proof
If state i is recurrent and state i communicates with state j, then state j is recurrent; recurrence is a class property. If state i is transient and state i communicates with state j, then state j is transient; transience is also a class property. Not all states of a finite Markov chain can be transient. Consequently, all states of a finite irreducible Markov chain are recurrent.
All states communicate, so the chain is irreducible; since the chain is finite, all states are recurrent.
There are three classes: {0, 1}, {2, 3}, and {4}. The first two are recurrent and the third is transient.