Assignment 3: Chapter 3, Problems 7, 11, 14; Chapter 4, Problems 5, 6, 14. Due date: Monday, March 15, 2004.
Example (Inventory System): Inventory at a store is reviewed daily. If inventory drops below 3 units, an order is placed with the supplier, which is delivered the next day. The order size should bring the inventory position up to 6 units. Daily demand D is i.i.d. with distribution P(D = 0) = 1/3, P(D = 1) = 1/3, P(D = 2) = 1/3. Let X_n denote the inventory level on the nth day. Is the process {X_n} a Markov chain? Assume we start with 6 units.
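One quick way to see why {X_n} should be Markov: tomorrow's inventory is a function of today's inventory and fresh demand alone, so nothing in the history before X_n matters. A minimal simulation sketch (the names are mine, and I assume the replenishment order arrives before the next day's demand is met):

```python
import random

def next_inventory(x):
    """One day's transition under the order-up-to-6 policy:
    if inventory is below 3, an order arrives raising it to 6,
    then the day's demand (0, 1, or 2, each w.p. 1/3) is met."""
    stock = 6 if x < 3 else x
    demand = random.choice([0, 1, 2])
    return stock - demand

# Sample path starting from 6 units: each step uses only the current state.
x, path = 6, [6]
for _ in range(12):
    x = next_inventory(x)
    path.append(x)
print(path)
```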
Markov Chains
{X_n : n = 0, 1, 2, ...} is a discrete-time stochastic process.
If X_n = i, the process is said to be in state i at time n.
{i : i = 0, 1, 2, ...} is the state space.
If P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i) = P_ij, the process is said to be a Discrete Time Markov Chain (DTMC).
P_ij is the transition probability from state i to state j.
P = [P_ij] is the transition matrix: its (i, j) entry is the probability of moving from state i to state j in one step. The entries are nonnegative and each row sums to 1.
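As an illustration, the transition matrix of the inventory chain above can be written out row by row. A sketch (the state space {1, ..., 6} and the delivery-before-demand timing are assumptions carried over from the example):

```python
from fractions import Fraction

STATES = [1, 2, 3, 4, 5, 6]

def transition_row(x):
    """Row x of the transition matrix: P(X_{n+1} = y | X_n = x)."""
    stock = 6 if x < 3 else x                # stock after any delivery
    row = {y: Fraction(0) for y in STATES}
    for d in (0, 1, 2):                      # demand outcomes, each w.p. 1/3
        row[stock - d] += Fraction(1, 3)
    return row

P = {x: transition_row(x) for x in STATES}
for x in STATES:
    assert sum(P[x].values()) == 1           # every row of P sums to 1
    print(x, {y: str(q) for y, q in P[x].items() if q})
```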
Example 1: The probability that it will rain tomorrow depends only on whether it rains today:
P(rain tomorrow | rain today) = α
P(rain tomorrow | no rain today) = β
State 0 = rain, State 1 = no rain, so the transition matrix is
P = | α   1-α |
    | β   1-β |
Example 4: A gambler wins $1 with probability p and loses $1 with probability 1-p. She starts with $N and quits if she reaches either $M or $0. X_n is the amount of money the gambler has after playing n rounds.
P(X_n = i+1 | X_{n-1} = i, X_{n-2} = i_{n-2}, ..., X_0 = N) = P(X_n = i+1 | X_{n-1} = i) = p  (i ≠ 0, M)
P(X_n = i-1 | X_{n-1} = i, X_{n-2} = i_{n-2}, ..., X_0 = N) = P(X_n = i-1 | X_{n-1} = i) = 1-p  (i ≠ 0, M)
P_{i,i+1} = P(X_n = i+1 | X_{n-1} = i);  P_{i,i-1} = P(X_n = i-1 | X_{n-1} = i)
P_{i,i+1} = p;  P_{i,i-1} = 1-p  for i ≠ 0, M
P_{0,0} = 1;  P_{M,M} = 1  (0 and M are called absorbing states)
P_{i,j} = 0 otherwise
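A short Monte Carlo sketch of this chain, estimating the probability that the gambler reaches $M before going broke (the parameter values are arbitrary choices of mine):

```python
import random

def final_fortune(start, M, p, rng):
    """Play $1 rounds until absorption at 0 or M; return the end state."""
    x = start
    while 0 < x < M:
        x += 1 if rng.random() < p else -1
    return x

rng = random.Random(0)
N, M, p, trials = 3, 10, 0.5, 100_000
wins = sum(final_fortune(N, M, p, rng) == M for _ in range(trials))
print(f"estimated P(reach ${M} before $0) = {wins / trials:.3f}")
# For p = 1/2 the exact answer is N/M = 0.3, a handy sanity check.
```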
Random walk: A Markov chain whose state space is given by the integers 0, ±1, ±2, ..., with P_{i,i+1} = p = 1 - P_{i,i-1} for all i and 0 < p < 1, is said to be a random walk.
Chapman-Kolmogorov Equations
Let P^n_ij = P(X_{n+m} = j | X_m = i) denote the n-step transition probability. The Chapman-Kolmogorov equations state that
P^{n+m}_ij = Σ_k P^n_ik P^m_kj  for all n, m ≥ 0,
i.e., the matrix of n-step transition probabilities is the nth power of the one-step transition matrix P.
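In matrix form these equations say P^(n+m) = P^(n) P^(m), so the n-step matrix is simply the nth power of P. A small numerical check on an arbitrary 3-state matrix of my own:

```python
import numpy as np

def n_step(P, n):
    """n-step transition matrix: the nth matrix power of P."""
    return np.linalg.matrix_power(P, n)

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])

n, m = 2, 3
# Chapman-Kolmogorov: P^(n+m) = P^(n) P^(m)
assert np.allclose(n_step(P, n + m), n_step(P, n) @ n_step(P, m))
print(n_step(P, n + m))
```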
Example 1 (continued): The probability it will rain tomorrow depends only on whether it rains today:
P(rain tomorrow | rain today) = α
P(rain tomorrow | no rain today) = β
What is the probability that it will rain four days from today, given that it is raining today? Let α = 0.7 and β = 0.4. State 0 = rain, State 1 = no rain.
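Since state 0 = rain, the answer is the (0, 0) entry of the four-step matrix P^4. Computing it:

```python
import numpy as np

alpha, beta = 0.7, 0.4
P = np.array([[alpha, 1 - alpha],   # row 0: it rains today
              [beta,  1 - beta]])   # row 1: it does not rain today

P4 = np.linalg.matrix_power(P, 4)
print(round(P4[0, 0], 4))           # 0.5749
```

So the probability that it rains four days from today, given rain today, is 0.5749.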
Unconditional probabilities
The n-step transition probabilities P^n_ij are conditional on the starting state. To obtain the unconditional distribution of X_n, the initial distribution must be specified: if a_i = P(X_0 = i), then P(X_n = j) = Σ_i a_i P^n_ij.
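Continuing the rain example: if the initial weather itself is random, the distribution of X_n is the row vector a P^n. The initial distribution (0.4, 0.6) below is an illustrative choice of mine, not from the slides:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
a = np.array([0.4, 0.6])    # assumed P(X_0 = rain) = 0.4, P(X_0 = no rain) = 0.6

dist = a @ np.linalg.matrix_power(P, 4)
print(dist)                 # unconditional probabilities P(X_4 = j), j = 0, 1
```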
Classification of States
State j is said to be accessible from state i if P^n_ij > 0 for some n ≥ 0.
Communicating states
States i and j communicate (written i ↔ j) if each is accessible from the other. Communication is an equivalence relation: it is reflexive, symmetric, and transitive, and therefore partitions the state space into disjoint classes. A Markov chain with a single class, all of whose states communicate, is said to be irreducible.
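Communicating classes can be computed mechanically from the zero pattern of P: build the reachability relation, then group mutually reachable states. A sketch (the 4-state matrix is my own illustration; its classes are {0, 1}, {2}, and {3}):

```python
import numpy as np

def communicating_classes(P):
    """Partition the states of a finite chain into communicating classes."""
    P = np.asarray(P)
    n = len(P)
    reach = np.eye(n, dtype=bool) | (P > 0)   # j accessible from i in 0 or 1 steps
    for k in range(n):                        # transitive closure (Floyd-Warshall)
        reach |= reach[:, [k]] & reach[[k], :]
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = sorted(j for j in range(n) if reach[i, j] and reach[j, i])
            classes.append(cls)
            seen.update(cls)
    return classes

P = [[0.5, 0.5, 0.0, 0.0],
     [0.5, 0.5, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.3, 0.3, 0.2, 0.2]]
print(communicating_classes(P))   # [[0, 1], [2], [3]]
```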
Proof (of transitivity): Suppose i ↔ j and j ↔ k. Then there exist n, m ≥ 0 with P^n_ij > 0 and P^m_jk > 0, and by the Chapman-Kolmogorov equations P^{n+m}_ik = Σ_r P^n_ir P^m_rk ≥ P^n_ij P^m_jk > 0, so k is accessible from i. By the same argument i is accessible from k; hence i ↔ k.
Classification of States (continued)
The Markov chain with transition probability matrix P is irreducible.
The classes of this Markov chain are {0, 1}, {2}, and {3}.
Recurrent and transient states
f_i: probability that, starting in state i, the process will eventually re-enter state i.
State i is recurrent if f_i = 1.
State i is transient if f_i < 1.
The probability that the process will be in state i for exactly n periods is f_i^{n-1}(1 - f_i), n ≥ 1.
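f_i can be estimated by simulation: start in state i, take one step, and record whether the chain ever re-enters i. For an interior state of the gambler's ruin chain each trial terminates, since the chain is eventually absorbed at 0 or M. A sketch with parameters of my own choosing:

```python
import random

def returns_to(i, M, p, rng):
    """From state i, does the gambler's ruin chain ever re-enter i?"""
    x = i + 1 if rng.random() < p else i - 1   # first step away from i
    while 0 < x < M:
        if x == i:
            return True
        x += 1 if rng.random() < p else -1
    return False                               # absorbed without returning

rng = random.Random(1)
i, M, p, trials = 3, 10, 0.5, 100_000
f_hat = sum(returns_to(i, M, p, rng) for _ in range(trials)) / trials
print(f"estimated f_{i} = {f_hat:.3f}")
```

For these parameters the exact value is f_3 = (1/2)(6/7) + (1/2)(2/3) = 16/21 ≈ 0.762 < 1, so state 3 is transient, and the estimate should land near that.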
Proof: Starting in i, the number of periods the process spends in state i is geometric with parameter 1 - f_i, so it is finite in expectation exactly when f_i < 1. Since the expected number of visits to i equals Σ_{n≥1} P^n_ii, state i is recurrent if and only if Σ_{n≥1} P^n_ii = ∞.
In a Markov chain with a finite number of states, not all states can be transient.
If state i is recurrent, and state i communicates with state j, then state j is recurrent.
Proof: Since i ↔ j, there exist m, n ≥ 0 with P^m_ij > 0 and P^n_ji > 0. For every s ≥ 0, P^{n+s+m}_jj ≥ P^n_ji P^s_ii P^m_ij, so Σ_s P^{n+s+m}_jj ≥ P^n_ji P^m_ij Σ_s P^s_ii = ∞ because i is recurrent. By the criterion above, state j is recurrent.
If state i is recurrent and state i communicates with state j, then state j is recurrent: recurrence is a class property.
Not all states can be transient (when there are finitely many states).
If state i is transient and state i communicates with state j, then state j is transient: transience is also a class property.
All states in an irreducible finite-state Markov chain are recurrent.
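For a finite-state chain, these class properties yield a purely structural test: a communicating class is recurrent if and only if it is closed (no transition leads out of it), and transient otherwise. This closedness criterion is a standard fact for finite chains, not stated on the slides; a sketch, reusing the class computation above on a 5-state matrix of my own with classes {0, 1}, {2, 3}, and {4}:

```python
import numpy as np

def classify_states(P):
    """Label each communicating class of a finite chain as
    'recurrent' (closed) or 'transient' (probability leaks out)."""
    P = np.asarray(P)
    n = len(P)
    reach = np.eye(n, dtype=bool) | (P > 0)
    for k in range(n):                         # transitive closure
        reach |= reach[:, [k]] & reach[[k], :]
    labels, seen = {}, set()
    for i in range(n):
        if i in seen:
            continue
        cls = sorted(j for j in range(n) if reach[i, j] and reach[j, i])
        closed = all(P[s, t] == 0 for s in cls for t in range(n) if t not in cls)
        labels[tuple(cls)] = "recurrent" if closed else "transient"
        seen.update(cls)
    return labels

P = [[0.5, 0.5, 0.0, 0.0, 0.0],
     [0.5, 0.5, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.5, 0.5, 0.0],
     [0.0, 0.0, 0.5, 0.5, 0.0],
     [0.25, 0.25, 0.0, 0.0, 0.5]]
print(classify_states(P))   # {(0, 1): 'recurrent', (2, 3): 'recurrent', (4,): 'transient'}
```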
All states communicate. Therefore all states are recurrent.
There are three classes: {0, 1}, {2, 3}, and {4}. The first two are recurrent and the third is transient.