Discrete-Time Markov Chain (continuation)


1 Discrete-Time Markov Chain (continuation)

2 CHAPMAN-KOLMOGOROV EQUATIONS IN MATRIX FORM
Starting from the transition matrix P, we have

P^(2) = P P = P^2
P^(3) = P P^(2) = P^3
P^(4) = P P^(3) = P^4

In general, P^(n) = P P^(n-1) = P^n.

3 CHAPMAN-KOLMOGOROV EQUATIONS IN MATRIX FORM
Recall our example with

P =
| 0    0.7   0    0.3  |
| 0.2  0     0.8  0    |
| 0.9  0     0.1  0    |
| 0    0.05  0    0.95 |

Refer to the MS Excel file.
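The spreadsheet computation can also be reproduced in a few lines of NumPy (a sketch; the variable names are illustrative, not part of the original slides):

```python
import numpy as np

# Transition matrix from the example (each row sums to 1)
P = np.array([
    [0.0, 0.70, 0.0, 0.30],
    [0.2, 0.00, 0.8, 0.00],
    [0.9, 0.00, 0.1, 0.00],
    [0.0, 0.05, 0.0, 0.95],
])

# Chapman-Kolmogorov in matrix form: P^(n) = P P^(n-1) = P^n
P2 = P @ P                          # two-step transition probabilities
P3 = np.linalg.matrix_power(P, 3)   # three-step, same as P @ P @ P

print(P2.round(3))
```

Each entry of `P2` is a two-step transition probability; for instance row 1 times column 1 gives p_11^(2) = 0.7 × 0.2 = 0.14.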

4 Unconditional state probabilities
If we start with X_0 = 2, what are the probabilities P{X_10 = 1}, P{X_10 = 2}, P{X_10 = 3}, and P{X_10 = 4} after 10 time periods?

5 Unconditional state probabilities
After 10 matrix multiplications:

P^(10) ≈
| 0.12  0.17  0.12  0.60 |
| 0.16  0.12  0.17  0.55 |
| 0.19  0.13  0.11  0.56 |
| 0.09  0.10  0.08  0.73 |
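Rather than multiplying by hand ten times, the ten-step matrix can be checked directly (a sketch using NumPy's built-in matrix power):

```python
import numpy as np

P = np.array([
    [0.0, 0.70, 0.0, 0.30],
    [0.2, 0.00, 0.8, 0.00],
    [0.9, 0.00, 0.1, 0.00],
    [0.0, 0.05, 0.0, 0.95],
])

# Ten-step transition probabilities: P^(10) = P multiplied by itself 10 times
P10 = np.linalg.matrix_power(P, 10)
print(P10.round(2))
```

The printed matrix should agree with the rounded values on the slide.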

6 Unconditional state probabilities
If we start with X_0 = 2, what are the probabilities P{X_10 = 1}, P{X_10 = 2}, P{X_10 = 3}, and P{X_10 = 4}?

( 0  1  0  0 ) ×
| 0.12  0.17  0.12  0.60 |
| 0.16  0.12  0.17  0.55 |
| 0.19  0.13  0.11  0.56 |
| 0.09  0.10  0.08  0.73 |

7 Unconditional state probabilities
If we start with X_0 = 2, the probabilities are P{X_10 = 1} = 0.16, P{X_10 = 2} = 0.12, P{X_10 = 3} = 0.17, and P{X_10 = 4} = 0.55.
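The same answer comes from multiplying the initial distribution (all mass on state 2) by P^(10), which simply picks out row 2 of that matrix (a sketch):

```python
import numpy as np

P = np.array([
    [0.0, 0.70, 0.0, 0.30],
    [0.2, 0.00, 0.8, 0.00],
    [0.9, 0.00, 0.1, 0.00],
    [0.0, 0.05, 0.0, 0.95],
])

# Start in state 2 with certainty: initial row vector (0, 1, 0, 0)
alpha = np.array([0.0, 1.0, 0.0, 0.0])

# Unconditional distribution of X_10: alpha times P^(10)
dist10 = alpha @ np.linalg.matrix_power(P, 10)
print(dist10.round(2))
```

Because `alpha` is a unit vector on state 2, `dist10` equals the second row of P^(10).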

8 STEADY-STATE PROBABILITIES
After 50 matrix multiplications:

P^(50) ≈
| 0.11  0.11  0.10  0.67 |
| 0.11  0.11  0.10  0.67 |
| 0.11  0.11  0.10  0.67 |
| 0.11  0.11  0.10  0.67 |

What is the meaning of this?
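The convergence is easy to verify numerically: after 50 steps the rows of P^n are (to several decimals) identical, so the starting state no longer matters (a sketch; the spread measure is just one way to quantify this):

```python
import numpy as np

P = np.array([
    [0.0, 0.70, 0.0, 0.30],
    [0.2, 0.00, 0.8, 0.00],
    [0.9, 0.00, 0.1, 0.00],
    [0.0, 0.05, 0.0, 0.95],
])

# After many steps the rows of P^n become (nearly) identical
P50 = np.linalg.matrix_power(P, 50)
print(P50.round(2))

# Largest difference between rows, column by column
row_spread = P50.max(axis=0) - P50.min(axis=0)
print(row_spread.max())
```

A tiny `row_spread` confirms that every row has converged to the same limiting distribution.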

9 STEADY-STATE PROBABILITIES
𝒑 π’ŠπŸ (πŸ“πŸŽ) β‰ˆπŸŽ.𝟏𝟏 𝒑 π’ŠπŸ (πŸ“πŸŽ) β‰ˆπŸŽ.𝟏𝟏 𝒑 π’ŠπŸ‘ (πŸ“πŸŽ) β‰ˆπŸŽ.𝟏𝟎 𝒑 π’ŠπŸ’ (πŸ“πŸŽ) β‰ˆπŸŽ.πŸ”πŸ• This is called the steady-state probabilities of the Markov Chain.

10 STEADY-STATE PROBABILITIES
𝒑 π’ŠπŸ (πŸ“πŸŽ) β‰ˆπŸŽ.𝟏𝟏 𝒑 π’ŠπŸ (πŸ“πŸŽ) β‰ˆπŸŽ.𝟏𝟏 𝒑 π’ŠπŸ‘ (πŸ“πŸŽ) β‰ˆπŸŽ.𝟏𝟎 𝒑 π’ŠπŸ’ (πŸ“πŸŽ) β‰ˆπŸŽ.πŸ”πŸ• Note: Do not be confused with steady-state probabilities and stationary transition probabilities.

11 STEADY-STATE PROBABILITIES
𝒑 π’ŠπŸ (πŸ“πŸŽ) β‰ˆπŸŽ.𝟏𝟏 𝒑 π’ŠπŸ (πŸ“πŸŽ) β‰ˆπŸŽ.𝟏𝟏 𝒑 π’ŠπŸ‘ (πŸ“πŸŽ) β‰ˆπŸŽ.𝟏𝟎 𝒑 π’ŠπŸ’ (πŸ“πŸŽ) β‰ˆπŸŽ.πŸ”πŸ• Can we derive them directly without doing too many matrix multiplications?

12 STEADY-STATE PROBABILITIES
In some cases, we can use the concept of fixed-point iteration:

π = πP

where π = ( π_0  π_1  π_2  …  π_M ) is the vector of steady-state probabilities.
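The fixed-point idea can be sketched as power iteration: start from any distribution and keep multiplying by P until the vector stops changing (the starting point and tolerance below are illustrative choices, not from the slides):

```python
import numpy as np

P = np.array([
    [0.0, 0.70, 0.0, 0.30],
    [0.2, 0.00, 0.8, 0.00],
    [0.9, 0.00, 0.1, 0.00],
    [0.0, 0.05, 0.0, 0.95],
])

# Fixed-point iteration for pi = pi P:
# start from an arbitrary distribution and multiply until stable
pi = np.array([0.25, 0.25, 0.25, 0.25])
for _ in range(1000):
    nxt = pi @ P
    if np.abs(nxt - pi).max() < 1e-12:
        break
    pi = nxt

print(pi.round(4))
```

Because each multiplication by the stochastic matrix P preserves the total probability, `pi` stays a valid distribution throughout the iteration.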

13 STEADY-STATE PROBABILITIES
Recall again our example:

( π_1  π_2  π_3  π_4 ) = ( π_1  π_2  π_3  π_4 ) ×
| 0    0.7   0    0.3  |
| 0.2  0     0.8  0    |
| 0.9  0     0.1  0    |
| 0    0.05  0    0.95 |

We also include in our equations: π_1 + π_2 + π_3 + π_4 = 1.

14 STEADY-STATE PROBABILITIES
Solving it will result in:

π_1 ≈ 0.1125,  π_2 ≈ 0.1125,  π_3 ≈ 0.1,  π_4 ≈ 0.675
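Alternatively, the linear system can be solved directly. Since π = πP together with the balance equations is redundant (one equation is a linear combination of the others), a common trick is to replace one of them with the normalization π_1 + π_2 + π_3 + π_4 = 1 (a sketch; which equation to replace is an arbitrary choice):

```python
import numpy as np

P = np.array([
    [0.0, 0.70, 0.0, 0.30],
    [0.2, 0.00, 0.8, 0.00],
    [0.9, 0.00, 0.1, 0.00],
    [0.0, 0.05, 0.0, 0.95],
])
n = P.shape[0]

# pi = pi P  is equivalent to  pi (I - P) = 0; transpose to A x = b form
A = (np.eye(n) - P).T
A[-1, :] = 1.0          # replace the last equation with sum(pi) = 1
b = np.zeros(n)
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(pi)
```

The solver returns the exact steady-state vector, matching the values on the slide.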

