Discrete-time Markov chain (continuation)
Probability of absorption. If state j is an absorbing state, what is the probability of going from state i to state j? Let us denote this probability by $A_{ij}$. Finding these probabilities is not straightforward, especially when there are two or more absorbing states in a Markov chain.
Probability of absorption. What we can do is consider all the possibilities for the first transition and then, given that first transition, consider the conditional probability of absorption into state j:
$A_{ij} = \sum_{k=0}^{M} p_{ik} A_{kj}$
Probability of absorption. We can obtain the probabilities by solving the system of linear equations
$A_{ij} = \sum_{k=0}^{M} p_{ik} A_{kj}$ for $i = 0, 1, \ldots, M$,
subject to $A_{jj} = 1$, and $A_{kj} = 0$ if state $k$ is recurrent and $k \neq j$.
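As a minimal numerical sketch of solving this linear system, the snippet below uses a hypothetical 4-state chain (states 0 through 3, with states 0 and 3 absorbing); the transition matrix P and the state labels are assumptions made for illustration, not values taken from the slides.

```python
import numpy as np

# Hypothetical 4-state chain; states 0 and 3 are absorbing.
# The matrix P is an assumed example, not from the slides.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],   # state 0: absorbing
    [0.3, 0.0, 0.0, 0.7],   # state 1: transient
    [0.0, 0.4, 0.3, 0.3],   # state 2: transient
    [0.0, 0.0, 0.0, 1.0],   # state 3: absorbing
])

def absorption_probabilities(P, j):
    """Solve A_ij = sum_k p_ik A_kj with A_jj = 1 and A_kj = 0 for the
    other absorbing states k (the recurrent states of this chain)."""
    M = P.shape[0]
    absorbing = [k for k in range(M) if P[k, k] == 1.0]
    transient = [i for i in range(M) if i not in absorbing]
    # For a transient state i: A_ij = sum_{k transient} p_ik A_kj + p_ij,
    # i.e. (I - Q) a = b, where Q restricts P to the transient states.
    Q = P[np.ix_(transient, transient)]
    b = P[transient, j]
    a = np.linalg.solve(np.eye(len(transient)) - Q, b)
    A = np.zeros(M)
    A[j] = 1.0
    A[transient] = a
    return A

A_col3 = absorption_probabilities(P, j=3)
print(A_col3[1])   # A_13: probability of absorption into state 3 starting from state 1
```

Since this chain has only the two absorbing states, the absorption probabilities into states 0 and 3 from each transient state should sum to 1, which is a quick sanity check on the solution.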
Exercise: Find $A_{13}$. [Transition diagram: states 0, 1, and 3; edge labels include 1, 0.3, and p = 0.7.]
Ending Slides about Markov Chains
Time reversible Markov chains. Consider a stationary (i.e., it has been in operation for a long time) ergodic Markov chain having transition probabilities $p_{ij}$ and stationary probabilities $\pi_i$. Suppose that, starting at some time, we trace the sequence of states going backward in time.
Time reversible Markov chains. Starting at time n, the stochastic process $X_n, X_{n-1}, X_{n-2}, \ldots, X_0$ is also a Markov chain! Its transition probabilities are $q_{ij} = \pi_j p_{ji} / \pi_i$. If $q_{ij} = p_{ij}$ for all $i, j$, then the Markov chain is time reversible. Equivalently, $\pi_j p_{ji} = \pi_i p_{ij}$, which means the rate at which the process goes from i to j equals the rate at which it goes from j to i.
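As a short sketch of this condition, the snippet below uses an assumed 3-state birth-death chain (the matrix P is illustrative, not from the slides), computes its stationary distribution, forms the reversed-chain probabilities $q_{ij} = \pi_j p_{ji} / \pi_i$, and checks detailed balance.

```python
import numpy as np

# Assumed 3-state birth-death chain, used only to illustrate the check;
# the transition probabilities are not taken from the slides.
P = np.array([
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
])

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalized.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.isclose(eigvals, 1.0)]).ravel()
pi = pi / pi.sum()

# Reversed-chain transition probabilities q_ij = pi_j * p_ji / pi_i.
Q = (P.T * pi) / pi[:, None]

# Time reversible iff q_ij = p_ij for all i, j,
# equivalently detailed balance: pi_i * p_ij = pi_j * p_ji.
print("pi =", pi)
print("q_ij == p_ij for all i, j:", np.allclose(Q, P))
print("detailed balance holds:", np.allclose(pi[:, None] * P, (pi[:, None] * P).T))
```

A birth-death chain like this one satisfies detailed balance, so both checks print True; a chain that cycles through its states more often in one direction than the other would fail them.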
For Reporting: Hidden Markov Chains applied to data analytics/mining; Markov Chain Monte Carlo in data fitting.