Markov Regime Switching Models

Generalisation of the simple dummy-variables approach: regimes (called states) are allowed to recur over several periods.
In each period t the state is denoted by s_t. There can be m possible states: s_t = 1, ..., m.
Example: Dating Business Cycles

Let y_t denote the GDP growth rate.
Simple model with m = 2 (only 2 regimes):
  y_t = μ_1 + e_t  when s_t = 1
  y_t = μ_2 + e_t  when s_t = 2
  e_t i.i.d. N(0, σ²)
Notation using dummy variables:
  y_t = μ_1 D_1t + μ_2 (1 − D_1t) + e_t
  where D_1t = 1 when s_t = 1, and D_1t = 0 when s_t = 2.
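A minimal simulation sketch of this two-regime process, assuming illustrative parameter values (μ_1 = 0.8, μ_2 = −0.5, σ = 0.6, chosen for illustration and not taken from these slides); the state path is drawn at random here, since the Markov dynamics of s_t are only introduced on the next slide.

```python
import numpy as np

# Illustrative parameters (assumptions, not estimates from the slides)
mu1, mu2 = 0.8, -0.5    # regime means, e.g. expansion vs. recession growth
sigma = 0.6             # standard deviation of the shock e_t

rng = np.random.default_rng(0)
T = 200
s = rng.integers(1, 3, size=T)       # placeholder state path in {1, 2}
D1 = (s == 1).astype(float)          # dummy variable: D_1t = 1 when s_t = 1

# y_t = mu_1 * D_1t + mu_2 * (1 - D_1t) + e_t,   e_t ~ N(0, sigma^2)
y = mu1 * D1 + mu2 * (1 - D1) + sigma * rng.standard_normal(T)
```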
How does s_t evolve over time?

Model: Markov switching:
  P[s_t | s_1, s_2, ..., s_{t-1}] = P[s_t | s_{t-1}]
Probability of moving from state i to state j:
  p_ij = P[s_t = j | s_{t-1} = i]
Example with only 2 possible states: m = 2

Switching (or conditional) probabilities:
  Prob[s_t = 1 | s_{t-1} = 1] = p_11
  Prob[s_t = 2 | s_{t-1} = 1] = 1 − p_11
  Prob[s_t = 2 | s_{t-1} = 2] = p_22
  Prob[s_t = 1 | s_{t-1} = 2] = 1 − p_22
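A sketch of how a two-state chain with these switching probabilities can be simulated; the function name is ours, and the values p11 = 0.9, p22 = 0.7 are taken from the example on the next slide.

```python
import numpy as np

def simulate_states(p11, p22, T, seed=0):
    """Simulate s_t in {1, 2} with Prob[s_t = 1 | s_{t-1} = 1] = p11
    and Prob[s_t = 2 | s_{t-1} = 2] = p22."""
    rng = np.random.default_rng(seed)
    s = np.empty(T, dtype=int)
    s[0] = 1                            # arbitrary initial state
    for t in range(1, T):
        u = rng.random()
        if s[t - 1] == 1:
            s[t] = 1 if u < p11 else 2  # stay in state 1 with probability p11
        else:
            s[t] = 2 if u < p22 else 1  # stay in state 2 with probability p22
    return s

states = simulate_states(p11=0.9, p22=0.7, T=500)
```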
Unconditional probabilities:
  Prob[s_t = 1] = (1 − p_22) / (2 − p_11 − p_22)
  Prob[s_t = 2] = 1 − Prob[s_t = 1]
Examples:
  If p_11 = 0.9 and p_22 = 0.7, then Prob[s_t = 1] = 0.75
  If p_11 = 0.9 and p_22 = 0.9, then Prob[s_t = 1] = 0.5
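A quick check of the formula and the two examples above (the function name is ours):

```python
def unconditional_prob_state1(p11, p22):
    """Long-run Prob[s_t = 1] = (1 - p22) / (2 - p11 - p22)."""
    return (1 - p22) / (2 - p11 - p22)

print(unconditional_prob_state1(0.9, 0.7))  # 0.75
print(unconditional_prob_state1(0.9, 0.9))  # 0.5
```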
Estimation: maximize the log-likelihood (Hamilton filter).

Filtered probabilities:
  π_1,t = Prob[s_t = 1 | y_1, y_2, ..., y_t]
  π_2,t = Prob[s_t = 2 | y_1, y_2, ..., y_t] = 1 − π_1,t,  for t = 1, ..., T
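A sketch of the Hamilton filter recursion for this two-state Gaussian model; the function name and interface are ours. In practice the parameters (μ_1, μ_2, σ, p_11, p_22) are chosen to maximize the log-likelihood, which is the sum over t of the log of the denominator formed in the update step below.

```python
import numpy as np
from scipy.stats import norm

def hamilton_filter(y, mu1, mu2, sigma, p11, p22):
    """Return the filtered probabilities pi_1,t = Prob[s_t = 1 | y_1, ..., y_t]
    and the log-likelihood of the sample."""
    T = len(y)
    filt1 = np.empty(T)
    loglik = 0.0
    prob1 = (1 - p22) / (2 - p11 - p22)   # start from the unconditional probability
    for t in range(T):
        # prediction step: Prob[s_t = 1 | y_1, ..., y_{t-1}]
        pred1 = p11 * prob1 + (1 - p22) * (1 - prob1)
        pred2 = 1 - pred1
        # update step: weight each state by the Gaussian density of y_t in that state
        f1 = norm.pdf(y[t], loc=mu1, scale=sigma) * pred1
        f2 = norm.pdf(y[t], loc=mu2, scale=sigma) * pred2
        loglik += np.log(f1 + f2)
        prob1 = f1 / (f1 + f2)
        filt1[t] = prob1
    return filt1, loglik
```

The filtered probability of state 2 is simply 1 minus the returned series, matching π_2,t = 1 − π_1,t above.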
[Figure: GDP growth rate]
Estimated model:
  y_t = μ̂_1 + e_t  when s_t = 1
  y_t = μ̂_2 + e_t  when s_t = 2
  e_t i.i.d. N(0, σ̂²)
  p̂_11 = 0.95,  p̂_22 = 0.79
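Estimates of this kind can be obtained with the Markov-switching tools in statsmodels; the sketch below assumes y is a pandas Series of GDP growth rates already loaded in memory (the data source is not specified in these slides).

```python
import statsmodels.api as sm

# y: pandas Series of GDP growth rates (assumed to be loaded already)
mod = sm.tsa.MarkovRegression(y, k_regimes=2, trend="c", switching_variance=False)
res = mod.fit()

print(res.summary())                             # regime means, sigma^2, p_11, p_22
filtered = res.filtered_marginal_probabilities   # Prob[s_t = j | y_1, ..., y_t]
```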
[Figure: π_2,t, the filtered probability of being in state 2]