Markov Regime Switching Models
– Generalisation of the simple dummy variables approach
– Allows regimes (called states) to occur over several periods of time
– In each period t the state is denoted by s_t. There can be m possible states: s_t = 1, ..., m

Markov Regime Switching Models
Example: Dating Business Cycles
– Let y_t denote the GDP growth rate
– Simple model with m = 2 (only 2 regimes):
  y_t = μ_1 + e_t when s_t = 1
  y_t = μ_2 + e_t when s_t = 2
  e_t i.i.d. N(0, σ²)
– Notation using dummy variables:
  y_t = μ_1 D_1t + μ_2 (1 - D_1t) + e_t
  where D_1t = 1 when s_t = 1, D_1t = 0 when s_t = 2
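For concreteness, the dummy-variable form of the two-regime model can be simulated directly once a state path is given. The sketch below is illustrative only: numpy is assumed, the parameter values are hypothetical (not estimates from the slides), and the state path is a placeholder i.i.d. draw, since the law of motion for s_t is only introduced on the next slide.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values (for illustration only, not the slides' estimates)
mu1, mu2 = 3.0, -1.0   # mean growth in state 1 (expansion) and state 2 (recession)
sigma = 2.0            # common standard deviation of e_t
T = 200

# Placeholder state path: i.i.d. draws, ignoring the Markov dynamics that the
# next slide introduces (there, states become persistent over time).
s = np.where(rng.uniform(size=T) < 0.75, 1, 2)

# Dummy-variable form: y_t = mu_1 * D_1t + mu_2 * (1 - D_1t) + e_t
D1 = (s == 1).astype(float)
e = rng.normal(0.0, sigma, size=T)
y = mu1 * D1 + mu2 * (1 - D1) + e
```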

Markov Regime Switching Models
How does s_t evolve over time?
– Model: Markov switching:
  P[s_t | s_1, s_2, ..., s_t-1] = P[s_t | s_t-1]
– Probability of moving from state i to state j:
  p_ij = P[s_t = j | s_t-1 = i]
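The transition probabilities p_ij are conveniently collected in an m × m matrix whose rows sum to one. A minimal sketch with hypothetical values (states coded 0, ..., m-1 rather than 1, ..., m):

```python
import numpy as np

rng = np.random.default_rng(1)

# Transition matrix: P[i, j] = Prob[s_t = j | s_{t-1} = i]; each row sums to one.
# Hypothetical values for m = 3 states, just to show the general case.
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
assert np.allclose(P.sum(axis=1), 1.0)

def next_state(current, P, rng):
    """Draw s_t given s_{t-1} = current, using row `current` of P."""
    return rng.choice(P.shape[0], p=P[current])

s_next = next_state(0, P, rng)   # one step of the chain, starting from state 0
```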

Markov Regime Switching Models
Example with only 2 possible states: m = 2
Switching (or conditional) probabilities:
  Prob[s_t = 1 | s_t-1 = 1] = p_11
  Prob[s_t = 2 | s_t-1 = 1] = 1 - p_11
  Prob[s_t = 2 | s_t-1 = 2] = p_22
  Prob[s_t = 1 | s_t-1 = 2] = 1 - p_22
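With m = 2 the chain is fully described by p_11 and p_22. A quick simulation, using the values p_11 = 0.9 and p_22 = 0.7 from the example on the next slide, shows the chain spending persistent spells in each regime and, in the long run, roughly 75% of the time in state 1:

```python
import numpy as np

rng = np.random.default_rng(2)

p11, p22 = 0.90, 0.70          # values from the unconditional-probability example
P = np.array([[p11, 1 - p11],  # rows: state today; columns: state tomorrow
              [1 - p22, p22]]) # state 1 is index 0, state 2 is index 1

T = 100_000
s = np.empty(T, dtype=int)
s[0] = 0
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])

# Fraction of time spent in state 1; close to the unconditional probability
# (1 - p22) / (2 - p11 - p22) = 0.75 derived on the next slide.
print((s == 0).mean())
```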

Markov Regime Switching Models
Unconditional probabilities:
  Prob[s_t = 1] = (1 - p_22) / (2 - p_11 - p_22)
  Prob[s_t = 2] = 1 - Prob[s_t = 1]
Examples:
  If p_11 = 0.9 and p_22 = 0.7, then Prob[s_t = 1] = 0.75
  If p_11 = 0.9 and p_22 = 0.9, then Prob[s_t = 1] = 0.5
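Both numerical examples can be checked in a few lines, either from the closed-form expression or as the stationary distribution of the transition matrix (a sketch, numpy assumed):

```python
import numpy as np

def uncond_prob_state1(p11, p22):
    """Unconditional probability of state 1 in a 2-state Markov chain."""
    return (1 - p22) / (2 - p11 - p22)

print(uncond_prob_state1(0.9, 0.7))  # 0.75
print(uncond_prob_state1(0.9, 0.9))  # 0.5

# Equivalent check: the stationary distribution pi solves pi = pi @ P.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))   # eigenvector for the unit eigenvalue
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()
print(pi)                                # approximately [0.75, 0.25]
```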

Markov Regime Switching Models
Estimation: maximize the log-likelihood
– Hamilton filter: filtered probabilities
  ξ_1,t = Prob[s_t = 1 | y_1, y_2, ..., y_t]
  ξ_2,t = Prob[s_t = 2 | y_1, y_2, ..., y_t] = 1 - ξ_1,t
  for t = 1, ..., T
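A bare-bones version of the Hamilton filter for the two-state model in the mean can be written directly from these definitions. This is a sketch under the slides' assumptions (Gaussian errors, common variance), not production code; the function name and arguments are illustrative.

```python
import numpy as np
from scipy.stats import norm

def hamilton_filter(y, mu1, mu2, sigma, p11, p22):
    """Filtered probabilities xi_{j,t} = Prob[s_t = j | y_1, ..., y_t] for a
    2-state Markov switching model in the mean with common variance."""
    P = np.array([[p11, 1 - p11],
                  [1 - p22, p22]])
    pi1 = (1 - p22) / (2 - p11 - p22)     # unconditional Prob[s = 1]
    xi = np.array([pi1, 1 - pi1])         # starting point of the recursion
    out = np.empty((len(y), 2))
    loglik = 0.0
    for t, yt in enumerate(y):
        pred = P.T @ xi                   # Prob[s_t = j | y_1, ..., y_{t-1}]
        dens = norm.pdf(yt, loc=[mu1, mu2], scale=sigma)
        joint = pred * dens               # f(y_t, s_t = j | y_1, ..., y_{t-1})
        f_t = joint.sum()                 # f(y_t | y_1, ..., y_{t-1})
        xi = joint / f_t                  # filtered: Prob[s_t = j | y_1, ..., y_t]
        out[t] = xi
        loglik += np.log(f_t)
    return out, loglik
```

Maximum likelihood estimation then amounts to maximizing the returned log-likelihood over (μ_1, μ_2, σ, p_11, p_22) with a numerical optimizer.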

Markov Regime Switching Models
Figure: GDP growth rate

Markov Regime Switching Models
Estimated model:
  y_t = μ̂_1 + e_t when s_t = 1
  y_t = μ̂_2 + e_t when s_t = 2
  e_t i.i.d. N(0, σ̂²)
  p̂_11 = 0.95, p̂_22 = 0.79
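In practice, estimates like these are usually obtained with an off-the-shelf routine. The sketch below uses statsmodels' MarkovRegression with two regimes in the mean and a common variance; statsmodels must be installed, and the data here are a hypothetical placeholder standing in for actual GDP growth rates. The regime labels in the output are interchangeable, so regime 0 there need not correspond to "state 1" on the slides.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
gdp_growth = rng.normal(1.0, 2.0, size=200)   # placeholder data; use real growth rates

# Two regimes in the mean, common error variance, as in the slides' specification
mod = sm.tsa.MarkovRegression(gdp_growth, k_regimes=2)
res = mod.fit()

print(res.summary())                           # regime means, variance, transition probs
print(res.filtered_marginal_probabilities)     # filtered Prob[s_t = j | y_1, ..., y_t]
```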

Markov Regime Switching Models  2, t : filtered probability of being in state 2