Economic Modelling: PG1 Markov Chain and Its Use in Economic Modelling
- Markov process
- Transition matrix
- Convergence
- Likelihood function
- Expected values and policy decisions

Economic Modelling: PG2
A stochastic process \{x_t\} has the Markov property if, for all k \ge 1 and all t,
\[
\Pr(x_{t+1} \mid x_t, x_{t-1}, \ldots, x_{t-k}) = \Pr(x_{t+1} \mid x_t).
\]
A Markov process is characterised by three elements:
- an n-dimensional state space with states e_1, \ldots, e_n;
- an n \times n transition matrix P, with P_{ij} = \Pr(x_{t+1} = e_j \mid x_t = e_i);
- an initial distribution \pi_0, with \pi_{0i} = \Pr(x_0 = e_i).

Economic Modelling: PG3 A Typical Transition Matrix
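For instance (the numbers here are assumed for illustration), a three-state matrix in which row i gives the probabilities of moving from state i to each state j in one period, so each row sums to one:
\[
P = \begin{pmatrix} 0.90 & 0.07 & 0.03 \\ 0.10 & 0.80 & 0.10 \\ 0.05 & 0.15 & 0.80 \end{pmatrix},
\qquad \sum_j P_{ij} = 1 \ \text{for each } i.
\]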

Economic Modelling: PG4 Chapman-Kolmogorov Equations
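In the standard statement of these equations, the multi-step transition probabilities compose as
\[
P^{(m+n)}_{ij} = \sum_k P^{(m)}_{ik} P^{(n)}_{kj},
\qquad \text{equivalently} \qquad
P^{(m+n)} = P^{(m)} P^{(n)},
\]
so the n-step transition matrix is simply the n-th power of the one-step matrix, P^{(n)} = P^n.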

Economic Modelling: PG5 Likelihood Function for a Markov Chain
Two uses of the likelihood function:
- to study alternative histories of a Markov chain
- to estimate the parameters of the chain
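For an observed history (x_0, x_1, \ldots, x_T), the likelihood is the probability of the initial state times the one-step transition probabilities along the path:
\[
L = \pi_0(x_0) \prod_{t=0}^{T-1} P(x_t, x_{t+1}) = \pi_0(x_0) \prod_{i,j} P_{ij}^{\,n_{ij}},
\]
where n_{ij} counts the observed transitions from state i to state j. Maximising L over P gives the natural estimator \hat{P}_{ij} = n_{ij} / \sum_k n_{ik}.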

Economic Modelling: PG6 Convergence of Markov Process with Finite States
Reference: Stokey and Lucas (page 321).
A Markov process converges when each element of the transition matrix P^n approaches a limit as n grows; the limiting process is stationary in this example.
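In the convergent case the rows of P^n approach a common stationary distribution \pi, which satisfies
\[
\pi' P = \pi', \qquad \sum_i \pi_i = 1,
\]
so that \lim_{n \to \infty} P^n = \mathbf{1}\pi', where \mathbf{1} is a column vector of ones.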

Economic Modelling: PG7 Recurrent, Absorbing, or Transient States in a Markov Chain
A state is recurrent if, whenever the process leaves it, the process re-enters it with probability one; it is absorbing if, once entered, the process stays in it forever; it is transient if there is a positive probability that the process never returns after leaving. Here S1 is recurrent, while S2 and S3 are transient.
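In symbols, let f_i denote the probability that the process ever returns to state i after leaving it:
\[
f_i = \Pr(x_t = i \ \text{for some } t \ge 1 \mid x_0 = i);
\qquad
\text{recurrent: } f_i = 1, \quad
\text{transient: } f_i < 1, \quad
\text{absorbing: } P_{ii} = 1.
\]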

Economic Modelling: PG8 Converging and Non-converging Sequences
Powers of a transition matrix need not converge: for a periodic chain, the sequences of even and odd powers of P approach different limits.
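The simplest example of non-convergence is the two-state periodic chain
\[
P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
\qquad
P^{2k} = I, \qquad P^{2k+1} = P,
\]
so P^n oscillates between two matrices and has no limit, even though the chain still has the stationary distribution \pi = (1/2, 1/2).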

Economic Modelling: PG9 One Example of a Markov Chain
Stochastic life cycle optimisation model (preliminary version of Bhattarai and Perroni).
[Figure: transition diagram between high-income and low-income states, labelled with the probability of the recurrent state, the probability of the transient state, and the probability of being in an ambiguous state.]

Economic Modelling: PG10 Impact of Risk Aversion and Ambiguity on Expected Wealth with a Markov Process

Economic Modelling: PG11 Markov Decision Problem (see Ross, p. 187).
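In a Markov decision problem the transition probabilities depend on an action a chosen in each state, and an optimal policy solves the Bellman equation, here written in generic discounted-reward notation (not necessarily Ross's):
\[
V(i) = \max_a \Big\{ R(i,a) + \beta \sum_j P_{ij}(a) V(j) \Big\},
\]
where R(i,a) is the one-period reward and \beta \in (0,1) is the discount factor.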

Economic Modelling: PG12 Use of a Markov Chain in the Analysis of Duopoly
Sargent and Ljungqvist (p. 133).
A Markov perfect equilibrium is a pair of value functions and a pair of policy functions for i = 1, 2 that satisfy the Bellman equation below. The equilibrium is computed by backward induction, and the optimising behaviour of the firms by iterating forward over all conceivable future states.
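A sketch of that Bellman equation (the notation is assumed, following the general form of the duopoly example in Ljungqvist and Sargent): taking the other firm's policy f_{-i} as given, each firm i = 1, 2 solves
\[
v_i(q_i, q_{-i}) = \max_{\hat q_i} \big\{ \pi_i(q_i, q_{-i}, \hat q_i) + \beta\, v_i(\hat q_i, \hat q_{-i}) \big\},
\qquad \hat q_{-i} = f_{-i}(q_{-i}, q_i),
\]
and a Markov perfect equilibrium requires that f_i(q_i, q_{-i}) attain the maximum on the right-hand side of firm i's equation, for each i.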

Economic Modelling: PG13 Other Applications of Markov Processes
- Regime-switch analysis in economic time series (Hamilton, pp. ; Harvey, p. 285)
- Industry investment under uncertainty (SL, chapter 10)
- Stochastic dynamic programming (SL, chapters 8-9)
- Weak and strong convergence analysis (SL, chapters 11-13)
- Arrow securities (Ljungqvist and Sargent, chapter 7)
- Life cycle consumption and saving: an example
- Precautionary saving

Economic Modelling: PG14 References:

Economic Modelling: PG15 Markov Chain Example in GAMS
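A minimal sketch of such a GAMS program (the set names, transition probabilities, and horizon are assumed for illustration, not taken from the original model): it applies the forward recursion \pi_{t+1}' = \pi_t' P to a two-state chain starting from a degenerate initial distribution.

* two-state Markov chain: iterate the distribution pi forward
set s   states         / low, high /
    t   time periods   / t1*t20 /;
alias (s, ss);

table P(s,ss)  one-step transition probabilities
          low     high
   low    0.90    0.10
   high   0.30    0.70 ;

parameter pi(t,s)  probability of being in state s at period t;
pi('t1','low') = 1;

* Chapman-Kolmogorov forward recursion
loop(t$(ord(t) lt card(t)),
   pi(t+1,ss) = sum(s, pi(t,s)*P(s,ss));
);

display pi;

With these numbers the reported distribution settles near the stationary values (0.75, 0.25) within a few periods.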

Economic Modelling: PG16 Markov Chain Example in GAMS

Economic Modelling: PG17 Markov Chain Example in GAMS: Model Equations

variables
   u(t,z)
   c(t,z)
   v(t,z)
   w(t,z)
   obj;

equations
   defu(t,z)
   defc(t,z)
   defwl(t,z)
   defwn(t,z)
   defwt(t,z)
   dobj;
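One plausible way the declared equations might be filled in (a sketch only: the functional forms, the parameters beta, r, y, and pr, and the reading of u as utility, c as consumption, and w as wealth are assumptions based on the life cycle context, not the original model):

* assumed parameters: beta discount factor, r interest rate,
* y(t,z) endowment, pr(t,z) Markov state probabilities
defu(t,z)..  u(t,z) =e= log(c(t,z));
defwn(t,z)$(ord(t) lt card(t))..
   w(t+1,z) =e= (1+r)*w(t,z) + y(t,z) - c(t,z);
dobj..       obj =e= sum((t,z), pr(t,z)*beta**(ord(t)-1)*u(t,z));

Here pr(t,z), the probability of being in state z at time t implied by the Markov chain, serves as the weight on state-contingent utility.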

Economic Modelling: PG18 Calculation of Weights Among Various States
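A natural formalisation of these weights (notation assumed, consistent with the slides above): the state probabilities obtained by iterating the chain forward weight the state-contingent outcomes in expected values,
\[
\pi_t' = \pi_0' P^t,
\qquad
E[c_t] = \sum_z \pi_t(z)\, c(t,z),
\qquad
E\Big[\sum_t \beta^t u_t\Big] = \sum_t \beta^t \sum_z \pi_t(z)\, u(t,z).
\]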