RECURSIVE MACROECONOMIC THEORY, LJUNGQVIST AND SARGENT, 3RD EDITION, CHAPTER 19: DYNAMIC STACKELBERG PROBLEMS. Taylor Collins.

BACKGROUND INFORMATION
- A new type of problem: optimal decision rules are no longer functions of the natural state variables alone
- A large agent and a competitive market
- A rational expectations equilibrium
- Recall the Stackelberg problem from game theory
- The cost of confirming past expectations

THE STACKELBERG PROBLEM
- Solving the problem: the general idea
- Defining the Stackelberg leader and follower
- Defining the variables:
  - Z_t is a vector of natural state variables
  - X_t is a vector of endogenous variables
  - U_t is a vector of government instruments
  - Y_t is the stacked vector of Z_t and X_t

THE STACKELBERG PROBLEM
- The government's one-period loss function is given by equation (1)
- The government wants to maximize (1) subject to an initial condition for Z_0, but not for X_0
- The government makes policy in light of the model, i.e. the law of motion (2)
- The government maximizes (1) by choosing {U_t} subject to (2)
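Equations (1) and (2) appear as images in the original slides and were lost in transcription. Assuming the standard linear-quadratic setup of Ljungqvist and Sargent, chapter 19, they take the form:

```latex
% (1) The government's objective: a discounted quadratic loss
-\sum_{t=0}^{\infty} \beta^{t} \left( Y_t' R\, Y_t + U_t' Q\, U_t \right)

% (2) The model (law of motion), with the natural states Z_t and the
% endogenous variables X_t stacked into Y_t
Y_{t+1} = A\, Y_t + B\, U_t,
\qquad
Y_t = \begin{bmatrix} Z_t \\ X_t \end{bmatrix}
```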

THE STACKELBERG PROBLEM
"The Stackelberg problem is to maximize (1) by choosing an X_0 and a sequence of decision rules, the time t component of which maps the time t history of the state Z_t into the time t decision of the Stackelberg leader."
- The Stackelberg leader commits to a sequence of decisions
- The optimal decision rule is history dependent
- Two sources of history dependence:
  - The government's ability to commit to a sequence of actions at time 0
  - The forward-looking behavior of the private sector
- Dynamics of the Lagrange multipliers:
  - The multipliers measure the cost today of honoring past government promises
  - They are set equal to zero at time zero and take nonzero values thereafter

SOLVING THE STACKELBERG PROBLEM
A four-step algorithm:
1. Solve an optimal linear regulator
2. Use the stabilizing properties of the shadow prices
3. Convert the implementation multipliers into state variables
4. Solve for X_0 and μ_x0

STEP 1: SOLVE AN O.L.R.
- Assume X_0 is given; this will be corrected for in step 3
- With this assumption, the problem takes the form of an optimal linear regulator
- The optimal value function has a quadratic form in Y, where P solves a matrix Riccati equation
- The linear regulator is subject to an initial Y_0 and the law of motion from (2)
- The Bellman equation is then (3)
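The value function and the Bellman equation (3) are images in the source slides. Under the standard linear-quadratic setup (loss Y'RY + U'QU, law of motion Y_{t+1} = AY_t + BU_t), they read:

```latex
% Quadratic value function guess for the linear regulator
V(Y) = -\,Y' P\, Y

% Bellman equation (3); the successor state is A Y + B U
-\,Y' P\, Y
= \max_{U} \Big\{ -\left( Y' R\, Y + U' Q\, U \right)
  - \beta \left( A Y + B U \right)' P \left( A Y + B U \right) \Big\}
```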

STEP 1: SOLVE AN O.L.R.
- Taking the first-order condition of the Bellman equation and solving for U gives the feedback rule (4)
- Substituting this back into the Bellman equation confirms that the rule ū described by (4) is optimal
- Rearranging gives the matrix Riccati equation
- Denote the solution of this equation by P*
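Step 1 can be made concrete in a short numerical sketch. Since the slides' equations are images, the matrix convention here (loss Y'RY + U'QU, law of motion Y_{t+1} = AY_t + BU_t, discount β) is an assumption, and the function name is illustrative:

```python
import numpy as np

def solve_olr(A, B, R, Q, beta=0.95, tol=1e-10, max_iter=10_000):
    """Step 1 sketch: iterate the matrix Riccati equation to a fixed
    point P*, then recover the optimal feedback rule U = -F Y from the
    first-order condition (4)."""
    P = np.zeros_like(A, dtype=float)
    for _ in range(max_iter):
        # Riccati map: P <- R + beta A'PA
        #                   - beta^2 A'PB (Q + beta B'PB)^{-1} B'PA
        G = Q + beta * B.T @ P @ B
        P_next = (R + beta * A.T @ P @ A
                  - beta**2 * A.T @ P @ B @ np.linalg.solve(G, B.T @ P @ A))
        if np.max(np.abs(P_next - P)) < tol:
            P = P_next
            break
        P = P_next
    # Feedback rule from the first-order condition: U = -F Y
    F = beta * np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)
    return P, F
```

For the scalar case A = B = R = Q = 1 and β = 0.95, the iteration converges to P* ≈ 1.604 and F ≈ 0.604.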

STEP 2: USE THE SHADOW PRICES
- Decode the information in P*
- Adapt a method from section 5.5 that solves problems of the form (1)-(2)
- Attach a sequence of Lagrange multipliers {μ_t} to the sequence of constraints (2) and form the Lagrangian
- Partition μ_t conformably with the partition of Y

STEP 2: USE THE SHADOW PRICES
- We want to maximize the Lagrangian L with respect to U_t and Y_{t+1}; the first-order condition for Y_{t+1} is (5)
- Solving the first-order condition for U_t and substituting into (2) gives the law of motion under the optimal instrument
- Combining this with (5), we can write the system in the stacked form (6)

STEP 2: USE THE SHADOW PRICES
- We now want to find a stabilizing solution to (6), i.e. one under which the discounted sum of squared states remains finite
- In section 5.5 it is shown that a stabilizing solution satisfies (7): μ_0 = P Y_0
- The solution then replicates itself over time, in the sense that μ_t = P Y_t
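One standard way to compute the stabilizing solution of step 2 is the invariant-subspace (eigenvector) method: stack the first-order conditions as L [Y; μ]_{t+1} = N [Y; μ]_t and read P off the stable eigenvectors. Since system (6) is an image in the slides, the exact layout of L and N below is an assumption:

```python
import numpy as np

def stabilizing_P(L, N):
    """Step 2 sketch: given the stacked system of (6),
        L [Y_{t+1}; mu_{t+1}] = N [Y_t; mu_t],
    find P such that mu_t = P Y_t selects the stable solution (7)."""
    M = np.linalg.solve(L, N)                # one-step transition for [Y; mu]
    n = M.shape[0] // 2                      # dim of Y (and of mu)
    eigvals, eigvecs = np.linalg.eig(M)
    stable = np.abs(eigvals) < 1.0           # eigenvalues inside the unit circle
    V = eigvecs[:, stable]                   # basis of the stable subspace
    V1, V2 = V[:n, :], V[n:, :]              # partition conformably with [Y; mu]
    return np.real(V2 @ np.linalg.inv(V1))
```

For the scalar example of step 1 (A = B = R = Q = 1, β = 0.95), taking L = [[1, β], [0, β]] and N = [[1, 0], [-1, 1]] from the first-order conditions reproduces the Riccati fixed point P* ≈ 1.604, as the theory predicts.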

STEP 3: CONVERT IMPLEMENTATION MULTIPLIERS
- We now confront the inconsistency of our assumption on Y_0: it forces the multiplier on X_0 to be a jump variable
- Focus on the partitions of Y and μ, and convert the multipliers into state variables
- Write the last n_x equations of (7) as μ_xt = P_21 Z_t + P_22 X_t, paying attention to the partition of P
- Solving this for X_t gives (8): X_t = -P_22^{-1} P_21 Z_t + P_22^{-1} μ_xt
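The step-3 conversion, solving the last n_x rows of μ_t = P Y_t for the jump variables X_t, is a single linear solve. The partition convention (the n_z natural states listed first in Y) is an assumption carried over from the slides:

```python
import numpy as np

def jump_from_multipliers(P, Z_t, mu_x, n_z):
    """Step 3 sketch: the last n_x equations of (7) read
        mu_x,t = P21 Z_t + P22 X_t,
    so solving for the jump variables gives (8):
        X_t = -P22^{-1} P21 Z_t + P22^{-1} mu_x,t."""
    P21 = P[n_z:, :n_z]                      # lower-left block of P
    P22 = P[n_z:, n_z:]                      # lower-right block of P
    return np.linalg.solve(P22, mu_x - P21 @ Z_t)
```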

STEP 3: CONVERT IMPLEMENTATION MULTIPLIERS
- Using these modifications together with (4) gives the closed-loop system (9), (9'), and (9'')
- We now have a complete description of the solution of the Stackelberg problem

STEP 4: SOLVE FOR X_0 AND μ_x0
- The value function satisfies V(Y_0) = -Y_0'P Y_0
- Now, choose X_0 by equating to zero the gradient of V(Y_0) with respect to X_0
- Then, recalling (8), this choice is equivalent to setting μ_x0 = 0
- Finally, the Stackelberg problem is solved by plugging these initial conditions into (9), (9'), and (9'') and iterating forward to obtain the equilibrium sequences
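The step-4 logic — impose μ_x0 = 0, back out X_0 from (8), then iterate — can be sketched as a small driver. A and B come from the law of motion (2); F and P are assumed to be the outputs of steps 1-2; the function name and calling convention are illustrative, not from the slides:

```python
import numpy as np

def stackelberg_path(A, B, F, P, Z0, n_z, T=50):
    """Step 4 sketch: impose mu_x0 = 0 (no past promises to honor at
    time zero), back out X0 = -P22^{-1} P21 Z0 from (8), then iterate
    the closed-loop law of motion Y_{t+1} = (A - B F) Y_t, tracking
    the instruments U_t = -F Y_t and the multipliers mu_t = P Y_t."""
    P21, P22 = P[n_z:, :n_z], P[n_z:, n_z:]
    X0 = -np.linalg.solve(P22, P21 @ Z0)     # optimal jump variables at t = 0
    Y = np.concatenate([Z0, X0])
    path = []
    for _ in range(T):
        path.append((Y.copy(), -F @ Y, P @ Y))   # (Y_t, U_t, mu_t)
        Y = (A - B @ F) @ Y
    return X0, path
```

By construction μ_x0 = P_21 Z_0 + P_22 X_0 = 0; in the full solution the multipliers typically take nonzero values after time zero, measuring the cost of honoring past promises.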

CONCLUSION
- Brief review: the setup and goal of the problem, and the four-step algorithm
- Questions, comments, or feedback