
Lecture 3, Stephen G. Hall: Dynamic Modelling

The process of dynamic modelling has become such a central part of econometrics that it is worth treating it as a topic in its own right. Dynamic modelling is a largely intuitive and simple process, but it has become surrounded by a specialised language: DGP, parsimonious encompassing, conditioning, marginalising, and so on. This lecture attempts to explain this jargon and why it is useful.

Let $x_t$ be a vector of observations on all variables in period $t$, and let $X_{t-1} = (x_{t-1}, \ldots, x_0)$. The joint probability of the sample point $x_t$, the DGP, may then be stated as

$$D(x_t \mid X_{t-1}; \theta)$$

where $\theta$ is a vector of unknown parameters. The philosophy underlying this approach is that all models are misspecified; the issue is to understand the misspecification and to build useful and adequate models.
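
In Hendry's treatment, which this notation follows, the joint density of the whole sample is then built up by sequential factorisation, one period at a time:

$$D(X_T; \theta) = \prod_{t=1}^{T} D(x_t \mid X_{t-1}; \theta).$$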

The process of model reduction consists principally of the following four steps.

1. Marginalise the DGP. We select a set of 'variables of interest' and relegate all the rest to the set of variables which are of no interest.
2. Conditioning assumptions. Given the choice of variables of interest, we must now select a subset of these variables to be treated as endogenous.
3. Selection of functional form. The DGP is a completely general functional specification, and before any estimation can be undertaken a specific functional form must be assumed.
4. Estimation. The final stage involves assigning values to the unknown parameters of the system; this is the process of econometric estimation.

Given the general DGP, it is possible to represent the first two stages in the model reduction process by the following factorisation. Partition $x_t$ into the variables of interest, split into endogenous variables $y_t$ and conditioning variables $z_t$, and the remaining variables $w_t$; then

$$D(x_t \mid X_{t-1}; \theta) = D(w_t \mid y_t, z_t, X_{t-1}; \theta_1)\, D(y_t \mid z_t, X_{t-1}; \theta_2)\, D(z_t \mid X_{t-1}; \theta_3)$$

where marginalisation discards the first factor and conditioning works with the second. These steps are all crucial in the formulation of an adequate model. If the marginalisation is incorrect, then some important variable has been relegated to the set of variables of no interest. If the conditioning assumptions are incorrect, then we have falsely assumed that an endogenous variable is exogenous. If the functional form or estimation is invalid, then obvious bias results.

Exogeneity. Conditioning is basically about getting the determination of exogeneity right. There are three main concepts of exogeneity:

Weak exogeneity: Z is weakly exogenous if it is a function of only lagged Ys and the parameters which determine Y are independent of those determining Z.

Strong exogeneity: here, in addition, we assume that Z is not a function of lagged Y; this is weak exogeneity plus Granger non-causality.

Super exogeneity: here, in addition, we assume that the parameters which determine Y are invariant to changes in the parameters which determine Z.

Weak exogeneity is needed for estimation. Strong exogeneity is needed for forecasting. Super exogeneity is needed for simulation and policy analysis.

Before the development of cointegration, the dynamic modelling approach in practice began from a general statement of the DGP, suitably marginalised and conditioned: an autoregressive distributed lag (ADL) model in which $y_t$ is regressed on its own lags and on current and lagged values of the conditioning variables. This general form (the ADL) may be reparameterised into many different representations, which are all either equivalent to it or nested within it as restrictions, e.g. the Bewley transformation or the common factor restriction. A particularly useful form is the Error Correction Mechanism (ECM).
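
As a worked illustration in the simplest first-order case (an assumed example; the slide's own equations are not reproduced), the ADL(1,1)

$$y_t = \alpha + \beta_0 z_t + \beta_1 z_{t-1} + \gamma y_{t-1} + u_t$$

can be rewritten exactly, with no restrictions imposed, as the ECM

$$\Delta y_t = \alpha + \beta_0 \Delta z_t - (1-\gamma)\left( y_{t-1} - \frac{\beta_0 + \beta_1}{1-\gamma}\, z_{t-1} \right) + u_t,$$

where the bracketed term is the deviation from the long-run solution and $(1-\gamma)$ is the speed of adjustment.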

The basic idea in dynamic modelling is that the general model should be set up in such a way that it passes a broad range of tests; in particular, it should have constant parameters and a well-behaved error process. The model is then reduced or simplified, applying a broad range of tests at each stage, to try to find an acceptable parsimonious (minimum number of parameters) representation. This is the process of model reduction. In practice the real issue is to understand the tests used.

The general F test. The general test used for testing a group of restrictions is the F test, which tests any restricted model against a less restricted one:

$$F = \frac{(RSS_R - RSS_U)/m}{RSS_U/(T-k)} \sim F(m, T-k)$$

where $T$ is the sample size, $k$ the number of parameters in the unrestricted model, $m$ the number of restrictions, and $RSS_R$, $RSS_U$ the restricted and unrestricted residual sums of squares.
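
A minimal sketch of this test in Python using statsmodels, on simulated data with hypothetical variable names (the slide names no software, so the library choice is an assumption):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200
x1, x2 = rng.normal(size=T), rng.normal(size=T)
y = 1.0 + 0.5 * x1 + rng.normal(size=T)

# Unrestricted model: constant, x1, x2; restricted model drops x2 (m = 1)
res_u = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
res_r = sm.OLS(y, sm.add_constant(x1)).fit()

# F test of the restricted model against the unrestricted one
f_stat, p_value, m = res_u.compare_f_test(res_r)
print(f"F({int(m)}, {int(res_u.df_resid)}) = {f_stat:.3f}, p = {p_value:.3f}")
```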

The Lagrange multiplier test for serial correlation. If $\hat u_t$ is the residual from an OLS regression, regress $\hat u_t$ on the original regressors plus the lagged residuals $\hat u_{t-1}, \ldots, \hat u_{t-m}$. An LM test of the assumption that there is no serial correlation up to order $m$ is then given by $LM(m) = TR^2$ from this auxiliary regression.
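
In statsmodels this is available as the Breusch-Godfrey test; a sketch, reusing the fitted results object res_u from the snippet above:

```python
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

# LM(m) = T*R^2 from the auxiliary regression of the residuals on the
# original regressors and m lagged residuals; here m = 4
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(res_u, nlags=4)
print(f"LM(4) = {lm_stat:.3f}, p = {lm_pval:.3f}")
```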

Instrument validity test. When estimating an IV equation we should test that the instruments are weakly exogenous. This may be done by regressing the IV residuals on $W$, where $W$ is a set of variables which includes both the independent variables in the equation and the full set of instruments. The test statistic is $(T-k)R^2$ from this auxiliary regression, which is distributed as $\chi^2(r)$, where $r$ is the number of instruments minus the number of endogenous variables in the equation.
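
A hand-rolled sketch of this statistic, assuming u_hat holds the residuals from an IV estimation and W is the matrix described above with a constant included (both hypothetical names):

```python
import statsmodels.api as sm
from scipy import stats

def instrument_validity_test(u_hat, W, k, r):
    """(T - k) * R^2 from regressing the IV residuals on W, ~ chi2(r).

    k: number of parameters in the estimated equation
    r: number of instruments minus number of endogenous variables
    """
    T = len(u_hat)
    aux = sm.OLS(u_hat, W).fit()      # auxiliary regression on W
    stat = (T - k) * aux.rsquared
    return stat, stats.chi2.sf(stat, df=r)
```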

The Box-Pierce and Ljung-Box tests. These are based on the correlogram and give a general test of $m$th-order serial correlation. With $r_j$ the $j$th sample autocorrelation of the residuals, the Box-Pierce statistic is

$$Q = T \sum_{j=1}^{m} r_j^2$$

and the Ljung-Box refinement is

$$Q^* = T(T+2) \sum_{j=1}^{m} \frac{r_j^2}{T-j}.$$

Each is again a $\chi^2$ test with $m$ degrees of freedom.
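
statsmodels computes both variants in one call; a sketch on the residuals of the earlier fit:

```python
from statsmodels.stats.diagnostic import acorr_ljungbox

# Ljung-Box and Box-Pierce statistics for serial correlation up to lag 4
lb = acorr_ljungbox(res_u.resid, lags=[4], boxpierce=True, return_df=True)
print(lb[["lb_stat", "lb_pvalue", "bp_stat", "bp_pvalue"]])
```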

ARCH test. Regress the squared residuals on their own lags,

$$\hat u_t^2 = \alpha_0 + \alpha_1 \hat u_{t-1}^2 + \cdots + \alpha_m \hat u_{t-m}^2 + v_t.$$

$TR^2$ from this regression is a test of an autoregressive variance process of order $m$; it is again a $\chi^2$ test with $m$ degrees of freedom.
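
statsmodels implements this as Engle's ARCH-LM test; a sketch, again on the residuals of the earlier fit:

```python
from statsmodels.stats.diagnostic import het_arch

# TR^2 from regressing squared residuals on m = 4 of their own lags
lm_stat, lm_pval, f_stat, f_pval = het_arch(res_u.resid, nlags=4)
print(f"ARCH LM(4) = {lm_stat:.3f}, p = {lm_pval:.3f}")
```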

Parameter stability. The Chow test is a test of parameter constancy, a special form of F test. Splitting the sample into two subperiods and estimating the equation on each,

$$F = \frac{(RSS_P - RSS_1 - RSS_2)/k}{(RSS_1 + RSS_2)/(T-2k)}$$

where $RSS_P$ is the residual sum of squares from the pooled sample and $RSS_1$, $RSS_2$ those from the two subsamples; this is distributed as $F(k, T-2k)$.
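
A minimal sketch of this breakpoint version, assuming y and X are the full-sample dependent variable and regressor matrix and split is a hypothetical break date:

```python
import statsmodels.api as sm
from scipy import stats

def chow_test(y, X, split):
    """Chow breakpoint test, distributed F(k, T - 2k) under constancy."""
    T, k = X.shape
    rss_p = sm.OLS(y, X).fit().ssr                   # pooled sample
    rss_1 = sm.OLS(y[:split], X[:split]).fit().ssr   # first subsample
    rss_2 = sm.OLS(y[split:], X[split:]).fit().ssr   # second subsample
    f = ((rss_p - rss_1 - rss_2) / k) / ((rss_1 + rss_2) / (T - 2 * k))
    return f, stats.f.sf(f, k, T - 2 * k)
```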

In practice we tend to give greater weight to recursive estimation. This is a series of estimates in which the sample size is increased by one period at each estimation. If we define $\hat\beta_t$ as the estimate of the vector of parameters based on periods 1 to $t$, then we can define the recursive residuals as

$$v_t = y_t - x_t'\hat\beta_{t-1}$$

and standardise these for the degrees of freedom,

$$w_t = \frac{v_t}{\sqrt{1 + x_t'(X_{t-1}'X_{t-1})^{-1}x_t}}$$

(with $X_{t-1}$ here the matrix of regressors up to period $t-1$), so that they now have the same properties as the OLS residuals, except that they are not forced to sum to zero and they are much more sensitive to model misspecification.

Formal tests based on the recursive residuals include the CUSUM test,

$$W_t = \sum_{j=k+1}^{t} \frac{w_j}{s},$$

where $s$ is the full-sample estimate of the standard error, and the CUSUMSQ test,

$$S_t = \frac{\sum_{j=k+1}^{t} w_j^2}{\sum_{j=k+1}^{T} w_j^2}.$$

But in practice, plots of the recursive residuals and parameters are often much more informative. GRAPHS ARE CRUCIAL.
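
statsmodels returns the recursive residuals, recursive parameter estimates and the CUSUM path in one call; a sketch (the plotting choices are an assumption, not part of the slide):

```python
import matplotlib.pyplot as plt
from statsmodels.stats.diagnostic import recursive_olsresiduals

# Tuple of recursive residuals, recursive parameters, predictions,
# standardised/scaled residuals, CUSUM path and its confidence bounds
out = recursive_olsresiduals(res_u, skip=10)
rcusum = out[5]

plt.plot(rcusum)    # CUSUM of the standardised recursive residuals
plt.title("CUSUM")
plt.show()
```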

Testing functional form. The Ramsey RESET test checks for omitted higher-order polynomial terms by adding powers of the fitted values ($\hat y_t^2$, $\hat y_t^3$, ...) to the regression; again this is an LM test based on $TR^2$.
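
statsmodels provides this directly; a sketch testing up to cubic powers of the fitted values (the choice of power is an assumption):

```python
from statsmodels.stats.diagnostic import linear_reset

# RESET: add squared and cubed fitted values and test their significance
print(linear_reset(res_u, power=3, use_f=True))
```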

Testing for normality. Normality of the residuals is an important property in terms of justifying the whole inference procedure. The typical test is the Bera-Jarque test,

$$BJ = T\left(\frac{SK^2}{6} + \frac{EK^2}{24}\right) \sim \chi^2(2),$$

where $SK$ is the measure of skewness and $EK$ is the measure of excess kurtosis.
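
A one-line check via statsmodels (scipy.stats offers an equivalent function):

```python
from statsmodels.stats.stattools import jarque_bera

# Returns the statistic, its chi2(2) p-value, and sample skewness/kurtosis
jb_stat, jb_pval, skew, kurtosis = jarque_bera(res_u.resid)
print(f"Bera-Jarque = {jb_stat:.3f}, p = {jb_pval:.3f}")
```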

Encompassing. This is a general principle of testing which allows us to interpret and construct tests of one model against another. A model (M1) encompasses another model (M2) if it can explain the results of that model. Many standard tests (e.g. F or LM) can be interpreted as encompassing tests.

Parsimonious encompassing: a large model will always encompass a smaller nested model, which is not interesting. But if a smaller model contains all the important information of a larger model, this is important, and we then say that it parsimoniously encompasses the larger model.

Variance encompassing: asymptotically, a true model will always have a lower variance than a false model, so the finding of a smaller standard error is evidence of variance encompassing.

Why variance encompassing is better than using $R^2$. The $R^2$ statistic is not invariant to the way we write an equation, and if $y$ is trended it will generally have a very high $R^2$. Exactly the same equation, just reparameterised with exactly the same errors, can have a completely different $R^2$ because we have changed the dependent variable; the errors and the error variance are unchanged.
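
As an illustration (an assumed example, since the slide's own equations are not reproduced), the levels regression and its exact reparameterisation in differences,

$$y_t = \alpha + \beta x_t + \gamma y_{t-1} + u_t \quad\Longleftrightarrow\quad \Delta y_t = \alpha + \beta x_t + (\gamma - 1) y_{t-1} + u_t,$$

have identical errors $u_t$, but the first measures fit against the variance of the trended $y_t$ while the second measures it against the much smaller variance of $\Delta y_t$, so the two $R^2$ values are completely different.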

Example: Davidson, Hendry, Srba and Yeo (DHSY)

(A sequence of charts of the consumption and income data follows in the slides; only their captions survive.)

Note: strong seasonality, upward near-proportional trend.

Note: annual changes; consumption is much smoother than income.

Note: the APC is not constant but changes systematically.

Note: the seasons are different, so seasonality is important.

Note: notice the scale; changes in income are much bigger than changes in consumption.

Note: the seasonal pattern is changing through time.

Start by considering the best existing models and what is wrong with them:

Older Hendry model: very low long-run MPC.
LBS Ball model: low long-run MPC, no seasonality.
Wall model: no long run at all.

Difference model or ECM? The first-difference form is only valid under testable restrictions.

General to specific: tests on an invalid model are themselves invalid.

So start from a general model and nest down to a specific one. But the final model has no long run and fails to forecast.

So set up an ECM to impose the long-run proportionality, and try seasonally adjusted data. BUT both fail to forecast, so back to the beginning.

A possible missing variable: inflation may explain the movement in the APC.

So start again, and eventually arrive at two models: one without a long run, one in ECM form. The ECM passes the forecast test.

Final validation: out-of-sample forecasting performance.