Tutorial 2: Autocorrelation

Presentation transcript:

Tutorial 2: Autocorrelation
Matthew Robson, University of York, Econometrics 2

Autocorrelation
Autocorrelation arises when the errors in different time periods are correlated, i.e. when $\mathrm{cov}(u_i, u_j) \neq 0$ for $i \neq j$. It may be positive or negative (Gujarati and Porter, 2009).
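To make the definition concrete, here is a minimal sketch (not part of the tutorial) that simulates AR(1) errors $u_t = \rho u_{t-1} + \varepsilon_t$: with $\rho > 0$ the errors move in smooth runs (positive autocorrelation), while with $\rho < 0$ they tend to flip sign from period to period (negative autocorrelation). Python is used here purely for illustration; the tutorial itself works in PC-GIVE.

```python
# A minimal sketch (not part of the tutorial): simulate AR(1) errors
# u_t = rho * u_{t-1} + e_t to see what positive and negative
# autocorrelation look like.
import numpy as np

def ar1_errors(rho, n, rng):
    """Generate n AR(1) errors with autocorrelation coefficient rho."""
    e = rng.normal(size=n)
    u = np.zeros(n)
    for t in range(1, n):
        u[t] = rho * u[t - 1] + e[t]
    return u

rng = np.random.default_rng(0)
n = 144  # same length as the tutorial's quarterly sample, 1967q1-2002q4

u_pos = ar1_errors(0.8, n, rng)   # positive autocorrelation: smooth runs
u_neg = ar1_errors(-0.8, n, rng)  # negative autocorrelation: frequent sign flips

for name, u in [("rho = +0.8", u_pos), ("rho = -0.8", u_neg)]:
    r1 = np.corrcoef(u[:-1], u[1:])[0, 1]  # first-order sample autocorrelation
    print(f"{name}: first-order autocorrelation of u ~ {r1:.2f}")
```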

Assignment 7
Estimate the log-linear consumption function:

$\log C_t = \beta_0 + \beta_1 \log I_t + \beta_2 \log W_t + \beta_3 r_t + u_t$  (1)

where $C_t$ = consumption, $I_t$ = real disposable income, $W_t$ = wealth and $r_t$ = the interest rate, for the period 1967q1 – 2002q4.
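As a hedged sketch of how equation (1) could be estimated outside PC-GIVE, the Python code below assumes a quarterly data set with columns C, I, W and r; the file name and column names are illustrative only.

```python
# Hedged sketch of estimating equation (1) in Python; the tutorial itself uses
# PC-GIVE. The file name and column names (C, I, W, r) are assumptions, so
# substitute the actual 1967q1-2002q4 quarterly data set.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("consumption_1967q1_2002q4.csv")  # hypothetical file name

model = smf.ols("np.log(C) ~ np.log(I) + np.log(W) + r", data=df)
results = model.fit()
print(results.summary())  # coefficients, standard errors, R^2, Durbin-Watson

u_hat = results.resid     # residuals, reused in the autocorrelation tests below
```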

Descriptive Statistics

Results

$\log C_t = 0.8559 + 0.9165\,\log I_t + 0.0138\,\log W_t - 0.0022\,r_t + \hat{u}_t$

Predicted Values

Autocorrelation

Question a) Test for autocorrelation using the Durbin-Watson test statistic given by PC-GIVE. What are the limitations of this test? How does the Breusch-Godfrey test overcome some of these limitations?

Durbin-Watson Test
The test statistic is defined as:

$d = \dfrac{\sum_{t=2}^{n} (\hat{u}_t - \hat{u}_{t-1})^2}{\sum_{t=1}^{n} \hat{u}_t^2}$
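As a minimal sketch (assuming the residuals u_hat from the estimation sketch above), the statistic can be computed directly from this definition and cross-checked against the statsmodels helper:

```python
# Minimal sketch: compute d directly from its definition using the residuals
# u_hat from the estimation sketch above, and cross-check against the
# statsmodels helper. The tutorial reports d = 0.439 for this model.
import numpy as np
from statsmodels.stats.stattools import durbin_watson

def dw_statistic(u):
    """d = sum_{t=2}^{n}(u_t - u_{t-1})^2 / sum_{t=1}^{n} u_t^2."""
    u = np.asarray(u)
    return np.sum(np.diff(u) ** 2) / np.sum(u ** 2)

print(dw_statistic(u_hat), durbin_watson(u_hat))  # the two should agree
```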

Durbin-Watson Test
PC-GIVE: Model → Test → Test… → Residual autocorrelations, Portmanteau and DW

Durbin-Watson statistic: $d = 0.439$, with $N = 144$ and $k = 3$.
Critical values: $\alpha = 0.05 \rightarrow d_L = 1.693,\ d_U = 1.774$; $\alpha = 0.01 \rightarrow d_L = 1.584,\ d_U = 1.665$.
$H_0$: no positive autocorrelation; $H_0^*$: no negative autocorrelation.
The $d$ statistic is less than the critical $d_L$ value, so we reject the null hypothesis ($H_0$) of no positive autocorrelation at both the 5% and 1% levels.

Question a)
Limitations of the Durbin-Watson statistic:
- It tests only for first-order autocorrelation in the residuals.
- It has zones of indecision (between $d_L$ and $d_U$) where no conclusion can be drawn.
- It is not appropriate when a lagged dependent variable is included as a regressor.
The Breusch-Godfrey test allows for higher-order autocorrelation and remains appropriate when a lagged dependent variable is included.

Question b) Test for autocorrelation using the Breusch-Godfrey test statistic given by PC-GIVE. What (default) order of autocorrelation is being tested for here?

Question b)
PC-GIVE: Model → Test → Test… → Error autocorrelation test
The default order of autocorrelation tested is fifth order, i.e.

$u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + \rho_3 u_{t-3} + \rho_4 u_{t-4} + \rho_5 u_{t-5} + \varepsilon_t$

The test statistic is $97.171 \sim \chi^2_5$. The critical values are $\chi^2_5(0.05) = 11.0705$ and $\chi^2_5(0.01) = 15.0863$, so we reject $H_0$ (no autocorrelation) at both the 5% and 1% levels.
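As a hedged cross-check outside PC-GIVE (using the fitted results object from the estimation sketch above), statsmodels provides a Breusch-Godfrey test. The statistic may differ slightly from PC-GIVE's 97.171 because implementations can treat the initial lagged residuals in the auxiliary regression differently, but the conclusion is the same.

```python
# Hedged cross-check of the fifth-order Breusch-Godfrey test, using the
# fitted OLS results object from the estimation sketch above.
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(results, nlags=5)
print(f"LM statistic ~ chi-square(5): {lm_stat:.3f} (p = {lm_pval:.4f})")
print(f"F statistic:                  {f_stat:.3f} (p = {f_pval:.4f})")
```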

Question c) Construct the Breusch-Godfrey test for up to second order autocorrelation and test using the F statistic.

Breusch-Godfrey Test Method
1. Estimate the model and save the residuals $\hat{u}_t$.
2. Estimate the auxiliary regression:
$\hat{u}_t = \alpha_0 + \alpha_1 \log I_t + \alpha_2 \log W_t + \alpha_3 r_t + \alpha_4 \hat{u}_{t-1} + \alpha_5 \hat{u}_{t-2} + \varepsilon_t$
3. Note the $R^2$ from Step 2 and calculate the $\chi^2$ test statistic as $(N - q) R^2 \sim \chi^2_q$, where $N$ = full number of observations and $q$ = number of lags (the order of autocorrelation being tested).
4. Compare the test statistic from Step 3 with the $\chi^2_q$ critical values at the 5% and 1% levels.
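A hedged Python sketch of these steps is given below, reusing df and u_hat from the earlier estimation sketch; the column names I, W and r are assumptions.

```python
# Hedged sketch of the manual Breusch-Godfrey steps for second-order
# autocorrelation, reusing df and u_hat from the earlier estimation sketch.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Step 2: auxiliary regression of the residuals on the original regressors
# plus two lags of the residuals (lagging drops the first two observations).
aux = pd.DataFrame({
    "u": u_hat,
    "logI": np.log(df["I"]),
    "logW": np.log(df["W"]),
    "r": df["r"],
    "u_lag1": u_hat.shift(1),
    "u_lag2": u_hat.shift(2),
}).dropna()

aux_res = smf.ols("u ~ logI + logW + r + u_lag1 + u_lag2", data=aux).fit()

# Step 3: (N - q) * R^2 ~ chi-square(q) under H0 of no autocorrelation
q = 2                  # order of autocorrelation tested
N = len(u_hat)         # full number of observations (144 here)
lm = (N - q) * aux_res.rsquared

# Step 4: compare with the chi-square critical values
crit_5 = stats.chi2.ppf(0.95, q)  # 5.991
crit_1 = stats.chi2.ppf(0.99, q)  # 9.210
print(f"LM = {lm:.3f}; 5% critical value {crit_5:.3f}; 1% critical value {crit_1:.3f}")
```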

Breusch-Godfrey Test
Construct the Breusch-Godfrey test for second-order autocorrelation, i.e.
$u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + \varepsilon_t$
$H_0$: no autocorrelation, $\rho_1 = \rho_2 = 0$; $H_1$: $H_0$ is false.
Auxiliary regression: $\hat{u}_t = \alpha_0 + \alpha_1 \log I_t + \alpha_2 \log W_t + \alpha_3 r_t + \alpha_4 \hat{u}_{t-1} + \alpha_5 \hat{u}_{t-2} + \varepsilon_t$
Test statistic: $(N - q) R^2 = (144 - 2) \times 0.669224 = 95.0298 \sim \chi^2_2$
Critical values: $\chi^2_2(0.05) = 5.991$, $\chi^2_2(0.01) = 9.210$.
Therefore we reject the null hypothesis of no autocorrelation at both the 5% and 1% levels.

Breusch-Godfrey Test (F-test) Method
1. Estimate the model and save the residuals $\hat{u}_t$.
2. Estimate restricted and unrestricted auxiliary regressions over the same sample, i.e. $N - q = 144 - 2 = 142$ observations:
RES: $\hat{u}_t = \alpha_0 + \alpha_1 \log I_t + \alpha_2 \log W_t + \alpha_3 r_t + \varepsilon_t$
UNRES: $\hat{u}_t = \alpha_0 + \alpha_1 \log I_t + \alpha_2 \log W_t + \alpha_3 r_t + \alpha_4 \hat{u}_{t-1} + \alpha_5 \hat{u}_{t-2} + \varepsilon_t$
3. Undertake the F-test of $H_0: \alpha_4 = \alpha_5 = 0$:

$F = \dfrac{(R^2_{UNRES} - R^2_{RES}) / q}{(1 - R^2_{UNRES}) / (N^* - K)}$

where $q$ = number of restrictions, $K$ = number of parameters in UNRES and $N^*$ = number of observations in the UNRES sample $(N - q)$.
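A hedged sketch of the F-test version follows, reusing the aux data frame built in the previous sketch so that both regressions are fit over the same 142 observations.

```python
# Hedged sketch of the Breusch-Godfrey F-test, reusing the aux data frame
# from the previous sketch (same N - q = 142 observations for RES and UNRES).
import statsmodels.formula.api as smf
from scipy import stats

res_r  = smf.ols("u ~ logI + logW + r", data=aux).fit()                    # RES
res_ur = smf.ols("u ~ logI + logW + r + u_lag1 + u_lag2", data=aux).fit()  # UNRES

q, K = 2, 6            # number of restrictions; parameters in UNRES
N_star = len(aux)      # observations in the UNRES sample (N - q = 142)

F = ((res_ur.rsquared - res_r.rsquared) / q) / ((1 - res_ur.rsquared) / (N_star - K))
crit_5 = stats.f.ppf(0.95, q, N_star - K)
crit_1 = stats.f.ppf(0.99, q, N_star - K)
print(f"F = {F:.3f}; 5% critical value {crit_5:.2f}; 1% critical value {crit_1:.2f}")
```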

Breusch-Godfrey Test (F-test)
Construct the Breusch-Godfrey test for second-order autocorrelation.
$H_0$: no autocorrelation, $\alpha_4 = \alpha_5 = 0$; $H_1$: $H_0$ is false.
RES: $\hat{u}_t = \alpha_0 + \alpha_1 \log I_t + \alpha_2 \log W_t + \alpha_3 r_t + \varepsilon_t$
UNRES: $\hat{u}_t = \alpha_0 + \alpha_1 \log I_t + \alpha_2 \log W_t + \alpha_3 r_t + \alpha_4 \hat{u}_{t-1} + \alpha_5 \hat{u}_{t-2} + \varepsilon_t$
Test statistic:

$F = \dfrac{(R^2_{UNRES} - R^2_{RES}) / q}{(1 - R^2_{UNRES}) / (N^* - K)} = \dfrac{(0.669224 - 7.266996 \times 10^{-6}) / 2}{(1 - 0.669224) / (142 - 6)} = 137.57567 \sim F_{q,\,N^*-K}$

Critical values: $F_{2,136}(0.05) \approx 3.07$, $F_{2,136}(0.01) \approx 4.79$.
Therefore we reject the null hypothesis of no autocorrelation at both the 5% and 1% levels.

Question c)
PC-GIVE: Model → Test → Test… → Error autocorrelation test ('to lag' = 2)
Test statistic: $96.262 \sim \chi^2_2$
Critical values: $\chi^2_2(0.05) = 5.991$, $\chi^2_2(0.01) = 9.210$.
Therefore we reject the null hypothesis of no autocorrelation at both the 5% and 1% levels.

Question d) What are the consequences of your findings for the usefulness of the standard Ordinary Least Squares results for the consumption function above?

Question d)
Consequences of autocorrelation:
- The OLS estimators remain linear and unbiased (LUE), but they are no longer BLUE: they are not the minimum-variance (most efficient) linear unbiased estimators.
- The estimated variances of the OLS estimators are biased.
- The usual $t$ and $F$ tests are therefore unreliable.
- The usual formula for the error variance is a biased estimator of the true $\sigma^2$.
- The conventionally computed $R^2$ may be an unreliable measure of the true $R^2$.