Chapter 5: Autocorrelation
Dr. A. PHILIP AROKIADOSS, Assistant Professor, Department of Statistics, St. Joseph’s College (Autonomous), Tiruchirappalli-620 002.
Aims and Learning Objectives
By the end of this session students should be able to:
- Explain the nature of autocorrelation
- Understand the causes and consequences of autocorrelation
- Perform tests to determine whether a regression model has autocorrelated disturbances
Nature of Autocorrelation
Autocorrelation is a systematic pattern in the errors, which can be attracting (positive autocorrelation) or repelling (negative autocorrelation). For efficiency (accurate estimation and prediction), all systematic information needs to be incorporated into the regression model.
Regression model: Yt = β1 + β2X2t + β3X3t + Ut
No autocorrelation: Cov(Ui, Uj) = 0, i.e. E(Ui Uj) = 0
Autocorrelation: Cov(Ui, Uj) ≠ 0, i.e. E(Ui Uj) ≠ 0
Note: i ≠ j. In general, E(Ut Ut-s) ≠ 0
[Figure: three plots of the residuals Ut against time t — attracting (positive autocorrelation), random (no autocorrelation), and repelling (negative autocorrelation).]
Order of Autocorrelation
Yt = β1 + β2X2t + β3X3t + Ut
1st Order: Ut = ρUt-1 + εt
2nd Order: Ut = ρ1Ut-1 + ρ2Ut-2 + εt
3rd Order: Ut = ρ1Ut-1 + ρ2Ut-2 + ρ3Ut-3 + εt
where -1 < ρ < +1
We will assume first-order autocorrelation, AR(1): Ut = ρUt-1 + εt
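The AR(1) process above is easy to simulate directly. A minimal numpy sketch (the values of ρ, n, and the seed are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_disturbances(n, rho, sigma=1.0, rng=rng):
    """Generate n AR(1) disturbances: U_t = rho * U_{t-1} + eps_t, |rho| < 1."""
    u = np.zeros(n)
    eps = rng.normal(0.0, sigma, n)
    for t in range(1, n):
        u[t] = rho * u[t - 1] + eps[t]
    return u

def lag1_corr(u):
    """Sample first-order autocorrelation of a series."""
    return np.corrcoef(u[:-1], u[1:])[0, 1]

u_pos = ar1_disturbances(200, 0.8)   # attracting (positive) autocorrelation
u_neg = ar1_disturbances(200, -0.8)  # repelling (negative) autocorrelation

print(round(lag1_corr(u_pos), 2), round(lag1_corr(u_neg), 2))
```

The sample lag-1 correlations come out close to the ρ values used to generate the series, matching the attracting/repelling patterns in the figure above.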
Causes of Autocorrelation
Indirect: omitted variables; functional form; seasonality
Direct: inertia or persistence; spatial correlation; cyclical influences
Consequences of Autocorrelation
1. Ordinary least squares estimators are still linear and unbiased.
2. Ordinary least squares is no longer efficient.
3. The usual formulas give incorrect standard errors for least squares.
4. Confidence intervals and hypothesis tests based on the usual standard errors are therefore wrong.
Yt = β1 + β2Xt + et
Autocorrelated disturbances: E(et et-s) ≠ 0
Formula for the ordinary least squares variance (no autocorrelation in disturbances): var(β̂2) = σ²/Σxt², where xt = Xt − X̄.
Formula for the ordinary least squares variance (autocorrelated disturbances): the expression above plus additional terms involving the covariances E(et et-s), so the usual formula understates or overstates the true variance.
Therefore, when errors are autocorrelated, ordinary least squares estimators are inefficient (i.e. not “best”).
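A small Monte Carlo experiment illustrates the point. This is a sketch with illustrative parameter values (not from the lecture): when both the regressor and the disturbances are positively autocorrelated, the usual OLS standard-error formula understates the slope estimator’s true sampling spread.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, rho, phi = 100, 500, 0.8, 0.8  # illustrative values

def ar1(n, coef, rng):
    """Simulate an AR(1) series z_t = coef * z_{t-1} + e_t."""
    z = np.zeros(n)
    e = rng.normal(size=n)
    for t in range(1, n):
        z[t] = coef * z[t - 1] + e[t]
    return z

slopes, naive_ses = [], []
for _ in range(reps):
    x = ar1(n, phi, rng)                 # positively autocorrelated regressor
    u = ar1(n, rho, rng)                 # AR(1) disturbances
    y = 1.0 + 2.0 * x + u                # true slope beta2 = 2
    xd = x - x.mean()
    b2 = (xd @ (y - y.mean())) / (xd @ xd)
    a = y.mean() - b2 * x.mean()
    resid = y - a - b2 * x
    s2 = (resid @ resid) / (n - 2)
    slopes.append(b2)
    naive_ses.append(np.sqrt(s2 / (xd @ xd)))  # usual no-autocorrelation formula

# True sampling spread of b2 vs. the average reported standard error
ratio = np.std(slopes, ddof=1) / np.mean(naive_ses)
print(round(ratio, 2))  # well above 1: the usual formula is too small
```

The ratio substantially exceeds 1, so confidence intervals built from the usual formula are too narrow, exactly as consequence 4 above states.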
Detecting Autocorrelation
The residuals et provide proxies for the unobservable disturbances Ut.
Preliminary Analysis (Informal Tests)
Data: autocorrelation most often occurs in time-series data (exceptions: spatial correlation, panel data)
Graphical examination of residuals: plot et against time, or against et-1, to see if there is a relationship
Formal Tests for Autocorrelation
Runs test: analyse the runs (uninterrupted sequences of residuals with the same sign)
Durbin-Watson (DW) d test: ratio of the sum of squared differences in successive residuals to the residual sum of squares
Breusch-Godfrey LM test: a more general test which does not assume the disturbances are AR(1)
Durbin-Watson d Test
H0: ρ = 0 vs. H1: ρ ≠ 0, ρ > 0, or ρ < 0
The Durbin-Watson test statistic, d, is:
d = Σt=2..n (et − et-1)² / Σt=1..n et²
i.e. the ratio of the sum of squared differences in successive residuals to the residual sum of squares.
The test statistic, d, is approximately related to ρ̂ as: d ≈ 2(1 − ρ̂)
When ρ̂ = 0, the Durbin-Watson statistic is d ≈ 2.
When ρ̂ = 1, the Durbin-Watson statistic is d ≈ 0.
When ρ̂ = −1, the Durbin-Watson statistic is d ≈ 4.
DW d Test: 4 Steps
Step 1: Estimate Yt = β1 + β2X2t + β3X3t + Ut and obtain the residuals et
Step 2: Compute the DW d test statistic
Step 3: Obtain dL and dU, the lower and upper critical points, from the Durbin-Watson tables
Step 4: Implement the following decision rule:
If d < dL: reject H0 (evidence of positive autocorrelation)
If d > 4 − dL: reject H0 (evidence of negative autocorrelation)
If dU < d < 4 − dU: do not reject H0
Otherwise (dL ≤ d ≤ dU, or 4 − dU ≤ d ≤ 4 − dL): the test is inconclusive
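Steps 1 and 2 can be sketched in numpy. The data-generating values below (ρ = 0.8, n = 200, the seed) are illustrative; the dL and dU bounds in Step 3 still come from the Durbin-Watson tables, not from this code.

```python
import numpy as np

rng = np.random.default_rng(2)

def durbin_watson(e):
    """d = sum of squared successive residual differences / residual SS."""
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Step 1: OLS residuals from data with AR(1) disturbances (rho = 0.8)
n, rho = 200, 0.8
u = np.zeros(n)
eps = rng.normal(size=n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + u
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b

# Step 2: the d statistic, and the approximation d ≈ 2(1 - rho_hat)
d = durbin_watson(e)
rho_hat = np.sum(e[1:] * e[:-1]) / np.sum(e ** 2)
print(round(d, 2), round(2 * (1 - rho_hat), 2))
```

With strongly positive autocorrelation the statistic lands well below 2 (near 2(1 − 0.8) = 0.4), on the rejection side of any tabulated dL.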
Restrictive Assumptions of the DW test:
There is an intercept in the model
X values are non-stochastic
Disturbances are AR(1)
The model does not include a lagged dependent variable as an explanatory variable, e.g. Yt = β1 + β2X2t + β3X3t + β4Yt-1 + Ut
Breusch-Godfrey LM Test
This test is valid with lagged dependent variables and can be used to test for higher-order autocorrelation.
Suppose, for example, that we estimate: Yt = β1 + β2X2t + β3X3t + β4Yt-1 + Ut
and wish to test for autocorrelation of the form: Ut = ρ1Ut-1 + ρ2Ut-2 + … + ρpUt-p + εt
Breusch-Godfrey LM Test: 4 Steps
Step 1: Estimate Yt = β1 + β2X2t + β3X3t + β4Yt-1 + Ut and obtain the residuals et
Step 2: Estimate the following auxiliary regression model:
et = α1 + α2X2t + α3X3t + α4Yt-1 + ρ̂1et-1 + … + ρ̂pet-p + vt
Breusch-Godfrey LM Test
Step 3: For large sample sizes, the test statistic is (n − p)R² ~ χ²p, where R² comes from the auxiliary regression.
Step 4: If the test statistic exceeds the critical chi-square value, we reject the null hypothesis of no serial correlation in any of the ρ terms.
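The four steps can be sketched with a manual numpy implementation (statsmodels also provides a ready-made `acorr_breusch_godfrey`). The demo model below, with a simple regressor and ρ = 0.8, is illustrative; the chi-square(1) 5% critical value 3.841 is hard-coded in the comment.

```python
import numpy as np

rng = np.random.default_rng(3)

def breusch_godfrey_lm(y, X, p):
    """LM statistic (n - p) * R^2 from the auxiliary regression of the
    OLS residuals e_t on X and p lagged residuals."""
    n = len(y)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b                               # Step 1: residuals
    # Lagged residual columns e_{t-1}, ..., e_{t-p} for t = p+1, ..., n
    lags = np.column_stack([e[p - j:n - j] for j in range(1, p + 1)])
    Z = np.column_stack([X[p:], lags])          # Step 2: auxiliary regressors
    et = e[p:]
    g, *_ = np.linalg.lstsq(Z, et, rcond=None)
    v = et - Z @ g
    r2 = 1.0 - (v @ v) / np.sum((et - et.mean()) ** 2)
    return (n - p) * r2                         # Step 3: LM statistic

# Demo: data with AR(1) disturbances (rho = 0.8) should reject H0
n, rho = 200, 0.8
u = np.zeros(n)
eps = rng.normal(size=n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + u
X = np.column_stack([np.ones(n), x])

lm = breusch_godfrey_lm(y, X, p=1)
print(round(lm, 1))  # Step 4: compare with chi-square(1) 5% value, 3.841
```

With this much autocorrelation the LM statistic is far above 3.841, so the null of no serial correlation is rejected.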
Summary In this lecture we have: 1. Analysed the theoretical causes and consequences of autocorrelation 2. Described a number of methods for detecting the presence of autocorrelation