Econometrics, Chapter 12 – Autocorrelation
Autocorrelation
In the classical regression model, the errors are assumed to be uncorrelated across observations: E(u_t u_s) = 0 for t ≠ s. What happens when this assumption is violated?
First-order autocorrelation
The error term follows an AR(1) process: u_t = ρ u_{t-1} + ε_t, where |ρ| < 1 and ε_t is a well-behaved (white-noise) error.
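As an illustration (not from the text), an AR(1) error process can be simulated in a few lines of Python with numpy; the sample first-order autocorrelation of the simulated series should be close to ρ:

```python
import numpy as np

def simulate_ar1_errors(n, rho, sigma=1.0, seed=0):
    """Generate u_t = rho * u_{t-1} + eps_t with white-noise eps_t."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, n)
    u = np.empty(n)
    u[0] = eps[0]
    for t in range(1, n):
        u[t] = rho * u[t - 1] + eps[t]
    return u

u = simulate_ar1_errors(10_000, rho=0.7)
# The sample first-order autocorrelation should be close to rho = 0.7
rho_hat = np.corrcoef(u[:-1], u[1:])[0, 1]
```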
Positive first-order autocorrelation (ρ > 0): residuals of the same sign tend to cluster together over time.
Negative first-order autocorrelation (ρ < 0): residuals tend to alternate in sign from one period to the next.
Incorrect model specification and apparent autocorrelation
An omitted variable or an incorrect functional form can produce residuals that look autocorrelated even when the true errors are not; the remedy is to fix the specification, not to correct for AR errors.
Violation of an assumption of the classical regression model
First-order autocorrelation violates the assumption E(u_t u_s) = 0 for t ≠ s.
Consequences of first-order autocorrelation
- OLS estimators remain unbiased and consistent
- OLS estimators are no longer BLUE
- The estimated variance of the error term is biased
- Estimated standard errors are biased (usually downward)
- t-ratios are biased (usually upward)
Detection: Durbin-Watson statistic
DW = Σ_{t=2}^{N} (e_t − e_{t-1})² / Σ_{t=1}^{N} e_t² ≈ 2(1 − ρ̂), so a DW value near 2 indicates no first-order autocorrelation.
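A minimal sketch of the statistic in Python (assuming numpy; the helper name is illustrative): iid residuals give DW near 2, while positively autocorrelated residuals push DW toward 0.

```python
import numpy as np

def durbin_watson(e):
    """DW = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2, approximately 2*(1 - rho_hat)."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(1)
n = 5_000
# Independent residuals: DW should be near 2
dw_iid = durbin_watson(rng.normal(size=n))
# Positively autocorrelated residuals (rho = 0.8): DW well below 2
eps = rng.normal(size=n)
u = np.empty(n)
u[0] = eps[0]
for t in range(1, n):
    u[t] = 0.8 * u[t - 1] + eps[t]
dw_pos = durbin_watson(u)
```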
Acceptance and rejection regions for the DW statistic
Reject H0 (no positive autocorrelation) if DW < d_L; fail to reject if DW > d_U; the test is inconclusive for d_L ≤ DW ≤ d_U. For negative autocorrelation, the regions are mirrored around 4 (compare 4 − DW against the same bounds).
AR(1) correction: known ρ
Original model: y_t = β0 + β1 x_t + u_t, with u_t = ρ u_{t-1} + ε_t.
Lagging this relationship 1 period: y_{t-1} = β0 + β1 x_{t-1} + u_{t-1}.
Multiplying this by −ρ: −ρ y_{t-1} = −ρ β0 − ρ β1 x_{t-1} − ρ u_{t-1}.
With a little bit of algebra (adding the two equations): y_t − ρ y_{t-1} = β0(1 − ρ) + β1(x_t − ρ x_{t-1}) + ε_t.
AR(1) correction: known ρ
Solution? Quasi-difference each variable: y*_t = y_t − ρ y_{t-1} and x*_t = x_t − ρ x_{t-1}. Regress: y*_t = β0(1 − ρ) + β1 x*_t + ε_t.
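A sketch of the transformation in Python (numpy, simulated data; not from the text). With ρ known, OLS on the quasi-differenced variables recovers β1 directly and β0 from the transformed intercept β0(1 − ρ):

```python
import numpy as np

# Simulate y_t = b0 + b1*x_t + u_t with AR(1) errors and known rho
rng = np.random.default_rng(2)
n, rho, b0, b1 = 2_000, 0.8, 1.0, 2.0
x = rng.normal(size=n)
eps = rng.normal(scale=0.5, size=n)
u = np.empty(n)
u[0] = eps[0]
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]
y = b0 + b1 * x + u

# Quasi-difference each variable (the first observation is dropped)
y_star = y[1:] - rho * y[:-1]
x_star = x[1:] - rho * x[:-1]

# OLS on the transformed model; the fitted intercept estimates b0*(1 - rho)
X = np.column_stack([np.ones(n - 1), x_star])
coef, *_ = np.linalg.lstsq(X, y_star, rcond=None)
b0_hat, b1_hat = coef[0] / (1 - rho), coef[1]
```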
AR(1) correction: known ρ
This procedure provides unbiased and consistent estimates of all model parameters and standard errors. If ρ = 1, a unit root is said to exist. In this case, quasi-differencing is equivalent to first-differencing: y_t − y_{t-1} = β1(x_t − x_{t-1}) + ε_t (the intercept drops out, since β0(1 − ρ) = 0).
Generalized least squares
This approach is an example of Generalized Least Squares (GLS). The GLS estimation strategy: if one of the assumptions of the classical regression model is violated, transform the model so that the transformed model satisfies those assumptions, then estimate the transformed model.
AR(1) correction: unknown ρ
Cochrane-Orcutt procedure:
1. Estimate the original model using OLS and save the residuals.
2. Regress the saved residuals on their one-period lag (without a constant) to estimate ρ.
3. Estimate the quasi-differenced version of the original model.
4. Use the estimated parameters to generate a new set of residuals, then return to step 2.
Repeat this process until the change in the parameter estimates falls below a selected threshold value. This yields consistent estimates of all model parameters and standard errors.
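The steps above can be sketched in Python (numpy, simulated data; the function name and convergence rule are illustrative, not from the text):

```python
import numpy as np

def cochrane_orcutt(y, x, tol=1e-8, max_iter=100):
    """Iterate: OLS -> estimate rho from residuals -> re-estimate the
    quasi-differenced model, until rho stops changing."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # step 1: OLS
    rho = 0.0
    for _ in range(max_iter):
        e = y - X @ beta                               # residuals in levels
        rho_new = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])  # step 2: e_t on e_{t-1}
        ys = y[1:] - rho_new * y[:-1]                  # step 3: quasi-difference
        xs = x[1:] - rho_new * x[:-1]
        Xs = np.column_stack([np.ones(n - 1), xs])
        coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        beta_new = np.array([coef[0] / (1 - rho_new), coef[1]])
        done = abs(rho_new - rho) < tol
        beta, rho = beta_new, rho_new                  # step 4: update, repeat
        if done:
            break
    return beta, rho

# Demo on simulated data with rho = 0.7, b0 = 1, b1 = 2
rng = np.random.default_rng(3)
n = 2_000
x = rng.normal(size=n)
eps = rng.normal(scale=0.5, size=n)
u = np.empty(n)
u[0] = eps[0]
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + eps[t]
y = 1.0 + 2.0 * x + u
beta_hat, rho_hat = cochrane_orcutt(y, x)
```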
Prais-Winsten estimator
The Cochrane-Orcutt method involves the loss of one observation. The Prais-Winsten estimator is similar, but instead of dropping the first observation it applies a different transformation to it, scaling it by √(1 − ρ²) (see text, p. 444). Monte Carlo studies indicate a substantial efficiency gain from the use of the Prais-Winsten estimator relative to the Cochrane-Orcutt method.
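A minimal sketch of the transformation (numpy; the helper name is illustrative). Every variable, including the constant column of the regressor matrix, receives the same treatment:

```python
import numpy as np

def prais_winsten_transform(z, rho):
    """Quasi-difference z, but keep the first observation scaled by
    sqrt(1 - rho^2) instead of dropping it."""
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    out[0] = np.sqrt(1.0 - rho ** 2) * z[0]
    out[1:] = z[1:] - rho * z[:-1]
    return out

zt = prais_winsten_transform([1.0, 2.0, 3.0, 4.0], 0.5)
# zt[0] = sqrt(0.75) * 1.0, and zt[1:] = [1.5, 2.0, 2.5]
```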
Hildreth-Lu estimator
A grid-search algorithm: estimate the quasi-differenced model for each candidate value of ρ on a grid and keep the value that minimizes the sum of squared error terms. The search helps ensure that the estimator reaches a global minimum of the sum of squared error terms rather than a local minimum.
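The grid search can be sketched as follows (numpy, simulated data; grid bounds and step size are illustrative choices, not from the text):

```python
import numpy as np

def hildreth_lu(y, x, grid=None):
    """Fit the quasi-differenced model at each candidate rho on a grid and
    keep the rho with the smallest sum of squared errors."""
    if grid is None:
        grid = np.linspace(-0.95, 0.95, 191)  # step 0.01
    n = len(y)
    best_ssr, best_rho, best_coef = np.inf, None, None
    for rho in grid:
        ys = y[1:] - rho * y[:-1]
        xs = x[1:] - rho * x[:-1]
        X = np.column_stack([np.ones(n - 1), xs])
        coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
        ssr = np.sum((ys - X @ coef) ** 2)
        if ssr < best_ssr:
            best_ssr, best_rho, best_coef = ssr, rho, coef
    return best_rho, best_coef

# Demo on simulated data with rho = 0.7, b1 = 2
rng = np.random.default_rng(4)
n = 2_000
x = rng.normal(size=n)
eps = rng.normal(scale=0.5, size=n)
u = np.empty(n)
u[0] = eps[0]
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + eps[t]
y = 1.0 + 2.0 * x + u
rho_hat, coef_hat = hildreth_lu(y, x)
```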
Maximum likelihood estimator
Selects the parameter values that maximize the computed probability (likelihood) of observing the realized outcomes of the dependent and independent variables. It is an asymptotically efficient estimator.
Higher-order autocorrelation
AR(p): u_t = ρ1 u_{t-1} + ρ2 u_{t-2} + … + ρp u_{t-p} + ε_t.
Detection of an AR(p) error process
Breusch-Godfrey test:
1. Estimate the parameters of the original model and save the residuals.
2. Regress the estimated residuals on all independent variables in the original model and the first p lagged residuals.
3. Compute the Breusch-Godfrey Lagrange Multiplier test statistic: LM = N·R².
An AR(p) process is found to exist if the LM statistic exceeds the critical value for a χ² variate with p degrees of freedom. Alternatively, use the Box-Pierce or Ljung-Box statistic (see p. 451).
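The three steps can be sketched in Python (numpy, simulated data; the function name and the zero-padding of the initial lags are illustrative conventions):

```python
import numpy as np

def breusch_godfrey_lm(y, X, p):
    """LM = N * R^2 from regressing the OLS residuals on the original
    regressors plus the first p lagged residuals (initial lags set to zero)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    n = len(e)
    lags = np.column_stack([np.r_[np.zeros(i), e[:-i]] for i in range(1, p + 1)])
    Z = np.column_stack([X, lags])
    g, *_ = np.linalg.lstsq(Z, e, rcond=None)
    resid = e - Z @ g
    r2 = 1.0 - (resid @ resid) / (e @ e)  # e has zero mean since X has a constant
    return n * r2  # compare with the chi-square(p) critical value (5.99 at 5%, p = 2)

# Demo: AR(1) errors should be detected; iid errors should not
rng = np.random.default_rng(5)
n = 2_000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
eps = rng.normal(size=n)
u = np.empty(n)
u[0] = eps[0]
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + eps[t]
lm_ar = breusch_godfrey_lm(1.0 + 2.0 * x + u, X, p=2)
lm_iid = breusch_godfrey_lm(1.0 + 2.0 * x + rng.normal(size=n), X, p=2)
```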
Lagged dependent variable as regressor
The Durbin-Watson statistic is biased toward 2 when a lagged dependent variable is used as a regressor, so it tends to miss autocorrelation. Use Durbin's h test or the Lagrange Multiplier test (test statistic = (N − 1)·R² in this case). Correction: Hatanaka's estimator (see text).
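A sketch of Durbin's h in Python (numpy, simulated data; the function name and demo setup are illustrative). The statistic rescales ρ̂ ≈ 1 − DW/2 using the estimated variance of the coefficient on the lagged dependent variable, and is approximately standard normal under the null of no autocorrelation:

```python
import numpy as np

def durbins_h(y, X, lag_col):
    """h = rho_hat * sqrt(n / (1 - n * var(alpha_hat))), where rho_hat is
    approximated by 1 - DW/2 and alpha_hat is the coefficient on the lagged
    dependent variable (column lag_col of X). Under H0, h ~ N(0, 1).
    The statistic is undefined when n * var(alpha_hat) >= 1."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
    rho_hat = 1.0 - dw / 2.0
    s2 = (e @ e) / (n - X.shape[1])
    var_alpha = (s2 * np.linalg.inv(X.T @ X))[lag_col, lag_col]
    return rho_hat * np.sqrt(n / (1.0 - n * var_alpha))

# Demo: y_t = 0.3*y_{t-1} + x_t + eps_t with iid errors -> h should be small
rng = np.random.default_rng(6)
n = 2_000
x = rng.normal(size=n)
eps = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.3 * y[t - 1] + x[t] + eps[t]
X = np.column_stack([np.ones(n - 1), y[:-1], x[1:]])  # regressors for t = 1..n-1
h = durbins_h(y[1:], X, lag_col=1)
```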
Correction of an AR(p) process
Use the Prais-Winsten estimator (modified for an AR(p) process) or the maximum likelihood estimator.