8. Heteroskedasticity

We have already seen that homoskedasticity holds when the error term's variance, conditional on all x variables, is constant:

Var(u | x₁, …, x_k) = σ²

Homoskedasticity fails if the variance of the error term varies across the sample (i.e., varies with the x variables). We relied on homoskedasticity for t tests, F tests, and confidence intervals, even in large samples.
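As a concrete illustration (a hypothetical simulation, not from the slides), the sketch below generates an error term whose standard deviation grows with x, so the conditional variance is not constant:

```python
# Illustrative sketch (assumed example, not from the slides):
# simulate an error term whose variance grows with x.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 10, n)
u = rng.normal(0, x)          # sd(u | x) = x, so Var(u | x) = x^2: heteroskedastic
y = 2 + 0.5 * x + u           # hypothetical population model

# the variance of u is visibly larger where x is large
print(u[x < 3].var(), u[x > 8].var())
```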
8. Heteroskedasticity

8.1 Consequences of Heteroskedasticity for OLS
8.2 Heteroskedasticity-Robust Inference after OLS Estimation
8.3 Testing for Heteroskedasticity
8.4 Weighted Least Squares Estimation
8.5 The Linear Probability Model Revisited
8.1 Consequences of Heteroskedasticity
We have already seen that heteroskedasticity:
- Does not cause bias or inconsistency (these depend only on MLR.1 through MLR.4)
- Does not affect R² or adjusted R² (since these estimate the population variances, which are not conditional on x)

Heteroskedasticity does:
- Make the usual estimator of Var(β̂ⱼ) biased, and therefore invalidate the usual OLS standard errors (and the tests built on them)
- Make OLS no longer BLUE (a better estimator may exist)
8.2 Heteroskedasticity-Robust Inference after OLS Estimation
-Because testing hypotheses is a key element of econometrics, we need to obtain accurate standard errors in the presence of heteroskedasticity
-In the last few decades, econometricians have learned how to adjust standard errors when HETEROSKEDASTICITY OF UNKNOWN FORM exists
-These heteroskedasticity-robust procedures are valid (in large samples) regardless of the form of the error variance
8.2 Het Fixing 1

-Given a typical single-independent-variable model, heteroskedasticity implies a varying variance:

y = β₀ + β₁x + u,  Var(uᵢ | xᵢ) = σᵢ²

-Rewriting the OLS slope estimator, we can obtain a formula for its variance:

β̂₁ = β₁ + Σᵢ (xᵢ − x̄)uᵢ / Σᵢ (xᵢ − x̄)²

Var(β̂₁) = Σᵢ (xᵢ − x̄)² σᵢ² / SST_x²
8.2 Het Fixing 1

-Recall that SST_x = Σᵢ (xᵢ − x̄)²
-Also notice that given homoskedasticity (σᵢ² = σ² for all i), this variance collapses to the usual formula:

Var(β̂₁) = σ² / SST_x

-While we don't know σᵢ², White (1980) showed that a valid estimator is:

Var̂(β̂₁) = Σᵢ (xᵢ − x̄)² ûᵢ² / SST_x²

where the ûᵢ are the OLS residuals
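A minimal numpy sketch (continuing the simulated data above) computes White's estimator next to the usual homoskedasticity-based formula:

```python
# Sketch (continuing the simulated x, y from above): White's (1980)
# robust variance estimator for the simple-regression slope, next to
# the usual formula that assumes homoskedasticity.
import numpy as np

xbar = x.mean()
SSTx = np.sum((x - xbar) ** 2)
b1 = np.sum((x - xbar) * (y - y.mean())) / SSTx
b0 = y.mean() - b1 * xbar
uhat = y - b0 - b1 * x                                         # OLS residuals

var_robust = np.sum((x - xbar) ** 2 * uhat ** 2) / SSTx ** 2   # White estimator
var_usual = (np.sum(uhat ** 2) / (n - 2)) / SSTx               # assumes sigma_i^2 = sigma^2
print(np.sqrt(var_robust), np.sqrt(var_usual))                 # robust vs. usual se
```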
8.2 Het Fixing 1

-Given a multiple-independent-variable model:

y = β₀ + β₁x₁ + … + β_k x_k + u

-The valid estimator of Var(β̂ⱼ) becomes:

Var̂(β̂ⱼ) = Σᵢ r̂ᵢⱼ² ûᵢ² / SSRⱼ²

-where r̂ᵢⱼ is the ith residual from a regression of xⱼ on all other x variables
-where SSRⱼ is the sum of the squared residuals from that regression
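A sketch of this formula in code (the helper hc0_var_bj is hypothetical, not from the slides):

```python
# Sketch: the multiple-regression robust variance computed via the
# residuals r_ij from regressing x_j on the other regressors.
import numpy as np

def hc0_var_bj(X, y, j):
    """White (HC0) variance estimate for the j-th coefficient.
    X must include a constant column."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    uhat = y - X @ beta                          # residuals of the main regression
    others = np.delete(X, j, axis=1)
    gamma, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    r = X[:, j] - others @ gamma                 # r_ij: residuals of x_j on the other x's
    SSRj = np.sum(r ** 2)
    return np.sum(r ** 2 * uhat ** 2) / SSRj ** 2

# with the simulated data this matches var_robust from the previous sketch
X = np.column_stack([np.ones(len(x)), x])
print(hc0_var_bj(X, y, 1))
```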
8.2 Het Fixing 1

-The square root of this variance estimate is commonly called the HETEROSKEDASTICITY-ROBUST STANDARD ERROR, but these are also called White, Huber, or Eicker standard errors after their founders
-There are a variety of slight adjustments to this standard error, but economists generally use the values reported by their software
-This se adjustment gives us HETEROSKEDASTICITY-ROBUST T STATISTICS:

t = (β̂ⱼ − aⱼ) / se(β̂ⱼ)

where se(β̂ⱼ) is the robust standard error and aⱼ is the value under the null hypothesis
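In practice the software reports these directly; a minimal statsmodels sketch (the cov_type names are statsmodels conventions, not from the slides):

```python
# Sketch: cov_type="HC0" is White's original estimator; "HC1"-"HC3"
# are the common small-sample adjustments.
import statsmodels.api as sm

X = sm.add_constant(x)                  # simulated x from the earlier sketch
res = sm.OLS(y, X).fit(cov_type="HC0")
print(res.bse)                          # heteroskedasticity-robust standard errors
print(res.tvalues)                      # robust t statistics
```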
8.2 Why Bother with Normal Errors?
-One may ask why we bother with the usual OLS standard errors when heteroskedasticity-robust standard errors are valid more often:
- The usual OLS t statistics have an exact t distribution (under the classical assumptions), regardless of sample size
- Robust t statistics are valid only for large sample sizes

Note that HETEROSKEDASTICITY-ROBUST F STATISTICS also exist; the statistic is often called the HETEROSKEDASTICITY-ROBUST WALD STATISTIC and is reported by most econometrics programs.
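A sketch of such a robust Wald test in statsmodels (the second regressor is hypothetical, added only to make a joint test meaningful):

```python
# Sketch: fitting with a robust covariance matrix makes wald_test
# heteroskedasticity-robust as well.
import numpy as np
import statsmodels.api as sm

x2 = rng.uniform(0, 5, n)                        # hypothetical extra regressor
Xw = sm.add_constant(np.column_stack([x, x2]))   # columns named const, x1, x2
resw = sm.OLS(y, Xw).fit(cov_type="HC0")
print(resw.wald_test("x1 = 0, x2 = 0", use_f=True))  # robust Wald/F test
```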
8.3 Testing for Heteroskedasticity
-In this chapter we will cover a variety of modern tests for heteroskedasticity
-It is important to know whether heteroskedasticity exists, as its existence means OLS is no longer the BEST estimator
-Note that while other tests for heteroskedasticity exist, the tests presented here are preferred because they test for heteroskedasticity more DIRECTLY
8.3 Testing for Het

-Consider our typical linear model and a null hypothesis suggesting homoskedasticity:

y = β₀ + β₁x₁ + … + β_k x_k + u
H₀: Var(u | x₁, …, x_k) = σ²

Since we know that Var(u|X) = E(u²|X) (because E(u|X) = 0), we can rewrite the null hypothesis to read:

H₀: E(u² | x₁, …, x_k) = E(u²) = σ²
8.3 Testing for Het

-As we are testing whether u² is related to any explanatory variables, we can use the linear model:

u² = δ₀ + δ₁x₁ + … + δ_k x_k + v

-where v is an error term with mean zero given the x's
-note that the dependent variable is SQUARED
-this changes our null hypothesis to:

H₀: δ₁ = δ₂ = … = δ_k = 0
8.3 Testing for Het

-Since we don't know the true errors of the regression, only the residuals, our estimation becomes:

û² = δ₀ + δ₁x₁ + … + δ_k x_k + error

-which is valid in large samples
-The R² from the above regression, R²_û², is used to construct an F statistic:

F = (R²_û² / k) / ((1 − R²_û²) / (n − k − 1))
8.3 Testing for Het

-This test F statistic is compared to a critical F* with (k, n − k − 1) degrees of freedom
-If the null hypothesis is rejected, there is evidence to conclude that heteroskedasticity exists at a given α
-If the null hypothesis is not rejected, there is insufficient evidence to conclude that heteroskedasticity exists at a given α
-this is sometimes called the BREUSCH-PAGAN TEST FOR HETEROSKEDASTICITY (BP TEST)
8.3 BP HET TEST

In order to conduct a BP test for het:

1. Run a normal OLS regression (y on the x's) and obtain the squared residuals, û²
2. Run a regression of û² on all independent variables and save the R²
3. Obtain the test F statistic and compare it to the critical F*
4. If F > F*, reject the null hypothesis of homoskedasticity and start correcting for heteroskedasticity (see the sketch below)
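A sketch of these steps on the simulated data from earlier, checked against statsmodels' ready-made implementation:

```python
# Sketch of the BP steps; the last two lines verify the hand-rolled
# F statistic against statsmodels' het_breuschpagan.
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

X = sm.add_constant(x)
res = sm.OLS(y, X).fit()                       # step 1: original regression
aux = sm.OLS(res.resid ** 2, X).fit()          # step 2: uhat^2 on the x's
R2, k = aux.rsquared, 1                        # one regressor in this example
F = (R2 / k) / ((1 - R2) / (len(y) - k - 1))   # step 3: test F statistic
print(F)                                       # step 4: compare with F*(k, n-k-1)

lm, lm_pval, fstat, f_pval = het_breuschpagan(res.resid, X)
print(fstat, f_pval)                           # matches F above
```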
8.3 BP HET TEST

If we suspect that our model's heteroskedasticity depends on only certain x variables, regress û² only on those variables
-Keep in mind that the k in the R² formula and in the degrees of freedom comes from the number of independent variables in the û² regression

An alternate test for het is the White test:
8.3 White Test for Het

-Given the statistical modifications covered in Chapter 5, White (1980) proposed another test for heteroskedasticity
-With 3 independent variables, White proposed a linear regression of û² on 9 regressors: the levels, the squares, and the cross products of the x's:

û² = δ₀ + δ₁x₁ + δ₂x₂ + δ₃x₃ + δ₄x₁² + δ₅x₂² + δ₆x₃² + δ₇x₁x₂ + δ₈x₁x₃ + δ₉x₂x₃ + v

-The null hypothesis (homoskedasticity) now sets all δ (except the intercept) equal to zero
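For reference, statsmodels ships this full test as het_white (a sketch, reusing the two-regressor design from the Wald example above):

```python
# Sketch: het_white builds the levels, squares, and cross products
# of the regressors internally.
from statsmodels.stats.diagnostic import het_white

lm, lm_pval, fstat, f_pval = het_white(resw.resid, Xw)
print(fstat, f_pval)
```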
8.3 White Test for Het

-Unfortunately this test involves MANY regressors (27 regressors for 6 x variables) and as such may run into degrees-of-freedom issues
-one special case of the White test is to estimate the regression:

û² = δ₀ + δ₁ŷ + δ₂ŷ² + error

-since this preserves the "squared" concept of the White test and is particularly useful when het is suspected to be connected to the level of the expected value E(y|X)
-this test has an F distribution with (2, n − 3) df
8.3 Special White HET TEST

In order to conduct a special White test for het:

1. Run a normal OLS regression (y on the x's) and obtain the squared residuals, û², and the fitted values, ŷ
2. Run the regression of û² on both ŷ and ŷ² (including an intercept). Record the R²
3. Using this R², compute a test F statistic as in the BP test (with k = 2)
4. If F > F*, reject the null hypothesis (homoskedasticity); see the sketch below
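A sketch of these steps, reusing res (the original OLS fit) from the BP sketch:

```python
# Sketch of the special-case White test: regress uhat^2 on yhat and yhat^2.
import numpy as np
import statsmodels.api as sm

yhat = res.fittedvalues
Z = sm.add_constant(np.column_stack([yhat, yhat ** 2]))
aux_w = sm.OLS(res.resid ** 2, Z).fit()          # uhat^2 on yhat, yhat^2
R2 = aux_w.rsquared
F = (R2 / 2) / ((1 - R2) / (len(yhat) - 3))      # F(2, n-3) under the null
print(F)
```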
8.3 Heteroskedasticity Note
-Our decision to REJECT the null hypothesis and suspect heteroskedasticity is only valid if MLR.4 is valid
-if MLR.4 is violated (i.e., bad functional form or omitted variables), one can reject the null hypothesis even if het doesn't actually exist
-Therefore, always choose the functional form and all variables before testing for heteroskedasticity