8. Heteroskedasticity
We have already seen that homoskedasticity exists when the error term's variance, conditional on all x variables, is constant:

Var(u | x_1, x_2, \ldots, x_k) = \sigma^2

Homoskedasticity fails if the variance of the error term varies across the sample (ie: varies with the x variables)
-We relied on homoskedasticity to justify the usual t tests, F tests, and confidence intervals; without it, these are invalid even in large samples

8. Heteroskedasticity
8.1 Consequences of Heteroskedasticity for OLS
8.2 Heteroskedasticity-Robust Inference after OLS Estimation
8.3 Testing for Heteroskedasticity
8.4 Weighted Least Squares Estimation
8.5 The Linear Probability Model Revisited

8.1 Consequences of Heteroskedasticity
We have already seen that heteroskedasticity:
-Does not cause bias or inconsistency (these depend only on MLR.1 through MLR.4)
-Does not affect R2 or adjusted R2 (since these estimate the POPULATION variances of y and u, which are unconditional and therefore unaffected by the variance of u given X)
Heteroskedasticity does:
-Make the usual estimators of Var(Bjhat) biased, and therefore invalidate the typical OLS standard errors (and the tests and confidence intervals built from them)
-Make OLS no longer BLUE (a better linear unbiased estimator may exist)
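To see both consequences concretely, here is a minimal simulation sketch (not from the slides; the data-generating process and all names are invented for illustration). OLS stays centered on the true slope, but the usual standard error misstates the estimator's actual sampling spread:

```python
# Minimal sketch: under heteroskedasticity, OLS is still unbiased,
# but the usual (homoskedasticity-based) standard error is misleading.
import numpy as np

rng = np.random.default_rng(42)
n, reps, beta1 = 200, 2000, 1.0
slopes, usual_ses = [], []

for _ in range(reps):
    x = rng.uniform(1, 5, n)
    u = rng.normal(0, x**2)           # heteroskedastic: sd(u|x) = x^2
    y = 2.0 + beta1 * x + u
    xd = x - x.mean()
    b1 = (xd @ y) / (xd @ xd)         # OLS slope estimate
    uhat = y - y.mean() - b1 * xd     # OLS residuals
    sigma2 = (uhat @ uhat) / (n - 2)  # usual error-variance estimate
    slopes.append(b1)
    usual_ses.append(np.sqrt(sigma2 / (xd @ xd)))

print("mean slope estimate:", np.mean(slopes))    # close to 1.0: no bias
print("sd of slope estimates:", np.std(slopes))   # the true sampling spread
print("mean usual se:", np.mean(usual_ses))       # noticeably off the true spread
```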

8.2 Heteroskedasticity-Robust Inference after OLS Estimation
-Because testing hypotheses is a key element of econometrics, we need to obtain accurate standard errors in the presence of heteroskedasticity
-in the last few decades, econometricians have learned how to adjust standard errors when HETEROSKEDASTICITY OF UNKNOWN FORM exists
-these heteroskedasticity-robust procedures are valid (in large samples) regardless of the form of the error variance

8.2 Het Fixing 1
-Given a typical single independent variable model, heteroskedasticity implies a varying variance:

y_i = \beta_0 + \beta_1 x_i + u_i,   Var(u_i | x_i) = \sigma_i^2

-Rewriting the OLS slope estimator, we can obtain a formula for its variance:

Var(\hat{\beta}_1) = \frac{\sum_i (x_i - \bar{x})^2 \sigma_i^2}{SST_x^2}

8.2 Het Fixing 1
-Recall that SST_x = \sum_i (x_i - \bar{x})^2
-Also notice that given homoskedasticity (\sigma_i^2 = \sigma^2 for all i), the variance above collapses to the familiar \sigma^2 / SST_x
-While we don't know \sigma_i^2, White (1980) showed that a valid estimator is:

\widehat{Var}(\hat{\beta}_1) = \frac{\sum_i (x_i - \bar{x})^2 \hat{u}_i^2}{SST_x^2}

-where uhat_i is the ith OLS residual
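A short numerical sketch of these two estimators, computed directly from the formulas above on invented data (all names illustrative):

```python
# Sketch: usual vs. White (1980) variance estimates of the OLS slope.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 5, n)
y = 2.0 + 1.0 * x + rng.normal(0, x)      # heteroskedastic errors

xd = x - x.mean()
sst_x = xd @ xd
b1 = (xd @ y) / sst_x                     # OLS slope
uhat = y - y.mean() - b1 * xd             # OLS residuals

var_usual = ((uhat @ uhat) / (n - 2)) / sst_x   # assumes sigma_i^2 = sigma^2
var_white = (xd**2 @ uhat**2) / sst_x**2        # White's het-valid estimator

print("usual se:", np.sqrt(var_usual))
print("White robust se:", np.sqrt(var_white))
```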

8.2 Het Fixing 1
-Given a multiple independent variable model:

y = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k + u

-The valid estimator of Var(Bjhat) becomes:

\widehat{Var}(\hat{\beta}_j) = \frac{\sum_i \hat{r}_{ij}^2 \hat{u}_i^2}{SSR_j^2}

-where rijhat is the ith residual of a regression of xj on all other x variables
-where SSRj is the sum of squared residuals from that regression

8.2 Het Fixing 1
-The square root of this variance estimate is commonly called the HETEROSKEDASTICITY-ROBUST STANDARD ERROR, but these are also called White, Huber, or Eicker standard errors after their originators
-there are a variety of slight adjustments to this standard error, but economists generally simply use the values reported by their software
-this se adjustment gives us HETEROSKEDASTICITY-ROBUST T STATISTICS:

t = \frac{\hat{\beta}_j - \text{hypothesized value}}{\text{robust } se(\hat{\beta}_j)}
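In practice these are requested from software rather than computed by hand. A sketch using Python's statsmodels (assuming it is available) on invented data; cov_type="HC1" selects one common finite-sample variant of these standard errors (HC0 is White's original formula):

```python
# Sketch: heteroskedasticity-robust standard errors and t statistics
# via statsmodels. HC0 is White's original estimator; HC1-HC3 are
# common finite-sample adjustments.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x1 - 0.3 * x2 + rng.normal(0, np.exp(0.5 * x1), n)

X = sm.add_constant(np.column_stack([x1, x2]))
usual = sm.OLS(y, X).fit()                 # usual (non-robust) inference
robust = sm.OLS(y, X).fit(cov_type="HC1")  # het-robust inference

print("usual se:  ", usual.bse)
print("robust se: ", robust.bse)
print("robust t:  ", robust.tvalues)
```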

8.2 Why Bother with Normal Errors?
-One may ask why we bother with the usual OLS standard errors at all, when heteroskedasticity-robust standard errors are valid more often:
-Under the CLM assumptions, the usual OLS t stats have an exact t distribution, regardless of sample size
-Robust t statistics are valid only in large samples
-Note that HETEROSKEDASTICITY-ROBUST F STATISTICS also exist; they are often called HETEROSKEDASTICITY-ROBUST WALD STATISTICS and are reported by most econometrics programs.

8.3 Testing for Heteroskedasticity
-In this chapter we will cover a variety of modern tests for heteroskedasticity
-It is important to know if heteroskedasticity exists, as its existence means OLS is no longer the BEST estimator and the usual standard errors are invalid
-Note that while other tests for heteroskedasticity exist, the tests presented here are preferred due to their more DIRECT testing for heteroskedasticity

8.3 Testing for Het
-Consider our typical linear model and a null hypothesis suggesting homoskedasticity:

y = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k + u,   H_0: Var(u | x_1, \ldots, x_k) = \sigma^2

-Since MLR.4 gives E(u|X) = 0, we know that Var(u|X) = E(u^2|X), so we can rewrite the null hypothesis to read:

H_0: E(u^2 | x_1, \ldots, x_k) = E(u^2) = \sigma^2

8.3 Testing for Het
-As we are testing whether u^2 is related to any explanatory variables, we can use the linear model:

u^2 = \delta_0 + \delta_1 x_1 + \cdots + \delta_k x_k + v

-where v is an error term with mean zero given the x's
-note that the dependent variable is the SQUARED error
-this changes our null hypothesis to:

H_0: \delta_1 = \delta_2 = \cdots = \delta_k = 0

8.3 Testing for Het
-Since we don't know the true errors of the regression, only the residuals, our estimation becomes:

\hat{u}^2 = \delta_0 + \delta_1 x_1 + \cdots + \delta_k x_k + error

-which is valid in large samples
-The R^2 from the above regression, R^2_{\hat{u}^2}, is used to construct an F statistic:

F = \frac{R^2_{\hat{u}^2} / k}{(1 - R^2_{\hat{u}^2}) / (n - k - 1)}

8.3 Testing for Het
-This test F statistic is compared to a critical F* with k and n - k - 1 degrees of freedom
-If the null hypothesis is rejected, there is evidence to conclude that heteroskedasticity exists at a given α
-If the null hypothesis is not rejected, there is insufficient evidence to conclude that heteroskedasticity exists at a given α
-this is sometimes called the BREUSCH-PAGAN TEST FOR HETEROSKEDASTICITY (BP TEST)

8.3 BP HET TEST
In order to conduct a BP test for het:
1) Run a normal OLS regression (y on the x's) and obtain the square of the residuals, uhat2
2) Run a regression of uhat2 on all independent variables and save the R2
3) Obtain the test F statistic and compare it to the critical F*
4) If F > F*, reject the null hypothesis of homoskedasticity and start correcting for heteroskedasticity
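A sketch of these steps in Python (statsmodels assumed available; data invented for illustration), with the packaged het_breuschpagan shown alongside for comparison:

```python
# Sketch: Breusch-Pagan test, both by the manual steps above and via
# statsmodels' packaged version.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(2)
n = 300
x1, x2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x1 + x2 + rng.normal(0, 1 + 2 * x1, n)

X = sm.add_constant(np.column_stack([x1, x2]))
ols = sm.OLS(y, X).fit()                     # step 1: y on the x's
uhat2 = ols.resid ** 2

aux = sm.OLS(uhat2, X).fit()                 # step 2: uhat^2 on the x's
k = X.shape[1] - 1
F = (aux.rsquared / k) / ((1 - aux.rsquared) / (n - k - 1))  # step 3
print("manual BP F statistic:", F)

lm, lm_pval, fval, f_pval = het_breuschpagan(ols.resid, X)
print("statsmodels BP F:", fval, "p-value:", f_pval)
```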

8.3 BP HET TEST
-If we suspect that our model's heteroskedasticity depends on only certain x variables, only regress uhat2 on those variables
-Keep in mind that the k in the F statistic and in the degrees of freedom comes from the number of independent variables in the uhat2 regression
-An alternative test for het is the White test:

8.3 White Test for Het
-Given the large-sample results covered in Chapter 5, White (1980) proposed another test for heteroskedasticity
-With 3 independent variables, White proposed regressing uhat2 on 9 regressors (the levels, squares, and cross products of the x's):

\hat{u}^2 = \delta_0 + \delta_1 x_1 + \delta_2 x_2 + \delta_3 x_3 + \delta_4 x_1^2 + \delta_5 x_2^2 + \delta_6 x_3^2 + \delta_7 x_1 x_2 + \delta_8 x_1 x_3 + \delta_9 x_2 x_3 + error

-The null hypothesis (homoskedasticity) now sets all δ (except the intercept) equal to zero
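statsmodels also packages the full White test, generating the squares and cross products of the regressors automatically; a sketch on invented data:

```python
# Sketch: the full White test via statsmodels.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(3)
n = 300
x1, x2, x3 = rng.uniform(0, 1, (3, n))
y = 1.0 + x1 + x2 + x3 + rng.normal(0, 1 + x1, n)

X = sm.add_constant(np.column_stack([x1, x2, x3]))
resid = sm.OLS(y, X).fit().resid

# het_white builds the 9 auxiliary regressors (plus intercept) itself
lm, lm_pval, fval, f_pval = het_white(resid, X)
print("White test F:", fval, "p-value:", f_pval)
```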

8.3 White Test for Het
-Unfortunately this test involves MANY regressors (27 regressors for 6 x variables) and as such may have degrees of freedom issues
-one special case of the White test is to estimate the regression:

\hat{u}^2 = \delta_0 + \delta_1 \hat{y} + \delta_2 \hat{y}^2 + error

-this preserves the "squared" concept of the White test and is particularly useful when het is suspected to be connected to the level of the expected value E(y|X)
-the resulting F statistic has 2 and n - 3 degrees of freedom

8.3 Special White HET TEST
In order to conduct a special White test for het:
1) Run a normal OLS regression (y on the x's) and obtain the square of the residuals, uhat2, and the predicted values, yhat
2) Run the regression of uhat2 on both yhat and yhat2 (including an intercept) and record the R2
3) Using this R2, compute a test F statistic as in the BP test (here with 2 and n - 3 degrees of freedom)
4) If F > F*, reject the null hypothesis (homoskedasticity)
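A sketch of the special White test following the steps above, on invented data:

```python
# Sketch: the "special" White test -- regress uhat^2 on yhat and yhat^2
# and form the F statistic with 2 and n - 3 degrees of freedom.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(4)
n = 300
x1, x2 = rng.uniform(0, 1, (2, n))
y = 1.0 + x1 + x2 + rng.normal(0, 1 + 2 * x2, n)

X = sm.add_constant(np.column_stack([x1, x2]))
ols = sm.OLS(y, X).fit()                       # step 1
uhat2, yhat = ols.resid ** 2, ols.fittedvalues

Z = sm.add_constant(np.column_stack([yhat, yhat ** 2]))
r2 = sm.OLS(uhat2, Z).fit().rsquared           # step 2

F = (r2 / 2) / ((1 - r2) / (n - 3))            # step 3
pval = stats.f.sf(F, 2, n - 3)
print("special White F:", F, "p-value:", pval) # step 4: compare to F*
```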

8.3 Heteroskedasticity Note
-Our decision to REJECT the null hypothesis and suspect heteroskedasticity is only valid if MLR.4 is valid
-if MLR.4 is violated (ie: bad functional form or omitted variables), one can reject the null hypothesis even if het doesn't actually exist
-Therefore always choose the functional form and all variables before testing for heteroskedasticity