3.4 The Components of the OLS Variances: Multicollinearity


1 3.4 The Components of the OLS Variances: Multicollinearity
We see in (3.51) that the variance of β̂_j depends on three factors, σ², SST_j, and R_j²:
Var(β̂_j) = σ² / [SST_j(1 − R_j²)]
1) The error variance, σ²
- Larger error variance ⇒ larger OLS variance
- More "noise" in the equation makes it harder to estimate the partial effects of the variables accurately
- The error variance can be reduced by adding (valid) explanatory variables to the equation
(A small simulation illustrating this first factor follows below.)
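- As an illustration of factor 1 (not from the text), the following minimal numpy simulation holds the regressor fixed and doubles the error standard deviation; the sampling variance of β̂_1 roughly quadruples. The data, coefficients, and seed are all made up.

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 200, 2000
    x = rng.normal(size=n)                    # one regressor, held fixed across replications
    X = np.column_stack([np.ones(n), x])

    def beta1_sampling_variance(sigma):
        draws = []
        for _ in range(reps):
            y = 1.0 + 2.0 * x + rng.normal(scale=sigma, size=n)   # true beta1 = 2
            draws.append(np.linalg.lstsq(X, y, rcond=None)[0][1])
        return np.var(draws)

    print("sigma = 1:", beta1_sampling_variance(1.0))
    print("sigma = 2:", beta1_sampling_variance(2.0))   # roughly four times larger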

2 3.4 The Components of the OLS Variances: Multicollinearity
2) The total sample variation in x_j, SST_j
- Larger variation in x_j ⇒ smaller variance of β̂_j
- Increasing the sample size keeps increasing SST_j, since SST_j = Σ_i (x_ij − x̄_j)² is a sum of nonnegative terms
- This still assumes that we have a random sample

3 3.4 The Components of the OLS Variances: Multicollinearity
3) Linear relationships among the x variables: R_j²
- Stronger correlation among the x's ⇒ larger variance of β̂_j
- R_j² is the most difficult component to understand
- R_j² differs from the usual R² in that it measures the goodness of fit of the regression of x_j on all of the other independent variables (plus an intercept), so x_j itself is not treated as an explanatory variable; a code sketch of this auxiliary regression follows below
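- A minimal numpy sketch of that auxiliary regression, with made-up data: regress x_j on the other regressors (plus an intercept) and take the R² of that fit; 1/(1 − R_j²) is the familiar variance inflation factor (VIF).

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    x2 = rng.normal(size=n)
    x3 = rng.normal(size=n)
    x1 = 0.8 * x2 + rng.normal(scale=0.5, size=n)   # x1 is partly driven by x2

    def r_squared_j(xj, others):
        # auxiliary regression of x_j on the other independent variables + intercept
        Z = np.column_stack([np.ones(len(xj))] + others)
        resid = xj - Z @ np.linalg.lstsq(Z, xj, rcond=None)[0]
        return 1.0 - resid.var() / xj.var()

    R1_sq = r_squared_j(x1, [x2, x3])
    print("R_1^2 =", R1_sq, " VIF_1 =", 1.0 / (1.0 - R1_sq))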

4 3.4 The Components of the OLS Variances: Multicollinearity
3) Linear relationships among the x variables: R_j² (continued)
- In general, R_j² is the proportion of the total variation in x_j that is explained by the other independent variables
- If R_j² = 1, MLR.3 (and OLS) fails due to perfect multicollinearity (x_j is a perfect linear combination of the other x's)
- Note that, from (3.51), Var(β̂_j) → ∞ as R_j² → 1
- High (but not perfect) correlation between independent variables is MULTICOLLINEARITY

5 3.4 Multicollinearity
- Note that an R_j² close to 1 DOES NOT violate MLR.3
- Unfortunately, the "problem" of multicollinearity is hard to define: no value of R_j² is generally accepted as being too high
- A high R_j² can always be offset by a large SST_j or a small σ²
- Ultimately, what matters is how big β̂_j is relative to its standard error

6 3.4 Multicollinearity
- Ceteris paribus, it is best to have little correlation between x_j and the other independent variables
- Dropping independent variables will reduce multicollinearity, but if those variables belong in the model, dropping them creates bias
- Multicollinearity can always be fought by collecting more data
- Sometimes multicollinearity is due to over-specifying the independent variables, as in the following example:

7 3.4 Multicollinearity Example
- In a study of heart disease, the economic model is: heart disease = f(fast food, junk food, other factors)
- Unfortunately, R²_fast food is high, indicating a strong correlation between fast food and the other x variables (especially junk food)
- Since fast food and junk food are so highly correlated, their separate effects are difficult to estimate; they are better examined together
- Breaking up variables that could be added together often causes multicollinearity

8 3.4 Multicollinearity
- It is important to note that multicollinearity need not affect ALL of the OLS estimates
- Take the following equation: y = β0 + β1x1 + β2x2 + β3x3 + u
- If x2 and x3 are highly correlated, Var(β̂2) and Var(β̂3) will be large (due to multicollinearity)
- HOWEVER, from (3.51), if x1 is uncorrelated with x2 and x3, then R1² = 0 and Var(β̂1) = σ²/SST1, no matter how correlated x2 and x3 are; a simulation sketch of this point follows below
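- A simulation sketch of this point (hypothetical data, numpy only): x2 and x3 are nearly collinear, but x1 is independent of both, so the sampling variance of β̂_1 stays close to σ²/SST_1.

    import numpy as np

    rng = np.random.default_rng(2)
    n, reps, sigma = 300, 2000, 1.0
    x1 = rng.normal(size=n)                          # independent of x2 and x3
    x2 = rng.normal(size=n)
    x3 = 0.95 * x2 + rng.normal(scale=0.1, size=n)   # x2 and x3 nearly collinear
    X = np.column_stack([np.ones(n), x1, x2, x3])

    draws = []
    for _ in range(reps):
        y = 1 + 0.5 * x1 + x2 + x3 + rng.normal(scale=sigma, size=n)
        draws.append(np.linalg.lstsq(X, y, rcond=None)[0][1])

    SST1 = np.sum((x1 - x1.mean()) ** 2)
    print("simulated Var(beta1_hat):", np.var(draws))
    print("sigma^2 / SST_1         :", sigma**2 / SST1)   # close, despite the x2-x3 collinearity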

9 3.4 Including Variables
- Whether or not to include an independent variable is a balance between bias and variance
- Take the population model y = β0 + β1x1 + β2x2 + u, and let β̂1 come from the regression that includes both variables: ŷ = β̂0 + β̂1x1 + β̂2x2
- Compare this to the regression with x2 omitted: ỹ = β̃0 + β̃1x1
- If the true β2 ≠ 0 and x1 and x2 have ANY correlation, β̃1 is biased
- Focusing on bias alone, β̂1 is preferred

10 3.4 Including Variables
- Considering variance complicates things
- From (3.51), we know that Var(β̂1) = σ² / [SST1(1 − R1²)]
- Modifying a proof from Chapter 2, we also know that Var(β̃1) = σ² / SST1
- It follows that unless x1 and x2 are uncorrelated in the sample (so that R1² = 0), Var(β̃1) is always smaller than Var(β̂1); a small numeric check follows below
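- A small numeric check of this comparison, using made-up values for σ², SST_1, and R_1²:

    sigma2, SST1, R1_sq = 4.0, 1000.0, 0.6
    var_b1_hat = sigma2 / (SST1 * (1 - R1_sq))   # x2 included in the regression
    var_b1_tilde = sigma2 / SST1                 # x2 omitted
    print(var_b1_tilde, "<", var_b1_hat)         # 0.004 < 0.01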

11 3.4 Including Variables
- Obviously, if x1 and x2 are uncorrelated, there is no bias and no multicollinearity
- If x1 and x2 are correlated:
1) If β2 ≠ 0: β̃1 is biased, β̂1 is unbiased, and Var(β̃1) < Var(β̂1)
2) If β2 = 0: β̃1 and β̂1 are both unbiased, and Var(β̃1) < Var(β̂1)
- In the second situation, clearly omit x2: if it has no real impact on y, including it only causes multicollinearity and reduces OLS's efficiency
- Never include irrelevant variables

12 3.4 Including Variables
- In the first case (β2 ≠ 0), leaving x2 out of the model results in a biased estimator of β1
- When the bias is small compared to the variance advantage, econometricians have traditionally omitted x2
- However, two points argue for including x2:
1) The bias does not shrink as n grows, but the variances do
2) The error variance increases when relevant variables are omitted

13 3.4 Including Variables
1) Sample size, bias, and variance
- From the discussion of (3.45), the bias from omitting x2 does not systematically shrink as the sample size grows
- From (3.51), increasing the sample size increases SST_j and therefore decreases Var(β̂_j)
- So one can include x2 (avoiding bias) and fight the resulting multicollinearity by increasing the sample size

14 3.4 Including Variables
2) Error variance and omitted variables
- When x2 is omitted and β2 ≠ 0, the error variance implicit in (3.55) is understated
- Without x2 in the model, the variation due to β2x2 is absorbed into the error variance
- A higher error variance increases the variance of the OLS estimators

15 3.4 Estimating σ²
- In order to obtain an unbiased estimator of Var(β̂_j), we must first find an unbiased estimator of σ²
- Since σ² = E(u²), an unbiased estimator of σ² would be (1/n) Σ_i u_i²
- Unfortunately, this is not a true estimator, because we do not observe the errors u_i

16 3.4 Estimating σ²
- The errors and residuals can be written as u_i = y_i − β0 − β1x_i1 − … − βk x_ik and û_i = y_i − β̂0 − β̂1x_i1 − … − β̂k x_ik
- Therefore a natural estimator of σ² replaces u_i with û_i
- However, as seen in the bivariate case, this leads to bias; there we had to divide by n − 2 to obtain an unbiased estimator

17 3.4 Estimating σ²
- To make our estimator of σ² unbiased, we divide by the degrees of freedom n − k − 1: σ̂² = SSR / (n − k − 1), where k is the number of independent variables
- Notice that in the bivariate case k = 1 and the denominator is n − 2
- Also note that df = n − (k + 1) = (number of observations) − (number of estimated parameters)
(A short code sketch of this estimator follows below.)
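- A sketch of this estimator in numpy with simulated data (the true σ = 1.5 here is made up): fit by least squares, form SSR from the residuals, and divide by n − k − 1.

    import numpy as np

    rng = np.random.default_rng(3)
    n, k = 400, 2
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    y = 1 + 2 * x1 - x2 + rng.normal(scale=1.5, size=n)   # true sigma = 1.5

    X = np.column_stack([np.ones(n), x1, x2])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    SSR = np.sum(resid ** 2)

    sigma2_hat = SSR / (n - k - 1)         # unbiased under MLR.1-MLR.5
    print("sigma_hat^2 =", sigma2_hat)     # should be near 1.5^2 = 2.25
    print("SER         =", np.sqrt(sigma2_hat))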

18 3.4 Estimating σ²
- Technically, n − k − 1 comes from the fact that E(SSR) = (n − k − 1)σ²
- Intuitively, OLS's first order conditions impose Σ_i û_i = 0 and Σ_i x_ij û_i = 0 for j = 1, 2, …, k
- There are therefore k + 1 restrictions on the OLS residuals
- Given any n − (k + 1) of the residuals, the remaining k + 1 can be recovered from these restrictions, so the residuals have only n − k − 1 degrees of freedom; a quick numerical check of the restrictions follows below
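- A quick numerical check of those restrictions (simulated data): stacking the intercept and regressors into X, the vector X'û is zero, which is exactly the k + 1 first order conditions.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 100
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    y = 1 + x1 + x2 + rng.normal(size=n)

    X = np.column_stack([np.ones(n), x1, x2])       # k + 1 = 3 columns
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

    print(np.round(X.T @ resid, 8))   # [0. 0. 0.]: sum(u_hat) = 0 and sum(x_j * u_hat) = 0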

19 Theorem 3.3 (Unbiased Estimation of σ²)
Under the Gauss-Markov assumptions MLR.1 through MLR.5, E(σ̂²) = σ².
Note: The proof requires matrix algebra and is found in Appendix E.

20 Theorem 3.3 Notes
- The positive square root of σ̂², σ̂, is called the STANDARD ERROR OF THE REGRESSION (SER), or the STANDARD ERROR OF THE ESTIMATE
- SER is an estimator of the standard deviation of the error term
- When another independent variable is added to the equation, both SSR and the degrees of freedom fall
- Therefore an additional variable may either increase or decrease SER

21 Theorem 3.3 Notes
- In order to construct confidence intervals and perform hypothesis tests, we need the STANDARD DEVIATION OF β̂_j: sd(β̂_j) = σ / [SST_j(1 − R_j²)]^½
- Since σ is unknown, we replace it with its estimator σ̂, giving the STANDARD ERROR OF β̂_j: se(β̂_j) = σ̂ / [SST_j(1 − R_j²)]^½
(A code sketch checking this formula follows below.)
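- A sketch (simulated data) checking this standard-error formula against the usual matrix expression σ̂²(X'X)⁻¹: the two give the same se(β̂_1) up to floating point error.

    import numpy as np

    rng = np.random.default_rng(5)
    n, k = 500, 2
    x1 = rng.normal(size=n)
    x2 = 0.6 * x1 + rng.normal(size=n)               # x1 and x2 correlated
    y = 1 + 2 * x1 + 3 * x2 + rng.normal(size=n)

    X = np.column_stack([np.ones(n), x1, x2])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2_hat = np.sum(resid ** 2) / (n - k - 1)

    # component formula: se(beta1_hat) = sigma_hat / sqrt(SST_1 * (1 - R_1^2))
    SST1 = np.sum((x1 - x1.mean()) ** 2)
    Z = np.column_stack([np.ones(n), x2])
    r1 = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]
    R1_sq = 1 - r1.var() / x1.var()
    se_formula = np.sqrt(sigma2_hat / (SST1 * (1 - R1_sq)))

    # matrix form: sqrt of the beta1 diagonal entry of sigma_hat^2 * (X'X)^{-1}
    se_matrix = np.sqrt(sigma2_hat * np.linalg.inv(X.T @ X)[1, 1])
    print(se_formula, se_matrix)   # identical up to rounding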

22 3.4 Standard Error Notes
- Since the standard error depends on σ̂, it has a sampling distribution of its own
- Furthermore, the standard error comes from the variance formula (3.51), which relies on homoskedasticity (MLR.5)
- While heteroskedasticity does not cause bias in β̂_j, it does change its variance, so the usual standard errors become biased
- Chapter 8 covers how to correct for heteroskedasticity

23 3.5 Efficiency of OLS - BLUE
- MLR.1 through MLR.4 show that OLS is unbiased, but many other unbiased estimators exist
- HOWEVER, under MLR.1 through MLR.5, the OLS estimator β̂_j of β_j is BLUE: the Best Linear Unbiased Estimator

24 3.5 Efficiency of OLS - BLUE
Estimator
- OLS is an estimator because "it is a rule that can be applied to any sample of data to produce an estimate"
Unbiased
- Since the OLS estimator has the property E(β̂_j) = β_j for all j, OLS is unbiased

25 3.5 Efficiency of OLS - BLUE
Linear
- The OLS estimators are linear because β̂_j can be expressed as a linear function of the data on the dependent variable: β̂_j = Σ_i w_ij y_i, where each weight w_ij is a function of the independent variables only
- This is evident from equation (3.22); a sketch verifying it numerically follows below
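- A numpy sketch of this linearity (simulated data): partial x1 out of the other regressors, build weights from the resulting residuals, and check that β̂_1 equals Σ_i w_i1 y_i as in (3.22).

    import numpy as np

    rng = np.random.default_rng(6)
    n = 200
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    y = 1 + 2 * x1 + 3 * x2 + rng.normal(size=n)

    X = np.column_stack([np.ones(n), x1, x2])
    beta1_ols = np.linalg.lstsq(X, y, rcond=None)[0][1]

    # weights built only from the independent variables
    Z = np.column_stack([np.ones(n), x2])
    r1 = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]
    w = r1 / np.sum(r1 ** 2)
    print(beta1_ols, w @ y)   # the two agree: beta1_hat is linear in the y data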

26 3.5 Efficiency of OLS - BLUE
Best
- OLS is best because it has the smallest variance of all linear unbiased estimators
- The Gauss-Markov theorem states that, under assumptions MLR.1 through MLR.5, for any other estimator β̃_j that is linear and unbiased, Var(β̂_j) ≤ Var(β̃_j), and the inequality is usually strict

27 Theorem 3.4 (Gauss-Markov Theorem)
Under assumptions MLR.1 through MLR.5, β̂_0, β̂_1, …, β̂_k are, respectively, the best linear unbiased estimators (BLUEs) of β_0, β_1, …, β_k.

28 Theorem 3.4 Notes
- If our assumptions hold, no linear unbiased estimator is a better choice than OLS
- Any other linear unbiased estimator we find will have a variance at least as large as OLS's
- If MLR.4 fails, OLS is biased and Theorem 3.4 fails
- If MLR.5 (homoskedasticity) fails, OLS is still unbiased but no longer has the smallest variance: it is merely LUE, not BLUE

