
1 MADE WHAT IF SOME OLS ASSUMPTIONS ARE NOT FULFILLED?

2 MADE QUIZ ONCE AGAIN
What are the main OLS assumptions?
1. On average right (error term has zero mean)
2. Linear
3. Explanatory variables and error term uncorrelated
4. No serial correlation in the error term
5. Homoscedasticity
+ Normality of the error term

3 MADE QUIZ ONCE AGAIN
Do we know the error term? Do we know the coefficients? How can we know whether all the assumptions are fulfilled?
1. On average right => ???
2. Linearity => ???
3. X and ε uncorrelated => ???
4. ε serially uncorrelated => ???
5. ε homoscedastic => ???

4 MADE FUNCTIONAL FORM
We assumed a certain functional form. What if in reality the relation exists, but has a different form?
Any function can be approximated by a Taylor expansion. Consequently:
y = xβ + γ₁x² + γ₂x³ + … + ε
So what we need to test is that the γ's are all zero in this expanded regression.

5 MADE FUNCTIONAL FORM – cont.
This is easy, because we already know how to do it. But there could be a lot of these terms (time consuming).
Ramsey came up with the idea that if the γ's are all zero, then powers of the fitted y's should have no explanatory power for the actual y's. So instead of testing whether the γ's on all the extra terms are zero, we test whether the coefficients on powers of the fitted y's are zero.
RESET TEST (a sketch follows below):
– Estimate your model
– Find the fitted y's (ŷ)
– Run a model with your xβ and the powers of the fitted y's (ŷ², ŷ³, …)
– Test the hypothesis that the coefficients on the powers of the fitted y's are all zero
– If you reject the null (i.e. that they are all zero), you have a functional form misspecification problem => you cannot say if your b's are correct estimates of the β's.
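A minimal sketch of the RESET test with statsmodels' linear_reset; the data are simulated and the quadratic true relation is an assumption chosen so the test should reject:

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import linear_reset

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(size=200)  # true relation is quadratic

res = sm.OLS(y, sm.add_constant(x)).fit()        # deliberately misspecified linear model
reset = linear_reset(res, power=3, use_f=True)   # adds yhat^2 and yhat^3 to the regression
print(reset.pvalue)  # small p-value => reject H0 => functional form problem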

6 MADE NORMALITY OF THE ERROR
We typically assume that the ε's have a normal distribution N(0, σ²). This helps us to derive the t and F distributions.
Although we never know the ε's, we get the residuals e, which should be consistent with the ε's (so the same distribution). We can test if the distribution of the e's is far from normal.
Jarque-Bera:
– Compute the skewness and kurtosis of the residuals
– Compare them to the values for a normal distribution (skewness 0, kurtosis 3)
– The null says that they are alike; if you reject the null, you reject normality of the residuals
Does that hurt?
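A quick Jarque-Bera check on residuals with statsmodels; the fat-tailed t-distributed errors are a simulated assumption chosen so the test should reject:

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import jarque_bera

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=500)  # fat-tailed, non-normal errors

res = sm.OLS(y, sm.add_constant(x)).fit()
jb_stat, jb_pvalue, skew, kurtosis = jarque_bera(res.resid)
print(jb_stat, jb_pvalue, skew, kurtosis)  # small p-value => reject normality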

7 MADE STABILITY OF THE PARAMETERS
We typically assume that the β's do not depend on the size of the X's (in other words, the relation between the X's and y's is stable). However, we can actually have many subsamples. Then what?
Look at your "dots" (the scatter plot), see if such a thing occurs, and run the Chow test (just as in LAB); a sketch follows below.
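A hand-rolled sketch of the Chow test (statsmodels has no single built-in function for it); the break point and variable names are illustrative assumptions:

import numpy as np
import statsmodels.api as sm
from scipy import stats

def chow_test(y, X, break_idx):
    """F-test that the coefficients are equal in the two subsamples split at break_idx."""
    rss_pooled = sm.OLS(y, X).fit().ssr
    rss_1 = sm.OLS(y[:break_idx], X[:break_idx]).fit().ssr
    rss_2 = sm.OLS(y[break_idx:], X[break_idx:]).fit().ssr
    k = X.shape[1]                      # number of estimated parameters
    n = len(y)
    f = ((rss_pooled - rss_1 - rss_2) / k) / ((rss_1 + rss_2) / (n - 2 * k))
    return f, stats.f.sf(f, k, n - 2 * k)   # statistic and p-value

# usage: f, p = chow_test(y, sm.add_constant(x), break_idx=100)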

8 MADE NO AUTOCORRELATION (AT LEAST NOT SERIAL)
We also assume that subsequent error terms are independent of each other. Assume that this does not hold, so that E(εₜεₛ) ≠ 0 for some t ≠ s. What then?
– Our estimators are still unbiased: b = β + (X'X)⁻¹X'ε, so E(b) = β
– We can also show they are consistent
– But the problem occurs with the estimators of the variance of the estimators (which we need to test significance hypotheses)

9 MADE NO AUTOCORRELATION (AT LEAST NOT SERIAL)
Consequently, our estimators of the standard errors are incorrect. We cannot trust our t-statistics any more!
KEEP THAT IN MIND, WE'LL COME BACK TO IT IN A SECOND

10 MADE HOW DO WE GET AUTOCORRELATION?
What we need in the error term is white noise.

11 MADE HOW DO WE GET AUTOCORRELATION?
Positive autocorrelation (rare changes of sign).

12 MADE HOW DO WE GET AUTOCORRELATION?
Negative autocorrelation (frequent changes of sign).
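To see the "rare vs frequent sign changes" patterns from the last two slides, here is a small simulation assuming AR(1) errors eₜ = ρ·eₜ₋₁ + uₜ:

import numpy as np

def ar1(rho, n=100, seed=0):
    rng = np.random.default_rng(seed)
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal()
    return e

for rho in (0.9, -0.9):
    e = ar1(rho)
    sign_changes = np.sum(np.diff(np.sign(e)) != 0)
    print(rho, sign_changes)  # few sign changes for rho=0.9, many for rho=-0.9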

13 MADE HOW DO WE GET AUTOCORRELATION?
Model misspecification can give it to you for free.

14 MADE NO HETEROSCEDASTICITY
We also assume that error terms do not depend in size on the size of the X's. Assume that this does not hold, so that Var(εᵢ) = σᵢ² varies across observations. What then?
– Our estimators are still unbiased: b = β + (X'X)⁻¹X'ε, so E(b) = β
– We can also show they are consistent
– But the problem occurs with the estimators of the variance of the estimators (which we need to test significance hypotheses)

15 MADE HOW DO WE GET HETEROSCEDASTICITY?
What we need is error terms independent of the SIZE of X.

16 MADE HOW DOES THE V MATRIX LOOK?
We know that V = E(εε'), which under the classical assumptions equals σ²I.
V deviating from σ²I covers both heteroscedasticity and autocorrelation. But aren't there any differences?
– Heteroscedasticity is about the diagonal (the values along the diagonal differ, while they should all be the same)
– Autocorrelation is about what happens outside the diagonal (the off-diagonal elements should be zero, and they deviate from that)
A toy illustration of the two shapes follows below.
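A small numpy sketch of the two V shapes, assuming T = 4 observations and an AR(1) correlation with ρ = 0.5 for the autocorrelation case:

import numpy as np

T = 4
# Heteroscedasticity: diagonal entries differ, off-diagonal entries stay zero.
V_hetero = np.diag([1.0, 2.0, 3.0, 4.0])
# Autocorrelation (AR(1), rho = 0.5): constant diagonal, non-zero entries
# decaying with the distance from the diagonal.
rho = 0.5
V_auto = rho ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
print(V_hetero)
print(V_auto)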

17 MADE Testing for hetero
Breusch-Pagan approach (a statsmodels sketch follows below):
– The alternative hypothesis assumes that σᵢ² = σ²·f(zᵢ), where f(.) is continuous
– Run your model yᵢ = xᵢβ + εᵢ
– Run the regression of the squared residuals e² on any set of variables (x, y, whatever)
– Use n·R² from this auxiliary regression (it has a χ² distribution with p dof, where p is the no. of variables in the auxiliary regression)
– Test:
H0 – no heteroscedasticity in this form (does not say NO heteroscedasticity in general!)
H1 – heteroscedasticity in the assumed form
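A minimal Breusch-Pagan sketch with statsmodels; the data are simulated so that the error variance grows with x by construction:

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(2)
x = rng.uniform(1, 10, size=300)
y = 1.0 + 2.0 * x + rng.normal(scale=x)   # error standard deviation grows with x

X = sm.add_constant(x)
res = sm.OLS(y, X).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(res.resid, X)
print(lm_stat, lm_pvalue)  # small p-value => reject H0 of homoscedasticity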

18 MADE Testing for hetero
White approach (a statsmodels sketch follows below):
– Heteroscedasticity occurs because some interrelations between the x's are not accounted for
– Run your model yᵢ = xᵢβ + εᵢ
– Run the regression of the squared residuals e² on the x's, their squares and their cross-products (interactions)
– Use n·R² from this auxiliary regression (it has a χ² distribution with dof equal to the number of regressors in the auxiliary regression)
– Test:
H0 – no heteroscedasticity in this form (does not say NO heteroscedasticity in general! this form is rather general though)
H1 – heteroscedasticity in the assumed form
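The same idea via statsmodels' het_white, which builds the squares and cross-products for you; the two-regressor setup is a simulated assumption:

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(3)
x1 = rng.uniform(1, 10, size=300)
x2 = rng.uniform(1, 10, size=300)
y = 1.0 + x1 + x2 + rng.normal(scale=x1 * x2 / 10)  # variance driven by an interaction

X = sm.add_constant(np.column_stack([x1, x2]))
res = sm.OLS(y, X).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(res.resid, X)
print(lm_stat, lm_pvalue)  # small p-value => reject H0 of homoscedasticity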

19 MADE Testing for auto
Durbin-Watson approach
– The alternative hypothesis states that there is autocorrelation of order 1 (two neighbouring ε's are correlated)
– Run your model yᵢ = xᵢβ + εᵢ
– Get your residuals e
– Compute the statistic on these residuals: DW = Σₜ₌₂ (eₜ − eₜ₋₁)² / Σₜ eₜ² ≈ 2(1 − ρ̂)

20 MADE Testing for auto
Durbin-Watson approach continued (a statsmodels sketch follows below)
– If there is no (or weak) autocorrelation, ρ̂ would be equal (or close) to 0, so the whole statistic would be close to 2.
– If DW < 2: below the lower critical bound d_L – positive autocorrelation; between d_L and d_U – inconclusive; above d_U – no autocorrelation
– If DW > 2: apply the same bounds symmetrically to 4 − DW: 4 − DW below d_L – negative autocorrelation; between d_L and d_U – inconclusive; above d_U – no autocorrelation
– YOU CAN USE IT EVEN ON SMALL SAMPLES
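A minimal Durbin-Watson sketch with statsmodels; the AR(1) errors with ρ = 0.8 are a simulated assumption, so the statistic should land well below 2:

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(4)
n = 200
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):                       # AR(1) errors, rho = 0.8
    e[t] = 0.8 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

res = sm.OLS(y, sm.add_constant(x)).fit()
print(durbin_watson(res.resid))  # well below 2 => positive autocorrelation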

21 MADE Testing for auto
Breusch-Godfrey approach (a statsmodels sketch follows below):
– There is autocorrelation of order s (the s closest ε's are correlated)
– Run your model yᵢ = xᵢβ + εᵢ
– Get your residuals e
– Run the auxiliary regression on the lagged residuals in the form: eₜ = xₜγ + λ₁eₜ₋₁ + λ₂eₜ₋₂ + … + λₛeₜ₋ₛ + uₜ
– Test the hypothesis that your λ's are all zero
– The nice part is that T·R² (where T is the number of observations) of this auxiliary regression allows us to test this as a combined hypothesis with a χ² distribution with s dof, where s is the no. of lags you take into account.
– MUCH NICER THAN DW, BUT REQUIRES BIG SAMPLES!
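A minimal Breusch-Godfrey sketch with statsmodels; the AR(1) error process and the choice of s = 2 lags are illustrative assumptions:

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(5)
n = 300
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):                       # AR(1) errors, rho = 0.6
    e[t] = 0.6 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

res = sm.OLS(y, sm.add_constant(x)).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = acorr_breusch_godfrey(res, nlags=2)
print(lm_stat, lm_pvalue)  # T*R^2 statistic, chi2 with s = 2 dof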

22 MADE CONCLUSIONS ABOUT AUTO AND HETERO
What they both mean is that you can no longer trust the estimates of the standard errors. You can still trust the estimators of your coefficients, but you cannot test whether they are non-zero (no valid hypothesis testing).
If you have autocorrelation but a very big sample, you are asymptotically OK, so you need not worry. A big sample does not help for heteroscedasticity, though. In a small sample autocorrelation cannot be eliminated either.
What we have as a response is the GENERALISED LEAST SQUARES (GLS) estimator => GLS plays the same role as OLS; it only helps to overcome the misestimation of the standard errors. If there are no problems with auto and hetero, GLS is less efficient than OLS (do not overuse it!). A sketch follows below.
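A minimal GLS sketch with statsmodels under assumed AR(1) errors: GLSAR estimates the AR coefficient and the model iteratively. As an alternative design choice, one can keep the OLS coefficients and only correct the standard errors (Newey-West / HAC); both are shown:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 300
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):                       # AR(1) errors, rho = 0.6
    e[t] = 0.6 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

X = sm.add_constant(x)
gls_res = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=10)  # feasible GLS with AR(1)
print(gls_res.params, gls_res.bse)

# Alternative: keep OLS coefficients, fix only the standard errors.
ols_hac = sm.OLS(y, X).fit(cov_type='HAC', cov_kwds={'maxlags': 4})
print(ols_hac.bse)  # Newey-West (HAC) standard errors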

