CHAPTER 4 ECONOMETRICS
Multiple Regression = more than one explanatory variable. The independent variables are X_2 and X_3.
Y_i = B_1 + B_2 X_2i + B_3 X_3i + u_i
X_2i is the i-th observation of X_2.
B_2 and B_3 are partial regression coefficients.
Y_i = B_1 + B_2 X_2i + B_3 X_3i + u_i
B_2 measures the change in E(Y) for a one-unit change in X_2, holding the value of X_3 constant.
Y_i = b_1 + b_2 X_2i + b_3 X_3i + e_i
is the sample regression function, with b_1, b_2, b_3 as the parameter estimates.
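A minimal sketch of fitting such a sample regression function by OLS, using NumPy on synthetic data (the sample size and "true" parameter values below are made up for illustration and are not from the chapter):

```python
# Minimal sketch (not from the chapter): fitting Y_i = b_1 + b_2*X_2i + b_3*X_3i + e_i
# by ordinary least squares on synthetic data with NumPy.
import numpy as np

rng = np.random.default_rng(0)
n = 50
X2 = rng.normal(10.0, 2.0, n)              # first explanatory variable (synthetic)
X3 = rng.normal(5.0, 1.0, n)               # second explanatory variable (synthetic)
u = rng.normal(0.0, 1.0, n)                # disturbance term
Y = 2.0 + 1.5 * X2 - 0.8 * X3 + u          # assumed true values: B1 = 2.0, B2 = 1.5, B3 = -0.8

X = np.column_stack([np.ones(n), X2, X3])  # design matrix with an intercept column
b, *_ = np.linalg.lstsq(X, Y, rcond=None)  # OLS estimates b1, b2, b3
print("b1, b2, b3 =", np.round(b, 2))
```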
Estimating the impact of GDP and population on education expenditures:
Educ_i = 414 + 0.052 GDP_i − 50 Pop_i
Holding population fixed, education spending increases 5.2¢ for every $1 of GDP.
Educ_i = −161 + 0.048 GDP_i
GDP and population are correlated. When we don't control for population, part of the population effect gets picked up by GDP.
Estimating the impact of GDP and population on education expenditures:
Educ_i = 414 + 0.052 GDP_i − 50 Pop_i
Holding GDP fixed, education spending decreases $50 for each additional person.
Educ_i = 2,946 + 78.7 Pop_i
When we don't control for GDP, population picks up the GDP effect.
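The shift in the coefficients above is the usual omitted-variable pattern. The following sketch reproduces the phenomenon on synthetic data; the names, coefficients, and sample size are illustrative assumptions, not the chapter's education data:

```python
# Sketch (synthetic data, not the chapter's education data): when X2 and X3 are
# correlated, dropping X3 shifts part of its effect into the coefficient on X2.
import numpy as np

rng = np.random.default_rng(1)
n = 200
X3 = rng.normal(0.0, 1.0, n)
X2 = 0.8 * X3 + rng.normal(0.0, 0.6, n)       # X2 is correlated with X3
Y = 1.0 + 2.0 * X2 + 3.0 * X3 + rng.normal(0.0, 1.0, n)

full = np.column_stack([np.ones(n), X2, X3])  # controls for X3
short = np.column_stack([np.ones(n), X2])     # omits X3
b_full, *_ = np.linalg.lstsq(full, Y, rcond=None)
b_short, *_ = np.linalg.lstsq(short, Y, rcond=None)
print("with X3:    b2 =", round(b_full[1], 2))   # close to the true 2.0
print("without X3: b2 =", round(b_short[1], 2))  # biased upward, picks up X3's effect
```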
The Classical Linear Regression Model: one more assumption.
8. No exact linear relationship between the explanatory variables, i.e. no perfect multicollinearity.
Example of multicollinearity. Linear relationship: X_2 = X_3 + X_4, where
X_2 = population of the state
X_3 = female population of the state
X_4 = male population of the state
Second example of multicollinearity. Linear relationship: X_2 = 1 − X_3, where
X_2 = % females in the state
X_3 = % males in the state
Perfect collinearity is rare; regression software reports an error message if it happens. Regression is possible with high (but not perfect) collinearity, but caution is needed when interpreting the coefficients.
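To make the consequence concrete, here is a small sketch on hypothetical data: with X_2 = 1 − X_3 the design matrix loses a rank, which is why the individual coefficients cannot be identified.

```python
# Sketch (hypothetical data): with perfect collinearity (X2 = 1 - X3) the design
# matrix is rank-deficient, so the individual coefficients are not identified.
import numpy as np

rng = np.random.default_rng(2)
n = 30
X3 = rng.uniform(0.45, 0.55, n)   # e.g. share of males in each state
X2 = 1.0 - X3                     # share of females: an exact linear function of X3
X = np.column_stack([np.ones(n), X2, X3])

print("columns:", X.shape[1], " rank:", np.linalg.matrix_rank(X))  # rank 2 < 3 columns
print("corr(X2, X3):", round(np.corrcoef(X2, X3)[0, 1], 2))        # -1, up to rounding
```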
Estimation of Parameters
The procedures for estimating the parameters by OLS are the same (the equations just become more complicated). Standard errors of the estimators are calculated in much the same way. We estimate the variance of the disturbance term in the population from the residuals in the sample:
σ̂² = Σ e_i² / (n − k)
where k is the number of coefficients estimated.
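A short sketch of this variance estimate on synthetic data (the sample size, coefficients, and true disturbance standard deviation below are assumptions for the example):

```python
# Sketch: estimating the disturbance variance from the sample residuals,
# sigma_hat^2 = sum(e_i^2) / (n - k), on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n, k = 40, 3                                   # k = number of estimated coefficients
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
Y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(0.0, 2.0, n)   # assumed true sigma = 2

b, *_ = np.linalg.lstsq(X, Y, rcond=None)
e = Y - X @ b                                  # residuals
sigma2_hat = (e @ e) / (n - k)
print("sigma^2 estimate:", round(sigma2_hat, 2))  # should be near 4 (= 2 squared)
```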
Estimating Goodness of Fit
As before, R² is used as a measure of goodness of fit: R² = ESS / TSS.
Hypothesis Testing
Testing the null hypothesis that B_i = 0 is the same as before, except that df = n − k.
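For completeness, a sketch of the R² = ESS / TSS calculation on synthetic data (all numbers below are made up for illustration):

```python
# Sketch: computing R^2 = ESS / TSS from a fitted regression (synthetic data).
import numpy as np

rng = np.random.default_rng(4)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
Y = X @ np.array([2.0, 1.0, -1.0]) + rng.normal(0.0, 1.0, n)

b, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_hat = X @ b                              # fitted values
TSS = np.sum((Y - Y.mean()) ** 2)          # total sum of squares
ESS = np.sum((Y_hat - Y.mean()) ** 2)      # explained sum of squares
print("R^2 =", round(ESS / TSS, 3))
```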
The test of significance approach to hypothesis testing.
Educ_i = 414 + 0.052 GDP_i − 50 Pop_i
Test statistic: t = b_1 / se(b_1) = 414 / 267 = 1.55
p = TDIST(t, df, tails)
1 tail: p = 0.065; 2 tails: p = 0.13
[Figure: t distribution with the observed values t = −1.55 and t = 1.55 marked in the tails.]
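The TDIST p-values can be reproduced with SciPy's t distribution; in this sketch, df = 35 is taken from the chapter's later F test, where n − k = 35:

```python
# Sketch: reproducing the slide's TDIST p-values with SciPy's t distribution.
from scipy import stats

t = 414 / 267                        # t = b1 / se(b1) = 1.55
df = 35                              # degrees of freedom, n - k
p_one_tail = stats.t.sf(t, df)       # upper-tail probability, like TDIST(t, df, 1)
p_two_tail = 2 * p_one_tail          # like TDIST(t, df, 2)
print(round(t, 2), round(p_one_tail, 3), round(p_two_tail, 2))  # 1.55, ~0.065, ~0.13
```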
Testing the Joint Hypothesis that B_2 = B_3 = 0
Testing that all the coefficients* are equal to zero is the same as testing that R² = 0. (* Not necessarily the intercept, B_1.)
F = [R² / (k − 1)] / [(1 − R²) / (n − k)]
F follows the F distribution with (k − 1) df in the numerator and (n − k) df in the denominator.
From the regression of education expenditures on GDP and population (R² = 0.962):
F = (0.962 / 2) / (0.038 / 35) = 443.0*
p = FDIST(F, numerator df, denominator df)
p = FDIST(443, 2, 35) = 1.6E-25 ≈ 0.000
* Note: This number is reported in standard regression output.
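A sketch that reproduces the F statistic and p-value with SciPy (n = 38 and k = 3 are inferred from the slide's degrees of freedom, since n − k = 35):

```python
# Sketch: the joint F test from the chapter's numbers, with the p-value computed
# from SciPy's F distribution (the equivalent of Excel's FDIST).
from scipy import stats

R2 = 0.962
n, k = 38, 3                             # so n - k = 35, as in the slide's FDIST call
F = (R2 / (k - 1)) / ((1 - R2) / (n - k))
p = stats.f.sf(F, k - 1, n - k)          # upper-tail probability of F with (2, 35) df
print(round(F, 1), p)                    # about 443.0; p is on the order of 1e-25
```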
Adjusted R²
Adjusted R² is a goodness-of-fit measure that is adjusted for the number of explanatory variables. R² always increases as you add explanatory variables; adjusted R² does not.
Adjusted R² = 1 − (1 − R²)(n − 1) / (n − k)
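A quick check of the formula with the chapter's R² = 0.962 and the same n and k as in the F test sketch above:

```python
# Sketch: adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - k), using R^2 = 0.962 and the
# n = 38, k = 3 implied by the chapter's degrees of freedom.
R2 = 0.962
n, k = 38, 3
adj_R2 = 1 - (1 - R2) * (n - 1) / (n - k)
print(round(adj_R2, 3))   # about 0.96, slightly below the unadjusted 0.962
```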