1 FIN357 (Li) Multiple Regression Analysis: y = β0 + β1x1 + β2x2 + … + βkxk + u. 1. Estimation

2 Similar to Simple Regression
β0 is still the intercept.
β1 through βk are all called slope parameters.
u is still the error term (or disturbance).
We still assume that E(u|x1, x2, …, xk) = 0.
We still minimize the sum of squared residuals, so there are k+1 first-order conditions.
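The minimization described above can be checked numerically. The sketch below (simulated data and variable names are mine, not from the slides) solves the k+1 normal equations directly with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 2
X = rng.normal(size=(n, k))                 # regressors x1, x2
u = rng.normal(size=n)                      # error term (disturbance)
beta_true = np.array([1.0, 2.0, -0.5])      # beta0, beta1, beta2
y = beta_true[0] + X @ beta_true[1:] + u

# Design matrix with a column of ones for the intercept.
Z = np.column_stack([np.ones(n), X])

# Minimizing the sum of squared residuals yields the k+1 normal
# equations Z'Z beta = Z'y; solve them for the OLS estimates.
beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ y)
print(beta_hat)   # close to [1.0, 2.0, -0.5]
```

With n = 500 the estimates land close to the true parameters, illustrating the unbiasedness discussed later in the slides.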

3 Interpreting the Coefficient

4 Simple vs. Multiple Regression Estimates
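The formula on this slide did not survive the transcript. The standard result it refers to is that the simple-regression slope on x1 equals the multiple-regression slope plus β̂2 times the slope from regressing x2 on x1. A sketch with simulated data (my own, not from the slides) verifies this algebraic identity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)          # x2 correlated with x1
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

# Multiple regression of y on (1, x1, x2).
Z = np.column_stack([np.ones(n), x1, x2])
b0, b1, b2 = np.linalg.solve(Z.T @ Z, Z.T @ y)

# Simple regression of y on (1, x1).
S = np.column_stack([np.ones(n), x1])
a0, a1 = np.linalg.solve(S.T @ S, S.T @ y)

# Auxiliary regression of x2 on (1, x1).
d0, d1 = np.linalg.solve(S.T @ S, S.T @ x2)

# Identity: simple slope = multiple slope + b2 * (slope of x2 on x1).
print(a1, b1 + b2 * d1)   # the two numbers agree
```

The identity holds exactly in any sample, which is why omitting a correlated regressor shifts the estimated slope by b2 * d1.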

5 Goodness-of-Fit

6 Goodness-of-Fit (continued)
We can compute the fraction of the total sum of squares (SST) that is explained by the model:
R² = SSE/SST = 1 - SSR/SST
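The two expressions for R² above are equivalent whenever the regression includes an intercept, because then SST = SSE + SSR. A quick NumPy check (simulated data and names are mine, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 2))
y = 1.0 + X @ np.array([2.0, -1.0]) + rng.normal(size=n)

Z = np.column_stack([np.ones(n), X])
beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ y)
y_hat = Z @ beta_hat
resid = y - y_hat

SST = np.sum((y - y.mean()) ** 2)       # total sum of squares
SSE = np.sum((y_hat - y.mean()) ** 2)   # explained sum of squares
SSR = np.sum(resid ** 2)                # residual sum of squares

r2_a = SSE / SST
r2_b = 1 - SSR / SST
print(r2_a, r2_b)   # identical (up to rounding) with an intercept
```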

7 Too Many or Too Few Variables
What happens if we include variables in our specification that don't belong?
The OLS estimators remain unbiased. Our tests are still valid, but less powerful: t-statistics may be smaller in magnitude, making them less likely to detect a significant relationship.
What if we exclude a variable from our specification that does belong?
The OLS estimators will usually be biased.

8 Omitted Variable Bias

9 Omitted Variable Bias (cont.)

10 Omitted Variable Bias Summary
Two cases in which a simple regression produces no bias:
β2 = 0, that is, x2 does not really belong in the model.
x1 and x2 are uncorrelated in the sample.
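A small Monte Carlo sketch makes the two cases concrete (the simulation design is mine, not from the slides): omitting a correlated x2 biases the simple-regression slope on x1, while omitting an independent x2 does not.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 200, 2000

def simple_slope(x, y):
    # OLS slope from regressing y on x (with an intercept).
    x_c = x - x.mean()
    return np.dot(x_c, y) / np.dot(x_c, x_c)

est_corr, est_uncorr = [], []
for _ in range(reps):
    x1 = rng.normal(size=n)

    # Case A: x2 correlated with x1; omitting it biases the slope.
    x2 = 0.5 * x1 + rng.normal(size=n)
    y = 2.0 * x1 + 1.0 * x2 + rng.normal(size=n)
    est_corr.append(simple_slope(x1, y))

    # Case B: x2 independent of x1; omitting it causes no bias.
    x2i = rng.normal(size=n)
    yi = 2.0 * x1 + 1.0 * x2i + rng.normal(size=n)
    est_uncorr.append(simple_slope(x1, yi))

mean_corr = np.mean(est_corr)
mean_uncorr = np.mean(est_uncorr)
print(mean_corr)     # roughly 2.5 = 2.0 + 1.0 * 0.5 (biased)
print(mean_uncorr)   # roughly 2.0 (unbiased)
```

The bias in Case A equals β2 times the population slope of x2 on x1, matching the decomposition on the earlier "Simple vs. Multiple" slide.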

11 Assumptions for Unbiasedness
The population model is linear in parameters: y = β0 + β1x1 + β2x2 + … + βkxk + u.
We use a random sample of size n, {(xi1, xi2, …, xik, yi): i = 1, 2, …, n}, from the population.
E(u|x1, x2, …, xk) = 0.
None of the x's is constant, and there are no exact linear relationships among the x's.

12 Given the above four assumptions, the OLS estimators of the coefficients are unbiased.

13 Variance of the OLS Estimators
Assume Var(u|x1, x2, …, xk) = σ² (homoskedasticity).
The four assumptions for unbiasedness (previous slide), plus this homoskedasticity assumption, are known as the Gauss-Markov assumptions.
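Under homoskedasticity, σ² can be estimated by SSR/(n - k - 1), and the sampling variance of the OLS estimator is σ̂²(Z'Z)⁻¹. A sketch under those assumptions (simulated data and names are mine, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 300, 2
sigma = 1.5
X = rng.normal(size=(n, k))
y = 1.0 + X @ np.array([2.0, -1.0]) + sigma * rng.normal(size=n)

Z = np.column_stack([np.ones(n), X])
beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ y)
resid = y - Z @ beta_hat

# Unbiased estimator of the error variance under homoskedasticity:
# sigma^2_hat = SSR / (n - k - 1), with k + 1 estimated parameters.
sigma2_hat = np.sum(resid ** 2) / (n - k - 1)

# Var(beta_hat) = sigma^2_hat * (Z'Z)^{-1}; standard errors are the
# square roots of its diagonal entries.
var_beta = sigma2_hat * np.linalg.inv(Z.T @ Z)
se = np.sqrt(np.diag(var_beta))
print(sigma2_hat)   # near sigma**2 = 2.25
print(se)           # one standard error per coefficient
```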

14 The Gauss-Markov Theorem
Given our five Gauss-Markov assumptions, it can be shown that OLS is "BLUE": the Best Linear Unbiased Estimator. Thus, if the assumptions hold, use OLS.

