
1 Lecture 5: Regression with One Explanator (Chapter 3.1–3.5, 3.7; Chapter 4.1–4.4)

2 Agenda
– Finding a good estimator for a straight line through the origin: Chapter 3.1–3.5, 3.7
– Finding a good estimator for a straight line with an intercept: Chapter 4.1–4.4

3 Where Are We?
We wish to uncover quantitative features of an underlying process, such as the relationship between family income and financial aid. How much less aid will I receive on average for each dollar of additional family income? We have data, a sample of the process: for example, observations on 10,000 students’ aid awards and family incomes.

4 Where Are We? (cont.)
Other factors (ε), such as number of siblings, influence any individual student’s aid, so we cannot directly observe the relationship between income and aid. We need a rule for making a good guess about the relationship between income and financial aid, based on the data.

5 Where Are We? (cont.)
A good guess is a guess that is right on average. We also desire a guess that will have a low variance around the true value.

6 Where Are We? (cont.)
Our rule is called an “estimator.” We started by brainstorming a number of estimators and then comparing their performances in a series of computer simulations. We found that the Ordinary Least Squares estimator dominated the other estimators. Why is Ordinary Least Squares so good?

7 Where Are We? (cont.)
To make more general statements, we need to move beyond the computer and into the world of mathematics. Last time, we reviewed a number of mathematical tools: summations, descriptive statistics, expectations, variances, and covariances.

8 Where Are We? (cont.)
As a starting place, we need to write down all our assumptions about the way the underlying process works, and about how that process led to our data. These assumptions are called the “Data Generating Process.” Then we can derive estimators that have good properties for the Data Generating Process we have assumed.

9 Where Are We? (cont.)
The DGP is a model to approximate reality. We trade off realism to gain parsimony and tractability. Models are to be used, not believed.

10 Where Are We? (cont.)
Much of this course focuses on different types of DGP assumptions that you can make, giving you many options as you trade realism for tractability.

11 Where Are We? (cont.)
Two Ways to Screw Up in Econometrics:
– Your Data Generating Process assumptions missed a fundamental aspect of reality (your DGP is not a useful approximation); or
– Your estimator did a bad job for your DGP.
Today we focus on picking a good estimator for your DGP.

12 Where Are We? (cont.)
Today, we will focus on deriving the properties of an estimator for a simple DGP: the Gauss–Markov Assumptions. First we will find the expectations and variances of any linear estimator under the DGP. Then we will derive the Best Linear Unbiased Estimator (BLUE).

13 Our Baseline DGP: Gauss–Markov (Chapter 3)
Y_i = βX_i + ε_i
E(ε_i) = 0
Var(ε_i) = σ²
Cov(ε_i, ε_j) = 0, for i ≠ j
X’s fixed across samples (so we can treat them like constants).
We want to estimate β.
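
A minimal sketch of this DGP in Python (our illustration, not the textbook’s; the values of β, σ, and the X’s are made up):

    # Simulate one sample from the Gauss-Markov DGP through the origin.
    import numpy as np

    rng = np.random.default_rng(0)
    beta, sigma = 0.7, 2.0                 # true slope, disturbance std. dev. (illustrative)
    X = np.array([1.0, 2.0, 4.0, 8.0])     # X's fixed across samples

    def draw_sample():
        eps = rng.normal(0.0, sigma, size=X.size)   # E(eps_i) = 0, Var(eps_i) = sigma^2
        return beta * X + eps                       # independent draws, so Cov(eps_i, eps_j) = 0

    Y = draw_sample()                      # one sample of Y's from the DGP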

14 A Strategy for Inference
The DGP tells us the assumed relationships between the data we observe and the underlying process of interest. Using the assumptions of the DGP and the algebra of expectations, variances, and covariances, we can derive key properties of our estimators, and search for estimators with desirable properties.

15 An Example: β_g1

16 An Example: β_g1 (cont.)

17 Checking Understanding

18 Checking Understanding (cont.)

19 Checking Understanding (cont.)
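
To see what “right on average” means concretely, here is a Monte Carlo check (ours). Caution: the transcript does not preserve the slide’s formula for β_g1, so the form used below, β_g1 = (1/n) Σ(Y_i/X_i), is an assumption consistent with slide 29’s description (more weight on low X’s):

    # Monte Carlo check that the assumed beta_g1 is right on average.
    import numpy as np

    rng = np.random.default_rng(1)
    beta, sigma = 0.7, 2.0
    X = np.array([1.0, 2.0, 4.0, 8.0])

    draws = [np.mean((beta * X + rng.normal(0.0, sigma, X.size)) / X)
             for _ in range(100_000)]
    print(np.mean(draws))   # close to the true beta of 0.7: unbiased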

20 Linear Estimators
β_g1 is unbiased. Can we generalize? We will focus on linear estimators. Linear estimator: a weighted sum of the Y’s.

21 Linear Estimators (cont.)
Linear estimator: β̂ = Σ w_i Y_i, where the weights w_i may depend on the X’s but not on the Y’s. Example: β_g1 is a linear estimator.
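
As a sketch (ours), two of the guesses written explicitly as weighted sums Σ w_i Y_i. The deck identifies β_g4 as OLS and says β_g2 weights all observations equally; those forms are used below:

    # Linear estimators = weighted sums of the Y's.
    import numpy as np

    def weights_g2(X):
        return np.ones_like(X) / X.sum()    # w_i = 1 / sum(X_j): equal weights

    def weights_g4(X):
        return X / (X ** 2).sum()           # w_i = X_i / sum(X_j^2): OLS through the origin

    X = np.array([1.0, 2.0, 4.0, 8.0])
    Y = np.array([0.5, 1.9, 2.6, 5.8])      # made-up sample
    print(weights_g2(X) @ Y)                # equals Y.sum() / X.sum()
    print(weights_g4(X) @ Y)                # equals (X @ Y) / (X @ X)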

22 Linear Estimators (cont.)
All of our “best guesses” are linear estimators!

23 Expectation of Linear Estimators
For any linear estimator, E(Σ w_i Y_i) = Σ w_i E(Y_i).

24 Expectation of Linear Estimators (cont.)
Under our DGP, E(Y_i) = E(βX_i + ε_i) = βX_i, so E(Σ w_i Y_i) = β Σ w_i X_i.

25 Expectation of Linear Estimators (cont.)
A linear estimator is unbiased if Σ w_i X_i = 1. Are β_g2 and β_g4 unbiased?
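
We can check the condition directly for the two weighting schemes sketched above (using the equal-weight form for β_g2 and the OLS form for β_g4):

    # Unbiasedness check: does sum(w_i * X_i) equal 1?
    import numpy as np

    X = np.array([1.0, 2.0, 4.0, 8.0])
    w_g2 = np.ones_like(X) / X.sum()
    w_g4 = X / (X ** 2).sum()
    print(w_g2 @ X)   # 1.0 -> beta_g2 is unbiased
    print(w_g4 @ X)   # 1.0 -> beta_g4 is unbiased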

26 Expectation of Linear Estimators (cont.)
Similar calculations hold for β_g3. All 4 of our “best guesses” are unbiased. But β_g4 did much better than β_g3. Not all unbiased estimators are created equal. We want an unbiased estimator with a low mean squared error.

27 First: A Puzzle…
Suppose n = 1.
– Would you like a big X or a small X for that observation?
– Why?

28 What Observations Receive More Weight?

29 What Observations Receive More Weight? (cont.)
β_g1 puts more weight on observations with low values of X. β_g3 puts more weight on observations with low values of X, relative to neighboring observations. These estimators did very poorly in the simulations.

30 What Observations Receive More Weight? (cont.)
β_g2 weights all observations equally. β_g4 puts more weight on observations with high values of X. These estimators did very well in the simulations.

31 Why Weight More Heavily Observations With High X’s?
Under our Gauss–Markov DGP, the disturbances are drawn the same way for all values of X. To compare a high-X choice and a low-X choice, ask what effect a given disturbance will have for each.

32 Figure 3.1: Effects of a Disturbance for Small and Large X

33 Linear Estimators and Efficiency
For our DGP, good estimators will place more weight on observations with high values of X: inferences from these observations are less sensitive to the effects of the same disturbance ε. Only one of our “best guesses” had this property. β_g4 (a.k.a. OLS) dominated the other estimators. Can we do even better?

34 Linear Estimators and Efficiency (cont.)
Mean Squared Error = Variance + Bias². To have a low Mean Squared Error, we want two things: a low bias and a low variance.
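
The decomposition is easy to verify by simulation (our sketch; the shrinkage factor 0.9 just manufactures a biased estimator for illustration):

    # Verify MSE = Variance + Bias^2 for a deliberately biased estimator.
    import numpy as np

    rng = np.random.default_rng(2)
    beta, sigma = 0.7, 2.0
    X = np.array([1.0, 2.0, 4.0, 8.0])

    ests = np.array([0.9 * (X @ (beta * X + rng.normal(0.0, sigma, X.size))) / (X @ X)
                     for _ in range(200_000)])     # 0.9 * OLS: biased on purpose
    mse = np.mean((ests - beta) ** 2)
    var, bias = ests.var(), ests.mean() - beta
    print(mse, var + bias ** 2)   # the two agree up to simulation noise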

35 Linear Estimators and Efficiency (cont.)
An unbiased estimator with a low variance will tend to give answers close to the true value of β. Using the algebra of variances and our DGP, we can calculate the variance of our estimators.

36 Algebra of Variances
Var(Σ w_i Y_i) = Σ w_i² Var(Y_i) + ΣΣ_{i≠j} w_i w_j Cov(Y_i, Y_j)
One virtue of independent observations is that Cov(Y_i, Y_j) = 0, killing all the cross-terms in the variance of the sum.

37 Our Baseline DGP: Gauss–Markov
Our benchmark DGP:
Y_i = βX_i + ε_i
E(ε_i) = 0
Var(ε_i) = σ²
Cov(ε_i, ε_j) = 0, for i ≠ j
X’s fixed across samples
We will refer to this DGP (very) frequently.

38 Variance of OLS
Var(β_g4) = Var(Σ X_i Y_i / Σ X_j²) = (Σ X_i² σ²) / (Σ X_j²)² = σ² / Σ X_k²

39 Variance of OLS (cont.)
Note: the higher Σ X_k² is, the lower the variance.

40 Variance of a Linear Estimator
More generally, for any linear estimator under the Gauss–Markov DGP:
Var(Σ w_i Y_i) = σ² Σ w_i²
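
Applying the formula (a sketch with made-up X’s) shows immediately why β_g4 beat β_g2 in the simulations:

    # Var(sum w_i Y_i) = sigma^2 * sum(w_i^2) under Gauss-Markov.
    import numpy as np

    sigma = 2.0
    X = np.array([1.0, 2.0, 4.0, 8.0])
    w_g2 = np.ones_like(X) / X.sum()
    w_g4 = X / (X ** 2).sum()
    print(sigma ** 2 * (w_g2 ** 2).sum())   # variance of beta_g2
    print(sigma ** 2 * (w_g4 ** 2).sum())   # variance of beta_g4 = sigma^2 / sum(X^2): smaller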

41 Variance of a Linear Estimator (cont.)
The algebra of expectations and variances allows us to get exact results where the Monte Carlos gave only approximations. The exact results apply to ANY model meeting our Gauss–Markov assumptions.

42 Variance of a Linear Estimator (cont.)
We now know mathematically that β_g1–β_g4 are all unbiased estimators of β under our Gauss–Markov assumptions. We also think, from our Monte Carlo models, that β_g4 is the best of these four estimators, in that it is more efficient than the others. They are all unbiased (we know from the algebra), but β_g4 appears to have a smaller variance than the other 3.

43 Variance of a Linear Estimator (cont.)
Is there an unbiased linear estimator better (i.e., more efficient) than β_g4?
– What is the Best Linear Unbiased Estimator?
– How do we find the BLUE estimator?

44 BLUE Estimators
Mean Squared Error = Variance + Bias². An unbiased estimator is right “on average.” In practice, we don’t get to average: we see only one draw from the DGP.

45 BLUE Estimators (cont.)
Some analysts would prefer an estimator with a small bias if it gave them a large reduction in variance. What good is being right on average if you’re likely to be very wrong in your one draw?

46 BLUE Estimators (cont.)
Mean Squared Error = Variance + Bias². In a particular application, there may be a favorable trade-off between accepting a little bias in return for a lot less variance. We will NOT look for these trade-offs. Only after we have made sure our estimator is unbiased will we try to make the variance small.

47 BLUE Estimators (cont.)
A Strategy for Finding the Best Linear Unbiased Estimator:
1. Start with linear estimators: Σ w_i Y_i
2. Impose the unbiasedness condition: Σ w_i X_i = 1
3. Calculate the variance of a linear estimator: Var(Σ w_i Y_i) = σ² Σ w_i²
4. Use calculus to find the w_i that give the smallest variance subject to the unbiasedness condition.
Result: the BLUE Estimator for our DGP.
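
The calculus step can be checked numerically (our sketch; σ² is a positive constant, so minimizing Σ w_i² is enough, and scipy’s SLSQP solver handles the equality constraint):

    # Minimize sum(w_i^2) subject to sum(w_i * X_i) = 1.
    import numpy as np
    from scipy.optimize import minimize

    X = np.array([1.0, 2.0, 4.0, 8.0])
    res = minimize(lambda w: (w ** 2).sum(), x0=np.ones_like(X),
                   constraints=[{"type": "eq", "fun": lambda w: w @ X - 1.0}])
    print(res.x)                 # numerically optimal weights
    print(X / (X ** 2).sum())    # closed form w_i = X_i / sum(X_j^2): the same, i.e. OLS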

48 BLUE Estimators (cont.)
The variance-minimizing weights are w_i = X_i / Σ X_j², giving β̂ = Σ X_i Y_i / Σ X_j²: the OLS estimator.

49 BLUE Estimators (cont.)
OLS is a very good strategy for the Gauss–Markov DGP. OLS is unbiased: our guesses are right on average. OLS is efficient: it has a small variance (or at least the smallest possible variance for unbiased linear estimators). Our guesses will tend to be close to right (or at least as close to right as we can get; the minimum variance could still be pretty large!).

50 BLUE Estimator (cont.)
According to the Gauss–Markov Theorem, OLS is the BLUE Estimator for the Gauss–Markov DGP. We will study other DGPs. For any DGP, we can follow this same procedure:
– Look at linear estimators
– Impose the unbiasedness conditions
– Minimize the variance of the estimator

51 Example: Cobb–Douglas Production Functions (Chapter 3.7)
A classic production function in economics is the Cobb–Douglas function: Y = aL^α K^(1−α). If firms pay workers and capital their marginal product, then worker compensation equals a fraction α of total output (or national income).

52 Example: Cobb–Douglas
To illustrate, we randomly pick 8 years between 1900 and 1995. For each year, we observe total worker compensation and national income. We use β_g1, β_g2, β_g3, and β_g4 to estimate α in
Compensation = α·(National Income) + ε
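
A usage sketch (with made-up numbers, NOT the book’s 8 sampled years), applying the equal-weight and OLS estimators to this regression through the origin:

    # Estimate alpha in Compensation = alpha * NationalIncome + eps.
    import numpy as np

    rng = np.random.default_rng(3)
    income = np.array([80.0, 120.0, 210.0, 350.0, 520.0, 700.0, 900.0, 1100.0])
    comp = 0.74 * income + rng.normal(0.0, 8.0, income.size)   # synthetic data

    print(comp.sum() / income.sum())            # beta_g2-style estimate of alpha
    print((income @ comp) / (income @ income))  # beta_g4 (OLS) estimate of alpha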

53 Table 3.6: Estimates of the Cobb–Douglas Parameter α, with Standard Errors

54 Table 3.7: Outputs from a Regression of Compensation on National Income

55 Example: Cobb–Douglas
All 4 of our estimators give very similar estimates. However, β_g2 and β_g4 have much smaller standard errors. (We will see the value of small standard errors when we cover hypothesis tests.) Using our estimate from β_g4, 0.738, a 1 billion dollar increase in National Income is predicted to increase total worker compensation by 0.738 billion dollars.

56 A New DGP
Most lines do not go through the origin. Let’s add an intercept term and find the BLUE Estimator (from Chapter 4).

57 Gauss–Markov with an Intercept
Y_i = β_0 + β_1 X_i + ε_i
E(ε_i) = 0
Var(ε_i) = σ²
Cov(ε_i, ε_j) = 0, for i ≠ j
X’s fixed across samples
We want to estimate β_0 and β_1.

58 Gauss–Markov with an Intercept (cont.)
Example: let’s estimate the effect of income on college financial aid. Students whose families have 0 income do not receive 0 aid; they receive a lot of aid.
E[financial aid | family income] = β_0 + β_1·(family income)

59 Gauss–Markov with an Intercept (cont.)

60 Gauss–Markov with an Intercept (cont.)
How do we construct a BLUE Estimator?
Step 1: focus on linear estimators.
Step 2: calculate the expectation of a linear estimator for this DGP, and find the conditions for the estimator to be unbiased.
Step 3: calculate the variance of a linear estimator. Find the weights that minimize this variance subject to the unbiasedness constraints.

61 Expectation of a Linear Estimator
E(Σ w_i Y_i) = Σ w_i (β_0 + β_1 X_i) = β_0 Σ w_i + β_1 Σ w_i X_i

62 Checking Understanding
Question: What are the conditions for an estimator of β_1 to be unbiased? What are the conditions for an estimator of β_0 to be unbiased?

63 Checking Understanding (cont.)
When is the expectation equal to β_1?
– When Σ w_i = 0 and Σ w_i X_i = 1
What if we were estimating β_0? When is the expectation equal to β_0?
– When Σ w_i = 1 and Σ w_i X_i = 0
To estimate 1 parameter, we needed 1 unbiasedness condition. To estimate 2 parameters, we need 2 unbiasedness conditions.
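
The OLS slope in the intercept model can be written as Σ w_i Y_i with w_i = (X_i − X̄) / Σ(X_j − X̄)² (the demeaned form that appears on slide 66 below); a quick sketch confirms it satisfies both conditions:

    # Check the two unbiasedness conditions for the OLS slope weights.
    import numpy as np

    X = np.array([1.0, 2.0, 4.0, 8.0])
    w = (X - X.mean()) / ((X - X.mean()) ** 2).sum()
    print(w.sum())   # ~0  -> the intercept beta_0 drops out
    print(w @ X)     # 1.0 -> the slope beta_1 is picked up exactly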

64 Variance of a Linear Estimator
Adding a constant to the DGP does NOT change the variance of the estimator: Var(Σ w_i Y_i) is still σ² Σ w_i².

65 BLUE Estimator

66 BLUE Estimator of β_1
β̂_1 = Σ(X_i − X̄)(Y_i − Ȳ) / Σ(X_j − X̄)²
This estimator is OLS for the DGP with an intercept. It is the Best (minimum variance) Linear Unbiased Estimator for the Gauss–Markov DGP with an intercept.

67 BLUE Estimator of β_1 (cont.)
This formula is very similar to the formula for OLS without an intercept. However, now we subtract the mean values from both X and Y.

68 BLUE Estimator of β_1 (cont.)
OLS places more weight on observations with high values of (X_i − X̄)². Observations are more valuable if X is far away from its mean.

69 BLUE Estimator of β_1 (cont.)

70 BLUE Estimator of β_0
The easiest way to estimate the intercept: β̂_0 = Ȳ − β̂_1 X̄. Notice that the fitted regression line always goes through the point (X̄, Ȳ): our fitted regression line passes through “the middle of the data.”
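
A sketch putting both formulas together (made-up data), including a check that the fitted line passes through the means:

    # OLS slope and intercept for the Gauss-Markov DGP with an intercept.
    import numpy as np

    def ols(X, Y):
        b1 = ((X - X.mean()) * (Y - Y.mean())).sum() / ((X - X.mean()) ** 2).sum()
        b0 = Y.mean() - b1 * X.mean()   # the easiest way to estimate the intercept
        return b0, b1

    X = np.array([1.0, 2.0, 4.0, 8.0])
    Y = np.array([3.1, 3.8, 5.2, 8.3])
    b0, b1 = ols(X, Y)
    print(b0 + b1 * X.mean(), Y.mean())   # equal: the line goes through (Xbar, Ybar)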

71 Example: The Phillips Curve
Phillips argued that nations face a trade-off between inflation and unemployment. He used annual British data on wage inflation and unemployment from 1861–1913 and 1914–1957 to regress inflation on unemployment.

72 Example: The Phillips Curve (cont.)
The fitted regression line for 1861–1913 did a good job predicting the data from 1914 to 1957. “Out-of-sample” predictions are a strong test of an econometric model.

73 Example: The Phillips Curve (cont.)
The US data from 1958–1969 also suggest a trade-off between inflation and unemployment.

74 Example: The Phillips Curve (cont.)
How do we interpret these numbers? If Inflation were 0, our best guess of Unemployment would be 0.06 percentage points. A one percentage point increase in Inflation decreases our predicted Unemployment level by 0.55 percentage points.
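
In code, the interpretation is just reading off the fitted line (a sketch; the coefficients 0.06 and −0.55 are as reported on the slide, and the exact units come from Table 4.1, which this transcript does not reproduce):

    # Predictions from the fitted Phillips curve.
    def predicted_unemployment(inflation):
        return 0.06 - 0.55 * inflation     # intercept and slope from the slide

    print(predicted_unemployment(0.0))     # prediction at zero inflation: 0.06
    print(predicted_unemployment(1.0) - predicted_unemployment(0.0))
    # a one-point rise in inflation lowers the prediction by 0.55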

75 Figure 4.2: U.S. Unemployment and Inflation, 1958–1969

76 Table 4.1: The Phillips Curve

77 Example: The Phillips Curve
We no longer need to assume our regression line goes through the origin. We have learned how to estimate an intercept. A straight line doesn’t seem to do a great job here. Can we do better?

78 Review
As a starting place, we need to write down all our assumptions about the way the underlying process works, and about how that process led to our data. These assumptions are called the “Data Generating Process.” Then we can derive estimators that have good properties for the Data Generating Process we have assumed.

79 Review: The Gauss–Markov DGP
Y_i = βX_i + ε_i
E(ε_i) = 0
Var(ε_i) = σ²
Cov(ε_i, ε_j) = 0, for i ≠ j
X’s fixed across samples (so we can treat them like constants).
We want to estimate β.

80 Review
We will focus on linear estimators. Linear estimator: a weighted sum of the Y’s, β̂ = Σ w_i Y_i.

81 Review (cont.)

82 Review (cont.)

83 Review: BLUE Estimators
A Strategy for Finding the Best Linear Unbiased Estimator:
1. Start with linear estimators: Σ w_i Y_i
2. Impose the unbiasedness condition: Σ w_i X_i = 1
3. Use calculus to find the w_i that give the smallest variance subject to the unbiasedness condition.
Result: the BLUE Estimator for our DGP.

84 Review: BLUE Estimators (cont.)
Ordinary Least Squares (OLS) is BLUE for our Gauss–Markov DGP. This result is called the “Gauss–Markov Theorem.”

85 Review: BLUE Estimators (cont.)
OLS is a very good strategy for the Gauss–Markov DGP. OLS is unbiased: our guesses are right on average. OLS is efficient: the smallest possible variance for unbiased linear estimators. Our guesses will tend to be close to right (or at least as close to right as we can get). Warning: the minimum variance could still be pretty large!

86 Gauss–Markov with an Intercept
Y_i = β_0 + β_1 X_i + ε_i, with E(ε_i) = 0, Var(ε_i) = σ², Cov(ε_i, ε_j) = 0 for i ≠ j, and X’s fixed across samples.

87 Review: BLUE Estimator of β_1
β̂_1 = Σ(X_i − X̄)(Y_i − Ȳ) / Σ(X_j − X̄)²
This estimator is OLS for the DGP with an intercept. It is the Best (minimum variance) Linear Unbiased Estimator for the Gauss–Markov DGP with an intercept.

88 BLUE Estimator of β_0
The easiest way to estimate the intercept: β̂_0 = Ȳ − β̂_1 X̄. Notice that the fitted regression line always goes through the point (X̄, Ȳ): our fitted regression line passes through “the middle of the data.”

