Lecture 2: Econometrics (Chapter 2.1–2.7)
How Does Econometrics Differ From Economic Theory? Economic theory: qualitative results (demand curves slope downward). Econometrics: quantitative results (the price elasticity of demand for milk = −0.75).
How Does Econometrics Differ From Statistics? Statistics: summarize the data faithfully; let the data speak for themselves. Econometrics: what do we learn from economic theory AND the data at hand?
What's Metrics For? Estimation: What is the marginal propensity to consume? Hypothesis testing: Do unions raise workers' wages? Forecasting: What will personal savings be in 2001 if GDP is $9.2 trillion?
Economists Ask: What Changes What and How? Higher income, higher saving. Higher price, lower quantity demanded. Higher interest rate, lower investment.
Savings Versus Income. Theory would assume an exact relationship, e.g., Y = βX. [Figure: savings (0 to 6,000) plotted against income (24,000 to 96,000) as an exact straight line through the origin.]
Slope of the Line Is Key! Slope is the change in savings with respect to changes in income. Slope is the derivative of savings with respect to income. If we know the slope, we've quantified the relationship!
Never So Neat: Savings Versus Income
Underlying Mean + Random Part: Y_i = βX_i + ε_i. We devised four intuitively appealing ways to estimate β.
Best Guess 1, Mean of Ratios: g_1 = (1/n) Σ (Y_i / X_i).
Figure 2.4: Estimating the Slope of a Line with Two Data Points
Best Guess 2, Ratio of Means: g_2 = Ȳ / X̄ = Σ Y_i / Σ X_i.
Figure 2.5: Estimating the Slope of a Line: g_2
Best Guess 3, Mean of Changes in Y over Changes in X: g_3 = the average of (Y_i − Y_{i−1}) / (X_i − X_{i−1}) across successive data points.
Best Guess 4, Ordinary Least Squares (minimizes the sum of squared residuals in the sample): g_4 = Σ X_i Y_i / Σ X_i².
Four Ways to Estimate β: g_1 (mean of ratios), g_2 (ratio of means), g_3 (mean of changes in Y over changes in X), and g_4 (ordinary least squares); a code sketch of all four follows below.
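As an illustration (not part of the original slides), here is a minimal Python sketch of the four guesses, using the formulas above; the function name and the income/savings figures are made up for the example.

```python
import numpy as np

def four_guesses(x, y):
    """Four intuitive estimators of beta in Y_i = beta * X_i + eps_i."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    g1 = np.mean(y / x)                    # g1: mean of ratios
    g2 = y.sum() / x.sum()                 # g2: ratio of means (Ybar / Xbar)
    g3 = np.mean(np.diff(y) / np.diff(x))  # g3: mean of successive slopes
    g4 = (x * y).sum() / (x ** 2).sum()    # g4: OLS through the origin
    return g1, g2, g3, g4

# Hypothetical income (X) and savings (Y) data
print(four_guesses([24_000, 48_000, 72_000, 96_000],
                   [1_100, 2_600, 3_500, 5_200]))
```

All four are defensible guesses; the horserace later in the lecture asks which one performs best across many samples.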
Underlying Mean + Random Part: Are lines through the origin likely phenomena?
Regression's Greatest Hits!!! An Econometric Top 40
Two Classical Favorites!! Friedman's Permanent Income Hypothesis: C_i = βY_i^P + ε_i (consumption proportional to permanent income). The Capital Asset Pricing Model (CAPM): r_i − r_f = β(r_m − r_f) + ε_i (an asset's excess return proportional to the market's excess return). Both are lines through the origin.
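As a small illustration (not from the slides), because the CAPM regression runs through the origin, its slope can be estimated with the OLS guess g_4; the excess-return numbers below are invented.

```python
import numpy as np

# Hypothetical monthly excess returns, in percent:
# market = r_m - r_f, asset = r_i - r_f
market = np.array([1.2, -0.8, 2.5, 0.4, -1.5, 3.1])
asset = np.array([1.5, -1.1, 3.2, 0.2, -2.0, 3.9])

# OLS through the origin: beta = sum(x * y) / sum(x^2)
beta = (market * asset).sum() / (market ** 2).sum()
print(f"estimated CAPM beta: {beta:.2f}")
```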
A Golden Oldie!! Engel on the Demand for Rye:
Four Guesses: How to Choose?
What Criteria Did We Discuss? Pick the one that's right; make mean error close to zero; minimize mean absolute error; minimize mean squared error.
What Criteria Did We Discuss? (cont.) Pick the one that's right… – In every sample, a different estimator may be right. – We can only decide which is right if we ALREADY KNOW the right answer, which is a trivial case.
What Criteria Did We Discuss? (cont.) Make mean error close to zero… seek unbiased guesses. – If E(g − β) = 0, g is right on average. – If BIAS = 0, g is an unbiased estimator of β.
Checking Understanding. Question: which estimator does better under the make-mean-error-close-to-zero criterion? (a) g − β is always a positive number less than 2 (our guesses are always a little high), or (b) g − β is always +10 or −10 (50/50 chance).
Checking Understanding (cont.) If our guess is wrong by +10 for half the observations, and by −10 for the other half, then E(g − β) = 0! – The second estimator is unbiased! Mistakes in opposite directions cancel out. The first estimator is always closer to being right, but it does worse on this criterion; see the simulation sketch below.
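To make the trade-off concrete, a quick simulation (an illustration, not from the slides; the two error distributions are assumptions matching the question above):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100_000  # number of simulated guesses

# Estimator A: always a little high (error drawn between 0 and 2)
err_a = rng.uniform(0.0, 2.0, m)
# Estimator B: off by +10 or -10 with equal probability
err_b = rng.choice([-10.0, 10.0], m)

for name, e in (("A", err_a), ("B", err_b)):
    print(f"{name}: bias={e.mean():.2f}  MAE={np.abs(e).mean():.2f}  "
          f"MSE={(e ** 2).mean():.2f}")
# B is unbiased (bias near 0) yet has far larger MAE and MSE than A.
```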
What Criteria Did We Discuss? (cont.) Minimize mean absolute error… – Mistakes don't cancel out. – Implicitly treats the cost of a mistake as proportional to the mistake's size. – Absolute values don't go well with differentiation.
What Criteria Did We Discuss? (cont.) Minimize mean squared error… – Implicitly treats the cost of mistakes as disproportionately large for larger mistakes. – Squared expressions are mathematically tractable. (A standard identity ties these criteria together: MSE(g) = Var(g) + [Bias(g)]², so minimizing mean squared error balances an estimator's variance against its bias.)
What Criteria Did We Discuss? (cont.) Pick the one that's right… only works trivially. Make mean error close to zero… seek unbiased guesses. Minimize mean absolute error… mathematically tough. Minimize mean squared error… more tractable mathematically.
Criteria Focus Across Samples: make mean error close to zero; minimize mean absolute error; minimize mean squared error. What do the distributions of the estimators look like?
Try the Four in Many Samples. Pros will use estimators repeatedly; what track record will they have? Idea: let's have the computer create many, many data sets. We apply all our estimators to each data set.
Try the Four in Many Samples (cont.) We use our estimators on many datasets that we created ourselves. We know the true value of β because we picked it! We can compare estimators. We run horseraces.
Try the Four in Many Samples (cont.) Pros will use estimators repeatedly; what track record will they have? Which horse runs best on many tracks? Don't design tracks that guarantee failure. What properties do we need our computer-generated datasets to have to avoid automatic failure for one of our estimators?
Building a Fair Racetrack. Under what conditions will each estimator fail?
To Preclude Automatic Failure… each estimator's denominator must be nonzero: g_1 needs every X_i ≠ 0; g_2 needs ΣX_i ≠ 0; g_3 needs successive X values to differ; g_4 needs at least one X_i ≠ 0.
Why Does Viewing Many Samples Work Well? We are interested in means: mean error, mean absolute error, mean squared error. Drawing many (m) independent samples lets us estimate each such mean with variance σ_e²/m, where σ_e² is the variance of the underlying error measure. If m is large, our estimates will be quite precise: with m = 10,000, the standard error of an estimated mean is σ_e/√10,000 = σ_e/100.
How to Build a Race Track… n = ? – How big is each sample? β = ? – What slope are we estimating? Set X_1, X_2, …, X_n – Do it once, or for each sample? Draw ε_1, ε_2, …, ε_n – Must draw randomly each sample.
What to Assume About the ε_i? What do the ε_i represent? What should the ε_i equal on average? What variance do we want for the ε_i?
Checking Understanding. n = ? – How big is each sample? β = ? – What slope are we estimating? Set X_1, X_2, …, X_n – Do it once, or for each sample? Draw ε_1, ε_2, …, ε_n – Must draw randomly each sample. Form Y_1, Y_2, …, Y_n – Y_i = βX_i + ε_i. We create 10,000 datasets with X and Y. For each dataset, what do we want to do?
Checking Understanding (cont.) We create 10,000 datasets with X and Y. For each dataset, we use all four of our estimators to form guesses g_1, g_2, g_3, and g_4. We save the mean error, mean absolute error, and mean squared error for each estimator; a code sketch of this horserace follows below.
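A minimal Python sketch of the horserace (an illustration, not the textbook's code); the true slope, the fixed X grid, the error standard deviation, and the normal error distribution are all assumptions, chosen to satisfy the Gauss–Markov conditions described next.

```python
import numpy as np

rng = np.random.default_rng(42)

beta = 0.05                          # true slope (we picked it, so we know it)
x = np.linspace(24_000, 96_000, 10)  # X_1..X_n, fixed across samples
sigma = 500.0                        # standard deviation of the eps_i
m = 10_000                           # number of simulated datasets

def four_guesses(x, y):
    g1 = np.mean(y / x)                    # mean of ratios
    g2 = y.sum() / x.sum()                 # ratio of means
    g3 = np.mean(np.diff(y) / np.diff(x))  # mean of successive slopes
    g4 = (x * y).sum() / (x ** 2).sum()    # OLS through the origin
    return np.array([g1, g2, g3, g4])

errors = np.empty((m, 4))
for s in range(m):
    eps = rng.normal(0.0, sigma, x.size)   # E(eps)=0, Var(eps)=sigma^2, independent draws
    y = beta * x + eps                     # the DGP: Y_i = beta * X_i + eps_i
    errors[s] = four_guesses(x, y) - beta  # each guess minus the true slope

for name, e in zip(["g1", "g2", "g3", "g4"], errors.T):
    print(f"{name}: mean error={e.mean():+.5f}  "
          f"MAE={np.abs(e).mean():.5f}  MSE={(e ** 2).mean():.7f}")
```

Under these conditions all four guesses are unbiased, and the Gauss–Markov theorem implies that g_4 (OLS) should post the smallest variance, and hence the smallest mean squared error, of the four.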
What Have We Assumed? We are creating our own data. We get to specify the underlying Data Generating Process relating Y to X. What is our Data Generating Process (DGP)?
What Is Our Data Generating Process? E(ε_i) = 0; Var(ε_i) = σ²; Cov(ε_i, ε_k) = 0 for i ≠ k; X_1, X_2, …, X_n are fixed across samples. GAUSS–MARKOV ASSUMPTIONS.
What Will We Get? We will get precise estimates of: 1. the mean error of each estimator; 2. the mean absolute error of each estimator; 3. the mean squared error of each estimator; 4. the distribution of each estimator. By running different racetracks (DGPs), we check the robustness of our results.
Review. We want an estimator to form a best guess of the slope of a line through the origin: Y_i = βX_i + ε_i. We want an estimator that works well across many different samples: low average error, low average absolute error, low squared errors…
Review (cont.) We have brainstormed four best guesses: g_1 (the mean of ratios), g_2 (the ratio of means), g_3 (the mean of changes in Y over changes in X), and g_4 (ordinary least squares).
Review (cont.) We will compare these estimators in horseraces across thousands of computer-generated datasets. We get to specify the underlying relationship between Y and X. We know the right answer that the estimators are trying to guess. We can see how each estimator does.
Review (cont.) We choose all the rules for how our data are created. The underlying rules are the Data Generating Process (DGP). We choose to use the Gauss–Markov rules.
What Is Our Data Generating Process? E(ε_i) = 0; Var(ε_i) = σ²; Cov(ε_i, ε_k) = 0 for i ≠ k; X_1, X_2, …, X_n are fixed across samples. GAUSS–MARKOV ASSUMPTIONS.