EC220 - Introduction to econometrics (chapter 2)

Presentation transcript:

EC220 - Introduction to econometrics (chapter 2) Christopher Dougherty EC220 - Introduction to econometrics (chapter 2) Slideshow: precision of the regression coefficients   Original citation: Dougherty, C. (2012) EC220 - Introduction to econometrics (chapter 2). [Teaching Resource] © 2012 The Author This version available at: http://learningresources.lse.ac.uk/128/ Available in LSE Learning Resources Online: May 2012 This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License. This license allows the user to remix, tweak, and build upon the work even for commercial purposes, as long as the user credits the author and licenses their new creations under the identical terms. http://creativecommons.org/licenses/by-sa/3.0/   http://learningresources.lse.ac.uk/

PRECISION OF THE REGRESSION COEFFICIENTS
Simple regression model: Y = β1 + β2X + u
We have seen that the regression coefficients b1 and b2 are random variables. They provide point estimates of β1 and β2, respectively. In the last sequence we demonstrated that these point estimates are unbiased.

In this sequence we will see that we can also obtain estimates of the standard deviations of the distributions of b1 and b2. These will give some idea of their likely reliability and will provide a basis for tests of hypotheses.

Expressions (which will not be derived) for the variances of their distributions are:

    var(b1) = σu² × (ΣXi² / n) / Σ(Xi − X̄)²        var(b2) = σu² / Σ(Xi − X̄)²

See Box 2.3 in the text for a proof of the expression for the variance of b2.

We will focus on the implications of the expression for the variance of b2. Looking at the numerator, we see that the variance of b2 is proportional to σu². This is as we would expect. The more noise there is in the model, the less precise will be our estimates.

This is illustrated by the diagrams above. The nonstochastic component of the relationship, Y = 3.0 + 0.8X, represented by the dotted line, is the same in both diagrams.

The values of X are the same, and the same random numbers have been used to generate the values of the disturbance term in the 20 observations.

However, in the right-hand diagram the random numbers have been multiplied by a factor of 5. As a consequence, the regression line, the solid line, is a much poorer approximation to the nonstochastic relationship.
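The effect of the noise factor can be checked with a short simulation. This is a minimal sketch, not the book's code: the design (X = 1, …, 20, the line Y = 3.0 + 0.8X, normal disturbances) follows the diagrams, but the function and variable names are my own.

```python
import numpy as np

# Monte Carlo sketch of the two diagrams: same X values and true line,
# with the disturbance standard deviation multiplied by 5 in the second case.
rng = np.random.default_rng(0)
X = np.arange(1.0, 21.0)          # 20 fixed values of X, as in the diagrams
REPS = 10_000

def slope_sd(noise_sd):
    """Standard deviation of the OLS slope b2 across replications."""
    Xd = X - X.mean()
    b2 = np.empty(REPS)
    for r in range(REPS):
        Y = 3.0 + 0.8 * X + rng.normal(0.0, noise_sd, X.size)
        b2[r] = (Xd * (Y - Y.mean())).sum() / (Xd ** 2).sum()
    return b2.std()

sd_small = slope_sd(1.0)
sd_large = slope_sd(5.0)
print(sd_large / sd_small)        # close to 5: sd(b2) is proportional to sigma_u
```

Multiplying the disturbances by 5 multiplies the spread of b2 by the same factor, exactly as the variance expression predicts.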

Looking at the denominator, the larger is the sum of the squared deviations of X, the smaller is the variance of b2.

However, the size of the sum of the squared deviations depends on two factors: the number of observations, and the size of the deviations of Xi about its sample mean. To discriminate between them, it is convenient to define the mean square deviation of X, MSD(X) = (1/n) Σ(Xi − X̄)².

With the variance rewritten as var(b2) = σu² / (n × MSD(X)), it can be seen that the variance of b2 is inversely proportional to n, the number of observations in the sample, controlling for MSD(X). The more information you have, the more accurate your estimates are likely to be.
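The inverse dependence on n can be verified by holding MSD(X) fixed while increasing the sample size. A hypothetical sketch (the design and names below are my own, not from the text):

```python
import numpy as np

# Quadrupling n with MSD(X) held fixed should halve sd(b2),
# since var(b2) = sigma_u^2 / (n * MSD(X)).
rng = np.random.default_rng(1)

def slope_sd(n, reps=10_000):
    X = np.tile([5.0, 10.0, 15.0, 20.0], n // 4)   # same MSD(X) for every n
    Xd = X - X.mean()
    b2 = np.empty(reps)
    for r in range(reps):
        Y = 3.0 + 0.8 * X + rng.normal(0.0, 1.0, n)
        b2[r] = (Xd * (Y - Y.mean())).sum() / (Xd ** 2).sum()
    return b2.std()

s20, s80 = slope_sd(20), slope_sd(80)
print(s20 / s80)                  # close to 2
```

Because the four X values are simply repeated, MSD(X) is identical at both sample sizes, isolating the 1/√n effect on the standard deviation of b2.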

A third implication of the expression is that the variance is inversely proportional to the mean square deviation of X. What is the reason for this?

In the diagrams above, the nonstochastic component of the relationship is the same, and the same random numbers have been used for the 20 values of the disturbance term.

However, MSD(X) is much smaller in the right-hand diagram because the values of X are much closer together.

Hence in that diagram the position of the regression line is more sensitive to the values of the disturbance term, and as a consequence the regression line is likely to be relatively inaccurate.
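This effect, too, can be simulated. The sketch below (my own illustration, not the book's) compares a spread-out design with a bunched one and checks the ratio of the resulting standard deviations against what the formula predicts:

```python
import numpy as np

# Two designs in the spirit of the diagrams: values of X far apart
# versus values of X close together (much smaller MSD(X)).
rng = np.random.default_rng(2)
N, REPS = 20, 10_000
X_wide = np.linspace(1.0, 20.0, N)       # spread-out regressor
X_tight = np.linspace(9.0, 12.0, N)      # bunched regressor

def slope_sd(X):
    Xd = X - X.mean()
    b2 = np.empty(REPS)
    for r in range(REPS):
        Y = 3.0 + 0.8 * X + rng.normal(0.0, 1.0, N)
        b2[r] = (Xd * (Y - Y.mean())).sum() / (Xd ** 2).sum()
    return b2.std()

msd = lambda X: ((X - X.mean()) ** 2).mean()
ratio = slope_sd(X_tight) / slope_sd(X_wide)
print(ratio, np.sqrt(msd(X_wide) / msd(X_tight)))   # the two should agree
```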

Of course, as can be seen from the variance expressions, it is really the ratio of MSD(X) to the variance of u which is important, rather than the absolute size of either.

We cannot calculate the variances exactly because we do not know the variance of the disturbance term. However, we can derive an estimator of σu² from the residuals.

Clearly the scatter of the residuals around the regression line will reflect the unseen scatter of u about the line Yi = β1 + β2Xi, although in general the residual and the value of the disturbance term in any given observation are not equal to one another.

One measure of the scatter of the residuals is their mean square deviation, MSD(e) = (1/n) Σei². (Remember that the mean of the OLS residuals is equal to zero.) Intuitively this should provide a guide to the variance of u.

Before going any further, ask yourself the following question. Which line is likely to be closer to the points representing the sample of observations on X and Y: the true line Y = β1 + β2X, or the fitted regression line Ŷ = b1 + b2X?

The answer is the regression line, because by definition it is drawn in such a way as to minimize the sum of the squares of the distances between it and the observations.

Hence the spread of the residuals will tend to be smaller than the spread of the values of u, and MSD(e) will tend to underestimate σu².

Indeed, it can be shown that, when there is just one explanatory variable, the expected value of MSD(e) is E[MSD(e)] = ((n − 2) / n) σu².

It follows that we can obtain an unbiased estimator of σu² by multiplying MSD(e) by n / (n − 2). We will denote this estimator su².
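The downward bias and its correction can be verified numerically. A sketch under assumed values (n = 10, σu = 2; the design and names are my own):

```python
import numpy as np

# Check that E[MSD(e)] = ((n - 2)/n) * sigma_u^2, and that multiplying
# MSD(e) by n/(n - 2) removes the bias.
rng = np.random.default_rng(3)
N, REPS, SIGMA_U = 10, 20_000, 2.0
X = np.arange(1.0, N + 1.0)
Xd = X - X.mean()

msd_e = np.empty(REPS)
for r in range(REPS):
    Y = 3.0 + 0.8 * X + rng.normal(0.0, SIGMA_U, N)
    b2 = (Xd * (Y - Y.mean())).sum() / (Xd ** 2).sum()
    b1 = Y.mean() - b2 * X.mean()
    e = Y - b1 - b2 * X                  # OLS residuals
    msd_e[r] = (e ** 2).mean()           # MSD(e) = (1/n) * sum of e_i^2

print(msd_e.mean())                      # about ((N - 2)/N) * SIGMA_U**2 = 3.2
print(msd_e.mean() * N / (N - 2))        # about SIGMA_U**2 = 4.0
```

With n as small as 10 the bias is substantial (a factor of 0.8), which is why regression packages always apply the degrees-of-freedom correction.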

We can then obtain estimates of the standard deviations of the distributions of b1 and b2 by substituting su² for σu² in the variance expressions and taking the square roots.

These are described as the standard errors of b1 and b2, ‘estimates of the standard deviations’ being a bit of a mouthful.
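Putting the pieces together, the standard errors can be computed directly from the formulas. This sketch uses simulated data loosely resembling the earnings regression; the EARNINGS sample itself is not reproduced here, so all numbers and names below are illustrative:

```python
import numpy as np

# Fit Y = b1 + b2*X by OLS and compute se(b2) = su / sqrt(sum of squared
# deviations of X), and the analogous expression for b1, where su^2 is
# the unbiased estimator of sigma_u^2.
rng = np.random.default_rng(4)
n = 540
X = rng.uniform(8.0, 20.0, n)            # stand-in for years of schooling
Y = -13.9 + 2.46 * X + rng.normal(0.0, 13.1, n)

Xd = X - X.mean()
b2 = (Xd * (Y - Y.mean())).sum() / (Xd ** 2).sum()
b1 = Y.mean() - b2 * X.mean()
e = Y - b1 - b2 * X

s_u2 = (e ** 2).sum() / (n - 2)          # unbiased estimator of sigma_u^2
se_b2 = np.sqrt(s_u2 / (Xd ** 2).sum())
se_b1 = np.sqrt(s_u2 * (X ** 2).mean() / (Xd ** 2).sum())

print(f"b1 = {b1:.3f}  (s.e. {se_b1:.3f})")
print(f"b2 = {b2:.3f}  (s.e. {se_b2:.3f})")
```

These are exactly the quantities that appear in the Std. Err. column of a regression package's output, such as the Stata output below.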

. reg EARNINGS S

      Source |       SS       df       MS              Number of obs =     540
-------------+------------------------------           F(  1,   538) =  112.15
       Model |  19321.5589     1  19321.5589           Prob > F      =  0.0000
    Residual |  92688.6722   538  172.283777           R-squared     =  0.1725
-------------+------------------------------           Adj R-squared =  0.1710
       Total |  112010.231   539  207.811189           Root MSE      =  13.126

------------------------------------------------------------------------------
    EARNINGS |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
           S |   2.455321   .2318512    10.59   0.000     1.999876    2.910765
       _cons |  -13.93347   3.219851    -4.33   0.000    -20.25849   -7.608444
------------------------------------------------------------------------------

The standard errors of the coefficients always appear as part of the output of a regression. Here is the regression of hourly earnings on years of schooling discussed in a previous slideshow. The standard errors appear in the column to the right of the coefficients.

The Gauss–Markov theorem states that, provided that the regression model assumptions are valid, the OLS estimators are BLUE: best (most efficient) linear (functions of the values of Y) unbiased estimators of the parameters.

The proof of the theorem is not difficult, but it is not a high priority and we will take it on trust. See Section 2.7 of the text for a proof for the simple regression model.

Copyright Christopher Dougherty 2011. These slideshows may be downloaded by anyone, anywhere for personal use. Subject to respect for copyright and, where appropriate, attribution, they may be used as a resource for teaching an econometrics course. There is no need to refer to the author. The content of this slideshow comes from Section 2.5 of C. Dougherty, Introduction to Econometrics, fourth edition 2011, Oxford University Press. Additional (free) resources for both students and instructors may be downloaded from the OUP Online Resource Centre http://www.oup.com/uk/orc/bin/9780199567089/. Individuals studying econometrics on their own and who feel that they might benefit from participation in a formal course should consider the London School of Economics summer school course EC212 Introduction to Econometrics http://www2.lse.ac.uk/study/summerSchools/summerSchool/Home.aspx or the University of London International Programmes distance learning course 20 Elements of Econometrics www.londoninternational.ac.uk/lse. 11.07.25