1 PREDICTION In the previous sequence, we saw how to predict the price of a good or asset given the composition of its characteristics. In this sequence, we discuss the properties of such predictions.

2 Suppose that, given a sample of n observations, we have fitted a pricing model with k – 1 characteristics, as shown.
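The equations on the slide are not reproduced in the transcript; in the notation used here they would presumably read

\[ \text{True model:}\quad P = \beta_1 + \beta_2 X_2 + \beta_3 X_3 + \dots + \beta_k X_k + u \]
\[ \text{Fitted model:}\quad \hat{P} = b_1 + b_2 X_2 + b_3 X_3 + \dots + b_k X_k \]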

3 Suppose now that one encounters a new variety of the good with characteristics {X2*, X3*, ..., Xk*}. Given the sample regression result, it is natural to predict that the price of the new variety should be given by the third equation.
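The third equation, the prediction conditional on the new characteristics, would presumably be

\[ \hat{P}^* = b_1 + b_2 X_2^* + b_3 X_3^* + \dots + b_k X_k^* \]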

4 What can one say about the properties of this prediction? First, it is natural to ask whether it is fair, in the sense of not systematically overestimating or underestimating the actual price. Second, we will be concerned about the likely accuracy of the prediction.

5 We will consider the case where the good has only one relevant characteristic and suppose that we have fitted the simple regression model shown. Hence, given a new variety of the good with characteristic X = X*, the model gives us the predicted price.
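In this one-characteristic case, the true model, the fitted model, and the prediction presumably reduce to

\[ P = \beta_1 + \beta_2 X + u, \qquad \hat{P} = b_1 + b_2 X, \qquad \hat{P}^* = b_1 + b_2 X^* \]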

6 We will assume that the model applies to the new good and therefore the actual price, conditional on X = X*, is generated as shown, where u* is the value of the disturbance term for the new good.
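That is, the actual value of P* is

\[ P^* = \beta_1 + \beta_2 X^* + u^* \]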

7 We will define the prediction error of the model, PE, as the difference between the actual price and the predicted price.
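In symbols, this definition is

\[ PE = P^* - \hat{P}^* \]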

8 Substituting for the actual and predicted prices, the prediction error is as shown.
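Substituting the two expressions above gives

\[ PE = (\beta_1 + \beta_2 X^* + u^*) - (b_1 + b_2 X^*) = (\beta_1 - b_1) + (\beta_2 - b_2)X^* + u^* \]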

9 We take expectations.

10 β1 and β2 are assumed to be fixed parameters, so they are not affected by taking expectations. Likewise, X* is assumed to be a fixed quantity and unaffected by taking expectations. However, u*, b1, and b2 are random variables.

11 E(u*) = 0 because u* is randomly drawn from the distribution for u, which we have assumed has zero population mean. Under the usual OLS assumptions, b1 will be an unbiased estimator of β1 and b2 an unbiased estimator of β2.
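Putting these pieces together:

\[ E(PE) = \bigl(\beta_1 - E(b_1)\bigr) + \bigl(\beta_2 - E(b_2)\bigr)X^* + E(u^*) = 0 + 0 \cdot X^* + 0 = 0 \]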

12 Hence the expectation of the prediction error is zero. The result generalizes easily to the case where there are multiple characteristics and the new good embodies a new combination of them.

13 The population variance of the prediction error is given by the expression shown. Unsurprisingly, this implies that the further the value of X* is from the sample mean, the larger the population variance of the prediction error will be.
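The standard expression for this variance in the simple regression model, presumably the one on the slide, is

\[ \sigma_{PE}^2 = \sigma_u^2\left[\,1 + \frac{1}{n} + \frac{(X^* - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\,\right] \]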

14 It also implies, again unsurprisingly, that the larger the sample, the smaller the population variance of the prediction error will be, with a lower limit of σu².

15 Provided that the regression model assumptions are valid, b1 and b2 will tend to their true values as the sample becomes large, so the only source of error in the prediction will be u*, and by definition this has population variance σu².

16 The standard error of the prediction error is calculated using the square root of the expression for the population variance, replacing the variance of u with the estimate obtained when fitting the model in the sample period.
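With s_u² denoting the estimate of σu² from the fitted regression (in the simple regression case, s_u² = Σe_i²/(n − 2)), the standard error is presumably

\[ \text{s.e.}(PE) = \sqrt{\,s_u^2\left[1 + \frac{1}{n} + \frac{(X^* - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right]} \]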

17 Hence we are able to construct a confidence interval for a prediction. t_crit is the critical value of t, given the significance level selected and the number of degrees of freedom, and s.e. is the standard error of the prediction.
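The interval itself is

\[ \hat{P}^* - t_{\mathrm{crit}} \times \text{s.e.} \;\le\; P^* \;\le\; \hat{P}^* + t_{\mathrm{crit}} \times \text{s.e.} \]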

18 The confidence interval has been drawn as a function of X*. As we noted from the mathematical expression, it becomes wider, the greater the distance from X* to the sample mean. [Figure: upper and lower limits of the confidence interval for P*, plotted against X.]

19 With multiple explanatory variables, the expression for the prediction variance becomes complex. One point to note is that multicollinearity may not have an adverse effect on prediction precision, even if the estimates of the coefficients have large variances.

20 For simplicity, suppose that there are two explanatory variables, that both have positive true coefficients, that they are positively correlated, and that we are predicting the value of Y* given values X2* and X3*, the model being as shown. Suppose X2 and X3 are positively correlated, β2 > 0, β3 > 0. Then cov(b2, b3) < 0. If b2 is overestimated, b3 is likely to be underestimated. (b2X2* + b3X3*) may be a good estimate of (β2X2* + β3X3*). Similarly for other combinations.
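The model and the corresponding prediction would presumably be

\[ Y = \beta_1 + \beta_2 X_2 + \beta_3 X_3 + u, \qquad \hat{Y}^* = b_1 + b_2 X_2^* + b_3 X_3^* \]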

21 Then if the effect of X2 is overestimated, so that b2 > β2, the effect of X3 is likely to be underestimated, with b3 < β3. As a consequence, the errors may to some extent cancel out, with the result that the linear combination (b2X2* + b3X3*) may be close to (β2X2* + β3X3*).

22 This will be illustrated with a simulation, with the model and data shown. We fit the model and make the prediction Ŷ* = b1 + b2X2* + b3X3*.

23 Since X2 and X3 are virtually identical, this may be approximated as Ŷ* = b1 + (b2 + b3)X2*. Thus the predictive accuracy depends on how close (b2 + b3) is to (β2 + β3), that is, to 5.
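A minimal sketch of a simulation along these lines, in Python. The data-generating values used on the slides are not reproduced in the transcript, so the sample size, the disturbance variance, and the coefficients β1 = 10, β2 = 2, β3 = 3 (chosen only so that β2 + β3 = 5, as stated above) are illustrative assumptions, and the number of replications is far smaller than the 10 million used for the figure.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative values only: the slides state that beta2 + beta3 = 5.
beta1, beta2, beta3 = 10.0, 2.0, 3.0
n = 30                          # observations per sample (assumed)
reps = 10_000                   # replications (the slides use 10 million)

X2 = np.linspace(1.0, 30.0, n)
X3 = X2 + rng.normal(0.0, 0.1, n)      # X3 virtually identical to X2
X = np.column_stack([np.ones(n), X2, X3])

b2, b3 = np.empty(reps), np.empty(reps)
for r in range(reps):
    u = rng.normal(0.0, 1.0, n)        # disturbance term, sd assumed to be 1
    Y = beta1 + beta2 * X2 + beta3 * X3 + u
    coef = np.linalg.lstsq(X, Y, rcond=None)[0]   # OLS estimates b1, b2, b3
    b2[r], b3[r] = coef[1], coef[2]

# The individual estimators are widely dispersed because of the multicollinearity,
# but their sum is tightly distributed around beta2 + beta3 = 5.
print("sd of b2:     ", b2.std().round(3))
print("sd of b3:     ", b3.std().round(3))
print("sd of b2 + b3:", (b2 + b3).std().round(3))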

24 The figure shows the distributions of b2 and b3 for 10 million samples. Their distributions have relatively large variances around their true values, as should be expected, given the multicollinearity. The actual standard deviations of their distributions are both 0.45. [Figure: distributions of b2 and b3 (standard deviation 0.45 each) and of b2 + b3 (standard deviation 0.04).]

25 The figure also shows the distribution of their sum. As anticipated, it is distributed around 5, but with a much lower standard deviation, 0.04, despite the multicollinearity affecting the point estimates of the individual coefficients.

Copyright Christopher Dougherty

These slideshows may be downloaded by anyone, anywhere for personal use. Subject to respect for copyright and, where appropriate, attribution, they may be used as a resource for teaching an econometrics course. There is no need to refer to the author.

The content of this slideshow comes from Section 3.6 of C. Dougherty, Introduction to Econometrics, fourth edition 2011, Oxford University Press. Additional (free) resources for both students and instructors may be downloaded from the OUP Online Resource Centre.

Individuals studying econometrics on their own who feel that they might benefit from participation in a formal course should consider the London School of Economics summer school course EC212 Introduction to Econometrics or the University of London International Programmes distance learning course EC2020 Elements of Econometrics.