
1 Multiple Regression Model
Error Term Assumptions
– Example 1: Locating a motor inn
Goodness of Fit (R-square)
Validity of estimates (t-stats & F-stats)
Interpreting the regression coefficients & R-square
Predictions using the regression equation
Regression Diagnostics & Fixes
– Multicollinearity, Heteroskedasticity, Serial Correlation, Non-normality of error term

2 Introduction
In this chapter we extend the simple linear regression model to allow for any number of independent variables. We will also learn to detect and address econometric problems.

3 Model and Required Conditions
We allow for k independent variables to potentially be related to the dependent variable:
y = β0 + β1x1 + β2x2 + … + βkxk + ε
where y is the dependent variable, x1, …, xk are the independent variables, β0, …, βk are the coefficients, and ε is a random error variable.

4 The simple linear regression model allows for one independent variable x:
y = β0 + β1x + ε
The multiple linear regression model allows for more than one independent variable:
y = β0 + β1x1 + β2x2 + ε
Note how the straight line y = β0 + β1x becomes a plane y = β0 + β1x1 + β2x2.

5 Required conditions for the error variable ε
– The error ε is normally distributed with mean equal to zero and a constant standard deviation σε (independent of the value of y). σε is unknown.
– The errors are independent.
These conditions are required in order to
– estimate the model coefficients,
– assess the resulting model.

6 Example 1: Where to locate a new motor inn?
– La Quinta Motor Inns is planning an expansion.
– Management wishes to predict which sites are likely to be profitable.
– Several areas where predictors of profitability can be identified: competition, market awareness, demand generators, demographics, physical quality.

7 Profitability (Margin) and its predictors
– Competition: Rooms, the number of hotel/motel rooms within 3 miles of the site.
– Market awareness: Nearest, the distance to the nearest La Quinta inn.
– Customers: Office space and College enrollment near the site.
– Community: Income, the median household income.
– Physical: Disttwn, the distance to downtown.

8 Data were collected from 100 randomly selected inns belonging to La Quinta, and the following model was run:
Margin = β0 + β1Rooms + β2Nearest + β3Office + β4College + β5Income + β6Disttwn + ε

9 Excel output
This is the sample regression equation (sometimes called the prediction equation):
MARGIN = 72.455 - 0.008 ROOMS - 1.646 NEAREST + 0.02 OFFICE + 0.212 COLLEGE - 0.413 INCOME + 0.225 DISTTWN
Let us assess this equation.
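As a cross-check, the same regression can be run outside Excel. A minimal Python sketch, assuming a hypothetical file laquinta.csv whose columns are named Margin, Rooms, Nearest, Office, College, Income, and Disttwn (the file name and column names are illustrative, not from the original data set):

```python
# Fit the six-predictor model with statsmodels; "laquinta.csv" and its
# column names are hypothetical stand-ins for the 100-inn sample.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("laquinta.csv")
fit = smf.ols("Margin ~ Rooms + Nearest + Office + College + Income + Disttwn",
              data=df).fit()
print(fit.summary())  # coefficients, R-squared, F statistic, and t statistics
```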

10 Standard error of estimate
– We need to estimate the standard error of estimate σε.
– Compare sε to the mean value of y: from the printout, Standard Error = 5.5121, to be set against the mean value of y calculated from the sample.
– It seems sε is not particularly small.
– Can we conclude the model does not fit the data well?

11 Coefficient of determination
– The definition is R² = 1 - SSE / SS(Total).
– From the printout, R² = 0.5251: 52.51% of the variation in the measure of profitability is explained by the linear regression model formulated above.
– When adjusted for degrees of freedom,
Adjusted R² = 1 - [SSE/(n-k-1)] / [SS(Total)/(n-1)] = 49.44%
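The adjusted value can be reproduced directly from R², n, and k, since SSE/SS(Total) = 1 - R². A quick sketch:

```python
# Adjusted R^2 = 1 - [SSE/(n-k-1)] / [SS(Total)/(n-1)]
#              = 1 - (1 - R^2) * (n - 1) / (n - k - 1)
n, k, r2 = 100, 6, 0.5251
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(round(adj_r2, 4))  # 0.4945, i.e. about 49.44% after rounding
```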

12 Testing the validity of the model
– We pose the question: Is there at least one independent variable linearly related to the dependent variable?
– To answer the question we test the hypotheses
H0: β1 = β2 = … = βk = 0
H1: At least one βi is not equal to zero.
– If at least one βi is not equal to zero, the model is valid.

13 The F test
To test these hypotheses we perform an analysis of variance procedure.
– Construct the F statistic:
F = MSR / MSE, where MSR = SSR/k and MSE = SSE/(n-k-1).
– Rejection region: F > Fα,k,n-k-1.
[Variation in y] = SSR + SSE. A large F results from a large SSR; then much of the variation in y is explained by the regression model, and the null hypothesis should be rejected: the model is valid. Required conditions must be satisfied.

14 [Figure: two data points (x1, y1) and (x2, y2) of a certain sample, showing the decomposition of variation around the regression line]
Total variation in y = Variation explained by the regression line + Unexplained variation (error)

15 Example 1, continued
Excel provides the ANOVA results: SSR and SSE with their mean squares MSR and MSE, and F = MSR/MSE.

16 Example 1, continued
Fα,k,n-k-1 = F0.05,6,100-6-1 = 2.17, and F = 17.14 > 2.17.
Also, the p-value (Significance F) = 3.03382×10⁻¹³. Clearly α = 0.05 > 3.03382×10⁻¹³, and the null hypothesis is rejected.
Conclusion: There is sufficient evidence to reject the null hypothesis in favor of the alternative. At least one of the βi is not equal to zero; thus, at least one independent variable is linearly related to y. This linear regression model is valid.
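Since F = (R²/k) / ((1 - R²)/(n-k-1)), the statistic and its p-value can be recovered from the printout's R² alone; a sketch using scipy:

```python
from scipy.stats import f

n, k, r2 = 100, 6, 0.5251
F = (r2 / k) / ((1 - r2) / (n - k - 1))
p = f.sf(F, k, n - k - 1)   # upper-tail probability of an F(6, 93) distribution
print(round(F, 2), p)       # F ~ 17.14, p on the order of 3e-13 (Significance F)
```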

17 Let us interpret the coefficients
– Intercept (72.455): the value of y when all the variables take the value zero. Since the data ranges of the independent variables do not cover the value zero, do not interpret the intercept.
– Rooms: in this model, for each additional 1,000 rooms within 3 miles of the La Quinta inn, the operating margin decreases on average by 7.6% (assuming the other variables are held constant).

18
– Nearest: for each additional mile between the nearest competitor and the La Quinta inn, the average operating margin decreases by 1.65%.
– Office: for each additional 1,000 sq ft of office space, the average increase in operating margin is 0.02%.
– College: for each additional thousand students, MARGIN increases by 0.21%.
– Income: for each additional $1,000 increase in median household income, MARGIN decreases by 0.41%.
– Disttwn: for each additional mile to the downtown center, MARGIN increases by 0.23% on average.

19 Testing the coefficients
– The hypotheses for each βi:
H0: βi = 0
H1: βi ≠ 0
– Test statistic: t = (bi - βi) / s(bi), with d.f. = n - k - 1 (see the Excel printout for the individual t statistics and p-values).

20 Using the linear regression equation
– The model can be used for
producing a prediction interval for a particular value of y, for a given set of values of xi;
producing an interval estimate for the expected value of y, for a given set of values of xi.
– The model can be used to learn about relationships between the independent variables xi and the dependent variable y, by interpreting the coefficients βi.

21 Example 1, continued: produce predictions
– Predict the MARGIN of an inn at a site with the following characteristics: 3,815 rooms within 3 miles, closest competitor 3.4 miles away, 476,000 sq ft of office space, 24,500 college students, $39,000 median household income, and 3.6 miles to the downtown center.
MARGIN = 72.455 - 0.008(3815) - 1.646(3.4) + 0.02(476) + 0.212(24.5) - 0.413(39) + 0.225(3.6) = 37.1%
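The arithmetic is a dot product of the coefficient vector with the site's characteristics. Note that plugging in the rounded coefficients printed above gives roughly 35.8%; the slide's 37.1% comes from the unrounded Excel coefficients.

```python
import numpy as np

# rounded coefficients from the printed equation; order: intercept,
# Rooms, Nearest, Office, College, Income, Disttwn
b = np.array([72.455, -0.008, -1.646, 0.02, 0.212, -0.413, 0.225])
x = np.array([1, 3815, 3.4, 476, 24.5, 39, 3.6])  # site values in the slide's units
print(round(b @ x, 1))  # ~35.8 with rounded coefficients; 37.1 with full precision
```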

22 Regression Diagnostics - II
The required conditions for the model assessment to apply must be checked.
– Is the error variable normally distributed? (Jarque-Bera statistic)
– Is the error variance constant? (White test)
– Are the errors independent? (Durbin-Watson test)
– Can we identify outliers?
– Is multicollinearity a problem?

23 Example 2: House price and multicollinearity
– A real estate agent believes that a house's selling price can be predicted using the house size, number of bedrooms, and lot size.
– A random sample of 100 houses was drawn and the data recorded.
– Analyze the relationship among the four variables.

24 Solution
The proposed model is
PRICE = β0 + β1BEDROOMS + β2H-SIZE + β3LOTSIZE + ε
– Excel solution: the model is valid, but no variable is significantly related to the selling price!

25 However,
– when regressing the price on each independent variable alone, each variable is found to be strongly related to the selling price.
– Multicollinearity is the source of this problem.
Multicollinearity causes two kinds of difficulties:
– The t statistics appear to be too small.
– The β coefficients cannot be interpreted as "slopes".
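A standard diagnostic for this situation, not named on the slide but commonly paired with it, is the variance inflation factor (VIF). A sketch on synthetic data constructed to mimic the house-price collinearity (all names and data below are illustrative):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
bedrooms = rng.integers(2, 6, 100).astype(float)
h_size = 500 * bedrooms + rng.normal(0, 100, 100)  # house size tracks bedrooms
lotsize = 1.5 * h_size + rng.normal(0, 150, 100)   # lot size tracks house size
X = sm.add_constant(np.column_stack([bedrooms, h_size, lotsize]))
for i, name in enumerate(["const", "BEDROOMS", "H_SIZE", "LOTSIZE"]):
    print(name, round(variance_inflation_factor(X, i), 1))
# VIFs far above 10 for the predictors flag severe multicollinearity
```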

26 [Figure: residuals plotted against the predicted values ŷ, with an even band of points]
The spread of the data points does not change much. When the requirement of a constant variance is met we have homoscedasticity.

27 Heteroscedasticity
– When the requirement of a constant variance is violated we have heteroscedasticity. The plot of the residuals vs. the predicted values of y will exhibit a cone shape.
[Figure: residuals fanning out as ŷ increases; the spread increases with ŷ]

28 Detection/Fix for Heteroscedasticity
Detection: White test (uses a chi-square statistic).
Fix: White correction (keeps the OLS coefficient estimates but replaces the usual standard errors with heteroscedasticity-robust ones, so the estimated variances of the OLS estimators are no longer distorted).
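Both steps are available in statsmodels: het_white implements the chi-square version of the White test, and get_robustcov_results applies a White-type correction to the standard errors. A sketch on synthetic heteroscedastic data standing in for real residuals:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 200)
y = 2 + 3 * x + rng.normal(0, x)          # error spread grows with x: heteroscedastic
X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
lm_stat, lm_p, _, _ = het_white(ols.resid, X)
print(round(lm_stat, 2), round(lm_p, 4))  # small p rejects constant variance
robust = ols.get_robustcov_results(cov_type="HC3")  # same slopes, corrected SEs
print(robust.bse)                         # heteroscedasticity-robust standard errors
```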

29 [Figure: residuals plotted over time]
Patterns in the appearance of the residuals over time indicate that autocorrelation exists. Note the runs of positive residuals replaced by runs of negative residuals, and the oscillating behavior of the residuals around zero.

30 Positive first-order autocorrelation occurs when consecutive residuals tend to be similar. Then the value of d is small (less than 2).
Negative first-order autocorrelation occurs when consecutive residuals tend to differ markedly. Then the value of d is large (greater than 2).
[Figures: residuals over time illustrating each case]

31 Autocorrelation or Serial Correlation: The Durbin-Watson Test
– This test detects first-order autocorrelation between consecutive residuals in a time series.
– If autocorrelation exists, the error variables are not independent.
The statistic is d = Σ(ei - ei-1)² / Σ ei², summing the numerator over i = 2, …, n and the denominator over i = 1, …, n, where ei is the residual at time i.

32 One-tail test for positive first-order autocorrelation
– If d < dL, there is enough evidence to show that positive first-order correlation exists.
– If d > dU, there is not enough evidence to show that positive first-order correlation exists.
– If d is between dL and dU, the test is inconclusive.
One-tail test for negative first-order autocorrelation
– If d > 4-dL, negative first-order correlation exists.
– If d < 4-dU, negative first-order correlation does not exist.
– If d falls between 4-dU and 4-dL, the test is inconclusive.

33 Two-tail test for first-order autocorrelation
– If d < dL or d > 4-dL, first-order autocorrelation exists.
– If d falls between dL and dU, or between 4-dU and 4-dL, the test is inconclusive.
– If d falls between dU and 4-dU, there is no evidence of first-order autocorrelation.
[Figure: the d scale from 0 to 4, with the regions where first-order correlation exists, the inconclusive bands, and the no-correlation region marked at dL, dU, 2, 4-dU, and 4-dL]

34 Example 3
– How does the weather affect the sales of lift tickets at a ski resort?
– Data on the past 20 years of ticket sales, along with the total snowfall and the average temperature during Christmas week in each year, were collected.
– The model hypothesized was
TICKETS = β0 + β1SNOWFALL + β2TEMPERATURE + ε
– Regression analysis yielded the following results:

35 The model seems to be very poor:
– The fit is very low (R-square = 0.12).
– It is not valid (Significance F = 0.33).
– No variable is linearly related to Sales.
Diagnosis of the required conditions produced the following findings:

36 [Figures: residuals over time; residuals vs. predicted y; the error distribution]
– The errors are not independent.
– The error variance is constant.
– The errors may be normally distributed.

37 Test for positive first-order autocorrelation: n = 20, k = 2.
From the Durbin-Watson table, dL = 1.10 and dU = 1.54. The statistic is d = 0.59.
Conclusion: Because d < dL, there is sufficient evidence to infer that positive first-order autocorrelation exists.
Using the computer (Excel), on the residuals:
– Tools > Data Analysis > Regression (check the residual option, then OK).
– Tools > Data Analysis Plus > Durbin-Watson Statistic > highlight the range of the residuals from the regression run > OK.
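Outside Excel, the statistic is a single call. A sketch on simulated positively autocorrelated residuals standing in for the ticket regression's residuals:

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(2)
e = np.zeros(20)
for t in range(1, 20):               # AR(1) stand-in residuals, rho = 0.8
    e[t] = 0.8 * e[t - 1] + rng.normal()
d = durbin_watson(e)
print(round(d, 2))  # typically well below d_L = 1.10, like the slide's d = 0.59
```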

38 The modified regression model
TICKETS = β0 + β1SNOWFALL + β2TEMPERATURE + β3YEARS + ε
The autocorrelation has occurred over time, so a time-dependent variable added to the model may correct the problem.
All the required conditions are met for this model. The fit of this model is high (R² = 0.74) and the model is useful (Significance F = 5.93E-5). SNOWFALL and YEARS are linearly related to ticket sales; TEMPERATURE is not.
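A sketch of the fix with stand-in data (the simulated series below only mimics the situation described; the resort's real 20 seasons of records are not reproduced here):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
snowfall = rng.uniform(100, 300, 20)
temperature = rng.uniform(15, 35, 20)
years = np.arange(1, 21)                 # the added time-trend variable
tickets = 5000 + 20 * snowfall + 500 * years + rng.normal(0, 400, 20)

X = sm.add_constant(np.column_stack([snowfall, temperature, years]))
fit = sm.OLS(tickets, X).fit()
print(fit.summary())  # YEARS absorbs the drift that produced the autocorrelation
```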

39 Non-normality of the Error Term
Indications of a non-normal distribution:
– The mean is quite different from the median of the residuals.
– The skewness of the residuals is not zero.
– The kurtosis is much different from 3.
(Formal test: the Jarque-Bera test.)
Fix: transform the dependent variable, e.g. log(y), y², √y, or 1/y.
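A sketch of the formal check using scipy (the skewed residuals below are simulated stand-ins for real regression residuals):

```python
import numpy as np
from scipy.stats import jarque_bera

rng = np.random.default_rng(4)
resid = rng.exponential(1.0, 200) - 1.0   # skewed stand-in residuals
stat, p = jarque_bera(resid)
print(round(stat, 2), round(p, 4))        # small p rejects normality
# a common first fix: refit the model on a transformed y, e.g. np.log(y)
```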

