Lecture 9 - Chapter 19: Multiple regression

19.1 Introduction
In this chapter we extend the simple linear regression model and allow for any number of independent variables. We expect to build a model that fits the data better than the simple linear regression model does.

We will use the computer printout to:
– Assess the model: How well does it fit the data? Is it useful? Are any required conditions violated?
– Employ the model: interpreting the coefficients; making predictions using the prediction equation; estimating the expected value of the dependent variable.

19.2 Model and required conditions
We allow for k independent variables to potentially be related to the dependent variable:
y = β0 + β1x1 + β2x2 + … + βkxk + ε
where y is the dependent variable, x1, …, xk are the independent variables, β0, …, βk are the coefficients and ε is the random error variable.

The simple linear regression model allows for one independent variable x:
y = β0 + β1x + ε
The multiple linear regression model allows for more than one independent variable:
y = β0 + β1x1 + β2x2 + ε
Note how the straight line y = β0 + β1x becomes the plane y = β0 + β1x1 + β2x2, and …

… a parabola becomes a parabolic surface.

Required conditions for the error variable ε
– The mean of ε is zero: E(ε) = 0.
– The standard deviation of ε is a constant (σε).
– The errors are independent.
– The errors are independent of the independent variables.
– The error ε is normally distributed.
These conditions are required in order to:
– estimate the model coefficients with desirable properties
– test hypotheses about the model coefficients
– assess the resulting model.

19.3 Estimating the coefficients and assessing the model
The procedure:
– Obtain the model coefficients and statistics using statistical computer software.
– Diagnose violations of required conditions, and try to remedy any problems identified.
– Assess the model fit and usefulness using the model statistics.
– If the model passes the assessment tests, use it to interpret the coefficients and generate predictions.

Example 19.1
The Holiday Inns group is planning an expansion, and management wishes to predict which sites are likely to be profitable. Predictors of profitability can be identified in several areas:
– competition
– market awareness
– demand generators
– demographics
– physical quality.

Operating margin (Margin) is the measure of profitability. The profitability factors are operationalised as follows:
– Competition: Rooms, the number of hotel/motel rooms within 3 km of the site
– Market awareness: Nearest, the distance to the nearest Holiday Inn
– Customers: Office space and University enrolment
– Community: Income, the median household income
– Physical: Distance to town (distance to downtown).

Data were collected from 100 randomly selected Holiday Inns, and the following suggested model was run:
Margin = β0 + β1Rooms + β2Nearest + β3Office + β4Enrolment + β5Income + β6Distance to town + ε

Excel output
This is the sample regression equation (sometimes called the prediction equation):
MARGIN = b0 − 0.0076ROOMS − 1.65NEAREST + 0.02OFFICE + 0.21ENROLMT − 0.41INCOME + 0.23DISTTWN
where b0 is the intercept estimate shown on the printout. Let us assess this equation.
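The lecture obtains this output from Excel's regression tool. As a minimal sketch, the same model could be fitted in Python with statsmodels; the file name inns.csv and the variable coding are assumptions for illustration, not part of the lecture.

import pandas as pd
import statsmodels.api as sm

inns = pd.read_csv("inns.csv")   # hypothetical file holding the 100 sampled inns
X = sm.add_constant(inns[["ROOMS", "NEAREST", "OFFICE",
                          "ENROLMT", "INCOME", "DISTTWN"]])
model = sm.OLS(inns["MARGIN"], X).fit()
print(model.summary())           # coefficients, standard error, R-squared, F-test, t-tests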

Standard error of estimate
– We need to estimate the standard error of estimate
  sε = √(SSE/(n − k − 1))
  where k is the number of independent (X) variables.
– Compare sε with the mean value of y; both the standard error and the mean of y are obtained from the printout.
– It seems sε is not particularly small (relative to the y values).
– Can we conclude that the model does not fit the data well?

Coefficient of determination, R²
– The definition is R² = 1 − SSE/SSy.
– From the printout, R² = 0.5251: 52.51% of the variation in the measure of profitability is explained by the linear regression model formulated above.
– When adjusted for degrees of freedom,
  adjusted R² = 1 − [SSE/(n − k − 1)]/[SSy/(n − 1)] = 49.44%
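As a sketch, the standard error of estimate, R² and adjusted R² can be computed directly from the sums of squares, assuming `model` is the fitted result from the earlier statsmodels sketch:

import numpy as np

n = int(model.nobs)                       # 100 inns
k = int(model.df_model)                   # 6 independent variables
sse = model.ssr                           # statsmodels calls SSE "ssr" (sum of squared residuals)
ssy = model.centered_tss                  # total sum of squares SS_y
s_e = np.sqrt(sse / (n - k - 1))          # standard error of estimate
r2 = 1 - sse / ssy                        # should reproduce model.rsquared
adj_r2 = 1 - (sse / (n - k - 1)) / (ssy / (n - 1))
print(s_e, r2, adj_r2)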

Testing the utility of the model
We pose the question: is there at least one independent variable linearly related to the dependent variable? To answer it, we test the hypotheses
H0: β1 = β2 = … = βk = 0
HA: at least one βi is not equal to zero.
If at least one βi is not equal to zero, the model is useful.

To test these hypotheses we perform an analysis of variance procedure: the F-test.
– Construct the F-statistic: F = MSR/MSE, where MSR = SSR/k and MSE = SSE/(n − k − 1).
– Rejection region: F > Fα,k,n−k−1.
SST = [total variation in y] = SSR + SSE. A large F results from a large SSR; in that case much of the variation in y is explained by the regression model, the null hypothesis is rejected, and the model is useful. (The required conditions must be satisfied.)
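A sketch of the same F-test done by hand, assuming `model`, `n` and `k` from the earlier sketches:

from scipy import stats

msr = model.ess / k                      # MSR = SSR / k (statsmodels calls SSR "ess")
mse = model.ssr / (n - k - 1)            # MSE = SSE / (n - k - 1)
F = msr / mse
p_value = stats.f.sf(F, k, n - k - 1)    # area to the right of F under F(k, n-k-1)
print(F, p_value)                        # should match model.fvalue and model.f_pvalue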

Example – continued. Excel provides the ANOVA results: SSR, SSE, MSR, MSE and F = MSR/MSE.

Example – continued
Fα,k,n−k−1 = F0.05,6,93 = 2.17, and the computed F exceeds 2.17. Also, the p-value (Significance F) is of the order of 10⁻¹³; clearly α = 0.05 > p-value, and the null hypothesis is rejected. Conclusion: there is sufficient evidence to reject the null hypothesis in favour of the alternative hypothesis; at least one of the βi is not equal to zero. Thus, at least one independent variable is linearly related to y, and this linear regression model is useful.

Interpreting the coefficients
b0: This is the intercept, the value of y when all the variables take the value zero. Since the data ranges of the independent variables do not cover the value zero, do not interpret the intercept.
In this model, for each additional 1 000 rooms within 3 km of the Holiday Inn, the operating margin decreases on average by 7.6% (assuming the other variables are held constant).

– In this model, for each additional km that the nearest competitor is from the Holiday Inn, the average operating margin decreases by 1.65%.
– For each additional 1 000 sq-metres of office space, the average operating margin increases by 0.02%.
– For each additional thousand students, MARGIN increases by 0.21%.
– For each additional $1 000 of median household income, MARGIN decreases by 0.41%.
– For each additional km to downtown, MARGIN increases by 0.23% on average.

Testing the coefficients
For each βi we test the hypotheses
H0: βi = 0
HA: βi ≠ 0
Test statistic: t = (bi − βi)/s_bi, with d.f. = n − k − 1. The t-statistics and p-values appear on the Excel printout.
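As a sketch, the t-test for a single coefficient (ROOMS is used here purely as an example) can be reproduced from the fitted `model` of the earlier sketch:

from scipy import stats

b = model.params["ROOMS"]                    # estimated coefficient b_i
s_b = model.bse["ROOMS"]                     # its standard error s_bi
t = (b - 0) / s_b                            # test statistic under H0: beta_i = 0
p = 2 * stats.t.sf(abs(t), df=n - k - 1)     # two-tailed p-value
print(t, p)                                  # matches the "t Stat" and "P-value" columns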

Using the regression equation
The model can be used by:
– producing a prediction interval for a particular value of y, for a given set of values of the xi
– producing an interval estimate for the expected value of y, for a given set of values of the xi.
The model can also be used to learn about the relationships between the independent variables xi and the dependent variable y, by interpreting the coefficients βi.

Example – continued
Predict the MARGIN of an inn at a site with the following characteristics:
– 3 815 rooms within 3 km
– closest competitor 3.4 km away
– 476 000 sq-metres of office space
– 24 500 university students
– $39 000 median household income
– 3.6 km distance to the downtown centre.
MARGIN = b0 − 0.0076(3 815) − 1.65(3.4) + 0.02(476) + 0.21(24.5) − 0.413(39) + 0.23(3.6) = 37.1%
where b0 and the coefficient values are read from the Excel printout.
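A sketch of the same prediction, assuming `model` from the earlier statsmodels sketch; the variable coding (thousands of sq-metres, thousands of students and dollars) follows the printout, and the column order must match the fitted design matrix. get_prediction also returns the two intervals described on the previous slide:

import pandas as pd

site = pd.DataFrame({"const": [1.0], "ROOMS": [3815], "NEAREST": [3.4],
                     "OFFICE": [476], "ENROLMT": [24.5],
                     "INCOME": [39], "DISTTWN": [3.6]})
pred = model.get_prediction(site)
print(pred.predicted_mean)                   # point prediction of MARGIN
print(pred.conf_int(obs=True, alpha=0.05))   # 95% prediction interval for a particular y
print(pred.conf_int(obs=False, alpha=0.05))  # 95% interval estimate for the expected value of y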

19.4 Regression diagnostics – II
The required conditions for the model assessment to apply must be checked (the graphical checks are sketched in code after this list).
– Is the error variable normally distributed? Draw a histogram of the residuals, or use a χ² test for normality.
– Is the error variance constant? Plot the residuals versus the predicted values ŷ.
– Are the errors independent? Plot the residuals versus the time periods.
– Can we identify outliers?
– Is multicollinearity a problem? Calculate the pairwise correlation coefficients of the independent variables.
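A sketch of the three graphical checks, assuming `model` from the earlier statsmodels sketch:

import matplotlib.pyplot as plt

resid = model.resid
fitted = model.fittedvalues
fig, ax = plt.subplots(1, 3, figsize=(12, 3))
ax[0].hist(resid, bins=12)       # normality: histogram of the residuals
ax[1].scatter(fitted, resid)     # constant variance: residuals vs predicted y-hat
ax[2].plot(resid.values)         # independence: residuals in observation (time) order
plt.show()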

Example 19.2
– A real estate agent believes that a house's selling price can be predicted using the house size, the number of bedrooms and the lot size.
– A random sample of 100 houses was drawn and the data recorded.
– Analyse the relationship among the four variables.

Solution
– The proposed model is
PRICE = β0 + β1BEDROOMS + β2H-SIZE + β3LOTSIZE + ε
Excel solution: the model is useful, but no single variable is significantly related to the selling price!

However,
– when the price is regressed on each independent variable alone, each variable is found to be strongly related to the selling price
– multicollinearity is the source of this problem.
Multicollinearity causes two kinds of difficulties:
– the t statistics appear to be too small
– the β coefficients cannot be interpreted as 'slopes'.
A check for multicollinearity is sketched below.
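A sketch of the multicollinearity check: the pairwise correlations follow the text, and variance inflation factors are an extra diagnostic not used in the lecture. The file houses.csv and its column names are assumptions for illustration:

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

houses = pd.read_csv("houses.csv")    # hypothetical file with the 100 sampled houses
predictors = houses[["BEDROOMS", "HSIZE", "LOTSIZE"]]
print(predictors.corr())              # large pairwise correlations signal multicollinearity

X = sm.add_constant(predictors)
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, variance_inflation_factor(X.values, i))  # VIFs well above 10 are a red flag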

Remedying violations of required conditions
– Non-normality or heteroscedasticity can be remedied using transformations of the y variable.
– The transformations can improve the linear relationship between the dependent variable and the independent variables.
– Many computer software systems allow us to make the transformations easily.

A brief list of transformations
– y′ = log y (for y > 0): use when σε increases with y, or when the error distribution is positively skewed.
– y′ = y²: use when σ²ε is proportional to E(y)², or when the error distribution is negatively skewed.
– y′ = y^½ (for y > 0): use when σ²ε is proportional to E(y).
– y′ = 1/y: use when σ²ε increases significantly when y increases beyond some value.

Example 19.3
– A statistics lecturer wanted to know whether the time limit affects the marks on a quiz.
– A random sample of 100 students was split into five groups.
– Each student did a quiz, but each group was given a different time limit (see the marks data).
– Analyse these results and include diagnostics.

The model tested: MARK = β0 + β1TIME + ε
This model is useful and provides a good fit. The errors seem to be normally distributed.

The standard error of estimate seems to increase with the predicted value of y. Two transformations are used to remedy this problem:
1. y′ = logₑ y
2. y′ = 1/y

Let us see what happens when a transformation is applied. In the original data, 'mark' is a function of 'time'; for example, at time = 40 the marks 18 and 23 appear. In the modified data, LogMark is a function of 'time', and these points become logₑ18 = 2.89 and logₑ23 = 3.14.

The new regression analysis and diagnostics are based on the model
LOGMARK = β′0 + β′1TIME + ε′
with prediction equation: Predicted LogMark = b′0 + b′1TIME (coefficient values on the printout). This model is useful and provides a good fit.

The errors seem to be normally distributed. The standard error still changes with the predicted y, but the change is smaller than before.

How do we use the modified model to predict?
Let TIME = 55 minutes. Substituting into the prediction equation gives LogMark = b′0 + b′1(55). To find the predicted mark, take the antilog: Mark = antilogₑ(LogMark) = e^LogMark.
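A sketch of the whole transform-and-back-transform procedure, assuming a hypothetical file quiz.csv with columns TIME and MARK:

import numpy as np
import pandas as pd
import statsmodels.api as sm

quiz = pd.read_csv("quiz.csv")
X = sm.add_constant(quiz["TIME"])
log_model = sm.OLS(np.log(quiz["MARK"]), X).fit()   # fit LOGMARK = b'0 + b'1 TIME

b0, b1 = log_model.params
log_mark = b0 + b1 * 55          # predicted LogMark at TIME = 55
print(np.exp(log_mark))          # antilog back to the original mark scale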

19.5 Regression diagnostics – III (time series)
Durbin–Watson test
– This test detects first-order autocorrelation between consecutive residuals in a time series.
– If autocorrelation exists, the error variables are not independent.
– The statistic is d = Σ(et − et−1)² / Σet², where et is the residual at time t (the numerator sums over t = 2, …, n and the denominator over t = 1, …, n).
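A sketch of the statistic computed directly from a series of residuals; the residual values below are invented purely to illustrate, and statsmodels' built-in durbin_watson gives the same number:

import numpy as np
from statsmodels.stats.stattools import durbin_watson

def dw(e):
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

e = np.array([1.2, 0.8, 1.1, 0.3, -0.9, -1.0, -0.2, 0.7])  # toy residuals that drift slowly
print(dw(e))             # well below 2: consecutive residuals tend to be similar
print(durbin_watson(e))  # statsmodels' implementation returns the same value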

Positive first-order autocorrelation occurs when consecutive residuals tend to be similar; the value of d is then small (< 2). Negative first-order autocorrelation occurs when consecutive residuals tend to differ markedly; the value of d is then large (> 2).

– One-tail test for positive first-order autocorrelation:
If d < dL, there is enough evidence to show that positive first-order autocorrelation exists.
If d > dU, there is not enough evidence to show that positive first-order autocorrelation exists.
If d is between dL and dU, the test is inconclusive.
– One-tail test for negative first-order autocorrelation:
If d > 4 − dL, negative first-order autocorrelation exists.
If d < 4 − dU, negative first-order autocorrelation does not exist.
If d is between 4 − dU and 4 − dL, the test is inconclusive.

– Two-tail test for first-order autocorrelation:
If d < dL or d > 4 − dL, first-order autocorrelation exists.
If d falls between dL and dU, or between 4 − dU and 4 − dL, the test is inconclusive.
If d falls between dU and 4 − dU, there is no evidence of first-order autocorrelation.

Example
– How does the weather affect the sales of lift tickets at a ski resort?
– Ticket sales for the past 20 years, along with the total snowfall and the average temperature during Christmas week in each year, were collected.
– The model hypothesised was
TICKETS = β0 + β1SNOWFALL + β2TEMPERATURE + ε
– Regression analysis yielded the following results:

The model seems to be very poor:
– the fit is very low (R² = 0.12)
– it is not valid (Significance F = 0.33)
– no variable is linearly related to sales.
Diagnosis of the required conditions resulted in the following findings:

– The error distribution: the errors may be normally distributed.
– Residuals vs. predicted ŷ: the error variance is constant.
– Residuals over time: the errors are not independent.

The residuals – using the computer (Excel)
Tools > Data Analysis > Regression (check the residuals option, then OK)
Tools > Data Analysis Plus > Durbin–Watson Statistic > highlight the range of residuals from the regression run > OK
Test for positive first-order autocorrelation: n = 20, k = 2. From the Durbin–Watson table, dL = 1.10 and dU = 1.54; the computed statistic d is less than dL.
Conclusion: because d < dL, there is sufficient evidence to infer that positive first-order autocorrelation exists.

The modified regression model:
TICKETS = β0 + β1SNOWFALL + β2TEMPERATURE + β3YEARS + ε
The autocorrelation has occurred over time, so a time-dependent variable added to the model may correct the problem. For this model all the required conditions are met, the fit is high, and the model is useful (Significance F = 5.93 × 10⁻⁵). SNOWFALL and YEARS are linearly related to ticket sales; TEMPERATURE is not.
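A sketch of fitting the modified model with the added time variable, assuming a hypothetical file ski.csv with columns TICKETS, SNOWFALL and TEMPERATURE, one row per year in time order:

import pandas as pd
import statsmodels.api as sm

ski = pd.read_csv("ski.csv")
ski["YEARS"] = range(1, len(ski) + 1)    # the added time-dependent variable: 1, 2, ..., 20
X = sm.add_constant(ski[["SNOWFALL", "TEMPERATURE", "YEARS"]])
model2 = sm.OLS(ski["TICKETS"], X).fit()
print(model2.summary())                  # re-check R-squared, Significance F and the t-tests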