Slide 1: Slides prepared by John S. Loucks, St. Edward's University. © 2005 Thomson/South-Western.

Slide 2: Chapter 14, Simple Linear Regression
- Simple Linear Regression Model
- Least Squares Method
- Coefficient of Determination
- Model Assumptions
- Testing for Significance
- Using the Estimated Regression Equation for Estimation and Prediction
- Computer Solution
- Residual Analysis: Validating Model Assumptions

Slide 3: Simple Linear Regression Model
The equation that describes how y is related to x and an error term is called the regression model. The simple linear regression model is:
y = β0 + β1x + ε
where β0 and β1 are called parameters of the model, and ε is a random variable called the error term.

Slide 4: Simple Linear Regression Equation
The simple linear regression equation is:
E(y) = β0 + β1x
- The graph of the regression equation is a straight line.
- β0 is the y-intercept of the regression line.
- β1 is the slope of the regression line.
- E(y) is the expected value of y for a given x value.

Slide 5: Simple Linear Regression Equation: Positive Linear Relationship
[Graph: regression line with y-intercept β0 and positive slope β1; E(y) on the vertical axis, x on the horizontal axis.]

Slide 6: Simple Linear Regression Equation: Negative Linear Relationship
[Graph: regression line with y-intercept β0 and negative slope β1; E(y) on the vertical axis, x on the horizontal axis.]

Slide 7: Simple Linear Regression Equation: No Relationship
[Graph: horizontal regression line at the intercept β0; slope β1 is 0.]

Slide 8: Estimated Simple Linear Regression Equation
The estimated simple linear regression equation is:
ŷ = b0 + b1x
- The graph is called the estimated regression line.
- b0 is the y-intercept of the line.
- b1 is the slope of the line.
- ŷ is the estimated value of y for a given x value.

Slide 9: Estimation Process
Regression model: y = β0 + β1x + ε
Regression equation: E(y) = β0 + β1x
Unknown parameters: β0, β1
Sample data: (x1, y1), ..., (xn, yn)
Sample statistics: b0, b1
Estimated regression equation: ŷ = b0 + b1x
b0 and b1 provide estimates of β0 and β1.

Slide 10: Least Squares Method
Least squares criterion: min Σ(yi - ŷi)²
where:
yi = observed value of the dependent variable for the ith observation
ŷi = estimated value of the dependent variable for the ith observation

Slide 11: Least Squares Method
Slope for the estimated regression equation:
b1 = Σ(xi - x̄)(yi - ȳ) / Σ(xi - x̄)²

Slide 12: Least Squares Method
y-intercept for the estimated regression equation:
b0 = ȳ - b1x̄
where:
xi = value of the independent variable for the ith observation
yi = value of the dependent variable for the ith observation
x̄ = mean value of the independent variable
ȳ = mean value of the dependent variable
n = total number of observations

Slide 13: Simple Linear Regression, Example: Reed Auto Sales
Reed Auto periodically has a special week-long sale. As part of the advertising campaign, Reed runs one or more television commercials during the weekend preceding the sale. Data from a sample of 5 previous sales are shown on the next slide.

Slide 14: Example: Reed Auto Sales
Number of TV Ads    Number of Cars Sold
1                   14
3                   24
2                   18
1                   17
3                   27

Slide 15: Estimated Regression Equation
Slope for the estimated regression equation: b1 = Σ(xi - x̄)(yi - ȳ) / Σ(xi - x̄)² = 20/4 = 5
y-intercept for the estimated regression equation: b0 = ȳ - b1x̄ = 20 - 5(2) = 10
Estimated regression equation: ŷ = 10 + 5x
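As an illustration that is not part of the original slides, the least squares estimates above can be checked with a short Python sketch (NumPy assumed available; the data are those from the Reed Auto sample slide, and variable names are illustrative):

```python
# Minimal sketch: least squares estimates for the Reed Auto sample
# (x = number of TV ads, y = number of cars sold).
import numpy as np

x = np.array([1, 3, 2, 1, 3], dtype=float)
y = np.array([14, 24, 18, 17, 27], dtype=float)

x_bar, y_bar = x.mean(), y.mean()
b1 = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)  # slope = 5
b0 = y_bar - b1 * x_bar                                            # intercept = 10

print(f"y-hat = {b0:.0f} + {b1:.0f}x")   # y-hat = 10 + 5x
```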

Slide 16: Scatter Diagram and Trend Line
[Scatter plot of cars sold versus TV ads with the trend line ŷ = 10 + 5x.]
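A scatter diagram like the one described on this slide can be produced with the following sketch (matplotlib assumed available; the trend line uses the estimated equation ŷ = 10 + 5x):

```python
# Sketch: scatter diagram of the Reed Auto sample with the estimated trend line.
import numpy as np
import matplotlib.pyplot as plt

x = np.array([1, 3, 2, 1, 3], dtype=float)
y = np.array([14, 24, 18, 17, 27], dtype=float)

plt.scatter(x, y, label="observed sales")
x_line = np.linspace(x.min(), x.max(), 100)
plt.plot(x_line, 10 + 5 * x_line, label="y-hat = 10 + 5x")
plt.xlabel("Number of TV Ads")
plt.ylabel("Number of Cars Sold")
plt.legend()
plt.show()
```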

Slide 17: Coefficient of Determination
Relationship among SST, SSR, and SSE: SST = SSR + SSE
where:
SST = total sum of squares
SSR = sum of squares due to regression
SSE = sum of squares due to error

Slide 18: Coefficient of Determination
The coefficient of determination is: r² = SSR/SST
where:
SSR = sum of squares due to regression
SST = total sum of squares

Slide 19: Coefficient of Determination
r² = SSR/SST = 100/114 = .8772
The regression relationship is very strong; 88% of the variability in the number of cars sold can be explained by the linear relationship between the number of TV ads and the number of cars sold.
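The sums of squares and r² reported above can be reproduced with this sketch (again assuming NumPy and the Reed Auto data; not part of the original slides):

```python
# Sketch: SST, SSR, SSE, and the coefficient of determination for y-hat = 10 + 5x.
import numpy as np

x = np.array([1, 3, 2, 1, 3], dtype=float)
y = np.array([14, 24, 18, 17, 27], dtype=float)
y_hat = 10 + 5 * x

sst = np.sum((y - y.mean()) ** 2)      # total sum of squares = 114
ssr = np.sum((y_hat - y.mean()) ** 2)  # sum of squares due to regression = 100
sse = np.sum((y - y_hat) ** 2)         # sum of squares due to error = 14

r_squared = ssr / sst                  # 100/114 = .8772
print(sst, ssr, sse, round(r_squared, 4))
```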

Slide 20: Sample Correlation Coefficient
rxy = (sign of b1) √r²
where b1 = the slope of the estimated regression equation.

Slide 21: Sample Correlation Coefficient
The sign of b1 in the equation ŷ = 10 + 5x is "+".
rxy = +√.8772 = +.9366
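A quick check of the sample correlation coefficient, as a sketch using only the standard library and the values from the slides above:

```python
# Sketch: sample correlation coefficient from r^2 and the sign of the slope b1.
import math

b1 = 5.0
r_squared = 100 / 114
r_xy = math.copysign(math.sqrt(r_squared), b1)   # +.9366
print(round(r_xy, 4))
```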

Slide 22: Assumptions About the Error Term ε
1. The error ε is a random variable with mean of zero.
2. The variance of ε, denoted by σ², is the same for all values of the independent variable.
3. The values of ε are independent.
4. The error ε is a normally distributed random variable.

Slide 23: Testing for Significance
- To test for a significant regression relationship, we must conduct a hypothesis test to determine whether the value of β1 is zero.
- Two tests are commonly used: the t test and the F test.
- Both the t test and the F test require an estimate of σ², the variance of ε in the regression model.

Slide 24: Testing for Significance: An Estimate of σ²
The mean square error (MSE) provides the estimate of σ², and the notation s² is also used.
s² = MSE = SSE/(n - 2)

Slide 25: Testing for Significance: An Estimate of σ
To estimate σ we take the square root of s². The resulting s is called the standard error of the estimate.
s = √MSE = √(SSE/(n - 2))
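For the Reed Auto example the estimate of σ can be computed as in this sketch (n = 5 and SSE = 14 come from the earlier slides):

```python
# Sketch: s^2 = MSE = SSE/(n - 2) and the standard error of the estimate s.
import math

n, sse = 5, 14.0
mse = sse / (n - 2)    # s^2 = MSE = 4.667
s = math.sqrt(mse)     # standard error of the estimate = 2.16
print(round(mse, 3), round(s, 3))
```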

Slide 26: Testing for Significance: t Test
Hypotheses: H0: β1 = 0 versus Ha: β1 ≠ 0
Test statistic: t = b1 / s_b1, where s_b1 = s / √Σ(xi - x̄)² is the estimated standard deviation of b1.

Slide 27: Testing for Significance: t Test
Rejection rule: Reject H0 if p-value < α, or if t ≤ -t(α/2) or t ≥ t(α/2)
where t(α/2) is based on a t distribution with n - 2 degrees of freedom.

Slide 28: Testing for Significance: t Test
1. Determine the hypotheses: H0: β1 = 0, Ha: β1 ≠ 0
2. Specify the level of significance: α = .05
3. Select the test statistic: t = b1 / s_b1
4. State the rejection rule: Reject H0 if p-value < .05 or |t| > 3.182 (with 3 degrees of freedom)

Slide 29: Testing for Significance: t Test
5. Compute the value of the test statistic: t = b1 / s_b1 = 5/1.08 = 4.63
6. Determine whether to reject H0: t = 4.541 provides an area of .01 in the upper tail, so the p-value is less than .02. (Also, t = 4.63 > 3.182.) We can reject H0.
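The t test above can be reproduced with the sketch below (SciPy assumed available for the critical value and p-value; data from the Reed Auto sample):

```python
# Sketch: t test for beta_1 = 0 in the Reed Auto example.
import numpy as np
from scipy import stats

x = np.array([1, 3, 2, 1, 3], dtype=float)
y = np.array([14, 24, 18, 17, 27], dtype=float)
n = len(x)
b1 = 5.0
s = np.sqrt(14.0 / (n - 2))                        # standard error of the estimate

s_b1 = s / np.sqrt(np.sum((x - x.mean()) ** 2))    # 2.16/2 = 1.08
t_stat = b1 / s_b1                                 # 4.63
t_crit = stats.t.ppf(0.975, df=n - 2)              # 3.182
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)    # about .019, below alpha = .05

print(round(t_stat, 2), round(t_crit, 3), round(p_value, 3))
```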

Slide 30: Confidence Interval for β1
- We can use a 95% confidence interval for β1 to test the hypotheses just used in the t test.
- H0 is rejected if the hypothesized value of β1 is not included in the confidence interval for β1.

Slide 31: Confidence Interval for β1
The form of a confidence interval for β1 is:
b1 ± t(α/2) s_b1
where b1 is the point estimator, t(α/2) s_b1 is the margin of error, and t(α/2) is the t value providing an area of α/2 in the upper tail of a t distribution with n - 2 degrees of freedom.

Slide 32: Confidence Interval for β1
Rejection rule: Reject H0 if 0 is not included in the confidence interval for β1.
95% confidence interval for β1: b1 ± t(.025) s_b1 = 5 ± 3.182(1.08) = 5 ± 3.44, or 1.56 to 8.44
Conclusion: 0 is not included in the confidence interval. Reject H0.
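The 95% confidence interval for β1 can be checked as follows (SciPy assumed; b1 = 5 and s_b1 = 1.08 come from the slides above):

```python
# Sketch: 95% confidence interval for beta_1 with n - 2 = 3 degrees of freedom.
from scipy import stats

b1, s_b1, df = 5.0, 1.08, 3
margin = stats.t.ppf(0.975, df) * s_b1               # 3.182 * 1.08 = 3.44
print(round(b1 - margin, 2), round(b1 + margin, 2))  # roughly 1.56 to 8.44
```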

Slide 33: Testing for Significance: F Test
Hypotheses: H0: β1 = 0 versus Ha: β1 ≠ 0
Test statistic: F = MSR/MSE

Slide 34: Testing for Significance: F Test
Rejection rule: Reject H0 if p-value < α or F > F(α)
where F(α) is based on an F distribution with 1 degree of freedom in the numerator and n - 2 degrees of freedom in the denominator.

Slide 35: Testing for Significance: F Test
1. Determine the hypotheses: H0: β1 = 0, Ha: β1 ≠ 0
2. Specify the level of significance: α = .05
3. Select the test statistic: F = MSR/MSE
4. State the rejection rule: Reject H0 if p-value < .05 or F > 10.13 (with 1 d.f. in the numerator and 3 d.f. in the denominator)

Slide 36: Testing for Significance: F Test
5. Compute the value of the test statistic: F = MSR/MSE = 100/4.667 = 21.43
6. Determine whether to reject H0: F = 17.44 provides an area of .025 in the upper tail, so the p-value corresponding to F = 21.43 is less than 2(.025) = .05. Hence, we reject H0.
The statistical evidence is sufficient to conclude that we have a significant relationship between the number of TV ads aired and the number of cars sold.
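The F test computation can be reproduced with this sketch (SciPy assumed; SSR = 100, SSE = 14, and n = 5 come from the earlier slides):

```python
# Sketch: F test for the overall significance of the simple linear regression.
from scipy import stats

ssr, sse, n = 100.0, 14.0, 5
msr, mse = ssr / 1, sse / (n - 2)
f_stat = msr / mse                               # 100/4.667 = 21.43
f_crit = stats.f.ppf(0.95, dfn=1, dfd=n - 2)     # 10.13
p_value = stats.f.sf(f_stat, dfn=1, dfd=n - 2)   # about .019, below alpha = .05

print(round(f_stat, 2), round(f_crit, 2), round(p_value, 3))
```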

Slide 37: Some Cautions about the Interpretation of Significance Tests
- Rejecting H0: β1 = 0 and concluding that the relationship between x and y is significant does not enable us to conclude that a cause-and-effect relationship is present between x and y.
- Just because we are able to reject H0: β1 = 0 and demonstrate statistical significance does not enable us to conclude that there is a linear relationship between x and y.

Slide 38: Using the Estimated Regression Equation for Estimation and Prediction
Confidence interval estimate of E(yp): ŷp ± t(α/2) s_ŷp
Prediction interval estimate of yp: ŷp ± t(α/2) s_ind
where the confidence coefficient is 1 - α, t(α/2) is based on a t distribution with n - 2 degrees of freedom, s_ŷp = s √(1/n + (xp - x̄)²/Σ(xi - x̄)²), and s_ind = s √(1 + 1/n + (xp - x̄)²/Σ(xi - x̄)²).

Slide 39: Point Estimation
If 3 TV ads are run prior to a sale, we expect the mean number of cars sold to be:
ŷ = 10 + 5(3) = 25 cars

Slide 40: Confidence Interval for E(yp)
[Excel's confidence interval output.]

Slide 41: Confidence Interval for E(yp)
The 95% confidence interval estimate of the mean number of cars sold when 3 TV ads are run is:
25 ± 4.61 = 20.39 to 29.61 cars

Slide 42: Prediction Interval for yp
[Excel's prediction interval output.]

Slide 43: Prediction Interval for yp
The 95% prediction interval estimate of the number of cars sold in one particular week when 3 TV ads are run is:
25 ± 8.28 = 16.72 to 33.28 cars
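Both intervals at xp = 3 can be reproduced with the sketch below (NumPy and SciPy assumed; data from the Reed Auto sample):

```python
# Sketch: 95% confidence interval for E(y_p) and prediction interval for y_p at x_p = 3.
import numpy as np
from scipy import stats

x = np.array([1, 3, 2, 1, 3], dtype=float)
y = np.array([14, 24, 18, 17, 27], dtype=float)
n, xp = len(x), 3.0
b0, b1 = 10.0, 5.0
s = np.sqrt(np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2))   # 2.16

y_hat_p = b0 + b1 * xp                                    # 25 cars
leverage = 1 / n + (xp - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)
t_crit = stats.t.ppf(0.975, df=n - 2)

ci = y_hat_p + np.array([-1, 1]) * t_crit * s * np.sqrt(leverage)      # 20.39 to 29.61
pi = y_hat_p + np.array([-1, 1]) * t_crit * s * np.sqrt(1 + leverage)  # 16.72 to 33.28
print(np.round(ci, 2), np.round(pi, 2))
```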

Slide 44: Residual Analysis
- If the assumptions about the error term ε appear questionable, the hypothesis tests about the significance of the regression relationship and the interval estimation results may not be valid.
- The residuals provide the best information about ε.
- Residual for observation i: yi - ŷi
- Much of residual analysis is based on an examination of graphical plots.

Slide 45: Residual Plot Against x
If the assumption that the variance of ε is the same for all values of x is valid, and the assumed regression model is an adequate representation of the relationship between the variables, then the residual plot should give an overall impression of a horizontal band of points.

Slide 46: Residual Plot Against x
[Plot of residuals versus x: a good pattern, with points forming a horizontal band around 0.]

Slide 47: Residual Plot Against x
[Plot of residuals versus x showing nonconstant variance.]

Slide 48: Residual Plot Against x
[Plot of residuals versus x showing a pattern that indicates the model form is not adequate.]

Slide 49: Residual Plot Against x
[Table of residuals for the Reed Auto data.]

Slide 50: Residual Plot Against x
[Residual plot against x for the Reed Auto data.]
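A residual plot like the ones described on these slides can be produced as follows (matplotlib assumed available; the residuals come from the fitted equation ŷ = 10 + 5x):

```python
# Sketch: residuals y_i - y-hat_i for the Reed Auto fit, plotted against x.
import numpy as np
import matplotlib.pyplot as plt

x = np.array([1, 3, 2, 1, 3], dtype=float)
y = np.array([14, 24, 18, 17, 27], dtype=float)
residuals = y - (10 + 5 * x)

plt.scatter(x, residuals)
plt.axhline(0, linestyle="--")
plt.xlabel("Number of TV Ads (x)")
plt.ylabel("Residual")
plt.show()
```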

Slide 51: End of Chapter 14