
Slide 1: Conceptual Foundations (MATH21001), Lecture Week 6
Reading: Textbook, Chapter 6: Simple Linear Regression
© 2008 Pearson Education Australia. Lecture slides for this course are based on teaching materials provided or referred to by: (1) Statistics for Managers Using Excel by Levine; (2) Computer Algorithms: Introduction to Design and Analysis by Baase and Van Gelder (slides by Ben Choi to accompany Sara Baase's text); (3) Discrete Mathematics by Richard Johnsonbaugh.

Slide 2: Learning Objectives
In this lecture, you learn:
- To use regression analysis to predict the value of a dependent variable based on an independent variable
- The meaning of the regression coefficients b0 and b1
- To evaluate the assumptions of regression analysis and know what to do if the assumptions are violated
- To make inferences about the slope and correlation coefficient
- To estimate mean values and predict individual values

Slide 3: Correlation vs. Regression
- A scatter plot (or scatter diagram) can be used to show the relationship between two numerical variables
- Correlation analysis is used to measure the strength of the association (linear relationship) between two variables
- Correlation is concerned only with the strength of the relationship
- No causal effect is implied by correlation
- Correlation was first presented in Chapter 3

Slide 4: Regression Analysis
Regression analysis is used to:
- Predict the value of a dependent variable based on the value of at least one independent variable
- Explain the impact of changes in an independent variable on the dependent variable
Dependent variable: the variable you wish to explain. Independent variable: the variable used to explain the dependent variable.

Slide 5: Simple Linear Regression Model
- Only one independent variable, X
- The relationship between X and Y is described by a linear function
- Changes in Y are related to changes in X

Slide 6: Types of Relationships
[Figure: scatter plots contrasting linear relationships (points around a straight line) with curvilinear relationships (points around a curve)]

Slide 7: Types of Relationships
[Figure: scatter plots contrasting strong relationships (points tightly clustered around a line) with weak relationships (points loosely scattered around a line)]

Slide 8: Types of Relationships
[Figure: scatter plots showing no relationship between X and Y]

Slide 9: The Linear Regression Model
The population regression model:
    Y_i = β0 + β1 X_i + ε_i
where Y_i is the dependent variable, X_i the independent variable, β0 the population Y intercept, β1 the population slope coefficient, and ε_i the random error term. β0 + β1 X_i is the linear component and ε_i the random error component.

Slide 10: The Linear Regression Model
[Figure: scatter of Y against X with the population regression line of intercept β0 and slope β1; for a given X_i, the random error ε_i is the vertical distance between the observed value of Y and the predicted value of Y on the line]

Slide 11: Linear Regression Equation
The simple linear regression equation provides an estimate of the population regression line:
    Ŷ_i = b0 + b1 X_i
where Ŷ_i is the estimated (or predicted) Y value for observation i, b0 the estimate of the regression intercept, b1 the estimate of the regression slope, and X_i the value of X for observation i.

Slide 12: The Least Squares Method
b0 and b1 are obtained by finding the values of b0 and b1 that minimize the sum of the squared differences between Y and Ŷ:
    min Σ (Y_i − Ŷ_i)² = min Σ (Y_i − (b0 + b1 X_i))²
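The closed-form solutions are easy to verify directly. A minimal Python sketch (NumPy assumed), using the house-price data introduced on Slide 16:

```python
import numpy as np

# House-price example data (Slide 16): Y = price in $1000s, X = square feet
x = np.array([1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700], dtype=float)
y = np.array([245, 312, 279, 308, 199, 219, 405, 324, 319, 255], dtype=float)

# Least-squares estimates: b1 = S_xy / S_xx, b0 = ybar - b1 * xbar
x_bar, y_bar = x.mean(), y.mean()
b1 = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
b0 = y_bar - b1 * x_bar
print(b0, b1)   # approximately 98.248 and 0.10977
```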

Slide 13: Finding the Least Squares Equation
The coefficients b0 and b1, and other regression results, will be found using Excel. Formulas are shown in the textbook.

Slide 14: Interpretation of the Intercept and the Slope
- b0 is the estimated mean value of Y when the value of X is zero
- b1 is the estimated change in the mean value of Y for every one-unit change in X

Slide 15: Linear Regression Example
- A real estate agent wishes to examine the relationship between the selling price of a home and its size (measured in square feet)
- A random sample of 10 houses is selected
- Dependent variable (Y) = house price in $1000s
- Independent variable (X) = square feet

Slide 16: Linear Regression Example Data

House Price in $1000s (Y)    Square Feet (X)
245                          1400
312                          1600
279                          1700
308                          1875
199                          1100
219                          1550
405                          2350
324                          2450
319                          1425
255                          1700

Slide 17: Linear Regression Example Scatterplot
House price model: [Figure: scatter plot of house price ($1000s) against square feet]

Slide 18: Linear Regression Example Using Excel
Tools > Data Analysis > Regression

Slide 19: Linear Regression Example Excel Output

Regression Statistics
Multiple R           0.76211
R Square             0.58082
Adjusted R Square    0.52842
Standard Error       41.33032
Observations         10

ANOVA
             df    SS           MS           F         Significance F
Regression    1    18934.9348   18934.9348   11.0848   0.01039
Residual      8    13665.5652    1708.1957
Total         9    32600.5000

              Coefficients   Standard Error   t Stat    P-value   Lower 95%    Upper 95%
Intercept     98.24833       58.03348         1.69296   0.12892   -35.57720    232.07386
Square Feet    0.10977        0.03297         3.32938   0.01039     0.03374      0.18580

The regression equation is: house price = 98.24833 + 0.10977 (square feet)
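For readers without Excel, the same output can be reproduced with an ordinary least squares fit; a sketch assuming the statsmodels package is available:

```python
import numpy as np
import statsmodels.api as sm

square_feet = np.array([1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700], dtype=float)
price = np.array([245, 312, 279, 308, 199, 219, 405, 324, 319, 255], dtype=float)

X = sm.add_constant(square_feet)    # prepend a column of 1s for the intercept
results = sm.OLS(price, X).fit()    # ordinary least squares fit
print(results.summary())            # R-squared ~ 0.581, intercept ~ 98.248, slope ~ 0.10977
```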

Slide 20: Linear Regression Example Graphical Representation
House price model: [Figure: scatter plot with the fitted regression line; intercept = 98.248, slope = 0.10977]

Slide 21: Linear Regression Example Interpretation of b0
- b0 is the estimated mean value of Y when the value of X is zero (if X = 0 is in the range of observed X values)
- Because the square footage of a house cannot be 0, the Y intercept has no practical application here.

Slide 22: Linear Regression Example Interpretation of b1
- b1 estimates the change in the mean value of Y as a result of a one-unit change in X
- Here, b1 = 0.10977 tells us that the mean value of a house increases by 0.10977 × $1000 = $109.77, on average, for each additional square foot of size

Slide 23: Linear Regression Example Making Predictions
Predict the price for a house with 2000 square feet:
    house price = 98.25 + 0.1098 (square feet) = 98.25 + 0.1098 × 2000 = 317.85
The predicted price for a house with 2000 square feet is 317.85 ($1000s) = $317,850
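The arithmetic, as a quick check (coefficients rounded as on the slide):

```python
# Coefficients rounded as displayed on the slide
b0, b1 = 98.25, 0.1098
x_new = 2000                 # square feet
y_hat = b0 + b1 * x_new      # predicted price in $1000s
print(y_hat)                 # 317.85 -> $317,850
```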

Slide 24: Linear Regression Example Making Predictions
- When using a regression model for prediction, predict only within the relevant range of data (the range of X values used to fit the model; this is interpolation)
- Do not try to extrapolate beyond the range of observed X values

Slide 25: Measures of Variation
Total variation is made up of two parts:
    SST = SSR + SSE
    SST (total sum of squares) = Σ (Y_i − Ȳ)²
    SSR (regression sum of squares) = Σ (Ŷ_i − Ȳ)²
    SSE (error sum of squares) = Σ (Y_i − Ŷ_i)²
where Ȳ = mean value of the dependent variable, Y_i = observed values of the dependent variable, and Ŷ_i = predicted value of Y for the given X_i value.

Slide 26: Measures of Variation
- SST = total sum of squares: measures the variation of the Y_i values around their mean Ȳ
- SSR = regression sum of squares: explained variation attributable to the relationship between X and Y
- SSE = error sum of squares: variation attributable to factors other than the relationship between X and Y

Slide 27: Measures of Variation
[Figure: for a single observation (X_i, Y_i), the vertical distances illustrating SST = Σ(Y_i − Ȳ)², SSE = Σ(Y_i − Ŷ_i)², and SSR = Σ(Ŷ_i − Ȳ)²]

Slide 28: Coefficient of Determination, r²
- The coefficient of determination is the portion of the total variation in the dependent variable that is explained by variation in the independent variable
- It is also called r-squared and is denoted r²:
    r² = SSR / SST, with 0 ≤ r² ≤ 1
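A quick check against the Excel output on Slide 19 (values taken from its ANOVA table):

```python
# ANOVA values from the Excel output (Slide 19)
ssr = 18934.9348   # regression sum of squares (explained)
sse = 13665.5652   # error sum of squares (unexplained)
sst = ssr + sse    # total sum of squares = 32600.5
r_squared = ssr / sst
print(r_squared)   # 0.58082
```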

Slide 29: Coefficient of Determination, r²
[Figure: two scatter plots with every point on a straight line, one with positive slope (r = +1) and one with negative slope (r = −1)]
r² = 1 in both cases: a perfect linear relationship between X and Y; 100% of the variation in Y is explained by variation in X

Slide 30: Coefficient of Determination, r²
[Figure: scatter plots with points loosely clustered around a fitted line]
0 < r² < 1: weaker linear relationships between X and Y; some but not all of the variation in Y is explained by variation in X

Slide 31: Coefficient of Determination, r²
[Figure: scatter plot with no linear pattern]
r² = 0: no linear relationship between X and Y; the value of Y is not related to X (none of the variation in Y is explained by variation in X)

Slide 32: Linear Regression Example Coefficient of Determination, r²
From the Excel output (Slide 19): R Square = 0.58082, i.e. r² = SSR/SST = 18934.9348/32600.5 = 0.58082.
58.08% of the variation in house prices is explained by variation in square feet.

Slide 33: Standard Error of Estimate
The standard deviation of the variation of observations around the regression line is estimated by
    S_YX = sqrt( SSE / (n − 2) )
where SSE = error sum of squares and n = sample size.

Slide 34: Linear Regression Example Standard Error of Estimate
From the Excel output (Slide 19): Standard Error = 41.33032, i.e. S_YX = sqrt(13665.5652 / 8) = 41.33032.
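The same computation as a short check (SSE and n taken from the Excel output):

```python
import math

sse = 13665.5652          # error sum of squares from the ANOVA table
n = 10                    # number of observations
s_yx = math.sqrt(sse / (n - 2))
print(s_yx)               # 41.33032
```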

Slide 35: Comparing Standard Errors
[Figure: two scatter plots with fitted lines, one with points tight around the line (small S_YX) and one with points widely scattered (large S_YX)]
S_YX is a measure of the variation of observed Y values from the regression line. The magnitude of S_YX should always be judged relative to the size of the Y values in the sample data.

Slide 36: Assumptions of Regression (L.I.N.E.)
- Linearity: the relationship between X and Y is linear
- Independence of errors: error values are statistically independent
- Normality of error: error values are normally distributed for any given value of X
- Equal variance (also called homoscedasticity): the probability distribution of the errors has constant variance

Slide 37: Residual Analysis
The residual for observation i, e_i, is the difference between its observed and predicted value:
    e_i = Y_i − Ŷ_i
Check the assumptions of regression by examining the residuals:
- Examine for the linearity assumption
- Evaluate the independence assumption
- Evaluate the normal distribution assumption
- Examine equal variance for all levels of X
Graphical analysis of residuals: plot residuals vs. X

Slide 38: Residual Analysis for Linearity
[Figure: residual plots vs. X; a curved, systematic residual pattern indicates the relationship is not linear, while a random band of residuals around zero is consistent with linearity]

Slide 39: Residual Analysis for Independence
[Figure: residual plots vs. X; a trending or cyclical pattern indicates the errors are not independent, while a random scatter is consistent with independence]

Slide 40: Checking for Normality
- Examine the stem-and-leaf display of the residuals
- Examine the box-and-whisker plot of the residuals
- Examine the histogram of the residuals
- Construct a normal probability plot of the residuals

Slide 41: Residual Analysis for Equal Variance
[Figure: residual plots vs. X; a funnel shape (spread changing with X) indicates unequal variance, while a uniform band indicates equal variance]

Slide 42: Linear Regression Example Excel Residual Output

Observation   Predicted House Price   Residual
 1            251.92316                -6.92316
 2            273.87671                38.12329
 3            284.85348                -5.85348
 4            304.06284                 3.93716
 5            218.99284               -19.99284
 6            268.38832               -49.38832
 7            356.20251                48.79749
 8            367.17929               -43.17929
 9            254.66736                64.33264
10            284.85348               -29.85348

The residuals do not appear to violate any regression assumptions.
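The residual column can be reproduced from the fitted coefficients; a minimal sketch (NumPy assumed; small differences in the last decimals come from coefficient rounding):

```python
import numpy as np

square_feet = np.array([1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700], dtype=float)
price = np.array([245, 312, 279, 308, 199, 219, 405, 324, 319, 255], dtype=float)

b0, b1 = 98.24833, 0.10977          # fitted coefficients from the Excel output
predicted = b0 + b1 * square_feet   # predicted house prices in $1000s
residuals = price - predicted       # observed minus predicted
for p, e in zip(predicted, residuals):
    print(f"{p:10.5f} {e:10.5f}")
```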

Slide 43: Measuring Autocorrelation: The Durbin-Watson Statistic
- Used when data are collected over time, to detect whether autocorrelation is present
- Autocorrelation exists if residuals in one time period are related to residuals in another period

Slide 44: Autocorrelation
- Autocorrelation is correlation of the errors (residuals) over time
- It violates the regression assumption that residuals are statistically independent
[Figure: time plot of residuals suggesting a cyclic, non-random pattern]

Slide 45: The Durbin-Watson Statistic
The Durbin-Watson statistic is used to test for autocorrelation:
    H0: residuals are not correlated
    H1: autocorrelation is present
- The possible range is 0 ≤ D ≤ 4
- D should be close to 2 if H0 is true
- D less than 2 may signal positive autocorrelation; D greater than 2 may signal negative autocorrelation

Slide 46: The Durbin-Watson Statistic
Calculate the Durbin-Watson test statistic
    D = Σ_{i=2..n} (e_i − e_{i−1})² / Σ_{i=1..n} e_i²
(D can be found using Excel.) To test
    H0: positive autocorrelation does not exist
    H1: positive autocorrelation is present
find the values d_L and d_U from the Durbin-Watson table (for sample size n and number of independent variables k). Decision rule: reject H0 if D < d_L; do not reject H0 if D > d_U; the test is inconclusive if d_L ≤ D ≤ d_U.

Slide 47: The Durbin-Watson Statistic
Example with n = 25. Excel output:

Durbin-Watson Calculations
Sum of Squared Difference of Residuals   3296.18
Sum of Squared Residuals                 3279.98
Durbin-Watson Statistic                  1.00494

Slide 48: The Durbin-Watson Statistic
- Here, n = 25 and there is k = 1 independent variable
- Using the Durbin-Watson table, d_L = 1.29 and d_U = 1.45
- Since D = 1.00494 < d_L = 1.29, reject H0 and conclude that significant positive autocorrelation exists
- Therefore the linear model is not the appropriate model to predict sales
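A sketch of the calculation (NumPy assumed). Since the slides report only the two sums rather than the residual series itself, the last line simply checks their ratio:

```python
import numpy as np

def durbin_watson(residuals):
    """D = sum of squared successive residual differences / sum of squared residuals."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Checking the two sums reported in the Excel output:
print(3296.18 / 3279.98)   # 1.00494
```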

Slide 49: Inferences About the Slope: t Test
- t test for a population slope: is there a linear relationship between X and Y?
- Null and alternative hypotheses:
    H0: β1 = 0 (no linear relationship)
    H1: β1 ≠ 0 (a linear relationship does exist)
- Test statistic (with d.f. = n − 2):
    t = (b1 − β1) / S_b1
where b1 = regression slope coefficient, β1 = hypothesized slope (0 under H0), and S_b1 = standard error of the slope.

Slide 50: Inferences About the Slope: t Test Example
Using the house-price data (Slide 16), the estimated regression equation is
    house price = 98.25 + 0.1098 (square feet)
The slope of this model is 0.1098. Is there a relationship between the square footage of the house and its sales price?

Slide 51: Inferences About the Slope: t Test Example
H0: β1 = 0; H1: β1 ≠ 0. From the Excel output:
    Square Feet: coefficient b1 = 0.10977, standard error S_b1 = 0.03297, t Stat = 3.32938, P-value = 0.01039

Slide 52: Inferences About the Slope: t Test Example
H0: β1 = 0; H1: β1 ≠ 0. Test statistic: t = 3.329. With d.f. = 10 − 2 = 8 and α/2 = 0.025, the critical values are ±2.3060. Since t = 3.329 > 2.3060, reject H0: there is sufficient evidence that square footage affects house price.

Slide 53: Inferences About the Slope: t Test Example
H0: β1 = 0; H1: β1 ≠ 0. From the Excel output, the p-value for Square Feet is 0.01039. Decision: reject H0, since p-value < α = 0.05. There is sufficient evidence that square footage affects house price.
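A quick check of the t statistic, critical value, and p-value (SciPy assumed; inputs from the Excel output):

```python
from scipy import stats

b1, s_b1, n = 0.10977, 0.03297, 10   # slope, standard error, sample size
t_stat = (b1 - 0) / s_b1             # hypothesized slope under H0 is 0
t_crit = stats.t.ppf(0.975, df=n - 2)
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
print(t_stat, t_crit, p_value)       # ~3.329, 2.3060, 0.0104
```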

Slide 54: F-Test for Significance
F test statistic:
    F = MSR / MSE
where MSR = SSR / k and MSE = SSE / (n − k − 1); F follows an F distribution with k numerator degrees of freedom and (n − k − 1) denominator degrees of freedom (k = the number of independent variables in the regression model).

Slide 55: F-Test for Significance Excel Output
From the ANOVA table (Slide 19): F = 11.0848 with 1 and 8 degrees of freedom; Significance F (the p-value for the F test) = 0.01039.

Slide 56: F-Test for Significance
H0: β1 = 0; H1: β1 ≠ 0; α = 0.05; df1 = 1, df2 = 8. Critical value: F_0.05 = 5.32. Test statistic: F = 11.08. Decision: since F = 11.08 > 5.32, reject H0 at α = 0.05. Conclusion: there is sufficient evidence that house size affects selling price.
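A quick check of the F statistic, critical value, and p-value (SciPy assumed; sums of squares from the ANOVA table):

```python
from scipy import stats

ssr, sse = 18934.9348, 13665.5652   # from the ANOVA table (Slide 19)
k, n = 1, 10                        # one independent variable, ten observations
msr, mse = ssr / k, sse / (n - k - 1)
f_stat = msr / mse                                  # ~11.085
f_crit = stats.f.ppf(0.95, dfn=k, dfd=n - k - 1)    # ~5.32
p_value = stats.f.sf(f_stat, dfn=k, dfd=n - k - 1)  # ~0.0104
print(f_stat, f_crit, p_value)
```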

Slide 57: Confidence Interval Estimate for the Slope
Confidence interval estimate of the slope (d.f. = n − 2):
    b1 ± t_{α/2} S_b1
From the Excel printout for house prices: Lower 95% = 0.03374, Upper 95% = 0.18580. At the 95% level of confidence, the confidence interval for the slope is (0.0337, 0.1858).

Slide 58: Confidence Interval Estimate for the Slope
Since the units of the house price variable are $1000s, you are 95% confident that the mean change in sales price is between $33.74 and $185.80 per additional square foot of house size. This 95% confidence interval does not include 0. Conclusion: there is a significant relationship between house price and square feet at the 0.05 level of significance.
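A quick check of the interval (SciPy assumed; inputs from the Excel output):

```python
from scipy import stats

b1, s_b1, n = 0.10977, 0.03297, 10
t_crit = stats.t.ppf(0.975, df=n - 2)        # 2.3060 for df = 8
lower, upper = b1 - t_crit * s_b1, b1 + t_crit * s_b1
print(lower, upper)                           # ~0.0337 and 0.1858
```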

Slide 59: t Test for a Correlation Coefficient
- Hypotheses: H0: ρ = 0 (no correlation between X and Y); H1: ρ ≠ 0 (correlation exists)
- Test statistic (with n − 2 degrees of freedom):
    t = r / sqrt( (1 − r²) / (n − 2) )

Slide 60: t Test for a Correlation Coefficient
Is there evidence of a linear relationship between square feet and house price at the 0.05 level of significance?
H0: ρ = 0 (no correlation); H1: ρ ≠ 0 (correlation exists); α = 0.05, d.f. = 10 − 2 = 8
Here r = +√0.58082 = 0.76211 (positive because b1 > 0), so t = 0.76211 / sqrt((1 − 0.58082)/8) = 3.329.

Slide 61: t Test for a Correlation Coefficient
With d.f. = 10 − 2 = 8 and α/2 = 0.025, the critical values are ±2.3060. Since t = 3.329 > 2.3060, reject H0. Conclusion: there is evidence of a linear association at the 5% level of significance.
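A quick check (r and n from the Excel output):

```python
import math

r, n = 0.76211, 10                            # Multiple R from the Excel output
t_stat = r / math.sqrt((1 - r ** 2) / (n - 2))
print(t_stat)                                 # ~3.329, identical to the slope t test
```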

Slide 62: Estimating Mean Values and Predicting Individual Values
Goal: form intervals around Ŷ to express uncertainty about the value of Y for a given X_i.
- Confidence interval for the mean of Y, given X_i
- Prediction interval for an individual Y, given X_i
[Figure: regression line Ŷ = b0 + b1 X with a confidence interval and a wider prediction interval drawn around the point (X_i, Ŷ)]

Slide 63: Confidence Interval for the Mean of Y, Given X
Confidence interval estimate for the mean value of Y given a particular X_i:
    Ŷ ± t_{α/2} S_YX sqrt( 1/n + (X_i − X̄)² / SSX ),  where SSX = Σ (X_i − X̄)²
The size of the interval varies according to the distance of X_i from the mean X̄.

Slide 64: Prediction Interval for an Individual Y, Given X
Prediction interval estimate for an individual value of Y given a particular X_i:
    Ŷ ± t_{α/2} S_YX sqrt( 1 + 1/n + (X_i − X̄)² / SSX )
The extra 1 under the square root adds to the interval width to reflect the added uncertainty in predicting an individual case.

Slide 65: Estimation of Mean Values: Example
Find the 95% confidence interval for the mean price of 2,000-square-foot houses. Predicted price: Ŷ_i = 317.85 ($1000s).
    Ŷ ± t_{α/2} S_YX sqrt( 1/n + (X_i − X̄)² / SSX )
The confidence interval endpoints are 280.66 and 354.90, or from $280,660 to $354,900.

Slide 66: Estimation of Individual Values: Example
Find the 95% prediction interval for an individual house with 2,000 square feet. Predicted price: Ŷ_i = 317.85 ($1000s).
    Ŷ ± t_{α/2} S_YX sqrt( 1 + 1/n + (X_i − X̄)² / SSX )
The prediction interval endpoints are 215.50 and 420.07, or from $215,500 to $420,070.
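Both intervals can be reproduced end to end from the sample data; a sketch assuming NumPy and SciPy (it uses the unrounded fitted value Ŷ ≈ 317.78, while the slides display 317.85 from rounded coefficients):

```python
import math
import numpy as np
from scipy import stats

x = np.array([1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700], dtype=float)
y = np.array([245, 312, 279, 308, 199, 219, 405, 324, 319, 255], dtype=float)
n = len(x)

# Fit the least squares line
ssx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / ssx
b0 = y.mean() - b1 * x.mean()

x_new = 2000.0
y_hat = b0 + b1 * x_new                                         # ~317.78 in $1000s
s_yx = math.sqrt(np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2))    # ~41.330
h = 1 / n + (x_new - x.mean()) ** 2 / ssx                       # distance penalty term
t_crit = stats.t.ppf(0.975, df=n - 2)                           # 2.3060

ci_half = t_crit * s_yx * math.sqrt(h)        # half-width of the CI for the mean
pi_half = t_crit * s_yx * math.sqrt(1 + h)    # half-width of the PI for an individual
print(y_hat - ci_half, y_hat + ci_half)       # ~280.66 and 354.90
print(y_hat - pi_half, y_hat + pi_half)       # ~215.50 and 420.07
```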

Slide 67: Pitfalls of Regression Analysis
- Lacking an awareness of the assumptions underlying least-squares regression
- Not knowing how to evaluate the assumptions
- Not knowing the alternatives to least-squares regression if a particular assumption is violated
- Using a regression model without knowledge of the subject matter
- Extrapolating outside the relevant range

Slide 68: Strategies for Avoiding the Pitfalls of Regression
- Start with a scatter plot of Y versus X to observe any possible relationship
- Perform residual analysis to check the assumptions
- Plot the residuals vs. X to check for violations of assumptions such as equal variance
- Use a histogram, stem-and-leaf display, box-and-whisker plot, or normal probability plot of the residuals to uncover possible non-normality

Slide 69: Strategies for Avoiding the Pitfalls of Regression
- If any assumption is violated, use alternative methods or models
- If there is no evidence of assumption violation, test the significance of the regression coefficients and construct confidence intervals and prediction intervals
- Avoid making predictions or forecasts outside the relevant range

Slide 70: Summary
In this lecture, we have:
- Introduced types of regression models
- Reviewed assumptions of regression and correlation
- Discussed determining the simple linear regression equation
- Described measures of variation
- Discussed residual analysis
- Addressed measuring autocorrelation

Slide 71: Summary
In this lecture, we have:
- Described inference about the slope
- Discussed correlation as a measure of the strength of the association
- Addressed estimation of mean values and prediction of individual values
- Discussed possible pitfalls in regression and recommended strategies to avoid them

