
COMPLETE BUSINESS STATISTICS by AMIR D. ACZEL & JAYAVEL SOUNDERPANDIAN, 6th edition (SIE)

Presentation transcript:

1. COMPLETE BUSINESS STATISTICS by AMIR D. ACZEL & JAYAVEL SOUNDERPANDIAN, 6th edition (SIE)

2. Chapter 10: Simple Linear Regression and Correlation

3. Chapter 10 Outline: Simple Linear Regression and Correlation
- Using Statistics
- The Simple Linear Regression Model
- Estimation: The Method of Least Squares
- Error Variance and the Standard Errors of Regression Estimators
- Correlation
- Hypothesis Tests about the Regression Relationship
- How Good is the Regression?
- Analysis of Variance Table and an F Test of the Regression Model
- Residual Analysis and Checking for Model Inadequacies
- Use of the Regression Model for Prediction
- The Solver Method for Regression

4. LEARNING OBJECTIVES. After studying this chapter, you should be able to:
- Determine whether a regression experiment would be useful in a given instance
- Formulate a regression model
- Compute a regression equation
- Compute the covariance and the correlation coefficient of two random variables
- Compute confidence intervals for regression coefficients
- Compute a prediction interval for the dependent variable

5. LEARNING OBJECTIVES (continued). After studying this chapter, you should be able to:
- Test hypotheses about regression coefficients
- Conduct an ANOVA experiment using regression results
- Analyze residuals to check whether the assumptions of the regression model are valid
- Solve regression problems using spreadsheet templates
- Apply the covariance concept to linear composites of random variables
- Use the LINEST function to carry out a regression

6. 10-1 Using Statistics
Regression refers to the statistical technique of modeling the relationship between variables. In simple linear regression, we model the relationship between two variables. One of the variables, denoted by Y, is called the dependent variable, and the other, denoted by X, is called the independent variable. The model we will use to depict the relationship between X and Y is a straight-line relationship. A graphical sketch of the pairs (X, Y) is called a scatter plot.
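A minimal Python sketch of a scatter plot; the advertising and sales numbers below are hypothetical placeholders, not data from the text:

# Scatter plot of paired (X, Y) observations.
# The values here are made-up placeholders for illustration only.
import matplotlib.pyplot as plt

advertising = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]  # X, the independent variable
sales = [2.1, 2.8, 3.2, 4.1, 4.4, 5.2, 5.5]        # Y, the dependent variable

plt.scatter(advertising, sales)
plt.xlabel("Advertising (X)")
plt.ylabel("Sales (Y)")
plt.title("Scatter plot of (X, Y) pairs")
plt.show()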

7. 10-1 Using Statistics (figure only: a scatter plot of the data)

8. Examples of Other Scatterplots (figure only: several scatter plots showing different patterns of association between X and Y)

9. Model Building
The inexact nature of the relationship between advertising and sales suggests that a statistical model might be useful in analyzing the relationship. A statistical model separates the systematic component of a relationship from the random component:
Data = Systematic component + Random errors
In ANOVA, the systematic component is the variation of means between samples or treatments (SSTR) and the random component is the unexplained variation (SSE). In regression, the systematic component is the overall linear relationship, and the random component is the variation around the line.

10. 10-2 The Simple Linear Regression Model
The population simple linear regression model:
Y = β0 + β1X + ε
where β0 + β1X is the nonrandom (systematic) component and ε is the random component, and:
- Y is the dependent variable, the variable we wish to explain or predict;
- X is the independent variable, also called the predictor variable;
- ε is the error term, the only random component in the model, and thus the only source of randomness in Y;
- β0 is the intercept of the systematic component of the regression relationship;
- β1 is the slope of the systematic component.
The conditional mean of Y: E[Y|X] = β0 + β1X.
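To make the decomposition concrete, here is a short Python sketch that simulates data from the population model; the parameter values are arbitrary illustrative choices, not figures from the text:

# Simulating Y = beta0 + beta1*X + epsilon.
# beta0, beta1, and sigma are illustrative values, not from the text.
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1, sigma = 5.0, 2.0, 1.0

x = np.linspace(0, 10, 50)                  # fixed (nonrandom) X values
eps = rng.normal(0.0, sigma, size=x.size)   # random component: epsilon ~ N(0, sigma^2)
y = beta0 + beta1 * x + eps                 # observed Y = systematic part + error

print("E[Y | X=10] =", beta0 + beta1 * 10)  # the conditional mean is the systematic part alone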

11. Picturing the Simple Linear Regression Model
The simple linear regression model gives an exact linear relationship between the expected or average value of Y, the dependent variable, and X, the independent or predictor variable:
E[Yi] = β0 + β1Xi
Actual observed values of Y differ from the expected value by an unexplained or random error:
Yi = E[Yi] + εi = β0 + β1Xi + εi
(Figure: regression plot of the line E[Y] = β0 + β1X, showing the intercept β0, the slope β1 as the rise per unit run, and the error εi for an observed point (Xi, Yi).)

12. Assumptions of the Simple Linear Regression Model
- The relationship between X and Y is a straight-line relationship.
- The values of the independent variable X are assumed fixed (not random); the only randomness in the values of Y comes from the error term εi.
- The errors εi are normally distributed with mean 0 and variance σ², and the errors are uncorrelated (not related) across successive observations. That is: ε ~ N(0, σ²).
(Figure: identical normal distributions of errors, all centered on the regression line E[Y] = β0 + β1X.)

13. 10-3 Estimation: The Method of Least Squares
Estimation of a simple linear regression relationship involves finding estimated or predicted values of the intercept and slope of the linear regression line.
The estimated regression equation:
Y = b0 + b1X + e
where b0 estimates the intercept of the population regression line, β0; b1 estimates the slope of the population regression line, β1; and e stands for the observed errors, the residuals from fitting the estimated regression line b0 + b1X to a set of n points.

14. Fitting a Regression Line (figures: the raw data; three errors from an arbitrary fitted line; three errors from the least squares regression line; errors from the least squares regression line are minimized)

15. Errors in Regression (figure: the vertical error between an observed point at Xi and the fitted line)

16. Least Squares Regression (figure: the SSE surface plotted against b0 and b1; at the least squares values of b0 and b1, SSE is minimized with respect to b0 and b1)

17. Sums of Squares, Cross Products, and Least Squares Estimators
SSx = Σx² - (Σx)²/n
SSy = Σy² - (Σy)²/n
SSxy = Σxy - (Σx)(Σy)/n
Least squares slope estimator: b1 = SSxy / SSx
Least squares intercept estimator: b0 = ȳ - b1·x̄

18. Example 10-1: n = 25 paired observations of Miles and Dollars, with the sums needed for the least squares computations.

Miles     Dollars   Miles²        Miles × Dollars
1211      1802      1,466,521     2,182,222
1345      2405      1,809,025     3,234,725
1422      2005      2,022,084     2,851,110
1687      2511      2,845,969     4,236,057
1849      2332      3,418,801     4,311,868
2026      2305      4,104,676     4,669,930
2133      3016      4,549,689     6,433,128
2253      3385      5,076,009     7,626,405
2400      3090      5,760,000     7,416,000
2468      3694      6,091,024     9,116,792
2699      3371      7,284,601     9,098,329
2806      3998      7,873,636     11,218,388
3082      3555      9,498,724     10,956,510
3209      4692      10,297,681    15,056,628
3466      4244      12,013,156    14,709,704
3643      5298      13,271,449    19,300,614
3852      4801      14,837,904    18,493,452
4033      5147      16,265,089    20,757,851
4267      5738      18,207,289    24,484,046
4498      6420      20,232,004    28,877,160
4533      6059      20,548,089    27,465,447
4804      6426      23,078,416    30,870,504
5090      6321      25,908,100    32,173,890
5233      7026      27,384,289    36,767,058
5439      6964      29,582,721    37,877,196
Totals:   79,448    106,605       293,426,946   390,185,014
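A short Python sketch computing the least squares estimates directly from the column totals above, using the SSx and SSxy formulas from the previous slide (it reproduces the slope b1 = 1.25533 quoted later in the deck; the intercept works out to about 274.85):

# Least squares slope and intercept from the Example 10-1 summary sums.
n = 25
sum_x, sum_y = 79_448, 106_605              # totals of Miles and Dollars
sum_x2, sum_xy = 293_426_946, 390_185_014   # totals of Miles^2 and Miles*Dollars

ss_x = sum_x2 - sum_x**2 / n                # 40,947,557.84
ss_xy = sum_xy - sum_x * sum_y / n          # 51,402,852.4

b1 = ss_xy / ss_x                           # slope, about 1.25533
b0 = sum_y / n - b1 * (sum_x / n)           # intercept = y-bar - b1 * x-bar
print(b1, b0)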

19. Template (partial output) that can be used to carry out a simple regression

20. Template (continued) that can be used to carry out a simple regression

21. Template (continued) that can be used to carry out a simple regression
Residual analysis: the plot shows the absence of a relationship between the residuals and the X-values (miles).

22. Template (continued) that can be used to carry out a simple regression
Note: the normal probability plot is approximately linear, which indicates that the normality assumption for the errors has not been violated.

23. Total Variance and Error Variance (figures: what you see when looking at the total variation of Y, and what you see when looking along the regression line at the error variance of Y)

24. 10-4 Error Variance and the Standard Errors of Regression Estimators
Square and sum all regression errors to find SSE, the sum of squared errors. An unbiased estimator of σ², denoted s² (the mean square error, MSE), is SSE/(n - 2); the n - 2 degrees of freedom reflect the two estimated parameters, b0 and b1.

25. Standard Errors of Estimates in Regression
The standard error of the regression: s = sqrt( SSE / (n - 2) )
Standard error of the intercept estimate: s(b0) = s · sqrt( Σx² / (n · SSx) )
Standard error of the slope estimate: s(b1) = s / sqrt(SSx)

26. Confidence Intervals for the Regression Parameters
A (1 - α)100% confidence interval for the slope: b1 ± t(α/2, n-2) · s(b1).
(Figure: over a run of length 1, the height of the fitted line rises by the slope.)
Example 10-1: least squares point estimate b1 = 1.25533; upper 95% bound on the slope: 1.35820; lower 95% bound: 1.15246. Zero lies outside the interval, so 0 is not a possible value of the regression slope at 95% confidence.
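A sketch of this interval in Python, with scipy.stats supplying the t critical value; the standard error s(b1) is not quoted in the text, so the value below is backed out of the interval bounds above and should be treated as an assumption:

# 95% CI for the slope: b1 +/- t(alpha/2, n-2) * s(b1).
from scipy import stats

n, b1 = 25, 1.25533
s_b1 = 0.04973                        # assumed s(b1), implied by the quoted bounds
t_crit = stats.t.ppf(0.975, df=n - 2)

lo, hi = b1 - t_crit * s_b1, b1 + t_crit * s_b1
print(round(lo, 5), round(hi, 5))     # approximately (1.15246, 1.35820)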

27. Template (partial output) that can be used to obtain confidence intervals for β0 and β1

28. 10-5 Correlation
The correlation between two random variables, X and Y, is a measure of the degree of linear association between the two variables. The population correlation, denoted by ρ, can take on any value from -1 to 1.
ρ = -1 indicates a perfect negative linear relationship
-1 < ρ < 0 indicates a negative linear relationship
ρ = 0 indicates no linear relationship
0 < ρ < 1 indicates a positive linear relationship
ρ = 1 indicates a perfect positive linear relationship
The absolute value of ρ indicates the strength or exactness of the relationship.

29. Illustrations of Correlation (figures: scatter plots with ρ = 0, ρ = -.8, ρ = .8, ρ = 0, ρ = -1, and ρ = 1)

30. Covariance and Correlation
Example 10-1:
r = SSxy / sqrt(SSx · SSy) = 51,402,852.4 / sqrt(40,947,557.84 × 66,855,898) = 51,402,852.4 / 52,321,943.29 = 0.9824
Note: if r > 0 then b1 > 0, since r and b1 share the sign of SSxy.
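The same computation in Python, using the sums of squares quoted above:

# Sample correlation for Example 10-1: r = SS_xy / sqrt(SS_x * SS_y).
import math

ss_xy = 51_402_852.4
ss_x = 40_947_557.84
ss_y = 66_855_898

r = ss_xy / math.sqrt(ss_x * ss_y)
print(round(r, 4))  # 0.9824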

31. Hypothesis Tests for the Correlation Coefficient
H0: ρ = 0 (no linear relationship)
H1: ρ ≠ 0 (some linear relationship)
Test statistic: t(n-2) = r / sqrt( (1 - r²) / (n - 2) )

32. 10-6 Hypothesis Tests about the Regression Relationship
(Figures: scatter plots illustrating a constant Y, unsystematic variation, and a nonlinear relationship, all cases in which the regression slope is zero.)
A hypothesis test for the existence of a linear relationship between X and Y:
H0: β1 = 0
H1: β1 ≠ 0
Test statistic for the existence of a linear relationship between X and Y:
t(n-2) = b1 / s(b1)
where b1 is the least squares estimate of the regression slope and s(b1) is the standard error of b1. When the null hypothesis is true, the statistic has a t distribution with n - 2 degrees of freedom.
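A Python sketch of this test for Example 10-1; s(b1) is the assumed value used earlier (backed out of the quoted confidence bounds), not a figure from the text:

# Two-tailed t test of H0: beta1 = 0 versus H1: beta1 != 0.
from scipy import stats

n, b1, s_b1 = 25, 1.25533, 0.04973   # s_b1 is an assumed value
t_stat = b1 / s_b1                   # roughly 25, on n - 2 = 23 degrees of freedom
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
print(t_stat, p_value)               # a very large t: reject H0 at any common level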

33. Hypothesis Tests for the Regression Slope

34. 10-7 How Good is the Regression?
The coefficient of determination, r², is a descriptive measure of the strength of the regression relationship, a measure of how well the regression line fits the data.
(Figure: for a point (X, Y), the total deviation from the mean of Y is the explained deviation plus the unexplained deviation.)
Total deviation = Explained deviation + Unexplained deviation
r² is the percentage of the total variation explained by the regression.

35. The Coefficient of Determination (figures: r² = 0, where SSE equals SST; r² = 0.50, where SST splits into SSR and SSE; r² = 0.90, where SSE is small relative to SSR)

36. 10-8 Analysis-of-Variance Table and an F Test of the Regression Model
The ANOVA table partitions the total variation, SST = SSR + SSE, and tests the regression with F = MSR/MSE on (1, n - 2) degrees of freedom.
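A minimal sketch of the regression ANOVA computation in Python; SST is taken as SSy = 66,855,898 from Example 10-1 together with r = 0.9824, so the figures are approximate:

# Regression ANOVA: SST = SSR + SSE; F = MSR/MSE on (1, n-2) df.
from scipy import stats

n = 25
sst = 66_855_898        # total sum of squares (SS_y from Example 10-1)
r2 = 0.9824 ** 2        # coefficient of determination

ssr = r2 * sst          # explained (regression) sum of squares
sse = sst - ssr         # unexplained (error) sum of squares
msr, mse = ssr / 1, sse / (n - 2)

f_stat = msr / mse
p_value = stats.f.sf(f_stat, 1, n - 2)
print(f_stat, p_value)  # F is roughly 636; with one predictor, F = t^2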

37. Template (partial output) that displays the analysis-of-variance table and an F test of the regression model

38. 10-9 Residual Analysis and Checking for Model Inadequacies

39. Normal Probability Plot of the Residuals (figure: a pattern indicating a distribution flatter than normal)

40. Normal Probability Plot of the Residuals (figure: a pattern indicating a distribution more peaked than normal)

41. Normal Probability Plot of the Residuals (figure: a pattern indicating a positively skewed distribution)

42. Normal Probability Plot of the Residuals (figure: a pattern indicating a negatively skewed distribution)
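A hedged Python sketch of the two residual diagnostics used in these slides, a residuals-versus-X plot and a normal probability plot; the data are simulated here, so substitute the actual (x, y) observations:

# Residual diagnostics: residuals vs. X, and a normal probability plot.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(1, 100, 60)                   # simulated X values
y = 3.0 + 0.5 * x + rng.normal(0, 2, x.size)  # simulated Y values

b1, b0 = np.polyfit(x, y, 1)                  # least squares fit (slope, intercept)
residuals = y - (b0 + b1 * x)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.scatter(x, residuals)                     # should show no pattern around zero
ax1.axhline(0.0)
ax1.set_xlabel("X"); ax1.set_ylabel("Residual")
stats.probplot(residuals, plot=ax2)           # should be approximately a straight line
plt.show()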

43. 10-10 Use of the Regression Model for Prediction
Point prediction: a single-valued estimate of Y for a given value of X, obtained by inserting the value of X into the estimated regression equation.
Prediction interval:
- For a value of Y given a value of X: accounts for the variation in the regression line estimate and the variation of points around the regression line.
- For an average value of Y given a value of X: accounts for the variation in the regression line estimate only.

44. Errors in Predicting E[Y|X] (figures: 1) uncertainty about the slope of the regression line, shown as upper and lower limits on the slope; 2) uncertainty about the intercept of the regression line, shown as upper and lower limits on the intercept)

45. Prediction Interval for E[Y|X]
(Figure: the prediction band for E[Y|X] around the regression line.)
The prediction band for E[Y|X] is narrowest at the mean value of X. The prediction band widens as the distance from the mean of X increases. Predictions become very unreliable when we extrapolate beyond the range of the sample itself.

46. Additional Error in Predicting an Individual Value of Y (figures: 3) variation around the regression line; the prediction band for an individual Y is wider than the prediction band for E[Y|X])

47. Prediction Interval for a Value of Y
A (1 - α)100% prediction interval for Y at X = x:
ŷ ± t(α/2, n-2) · s · sqrt( 1 + 1/n + (x - x̄)²/SSx )

48. Prediction Interval for the Average Value of Y
A (1 - α)100% prediction interval for E[Y|X] at X = x:
ŷ ± t(α/2, n-2) · s · sqrt( 1/n + (x - x̄)²/SSx )
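A Python sketch of both intervals for Example 10-1 at an arbitrary illustration point x0 = 4,000 miles; b0 follows from the least squares computation after the data table, while s (the standard error of the regression) is an assumed value backed out of the slope interval quoted earlier, not a figure from the text:

# Prediction intervals for an individual Y and for E[Y|X] at x0.
import math
from scipy import stats

n, b0, b1 = 25, 274.85, 1.25533
x_bar, ss_x = 79_448 / 25, 40_947_557.84
s = 318.2                       # assumed estimate of the error standard deviation
x0 = 4_000                      # arbitrary illustration point

y_hat = b0 + b1 * x0
t_crit = stats.t.ppf(0.975, df=n - 2)
extra = (x0 - x_bar) ** 2 / ss_x

half_mean = t_crit * s * math.sqrt(1 / n + extra)       # half-width for E[Y | x0]
half_indiv = t_crit * s * math.sqrt(1 + 1 / n + extra)  # half-width for an individual Y
print(y_hat, half_mean, half_indiv)                     # the individual band is wider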

49. Template Output with Prediction Intervals

50. 10-11 The Solver Method for Regression
The Solver macro available in Excel can also be used to conduct a simple linear regression. See the text for instructions.

51. 10-12 Linear Composites of Dependent Random Variables
The case of independent random variables:
For independent random variables X1, X2, …, Xn, the expected value of the sum is given by:
E(X1 + X2 + … + Xn) = E(X1) + E(X2) + … + E(Xn)
For independent random variables X1, X2, …, Xn, the variance of the sum is given by:
V(X1 + X2 + … + Xn) = V(X1) + V(X2) + … + V(Xn)

52. 10-12 Linear Composites of Dependent Random Variables
The case of independent random variables with weights:
For independent random variables X1, X2, …, Xn, with respective weights α1, α2, …, αn, the expected value of the sum is given by:
E(α1X1 + α2X2 + … + αnXn) = α1E(X1) + α2E(X2) + … + αnE(Xn)
For independent random variables X1, X2, …, Xn, with respective weights α1, α2, …, αn, the variance of the sum is given by:
V(α1X1 + α2X2 + … + αnXn) = α1²V(X1) + α2²V(X2) + … + αn²V(Xn)

53. Covariance of Two Random Variables X1 and X2
The covariance between two random variables X1 and X2 is given by:
Cov(X1, X2) = E{[X1 - E(X1)][X2 - E(X2)]}
A simpler measure of covariance is given by:
Cov(X1, X2) = ρ · SD(X1) · SD(X2)
where ρ is the correlation between X1 and X2.

54. 10-12 Linear Composites of Dependent Random Variables
The case of dependent random variables with weights:
For dependent random variables X1, X2, …, Xn, with respective weights α1, α2, …, αn, the variance of the sum is given by:
V(α1X1 + α2X2 + … + αnXn) = α1²V(X1) + α2²V(X2) + … + αn²V(Xn) + 2α1α2Cov(X1, X2) + … + 2αn-1αnCov(Xn-1, Xn)
(The covariance terms run over every pair of variables.)
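A small Python check of the two-variable case of this formula against a brute-force simulation; the weights, variances, and correlation below are arbitrary illustrative values:

# V(a1*X1 + a2*X2) = a1^2*V(X1) + a2^2*V(X2) + 2*a1*a2*Cov(X1, X2).
import numpy as np

a1, a2 = 0.6, 0.4                # illustrative weights
v1, v2, rho = 4.0, 9.0, 0.5      # illustrative variances and correlation
cov12 = rho * np.sqrt(v1 * v2)   # Cov(X1, X2) = rho * SD(X1) * SD(X2)

analytic = a1**2 * v1 + a2**2 * v2 + 2 * a1 * a2 * cov12

rng = np.random.default_rng(2)
cov = [[v1, cov12], [cov12, v2]]
x = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
simulated = np.var(a1 * x[:, 0] + a2 * x[:, 1])

print(analytic, simulated)       # the two values should agree closely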

