Presentation on theme: "Linear Regression: Hypothesis Testing and Estimation" — Presentation transcript:

1 Linear Regression: Hypothesis Testing and Estimation

2 Assume that we have collected data on two variables X and Y. Let (x1, y1), (x2, y2), (x3, y3), …, (xn, yn) denote the pairs of measurements on the two variables X and Y for n cases in a sample (or population).

3 The Statistical Model

4 Each yi is assumed to be randomly generated from a normal distribution with mean μi = α + β xi and standard deviation σ (α, β and σ are unknown). [Figure: the point (xi, yi) scattered about the line Y = α + β X; slope = β.]

5 The Data: The Linear Regression Model. The data fall roughly about a straight line Y = α + β X (unseen).
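A minimal Python sketch (illustrative, not from the deck) of generating data from this model; the parameter values are hypothetical:

```python
# Simulate data from the model y_i = alpha + beta*x_i + eps_i, eps_i ~ N(0, sigma),
# to show how points scatter about the unseen line Y = alpha + beta*X.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, sigma = 5.0, 0.25, 8.0   # hypothetical "unseen" parameters
x = rng.uniform(20, 130, size=11)     # predictor values
y = alpha + beta * x + rng.normal(0, sigma, size=11)  # observed responses
print(np.column_stack([x, y]).round(1))
```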

6 The Least Squares Line Fitting the best straight line to “linear” data

7 Let Y = a + b X denote an arbitrary equation of a straight line, where a and b are known values. This equation can be used to predict, for each value of X, the value of Y. For example, if X = xi (as for the i-th case), then the predicted value of Y is: ŷi = a + b xi.

8 The residual ei = yi − ŷi = yi − (a + b xi) can be computed for each case in the sample. The residual sum of squares, RSS = Σ ei² = Σ (yi − a − b xi)², is a measure of the “goodness of fit” of the line Y = a + bX to the data.

9 The optimal choice of a and b will result in the residual sum of squares attaining a minimum. If this is the case, then the line Y = a + bX is called the Least Squares Line.

10 The equation for the least squares line. Let Sxx = Σ (xi − x̄)², Syy = Σ (yi − ȳ)², and Sxy = Σ (xi − x̄)(yi − ȳ).

11 Linear Regression: Hypothesis Testing and Estimation

12 The Least Squares Line Fitting the best straight line to “linear” data

13 Computing Formulae: Sxx = Σ xi² − (Σ xi)²/n, Syy = Σ yi² − (Σ yi)²/n, Sxy = Σ xi yi − (Σ xi)(Σ yi)/n.

14 Then the slope of the least squares line can be shown to be: b = Sxy / Sxx.

15 and the intercept of the least squares line can be shown to be: a = ȳ − b x̄.

16 The residual sum of squares. Computing formula: RSS = Syy − Sxy²/Sxx.

17 Estimating σ, the standard deviation in the regression model: s = √(RSS/(n − 2)). This estimate of σ is said to be based on n − 2 degrees of freedom.
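The computing formulas on slides 13–17 can be collected into one short Python sketch (numpy assumed; the function name least_squares_fit is illustrative):

```python
import numpy as np

def least_squares_fit(x, y):
    """Slope b, intercept a, and residual SD s via the slides' computing formulas."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    Sxx = np.sum(x**2) - np.sum(x)**2 / n
    Syy = np.sum(y**2) - np.sum(y)**2 / n
    Sxy = np.sum(x * y) - np.sum(x) * np.sum(y) / n
    b = Sxy / Sxx                      # slope
    a = y.mean() - b * x.mean()        # intercept
    rss = Syy - Sxy**2 / Sxx           # residual sum of squares
    s = np.sqrt(rss / (n - 2))         # estimate of sigma, n - 2 d.f.
    return a, b, s, Sxx
```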

18 Sampling distributions of the estimators

19 The sampling distribution of the slope of the least squares line: it can be shown that b has a normal distribution with mean β and standard deviation σ/√Sxx.

20 Thus z = (b − β)/(σ/√Sxx) has a standard normal distribution, and t = (b − β)/(s/√Sxx) has a t distribution with df = n − 2.

21 (1 − α)100% Confidence Limits for slope β: b ± t(α/2) s/√Sxx, where t(α/2) is the critical value for the t-distribution with n − 2 degrees of freedom.

22 Testing the slope: H0: β = 0 vs HA: β ≠ 0. The test statistic is t = b/(s/√Sxx), which has a t distribution with df = n − 2 if H0 is true.

23 The Critical Region: Reject H0 if |t| > t(α/2), df = n − 2. This is a two-tailed test. One-tailed tests are also possible.
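A sketch of the confidence limits and two-tailed test in Python, reusing least_squares_fit from the sketch above (scipy assumed; slope_inference is an illustrative name):

```python
import numpy as np
from scipy import stats

def slope_inference(x, y, conf=0.95):
    """CI and two-tailed t test for the slope, per the slides' formulas."""
    a, b, s, Sxx = least_squares_fit(x, y)   # helper from the earlier sketch
    n = len(x)
    se_b = s / np.sqrt(Sxx)                  # standard error of b
    tcrit = stats.t.ppf(1 - (1 - conf) / 2, df=n - 2)
    t = b / se_b                             # test statistic for H0: beta = 0
    pval = 2 * stats.t.sf(abs(t), df=n - 2)
    return (b - tcrit * se_b, b + tcrit * se_b), t, pval
```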

24 The sampling distribution of the intercept of the least squares line: it can be shown that a has a normal distribution with mean α and standard deviation σ √(1/n + x̄²/Sxx).

25 Thus z = (a − α)/(σ √(1/n + x̄²/Sxx)) has a standard normal distribution, and t = (a − α)/(s √(1/n + x̄²/Sxx)) has a t distribution with df = n − 2.

26 (1 − α)100% Confidence Limits for intercept α: a ± t(α/2) s √(1/n + x̄²/Sxx), where t(α/2) is the critical value for the t-distribution with n − 2 degrees of freedom.

27 Testing the intercept: H0: α = 0 vs HA: α ≠ 0. The test statistic is t = a/(s √(1/n + x̄²/Sxx)), which has a t distribution with df = n − 2 if H0 is true.

28 The Critical Region: Reject H0 if |t| > t(α/2), df = n − 2.

29 Example

30 The following data show the per capita consumption of cigarettes per month (X) in various countries in 1930, and the death rates from lung cancer for men in 1950. TABLE: Per capita consumption of cigarettes per month (Xi) in n = 11 countries in 1930, and the death rates, Yi (per 100,000), from lung cancer for men in 1950.

Country (i)      Xi    Yi
Australia        48    18
Canada           50    15
Denmark          38    17
Finland         110    35
Great Britain   110    46
Holland          49    24
Iceland          23     6
Norway           25     9
Sweden           30    11
Switzerland      51    25
USA             130    20

31 [Scatter plot of the data]

32 Fitting the Least Squares Line

33 First compute the following three quantities (using Σxi = 664, Σyi = 226, Σxi² = 54,404, Σyi² = 6,018, Σxi yi = 16,914): Sxx = 54,404 − 664²/11 = 14,322.55, Syy = 6,018 − 226²/11 = 1,374.73, Sxy = 16,914 − (664)(226)/11 = 3,271.82.

34 Computing the estimates of slope (β), intercept (α) and standard deviation (σ): b = Sxy/Sxx = 3,271.82/14,322.55 = 0.2284, a = ȳ − b x̄ = 20.545 − (0.2284)(60.364) = 6.756, and s = √(RSS/(n − 2)) = √(627.3/9) = 8.35.

35 95% Confidence Limits for slope β: b ± t.025 s/√Sxx = 0.2284 ± (2.262)(0.0698), giving 0.0706 to 0.3862. t.025 = 2.262 is the critical value for the t-distribution with 9 degrees of freedom.

36 95% Confidence Limits for intercept α: a ± t.025 s √(1/n + x̄²/Sxx) = 6.756 ± (2.262)(4.906), giving −4.34 to 17.85. t.025 = 2.262 is the critical value for the t-distribution with 9 degrees of freedom.

37 Y = 6.756 + (0.228)X. 95% confidence limits for slope: 0.0706 to 0.3862. 95% confidence limits for intercept: −4.34 to 17.85.

38 Testing for a positive slope: H0: β = 0 vs HA: β > 0. The test statistic is: t = b/(s/√Sxx) = 0.2284/0.0698 = 3.27.

39 The Critical Region: Reject H0 if t > t0.05 = 1.833, df = 11 − 2 = 9. A one-tailed test.

40 Since t = 3.27 > 1.833, we reject H0 and conclude HA: β > 0 (a positive slope).
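As a check, the whole example can be reproduced with scipy.stats.linregress, which computes the same least squares quantities (a sketch):

```python
from scipy import stats

x = [48, 50, 38, 110, 110, 49, 23, 25, 30, 51, 130]   # cigarettes/month, 1930
y = [18, 15, 17, 35, 46, 24, 6, 9, 11, 25, 20]        # lung cancer deaths, 1950

res = stats.linregress(x, y)
print(f"Y = {res.intercept:.3f} + {res.slope:.3f} X")  # Y = 6.756 + 0.228 X
print(f"t = {res.slope / res.stderr:.2f}")             # about 3.27, > 1.833
```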

41 Confidence Limits for Points on the Regression Line. The intercept α is a specific point on the regression line: it is the y-coordinate of the point on the regression line when x = 0, i.e. the predicted value of y when x = 0. We may also be interested in other points on the regression line, e.g. when x = x0. In this case the y-coordinate of the point on the regression line when x = x0 is α + β x0.

42 [Figure: the point (x0, α + β x0) on the line y = α + β x]

43 (1 − α)100% Confidence Limits for α + β x0: (a + b x0) ± t(α/2) s √(1/n + (x0 − x̄)²/Sxx), where t(α/2) is the α/2 critical value for the t-distribution with n − 2 degrees of freedom.

44 Prediction Limits for new values of the dependent variable y. An important application of the regression line is prediction: knowing the value of x (x0), what is the value of y? The predicted value of y when x = x0 is α + β x0. This in turn can be estimated by ŷ = a + b x0.

45 The predictor ŷ = a + b x0 gives only a single value for y. A more appropriate piece of information would be a range of values: a range that has a fixed probability of capturing the value of y, i.e. a (1 − α)100% prediction interval for y.

46 (1 − α)100% Prediction Limits for y when x = x0: (a + b x0) ± t(α/2) s √(1 + 1/n + (x0 − x̄)²/Sxx), where t(α/2) is the α/2 critical value for the t-distribution with n − 2 degrees of freedom.
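The two interval formulas differ only by the extra 1 under the square root, so one sketch covers both (reusing the hypothetical least_squares_fit helper from earlier):

```python
import numpy as np
from scipy import stats

def interval_at(x, y, x0, conf=0.95, predict=False):
    """CI for alpha + beta*x0, or a prediction interval for a new y at x0."""
    a, b, s, Sxx = least_squares_fit(x, y)
    n, xbar = len(x), np.mean(x)
    extra = 1.0 if predict else 0.0          # prediction adds the +1 term
    se = s * np.sqrt(extra + 1/n + (x0 - xbar)**2 / Sxx)
    tcrit = stats.t.ppf(1 - (1 - conf) / 2, df=n - 2)
    yhat = a + b * x0
    return yhat - tcrit * se, yhat + tcrit * se
```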

47 Example: In this example we are studying building fires in a city and are interested in the relationship between: 1. X = the distance between the building that sounds the alarm and the closest fire hall, and 2. Y = cost of the damage (in $1000s). The data were collected on n = 15 fires.

48 The Data

49 Scatter Plot

50 Computations

51 Computations Continued

52

53

54 95% Confidence Limits for slope β: 4.07 to 5.77. t.025 = 2.160 is the critical value for the t-distribution with 13 degrees of freedom.

55 95% Confidence Limits for intercept α: 7.21 to 13.35. t.025 = 2.160 is the critical value for the t-distribution with 13 degrees of freedom.

56 Least Squares Line: y = 4.92x + 10.28

57 (1 − α)100% Confidence Limits for α + β x0: (a + b x0) ± t(α/2) s √(1/n + (x0 − x̄)²/Sxx), where t(α/2) is the α/2 critical value for the t-distribution with n − 2 degrees of freedom.

58 95% Confidence Limits for α + β x0:

59 95% Confidence Limits for α + β x0. [Figure: regression line with confidence limits]

60 (1 − α)100% Prediction Limits for y when x = x0: (a + b x0) ± t(α/2) s √(1 + 1/n + (x0 − x̄)²/Sxx), where t(α/2) is the α/2 critical value for the t-distribution with n − 2 degrees of freedom.

61 95% Prediction Limits for y when x = x0

62 95% Prediction Limits for y when x = x0. [Figure: regression line with prediction limits]

63 Linear Regression Summary: Hypothesis Testing and Estimation

64 (1 − α)100% Confidence Limits for slope β: b ± t(α/2) s/√Sxx, where t(α/2) is the critical value for the t-distribution with n − 2 degrees of freedom.

65 Testing the slope: H0: β = 0 vs HA: β ≠ 0. The test statistic t = b/(s/√Sxx) has a t distribution with df = n − 2 if H0 is true.

66 (1 − α)100% Confidence Limits for intercept α: a ± t(α/2) s √(1/n + x̄²/Sxx), where t(α/2) is the critical value for the t-distribution with n − 2 degrees of freedom.

67 Testing the intercept: H0: α = 0 vs HA: α ≠ 0. The test statistic t = a/(s √(1/n + x̄²/Sxx)) has a t distribution with df = n − 2 if H0 is true.

68 (1 − α)100% Confidence Limits for α + β x0: (a + b x0) ± t(α/2) s √(1/n + (x0 − x̄)²/Sxx), where t(α/2) is the α/2 critical value for the t-distribution with n − 2 degrees of freedom.

69 (1 − α)100% Prediction Limits for y when x = x0: (a + b x0) ± t(α/2) s √(1 + 1/n + (x0 − x̄)²/Sxx), where t(α/2) is the α/2 critical value for the t-distribution with n − 2 degrees of freedom.

70 Correlation

71 Definition: the statistic r = Sxy / √(Sxx Syy) is called Pearson's correlation coefficient.

72 Properties: 1. −1 ≤ r ≤ 1, |r| ≤ 1, r² ≤ 1. 2. |r| = 1 (r = +1 or −1) if the points (x1, y1), (x2, y2), …, (xn, yn) lie along a straight line (positive slope for +1, negative slope for −1).

73 The test for independence (zero correlation). H0: X and Y are independent; HA: X and Y are correlated. The test statistic: t = r √(n − 2)/√(1 − r²). The Critical Region: Reject H0 if |t| > t(α/2) (df = n − 2). This is a two-tailed critical region; the critical region could also be one-tailed.
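A sketch of this test in Python (the helper name correlation_test is illustrative):

```python
import numpy as np
from scipy import stats

def correlation_test(x, y):
    """Pearson r and the t statistic for H0: X and Y are independent."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
    pval = 2 * stats.t.sf(abs(t), df=n - 2)
    return r, t, pval
```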

74 Example: In this example we are studying building fires in a city and are interested in the relationship between: 1. X = the distance between the building that sounds the alarm and the closest fire hall, and 2. Y = cost of the damage (in $1000s). The data were collected on n = 15 fires.

75 The Data

76 Scatter Plot

77 Computations

78 Computations Continued

79

80 The correlation coefficient: r = Sxy/√(Sxx Syy). The test for independence (zero correlation) uses the test statistic t = r √(n − 2)/√(1 − r²). We reject H0: independence if |t| > t0.025 = 2.160. Here H0: independence is rejected.

81 Relationship between Regression and Correlation

82 Recall b = Sxy/Sxx. Also, since r = Sxy/√(Sxx Syy), we have b = √(Syy/Sxx) · r = (sy/sx) r. Thus the slope of the least squares line is simply the ratio of the standard deviations × the correlation coefficient.

83 The test for independence (zero correlation), H0: X and Y are independent vs HA: X and Y are correlated, uses the test statistic t = r √(n − 2)/√(1 − r²). Note: since r = b √(Sxx/Syy) and 1 − r² = RSS/Syy, this t statistic equals the t statistic for testing zero slope, t = b/(s/√Sxx).

84 The two tests: 1. The test for independence (zero correlation), H0: X and Y are independent vs HA: X and Y are correlated, and 2. The test for zero slope, H0: β = 0 vs HA: β ≠ 0, are equivalent.

85 That is, the test statistic for independence, t = r √(n − 2)/√(1 − r²), is numerically equal to the test statistic for zero slope, t = b/(s/√Sxx).
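A quick numeric check on the cigarette data, assuming the slope_inference and correlation_test helpers sketched earlier:

```python
# On any data set the two t statistics coincide; here, the cigarette data.
x = [48, 50, 38, 110, 110, 49, 23, 25, 30, 51, 130]
y = [18, 15, 17, 35, 46, 24, 6, 9, 11, 25, 20]

_, t_slope, _ = slope_inference(x, y)     # t for H0: beta = 0
_, t_corr, _ = correlation_test(x, y)     # t for H0: zero correlation
print(abs(t_slope - t_corr) < 1e-10)      # True: the two tests are equivalent
```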

86 Regression (in general)

87 In many experiments we will have collected data on a single variable Y (the dependent variable) and on p (say) other variables X1, X2, X3, ..., Xp (the independent variables). One is interested in determining a model that describes the relationship between Y (the response (dependent) variable) and X1, X2, …, Xp (the predictor (independent) variables). This model can be used for: –Prediction –Controlling Y by manipulating X1, X2, …, Xp

88 The Model is an equation of the form Y = f(X1, X2, ..., Xp | θ1, θ2, ..., θq) + ε, where θ1, θ2, ..., θq are unknown parameters of the function f and ε is a random disturbance (usually assumed to have a normal distribution with mean 0 and standard deviation σ).

89 Examples: 1. Y = Blood Pressure, X = age. The model is Y = α + β X + ε, thus θ1 = α and θ2 = β. This model is called the simple Linear Regression Model Y = α + β X.

90 2. Y = average of five best times for running the 100m, X = the year. The model is Y = α e^(−β X) + ε, thus θ1 = α and θ2 = β. This model is called the exponential Regression Model Y = α e^(−β X) + ε.

91 3. Y = gas mileage (mpg) of a car brand, X1 = engine size, X2 = horsepower, X3 = weight. The model is Y = β0 + β1 X1 + β2 X2 + β3 X3 + ε. This model is called the Multiple Linear Regression Model.

92 The Multiple Linear Regression Model

93 In Multiple Linear Regression we assume the following model: Y = β0 + β1 X1 + β2 X2 + ... + βp Xp + ε. This model is called the Multiple Linear Regression Model, where β0, β1, β2, ..., βp are unknown parameters and ε is a random disturbance assumed to have a normal distribution with mean 0 and standard deviation σ.

94 The importance of the Linear model: 1. It is the simplest form of a model in which each independent variable has some effect on the dependent variable Y. –When fitting models to data one tries to find the simplest form of a model that still adequately describes the relationship between the dependent variable and the independent variables. –The linear model is sometimes the first model to be fitted and is abandoned only if it turns out to be inadequate.

95 2. In many instances a linear model is the most appropriate model to describe the dependence relationship between the dependent variable and the independent variables. –This will be true if the dependent variable increases at a constant rate as any of the independent variables is increased while holding the other independent variables constant.

96 3. Many non-linear models can be linearized (put into the form of a linear model by appropriately transforming the dependent variable and/or any or all of the independent variables). –This important fact (i.e. that many non-linear models are linearizable) ensures the wide utility of the linear model.

97 An Example. The following data come from an experiment that investigated the source from which corn plants in various soils obtain their phosphorus. –The concentration of inorganic phosphorus (X1) and the concentration of organic phosphorus (X2) were measured in the soil of n = 18 test plots. –In addition, the phosphorus content (Y) of corn grown in the soil was also measured. The data are displayed below:

98
Inorganic        Organic          Plant Available
Phosphorus X1    Phosphorus X2    Phosphorus Y
0.4              53               64
0.4              23               60
3.1              19               71
0.6              34               61
4.7              24               54
1.7              65               77
9.4              44               81
10.1             31               93
11.6             29               93
12.6             58               51
10.9             37               76
23.1             46               96
23.1             50               77
21.6             44               93
23.1             56               95
1.9              36               54
26.8             58               168
29.9             51               99

99 Coefficients: Intercept 56.2510241 (β0), X1 1.78977412 (β1), X2 0.08664925 (β2). Equation: Y = 56.2510241 + 1.78977412 X1 + 0.08664925 X2

100

101 The Multiple Linear Regression Model

102 In Multiple Linear Regression we assume the following model: Y = β0 + β1 X1 + β2 X2 + ... + βp Xp + ε. This model is called the Multiple Linear Regression Model, where β0, β1, β2, ..., βp are unknown parameters and ε is a random disturbance assumed to have a normal distribution with mean 0 and standard deviation σ.

103 Summary of the Statistics used in Multiple Regression

104 The Least Squares Estimates: the values b0, b1, ..., bp that minimize Σ (yi − b0 − b1 x1i − ... − bp xpi)².

105 The Analysis of Variance Table Entries: a) Adjusted Total Sum of Squares: SS(Total) = Σ (yi − ȳ)². b) Residual Sum of Squares: SS(Error) = Σ (yi − ŷi)². c) Regression Sum of Squares: SS(Reg) = SS(Total) − SS(Error). Note: SS(Total) = SS(Reg) + SS(Error).

106 The Analysis of Variance Table

Source       Sum of Squares   d.f.     Mean Square                          F
Regression   SS(Reg)          p        SS(Reg)/p = MS(Reg)                  MS(Reg)/s²
Error        SS(Error)        n−p−1    SS(Error)/(n−p−1) = MS(Error) = s²
Total        SS(Total)        n−1

107 Uses: 1. To estimate σ² (the error variance): use s² = MS(Error) to estimate σ². 2. To test the hypothesis H0: β1 = β2 = ... = βp = 0: use the test statistic F = MS(Reg)/MS(Error); reject H0 if F > Fα(p, n−p−1).

108 3. To compute other statistics that are useful in describing the relationship between Y (the dependent variable) and X1, X2, ..., Xp (the independent variables). a) R² = the coefficient of determination = SS(Reg)/SS(Total) = the proportion of variance in Y explained by X1, X2, ..., Xp; 1 − R² = the proportion of variance in Y that is left unexplained by X1, X2, ..., Xp = SS(Error)/SS(Total).

109 b) Ra² = "R² adjusted" for degrees of freedom = 1 − [the proportion of variance in Y that is left unexplained by X1, X2, ..., Xp, adjusted for d.f.] = 1 − [SS(Error)/(n−p−1)]/[SS(Total)/(n−1)].

110 c) R = √R² = the Multiple correlation coefficient of Y with X1, X2, ..., Xp = the maximum correlation between Y and a linear combination of X1, X2, ..., Xp. Comment: The statistics F, R², Ra² and R are equivalent statistics.
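These ANOVA quantities can be computed directly from the phosphorus data given earlier; a numpy sketch (it should reproduce the coefficients shown on the earlier slide):

```python
import numpy as np

# Phosphorus data (X1, X2, Y), n = 18 plots, from the example above.
data = np.array([
    [0.4, 53, 64], [0.4, 23, 60], [3.1, 19, 71], [0.6, 34, 61],
    [4.7, 24, 54], [1.7, 65, 77], [9.4, 44, 81], [10.1, 31, 93],
    [11.6, 29, 93], [12.6, 58, 51], [10.9, 37, 76], [23.1, 46, 96],
    [23.1, 50, 77], [21.6, 44, 93], [23.1, 56, 95], [1.9, 36, 54],
    [26.8, 58, 168], [29.9, 51, 99],
])
X, y = data[:, :2], data[:, 2]
n, p = len(y), X.shape[1]

A = np.column_stack([np.ones(n), X])          # design matrix with intercept
beta, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
yhat = A @ beta
ss_total = np.sum((y - y.mean())**2)          # SS(Total)
ss_error = np.sum((y - yhat)**2)              # SS(Error)
ss_reg = ss_total - ss_error                  # SS(Reg)
r2 = ss_reg / ss_total
r2_adj = 1 - (ss_error / (n - p - 1)) / (ss_total / (n - 1))
F = (ss_reg / p) / (ss_error / (n - p - 1))
print(beta.round(4), round(r2, 3), round(r2_adj, 3), round(F, 2))
```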

111 Using Statistical Packages To perform Multiple Regression

112 Using SPSS Note: The use of another statistical package such as Minitab is similar to using SPSS

113 After starting the SPSS program the following dialogue box appears:

114 If you select Opening an existing file and press OK, the following dialogue box appears:

115 The following dialogue box appears:

116 If the variable names are in the file, ask it to read the names. If you do not specify the Range, the program will identify the Range. Once you click OK, two windows will appear.

117 One that will contain the output:

118 The other containing the data:

119 To perform any statistical Analysis select the Analyze menu:

120 Then select Regression and Linear.

121 The following Regression dialogue box appears

122 Select the Dependent variable Y.

123 Select the Independent variables X1, X2, etc.

124 If you select the Method - Enter.

125 All variables will be put into the equation. There are also several other methods that can be used: 1. Forward selection 2. Backward Elimination 3. Stepwise Regression

126

127 Forward selection 1.This method starts with no variables in the equation 2.Carries out statistical tests on variables not in the equation to see which have a significant effect on the dependent variable. 3.Adds the most significant. 4.Continues until all variables not in the equation have no significant effect on the dependent variable.

128 Backward Elimination 1.This method starts with all variables in the equation 2.Carries out statistical tests on variables in the equation to see which have no significant effect on the dependent variable. 3.Deletes the least significant. 4.Continues until all variables in the equation have a significant effect on the dependent variable.

129 Stepwise Regression (uses both forward and backward techniques) 1.This method starts with no variables in the equation 2.Carries out statistical tests on variables not in the equation to see which have a significant effect on the dependent variable. 3.It then adds the most significant. 4.After a variable is added it checks to see if any variables added earlier can now be deleted. 5.Continues until all variables not in the equation have no significant effect on the dependent variable.

130 All of these methods are procedures for attempting to find the best equation. The best equation is the equation that is the simplest (not containing variables that are not important) yet adequate (containing variables that are important).
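As an illustration of how such a procedure works, here is a sketch of forward selection in Python (a simplified version; real packages use F-to-enter rules and other refinements):

```python
import numpy as np
from scipy import stats

def forward_selection(X, y, names, alpha=0.05):
    """Greedy forward selection: repeatedly add the most significant predictor,
    stopping when no remaining predictor is significant at level alpha."""
    selected, remaining = [], list(range(X.shape[1]))
    n = len(y)
    while remaining:
        best = None
        for j in remaining:
            cols = selected + [j]
            A = np.column_stack([np.ones(n), X[:, cols]])
            beta, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            df = n - A.shape[1]
            s2 = resid @ resid / df                  # error variance estimate
            cov = s2 * np.linalg.inv(A.T @ A)        # covariance of estimates
            t = beta[-1] / np.sqrt(cov[-1, -1])      # t for the candidate term
            pval = 2 * stats.t.sf(abs(t), df)
            if best is None or pval < best[1]:
                best = (j, pval)
        if best[1] >= alpha:
            break                                    # nothing significant left
        selected.append(best[0])
        remaining.remove(best[0])
    return [names[j] for j in selected]
```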

131 Once the dependent variable, the independent variables and the Method have been selected, press OK and the Analysis will be performed.

132 The output will contain the following table. R² and R² adjusted measure the proportion of variance in Y that is explained by X1, X2, X3, etc. (67.6% and 67.3%). R is the Multiple correlation coefficient (the maximum correlation between Y and a linear combination of X1, X2, X3, etc.).

133 The next table is the Analysis of Variance Table. The F test is testing whether the regression coefficients of the predictor variables are all zero, namely that none of the independent variables X1, X2, X3, etc. have any effect on Y.

134 The final table in the output gives the estimates of the regression coefficients, their standard errors and the t tests for testing if they are zero. Note: Engine size has no significant effect on Mileage.

135 The estimated equation is read from the table below:

136 Note the equation is: Mileage decreases: 1. With increases in Engine Size (not significant, p = 0.432) 2. With increases in Horsepower (significant, p = 0.000) 3. With increases in Weight (significant, p = 0.000)

137 Logistic regression

138 Recall the simple linear regression model: y = β0 + β1 x + ε, where we are trying to predict a continuous dependent variable y from a continuous independent variable x. This model can be extended to the Multiple linear regression model: y = β0 + β1 x1 + β2 x2 + … + βp xp + ε. Here we are trying to predict a continuous dependent variable y from several continuous independent variables x1, x2, …, xp.

139 Now suppose the dependent variable y is binary. It takes on two values, “Success” (1) or “Failure” (0). This is the situation in which Logistic Regression is used. We are interested in predicting y from a continuous independent variable x.

140 Example: We are interested in how the success (y) of a new antibiotic cream in curing “acne problems” depends on the amount (x) that is applied daily. The values of y are 1 (Success) or 0 (Failure). The values of x range over a continuum.

141 The Logistic Regression Model. Let p denote P[y = 1] = P[Success]. This quantity will increase with the value of x. The ratio p/(1 − p) is called the odds ratio. This quantity will also increase with the value of x, ranging from zero to infinity. The quantity ln(p/(1 − p)) is called the log odds ratio.

142 Example: odds ratio, log odds ratio. Suppose a die is rolled: Success = “roll a six”, p = 1/6. The odds ratio: (1/6)/(5/6) = 1/5 = 0.2. The log odds ratio: ln(0.2) = −1.61.

143 The Logistic Regression Model assumes the log odds ratio is linearly related to x: ln(p/(1 − p)) = β0 + β1 x, i.e., in terms of the odds ratio, p/(1 − p) = e^(β0 + β1 x).

144 The Logistic Regression Model. Solving for p in terms of x: p = e^(β0 + β1 x)/(1 + e^(β0 + β1 x)), or p = 1/(1 + e^(−(β0 + β1 x))).

145 Interpretation of the parameter β0 (determines the intercept). [Graph of p vs x]

146 Interpretation of the parameter β1 (determines, along with β0, when p is 0.50): p = 0.50 when x = −β0/β1. [Graph of p vs x]

147 Also, β1/4 is the rate of increase in p with respect to x when p = 0.50 (since dp/dx = β1 p(1 − p) = β1/4 at p = 0.5).
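A short numeric check of this interpretation (the parameter values are hypothetical):

```python
import numpy as np

def logistic_p(x, b0, b1):
    """p(x) = exp(b0 + b1*x) / (1 + exp(b0 + b1*x))."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))

b0, b1 = -4.0, 0.8                  # hypothetical parameters
x50 = -b0 / b1                      # x where p = 0.50
h = 1e-6                            # step for a numerical derivative
slope = (logistic_p(x50 + h, b0, b1) - logistic_p(x50 - h, b0, b1)) / (2 * h)
print(x50, slope, b1 / 4)           # slope at p = 0.5 equals b1/4 = 0.2
```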

148 Interpretation of the parameter β1 (determines the slope when p is 0.50). [Graph of p vs x]

149 The data will, for each case, consist of: 1. a value for x, the continuous independent variable, and 2. a value for y (1 or 0) (Success or Failure). Total of n = 250 cases.

150

151 Estimation of the parameters: the parameters are estimated by Maximum Likelihood estimation, which requires a statistical package such as SPSS.
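For readers without SPSS, here is a sketch of the same maximum likelihood fit using Python's statsmodels; the x and y arrays are simulated stand-ins for the slides' data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 250)                      # dose applied daily
p = 1 / (1 + np.exp(-(-4.0 + 0.8 * x)))          # true model for the simulation
y = rng.binomial(1, p)                           # 0/1 outcomes

model = sm.Logit(y, sm.add_constant(x))          # ML estimation of b0, b1
result = model.fit(disp=0)
print(result.params, result.bse)                 # estimates and their S.E.
```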

152 Using SPSS to perform Logistic regression Open the data file:

153 Choose from the menu: Analyze -> Regression -> Binary Logistic

154 The following dialogue box appears. Select the dependent variable (y) and the independent variable (x) (covariate). Press OK.

155 Here is the output: the estimates and their S.E.

156 The parameter Estimates

157 Interpretation of the parameter β0 (determines the intercept). Interpretation of the parameter β1 (determines, along with β0, when p is 0.50).

158 Another interpretation of the parameter β1: β1/4 is the rate of increase in p with respect to x when p = 0.50.

159 The Logistic Regression Model. The dependent variable y is binary: it takes on two values, “Success” (1) or “Failure” (0). We are interested in predicting y from a continuous independent variable x.

160 The Logistic Regression Model. Let p denote P[y = 1] = P[Success]. This quantity will increase with the value of x. The ratio p/(1 − p) is called the odds ratio. This quantity will also increase with the value of x, ranging from zero to infinity. The quantity ln(p/(1 − p)) is called the log odds ratio.

161 The Logistic Regression Model assumes the log odds ratio is linearly related to x: ln(p/(1 − p)) = β0 + β1 x, i.e., in terms of the odds ratio, p/(1 − p) = e^(β0 + β1 x).

162 The Logistic Regression Model in terms of p: p = e^(β0 + β1 x)/(1 + e^(β0 + β1 x)).

163 The graph of p vs x. [Figure: logistic curve of p against x]

164 The Multiple Logistic Regression model

165 Here we attempt to predict the outcome of a binary response variable Y from several independent variables X1, X2, … etc.

166 Multiple Logistic Regression: an example. In this example we are interested in determining the risk of infants (who were born prematurely) developing BPD (bronchopulmonary dysplasia). More specifically, we are interested in developing a predictive model which will determine the probability of developing BPD from X1 = gestational age and X2 = birth weight.

167 For n = 223 infants in a prenatal ward the following measurements were determined: 1. X1 = gestational age (weeks), 2. X2 = birth weight (grams), and 3. Y = presence of BPD.

168 The data

169 The results

170 Graph: Showing Risk of BPD vs GA and BrthWt

171 Non-Parametric Statistics

