
Presentation on theme: "CHAPTER 7: Linear Correlation & Regression Methods. 7.1 Motivation; 7.2 Correlation / Simple Linear Regression; 7.3 Extensions of Simple Linear Regression"— Presentation transcript:

1 CHAPTER 7: Linear Correlation & Regression Methods. Outline: 7.1 Motivation; 7.2 Correlation / Simple Linear Regression; 7.3 Extensions of Simple Linear Regression.

2 Parameter Estimation via SAMPLE DATA … Testing for association between two POPULATION variables X and Y. Categorical variables: Chi-squared Test. Numerical variables: ??????? Examples: X = Disease status (D+, D–), Y = Exposure status (E+, E–); X = # children in household (0, 1-2, 3-4, 5+), Y = Income level (Low, Middle, High). PARAMETERS: Means, Variances, Covariance.

3 Parameter Estimation via SAMPLE DATA … Numerical variables: ??????? POPULATION PARAMETERS: Means, Variances, Covariance. SAMPLE STATISTICS: sample means, sample variances, sample covariance (can be +, –, or 0).

4 Parameter Estimation via SAMPLE DATA … Sample of n data points (x1, y1), (x2, y2), (x3, y3), (x4, y4), …, (xn, yn), displayed as a scatterplot of Y vs. X (JAMA. 2003;290:1486-1493). POPULATION PARAMETERS: Means, Variances, Covariance. SAMPLE STATISTICS: sample means, sample variances, sample covariance (can be +, –, or 0).

5 Parameter Estimation via SAMPLE DATA … Same scatterplot of the n data points. Does this suggest a linear trend between X and Y? If so, how do we measure it?

6 Testing for LINEAR association between two population variables X and Y… Numerical variables. PARAMETERS: Means, Variances, Covariance, and the Linear Correlation Coefficient ρ = σXY / (σX σY), which is always between –1 and +1.

7 Parameter Estimation via SAMPLE DATA … Sample of n data points (x1, y1), …, (xn, yn); scatterplot (JAMA. 2003;290:1486-1493). POPULATION PARAMETERS: Means, Variances, Covariance. SAMPLE STATISTICS: sample means, sample variances, sample covariance (can be +, –, or 0), and the sample Linear Correlation Coefficient r, always between –1 and +1.

8 Parameter Estimation via SAMPLE DATA … Scatterplot of the n = 10 data points, as before (JAMA. 2003;290:1486-1493). Example in R (reformatted for brevity):
> pop = seq(0, 20, 0.1)
> x = sort(sample(pop, 10))
  1.1 1.8 2.1 3.7 4.0 7.3 9.1 11.9 12.4 17.1
> y = sample(pop, 10)
  13.1 18.3 17.6 19.1 19.3 3.2 5.6 13.6 8.0 3.0
> plot(x, y, pch = 19)    # scatterplot
> c(mean(x), mean(y))     # sample means
  7.05 12.08
> var(x)                  # sample variances
  29.48944
> var(y)
  43.76178
> cov(x, y)               # sample covariance (can be +, –, or 0)
  -25.86667
> cor(x, y)               # sample linear correlation coefficient, always between –1 and +1
  -0.7200451
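As a quick check (a sketch, reusing the x and y sampled above), the sample correlation is just the sample covariance divided by the product of the sample standard deviations:

# r = s_xy / (s_x * s_y); should reproduce cor(x, y) = -0.7200451
cov(x, y) / (sd(x) * sd(y))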

9 Parameter Estimation via SAMPLE DATA … Numerical variables; scatterplot of the n data points (x1, y1), …, (xn, yn) (JAMA. 2003;290:1486-1493). Linear Correlation Coefficient r: always between –1 and +1. r measures the strength of linear association.

10 The r scale runs from –1 through 0 to +1: values of r toward –1 indicate negative linear correlation, values toward +1 indicate positive linear correlation. r measures the strength of linear association.

11 (Same r scale: –1 … 0 … +1, with negative linear correlation on the left and positive linear correlation on the right.)

12 (Same r scale.) r measures the strength of LINEAR association.

13 For the sample above, r = cor(x, y) = -0.7200451: a negative linear correlation.

14 Testing for LINEAR association between two numerical population variables X and Y: Linear Correlation Coefficient. Now that we have r, we can conduct HYPOTHESIS TESTING on ρ (H0: ρ = 0 vs. HA: ρ ≠ 0). Test statistic for the p-value: t = r √[(n – 2) / (1 – r²)], on n – 2 degrees of freedom; here t = -2.935 on 8 df, so the two-sided p-value is 2 * pt(-2.935, 8) = .0189 < .05.
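This test can be reproduced directly in R (a sketch, reusing the x and y above; cor.test is the built-in test of H0: ρ = 0):

r <- cor(x, y)
n <- length(x)
t.stat <- r * sqrt((n - 2) / (1 - r^2))   # -2.935
2 * pt(-abs(t.stat), df = n - 2)          # 0.0189, the two-sided p-value
cor.test(x, y)                            # same t, df = 8, and p-value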

15 Parameter Estimation via SAMPLE DATA … If such a linear association between X and Y exists, then for some intercept β0 and slope β1 we have the model Y = β0 + β1 X + ε, i.e., "Response = Model + Error." Find estimates b0 and b1 for the "best" line ŷ = b0 + b1 x. Best with respect to the residuals, in what sense???

16 SIMPLE LINEAR REGRESSION via the METHOD OF LEAST SQUARES: find the estimates b0 and b1 that minimize the sum of squared residuals Σ (yi – ŷi)², where ŷi = b0 + b1 xi. The result is the "Least Squares Regression Line."

17 The least squares solution is b1 = sxy / sx² (sample covariance over the sample variance of X) and b0 = ȳ – b1 x̄. Check ✓
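A minimal sketch (assuming the x and y sampled above): the least squares coefficients can be computed by hand and compared against lm().

b1 <- cov(x, y) / var(x)       # slope:     -25.86667 / 29.48944 = -0.8772
b0 <- mean(y) - b1 * mean(x)   # intercept: 12.08 - b1 * 7.05    = 18.2639
c(b0, b1)
coef(lm(y ~ x))                # should agree with b0 and b1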

18 SIMPLE LINEAR REGRESSION via the METHOD OF LEAST SQUARES. Data (n = 10): predictor X = 1.1, 1.8, 2.1, 3.7, 4.0, 7.3, 9.1, 11.9, 12.4, 17.1; observed response Y = 13.1, 18.3, 17.6, 19.1, 19.3, 3.2, 5.6, 13.6, 8.0, 3.0. Find estimates b0 and b1 for the "best" line, i.e., the line that minimizes the sum of squared residuals.

19 Same data, now with a row of fitted responses ŷi = b0 + b1 xi beneath the predictor and observed-response rows.

20 ~EXERCISE~ Fill in the fitted responses ŷi.

21 Rows: predictor, observed response, fitted response, residuals. ~EXERCISE~ Fill in the residuals yi – ŷi.

22 (Same table; both rows, the fitted responses and the residuals, are left as exercises.)
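One way to carry out the exercise in R (a sketch, reusing b0 and b1 from above; fitted() and resid() on the lm object give the same values):

y.hat <- b0 + b1 * x     # fitted responses
e <- y - y.hat           # residuals
round(rbind(x, y, y.hat, e), 3)
sum(e)                   # residuals from a least squares fit sum to ~0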

23 Testing for LINEAR association between two numerical population variables X and Y: Linear Regression Coefficients. "Response = Model + Error." Now that we have the estimates b0 and b1, we can conduct HYPOTHESIS TESTING on β0 and β1 (e.g., H0: β1 = 0, i.e., no linear association). Test statistic for the p-value: t = b1 / SE(b1), on n – 2 degrees of freedom.

24 (Repeat of the least squares exercise table from slide 22: predictor X, observed response Y, fitted response, residuals.)

25 Testing for LINEAR association between two numerical population variables X and Y: Linear Regression Coefficients. "Response = Model + Error." Hypothesis testing on β0 and β1: for H0: β1 = 0, the test statistic is t = b1 / SE(b1) = -2.935 on 8 df, the same t-score as the test of H0: ρ = 0! p-value = .0189.

26 > plot(x, y, pch = 19)
> lsreg = lm(y ~ x)    # or lsfit(x, y)
> abline(lsreg)
> summary(lsreg)

Call:
lm(formula = y ~ x)

Residuals:
    Min      1Q  Median      3Q     Max
-8.6607 -3.2154  0.8954  3.4649  5.7742

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  18.2639     2.6097   6.999 0.000113 ***
x            -0.8772     0.2989  -2.935 0.018857 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 4.869 on 8 degrees of freedom
Multiple R-squared: 0.5185,  Adjusted R-squared: 0.4583
F-statistic: 8.614 on 1 and 8 DF,  p-value: 0.01886

BUT WHY HAVE TWO METHODS FOR THE SAME PROBLEM??? Because this second method generalizes…

27 ANOVA Table (recall, from one-way ANOVA):
   Source      df   SS   MS   F-ratio   p-value
   Treatment
   Error
   Total                  –

28 ANOVA Table for regression: same columns (Source, df, SS, MS, F-ratio, p-value), but the "Treatment" row becomes "Regression" (rows: Regression, Error, Total).

29 ANOVA Table: with a single predictor, the Regression row has df = 1; the remaining entries are still to be filled in.

30 (Repeat of slide 25.) Testing for LINEAR association: for H0: β1 = 0, the test statistic is t = b1 / SE(b1) = -2.935 on 8 df, the same t-score as the test of H0: ρ = 0! p-value = .0189.

31 ANOVA Table: Regression df = 1, Error df = 8; SS, MS, F-ratio, and p-value still to be filled in (the "–" marks the unused cells of the Total row).

32 Parameter Estimation via SAMPLE DATA … Sample statistics (means, variances) for the n data points (x1, y1), …, (xn, yn); scatterplot (JAMA. 2003;290:1486-1493).

33 Parameter Estimation via SAMPLE DATA … SS_Tot is a measure of the total amount of variability in the observed responses (i.e., before any model-fitting).

34 Parameter Estimation via SAMPLE DATA … SS_Reg is a measure of the total amount of variability in the fitted responses (i.e., after model-fitting).

35 Parameter Estimation via SAMPLE DATA … SS_Err is a measure of the total amount of variability in the resulting residuals (i.e., after model-fitting).

36 SIMPLE LINEAR REGRESSION via the METHOD OF LEAST SQUARES (exercise data from slide 18). From the fitted line: SS_Err = 189.656, SS_Reg = 204.200, and SS_Tot = 393.856 = 9 × 43.76178, i.e., (n – 1) times the sample variance of Y.

37 Note that SS_Tot = SS_Reg + SS_Err: 393.856 = 204.200 + 189.656. By construction, the least squares line makes SS_Err the minimum possible value over all candidate lines.
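A sketch of the same decomposition in R (reusing lsreg = lm(y ~ x) from above):

ss.tot <- sum((y - mean(y))^2)               # 393.856 = 9 * var(y)
ss.err <- sum(resid(lsreg)^2)                # 189.656
ss.reg <- sum((fitted(lsreg) - mean(y))^2)   # 204.200
c(ss.tot, ss.reg + ss.err)                   # equal: SS_Tot = SS_Reg + SS_Err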

38 ANOVA Table:
   Source       df   SS        MS       F-ratio                      p-value
   Regression    1   204.200   MS_Reg   F on (k – 1, n – k) df       0 < p < 1
   Error         8   189.656   MS_Err
   Total         9   393.856     –

39 ANOVA Table:
   Source       df   SS        MS        F-ratio   p-value
   Regression    1   204.200   204.200   8.61349   0.018857
   Error         8   189.656    23.707
   Total         9   393.856      –
   Same as before!

40 The same table, reproduced in R:
> summary(aov(lsreg))
            Df Sum Sq Mean Sq F value  Pr(>F)
x            1 204.20 204.201  8.6135 0.01886 *
Residuals    8 189.66  23.707

41 Moreover, the Coefficient of Determination r² = SS_Reg / SS_Tot = 204.200 / 393.856 = 0.5185. The least squares regression line accounts for 51.85% of the total variability in the observed response, with 48.15% remaining.

42 Coefficient of Determination: r² is also the square of the linear correlation coefficient, r² = (-0.7200451)² = 0.5185. The least squares regression line accounts for 51.85% of the total variability in the observed response, with 48.15% remaining.
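Both routes to r² can be checked in R (a sketch; lsreg, ss.reg, and ss.tot as computed above):

cor(x, y)^2                # (-0.7200451)^2 = 0.5185
ss.reg / ss.tot            # 204.200 / 393.856 = 0.5185
summary(lsreg)$r.squared   # the "Multiple R-squared" from the lm output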

43 > plot(x, y, pch = 19)
> lsreg = lm(y ~ x)
> abline(lsreg)
> summary(lsreg)

Call:
lm(formula = y ~ x)

Residuals:
    Min      1Q  Median      3Q     Max
-8.6607 -3.2154  0.8954  3.4649  5.7742

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  18.2639     2.6097   6.999 0.000113 ***
x            -0.8772     0.2989  -2.935 0.018857 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 4.869 on 8 degrees of freedom
Multiple R-squared: 0.5185,  Adjusted R-squared: 0.4583
F-statistic: 8.614 on 1 and 8 DF,  p-value: 0.01886

Coefficient of Determination: the least squares regression line accounts for 51.85% of the total variability in the observed response, with 48.15% remaining.

44 Summary of Linear Correlation and Simple Linear Regression. Given n data points (x1, y1), …, (xn, yn), with sample means, variances, and covariance (scatterplot of Y vs. X; JAMA. 2003;290:1486-1493): the Linear Correlation Coefficient r (–1 ≤ r ≤ +1) measures the strength of LINEAR association; the Least Squares Regression Line minimizes SS_Err = SS_Tot – SS_Reg (ANOVA). All point estimates can be upgraded to CIs for hypothesis testing, etc.

45 Summary of Linear Correlation and Simple Linear Regression (continued): the fitted regression line is shown with lower and upper 95% confidence bands (95% Confidence Intervals). All point estimates can be upgraded to CIs for hypothesis testing, etc. (See notes for "95% prediction intervals.")
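A sketch of how those bands can be obtained in R with predict() on the fitted lm object (lsreg as above); "confidence" gives the band for the mean response, "prediction" the wider band for a new observation:

new.x <- data.frame(x = seq(min(x), max(x), length.out = 50))
conf.band <- predict(lsreg, newdata = new.x, interval = "confidence")  # columns: fit, lwr, upr
pred.band <- predict(lsreg, newdata = new.x, interval = "prediction")
plot(x, y, pch = 19); abline(lsreg)
lines(new.x$x, conf.band[, "lwr"], lty = 2)
lines(new.x$x, conf.band[, "upr"], lty = 2)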

46 Summary of Linear Correlation and Simple Linear Regression (continued): Coefficient of Determination r² = SS_Reg / SS_Tot, the proportion of total variability modeled by the regression line's variability; equivalently, SS_Err = SS_Tot – SS_Reg. All point estimates can be upgraded to CIs for hypothesis testing, etc.

47 Multilinear Regression: testing for LINEAR association between a population response variable Y and multiple predictor variables X1, X2, X3, …. "Response = Model + Error": Y = β0 + β1 X1 + β2 X2 + β3 X3 + … + ε, where β1, β2, β3, … are the "main effects." For now, assume the "additive model," i.e., main effects only.

48 Multilinear Regression: [3-D scatterplot of the response Y against predictors X1 and X2, showing the fitted plane, a true response yi at (x1i, x2i), the corresponding fitted response, and the residual between them.] Least Squares calculation of the regression coefficients is computer-intensive; the formulas require Linear Algebra (matrices)! Once calculated, how do we then test the null hypothesis? ANOVA.

49 Multilinear Regression: testing for LINEAR association between a population response variable Y and multiple predictor variables X1, X2, X3, … ("Response = Model + Error"; "main effects"). R code example: lsreg = lm(y ~ x1 + x2 + x3)

50 Adding quadratic terms, etc. ("polynomial regression"). Note that inside an R model formula the ^ operator does not square a numeric variable, so the powers must be wrapped in I() (or poly() used). R code example: lsreg = lm(y ~ x + I(x^2) + I(x^3))

51 Quadratic terms, etc. ("polynomial regression"): lsreg = lm(y ~ x + I(x^2) + I(x^3)). "Interactions" between predictors: lsreg = lm(y ~ x1 + x2 + x1:x2), or equivalently lsreg = lm(y ~ x1*x2).


56 Recall… Suppose these are actually two subgroups, requiring two distinct linear regressions! Multiple linear regression with an interaction with an indicator ("dummy") variable I (I = 1 for one subgroup, I = 0 for the other). Example in R (reformatted for brevity):
> I = c(1,1,1,1,1,0,0,0,0,0)
> lsreg = lm(y ~ x*I)
> summary(lsreg)
Coefficients:
            Estimate
(Intercept)  6.56463
x            0.00998
I            6.80422
x:I          1.60858
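Reading the two subgroup lines off those coefficients (a sketch: the I = 0 group uses the intercept and x terms alone, while the I = 1 group adds the I and x:I terms):

b <- coef(lsreg)
c(b["(Intercept)"], b["x"])                       # I = 0 line:  6.565 + 0.010 x
c(b["(Intercept)"] + b["I"], b["x"] + b["x:I"])   # I = 1 line: 13.369 + 1.619 x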

57 ANOVA Table (revisited), from a sample of n data points… Note that if [H0] is true, then it would follow that … But how are these regression coefficients calculated in general? From the "normal equations," which are solved via computer (intensive).
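For illustration, the normal equations for the simple-linear case can be set up and solved directly in R (a sketch, reusing x and y from above; X is the design matrix with a leading column of 1s for the intercept):

X <- cbind(1, x)                        # design matrix
beta <- solve(t(X) %*% X, t(X) %*% y)   # normal equations: (X'X) beta = X'y
beta                                    # matches coef(lm(y ~ x)): 18.2639, -0.8772
qr.solve(X, y)                          # a numerically stabler alternative (QR decomposition)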

58 ANOVA Table (revisited), based on n data points: Source | df | SS | MS | F | p-value, with rows Regression, Error, Total. *** How are only the statistically significant variables determined? ***

59 "MODEL SELECTION" (BE: backward elimination). Step 0: conduct an overall F-test of significance (via ANOVA) of the full model β0 + β1X1 + β2X2 + β3X3 + β4X4 + …. If significant, then… Step 1: individual t-tests of each coefficient, e.g., p1 < .05 (Reject H0), p2 < .05 (Reject H0), p3 not significant (Accept H0), p4 < .05 (Reject H0), …. Step 2: are all coefficients significant at level α? If not….

60 …delete that term (here X3), leaving β0 + β1X1 + β2X2 + β4X4 + ….

61 …delete that term, and recompute new coefficients! Step 3: repeat Steps 1-2 as necessary until all remaining coefficients are significant → the reduced model, e.g., β0 + β1X1 + β2X2 + β4X4 + ….
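A sketch of this backward-elimination loop in R, with hypothetical response y and predictors x1-x4 (update() drops one term at a time; note that the built-in step() function selects by AIC rather than by individual p-values, so it is not identical to the p-value-based procedure above):

full <- lm(y ~ x1 + x2 + x3 + x4)
summary(full)                        # Steps 0-1: overall F-test and individual t-tests
reduced <- update(full, . ~ . - x3)  # Step 2: drop the non-significant term
summary(reduced)                     # Step 3: recompute and re-examine the remaining coefficients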

62 11 22 kk = = = H0:H0: k  2 independent, equivariant, normally-distributed “treatment groups” Recall ~


72 Re-plot data on a “log-log” scale.


75 Re-plot data on a "log" scale (of Y only).
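As a sketch of what these re-plots accomplish (hypothetical variables u and v, not the x and y above): a power-law relation v = a·u^b becomes a straight line on a log-log scale, and an exponential relation v = a·e^(b·u) becomes a straight line when only the Y-axis is logged, so an ordinary linear fit can be applied after transforming.

plot(log(u), log(v))              # power law: log(v) = log(a) + b*log(u), linear on a log-log scale
fit.power <- lm(log(v) ~ log(u))
plot(u, log(v))                   # exponential: log(v) = log(a) + b*u, linear on a semi-log scale
fit.exp <- lm(log(v) ~ u)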

76 Binary outcome, e.g., “Have you ever had surgery?” (Yes / No)


78 "MAXIMUM LIKELIHOOD ESTIMATION": the coefficients are fitted by maximum likelihood rather than least squares. The "log-odds" ("logit") is an example of a general "link function." (Note: because the fit is not based on LS, the usual R² is replaced by a "pseudo-R²," etc.)
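A sketch in R (hypothetical binary outcome surgery coded 0/1 and hypothetical predictors x1, x2): glm() with family = binomial fits the logistic (logit-link) model by maximum likelihood.

fit <- glm(surgery ~ x1 + x2, family = binomial)   # logit link is the binomial default
summary(fit)                                       # Wald z-tests for each coefficient
fitted(fit)                                        # estimated probabilities
predict(fit, type = "link")                        # estimated log-odds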

79 Binary outcome, e.g., "Have you ever had surgery?" (Yes / No). Suppose one of the predictor variables is also binary. Work on the "log-odds" ("logit") scale.

80 Write the log-odds with the binary predictor set to 1, and again with it set to 0, then SUBTRACT!

81 (Subtraction, continued: the terms not involving the binary predictor cancel.)

82 (What remains is that predictor's coefficient.)

83 ….. which implies ….. that the binary predictor's coefficient equals the log of its odds ratio, so exponentiating the coefficient gives the (adjusted) odds ratio.
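A sketch of that reading in R (assuming a binary predictor x1 in the hypothetical glm fit above): exponentiating the fitted coefficient returns the odds ratio, and exponentiating a confidence interval gives a CI on the odds-ratio scale.

exp(coef(fit)["x1"])        # odds ratio for x1 = 1 vs. x1 = 0, adjusted for the other predictors
exp(confint.default(fit))   # Wald-based 95% CIs, exponentiated onto the odds-ratio scale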

84 Logistic growth in population dynamics. Unrestricted population growth (e.g., bacteria): population size y obeys the law dy/dt = a·y, with constant a > 0; with initial condition y(0) = y0, this gives exponential growth, y = y0·e^(a·t). Restricted population growth (disease, predation, starvation, etc.): population size y obeys dy/dt = a·y·(1 – y/M), with constant a > 0 and "carrying capacity" M; this gives logistic growth. Letting the survival probability π follow the same S-shaped logistic curve, π = e^η / (1 + e^η) with η the linear predictor, motivates the logistic regression model.

