CHAPTER 7 Linear Correlation & Regression Methods


Outline:
7.1 - Motivation
7.2 - Correlation / Simple Linear Regression
7.3 - Extensions of Simple Linear Regression

Testing for association between two POPULATION variables X and Y, with parameter estimation via SAMPLE DATA…

Categorical variables: Chi-squared Test, comparing the categories of X against the categories of Y. Examples: X = Disease status (D+, D–), Y = Exposure status (E+, E–); or X = # children in household (0, 1-2, 3-4, 5+), Y = Income level (Low, Middle, High).

Numerical variables: ??? Here the relevant population PARAMETERS are the means (μ_X, μ_Y), the variances (σ_X², σ_Y²), and the covariance (σ_XY).

Parameter Estimation via SAMPLE DATA… Numerical variables: ???

From a sample of n data points (x_1, y_1), (x_2, y_2), …, (x_n, y_n), displayed as a scatterplot (example data: JAMA. 2003;290:1486-1493), the population PARAMETERS are estimated by the corresponding sample STATISTICS:

Means: μ_X, μ_Y estimated by x̄, ȳ
Variances: σ_X², σ_Y² estimated by s_x², s_y²
Covariance: σ_XY estimated by s_xy = Σ(x_i – x̄)(y_i – ȳ) / (n – 1), which can be +, –, or 0

Does the scatterplot suggest a linear trend between X and Y? If so, how do we measure it?

Testing for LINEAR association between two population variables X and Y… For numerical variables, the population parameters (means, variances, covariance) combine into the population Linear Correlation Coefficient

ρ = σ_XY / (σ_X σ_Y),

which is always between –1 and +1.

Parameter Estimation via SAMPLE DATA… From the n sample points, the sample Linear Correlation Coefficient

r = s_xy / (s_x s_y)

estimates ρ and, like ρ, is always between –1 and +1.

Parameter Estimation via SAMPLE DATA… Example in R (reformatted for brevity), generating n = 10 data points and plotting the scatterplot:

> pop = seq(0, 20, 0.1)
> x = sort(sample(pop, 10))
1.1 1.8 2.1 3.7 4.0 7.3 9.1 11.9 12.4 17.1
> y = sample(pop, 10)
13.1 18.3 17.6 19.1 19.3 3.2 5.6 13.6 8.0 3.0
> c(mean(x), mean(y))
7.05 12.08
> var(x)
29.48944
> var(y)
43.76178
> cov(x, y)
-25.86667
> cor(x, y)
-0.7200451
> plot(x, y, pch = 19)

The linear correlation coefficient r measures the strength of linear association between X and Y:

r near +1: strong positive linear correlation
r near –1: strong negative linear correlation
r near 0: little or no linear correlation

For the example data, cor(x, y) = -0.7200451, a moderately strong negative linear correlation.

Testing for linear association between two numerical population variables X and Y… Now that we have r, we can conduct HYPOTHESIS TESTING on the Linear Correlation Coefficient ρ, with H0: ρ = 0 (no linear correlation).

Test statistic for the p-value:

t = r √(n – 2) / √(1 – r²), on n – 2 degrees of freedom.

For the example data, t = -2.935 on 8 degrees of freedom, so the two-sided p-value is 2 * pt(-2.935, 8) = .0189 < .05, and H0 is rejected.
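A minimal R sketch of this calculation, using the r and n from the running example (the same result can be obtained directly with cor.test(x, y)):

r <- -0.7200451                              # sample correlation, cor(x, y)
n <- 10                                      # sample size
t.stat <- r * sqrt(n - 2) / sqrt(1 - r^2)    # test statistic, approximately -2.935
p.value <- 2 * pt(-abs(t.stat), df = n - 2)  # two-sided p-value, approximately 0.0189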

SIMPLE LINEAR REGRESSION via the METHOD OF LEAST SQUARES

If such a linear association between X and Y exists, then we can write "Response = Model + Error": for intercept β_0 and slope β_1,

Y = β_0 + β_1 X + ε.

Find estimates b_0 and b_1 for the "best" line ŷ = b_0 + b_1 x. Best in what sense? The "Least Squares Regression Line" is the line that minimizes the sum of squared Residuals

Σ (y_i – ŷ_i)², where the residuals are e_i = y_i – ŷ_i.

The solution (Check!) is b_1 = s_xy / s_x² = r (s_y / s_x) and b_0 = ȳ – b_1 x̄.
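A short R sketch, assuming the x and y vectors from the running example, computing these estimates directly from the summary statistics and checking them against lm():

b1 <- cov(x, y) / var(x)        # slope estimate: s_xy / s_x^2
b0 <- mean(y) - b1 * mean(x)    # intercept estimate: ybar - b1 * xbar
c(b0, b1)                       # approximately 18.2639 and -0.8772
coef(lm(y ~ x))                 # same estimates from R's built-in least squares fit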

SIMPLE LINEAR REGRESSION via the METHOD OF LEAST SQUARES

predictor X:           1.1   1.8   2.1   3.7   4.0   7.3   9.1   11.9  12.4  17.1
observed response Y:   13.1  18.3  17.6  19.1  19.3  3.2   5.6   13.6  8.0   3.0
fitted response ŷ:     (EXERCISE: compute ŷ_i = b_0 + b_1 x_i for each x_i)
residuals y – ŷ:       (EXERCISE)

Recall cor(x, y) = -0.7200451; the fitted line is the one that minimizes the sum of squared residuals.

Testing for linear association between two numerical population variables X and Y… Now that we have the estimates b_0 and b_1, we can conduct HYPOTHESIS TESTING on the Linear Regression Coefficients β_0 and β_1 in "Response = Model + Error", in particular H0: β_1 = 0 (zero slope, i.e., no linear association).

Test statistic for the p-value:

t = b_1 / SE(b_1), on n – 2 degrees of freedom.

For the example data this gives t = -2.935, the same t-score as the test of H0: ρ = 0, and hence the same p-value, .0189.

> plot(x, y, pch = 19)
> lsreg = lm(y ~ x)    # or lsfit(x, y)
> abline(lsreg)
> summary(lsreg)

Call:
lm(formula = y ~ x)

Residuals:
    Min      1Q  Median      3Q     Max
-8.6607 -3.2154  0.8954  3.4649  5.7742

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  18.2639     2.6097   6.999 0.000113 ***
x            -0.8772     0.2989  -2.935 0.018857 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 4.869 on 8 degrees of freedom
Multiple R-squared: 0.5185, Adjusted R-squared: 0.4583
F-statistic: 8.614 on 1 and 8 DF, p-value: 0.01886

BUT WHY HAVE TWO METHODS FOR THE SAME PROBLEM??? Because this second method generalizes…

ANOVA Table. Recall the one-way ANOVA layout:

Source      df   SS   MS   F-ratio   p-value
Treatment
Error
Total                 –

The same layout applies to regression, with "Regression" in place of "Treatment." For simple linear regression the degrees of freedom are 1 for Regression and n – 2 = 8 for Error (Total: n – 1 = 9); the SS, MS, F-ratio, and p-value entries are filled in below.

Parameter Estimation via SAMPLE DATA… From the n data points and their scatterplot (JAMA. 2003;290:1486-1493), define three sums of squares:

SSTot = Σ (y_i – ȳ)² is a measure of the total amount of variability in the observed responses (i.e., before any model-fitting).

SSReg = Σ (ŷ_i – ȳ)² is a measure of the total amount of variability in the fitted responses (i.e., after model-fitting).

SSErr = Σ (y_i – ŷ_i)² is a measure of the total amount of variability in the resulting residuals (i.e., after model-fitting).

SIMPLE LINEAR REGRESSION via the METHOD OF LEAST SQUARES. For the example data (predictor X, observed response Y, fitted responses, and residuals as above):

SSTot = (n – 1) s_y² = 9 (43.76178) = 393.856
SSReg = 204.200
SSErr = 189.656 (the minimum possible value, by the least squares property)

and indeed SSTot = SSReg + SSErr.
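A brief R sketch, assuming the lsreg = lm(y ~ x) fit from before, verifying this decomposition numerically:

SSTot <- sum((y - mean(y))^2)              # total sum of squares, (n-1)*var(y) = 393.856
SSReg <- sum((fitted(lsreg) - mean(y))^2)  # regression sum of squares = 204.200
SSErr <- sum(residuals(lsreg)^2)           # error (residual) sum of squares = 189.656
all.equal(SSTot, SSReg + SSErr)            # TRUE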

ANOVA Table

Source       df   SS        MS        F-ratio    p-value
Regression    1   204.200   204.200   8.61349    0.018857
Error         8   189.656    23.707
Total         9   393.856       –

Here MSReg = SSReg / 1, MSErr = SSErr / (n – 2), and under H0 the F-ratio MSReg / MSErr follows an F distribution with (k – 1, n – k) = (1, 8) degrees of freedom. The p-value 0.018857 is the same as before! In R:

> summary(aov(lsreg))
            Df Sum Sq Mean Sq F value  Pr(>F)
x            1 204.20 204.201  8.6135 0.01886 *
Residuals    8 189.66  23.707

Coefficient of Determination

r² = SSReg / SSTot = 204.200 / 393.856 = 0.5185

The least squares regression line accounts for 51.85% of the total variability in the observed response, with 48.15% remaining in the residuals. Moreover, this equals the square of the linear correlation coefficient, (cor(x, y))² = (-0.7200451)² = 0.5185, which is exactly the "Multiple R-squared: 0.5185" reported in the summary(lsreg) output above.
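As a quick numerical check in R (assuming the objects from the sketches above):

cor(x, y)^2               # (-0.7200451)^2 = 0.5185
SSReg / SSTot             # 204.200 / 393.856 = 0.5185
summary(lsreg)$r.squared  # the Multiple R-squared reported by lm()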

Summary of Linear Correlation and Simple Linear Regression

Given: a sample (x_1, y_1), …, (x_n, y_n) on two numerical variables X and Y (scatterplot; JAMA. 2003;290:1486-1493), with sample means, variances, and covariance.

Linear Correlation Coefficient: r, with –1 ≤ r ≤ +1, measures the strength of linear association between X and Y.

Least Squares Regression Line: ŷ = b_0 + b_1 x minimizes SSErr = Σ (y_i – ŷ_i)² = SSTot – SSReg (ANOVA).

Coefficient of Determination: r² = SSReg / SSTot, the proportion of total variability modeled by the regression line's variability.

All point estimates can be upgraded to 95% confidence intervals for hypothesis testing, etc.; in particular, the fitted line can be bracketed by upper and lower 95% confidence bands (see notes for "95% prediction intervals").
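A hedged R sketch of those 95% confidence and prediction bands, assuming the lsreg fit from before (the grid of new x values is only for plotting):

new.x <- data.frame(x = seq(min(x), max(x), length.out = 100))
cb <- predict(lsreg, newdata = new.x, interval = "confidence", level = 0.95)
pb <- predict(lsreg, newdata = new.x, interval = "prediction", level = 0.95)
plot(x, y, pch = 19)
abline(lsreg)
lines(new.x$x, cb[, "lwr"], lty = 2)  # lower 95% confidence band
lines(new.x$x, cb[, "upr"], lty = 2)  # upper 95% confidence band
lines(new.x$x, pb[, "lwr"], lty = 3)  # lower 95% prediction band
lines(new.x$x, pb[, "upr"], lty = 3)  # upper 95% prediction band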

Multilinear Regression: testing for linear association between a population response variable Y and multiple predictor variables X_1, X_2, X_3, … etc.

"Response = Model + Error": Y = β_0 + β_1 X_1 + β_2 X_2 + … + ε, where the β_j X_j terms are the "main effects." For now, assume this "additive model," i.e., main effects only.

For each observation (x_1i, x_2i, …, y_i) in the predictor space, the true response y_i equals the fitted response ŷ_i plus the residual. Least squares calculation of the regression coefficients is computer-intensive: the formulas require linear algebra (matrices)! Once calculated, how do we then test the null hypothesis? ANOVA.
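For illustration only, a sketch in R of the matrix calculation behind this, the "normal equations" (X'X)b = X'y, using two simulated predictors (the names x1, x2, and y.sim are assumptions, not from the slides):

set.seed(1)
x1 <- runif(10, 0, 20); x2 <- runif(10, 0, 20)
y.sim <- 5 + 2 * x1 - 0.5 * x2 + rnorm(10)
X <- cbind(1, x1, x2)                          # design matrix, first column for the intercept
beta.hat <- solve(t(X) %*% X, t(X) %*% y.sim)  # solves the normal equations (X'X) b = X'y
beta.hat                                       # least squares coefficient estimates
coef(lm(y.sim ~ x1 + x2))                      # same estimates from lm()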

Multilinear Regression, "Response = Model + Error," with several ways to build the Model:

Main effects only (additive model). R code example: lsreg = lm(y ~ x1 + x2 + x3)

Quadratic terms, etc. ("polynomial regression"). R code example: lsreg = lm(y ~ x + I(x^2) + I(x^3)). Note that in an R formula, powers of a predictor must be wrapped in I() (or use poly(x, 3)); writing x^2 directly inside the formula does not create a quadratic term.

"Interactions." R code example: lsreg = lm(y ~ x1*x2), which is equivalent to lsreg = lm(y ~ x1 + x2 + x1:x2)

Recall… Multiple linear regression with interaction. Example in R (reformatted for brevity), with an indicator ("dummy") variable. Suppose the data are actually two subgroups, requiring two distinct linear regressions:

> I = c(1,1,1,1,1,0,0,0,0,0)   # subgroup indicator (note: this masks R's I() function; a name such as grp would be safer)
> lsreg = lm(y ~ x*I)
> summary(lsreg)
Coefficients:
            Estimate
(Intercept)  6.56463
x            0.00998
I            6.80422
x:I          1.60858

For the I = 0 subgroup the fitted line is ŷ = 6.56463 + 0.00998 x; for the I = 1 subgroup it is ŷ = (6.56463 + 6.80422) + (0.00998 + 1.60858) x.

ANOVA Table (revisited). From a sample of n data points, fit the multiple regression model Y = β_0 + β_1 X_1 + … + ε and test H0: all slope coefficients equal 0. Note that if H0 were true, then it would follow that the model reduces to Y = β_0 + ε, i.e., none of the predictors has any linear association with the response. But how are these regression coefficients calculated in general? Via the "normal equations," solved by computer (intensive).

The ANOVA table (based on n data points) again has Source rows Regression, Error, and Total, with df (k – 1, n – k, n – 1, where k is the number of β parameters including the intercept), SS, MS, F, and p-value entries.

*** How are only the statistically significant variables determined? ***

"MODEL SELECTION" (Backward Elimination, "BE")

Step 0. Conduct an overall F-test of significance (via ANOVA) of the full model Y = β_0 + β_1 X_1 + β_2 X_2 + β_3 X_3 + β_4 X_4 + …… If significant, then…

Step 1. Conduct individual t-tests of H0: β_j = 0 for each coefficient, giving p-values p_1, p_2, p_3, p_4, …… (e.g., p_1 < .05: Reject H0; p_2 < .05: Reject H0; p_3 ≥ .05: Accept H0; p_4 < .05: Reject H0).

Step 2. Are all coefficients significant at level α? If not, delete that term (here X_3) and recompute new coefficients for the smaller model Y = β_0 + β_1 X_1 + β_2 X_2 + β_4 X_4 + ……

Step 3. Repeat Steps 1-2 as necessary until all remaining coefficients are significant → reduced model. (A sketch of this loop in R follows.)
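One possible R sketch of this backward elimination loop (the data frame dat, the predictor names, and the level alpha are illustrative assumptions; R's built-in step() function does something similar but uses AIC rather than p-values):

alpha <- 0.05
fit <- lm(y ~ x1 + x2 + x3 + x4, data = dat)   # Step 0: full model (check its overall F-test first)
repeat {
  p <- coef(summary(fit))[, "Pr(>|t|)"][-1]    # Step 1: t-test p-values, excluding the intercept
  if (all(p < alpha)) break                    # Step 2: all coefficients significant, so stop
  worst <- names(which.max(p))                 # least significant term
  fit <- update(fit, as.formula(paste(". ~ . -", worst)))  # delete it and recompute coefficients
}
summary(fit)                                   # Step 3: reduced model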

Recall ~ Analysis of Variance (ANOVA): k ≥ 2 independent, equivariant (equal-variance), normally-distributed "treatment groups," tested with H0: μ_1 = μ_2 = … = μ_k. MODEL ASSUMPTIONS?

“Regression Diagnostics”

Model: Y = β_0 + β_1 x + β_2 x² + … + ε, "Polynomial Regression" (but still considered to be linear regression, because the model is linear in the beta coefficients).

Two common diagnostic re-plots when the scatterplot is clearly nonlinear:

Re-plot the data on a "log-log" scale (log of both X and Y); an approximately linear trend there suggests a power relationship y ≈ a x^b.

Re-plot the data on a "log" scale (of Y only); an approximately linear trend there suggests an exponential relationship y ≈ a e^(bx).
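A small R sketch of both re-plots (xdat and ydat are placeholder names for whatever positive-valued data are being examined):

plot(xdat, ydat, log = "xy", pch = 19)  # "log-log" scale: linear trend suggests y = a * x^b
plot(xdat, ydat, log = "y", pch = 19)   # "log" scale of Y only: linear trend suggests y = a * exp(b*x)
# the transformed models can then be fit by ordinary least squares:
fit.power <- lm(log(ydat) ~ log(xdat))
fit.exp <- lm(log(ydat) ~ xdat)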

Binary outcome, e.g., "Have you ever had surgery?" (Yes / No). Let π = P(Y = Yes). Model the "log-odds" ("logit") of π as a linear function of the predictors:

ln[ π / (1 – π) ] = β_0 + β_1 X_1 + β_2 X_2 + …

The logit is one example of a general "link function." The coefficients are fit by "MAXIMUM LIKELIHOOD ESTIMATION," i.e., logistic regression. (Note: because the fit is not based on least squares, R² is replaced by "pseudo-R²" measures, etc.)
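A minimal R sketch of fitting such a model (the surgery outcome, the predictor names x1 and x2, and the data frame dat are illustrative placeholders):

fit <- glm(surgery ~ x1 + x2, data = dat, family = binomial)  # logit link is the default
summary(fit)                          # maximum likelihood estimates with Wald z-tests
1 - fit$deviance / fit$null.deviance  # one common pseudo-R^2 (McFadden's, for ungrouped binary data)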

Binary outcome, e.g., "Have you ever had surgery?" (Yes / No), modeled through the "log-odds" ("logit") as above. Suppose one of the predictor variables is itself binary, say X_1 = 1 or 0 (with the other predictors held fixed). SUBTRACT the two log-odds expressions:

ln[ π_1 / (1 – π_1) ] – ln[ π_0 / (1 – π_0) ] = β_1,

where π_1 and π_0 are the response probabilities when X_1 = 1 and X_1 = 0, respectively. The left side is the log of the ODDS RATIO, so

ln(OR) = β_1, which implies OR = e^(β_1).
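Continuing the glm() sketch above (same assumed object names), the estimated odds ratios come from exponentiating the fitted coefficients:

exp(coef(fit))             # odds ratio e^(beta_j) for each term
exp(confint.default(fit))  # corresponding 95% Wald confidence intervals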

Where does the logistic curve come from? In population dynamics:

Unrestricted population growth (e.g., bacteria): population size y obeys the law dy/dt = a y, with constant a > 0 and initial condition y(0) = y_0, giving Exponential growth y = y_0 e^(at).

Restricted population growth (disease, predation, starvation, etc.): population size y obeys the law dy/dt = a y (1 – y/M), with constant a > 0 and "carrying capacity" M, giving Logistic growth, an S-shaped curve that levels off at M.

Let survival probability π = y/M; solving the logistic law then gives π = 1 / (1 + e^(–(a t + c))), exactly the S-shaped form of the logistic regression response curve.
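A small R sketch comparing the two growth curves (the constants a, M, and y0 are arbitrary illustrative values):

a <- 0.4; M <- 100; y0 <- 1                         # growth rate, carrying capacity, initial size
t <- seq(0, 25, by = 0.1)
y.exp <- y0 * exp(a * t)                            # exponential growth: solution of dy/dt = a*y
y.logis <- M / (1 + ((M - y0) / y0) * exp(-a * t))  # logistic growth: solution of dy/dt = a*y*(1 - y/M)
plot(t, y.logis, type = "l", ylim = c(0, 1.2 * M), ylab = "population size y")
lines(t, pmin(y.exp, 1.2 * M), lty = 2)             # exponential curve, clipped to the plot region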