
1 Statistics for the Social Sciences Psychology 340 Fall 2013 Tuesday, November 12, 2013 Correlation and Regression

2 Homework #11 due 11/14 Ch 15: # 1, 2, 5, 6, 8, 9, 14

3 Homework #12 due 11/19 Chapter 16: 1, 2, 7, 8, 10, 22 (use SPSS), 24 (HW #13, the last homework, is due on 11/21)

4 Comparing computational and definitional formulas for SP The way we calculated it last time (definitional): SP = Σ(X − M_X)(Y − M_Y). Computational formula (from textbook): SP = ΣXY − (ΣX)(ΣY)/n. Either way, the formula for r is the same: r = SP / √(SS_X · SS_Y).
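As a supplement to the slide (not part of the original deck), here is a minimal Python sketch checking that the two SP formulas agree, using the five-point dataset from the worked example later in the lecture:

```python
import numpy as np

# The five-point dataset from the lecture's worked example
x = np.array([6.0, 1.0, 5.0, 3.0, 3.0])
y = np.array([6.0, 2.0, 6.0, 4.0, 2.0])
n = len(x)

# Definitional formula: SP = sum of (X - Mx)(Y - My)
sp_def = np.sum((x - x.mean()) * (y - y.mean()))

# Computational formula: SP = sum(XY) - (sum(X) * sum(Y)) / n
sp_comp = np.sum(x * y) - (x.sum() * y.sum()) / n

# Either way, r = SP / sqrt(SSx * SSy)
ss_x = np.sum((x - x.mean()) ** 2)
ss_y = np.sum((y - y.mean()) ** 2)
r = sp_def / np.sqrt(ss_x * ss_y)

print(sp_def, sp_comp, round(r, 3))  # 14.0 14.0 0.898
```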

5 Computing r using z-scores If z-scores were computed using population standard deviations (σ): r = Σ(z_X z_Y) / N. If z-scores were computed using sample standard deviations (s): r = Σ(z_X z_Y) / (n − 1).
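A quick sketch (added here, not in the slides) of the two z-score versions; the same r comes out either way, as long as the divisor matches the standard deviation used:

```python
import numpy as np

x = np.array([6.0, 1.0, 5.0, 3.0, 3.0])
y = np.array([6.0, 2.0, 6.0, 4.0, 2.0])
n = len(x)

# Population SDs (divide SS by N): r = sum(zx * zy) / N
zx = (x - x.mean()) / x.std(ddof=0)
zy = (y - y.mean()) / y.std(ddof=0)
r_population = np.sum(zx * zy) / n

# Sample SDs (divide SS by n - 1): r = sum(zx * zy) / (n - 1)
zx_s = (x - x.mean()) / x.std(ddof=1)
zy_s = (y - y.mean()) / y.std(ddof=1)
r_sample = np.sum(zx_s * zy_s) / (n - 1)

print(round(r_population, 3), round(r_sample, 3))  # 0.898 0.898
```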

6 Hypothesis testing with correlation H₀: ρ = 0 (no population correlation). H_A: ρ ≠ 0 (there is a population correlation; a directional hypothesis can also be used). Use Table B6 in Appendix B, which gives critical values of r at different sample sizes, based on the corresponding t/F statistic.
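The Table B6 lookup can be mirrored in software; a sketch (my addition) using SciPy, which reports a two-tailed p-value for H₀: ρ = 0 directly:

```python
import numpy as np
from scipy import stats

x = np.array([6.0, 1.0, 5.0, 3.0, 3.0])
y = np.array([6.0, 2.0, 6.0, 4.0, 2.0])

# pearsonr returns r and a two-tailed p-value for H0: rho = 0,
# based on the t statistic with n - 2 degrees of freedom
r, p = stats.pearsonr(x, y)
print(round(r, 3), round(p, 3))  # reject H0 if p < alpha
```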

7 Scatterplots with Excel & SPSS In SPSS: Graphs menu => Legacy Dialogs => Scatter/Dot => Simple Scatter. Click Define, then select which variable you want on the x-axis and which on the y-axis. In Excel: Insert menu => Chart => (choose the chart type and specify the data range).

8 Correlations with SPSS & Excel SPSS: Analyze => Correlate => Bivariate. Then select the variables you want correlation(s) for (you can select just one pair, or many variables to get a correlation matrix). Try this with height and shoe size in our data. Now try with height, shoe size, mother's height, and number of shoes owned. Excel: Arrange the data for two variables in two columns or rows and use the formula bar to request a correlation: =CORREL(array1, array2).
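For readers without SPSS or Excel, an equivalent sketch in Python with pandas; the height and shoe-size numbers below are made-up stand-ins for the class dataset:

```python
import pandas as pd

# Hypothetical stand-in for the class data
df = pd.DataFrame({
    "height":      [64, 70, 68, 62, 66, 72],
    "shoe_size":   [7.0, 11.0, 10.0, 6.0, 8.0, 12.0],
    "moms_height": [62, 66, 65, 60, 64, 67],
})

# One pair...
print(df["height"].corr(df["shoe_size"]))
# ...or a full correlation matrix for many variables
print(df.corr())
```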

9 Invalid inferences from correlations Why you should always look at the scatterplot before computing (and certainly before interpreting) Pearson's r:
– Correlations are greatly affected by the range of scores in the data. Consider the height and age relationship, the restricted-range example from the text (IQ and creativity), or SAT and GPA.
– Extreme scores can have dramatic effects on correlations. A single extreme score can radically change r, especially when your sample is small.
– Relations between variables may differ for subgroups, resulting in misleading r values for aggregate data.
– Curvilinear relations are not captured by Pearson's r.

10 What to do about a curvilinear pattern If the pattern is monotonically increasing or decreasing, convert the scores to ranks and compute r (using the same formula) on the rank scores. The result is called Spearman's rank correlation coefficient, or Spearman's rho, and can be requested in your SPSS output by checking the appropriate box when you select the variables for which you want correlations. If the pattern is more complicated (U-shaped or S-shaped, for example), consult more advanced statistics resources.
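A small sketch (added for illustration, with invented data) of the rank-based fix for a monotonic but curved pattern:

```python
import numpy as np
from scipy import stats

# Monotonically increasing but curved relationship (invented data)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = x ** 3  # curvilinear, but always increasing

r_pearson, _ = stats.pearsonr(x, y)  # dragged below 1.0 by the curvature
rho, _ = stats.spearmanr(x, y)       # ranks line up perfectly: rho = 1.0
print(round(r_pearson, 3), rho)
```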

11 From Correlation to Regression With correlation, we can examine whether variables X & Y are related With regression, we try to predict the value of one variable given what we know about the other variable and the relationship between the two.

12 Regression In correlation it doesn't matter which variable goes on the X-axis or the Y-axis. For regression this is NOT the case: the variable that you are predicting (the criterion or "dependent" variable, e.g., quiz performance) goes on the Y-axis, and the variable that you are making the prediction from (the predictor or "independent" variable, e.g., hours of study) goes on the X-axis.

13 Regression Correlation: "imagine a line through the points." But there are lots of possible lines, and one line is the "best fitting line." Regression: compute the equation corresponding to this best fitting line (here, quiz performance as a function of hours of study).

14 The equation for a line A brief review of geometry: Y = (X)(slope) + (intercept). The intercept is the value of Y when X = 0 (2.0 in the pictured example).

15 The equation for a line A brief review of geometry: Y = (X)(slope) + (intercept). The slope is the change in Y divided by the change in X (0.5 in the pictured example).

16 The equation for a line A brief review of geometry: putting the pieces together, the pictured line is Y = (X)(0.5) + 2.0.

17 Regression A brief review of geometry: consider a perfect correlation, Y = (X)(0.5) + (2.0). We can make specific predictions about Y based on X. If X = 5, then Y = (5)(0.5) + (2.0) = 2.5 + 2 = 4.5.

18 Regression Consider a less-than-perfect correlation. The line still represents the predicted values of Y given X: Y = (X)(0.5) + (2.0), so for X = 5 the predicted Y is still (5)(0.5) + (2.0) = 2.5 + 2 = 4.5.

19 Regression The “best fitting line” is the one that minimizes the error (differences) between the predicted scores (the line) and the actual scores (the points) Y X 1 2 3 4 5 6 123456 Rather than compare the errors from different lines and picking the best, we will directly compute the equation for the best fitting line

20 Regression The linear model: Y = intercept + slope(X) + error. Betas (β) are sometimes called parameters; they come in two types, standardized and unstandardized. Now let's go through an example computing these things.

21 Scatterplot Using the dataset from our correlation example:
X: 6, 1, 5, 3, 3
Y: 6, 2, 6, 4, 2

22 From when we computed Pearson's r:
X    Y    X−M_X   Y−M_Y   (X−M_X)(Y−M_Y)   (X−M_X)²   (Y−M_Y)²
6    6     2.4     2.0        4.8            5.76       4.0
1    2    −2.6    −2.0        5.2            6.76       4.0
5    6     1.4     2.0        2.8            1.96       4.0
3    4    −0.6     0.0        0.0            0.36       0.0
3    2    −0.6    −2.0        1.2            0.36       4.0
Means: M_X = 3.6, M_Y = 4.0. The deviations in each column sum to 0.0. Sums of the last three columns: SP = 14.0, SS_X = 15.20, SS_Y = 16.0.

23 Computing regression line (with raw scores) With SP = 14.0, SS_X = 15.20, SS_Y = 16.0, and means M_X = 3.6 and M_Y = 4.0: slope b = SP / SS_X = 14.0 / 15.2 ≈ 0.92, and intercept a = M_Y − b·M_X = 4.0 − (0.92)(3.6) = 0.688. So the regression line is Ŷ = 0.92X + 0.688.
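A minimal sketch (my addition) reproducing the slide's numbers:

```python
import numpy as np

x = np.array([6.0, 1.0, 5.0, 3.0, 3.0])
y = np.array([6.0, 2.0, 6.0, 4.0, 2.0])

sp = np.sum((x - x.mean()) * (y - y.mean()))  # SP  = 14.0
ss_x = np.sum((x - x.mean()) ** 2)            # SSx = 15.2

b = sp / ss_x                 # slope: 14.0 / 15.2 ~ 0.92
a = y.mean() - b * x.mean()   # intercept: 4.0 - b * 3.6 ~ 0.684
print(round(b, 3), round(a, 3))
# The slides round b to 0.92 before computing a, which gives 0.688
```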

24 Computing regression line (with raw scores) Plotting the line Ŷ = 0.92X + 0.688 through the scatterplot of the data.

25 Computing regression line (with raw scores) Note that the point given by the two means, (M_X, M_Y) = (3.6, 4.0), falls on the line: the two means will be on the line.

26 Computing regression line (standardized, using z-scores) Sometimes the regression equation is standardized: computed based on z-scores rather than raw scores.
X    Y    X−M_X   Y−M_Y    z_X     z_Y
6    6     2.4     2.0     1.38     1.1
1    2    −2.6    −2.0    −1.49    −1.1
5    6     1.4     2.0     0.80     1.1
3    4    −0.6     0.0    −0.34     0.0
3    2    −0.6    −2.0    −0.34    −1.1
Means: 3.6, 4.0; standard deviations: 1.74, 1.79. The z-scores in each column sum to 0.0.

27 Computing regression line (standardized, using z-scores) Prediction model: the predicted z-score on the criterion variable equals the standardized regression coefficient multiplied by the z-score on the predictor variable. Formula: ẑ_Y = (β)(z_X). The standardized regression coefficient (β): in bivariate prediction, β = r.
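A sketch (added) of the standardized model on the same data, confirming that the bivariate β equals r:

```python
import numpy as np

x = np.array([6.0, 1.0, 5.0, 3.0, 3.0])
y = np.array([6.0, 2.0, 6.0, 4.0, 2.0])
n = len(x)

# z-scores with population SDs, as on the slide (SDx ~ 1.74, SDy ~ 1.79)
zx = (x - x.mean()) / x.std(ddof=0)
zy = (y - y.mean()) / y.std(ddof=0)

beta = np.sum(zx * zy) / n  # standardized coefficient; equals r ~ 0.898
zy_pred = beta * zx         # predicted z-scores on the criterion
print(round(beta, 3))
```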

28 Computing regression line (with z-scores) Plotting the standardized regression line through the z-scores: since both z-score means are 0, the line passes through the origin.

29 Regression The linear equation isn't the whole thing: Y = intercept + slope(X) + error. We also need a measure of error, e.g., Y = X(0.5) + (2.0) + error. The same line can describe different relationships (a strength difference), so the line alone doesn't tell us how tightly the points cluster around it.

30 Regression Error: the actual score minus the predicted score. Measures of error: r² (r-squared), the proportionate reduction in error. Note: the total squared error when predicting from the mean = SS_Total = SS_Y. The squared error using the prediction model = the sum of the squared residuals = SS_residual = SS_error. Proportionate reduction in error = (SS_Y − SS_error) / SS_Y.

31 Exam III Results Bimodal distribution. Stem-and-leaf plot (stems are tens):
9 | 6
8 | 2 4 4 6 7
7 | 1 2 3 3 3 4 5 7 8 9
6 | 6
5 | 0 1 2 7
4 | 6 9
M = 71.02, s = 13.67. If you scored at least 70 you are "keeping up with the pack." If you scored below 70, you need to put forth more effort (please see me if you want or need help!)

32 R-squared r² represents the percent of variance in Y accounted for by X. For example: r = 0.8 gives r² = 0.64 (64% of the variance explained), while r = 0.5 gives r² = 0.25 (25% of the variance explained).

33 Computing Error around the line Compute the difference between the predicted values and the observed values (the "residuals"), square the differences, then add up the squared differences. The sum of the squared residuals = SS_residual = SS_error.

34 Computing Error around the line The predicted values of Y are the points on the line, obtained by plugging each X into the regression equation Ŷ = 0.92X + 0.688.

35 Computing Error around the line For the first case, X = 6: Ŷ = (0.92)(6) + 0.688 = 6.2.

36 Computing Error around the line The full set of predicted values: (0.92)(6) + 0.688 = 6.2; (0.92)(1) + 0.688 = 1.6; (0.92)(5) + 0.688 = 5.3; (0.92)(3) + 0.688 = 3.45; (0.92)(3) + 0.688 = 3.45.

37 Computing Error around the line Observed and predicted values:
X    Y    Ŷ
6    6    6.2
1    2    1.6
5    6    5.3
3    4    3.45
3    2    3.45

38 Computing Error around the line Quick check on the residuals: 6 − 6.2 = −0.20; 2 − 1.6 = 0.40; 6 − 5.3 = 0.70; 4 − 3.45 = 0.55; 2 − 3.45 = −1.45. The residuals sum to 0.00.

39 Computing Error around the line Squaring the residuals: (−0.20)² = 0.04; (0.40)² = 0.16; (0.70)² = 0.49; (0.55)² = 0.30; (−1.45)² = 2.10. The sum of the squared residuals = SS_error = 3.09.

40 Computing Error around the line For comparison, the squared deviations of Y from its mean are 4.0, 4.0, 4.0, 0.0, and 4.0, so SS_Y = 16.0, against SS_error = 3.09.

41 Computing Error around the line Proportionate reduction in error = (SS_Y − SS_error) / SS_Y = (16.0 − 3.09) / 16.0 ≈ 0.81. Like r², this represents the proportion of variance in Y accounted for by X; in fact, in bivariate regression it is mathematically identical to r².
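A sketch (my addition) tying slides 33 through 41 together: residuals, SS_error, and the proportionate reduction in error, which matches r²:

```python
import numpy as np

x = np.array([6.0, 1.0, 5.0, 3.0, 3.0])
y = np.array([6.0, 2.0, 6.0, 4.0, 2.0])

y_hat = 0.92 * x + 0.688            # predicted: 6.2, 1.6, 5.3, 3.45, 3.45
residuals = y - y_hat               # -0.20, 0.40, 0.70, 0.55, -1.45 (sum to 0)
ss_error = np.sum(residuals ** 2)   # ~ 3.09
ss_y = np.sum((y - y.mean()) ** 2)  # 16.0

pre = (ss_y - ss_error) / ss_y      # proportionate reduction in error ~ 0.81
print(round(ss_error, 2), round(pre, 2))  # equals r**2 = 0.898**2 ~ 0.81
```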

42 Regression in SPSS Running the analysis in SPSS is pretty easy: Analyze => Regression => Linear. The X or predictor variable(s) go into the 'Independent(s)' field, and the Y or predicted variable goes into the 'Dependent' field. You get a lot of output.

43 Regression in SPSS The output lists the variables in the model, r, and r², plus the unstandardized coefficients (the slope, labeled with the independent variable's name, and the intercept, labeled 'Constant') and the standardized coefficients. We'll get back to the other numbers in a few weeks.

44 In Excel With the Data Analysis 'ToolPak' add-in you can perform regression analysis. With the standard software package you can get the bivariate correlation (which is the same as the standardized regression coefficient), create a scatterplot, and request a trend line, which is a regression line (what is Y and what is X in that case?).

45 Multiple Regression Multiple regression prediction models: Y = intercept + b₁X₁ + b₂X₂ + … + error, where the intercept-plus-slopes part is the "fit" and the error term is the "residual".

46 Prediction in Research Articles Bivariate prediction models are rarely reported; multiple regression results are commonly reported.

47 Multiple Regression Typically researchers are interested in predicting with more than one explanatory variable In multiple regression, an additional predictor variable (or set of variables) is used to predict the residuals left over from the first predictor.

48 Multiple Regression Bivariate regression prediction model: Y = intercept + slope(X) + error.

49 Multiple Regression Bivariate regression prediction model: Y = intercept + slope(X) + error. Multiple regression prediction model: Y = intercept + b₁X₁ + b₂X₂ + … + error (the "fit" plus the "residual").

50 Multiple Regression Multiple regression prediction models split the variability in Y into the contributions of the first explanatory variable, the second explanatory variable, the third, the fourth, and whatever variability is left over.

51 Multiple Regression Predict test performance based on: study time, test time, what you eat for breakfast, and hours of sleep, plus whatever variability is left over.

52 Multiple Regression Predict test performance based on study time, test time, what you eat for breakfast, and hours of sleep. Typically your analysis consists of testing multiple regression models, one versus another, to see which "fits" best (comparing the R²s of the models); a sketch follows.
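Here is that model-comparison workflow in plain NumPy (my addition; the data are invented, and only the comparison logic is the point):

```python
import numpy as np

def r_squared(X, y):
    """Least-squares fit of y on X (X includes an intercept column);
    returns the model's R-squared."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

# Invented data: test score predicted from study time and hours of sleep
rng = np.random.default_rng(0)
study = rng.uniform(0, 10, size=30)
sleep = rng.uniform(4, 9, size=30)
score = 50 + 3 * study + 2 * sleep + rng.normal(0, 5, size=30)

ones = np.ones_like(score)
m1 = r_squared(np.column_stack([ones, study]), score)         # study only
m2 = r_squared(np.column_stack([ones, study, sleep]), score)  # add sleep
print(round(m1, 2), round(m2, 2))  # compare the R-squared of the two models
```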

53 Multiple Regression Model #1: response variable = total variability in test performance; predictor = total study time, r = .6. There is some covariance between the two variables: if we know the total study time, we can predict 36% of the variance in test performance. R² for the model = .36, leaving 64% of the variance unexplained.

54 Multiple Regression Model #2: add test time (r = .1) to the model alongside total study time (r = .6). There is little covariance between test performance and test time, but we can explain more of the variance in test performance: R² for the model = .49, leaving 51% of the variance unexplained.

55 Multiple Regression Model #3: add breakfast (r = .0). There is no covariance between test performance and breakfast food; they are not related, so we can NOT explain more of the variance in test performance. R² for the model stays at .49, with 51% of the variance unexplained.

56 Multiple Regression Model #4: add hours of sleep (r = .45). There is some covariance between test performance and hours of sleep, so we can explain more of the variance: R² for the model = .60, leaving 40% unexplained. But notice what happens with the overlap (covariation between the explanatory variables): you can't just add up the r's or r²'s.

57 Multiple Regression The "least squares" regression equation when there are multiple intercorrelated predictor (x) variables is found by calculating "partial regression coefficients" for each x. A partial regression coefficient for x₁ shows the relationship between y and x₁ while statistically controlling for the other x variables (or holding the other x variables constant).

58 Multiple Regression The formula for the partial regression coefficient is: b₁ = [(r_Y1 − r_Y2 · r_12) / (1 − r_12²)] · (s_Y / s_1), where r_Y1 = the correlation of x₁ and y, r_Y2 = the correlation of x₂ and y, r_12 = the correlation of x₁ and x₂, s_Y = the standard deviation of y, and s_1 = the standard deviation of x₁.
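A direct transcription of the slide's formula into Python; the numbers in the example call are hypothetical:

```python
def partial_b1(r_y1, r_y2, r_12, s_y, s_1):
    """b1 = [(r_y1 - r_y2 * r_12) / (1 - r_12**2)] * (s_y / s_1):
    the partial (unstandardized) coefficient for x1, controlling for x2."""
    beta_1 = (r_y1 - r_y2 * r_12) / (1 - r_12 ** 2)  # standardized part
    return beta_1 * (s_y / s_1)                      # rescale to raw units

# Hypothetical correlations and standard deviations
print(round(partial_b1(r_y1=0.6, r_y2=0.4, r_12=0.3, s_y=10.0, s_1=2.0), 3))
# (0.6 - 0.12) / 0.91 * 5.0 = 2.637
```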

59 Multiple Regression The multiple correlation coefficient (R) is an estimate of the relationship between the dependent variable (y) and the best linear combination of the predictor variables (the correlation of y with the predicted y): R = Cov(y, ŷ) / (s_y · s_ŷ). R² tells you the amount of variance in y explained by the particular multiple regression model being tested.
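A sketch (invented data) verifying the slide's definition: R computed as the correlation of y with the model's predictions, with R² carrying the variance-explained interpretation:

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=30)
x2 = rng.normal(size=30)
y = 2 * x1 + x2 + rng.normal(size=30)  # invented two-predictor data

# Least-squares fit, then R = Cov(y, y_pred) / (s_y * s_ypred)
X = np.column_stack([np.ones(30), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
y_pred = X @ b

cov = np.mean((y - y.mean()) * (y_pred - y_pred.mean()))
R = cov / (y.std(ddof=0) * y_pred.std(ddof=0))
print(round(R, 3), round(R ** 2, 3))  # R**2 = variance in y explained
```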

60 Multiple Regression in SPSS Setup as before: variables (explanatory and response) are entered into columns. There are a couple of different ways to use SPSS to compare different models.

61 Regression in SPSS Analyze: Regression, Linear

62 Multiple Regression in SPSS Method 1: enter all the explanatory variables together. Enter all of the predictor variables into the Independent Variable field and the predicted (criterion) variable into the Dependent Variable field.

63 Multiple Regression in SPSS The output lists the variables in the model, r for the entire model, r² for the entire model, and the unstandardized coefficients: one coefficient for var1 and one for var2 (each labeled with its variable name).

64 Multiple Regression in SPSS The output also lists the standardized coefficients for the entire model: one coefficient for var1 and one for var2 (each labeled with its variable name), alongside r and r².

65 Multiple Regression Which coefficient to use, standardized or unstandardized? Unstandardized coefficients are easier to use if you want to predict a raw score based on raw scores (no z-scores needed). Standardized coefficients are nice for directly comparing which variable is most "important" in the equation.

66 Multiple Regression in SPSS Method 2: enter the first model, then add another variable for the second model, etc. Enter the predicted (criterion) variable into the Dependent Variable field and the first predictor variable into the Independent Variable field, then click the Next button.

67 Multiple Regression in SPSS Method 2, continued: enter the second predictor variable into the Independent Variable field, then click Statistics.

68 Multiple Regression in SPSS Click the 'R squared change' box.

69 Multiple Regression in SPSS The output shows the results of two models: the first with the variables in the first model (math SAT), the second with the variables in the second model (math and verbal SAT).

70 Multiple Regression in SPSS Model 1: the variables in the first model (math SAT), r² for the first model, and the coefficient for var1 (labeled with its variable name).

71 Multiple Regression in SPSS Model 2: the variables in the second model (math and verbal SAT), r² for the second model, and coefficients for var1 and var2 (each labeled with its variable name).

72 Multiple Regression in SPSS Change statistics: is the change in r² from Model 1 (math SAT) to Model 2 (math and verbal SAT) statistically significant?
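SPSS reports this as an F test on the R² change; a hand-rolled sketch of that test (the R² values and n below are hypothetical):

```python
from scipy import stats

def r2_change_test(r2_1, r2_2, k1, k2, n):
    """F test for the R-squared increase from a nested model with k1
    predictors to a fuller model with k2 predictors, fit to n cases."""
    df1 = k2 - k1
    df2 = n - k2 - 1
    F = ((r2_2 - r2_1) / df1) / ((1 - r2_2) / df2)
    return F, stats.f.sf(F, df1, df2)  # upper-tail p-value

# Hypothetical: adding verbal SAT to a math-SAT-only model, n = 100
F, p = r2_change_test(r2_1=0.30, r2_2=0.36, k1=1, k2=2, n=100)
print(round(F, 2), round(p, 4))
```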

73 Cautions in Multiple Regression We can use as many predictors as we wish, but we should be careful not to use more predictors than is warranted.
– Simpler models are more likely to generalize to other samples.
– If you use as many predictors as you have participants in your study, you can predict 100% of the variance. Although this may seem like a good thing, it is unlikely that your results would generalize to any other sample, and thus they are not valid.
– You probably should have at least 10 participants per predictor variable (and probably should aim for about 30).

