1
Correlation and Regression Mathematics & Statistics Help University of Sheffield
2
Learning outcomes By the end of this session you should know about:
Approaches to analysis for simple continuous bivariate data
By the end of this session you should be able to:
Construct and interpret scatterplots in SPSS
Identify when it is appropriate to use correlation
Calculate a correlation coefficient in SPSS
Interpret a correlation coefficient
Identify when it is appropriate to use linear regression
Run a simple regression model in SPSS
Interpret the results of a linear regression model
3
Download the slides from the MASH website
MASH > Resources > Statistics Resources > Workshop materials
4
Association between two continuous variables: correlation or regression?
Two basic questions:
Is there a relationship? No causation is implied, simply association. Use CORRELATION
How can we use the value of one variable to predict the value of the other variable? May be causal, may not be. Use REGRESSION
5
Correlation: are two continuous variables associated?
When examining the relationship between two continuous variables ALWAYS look at the scatterplot, to see visually the pattern of the relationship between them
6
Scatterplot: Relationship between two continuous variables:
Explores the way the two co-vary (correlate):
Positive / negative
Linear / non-linear
Strong / weak
Presence of outliers
The scatter graph, or scatterplot, is one of the most common graphs in statistics. It is used to explore the relationship between two continuous variables: is there a linear or non-linear pattern? The summary statistic we calculate to represent this relationship is Pearson's correlation coefficient. Is the correlation positive or negative? Is it strong or weak? The graph will help us answer these questions. It will also help us detect any outliers, that is, observations that fall outside the pattern shown by the other observations. As usual, the scatterplot can be built using SPSS, Excel or any other statistical software.
7
Scatterplots
8
Correlation Coefficient r
Measures the strength of a linear relationship between two continuous variables and can take values between -1 and +1
Example scatterplots on the slide: r = 0.9, r = 0.01, r = -0.9
9
Correlation: Interpretation
An interpretation of the size of the coefficient has been described by Cohen (1992):
Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155-159.
Correlation coefficient value | Effect size
-0.3 to +0.3 | Weak
-0.5 to -0.3 or 0.3 to 0.5 | Moderate
-0.9 to -0.5 or 0.5 to 0.9 | Strong
-1.0 to -0.9 or 0.9 to 1.0 | Very strong
10
Relationship is not assumed to be a causal one – it may be caused by other factors
Does chocolate make you clever or crazy? A paper in the New England Journal of Medicine claimed there was a relationship between chocolate and Nobel Prize winners
11
Chocolate and serial killers
What else is related to chocolate consumption?
12
Dataset for today: Birthweight_reduced_data
Factors affecting birth weight of babies
Mother smokes = 1
Standard gestation = 40 weeks
13
Exercise 1: Gestational age and birthweight
Draw a line of best fit through the data (with roughly half the points above and half below). Describe the relationship.
Is the relationship: strong or weak? positive or negative? linear?
14
Exercise 2: Interpretation
Interpret the following correlation coefficients using Cohen's classification and explain what they mean. Which correlations seem meaningful?
Relationship | Correlation
Average IQ and chocolate consumption | 0.27
Road fatalities and Nobel winners | 0.55
Gross Domestic Product and Nobel winners | 0.70
Mean temperature and Nobel winners | -0.60
15
Scatterplot in SPSS: Graphs > Legacy Dialogs > Scatter/Dot
16
Scatterplot in SPSS: Graphs > Legacy Dialogs > Scatter/Dot
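The same menu steps can also be run as pasted SPSS syntax. A minimal sketch, assuming the variables are named Gestation and Birthweight (the actual names in the Birthweight_reduced data set may differ):

* Simple scatterplot of birth weight against gestational age (assumed variable names).
GRAPH
  /SCATTERPLOT(BIVAR)=Gestation WITH Birthweight
  /MISSING=LISTWISE.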
17
Correlation in SPSS: Analyze > Correlate > Bivariate, select Pearson
Use Spearman's correlation for ordinal variables or skewed scale data
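As pasted syntax, a sketch with the same assumed variable names as before:

* Pearson correlation between birth weight and gestational age (assumed variable names).
CORRELATIONS
  /VARIABLES=Birthweight Gestation
  /PRINT=TWOTAIL NOSIG
  /MISSING=PAIRWISE.
* Spearman's correlation for ordinal variables or skewed scale data.
NONPAR CORR
  /VARIABLES=Birthweight Gestation
  /PRINT=SPEARMAN TWOTAIL NOSIG
  /MISSING=PAIRWISE.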
18
Scatterplot and correlation
SPSS output using the reduced baby weight data set
Pearson correlation r = 0.708: a strong relationship
19
Hypothesis test for the correlation coefficient
This can be done; the null hypothesis is that the population correlation is 0
It is not recommended, because the p-value is influenced by the number of observations
It is better to use Cohen's interpretation
20
Hypothesis test: Influence of sample size
Value of r at which the correlation coefficient becomes significant at the 5% level (i.e. p < 0.05):
n = 10: r = 0.63
n = 20: r = 0.44
n = 50: r = 0.28
n = 100: r = 0.20
n = 150: r = 0.16
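A standard result (not shown on the slide) explains this pattern: under the null hypothesis the test statistic t = r √(n − 2) / √(1 − r²) follows a t distribution with n − 2 degrees of freedom, so the same value of r produces a larger t, and hence a smaller p-value, as the sample size n grows.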
21
And so what do correlations of 0.63 (n=10) and 0.16 (n=150) look like?
Correlation=0.63, p=0.048 (n=10) Correlation=0.16, p=0.04 (n=150)
22
Points to note
Do not assume causality
Be careful comparing the correlation coefficient, r, from different studies with different n
Do not assume the scatterplot looks the same outside the range of the axes
Use Cohen's scale to interpret, rather than the p-value
Always examine the scatterplot!
24
Exercise 3a: Scatterplot
Use Transform > Recode into Different Variables to construct a variable for maternal smoking status (non-smoker / smoker)
Construct a scatterplot for birthweight and gestational age, using Set Markers by to distinguish between smokers and non-smokers
Is there evidence of a linear relationship?
Interpret the correlation coefficient. What does it mean?
Note: think about which variable should be on the x axis (horizontal) and which should be on the y axis (vertical)
If you double-click on the graph you can open it for editing and, for example, change the colours used for smokers and non-smokers
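A possible syntax sketch for this exercise. The variable holding the number of cigarettes smoked is assumed to be called mnocig and the new grouping variable smoker; substitute the actual names from the data set:

* Recode number of cigarettes into a smoking status variable (assumed variable names).
RECODE mnocig (0=0) (ELSE=1) INTO smoker.
VALUE LABELS smoker 0 'Non-smoker' 1 'Smoker'.
EXECUTE.
* Scatterplot of birth weight against gestational age, markers set by smoking status.
GRAPH
  /SCATTERPLOT(BIVAR)=Gestation WITH Birthweight BY smoker
  /MISSING=LISTWISE.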
25
Exercise 3b: Scatterplot & Correlation
Construct a scatterplot and calculate Pearson's correlation coefficient for birthweight and maternal pre-pregnancy weight
Is there evidence of a linear relationship?
Interpret the correlation coefficient. What does it mean?
Note: think about which variable should be on the x axis (horizontal) and which should be on the y axis (vertical)
26
Association between two continuous variables: correlation or regression?
Two basic questions:
Is there a relationship? No causation is implied, simply association. Use CORRELATION
How can we use the value of one variable to predict the value of the other variable? May be causal, may not be. Use REGRESSION
27
Simple linear regression
Regression quantifies the relationship between two continuous variables
It involves estimating the best straight line with which to summarise the association
The relationship is represented by an equation, the regression equation
It is useful when we want to:
look for significant relationships between two variables
predict the value of one variable for a given value of the other
28
Independent / dependent variables
Does attendance have an association with exam score? Does temperature have an impact on the growth rate of a cell culture?
The INDEPENDENT (explanatory/predictor) variable (x) affects the DEPENDENT (outcome) variable (y)
You will need to distinguish between independent (explanatory) variables and dependent (outcome) variables when framing the research question. The explanatory variables are thought to have an effect on the dependent variable, and the distinction between the two is important when carrying out statistical analysis. For example, if you were investigating the relationship between attendance and exam score, the independent variable is attendance and the dependent variable is the exam score.
29
Does gestational age have an association with birth weight?
30
Regression: y = a + bx
Simple linear regression looks at the relationship between two continuous variables by producing an equation for a straight line of the form y = a + bx, where x is the independent variable, y is the dependent variable, a is the intercept and b is the slope
You can use this to predict the value of the dependent (outcome) variable for any value of the independent (explanatory) variable
31
Birth weight example equation
Birth weight (y) = a + b * gestational age (x)
where a = intercept and b = slope
i.e. for every extra week of gestation, birth weight increases by 0.16 kg
32
Birth weight example - Slope
The slope b is the average change in the Y variable for a change of one unit in the X variable
b = 0.16, so an extra 0.16 kg for every extra week of gestation
33
Birth weight example - Intercept
Y = response variable (dependent variable); X = predictor / explanatory variable (independent variable)
34
Estimating the best fitting line
We try to fit the "best" straight line
The standard way to do this is a method called least squares, carried out using a computer
Residuals = differences between observed and predicted values for each observation
The least squares method chooses the line for which the sum of the squared residuals (over all points) is minimised
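For completeness, the standard derivation behind this (not spelled out on the slide): least squares chooses a and b to minimise the sum of squared residuals Σ(yᵢ − a − bxᵢ)², which gives slope b = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)² and intercept a = ȳ − b x̄.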
35
Line of best fit Residuals = observed - predicted
36
Hypothesis testing in regression
Regression finds the best straight line with which to summarise an association
It is useful when we want to look for significant relationships between variables
The slope is tested for significance. If there were no relationship, the gradient of the line (b) would be 0, i.e. the regression line would be a horizontal line crossing the y axis at the average value of the y variable
37
Regression in SPSS: Analyse > Regression > Linear
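The pasted syntax equivalent, again a sketch with assumed variable names:

* Simple linear regression of birth weight on gestational age (assumed variable names).
REGRESSION
  /STATISTICS COEFF OUTS R ANOVA
  /DEPENDENT Birthweight
  /METHOD=ENTER Gestation.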
38
Output from SPSS: key regression table
p-value < 0.001
Fitted equation: birth weight = a + 0.16 × gestational age, with the intercept a read from the coefficients table
As p < 0.05, gestational age is a significant predictor of birth weight
Weight increases by 0.16 kg for each week of gestation
39
Output from SPSS: ANOVA table
ANOVA compares the null model (mean birth weight for all babies) with the regression model
Null model: y = 3.31
Regression model: y = a + bx (the fitted regression line)
40
Output from SPSS: ANOVA table
Does a model containing gestational age predict significantly more accurately than just using the mean birth weight for all babies? Yes, as p < 0.001
Total degrees of freedom = number of subjects included in the analysis minus 1
41
How reliable are predictions? Using R2
How much of the variation in birth weight is explained by the model including gestational age?
R² = proportion of the variation in birth weight explained by the model = 50%
Predictions using the model are fairly reliable. Which other variables may help improve the fit of the model?
Compare models using adjusted R², as this adjusts for the number of variables in the model
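To connect this to the ANOVA table on the previous slides: R² = SS(regression) / SS(total) = 1 − SS(residual) / SS(total), i.e. the share of the total variation in the outcome accounted for by the fitted line.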
42
Exercise 4 Investigate whether mother’s pre-pregnancy weight and birth weight are associated using a simple linear regression
43
Exercise 4: regression Adjusted R2 =
Does the model result in reliable predictions? ANOVA p-value = Is the model an improvement on the null model (where every baby is predicted to be the mean weight)?
44
Exercise 4: Regression Pre-pregnancy weight coefficient and p-value:
Regression equation: Interpretation:
45
Assumptions for regression
Assumption | Plot to check
The relationship between the independent and dependent variable is linear | Original scatterplot of the independent and dependent variable
Homoscedasticity: the variance of the residuals about predicted responses should be the same for all predicted responses | Scatterplot of standardised predicted values and residuals; there should be no obvious patterns
The residuals are normally distributed | Histogram of the residuals
46
Checking assumptions: normality of residuals
A residual is the observed value minus the value predicted by the model (the fitted value): Yobs - Yfit, i.e. the vertical lines on the plot below
It is the residuals that need to be normally distributed, not the data
47
Checking assumptions: normality of residuals
Use standardised residuals to check the assumptions; outliers are values < -3 or > 3
Select: histogram of residuals; scatterplot of predicted values vs residuals
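These plot requests appear as extra subcommands in the pasted regression syntax. A sketch, with the same assumed variable names:

* Regression with residual diagnostics: histogram of standardised residuals and scatterplot of standardised predicted values against standardised residuals.
REGRESSION
  /STATISTICS COEFF OUTS R ANOVA
  /DEPENDENT Birthweight
  /METHOD=ENTER Gestation
  /SCATTERPLOT=(*ZRESID ,*ZPRED)
  /RESIDUALS HISTOGRAM(ZRESID).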
48
Checking assumptions: normality
The histogram looks approximately normally distributed
When writing up, just say "normality checks were carried out on the residuals and the assumption of normality was met"
49
Predicted values against residuals
Are there any patterns as the predicted values increase?
There is a problem with homoscedasticity if the scatter is not random; non-random shapes such as those pictured on the slide are bad
50
Exercise 5 Re-run the regression model, but this time, produce the residual plots. Do you think that the assumptions of normality of residuals and homogeneity of variance are met?
51
What if assumptions are not met?
Regression is fairly robust to violations of the assumptions
If the residuals are heavily skewed, or show different variances as the predicted values increase, the data need to be transformed
Try taking the natural log (ln) of the dependent variable first, then repeat the analysis and check the assumptions
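In syntax form the transformation step is a one-liner (assumed variable name):

* Create a log-transformed outcome, then re-run the regression with it as the dependent variable.
COMPUTE ln_Birthweight = LN(Birthweight).
EXECUTE.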
52
Caveats
Do not use the graph or regression model to predict outside the range of the observations
Do not assume that just because you have an equation, X causes Y
As with correlation, it is always a good idea to have a look at the scatterplot
53
Effect of outliers on regression model
Original regression model: y = a + bx, R² = 0.502
Adjusted regression model: y = a + bx, R² = 0.323
54
Multiple regression
Multiple regression has several categorical or scale independent variables: y = α + β₁x₁ + β₂x₂ + … + βᵢxᵢ
The effect of the other variables is removed (controlled for) when assessing each relationship
You can only include binary categorical variables using Analyse > Regression > Linear
If you want to adjust for a categorical variable with more than 2 levels you need to use the General Linear Model procedure: Analyse > General Linear Model > Univariate (not covered here)
55
Multiple regression example: What factors affect the birth weight of babies?
Dependent: birth weight
Possible independents: gestational age, mother's pre-pregnancy weight, mother's height, mother smoking, etc.
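A pasted-syntax sketch for such a model. The names mppwt (pre-pregnancy weight), mheight (height) and smoker are assumed stand-ins for the actual variable names:

* Multiple regression of birth weight on several predictors (assumed variable names).
REGRESSION
  /STATISTICS COEFF OUTS R ANOVA
  /DEPENDENT Birthweight
  /METHOD=ENTER Gestation mppwt mheight smoker.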
56
Relationships with categorical variables
Identify smokers by different markers on the scatterplot: Graphs > Legacy Dialogs > Scatter/Dot
57
Adding smoking status Identify smokers by different markers on the scatterplot Is there a difference between smokers and non-smokers?
58
Regression output from SPSS
Y = a + 0.16(gestation) - 0.298(smoker)
Both p-values < 0.05: gestational age (p < 0.001) and smoking status (p = 0.024) are significant predictors of birth weight
R² has increased to 0.563, so 56.3% of the variation is explained by gestation and smoking
Weight increases by 0.16 kg for each week of gestation and decreases by 0.30 kg for smokers
Note: Smoker = 1 and Non-smoker = 0
59
Effect of smoking status
Binary variables affect the intercept only
Note that if you have an interaction between a binary and a scale variable, the lines for the two groups will not be parallel
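To see why the lines are parallel: with the fitted model y = a + b₁(gestation) + b₂(smoker) and smoker coded 0/1, non-smokers follow y = a + b₁(gestation) and smokers follow y = (a + b₂) + b₁(gestation). The slope b₁ is the same for both groups, so only the intercept shifts (here by b₂ = -0.298 kg).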
60
Comparing models using R2
Adding predictor variables will always increase R²; it's the size of the change that's important
Use adjusted R², as it makes an adjustment for the number of variables in the model
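For reference, the usual formula (not given on the slide) is adjusted R² = 1 − (1 − R²)(n − 1)/(n − p − 1), where n is the number of cases and p the number of predictors, so adding a weak predictor can lower adjusted R² even though R² rises.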
61
Multiple regression
In addition to the standard linear regression checks, relationships BETWEEN independent variables should be assessed
Multicollinearity is a problem where continuous independent variables are too closely related (r > 0.8)
Relationships can be assessed using scatterplots and correlation for continuous variables
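An additional check, not covered in these slides: SPSS can report tolerance and VIF if Collinearity diagnostics is ticked under Statistics in the Linear Regression dialog. A sketch of the pasted syntax, with the same assumed variable names:

* Request collinearity diagnostics (tolerance and VIF) alongside the usual output.
REGRESSION
  /STATISTICS COEFF OUTS R ANOVA COLLIN TOL
  /DEPENDENT Birthweight
  /METHOD=ENTER Gestation mppwt mheight smoker.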
62
Exercise 6: correlations
Produce a correlation matrix for the correlations between Birthweight, Gestational age, Maternal height and Maternal pre-pregnancy weight: Analyse > Correlate > Bivariate & add the 4 variables to the Variables box:
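In syntax form the matrix is produced by listing all four variables (assumed variable names):

* Correlation matrix for birthweight, gestational age, maternal height and pre-pregnancy weight.
CORRELATIONS
  /VARIABLES=Birthweight Gestation mheight mppwt
  /PRINT=TWOTAIL NOSIG
  /MISSING=PAIRWISE.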
63
Model selection
If models are to be used for prediction, only significant predictors should be included, unless they are being used as controls
Methods include forward, backward and stepwise regression
Backward means that the predictor with the highest p-value is removed and the model re-run; keep going until only significant predictors are left
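Backward elimination can also be requested directly in syntax (a sketch with assumed variable names; SPSS uses its default removal criterion rather than a strict p < 0.05 rule):

* Backward elimination starting from the full set of predictors.
REGRESSION
  /STATISTICS COEFF OUTS R ANOVA
  /DEPENDENT Birthweight
  /METHOD=BACKWARD Gestation mppwt mheight smoker.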
64
Important point
By default, SPSS only includes cases with no missing values on any variable included in a multiple regression. This can seriously reduce the number of cases being used
To change this, select "Options" in the Linear Regression dialog and then "Exclude cases pairwise"
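In pasted syntax this option appears as the MISSING subcommand, for example (assumed variable names):

* Use all available pairs of observations rather than complete cases only.
REGRESSION
  /MISSING PAIRWISE
  /STATISTICS COEFF OUTS R ANOVA
  /DEPENDENT Birthweight
  /METHOD=ENTER Gestation mppwt mheight smoker.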
65
Exercise 7
With birthweight as the outcome, run a series of regression models:
Model 1: Gestational age
Model 2: Gestational age and maternal smoking status. Check the assumptions and interpret the output. Does the model give more reliable predictions than the model with just gestational age?
Model 3: Gestational age, maternal smoking status, maternal pre-pregnancy weight
Model 4: Gestational age, maternal smoking status, maternal pre-pregnancy weight, maternal height
Note: you will need to create a variable for smoking status based on the number of cigarettes that the mother smokes (assuming that 0 cigarettes indicates someone who does not smoke)
66
Exercise 7: model 1 summary
Variable | Coefficient (β) | P-value | Significant?
Constant | | |
Gestation | | |
Adjusted R² =
Interpretation:
67
Exercise 7: model 2 summary
Variable | Coefficient (β) | P-value | Significant?
Constant | | |
Gestation | | |
Smoker | | |
Adjusted R² =
Interpretation:
68
Exercise 7: model 3 summary
Variable | Coefficient (β) | P-value | Significant?
Constant | | |
Gestation | | |
Smoker | | |
Pre-pregnancy weight | | |
Adjusted R² =
Interpretation:
69
Exercise 7: model 4 summary
Variable | Coefficient (β) | P-value | Significant?
Constant | | |
Gestation | | |
Smoker | | |
Pre-pregnancy weight | | |
Height | | |
Adjusted R² =
Interpretation:
70
Exercise 7: Compare p-values
Model | Gestation | Smoking | Weight | Height
Model 1 | p < 0.001 | | |
Model 2: Model 1 + Smoker | | 0.028 | |
Model 3: Model 2 + Weight | | | |
Model 4: Model 3 + Height | | | |
71
Exercise 7: Compare R²
Model | R² | Adjusted R²
Model 1: Gestation | 0.499 | 0.486
Model 2: Model 1 + Smoker | 0.558 | 0.535
Model 3: Model 2 + Weight | |
Model 4: Model 3 + Height | |
72
Regression summary
Use correlation to look at relationships between dependent and independent variables
Use scatterplots to look for linear relationships
Use regression to quantify the relationship between variables
Check the normality of the residuals
Check the scatterplot of predicted values vs residuals
Interpret the significance, coefficients and R²
73
Learning outcomes
You should now know about:
Approaches to analysis for simple continuous bivariate data
You should now be able to:
Construct and interpret scatterplots in SPSS
Identify when it is appropriate to use correlation
Calculate a correlation coefficient in SPSS
Interpret a correlation coefficient
Identify when it is appropriate to use linear regression
Run a simple regression model in SPSS
Interpret the results of a linear regression model
74
Maths And Statistics Help
Statistics appointments: Mon-Fri (10am-1pm)
Statistics drop-in: Mon-Fri (10am-1pm), Weds (4-7pm)
75
Resources: All resources are available in paper form at MASH or on the MASH website
76
Contacts
Follow MASH on Twitter: @mash_uos
Staff: Jenny Freeman, Basile Marquier, Marta Emmett