Presentation on theme: "Descriptions. Description Correlation – simply finding the relationship between two scores ○ Both the magnitude (how strong or how big) ○ And direction."— Presentation transcript:

1 Descriptions

2 Description Correlation – simply finding the relationship between two scores ○ Both the magnitude (how strong or how big) ○ And direction (positive / negative)

3 Description  Regression, in contrast, seeks to use one of the variables as a predictor Therefore you have an X variable (IV) – the predictor And a Y variable (DV) – the criterion

4 Description  Predictor X variables – more flexible than ANOVA Can be any combination of variables: continuous, Likert, categorical  Dependent Y variables – usually continuous, but you can predict categorical variables Better with discriminant analysis or logistic regression

5 Description  Still not causal design, unless you manipulate the X (IV) variable  However, sometimes very obvious which variable would be predictive Smoking predicts cancer

6 Research Questions

7 Research Questions  Usually want to know the relationship between IV and DV and the importance of each IV  OR Control for some variables' variance and then see if other IVs add any additional prediction  OR Compare sets of IVs and how predictive they are (which set is better)

8 Research Questions  How good is the equation? Is it better than chance? Or better than using the mean to predict scores?

9 Research Questions  Importance of IVs Which IVs are the most important? Which contribute the most prediction to the equation?

10 Research Questions  Adding IVs For example, PTSD scores are predictive of alcohol use After we control for these scores, do meaning in life scores help predict alcohol use?

11 Research Questions  Non-linear relationships can be assessed and determined So, you can use an X² term to help with curvilinear relationships that you might see during data screening

12 Research Questions  Controlling for other sets of IVs Using demographics to control for unequal groups or extra variance between people  Comparing sets of IVs Using several IVs together to be predictive over another set of IVs

13 Research Questions  Making an equation to predict new people’s scores After you have shown that your IVs are predictive, using those scores to assess new people’s performance Entrance exams for school, military, etc

14

15 Equation  Y-hat = A + B1X1 + B2X2 + … Y-hat = predicted value for each participant A = constant, the value added to each score; the predicted score when all Xs are zero (y-intercept)

16 Equation  Y-hat = A + B1X1 + B2X2 + … B = coefficient ○ Holding all other variables constant, for every one-unit increase in X there is a B-unit increase in Y ○ The slope for that X variable, with all the others held constant
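The unstandardized equation on this slide can be sketched in code; the constant and coefficients below are made-up illustrative values, not from the slides:

```python
# Hypothetical coefficients for a two-IV equation (illustrative values only)
A = 2.0           # constant: predicted score when every X is zero
B = [0.5, 1.5]    # B1, B2: unit change in Y per one-unit change in each X

def y_hat(x):
    """Y-hat = A + B1*X1 + B2*X2 for one participant's scores x = [x1, x2]."""
    return A + sum(b * xi for b, xi in zip(B, x))

print(y_hat([4.0, 2.0]))  # 2.0 + 0.5*4 + 1.5*2 = 7.0
```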

17 Equation  Standardized Equation: Y-hat = β1Z1 + β2Z2 + … Beta = standardized B (a z-score B, if you like) For each 1 standard deviation increase in X, there is a β standard deviation increase in Y ○ Difficult to interpret ○ BUT! β typically runs between -1 and 1, so you can treat it as if it were r (which means you can tell direction and magnitude)

18 Equation  Multiple correlation = R R is the Pearson product-moment correlation between Y and Y-hat R² = variance accounted for in the DV by all the IVs (not just one, like r, but ALL of them).
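As a sketch of that definition, R can be computed as the plain Pearson correlation between observed Y and predicted Y-hat, then squared; the scores below are made up for illustration:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

y     = [1.0, 2.0, 3.0, 4.0]   # hypothetical observed DV scores
y_hat = [1.1, 1.9, 3.2, 3.8]   # hypothetical predicted scores from the equation
R = pearson_r(y, y_hat)
print(R ** 2)  # R-squared: DV variance accounted for by ALL the IVs
```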

19 SR  Semipartial correlation = sr = "part" in SPSS Unique contribution of the IV to R² for that set of IVs Increase in the proportion of explained Y variance when that X is added to the equation In the Venn diagram of DV variance with IV 1 and IV 2: sr² = A / (total DV variance)

20 PR  Partial correlation = pr = "partial" in SPSS Proportion of the variance in Y not explained by the other predictors that this X alone explains In the same Venn diagram: pr² = A / B, where B is the DV variance left over after the other IVs pr > sr
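For the two-predictor case, sr and pr have closed-form textbook formulas in terms of the three bivariate correlations; the correlation values below are hypothetical:

```python
import math

def semipartial(r_y1, r_y2, r_12):
    """sr for IV1: unique contribution of IV1 to R-squared (two-IV case)."""
    return (r_y1 - r_y2 * r_12) / math.sqrt(1 - r_12 ** 2)

def partial(r_y1, r_y2, r_12):
    """pr for IV1: same numerator, smaller denominator, so pr >= sr."""
    return semipartial(r_y1, r_y2, r_12) / math.sqrt(1 - r_y2 ** 2)

# Hypothetical bivariate correlations: IV1-DV, IV2-DV, IV1-IV2
sr = semipartial(0.50, 0.30, 0.20)
pr = partial(0.50, 0.30, 0.20)
print(sr, pr)  # pr comes out larger, matching "pr > sr" on the slide
```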

21

22 ANOVA = Regression  ANOVA = regression with discrete variables However, you cannot easily create an ANOVA from a regression You must convert continuous variables into discrete variables, which causes you to lose variance More power with regression

23 Simple (SLR)  SLR involves only one IV and one DV. It’s called simple because there’s only ONE thing predicting. In this case, beta = r.

24 Multiple (MLR)  MLR uses several IVs and only one DV. You can use a mix of variables – continuous, categorical, Likert, etc. You can use MLR to figure out which IVs are the most important. ○ Three types of MLR

25 Simultaneous/Standard  All of the variables are entered “at once”  Each variable assessed as if it were the last variable entered This “controls” for the other IVs, as we talked about the interpretation of B. Evaluates sr > 0?

26 Simultaneous/Standard  If you have two highly correlated IVs the one with the biggest sr gets all the variance  Therefore the other IV will get very little variance associated with it and look unimportant

27 Sequential/Hierarchical  IVs enter the regression equation in an order specified by the researcher  First IV is basically tested against r (since there’s nothing else in the equation it gets all the variance)  Next IVs are tested against pr (they only get the left over variance)

28 Sequential/Hierarchical  What order? Assigned by theoretical importance Or you can control for nuisance variables in the first step

29 Sequential/Hierarchical  Using SETS of IVs instead of individuals So, say you have a group of IVs that are super highly correlated but you don’t know how to combine them or want to eliminate them.  Instead you will process each step as a SET and you don’t care about each individual predictor

30 Stepwise/Statistical  Entry into the equation is solely based on statistical relationship and nothing to do with theory or your experiment

31 Stepwise/Statistical  Forward – biggest IV is added first, then each IV is added as long as it accounts for enough variance  Backward – all are entered in the equation at first, and then each one is removed if it doesn’t account for enough variance  Stepwise – mix between the two (adds them but then may later delete them if they are no longer important).

32

33 Number of People  Ratio of cases to IVs If you have fewer cases than IVs you will get a perfect solution (aka account for all the variance in the DV) But that doesn’t mean anything…

34 Number of People  Ratio of cases to IVs G*Power = tells you how many cases you need given alpha, power, number of predictors, etc. Rules of thumb: N > 50 + 8K (K = number of IVs) Or N > 104 + K (for testing the importance of individual predictors)
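The two rules of thumb on this slide translate directly into code (a sketch only; G*Power gives a proper power analysis):

```python
def n_for_overall_model(k):
    """Rule of thumb for testing the overall R-squared: N > 50 + 8K."""
    return 50 + 8 * k

def n_for_predictors(k):
    """Rule of thumb for testing individual predictors: N > 104 + K."""
    return 104 + k

k = 5  # hypothetical number of IVs
# When you care about both the overall model and each predictor, take the larger
print(max(n_for_overall_model(k), n_for_predictors(k)))
```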

35 Number of People  How many people? However…you can have too many people. Any correlation or predictor will be significant with very large N ○ Practical versus statistical significance

36 Missing Data  Continuous data – linear trend at point, mean replace, etc.  Categorical data – best to leave it out because you can’t guess at it.

37 Outliers  Now, since the IVs are continuous, we want to make sure there are no outliers on either the IVs or the DV Mahalanobis distance

38 Outliers  Leverage – how much influence over the slope a point has Cut-off rule of thumb = (2K + 2) / N  Discrepancy – how far away from other data points a point is (no influence)  Cook's distance – influence – a combination of both leverage and discrepancy Cut-off rule of thumb = 4 / (N - K - 1)
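The two cut-off rules of thumb are easy to compute (K = number of IVs, N = number of cases; the example study below is hypothetical):

```python
def leverage_cutoff(k, n):
    """Leverage rule of thumb: (2K + 2) / N."""
    return (2 * k + 2) / n

def cooks_cutoff(k, n):
    """Cook's distance rule of thumb: 4 / (N - K - 1)."""
    return 4 / (n - k - 1)

# Hypothetical study: 2 IVs, 100 cases
print(leverage_cutoff(2, 100))          # 0.06
print(round(cooks_cutoff(2, 100), 3))   # 0.041
```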

39 Multicollinearity  If IVs are too highly correlated there are several issues SPSS may not run SPSS picks which variable goes first depending on the type of analysis  Check – bivariate correlation table of the IVs (you want each IV correlated with the DV, not highly correlated with the other IVs!)

40 Normal/Linear  Normality – we want our IVs and DVs to be normally distributed Check: residual histogram, normality P-P plot  Linearity – relationships between IV and DV should be linear, or you will need a special X² term

41 Homogeneity/Homoscedasticity  Homogeneity – you want the IVs/DVs to have equal variances Residual Plot (equal spread up and down - raining)  Homoscedasticity – you want the errors to be spread evenly across the values of the other variables Residual Plot (equal spread up and down across the bottom – megaphones)

42 Theoretical Assumption  Independence of errors You need to know that the scores of the first person tested are not affecting the scores of the last person tested Mud on a scale

43

44 SLR  Data set 1  IV Books – number of books people read Attend – attendance for class  DV Grade – final grade in the class

45 SLR  Research Question: Does the number of books predict final grade in the course? Does attendance predict final grade in the course?

46 MLR - Simultaneous  Research Question Do books and attendance both predict final course grade? ○ Overall – together? ○ Individual predictors?

47 MLR – Hierarchical  Research question: What predicts how well people take care of their cars? We want to first control for demographics (age, gender) And then use extroversion to predict how well people take care of their cars.

48 MLR Hierarchical  So after controlling for demographics, does extroversion predict?

49 Interactions  Dummy Coding  Types Two categorical One categorical, One continuous Two continuous

50 Dummy Coding  A way to do ANOVA in regression If you have two levels, simply type them in as 0 and 1 If you have more than two levels, you need to enter each separately

51 Dummy Coding  More than two levels: You will need (levels − 1) columns The F value tells you the overall main effect Each B value compares that group to the group coded as all zeros

52 Dummy Coding  After you enter each variable separately, then enter them as a set (or one simultaneous) regression  The significance of the overall model will tell if you if the main effect is significant  B gives you differences between groups (two levels)

53 Dummy Coding  How many friends do people have? This example is from ANOVA. IV: Health condition – excellent, fair or poor. DV: Number of Friends.

54 Dummy Coding  Since we have three groups or levels, we’ll need to recode this variable into 2 variables: One for excellent One for fair Poor is left as the reference group (all zeros).

55 Dummy Coding  Why not three? Because the third column would be redundant – it is completely determined by the other two.
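A quick sketch of dummy coding the three-level health condition from this example, with poor as the reference group coded all zeros:

```python
def dummy_code(condition):
    """Recode a 3-level variable into levels - 1 = 2 dummy columns.
    "poor" is the reference group: both columns stay 0."""
    return {"excellent": int(condition == "excellent"),
            "fair":      int(condition == "fair")}

print(dummy_code("excellent"))  # {'excellent': 1, 'fair': 0}
print(dummy_code("poor"))       # {'excellent': 0, 'fair': 0}  <- all zeros
```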

56 Interactions  Interactions – well we automatically test for interactions in ANOVA, why not in regression? In regression an interaction says that there are differences in the slope of the line predicting Y from one IV depending on the level of the other IV

57 Interactions  Nominal variable interactions: So we have two categorical predictors. Example – create interaction term ○ Testing environment by Learning Environment.

58 Interactions - Nominal  Now that we’ve created our interaction terms, we can test them using a hierarchical regression Step one – main effects Step two – main effects and interactions

59 Interactions - Nominal  Now we examine step 1 for main effects  Step two for interactions You ignore the main effects in Step 2

60 Interactions - Nominal  What does all that mean?! After a significant ANOVA, you run a post hoc, correct? Simple slopes – the post hoc analyses for interactions in regression ○ These are “harder to get” than in ANOVA, but there are fewer “tests” to run, so technically more power / less Type I error

61 Interactions - Nominal  You will write out the equation and figure out the slopes/means/picture for each condition combination.  Equation: Y-hat = 30.8 − 8.0(learning) − 14.1(testing) + 20.5(learning × testing)

62 Interactions - Nominal  Now we’ll fill in the equation for all the combinations. Learning (0 or 1) Testing (0 or 1) Interaction (0 or 1 depending on the combination).

63 Interaction - Nominal
                    Testing: Dry (0)   Testing: Wet (1)
Learning: Dry (0)        30.8               16.7
Learning: Wet (1)        22.8               29.2
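Plugging all four 0/1 combinations into the equation from slide 61 reproduces these four cells:

```python
def y_hat(learning, testing):
    """Predicted score from the slide-61 equation (0 = dry, 1 = wet)."""
    return 30.8 - 8.0 * learning - 14.1 * testing + 20.5 * learning * testing

# Fill in the equation for every condition combination
for learning in (0, 1):
    for testing in (0, 1):
        print(learning, testing, round(y_hat(learning, testing), 1))
# Reproduces the four cells: 30.8, 16.7, 22.8, 29.2
```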

64 Interactions - Mix  Data Set 4 IVs: Events – number of events attended Status – low (0) versus high (1) DV: Stress level

65 How to  Create interaction Transform > compute > multiply  Run regression as before Step 1 – main effects Step 2 – main effects and interaction

66 Interactions - Mix  LOW status, look at the events slope: B = .121, β = .52, t(57) = 3.94, p < .001, indicating that low status people feel more stress as the number of events they attend increases.  HIGH status, look at the events slope: B = .02, β = .10, t(57) = .55, p = .58, indicating that high status people feel the same amount of stress no matter how many events they attend.

67 Interaction - Mix
               Low Events   High Events
Low Status        20.93        27.81
High Status       17.78        18.98

68 Interactions - continuous  Most likely combination since you are running a regression Create interaction term first (multiply them together) Books * Attendance Interaction to predict grades.

69 Interactions – continuous  Pick ONE variable to examine. Let’s go with attendance. You can get the AVERAGE slope for attendance and books. Since we picked attendance, we will look at the slope for books: β = -.532, t(37) = -1.21, p = .24. So at average attendance, reading books does not increase your grade.  Let’s create hi and lo terms for ONE of the variables: AttendanceHI, AttendanceLO AttendanceHI by Books, AttendanceLO by Books.

70 Interactions - continuous  Now, we can’t just use 1 and 0 for different groups So we have to create “hi” and “lo” versions of one variable This is backwards from what you’d expect…for the hi group, you subtract 1 SD; for the lo group, you add 1 SD Basically you are bringing those scores down (or up) to the mean
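A sketch of that centering trick with made-up attendance scores: subtracting 1 SD makes a score of zero land at +1 SD of the original scale, so the regression's intercept and slopes then describe the "hi" group:

```python
attendance = [10.0, 12.0, 8.0, 14.0, 6.0]  # hypothetical attendance scores

n = len(attendance)
mean = sum(attendance) / n
sd = (sum((x - mean) ** 2 for x in attendance) / (n - 1)) ** 0.5

# The "backwards" part: SUBTRACT 1 SD to build the HI version,
# ADD 1 SD to build the LO version
attendance_hi = [x - sd for x in attendance]  # zero now means +1 SD
attendance_lo = [x + sd for x in attendance]  # zero now means -1 SD

# Each version is then multiplied by the other IV (Books)
# to form the two interaction terms
print(attendance_hi[0], attendance_lo[0])
```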

71 Interaction

72 Mediation

73  Mediation occurs when the relationship between an X variable and a Y variable is eliminated or lowered when an additional Mediator variable is added to the equation.

74 Mediation Steps  Baron and Kenny Step 1 – use X to predict Y to get c pathway. Step 2 – use X to predict M to get a pathway. Step 3 – use X and M to predict Y to get b pathway. Step 4 – use the same regression to look at the c’ pathway.  Sobel test
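The Sobel test mentioned in the last bullet checks whether the indirect (a × b) pathway is significant; a minimal sketch with hypothetical path estimates and standard errors:

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel z for the indirect effect a*b (follow-up to Baron & Kenny)."""
    return (a * b) / math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)

# Hypothetical a (X -> M) and b (M -> Y, controlling for X) path estimates
z = sobel_z(a=0.40, se_a=0.10, b=0.35, se_b=0.12)
print(round(z, 2))  # |z| > 1.96 suggests a significant indirect effect
```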

75 Mediation Steps

