Terms  Between subjects = independent  Each subject gets only one level of the variable.  Repeated measures = within subjects = dependent = paired.

Slides:



Advertisements
Similar presentations
One-Way and Factorial ANOVA SPSS Lab #3. One-Way ANOVA Two ways to run a one-way ANOVA 1.Analyze  Compare Means  One-Way ANOVA Use if you have multiple.
Advertisements

MANOVA: Multivariate Analysis of Variance
Analysis of variance (ANOVA)-the General Linear Model (GLM)
ANOVA: Analysis of Variance
SPSS Series 3: Repeated Measures ANOVA and MANOVA
Analysis of variance (ANOVA)-the General Linear Model (GLM)
C82MST Statistical Methods 2 - Lecture 7 1 Overview of Lecture Advantages and disadvantages of within subjects designs One-way within subjects ANOVA Two-way.
Lecture 9: One Way ANOVA Between Subjects
Biol 500: basic statistics
One-way Between Groups Analysis of Variance
Repeated Measures ANOVA Used when the research design contains one factor on which participants are measured more than twice (dependent, or within- groups.
ANCOVA Lecture 9 Andrew Ainsworth. What is ANCOVA?
Inferential Statistics: SPSS
LEARNING PROGRAMME Hypothesis testing Intermediate Training in Quantitative Analysis Bangkok November 2007.
ANOVA Analysis of Variance.  Basics of parametric statistics  ANOVA – Analysis of Variance  T-Test and ANOVA in SPSS  Lunch  T-test in SPSS  ANOVA.
Repeated Measures Design
Lab 2: repeated measures ANOVA 1. Inferior parietal involvement in long term memory There is a hypothesis that different brain regions are recruited during.
SPSS Series 1: ANOVA and Factorial ANOVA
By Hui Bian Office for Faculty Excellence 1. K-group between-subjects MANOVA with SPSS Factorial between-subjects MANOVA with SPSS How to interpret SPSS.
Repeated Measures Chapter 13.
Stats Lunch: Day 7 One-Way ANOVA. Basic Steps of Calculating an ANOVA M = 3 M = 6 M = 10 Remember, there are 2 ways to estimate pop. variance in ANOVA:
A statistical method for testing whether two or more dependent variable means are equal (i.e., the probability that any differences in means across several.
 The idea of ANOVA  Comparing several means  The problem of multiple comparisons  The ANOVA F test 1.
12e.1 ANOVA Within Subjects These notes are developed from “Approaching Multivariate Analysis: A Practical Introduction” by Pat Dugard, John Todman and.
ANOVA (Analysis of Variance) by Aziza Munir
Psychology 301 Chapters & Differences Between Two Means Introduction to Analysis of Variance Multiple Comparisons.
Multivariate Analysis. One-way ANOVA Tests the difference in the means of 2 or more nominal groups Tests the difference in the means of 2 or more nominal.
Hypothesis testing Intermediate Food Security Analysis Training Rome, July 2010.
 Slide 1 Two-Way Independent ANOVA (GLM 3) Chapter 13.
Inferential Statistics
Chapter 10: Analyzing Experimental Data Inferential statistics are used to determine whether the independent variable had an effect on the dependent variance.
Statistics for the Social Sciences Psychology 340 Fall 2013 Tuesday, October 15, 2013 Analysis of Variance (ANOVA)
6/2/2016Slide 1 To extend the comparison of population means beyond the two groups tested by the independent samples t-test, we use a one-way analysis.
Regression Chapter 16. Regression >Builds on Correlation >The difference is a question of prediction versus relation Regression predicts, correlation.
Social Science Research Design and Statistics, 2/e Alfred P. Rovai, Jason D. Baker, and Michael K. Ponton Within Subjects Analysis of Variance PowerPoint.
Single Factor or One-Way ANOVA Comparing the Means of 3 or More Groups Chapter 10.
ANOVA: Analysis of Variance.
11/19/2015Slide 1 We can test the relationship between a quantitative dependent variable and two categorical independent variables with a two-factor analysis.
+ Comparing several means: ANOVA (GLM 1) Chapter 11.
Repeated-measures designs (GLM 4) Chapter 13. Terms Between subjects = independent – Each subject gets only one level of the variable. Repeated measures.
Slide 1 Mixed ANOVA (GLM 5) Chapter 15. Slide 2 Mixed ANOVA Mixed: – 1 or more Independent variable uses the same participants – 1 or more Independent.
+ Comparing several means: ANOVA (GLM 1) Chapter 10.
Linear Regression Chapter 8. Slide 2 What is Regression? A way of predicting the value of one variable from another. – It is a hypothetical model of the.
Mixed ANOVA (GLM 5) Chapter 14. Mixed ANOVA Mixed: – 1 or more Independent variable uses the same participants (repeated measures) – 1 or more Independent.
Smoking Data The investigation was based on examining the effectiveness of smoking cessation programs among heavy smokers who are also recovering alcoholics.
ANCOVA. What is Analysis of Covariance? When you think of Ancova, you should think of sequential regression, because really that’s all it is Covariate(s)
Stats Lunch: Day 8 Repeated-Measures ANOVA and Analyzing Trends (It’s Hot)
Assumptions 5.4 Data Screening. Assumptions Parametric tests based on the normal distribution assume: – Independence – Additivity and linearity – Normality.
ONE-WAY BETWEEN-GROUPS ANOVA Psyc 301-SPSS Spring 2014.
Comparing Two Means Chapter 9. Experiments Simple experiments – One IV that’s categorical (two levels!) – One DV that’s interval/ratio/continuous – For.
Correlated-Samples ANOVA The Univariate Approach.
Social Science Research Design and Statistics, 2/e Alfred P. Rovai, Jason D. Baker, and Michael K. Ponton Between Subjects Analysis of Variance PowerPoint.
Smith/Davis (c) 2005 Prentice Hall Chapter Fifteen Inferential Tests of Significance III: Analyzing and Interpreting Experiments with Multiple Independent.
Linear Regression Chapter 7. Slide 2 What is Regression? A way of predicting the value of one variable from another. – It is a hypothetical model of the.
Analysis of Variance STAT E-150 Statistical Methods.
Handout Ten: Mixed Design Analysis of Variance EPSE 592 Experimental Designs and Analysis in Educational Research Instructor: Dr. Amery Wu Handout Ten:
Within Subject ANOVAs: Assumptions & Post Hoc Tests.
ANCOVA.
Simple ANOVA Comparing the Means of Three or More Groups Chapter 9.
ANOVA and Multiple Comparison Tests
PROFILE ANALYSIS. Profile Analysis Main Point: Repeated measures multivariate analysis One/Several DVs all measured on the same scale.
Chapter 9 Two-way between-groups ANOVA Psyc301- Spring 2013 SPSS Session TA: Ezgi Aytürk.
Between-Groups ANOVA Chapter 12. Quick Test Reminder >One person = Z score >One sample with population standard deviation = Z test >One sample no population.
Data Screening. What is it? Data screening is very important to make sure you’ve met all your assumptions, outliers, and error problems. Each type of.
Regression. Why Regression? Everything we’ve done in this class has been regression: When you have categorical IVs and continuous DVs, the ANOVA framework.
Multivariate vs Univariate ANOVA: Assumptions. Outline of Today’s Discussion 1.Within Subject ANOVAs in SPSS 2.Within Subject ANOVAs: Sphericity Post.
Comparing several means: ANOVA (GLM 1)
One way ANOVA One way Analysis of Variance (ANOVA) is used to test the significance difference of mean of one dependent variable across more than two.
Analysis of Variance: repeated measures
Presentation transcript:

Terms  Between subjects = independent  Each subject gets only one level of the variable.  Repeated measures = within subjects = dependent = paired  Everyone gets all the levels of the variable.  See confusion machine page 545

RM ANOVARM ANOVA  Now we need to control for correlated levels though …  Before all levels were separate people (independence)  Now the same person is in all levels, so you need to deal with that relationship.

RM ANOVARM ANOVA  Sensitivity  Unsystematic variance is reduced.  More sensitive to experimental effects.  Economy  Less participants are needed.  But, be careful of fatigue.

RM ANOVARM ANOVA  Back to this term: Sphericity  Relationship between dependent levels is similar  Similar variances between pairs of levels  Similar correlations between pairs of levels  Called compound symmetry  The test for Sphericity = Mauchley’s  It’s an ANOVA of the variance scores

RM ANOVARM ANOVA  It is hard to meet the assumption of Sphericity  In fact, most people ignore it.  Why?  Power is lessened when you do not have correlations between time points  Generally, we find Type 2 errors are acceptable

RM ANOVARM ANOVA  All other assumptions stand:  (basic data screening: accuracy, missing, outliers)  Outliers note … now you will screen all the levels … why?  Multicollinearity – only to make sure it’s not r =.999+  Normality  Linearity  Homogeneity/Homoscedasticity

RM ANOVARM ANOVA  What to do if you violate it (and someone forces you to fix it)?  Corrections – note these are DF corrections  which affect the cut off score (you have to go further)  which lowers the p-value

RM ANOVARM ANOVA  Corrections:  Greenhouse-Geisser  Huynh-Feldt  Which one?  When ε (sphericity estimate) is >.75 = Huynh-Feldt  Otherwise Greenhouse-Geisser  Other options: MANOVA, MLM

An ExampleAn Example  Are some Halloween ideas worse than others?  Four ideas tested by 8 participants:  Haunted house  Small costume (brr!)  Punch bowl of unknown drinks  House party  Outcome:  Bad idea rating (1-12 where 12 is this was dummmbbbb). Slide 10

Data

Variance Components

Variance ComponentsVariance Components  SStotal = Me – Grand mean (so this idea didn’t change)  SSwithin = Me – My level mean (this idea didn’t change either)  BUT I’m in each level and that’s important, so …

Variance ComponentsVariance Components  SSwithin = SSm + SSr  SSm = My level – GM (same idea)  SSr = SSw – SSm (basically, what’s left over after calculating how different I am from my level, and how different my level is the from the grand mean)

Variance ComponentsVariance Components  SSbetween?  You will get this on your output and should ignore it if all IVs are repeated.  Represents individual differences between participants  SSb = SSt - SSw

Note  Please use the really great flow chart on page 556

SPSS  Quick note on data screening:  We’ve talked a lot about “not screening the IV”.  In repeated measures – each column is both and IV and a DV.  The IV is the levels (you can think of it as the variable names)  The DV is the scores within each column.  So you must screen all the scores.

SPSS  Quick note on data screening:  One way to help keep this straight:  Did the person in the experiment “make” that score?  If yes  screen it  If no  don’t screen it  Examples of no:  Gender, ethnicity, experimental group

SPSS

SPSS  Analyze > General Linear Model > Repeated Measures

SPSS  Give the IV an overall name  Within Subject Factor Name  Indicate the number of levels (columns)  Hit add  Hit Define

SPSS

SPSS  You now have spots for all the levels:  Important: SPSS assumes the order is important for some types of contrasts (trend analysis) and for two-way designs.  If there’s no order, don’t worry about it.  If it’s a time thing, put them in order.

SPSS  Move over the levels.

SPSS  Contrasts:  These have the exact same rules we’ve described before (chapter 11 notes)  Polynomial is still a trend analysis.

SPSS  For fun, click post hoc.  BOO!

SPSS

SPSS  Hit options  Move over the IV.  Click descriptive statistics, estimates of effect size.  Homogeneity?  We do not have between subjects, so you can click this button, but it will not give you any output (Levene’s).  I usually click it because I forget  won’t hurt you and you won’t forget it on between subjects or mixed designs.

SPSS \\

SPSS  See compare main effects?  Click it!  LSD = Tukey LSD = no correction = dependent t test without the t values.  Bonferroni and Sidak are exactly the same as before.

SPSS

Post HocsPost Hocs  Bonferroni / Sidak are suggested to be the best, especially if you don’t meet Sphericity  Tukey is good when you meet Sphericity

SPSS  Warning because I asked for Levene’s.

SPSS  Within-subjects factors – a way to check my levels are entered correctly.  Descriptive statistics – good for calculating Cohen’s d average standard deviation, remembering n for Tukey

SPSS

SPSS  Multivariate box – in general, you’ll ignore this

SPSS

Correcting for Sphericity
- df = 3, 21

SPSS  Within subjects effects – the main ANOVA box.

SPSS  What to look at?  Under source = IV name = SSmodel  Error = SSresidual  Actually hides all the rest from you  Use only ONE line – pick based on sphericity issues

SPSS  Contrasts – you will also get trend analyses, ignore if that’s not what you are interested in testing

SPSS  Between subjects box – ignore unless you have between subjects factors (mixed designs).

SPSS  Marginal means

SPSS  Pairwise comparisons = post hoc

Post Hoc OptionsPost Hoc Options  You can also run:  Tukey LSD, but use a corrected Tukey HSD/Fisher- Hayter mean difference score  RM anovas on each pairwise (2 at a time) combination and use a corrected F critical from Scheffe  Run dependent t-tests and apply any correction

Post Hoc OptionsPost Hoc Options  Things to get straight:  Post hoc test: dependent t  Why? Because it’s repeated measures data  Post hoc correction: you pick: Bonferroni, Sidak, Tukey, FH, Scheffe

Effect sizeEffect size  Remember with a one-way design, eta = partial eta = R squared  Omega squared calculation: (that’s a little easier than the book one):

What is Two-Way Repeated Measures ANOVA?
- Two independent variables
  - Two-way = 2 IVs
  - Three-way = 3 IVs
- The same participants in all conditions.
  - Repeated measures = "same participants"
  - A.k.a. "within-subjects"

An ExampleAn Example  Field (2013): Effects of advertising on evaluations of different drink types.  IV 1 (Drink): Beer, Wine, Water  IV 2 (Imagery): Positive, negative, neutral  Dependent Variable (DV): Evaluation of product from -100 dislike very much to +100 like very much) Slide 50

Variance partitioning (diagram):
- SS_T – variance between all participants (total)
  - SS_M – within-participant variance: variance explained by the experimental manipulations
    - SS_A – effect of Drink (error: SS_RA)
    - SS_B – effect of Imagery (error: SS_RB)
    - SS_A×B – effect of the interaction (error: SS_RA×B)
  - SS_R – between-participant variance

SPSS  Analyze > GLM > repeated measures

SPSS  Label the IVs  Remember that each IV gets its own label (so do not do one variable with the number of columns)  Levels = Levels of each IV  Hit Add

SPSS

SPSS  Now the numbers matter  First variable = first number in the (#, #)  Second variable = second number in the (#, #)  So (1,1) should be  IV 1 – Level 1  IV 2 – Level 1  Make sure they are ordered properly.

SPSS

SPSS

SPSS  Under contrasts, you will automatically get polynomial (trend), but you could change it  The descriptions of them are in chapter 11 notes.

SPSS  Plots – since we have two variables, we can get plots to help us just see what’s going on in the experiment.

SPSS

SPSS  Under options:  Move the variables over!  Click compare main effects  Pick your test (remember we talked a lot about why I think dependent t is the shiz BUT that’s not true when you have multiple variables … why?)

SPSS  Under options  Remember we also talked about always asking for:  Descriptives  Effect size  Homogeneity because it won’t hurt you to get the error, but at least you won’t forget.

SPSS

SPSS  Hit ok!  Output galore!

Within-Subjects Factors
- Did I line it all up correctly?
- What the 1, 2, 3 labels mean.

Descriptives  These are condition means – good for Cohen’s d because of SD

Multivariate Tests
- Ignore this box – unless you decide to correct for sphericity this way!

Sphericity

Sphericity  If we wanted to correct – we’d really do that first one … since epsilon is <.75 we would use Greenhouse- Geisser

Main effect 1
- Uncorrected: F(2, 38) = 5.11, p = .01, partial η² = .21
- Greenhouse-Geisser corrected: F(1.15, 21.93) = 5.11, p = .03, partial η² = .21
- (See the sketch below for how the corrected df changes p.)
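A small sketch of where those two p values come from: the Greenhouse-Geisser correction multiplies both df by epsilon and re-evaluates the same F against the F distribution. Epsilon here (≈ .577) is inferred from the corrected df reported above, not taken from the slides directly.

```python
# Minimal sketch: apply an epsilon df correction and recompute the p value.
from scipy.stats import f

F_value, df1, df2 = 5.11, 2, 38
epsilon = 1.15 / 2          # assumed from the slide's corrected numerator df

p_uncorrected = f.sf(F_value, df1, df2)
p_corrected = f.sf(F_value, df1 * epsilon, df2 * epsilon)
print(round(p_uncorrected, 3), round(p_corrected, 3))   # roughly .01 and .03
```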

Main effect 2
- F(2, 38) = , p < .001, partial η² = .87

Interaction  F (4, 76) = 17.16, p <.001, partial n 2 =.47

Contrasts  Remember these only make sense if:  You selected particular ones you were interested in  You had a reason to think there was a trend (i.e. time based or slightly continuous levels)

Between subjects box
- Ignore this box on totally repeated designs.

Marginal Means

- Before, we used dependent t tests to analyze the effects across levels.
- Now it's easier to ask SPSS to do marginal means analyses, because it automatically calculates those means for you.
- You can also create new average columns that are those means (i.e., average all the levels of one IV to create a WATER level). (A short sketch of this averaging follows below.)
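A minimal sketch of that averaging step; the column names are hypothetical stand-ins for however the water-level columns are named in the data file.

```python
# Minimal sketch: build a marginal "water" column by averaging every column that
# belongs to the water level of the Drink IV.
import pandas as pd

wide = pd.DataFrame({
    "water_positive": [10, 5, 8],
    "water_negative": [-20, -15, -9],
    "water_neutral":  [2, 0, 4],
})   # three example participants

wide["water_marginal"] = wide[["water_positive", "water_negative", "water_neutral"]].mean(axis=1)
print(wide["water_marginal"])   # the marginal mean column for the water level
```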

Interaction Means

Plots

Simple effect analysis
- Pick a direction – across or down!
- How many comparisons does that mean we have to do?

Simple effects
- Test = dependent t (because it's repeated-measures data).
- Post hoc = pick one! Let's do FH.

Correction  How many means?  3X3 anova = 9 means  FH = means – 1 for 9  DF residual = 76 (remember interaction)  Q = 4.40  Q* sqrt(msresidual / n)  4.40 * sqrt(38.25 / 20) = 6.08

Run the analysis
- Analyze > Compare Means > Paired Samples

Example
- The first two are significant; the last one is not, because 5.55 < 6.08.

Effect sizes
- Partial eta squared or omega squared for each effect.
- Cohen's d for post hocs / simple effects.
  - Remember there are two types, so you have to say which denominator you are using.