Independent Samples ANOVA

Outline of Today’s Discussion
1. Independent Samples ANOVA: A Conceptual Introduction
2. The Equal Variance Assumption
3. Cumulative Type I Error & Post Hoc Tests

The Research Cycle [diagram]: Real World → (Abstraction) → Research Representation → (Methodology) → Research Results → (Data Analysis) → Research Conclusions → (Generalization) → back to the Real World.

Part 1 Independent Samples ANOVA: Conceptual Introduction

Independent Samples ANOVA
1. Potential Pop Quiz Question: In your own words, explain why a researcher would choose to use an ANOVA rather than a t-test.
2. Ratios, Ratios, Ratios (that is, one number “over” another number)
3. Let’s consider some concepts that you already know…but probably don’t think of as ratios…

Independent Samples ANOVA A z-score is a ratio! We consider the difference (numerator) within the “context” of the variability (denominator).

Independent Samples ANOVA A t statistic is a ratio! We consider the difference (numerator) within the “context” of the variability (denominator).

Independent Samples ANOVA
1. Like the z-score and the t statistic, the ANOVA (analysis of variance) is also a ratio.
2. Like the t-test, the ANOVA is used to evaluate the difference between means, within the context of the variability.
3. The ANOVA is distinct from the t-test, however, in that the ANOVA can compare multiple means to each other (the t-test can only do two at a time).
4. Let’s compare the t-test & ANOVA a little further…

Independent Samples ANOVA
The ANOVA is also called the F Statistic. Cool Factoid: t² = F (when there are only two groups).
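That t² = F factoid can be checked directly. Below is a minimal pure-Python sketch (hypothetical scores, standard library only) that computes an independent-samples t on two groups and then a one-way F on the same data:

```python
from statistics import mean, variance

# Hypothetical scores for two independent groups
a = [4.0, 5.0, 6.0, 5.0]
b = [7.0, 8.0, 6.0, 9.0]

# Independent-samples t: a difference (numerator) over variability (denominator)
n1, n2 = len(a), len(b)
sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
t = (mean(a) - mean(b)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

# One-way ANOVA F on the same two groups
grand = mean(a + b)
ss_between = n1 * (mean(a) - grand) ** 2 + n2 * (mean(b) - grand) ** 2
ss_within = sum((x - mean(a)) ** 2 for x in a) + sum((x - mean(b)) ** 2 for x in b)
F = (ss_between / 1) / (ss_within / (n1 + n2 - 2))  # df_between = 1, df_within = n1 + n2 - 2

print(t ** 2, F)  # the two values match
```

With two groups the squared t and the F agree exactly; with three or more groups the t-test no longer applies, which is the point of the next slides.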

Independent Samples ANOVA
1. Scores vary! It’s a fact of life. :-)
2. Let’s consider three sources of variability, and then return to how ANOVA deals with these…

Independent Samples ANOVA
1. One source of variability is the treatment effect - the different levels of the independent variable may cause variations in scores.
2. Another source of variability is individual differences - participants enter an experiment with different abilities, motivation levels, experiences, etc. People are unique! This causes scores to vary.
3. A third source of variability is experimental error - whenever we make a measurement, there is some amount of error (e.g., unreliable instruments, chance variations in the sample). The error in our measurements causes scores to vary.

Independent Samples ANOVA
1. Researchers try to minimize those last two sources of variability (individual differences, and experimental error).
2. They do this by “control” and by “balancing”, as we saw in the last section.
3. Individual differences and experimental error cause variability, both within and between treatment groups.
4. By contrast, treatment effects only cause variability between treatment groups…

Independent Samples ANOVA
Notice that the treatment effect pertains only to the between-group variability.

Independent Samples ANOVA
The ANOVA is also called the F Statistic. Remember: It’s just another ratio…no big whoop! ;) Let’s unpack this thing…
F = Variability Between Groups / Variability Within Groups

Independent Samples ANOVA
Here’s the ANOVA, “unpacked”. The ANOVA is also called the F Statistic.
F = (Treatment Effects + Individual Differences + Experimental Error) / (Individual Differences + Experimental Error)

Independent Samples ANOVA
The ANOVA is also called the F Statistic. Here’s the ANOVA, “unpacked” in a different way. Error variation includes both individual differences & experimental error.
F = (Systematic Variation + Error Variation) / Error Variation

Independent Samples ANOVA
The ANOVA is also called the F Statistic. Here’s the ANOVA, “unpacked” in a different way. Systematic variation includes only the treatment effects.
F = (Systematic Variation + Error Variation) / Error Variation

Independent Samples ANOVA
This is what the null hypothesis predicts. It states that treatment effects are zero, so the F ratio is expected to be about 1.
F = (0 + Individual Differences + Experimental Error) / (Individual Differences + Experimental Error)

Independent Samples ANOVA
The null hypothesis for the ANOVA. In the population, all means are equal.
H0: μ1 = μ2 = μ3 = μ4

Independent Samples ANOVA
The alternative hypothesis for the ANOVA. In the population, NOT all means are equal.
H1: not H0

Independent Samples ANOVA
1. Let’s consider a set of pictures, to further develop some intuitions about the F statistic (i.e., the ANOVA).
2. Remember that the F statistic is this ratio:
3. F = Between-Group Variance / Within-Group Variance

Independent Samples ANOVA
[Figure: Total Variance] Here, the variances are equal, so the F ratio = 1. Retain H0.

Independent Samples ANOVA
[Figure: Total Variance] Here, the variances aren’t equal: the F ratio < 1. Retain H0.

Independent Samples ANOVA
[Figure: Total Variance] Here, the variances aren’t equal: the F ratio > 1. Reject H0!
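Those pictures can be mimicked numerically. Here is a minimal sketch (hypothetical data; `f_ratio` is a helper name invented here) showing that groups with identical means yield F = 0, while widely separated means inflate F well above 1:

```python
from statistics import mean

def f_ratio(groups):
    # F = between-group variance / within-group variance
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand = mean([x for g in groups for x in g])
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

same = f_ratio([[4, 5, 6], [5, 6, 4], [6, 4, 5]])      # all group means equal 5
shift = f_ratio([[4, 5, 6], [8, 9, 7], [12, 10, 11]])  # group means 5, 8, 11
print(same, shift)  # tiny F (retain H0) vs. large F (reject H0)
```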

Independent Samples ANOVA
1. You remember our old friend, the variance?…
2. To get the variance we need to determine the sum of squares, then divide by the degrees of freedom (n for a population, n-1 for a sample).
3. Let’s see the total variance, and how ANOVA breaks it down into smaller portions…

Independent Samples ANOVA The sums of squares for the ANOVA, which is also called the F Statistic.

Independent Samples ANOVA The degrees of freedom for the ANOVA, which is also called the F Statistic.

Independent Samples ANOVA The summary table for the ANOVA, which is also called the F Statistic.
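As a minimal sketch of how the pieces of that summary table fit together (hypothetical three-group data, standard library only): SS and df are each split into between and within portions, each MS is SS divided by its df, and F is the ratio of the two MS values:

```python
from statistics import mean

groups = [[3.0, 4.0, 5.0], [6.0, 7.0, 8.0], [8.0, 9.0, 10.0]]  # hypothetical scores

k = len(groups)                                # number of groups
N = sum(len(g) for g in groups)                # total number of scores
grand = mean([x for g in groups for x in g])   # grand mean

ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
ss_total = sum((x - grand) ** 2 for g in groups for x in g)  # = SS_between + SS_within

df_between, df_within = k - 1, N - k
ms_between = ss_between / df_between
ms_within = ss_within / df_within
F = ms_between / ms_within

print(f"{'Source':<10}{'SS':>8}{'df':>4}{'MS':>8}{'F':>8}")
print(f"{'Between':<10}{ss_between:>8.2f}{df_between:>4}{ms_between:>8.2f}{F:>8.2f}")
print(f"{'Within':<10}{ss_within:>8.2f}{df_within:>4}{ms_within:>8.2f}")
print(f"{'Total':<10}{ss_total:>8.2f}{N - 1:>4}")
```

Note the additivity that makes the table work: SS_total = SS_between + SS_within, and likewise df_total = df_between + df_within.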

Independent Samples ANOVA Would someone explain this, and relate it to ANOVA?

Part 2 The Equal Variance Assumption

1. The ANOVA is based on a few assumptions, one of which is most important…
2. The Equal Variance Assumption - For the ANOVA to be appropriate, the variance (dispersion) must be comparable across the groups or conditions.
3. SPSS can help us decide whether to retain or reject the assumption…

The Equal Variance Assumption
In our SPSS output, we can view the Test of Homogeneity of Variances. SPSS computes a special statistic called the Levene Statistic.

The Equal Variance Assumption
As always, when the observed significance level, “Sig.”, is < 0.05, we reject something! When “Sig.” > 0.05, we retain something! Here, we RETAIN the equal variance assumption.
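SPSS reports the Levene test directly; as a rough sketch of what it computes (`levene_W` is a helper name invented here), the Levene statistic is essentially a one-way ANOVA run on each score’s absolute deviation from its group mean:

```python
from statistics import mean

def levene_W(groups):
    # Replace each score with its absolute deviation from its group's mean...
    z = [[abs(x - mean(g)) for x in g] for g in groups]
    k = len(groups)
    N = sum(len(grp) for grp in z)
    zbar = mean([v for grp in z for v in grp])
    # ...then form the usual between/within variance ratio on those deviations
    ms_between = sum(len(grp) * (mean(grp) - zbar) ** 2 for grp in z) / (k - 1)
    ms_within = sum((v - mean(grp)) ** 2 for grp in z for v in grp) / (N - k)
    return ms_between / ms_within

# Identical spreads (groups merely shifted): W = 0, clearly retain the assumption
print(levene_W([[1.0, 2.0, 3.0], [11.0, 12.0, 13.0]]))
# Very different spreads: W grows, pushing "Sig." toward rejecting the assumption
print(levene_W([[1.0, 2.0, 3.0], [0.0, 5.0, 10.0]]))
```

The “Sig.” value SPSS prints is the p-value obtained by referring W to an F distribution with (k-1, N-k) degrees of freedom.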

The Equal Variance Assumption
So, there are TWO STEPS!
Step 1: We decide whether to retain or reject the equal variance assumption. If we reject, we can’t use ANOVA (perhaps a non-parametric test could be used). If we retain, we go on to Step 2.
Step 2: We decide whether to retain or reject the null hypothesis for our study, namely that all μ’s are equal. For this we need to look at the ANOVA table…

The Equal Variance Assumption
If the observed significance level, ‘Sig.’, is < 0.05, we can reject the study’s null hypothesis. Otherwise we retain the null hypothesis. In this example, would we retain or reject?

Part 3 Cumulative Type I Error & Post Hoc Tests

Cumulative Type I Error & Post Hoc Tests
1. The ANOVA is very flexible in that it allows us to compare more than 2 groups (or conditions) simultaneously.
2. The overall ANOVA is called the “omnibus ANOVA” or “omnibus F”: This is the test in which all means of interest are compared simultaneously.
3. An omnibus F merely indicates that at least one of the means is different from another mean.
4. The omnibus F does NOT indicate which one is different from which!

Cumulative Type I Error & Post Hoc Tests
1. If we conduct an experiment a sufficiently large number of times…we are bound to find a “significant” F-value…just by chance!
2. In other words, as we run more and more statistical comparisons, the probability of finding a “significant” result accumulates…
3. Cumulative Type I error - (also called “familywise” Type I error) the increase in the likelihood of erroneously rejecting the null hypothesis when multiple statistical comparisons are made.
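The accumulation is easy to quantify: if each of k independent comparisons is run at α = .05, the chance of at least one false rejection is 1 − (1 − α)^k. A minimal sketch:

```python
alpha = 0.05
for k in (1, 3, 6, 10):
    familywise = 1 - (1 - alpha) ** k  # P(at least one Type I error in k comparisons)
    print(f"{k:>2} comparisons -> familywise alpha = {familywise:.3f}")
```

By ten comparisons the familywise rate is already around .40, which is why the post hoc “corrections” on the next slides exist.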

Cumulative Type I Error & Post Hoc Tests
1. To guard against cumulative Type I error, there are various “correction” procedures for controlling Type I error, collectively called ‘Post Hoc Tests’.
2. Each procedure for correcting cumulative Type I error involves a slight modification to the critical value (i.e., the number to beat).
3. Specifically, the critical value is increased so that it becomes harder to beat “the number to beat”. :-)
4. Three of the more common “correction” procedures (i.e., Post Hoc Tests) are the Scheffé, the Tukey, and the Dunnett.
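The slides name the Scheffé, Tukey, and Dunnett procedures; as a minimal illustration of the shared idea (each individual comparison must clear a stricter bar as comparisons multiply), here is the simpler Bonferroni-style adjustment, which is not one of those three but works the same way in spirit:

```python
from itertools import combinations

groups = ["A", "B", "C", "D"]          # four treatment groups (hypothetical)
pairs = list(combinations(groups, 2))  # every pairwise comparison a post hoc test examines
alpha = 0.05
per_test_alpha = alpha / len(pairs)    # Bonferroni: split alpha across the comparisons
print(len(pairs), per_test_alpha)      # 6 pairs, so each test must pass at a stricter level
```

With four groups there are six pairwise comparisons, so each one is tested at roughly .008 rather than .05, keeping the familywise rate near .05.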

Cumulative Type I Error & Post Hoc Tests
1. If your F statistic is still significant after the critical value has been “corrected” by one of these post hoc tests, you have made a strong case. And remember, the burden of proof is on you.
2. We will not go into the differences among these three post hoc tests here…but the Scheffé is considered the most conservative “check” on cumulative Type I error.

Cumulative Type I Error & Post Hoc Tests
1. To summarize, the post hoc tests allow us to do two things.
2. First, post hoc tests allow us to see exactly which pairs of means differ from each other (the omnibus F can’t do that when there are more than 2 means).
3. Second, post hoc tests control for cumulative Type I error.

Cumulative Type 1 Error & Post Hoc Tests Post Hoc Tests in SPSS