Chapter 12: Introduction to Analysis of Variance


The Logic and the Process of Analysis of Variance Chapter 12 presents the general logic and basic formulas for the hypothesis-testing procedure known as analysis of variance (ANOVA). The purpose of ANOVA is much the same as that of the t test: a t test compares the distance between two sample means with the distance expected by chance (the standard error). Likewise, the goal of ANOVA is to determine whether the variance (a measure of squared distance) among the sample means is sufficiently large to justify the conclusion that there are mean differences between the populations from which the samples were obtained.

The Logic and the Process of Analysis of Variance (cont'd.) The difference between ANOVA and the t tests is that ANOVA can be used in situations where there are two or more means being compared, whereas the t tests are limited to situations where only two means are involved. When only two means are involved, either test can be used, since these two techniques always result in the same statistical decision (p. 420). ANOVA is necessary to protect researchers from excessive risk of a Type I error (rejecting a null hypothesis that is actually true) in situations where a study is comparing more than two population means.
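
This agreement is no accident: for two independent samples, the F-ratio is the square of the t statistic (F = t²), so the two tests always reach the same decision. A minimal sketch using SciPy (the two groups of scores are invented for illustration):

```python
import numpy as np
from scipy import stats

# Two hypothetical treatment groups (scores invented for illustration)
group1 = np.array([4, 6, 5, 7, 8])
group2 = np.array([9, 8, 10, 7, 11])

t, p_t = stats.ttest_ind(group1, group2)   # independent-measures t test
f, p_f = stats.f_oneway(group1, group2)    # one-way ANOVA on the same data

print(f"t^2 = {t**2:.4f}, F = {f:.4f}")    # the two values match
print(f"p (t test) = {p_t:.4f}, p (ANOVA) = {p_f:.4f}")  # identical p-values
```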

The Logic and the Process of Analysis of Variance (cont'd.) These situations would require a series of several t tests to evaluate all of the mean differences. (Remember, a t test can compare only two means at a time.) Although each t test can be done with a specific α-level (risk of Type I error), the α-levels accumulate over a series of tests, so the final experimentwise α-level can be quite large. For this reason, ANOVA is the most practical method for comparing more than two sample means: it allows researchers to evaluate all of the mean differences in a single hypothesis test using a single α-level, and thus keeps the risk of a Type I error under control no matter how many means are being compared.
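
To see how quickly the risk accumulates, note that for c independent tests each run at level α, the experimentwise Type I error rate is approximately 1 − (1 − α)^c. A quick sketch (the three-treatment scenario, which requires three pairwise t tests, is illustrative):

```python
alpha = 0.05

# Comparing k = 3 treatment means pairwise requires c = 3 separate t tests:
# (1 vs 2), (1 vs 3), and (2 vs 3)
c = 3

# Probability of at least one Type I error across c independent tests
experimentwise_alpha = 1 - (1 - alpha) ** c
print(f"{experimentwise_alpha:.4f}")  # ~0.1426, nearly triple the per-test risk
```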

The Logic and the Process of Analysis of Variance (cont'd.) Although ANOVA can be used in a variety of different research situations, this chapter presents only independent-measures designs involving a single independent variable. Because ANOVA requires so many different formulas, it can be difficult to keep track of all the formulas and numbers, let alone calculate them all. For this reason, we will learn how ANOVA is organized, but you will not be required to work through all the calculations yourself. In other words, you need to know what ANOVA does and how to interpret its results. Those of you who want to work with ANOVA directly can do so using the problems in your text and your SPSS software.


The Logic and the Process of Analysis of Variance (cont'd.) Thus, the F-ratio has the same basic structure as the independent-measures t statistic presented in Chapter 10:

F = obtained mean differences (including treatment effects) / differences expected by chance (without treatment effects) = MS_between / MS_within


The Logic and the Process of Analysis of Variance (cont'd.) A large value for the F-ratio indicates that the obtained sample mean differences are greater than would be expected if the treatments had no effect. Each of the sample variances (the MS values) in the F-ratio is computed using the basic formula for sample variance:

sample variance = s² = SS / df
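
A minimal numeric sketch of this formula, using NumPy and an invented four-score sample, confirming that SS/df matches the usual sample variance:

```python
import numpy as np

scores = np.array([2, 4, 6, 8])              # invented sample
ss = ((scores - scores.mean()) ** 2).sum()   # SS = sum of squared deviations = 20
df = len(scores) - 1                         # df = n - 1 = 3

print(ss / df)                   # 6.666...
print(np.var(scores, ddof=1))    # same value from NumPy's built-in sample variance
```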

The Logic and the Process of Analysis of Variance (cont'd.) To obtain the SS and df values, you must go through an analysis that separates the total variability for the entire set of data into two basic components: between-treatments variability (which will become the numerator of the F-ratio) and within-treatments variability (which will become the denominator).

The Logic and the Process of Analysis of Variance (cont'd.) The two components of the F-ratio can be described as follows: Within-Treatments Variability: MSwithin measures the size of the differences that exist inside each of the samples. Because all the individuals in a sample receive exactly the same treatment, any differences (or variance) within a sample cannot be caused by different treatments.

The Logic and the Process of Analysis of Variance (cont'd.) Thus, these differences are caused by only one source, chance or error: the unpredictable differences that exist between individual scores are not caused by any systematic factor and are simply considered to be random error.

The Logic and the Process of Analysis of Variance (cont'd.) Between-Treatments Variability: MSbetween measures the size of the differences between the sample means. For example, suppose that three treatments, each with a sample of n = 5 subjects, have means of M1 = 1, M2 = 2, and M3 = 3. Notice that the three means are different; that is, they are variable.

The Logic and the Process of Analysis of Variance (cont'd.) By computing the variance for the three means we can measure the size of the differences. Although it is possible to compute a variance for the set of sample means, it usually is easier to use the total, T, for each sample instead of the mean, and compute variance for the set of T values.
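
For the example above, the treatment totals are T1 = 5, T2 = 10, and T3 = 15 (since T = nM with n = 5). A minimal sketch using the standard computational formula SS_between = Σ(T²/n) − G²/N, which works from the totals exactly as described:

```python
n = 5                      # scores per treatment
T = [5, 10, 15]            # treatment totals: T = n * M for M = 1, 2, 3
G = sum(T)                 # grand total = 30
N = n * len(T)             # total number of scores = 15

# Computational formula: SS_between = sum(T^2 / n) - G^2 / N
ss_between = sum(t ** 2 / n for t in T) - G ** 2 / N
print(ss_between)          # 10.0
```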

The Logic and the Process of Analysis of Variance (cont'd.) Logically, the differences (or variance) between means can be caused by two sources: Treatment Effects: If the treatments have different effects, this could cause the mean for one treatment to be higher (or lower) than the mean for another treatment. Chance or Sampling Error: If there is no treatment effect at all, you would still expect some differences between samples. Mean differences from one sample to another are an example of random, unsystematic sampling error.

The Logic and the Process of Analysis of Variance (cont'd.) Considering these sources of variability, the structure of the F-ratio becomes:

F = (treatment effect + random differences) / random differences

The Logic and the Process of Analysis of Variance (cont'd.) When the null hypothesis is true and there are no differences between treatments, the F-ratio is balanced. That is, when the "treatment effect" is zero, the top and bottom of the F-ratio are measuring the same variance. In this case, you should expect an F-ratio near 1.00. When the sample data produce an F-ratio near 1.00, we will conclude that there is no significant treatment effect.

The Logic and the Process of Analysis of Variance (cont'd.) On the other hand, a large treatment effect will produce a large value for the F-ratio. Thus, when the sample data produce a large F-ratio, we will reject the null hypothesis and conclude that there are significant differences between treatments. To determine whether an F-ratio is large enough to be significant, you must select an α-level, find the df values for the numerator and denominator of the F-ratio, and consult the F-distribution table to find the critical value (see Appendix B for the statistical tables).
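
Putting the pieces together for a running example (three treatments with n = 5 and means of 1, 2, and 3, as above), here is a minimal end-to-end sketch. The individual scores are invented so that each sample has the stated mean, and the critical-value lookup uses SciPy in place of the printed F table:

```python
import numpy as np
from scipy import stats

# Scores invented so each sample of n = 5 has the means from the example
groups = [np.array([0, 1, 1, 1, 2]),   # M1 = 1
          np.array([1, 2, 2, 2, 3]),   # M2 = 2
          np.array([2, 3, 3, 3, 4])]   # M3 = 3

n, k = 5, len(groups)
grand_mean = np.mean(np.concatenate(groups))

# Partition the total variability into its two components
ss_between = sum(n * (g.mean() - grand_mean) ** 2 for g in groups)  # 10.0
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)        # 6.0
df_between, df_within = k - 1, k * (n - 1)                          # 2, 12

ms_between = ss_between / df_between   # numerator: between-treatments variance
ms_within = ss_within / df_within      # denominator: within-treatments variance
F = ms_between / ms_within             # 5.0 / 0.5 = 10.0

# Critical value for alpha = .05 with df = (2, 12), replacing the table lookup
f_crit = stats.f.ppf(1 - 0.05, df_between, df_within)
print(f"F = {F:.2f}, critical F(2, 12) = {f_crit:.2f}")  # 10.00 > 3.89: reject H0
```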


Analysis of Variance and Post Tests The null hypothesis for ANOVA states that for the general population there are no mean differences among the treatments being compared: H0: μ1 = μ2 = μ3 = . . . When the null hypothesis is rejected, the conclusion is that there are significant mean differences. However, the ANOVA simply establishes that differences exist; it does not indicate exactly which treatments are different.

Measuring Effect Size for an Analysis of Variance As with other hypothesis tests, an ANOVA evaluates the significance of the sample mean differences; that is, are the differences bigger than would be reasonable to expect just by chance? With large samples, however, it is possible for relatively small mean differences to be statistically significant. Thus, the hypothesis test does not necessarily provide information about the actual size of the mean differences.

Measuring Effect Size for an Analysis of Variance (cont'd.) To supplement the hypothesis test, it is recommended that you calculate a measure of effect size. For an analysis of variance the common technique for measuring effect size is to compute the percentage of variance that is accounted for by the treatment effects.

Measuring Effect Size for an Analysis of Variance (cont'd.) For the t statistics, this percentage was identified as r², but in the context of ANOVA the percentage is identified as η² (the Greek letter eta, squared). The formula for computing effect size is:

η² = SS_between / SS_total
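
Continuing the running example: SS_between = 10 and SS_within = 6, and because SS_total = SS_between + SS_within in a single-factor ANOVA, SS_total = 16. A one-line sketch:

```python
ss_between, ss_within = 10.0, 6.0      # values from the worked example above
ss_total = ss_between + ss_within      # SS_total = SS_between + SS_within
eta_squared = ss_between / ss_total
print(f"{eta_squared:.3f}")            # 0.625: treatment accounts for 62.5% of variance
```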

Post Hoc Tests With more than two treatments, this ambiguity creates a problem. Specifically, you must follow the ANOVA with additional tests, called post hoc tests, to determine exactly which treatments are different and which are not. Tukey's HSD test and the Scheffé test are examples of post hoc tests. These tests are done after an ANOVA in which H0 was rejected and there are more than two treatment conditions; they compare the treatments, two at a time, to test the significance of the mean differences.
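
As an illustration, Tukey's HSD test is available in the statsmodels library; the sketch below reuses the invented scores from the earlier example (the data and group labels are made up, but pairwise_tukeyhsd is the real statsmodels function):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Reuse the invented scores from the worked example above
scores = np.array([0, 1, 1, 1, 2,    # treatment A (M = 1)
                   1, 2, 2, 2, 3,    # treatment B (M = 2)
                   2, 3, 3, 3, 4])   # treatment C (M = 3)
labels = np.repeat(["A", "B", "C"], 5)

# Pairwise comparisons, two treatments at a time, with familywise alpha = .05
result = pairwise_tukeyhsd(scores, labels, alpha=0.05)
print(result.summary())   # one row per pair: A vs B, A vs C, B vs C
```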