COURSE: JUST 3900 INTRODUCTORY STATISTICS FOR CRIMINAL JUSTICE
Instructor: Dr. John J. Kerbs, Associate Professor, Joint Ph.D. in Social Work and Sociology

Presentation transcript:

COURSE: JUST 3900 INTRODUCTORY STATISTICS FOR CRIMINAL JUSTICE
Instructor: Dr. John J. Kerbs, Associate Professor, Joint Ph.D. in Social Work and Sociology
Chapter 12: Introduction to Analysis of Variance
© DO NOT CITE, QUOTE, REPRODUCE, OR DISSEMINATE WITHOUT WRITTEN PERMISSION FROM THE AUTHOR. Dr. John J. Kerbs can be e-mailed for permission.

The Logic and the Process of Analysis of Variance

Chapter 12 presents the general logic and basic formulas for the hypothesis testing procedure known as analysis of variance (ANOVA). The purpose of ANOVA is much the same as the t tests presented in the preceding chapters: the goal is to determine whether the mean differences that are obtained for sample data are sufficiently large to justify a conclusion that there are mean differences between the populations from which the samples were obtained.

The Logic & Process of ANOVA

The difference between ANOVA and the t tests is that ANOVA can be used in situations where there are two or more means being compared, whereas the t tests are limited to situations where only two means are involved. ANOVA is necessary to protect researchers from excessive risk of a Type I error in situations where a study is comparing more than two population means.

The Logic & Process of ANOVA

These situations would require a series of several t tests to evaluate all of the mean differences. (Remember, a t test can compare only two means at a time.) Although each t test can be done with a specific α-level (risk of Type I error), the α-levels accumulate over a series of tests so that the final experiment-wise α-level can be quite large.

Note: While experiment-wise Type I error does accumulate, it is not a simple additive process.
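As a rough sketch of this accumulation, the snippet below computes 1 - (1 - α)^c for c pairwise t tests. The formula assumes the tests are independent, which pairwise tests on the same data are not; that approximation is part of why the accumulation is not simply additive. The numbers are illustrative, not from the slides.

```python
# Experiment-wise Type I error across a family of c t tests, each run
# at the same per-test alpha, assuming (for simplicity) independence.
alpha = 0.05

for k in (3, 4, 5):                      # number of group means
    c = k * (k - 1) // 2                 # pairwise t tests required
    experimentwise = 1 - (1 - alpha) ** c
    print(f"{k} means -> {c} t tests -> experiment-wise alpha ~ {experimentwise:.3f}")
```

With five means, ten pairwise tests push the experiment-wise risk to roughly 0.40, which is exactly the problem a single ANOVA avoids.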

The Logic & Process of ANOVA

ANOVA allows researchers to evaluate all of the mean differences in a single hypothesis test using a single α-level and, thereby, keeps the risk of a Type I error under control no matter how many different means are being compared. Although ANOVA can be used in a variety of different research situations, this chapter presents only independent-measures designs involving only one independent variable.

Typical Research Design for Analysis of Variance

ANOVA TERMS

In ANOVA, the variable (independent or quasi-independent) that designates the groups being compared is called a factor. The individual groups or treatment conditions that are used to make up a factor are called levels of the factor. Example: A study that looks at three different telephone conditions would have three levels of the factor.

Hypotheses for ANOVA

There are multiple means involved, and so the hypotheses can read as follows:

H₀: μ₁ = μ₂ = μ₃

H₁: There is at least one mean difference among the populations, or
H₁: μ₁ ≠ μ₂ ≠ μ₃ (all three means are different), or
H₁: μ₁ = μ₃, but μ₂ is different, or
H₁: μ₁ = μ₂, but μ₃ is different, or
H₁: μ₂ = μ₃, but μ₁ is different.

The Logic & Process of ANOVA: Understanding the F-Ratio

The test statistic for ANOVA is an F-ratio, which is a ratio of two sample variances. In the context of ANOVA, the sample variances are called mean squares, or MS values. The top of the F-ratio, MS_between, measures the size of the mean differences between samples. The bottom of the ratio, MS_within, measures the magnitude of the differences that would be expected without any treatment effects.

NOTE: The denominator for an F-Ratio is called an Error Term (Variance Caused By Random Differences)

The Logic & Process of ANOVA: Understanding Total Variability

The Logic & Process of ANOVA: Understanding F-Ratio Values

A large value for the F-ratio indicates that the obtained sample mean differences are greater than would be expected if the treatments had no effect. Each of the sample variances (MS values) in the F-ratio is computed using the basic formula for sample variance:

sample variance = s² = SS / df

The Logic & Process of ANOVA: Within- & Between-Treatment Variability

To obtain the SS and df values, you must go through an analysis that separates the total variability for the entire set of data into two basic components:

- within-treatment variability (which will be the denominator of the F-ratio), and
- between-treatment variability (which will become the numerator of the F-ratio).

The Logic & Process of ANOVA: Within-Treatment Variability

The first component of the F-ratio is within-treatments variability: MS_within measures the size of the differences that exist inside each of the samples. Because all the individuals in a sample receive exactly the same treatment, any differences (or variance) within a sample cannot be caused by different treatments. Thus, these differences are caused by only one source:

- Chance or error: The unpredictable differences that exist between individual scores are not caused by any systematic factors and are simply considered to be random chance or error.

The Logic & Process of ANOVA: Between-Treatment Variability

The second component of the F-ratio is between-treatments variability: MS_between measures the size of the differences between the sample means. For example, suppose that three treatments, each with a sample of n = 5 subjects, have means of M₁ = 1, M₂ = 2, and M₃ = 3. Notice that the three means are different; that is, they are variable.

The Logic & Process of ANOVA

By computing the variance for the three means, we can measure the size of the differences. Although it is possible to compute a variance for the set of sample means, it usually is easier to use the total, T, for each sample instead of the mean, and to compute the variance for the set of T values.

The Logic & Process of ANOVA: Two Sources of Variance

Logically, the differences (or variance) between means can be caused by two sources:

- Treatment effects: If the treatments have different effects, this could cause the mean for one treatment to be higher (or lower) than the mean for another treatment.
- Chance or sampling error: If there is no treatment effect at all, you would still expect some differences between samples. Mean differences from one sample to another are an example of random, unsystematic sampling error.

The Logic & Process of ANOVA: Back to the F-Ratio

Considering these sources of variability, the structure of the F-ratio becomes:

F = (treatment effect + random differences) / (random differences)

The Logic & Process of ANOVA: F-Ratio Values

When the null hypothesis is true and there are no differences between treatments, the F-ratio is balanced. That is, when the "treatment effect" is zero, the top and bottom of the F-ratio are measuring the same variance. In this case, you should expect an F-ratio near 1.00. When the sample data produce an F-ratio near 1.00, we will conclude that there is no significant treatment effect.

The Logic & Process of ANOVA: F-Ratio Values

On the other hand, a large treatment effect will produce a large value for the F-ratio. Thus, when the sample data produce a large F-ratio, we will reject the null hypothesis and conclude that there are significant differences between treatments. To determine whether an F-ratio is large enough to be significant, you must select an α-level, find the df values for the numerator and denominator of the F-ratio, and consult the F-distribution table to find the critical value.
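The same lookup can be done in code rather than in a printed F table. A small sketch using SciPy follows; the df values (k = 3 treatments, n = 5 per group) are illustrative assumptions, not from the slides.

```python
# Critical F value from the F distribution via SciPy.
from scipy import stats

alpha = 0.05
df_between = 2            # k - 1
df_within = 12            # N - k

f_crit = stats.f.ppf(1 - alpha, df_between, df_within)
print(f"Critical F({df_between}, {df_within}) at alpha = {alpha}: {f_crit:.2f}")
# Reject H0 when the obtained F-ratio exceeds this critical value.
```

For df = (2, 12) at α = .05 the critical value is about 3.88, matching the printed table.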

The Logic & Process of ANOVA: Structure & Sequence of ANOVA Calculations

Analysis of Variance & Post Tests

The null hypothesis for ANOVA states that for the general population there are no mean differences among the treatments being compared: H₀: μ₁ = μ₂ = μ₃ = ... When the null hypothesis is rejected, the conclusion is that there are significant mean differences. However, the ANOVA simply establishes that differences exist; it does not indicate exactly which treatments are different.

ANOVA Calculations & Notations: Learn Your Terms!!!

- k is used to identify the number of treatment conditions.
- n is used to identify the number of scores in each treatment condition.
- N is used to identify the total number of scores in the entire study; N = kn when samples are the same size.
- T stands for treatment total and is calculated as ∑X, the sum of the scores for each treatment condition.
- G stands for the sum of all scores in a study (the grand total); calculate it by adding up all N scores or by adding the treatment totals (G = ∑T).
- You will also need SS and M for each sample, and ∑X² for the entire set of all scores.
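The sketch below computes each piece of notation from a small made-up data set with k = 3 treatment conditions; the scores are illustrative only.

```python
# ANOVA notation computed from illustrative raw scores.
treatments = [
    [0, 1, 3, 1, 0],   # treatment 1
    [4, 3, 6, 3, 4],   # treatment 2
    [1, 2, 2, 0, 0],   # treatment 3
]

k = len(treatments)                                  # treatment conditions
n = len(treatments[0])                               # scores per condition
N = k * n                                            # total scores (equal n)
T = [sum(group) for group in treatments]             # treatment totals, ∑X
G = sum(T)                                           # grand total, G = ∑T
sum_x2 = sum(x * x for g in treatments for x in g)   # ∑X² over all scores

print(k, n, N, T, G, sum_x2)   # 3 5 15 [5, 20, 5] 30 106
```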

ANOVA Calculations: Step 1 (Analysis of Sum of Squares)

ANOVA Calculations: Step 2 (Analysis of DF)

- Calculate the total degrees of freedom (df_total): df_total = N - 1
- Calculate the within-treatment degrees of freedom (df_within): df_within = ∑(n - 1) = ∑df in each treatment, or df_within = N - k
- Calculate the between-treatments degrees of freedom (df_between): df_between = k - 1
- Check to see that df_total = df_within + df_between
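Since the Step 1 slide above is a figure, here is a worked sketch covering both the SS partition (Step 1, using the standard computational formulas such as SS_total = ∑X² - G²/N) and the df partition (Step 2), continuing the same illustrative data.

```python
# Steps 1 and 2: partition SS and df, then verify both identities.
treatments = [[0, 1, 3, 1, 0], [4, 3, 6, 3, 4], [1, 2, 2, 0, 0]]
k, n = len(treatments), len(treatments[0])
N = k * n
G = sum(sum(g) for g in treatments)
sum_x2 = sum(x * x for g in treatments for x in g)

ss_total = sum_x2 - G**2 / N
ss_within = sum(                               # SS inside each treatment
    sum(x * x for x in g) - sum(g)**2 / n
    for g in treatments
)
ss_between = sum(sum(g)**2 / n for g in treatments) - G**2 / N

df_total, df_within, df_between = N - 1, N - k, k - 1

assert abs(ss_total - (ss_within + ss_between)) < 1e-9
assert df_total == df_within + df_between
print(ss_total, ss_within, ss_between)         # 46.0 16.0 30.0
print(df_total, df_within, df_between)         # 14 12 2
```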

ANOVA Calculations: Step 3 (Calculation of Variances)

ANOVA Calculations: Step 4 (Calculate F-Ratio)
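Continuing the illustrative example, Steps 3 and 4 reduce to two divisions and a ratio; the SciPy call at the end is a cross-check computed directly from the raw scores.

```python
# Steps 3 and 4: mean squares and the F-ratio.
from scipy import stats

ss_between, ss_within = 30.0, 16.0
df_between, df_within = 2, 12

ms_between = ss_between / df_between      # MS_between = SS_between / df_between
ms_within = ss_within / df_within         # MS_within  = SS_within  / df_within
F = ms_between / ms_within
print(f"F({df_between}, {df_within}) = {F:.2f}")   # 11.25

# Same answer from SciPy's one-way ANOVA on the raw scores:
F_check, p = stats.f_oneway([0, 1, 3, 1, 0], [4, 3, 6, 3, 4], [1, 2, 2, 0, 0])
print(f"SciPy: F = {F_check:.2f}, p = {p:.4f}")
```

An F of 11.25 exceeds the critical value of 3.88 found earlier, so H₀ would be rejected for this data.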

Measuring Effect Size for an Analysis of Variance

As with other hypothesis tests, an ANOVA evaluates the significance of the sample mean differences; that is, are the differences bigger than would be reasonable to expect just by chance? With large samples, however, it is possible for relatively small mean differences to be statistically significant. Thus, the hypothesis test does not necessarily provide information about the actual size of the mean differences.

Measuring Effect Size for an Analysis of Variance (cont'd.)

To supplement the hypothesis test, it is recommended that you calculate a measure of effect size. For an analysis of variance, the common technique for measuring effect size is to compute the percentage of variance that is accounted for by the treatment effects.

Measuring Effect Size for an Analysis of Variance (cont'd.)

For the t statistics, this percentage was identified as r², but in the context of ANOVA the percentage is identified as η² (the Greek letter eta, squared). The formula for computing effect size is:

η² = SS_between / SS_total
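For the running illustrative example, the effect size works out as follows.

```python
# Effect size: eta squared = SS_between / SS_total.
ss_between, ss_total = 30.0, 46.0
eta_squared = ss_between / ss_total
print(f"eta squared = {eta_squared:.2f}")   # ~0.65: about 65% of the total
                                            # variance is accounted for by
                                            # the treatment effects
```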

Post Hoc Tests

With more than two treatments, this creates a problem: you must follow the ANOVA with additional tests, called post hoc tests, to determine exactly which treatments are different and which are not. Tukey's HSD and the Scheffé test are examples of post hoc tests. These tests are done after an ANOVA in which H₀ is rejected with more than two treatment conditions; the tests compare the treatments, two at a time, to test the significance of the mean differences.
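One way to run Tukey's HSD in Python is via statsmodels, as sketched below for the illustrative scores used throughout; this assumes statsmodels is installed and the group labels are hypothetical.

```python
# Tukey's HSD pairwise comparisons with statsmodels.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = np.array([0, 1, 3, 1, 0, 4, 3, 6, 3, 4, 1, 2, 2, 0, 0])
groups = np.repeat(["t1", "t2", "t3"], 5)   # treatment labels, n = 5 each

result = pairwise_tukeyhsd(scores, groups, alpha=0.05)
print(result)   # each pair's mean difference with a reject/retain decision
```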

Post Hoc Tests

Scheffé Test

- Uses an extremely cautious approach to reducing Type I error.
- One of the safest possible post hoc tests because it has one of the smallest risks of a Type I error.
- Uses an F-ratio for two treatments:
  - Numerator = MS_between using the two treatments
  - Denominator = MS_within for the overall ANOVA
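A hedged sketch of that F-ratio follows, continuing the illustrative example. The slide gives the numerator and denominator; dividing the pairwise SS_between by the overall df_between (k - 1), which is what makes the test so cautious, is an assumption consistent with standard presentations of the Scheffé test rather than something stated on the slide.

```python
# Scheffé F-ratio for one pair of treatments (a sketch, not a
# definitive implementation).
def scheffe_f(group_a, group_b, ms_within_overall, k):
    na, nb = len(group_a), len(group_b)
    ta, tb = sum(group_a), sum(group_b)
    g, n_pair = ta + tb, na + nb
    ss_between_pair = ta**2 / na + tb**2 / nb - g**2 / n_pair
    ms_between_pair = ss_between_pair / (k - 1)    # overall df_between
    return ms_between_pair / ms_within_overall

F_12 = scheffe_f([0, 1, 3, 1, 0], [4, 3, 6, 3, 4], 16.0 / 12, k=3)
print(f"Scheffé F for treatments 1 vs 2: {F_12:.2f}")
# Compare against the same critical F used for the overall ANOVA.
```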

Post Hoc Tests

Assumptions - Independent Measures ANOVA

1. The observations within each sample must be independent (see page 254).
2. The populations from which the samples are selected must be normal.
3. The populations from which the samples are selected must have equal variances (homogeneity of variances).

Do not be too concerned about the normality assumption when you study large samples, but do be concerned if there is reason to believe that the assumption has been violated. If you suspect that the assumption of homogeneity of variances is violated, please use Hartley's F-max test as discussed in Ch. 10.
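Hartley's F-max statistic is simply the largest sample variance over the smallest; a quick sketch on the illustrative data follows (the decision itself still requires an F-max table).

```python
# Hartley's F-max statistic for checking homogeneity of variance.
def sample_variance(group):
    n = len(group)
    m = sum(group) / n
    return sum((x - m) ** 2 for x in group) / (n - 1)   # s² = SS / df

groups = [[0, 1, 3, 1, 0], [4, 3, 6, 3, 4], [1, 2, 2, 0, 0]]
variances = [sample_variance(g) for g in groups]
f_max = max(variances) / min(variances)
print(f"F-max = {f_max:.2f}")   # values near 1.00 suggest similar variances
```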

Assumptions - Independent Measures ANOVA (cont'd.)

If you suspect that one or more of the three key assumptions for ANOVA have been violated, or you have large sample variance that prevents you from finding significant results:

- Transform the original scores to ranks, which leaves you with ordinal data.
- Use the Kruskal-Wallis test (see Appendix E for more details).
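SciPy provides the Kruskal-Wallis test directly, handling the ranking internally; a minimal sketch on the illustrative scores:

```python
# Kruskal-Wallis test as a fallback when ANOVA assumptions look violated.
from scipy import stats

H, p = stats.kruskal([0, 1, 3, 1, 0], [4, 3, 6, 3, 4], [1, 2, 2, 0, 0])
print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.4f}")
```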