Chapter 20: Statistical Tests for Ordinal Data


1 Chapter 20: Statistical Tests for Ordinal Data

2 Tests for Ordinal Data Chapter 20 introduces four statistical techniques that have been developed specifically for use with ordinal data; that is, data where the measurement procedure simply arranges the subjects into a rank-ordered sequence. The statistical methods presented in this chapter can be used when the original data consist of ordinal measurements (ranks), or when the original data come from an interval or ratio scale but are converted to ranks because they do not satisfy the assumptions of a standard parametric test such as the t statistic.

3 Tests for Ordinal Data (cont.) Four statistical methods are introduced: 1. The Mann-Whitney test evaluates the difference between two treatments or two populations using data from an independent-measures design; that is, two separate samples. 2. The Wilcoxon test evaluates the difference between two treatment conditions using data from a repeated-measures design; that is, the same sample is tested/measured in both treatment conditions.

4 Tests for Ordinal Data (cont.) 3. The Kruskal-Wallis test evaluates the differences between three or more treatments (or populations) using a separate sample for each treatment condition. 4. The Friedman test evaluates the differences between three or more treatments for studies using the same group of participants in all treatments (a repeated-measures study).

5 The Mann-Whitney U Test The Mann-Whitney test can be viewed as an alternative to the independent-measures t test. The test uses the data from two separate samples to test for a significant difference between two treatments or two populations. However, the Mann-Whitney test only requires that you are able to rank order the individual scores; there is no need to compute means or variances.

6 The Mann-Whitney U Test (cont.) The null hypothesis for the Mann-Whitney test simply states that there is no systematic or consistent difference between the two treatments (populations) being compared.

8 The Mann-Whitney U Test (cont.) The calculation of the Mann-Whitney U statistic requires: 1. Combine the two samples and rank order the individuals in the combined group.

9 The Mann-Whitney U Test (cont.) 2. Once the complete set is rank ordered, you can compute the Mann-Whitney U in either of two ways: a. Find the sum of the ranks for each of the original samples, and use the formula to compute a U statistic for each sample. b. Compute a score for each sample by viewing the ranked data (1st, 2nd, etc.) as finishing positions in a race. Each subject in sample A receives one point for every individual from sample B that he/she beats; the total number of points is the U statistic for sample A. In the same way, a U statistic is computed for sample B.

10 The Mann-Whitney U Test (cont.) 3. The Mann-Whitney U is the smaller of the two U statistics computed for the samples.
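The three steps above can be sketched in plain Python. This is a minimal illustration, not library code: the function name is ours, ties are handled with average ranks, and the rank-sum formula U_A = n_A·n_B + n_A(n_A + 1)/2 − ΣR_A is the standard textbook formula that the slides refer to but do not print.

```python
def mann_whitney_u(sample_a, sample_b):
    """Mann-Whitney U for two independent samples (smaller of U_A and U_B)."""
    # Step 1: combine the two samples and rank order the combined group.
    combined_sorted = sorted(list(sample_a) + list(sample_b))

    def avg_rank(v):
        # Tied scores share the average of the ranks they occupy.
        first = combined_sorted.index(v)   # 0-based position of first occurrence
        count = combined_sorted.count(v)
        return first + (count + 1) / 2     # average of ranks first+1 .. first+count

    # Step 2a: sum the ranks for each original sample and apply the formula.
    n_a, n_b = len(sample_a), len(sample_b)
    sum_ranks_a = sum(avg_rank(v) for v in sample_a)
    sum_ranks_b = sum(avg_rank(v) for v in sample_b)
    u_a = n_a * n_b + n_a * (n_a + 1) / 2 - sum_ranks_a
    u_b = n_a * n_b + n_b * (n_b + 1) / 2 - sum_ranks_b

    # Step 3: the Mann-Whitney U is the smaller of the two.
    return min(u_a, u_b)
```

With no overlap between the samples, e.g. `mann_whitney_u([1, 2, 3], [4, 5, 6])`, the result is U = 0, matching the extreme case described in the next slide.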

11 The Mann-Whitney U Test (cont.) If there is a consistent difference between the two treatments, the two samples should be distributed at opposite ends of the rank ordering. In this case, the final U value should be small. At the extreme, when there is no overlap between the two samples, you will obtain U = 0. Thus, a small value for the Mann-Whitney U indicates a difference between the treatments.

12 The Mann-Whitney U Test (cont.) To determine whether the obtained U value is sufficiently small to be significant, you must consult the Mann-Whitney table. For large samples, the obtained U statistic can be converted to a z-score and the critical region can be determined using the unit normal table.
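The large-sample conversion mentioned above can be sketched as follows. The mean n₁n₂/2 and standard deviation √(n₁n₂(n₁ + n₂ + 1)/12) are the usual textbook values for the normal approximation (they are not shown on the slide), and this sketch omits the tie correction.

```python
import math

def mann_whitney_z(u, n_a, n_b):
    """Normal approximation for the Mann-Whitney U (large samples, no tie correction)."""
    mu_u = n_a * n_b / 2                                   # mean of U under H0
    sigma_u = math.sqrt(n_a * n_b * (n_a + n_b + 1) / 12)  # standard deviation of U
    return (u - mu_u) / sigma_u
```

The resulting z-score is then evaluated against the critical region from the unit normal table, exactly as for any z test.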

14 The Wilcoxon T Test The Wilcoxon test can be viewed as an alternative to the repeated-measures t test. The test uses the data from one sample where each individual has been observed in two different treatment conditions to test for a significant difference between the two treatments. However, the Wilcoxon test only requires that you are able to rank order the difference scores; there is no need to measure how much difference exists for each subject, or to compute a mean or variance for the difference scores.

15 The Wilcoxon T Test (cont.) The null hypothesis for the Wilcoxon test simply states that there is no systematic or consistent difference between the two treatments being compared.

17 The Wilcoxon T Test (cont.) The calculation of the Wilcoxon T statistic requires: 1. Observe the difference between treatment 1 and treatment 2 for each subject. 2. Rank order the absolute size of the differences without regard to sign (increases are positive and decreases are negative). 3. Find the sum of the ranks for the positive differences and the sum of the ranks for the negative differences. 4. The Wilcoxon T is the smaller of the two sums.
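A minimal Python sketch of these four steps (the function name is ours; zero differences are assumed absent, a case the slides do not address, and ties among the absolute differences get average ranks):

```python
def wilcoxon_t(treatment_1, treatment_2):
    """Wilcoxon signed-ranks T for one sample measured in two conditions."""
    # Step 1: difference score for each subject.
    diffs = [a - b for a, b in zip(treatment_1, treatment_2)]
    # Step 2: rank the absolute sizes of the differences.
    abs_sorted = sorted(abs(d) for d in diffs)

    def avg_rank(size):
        first = abs_sorted.index(size)
        count = abs_sorted.count(size)
        return first + (count + 1) / 2

    # Step 3: sum the ranks separately for positive and negative differences.
    sum_positive = sum(avg_rank(abs(d)) for d in diffs if d > 0)
    sum_negative = sum(avg_rank(abs(d)) for d in diffs if d < 0)
    # Step 4: T is the smaller of the two sums.
    return min(sum_positive, sum_negative)
```

If every difference points the same way, e.g. `wilcoxon_t([10, 12, 14], [8, 9, 11])`, one sum is zero and T = 0, the extreme case described in the next slide.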

18 The Wilcoxon T Test (cont.) If there is a consistent difference between the two treatments, the difference scores should be consistently positive (or consistently negative). At the extreme, all the differences will be in the same direction and one of the two sums will be zero. (If there are no negative differences then ΣRanks = 0 for the negative differences.) Thus, a small value for T indicates a difference between treatments.

19 The Wilcoxon T Test (cont.) To determine whether the obtained T value is sufficiently small to be significant, you must consult the Wilcoxon table. For large samples, the obtained T statistic can be converted to a z-score and the critical region can be determined using the unit normal table.
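As with the Mann-Whitney U, the large-sample conversion uses a normal approximation. The mean n(n + 1)/4 and standard deviation √(n(n + 1)(2n + 1)/24) below are the usual textbook values (not shown on the slide); this is a sketch without a tie correction.

```python
import math

def wilcoxon_z(t, n):
    """Normal approximation for the Wilcoxon T (n = number of difference scores)."""
    mu_t = n * (n + 1) / 4                                  # mean of T under H0
    sigma_t = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)     # standard deviation of T
    return (t - mu_t) / sigma_t
```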

20 The Kruskal-Wallis Test The Kruskal-Wallis test can be viewed as an alternative to a single-factor, independent-measures analysis of variance. The test uses data from three or more separate samples to evaluate differences among three or more treatment conditions.

21 The Kruskal-Wallis Test (cont.) The test requires that you are able to rank order the individuals but does not require numerical scores. The null hypothesis for the Kruskal-Wallis test simply states that there are no systematic or consistent differences among the treatments being compared.

22 The Kruskal-Wallis Test (cont.) The calculation of the Kruskal-Wallis statistic requires: 1. Combine the individuals from all the separate samples and rank order the entire group. 2. Regroup the individuals into the original samples and compute the sum of the ranks (T) for each sample. 3. A formula is used to compute the Kruskal-Wallis statistic, which is distributed as a chi-square statistic with degrees of freedom equal to the number of samples minus one. The obtained value must be greater than the critical value for chi-square to reject H₀ and conclude that there are significant differences among the treatments.
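These steps can be sketched in Python. The formula below is the standard Kruskal-Wallis H, H = 12/(N(N + 1)) · Σ(T²/n) − 3(N + 1), which the slide refers to but does not print; the function name is ours and ties get average ranks.

```python
def kruskal_wallis_h(*samples):
    """Kruskal-Wallis H for three or more independent samples."""
    # Step 1: combine all samples and rank order the entire group.
    combined_sorted = sorted(v for s in samples for v in s)

    def avg_rank(v):
        first = combined_sorted.index(v)
        count = combined_sorted.count(v)
        return first + (count + 1) / 2

    n_total = len(combined_sorted)
    # Step 2: rank sum T for each original sample; then sum T**2 / n over samples.
    weighted = sum(sum(avg_rank(v) for v in s) ** 2 / len(s) for s in samples)
    # Step 3: the H statistic, evaluated against chi-square with df = k - 1.
    return 12 / (n_total * (n_total + 1)) * weighted - 3 * (n_total + 1)
```

The result is compared against the chi-square critical value with df = (number of samples) − 1, as the slide describes.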

25 The Friedman Test The Friedman test can be viewed as an alternative to a single-factor, repeated-measures analysis of variance. The test uses data from one sample to evaluate differences among three or more treatment conditions.

26 The Friedman Test (cont.) The test requires that you are able to rank order the individuals across treatments but does not require numerical scores. The null hypothesis for the Friedman test simply states that there are no systematic or consistent differences among the treatments being compared.

27 The Friedman Test (cont.) The calculation of the Friedman statistic requires: 1. Each individual (or the individual’s scores) must be ranked across the treatment conditions. 2. The sum of the ranks (R) is computed for each treatment. 3. A formula is used to compute the Friedman test statistic, which is distributed as chi-square with degrees of freedom equal to the number of treatments minus one. The obtained value must be greater than the critical value for chi-square to reject H₀ and conclude that there are significant differences among the treatments.
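A Python sketch of the Friedman procedure. The formula is the standard chi-square-distributed statistic χ²_F = 12/(nk(k + 1)) · ΣR² − 3n(k + 1), with n participants and k treatments; the slide mentions a formula without printing it, and the function name and data layout here are our own conventions.

```python
def friedman_chi_square(rows):
    """Friedman test statistic.

    rows: one list per participant, holding that participant's scores
    across the k treatment conditions (same treatment order in every row).
    """
    n = len(rows)       # number of participants
    k = len(rows[0])    # number of treatments
    rank_sums = [0.0] * k
    for row in rows:
        # Step 1: rank each participant's scores across the treatments.
        row_sorted = sorted(row)
        for j, score in enumerate(row):
            first = row_sorted.index(score)
            count = row_sorted.count(score)
            rank_sums[j] += first + (count + 1) / 2   # average rank within the row
    # Step 2: sum of ranks R for each treatment; Step 3: the chi-square statistic,
    # evaluated with df = k - 1.
    sum_squared = sum(r ** 2 for r in rank_sums)
    return 12 / (n * k * (k + 1)) * sum_squared - 3 * n * (k + 1)
```

For example, two participants whose scores increase across three treatments in the same order, `friedman_chi_square([[1, 2, 3], [4, 5, 6]])`, yield the maximum statistic for n = 2 and k = 3.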