NONPARAMETRIC STATISTICS

Presentation transcript:

NONPARAMETRIC STATISTICS
Assoc. Prof. Dr. Fikri GÖKPINAR

LECTURE 6: TESTS FOR MORE THAN TWO DEPENDENT SAMPLES

7.1 OBJECTIVES
In this lecture, you will learn the following items:
How to perform Friedman's S test for more than two dependent samples.
How to use SPSS® to perform Friedman's S test for more than two dependent samples.
How to perform a pairwise comparison following Friedman's S test.
How to use SPSS® to perform a pairwise comparison following Friedman's S test.

7.2 INTRODUCTION
When comparing treatments, if the same units (or units of the same type) are used in every group, these units are called blocks, and this kind of design is called a Randomized Complete Block Design. Friedman's S test can be described as a nonparametric version of the analysis for a Randomized Complete Block Design.

7.3 FRIEDMAN'S S TEST

Assumptions:
There are c treatments and n blocks.
There is no interaction between blocks and treatments.

Hypotheses:
H0: M_1 = M_2 = ... = M_c
H1: M_i ≠ M_j for at least one pair i ≠ j

Test statistic:
Let X_ij, i = 1, 2, ..., c, j = 1, 2, ..., n, be a random sample from c dependent populations.

7.3 FRIEDMAN'S S TEST

Data layout (rows = blocks, columns = treatments):

Block | Treatment 1 | Treatment 2 | ... | Treatment c
1     | X_11        | X_21        | ... | X_c1
2     | X_12        | X_22        | ... | X_c2
...   | ...         | ...         | ... | ...
n     | X_1n        | X_2n        | ... | X_cn

7.3 FRIEDMAN'S S TEST

Within each block, the c observations are ranked from 1 to c; the rank of X_ij is denoted R_ij. The ranks of the sample can be arranged in the same layout:

Block | Treatment 1 | Treatment 2 | ... | Treatment c
1     | R_11        | R_21        | ... | R_c1
2     | R_12        | R_22        | ... | R_c2
...   | ...         | ...         | ... | ...
n     | R_1n        | R_2n        | ... | R_cn
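The within-block ranking step can be illustrated with a minimal Python sketch; the matrix X below is a hypothetical 3-block, 4-treatment data set, with blocks as rows:

```python
import numpy as np
from scipy.stats import rankdata

# Hypothetical data: 3 blocks (rows) x 4 treatments (columns).
X = np.array([[70, 61, 82, 81],
              [77, 66, 74, 84],
              [76, 69, 73, 91]])

# Rank the observations within each block from 1 to c (ties receive average ranks).
R = np.apply_along_axis(rankdata, 1, X)
print(R)
# [[2. 1. 4. 3.]
#  [3. 1. 2. 4.]
#  [3. 1. 2. 4.]]
```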

7.3 FRIEDMAN'S S TEST

Using these definitions, Friedman's S statistic can be written as

S = \frac{12n}{c(c+1)} \sum_{i=1}^{c} \left( \bar{R}_{i.} - \bar{R}_{..} \right)^2

or, equivalently,

S = \frac{12}{nc(c+1)} \sum_{i=1}^{c} R_{i.}^2 - 3n(c+1)

where R_i. is the sum of the ranks for treatment i, \bar{R}_{i.} = R_{i.}/n is the corresponding mean rank, and \bar{R}_{..} = (c+1)/2 is the overall mean rank.
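As a minimal sketch, the second form of S can be computed directly in Python and cross-checked against scipy.stats.friedmanchisquare, which computes the same statistic (with an additional correction when ties are present); the data matrix here is the same small illustrative example as above:

```python
import numpy as np
from scipy.stats import rankdata, friedmanchisquare

def friedman_s(X):
    """Friedman's S for an n x c matrix X (rows = blocks, columns = treatments)."""
    n, c = X.shape
    R = np.apply_along_axis(rankdata, 1, X)    # within-block ranks R_ij
    col_sums = R.sum(axis=0)                   # rank sums R_i. per treatment
    return 12.0 / (n * c * (c + 1)) * np.sum(col_sums ** 2) - 3 * n * (c + 1)

X = np.array([[70, 61, 82, 81],
              [77, 66, 74, 84],
              [76, 69, 73, 91]])

print(friedman_s(X))                       # statistic from the formula above
print(friedmanchisquare(*X.T).statistic)   # agrees when there are no ties
```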

7.3 FRIEDMAN'S S TEST

Decision rule:
Let S_h denote the value of S computed from the sample data, and let S_α be the critical value obtained from the table of the exact distribution of S.

Reject H0 in favor of H1: M_i ≠ M_j (for some i ≠ j) if S_h ≥ S_α.

7.3 FRIEDMAN'S S TEST

Example 1: Six raters evaluated four restaurants. The results of the experiment are displayed in the table below (within-rater ranks in parentheses). Assuming the service ratings are not normally distributed, test the equality of the restaurants' service ratings at the 10% significance level.

Rater  | R1      | R2     | R3     | R4
1      | 70 (2)  | 61 (1) | 82 (4) | 81 (3)
2      | 77 (3)  | 66 (1) | 74 (2) | 84 (4)
3      | 76 (3)  | 69 (1) | 73 (2) | 91 (4)
4      | 80 (3)  | 63 (1) | 75 (2) | 96 (4)
5      | 71 (1)  | 78 (2) |    (3) | 98 (4)
6      | 83 (3)  | 68 (1) | 79 (2) |    (4)
R_i.   | 15      | 7      | 15     | 23
R̄_i.  | 2.50    | 1.17   | 2.50   | 3.83
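A short computational sketch for this example, using the column rank sums from the table above; note that the R3 sum of 15 is an inferred value (that cell is not shown on the original slide) obtained from the constraint that the four sums must total nc(c+1)/2 = 60:

```python
# Rank sums per restaurant; the value 15 for R3 is an assumption inferred from
# the requirement that the sums total n*c*(c+1)/2 = 60.
n, c = 6, 4
rank_sums = [15, 7, 15, 23]

S = 12.0 / (n * c * (c + 1)) * sum(r ** 2 for r in rank_sums) - 3 * n * (c + 1)
print(S)   # 12.8, to be compared with the exact critical value S_alpha at alpha = 0.10
```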

7.4 LARGE-SAMPLE APPROXIMATION

When the number of blocks n is greater than 15, a large-sample approximation can be used for the Friedman S statistic: S is then asymptotically chi-square distributed with c − 1 degrees of freedom. The decision rule can be given as follows:

Reject H0 in favor of H1: M_i ≠ M_j (for some i ≠ j) if S_h ≥ χ²_{c−1, α}.
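For reference, the large-sample critical value can be looked up numerically, for example with SciPy (a minimal sketch; the values of c and alpha below are just illustrative):

```python
from scipy.stats import chi2

# Chi-square critical value with c - 1 degrees of freedom at level alpha.
c, alpha = 5, 0.10
critical_value = chi2.ppf(1 - alpha, df=c - 1)
print(critical_value)   # approximately 7.78
```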

7.4 LARGE-SAMPLE APPROXIMATION

Example 2: Sixteen experts rated five brands of Colombian coffee in a taste-testing experiment. A rating on a 7-point scale (1 = extremely unpleasing, 7 = extremely pleasing) is given for each of four characteristics: taste, aroma, richness, and acidity. The table below displays the summated ratings over all four characteristics.

a. At the 0.10 level of significance, is there evidence of a difference in the median summated ratings of the five brands of Colombian coffee?

7.4 LARGE-SAMPLE APPROXIMATION

Summated ratings for the first two experts (within-expert ranks in parentheses) and the column rank sums over all 16 experts:

Expert | A       | B       | C       | D       | E
1      | 20 (3)  | 14 (2)  | 10 (1)  | 27 (4)  | 28 (5)
2      | 13 (2)  | 11 (1)  | 18 (3)  | 26 (5)  | 25 (4)
...    | ...     | ...     | ...     | ...     | ...
R_i.   | 33      | 28      | 36      | 69      | 74
R̄_i.  | 2.0625  | 1.7500  | 2.2500  | 4.3125  | 4.6250
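Using the column rank sums above, the large-sample test can be carried out as in the following sketch:

```python
from scipy.stats import chi2

n, c = 16, 5
rank_sums = [33, 28, 36, 69, 74]                 # R_i. from the table above

S = 12.0 / (n * c * (c + 1)) * sum(r ** 2 for r in rank_sums) - 3 * n * (c + 1)
critical_value = chi2.ppf(0.90, df=c - 1)        # alpha = 0.10, c - 1 = 4 df

print(round(S, 2), round(critical_value, 2))     # about 47.15 and 7.78
print("reject H0" if S >= critical_value else "fail to reject H0")
```

Since S clearly exceeds the critical value, there is evidence of a difference in the median summated ratings of the five brands.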

7.5 PAIRWISE COMPARISON

When the null hypothesis is rejected, we may also be interested in which group or groups create the difference. To investigate this, we can use pairwise comparisons. For a pair of treatments s and t the hypotheses are

H0: M_s = M_t
H1: M_s ≠ M_t

and the test statistic can be given as

S_{st} = \frac{\left| \bar{R}_{s.} - \bar{R}_{t.} \right|}{\sqrt{\dfrac{c(c+1)}{6n}}}
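A minimal Python sketch of this pairwise statistic, assuming the formula above (the helper name pairwise_friedman is purely illustrative):

```python
import math
from itertools import combinations

def pairwise_friedman(mean_ranks, n):
    """S_st = |Rbar_s - Rbar_t| / sqrt(c*(c+1)/(6*n)) for every pair of treatments."""
    c = len(mean_ranks)
    se = math.sqrt(c * (c + 1) / (6.0 * n))
    return {(s, t): abs(mean_ranks[s] - mean_ranks[t]) / se
            for s, t in combinations(range(c), 2)}
```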

7.5 PAIRWISE COMPARISON

Decision rule:
Reject H0 in favor of H1: M_s ≠ M_t if S_st ≥ S_α, where S_α is the critical value obtained from the table.

When n > 15, the large-sample approximation can also be used for the pairwise comparison; in that case H0 is rejected when the statistic exceeds the chi-square critical value χ²_{c−1, α}.

7.5 PAIRWISE COMPARISON

Example 3 (Example 1 continued): Compare the four restaurants using pairwise comparisons and identify which restaurants account for the difference.

7.5 PAIRWISE COMPARISON

Example 4 (Example 2 continued): Compare the five coffee brands using pairwise comparisons and identify which brands account for the difference.
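For Example 4, the mean ranks are already available from the table in Section 7.4, so the pairwise statistics can be sketched directly (an illustrative computation under the formula of Section 7.5):

```python
import math
from itertools import combinations

# Mean ranks of the five brands (A-E) from Example 2.
mean_ranks = {"A": 2.0625, "B": 1.75, "C": 2.25, "D": 4.3125, "E": 4.6250}
n, c = 16, 5
se = math.sqrt(c * (c + 1) / (6.0 * n))   # sqrt(c(c+1)/(6n)), about 0.559

for (s, rs), (t, rt) in combinations(mean_ranks.items(), 2):
    print(f"{s} vs {t}: S_st = {abs(rs - rt) / se:.2f}")
```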