Statistics in Applied Science and Technology, Chapter 14: Nonparametric Methods

Key Concepts in this Chapter
- Nonparametric methods
- Distribution-free methods
- Ranks of observations
- Wilcoxon Rank-Sum Test
- Kruskal-Wallis One-Way ANOVA by Ranks
- Spearman Rank-Order Correlation Coefficient (r_s)

Rationale for Nonparametric Methods
- Nonparametric methods, often referred to as distribution-free methods, do not require assumptions about the shape of the underlying population distribution or a large sample size.
- Nonparametric methods are appropriate when dealing with data measured on a nominal or ordinal scale.

Advantages and Disadvantages
Advantages:
- No restrictive assumptions, such as normality of the observations or a large sample size
- Easy and fast computation
- Suitable for nominal or ordinal data
Disadvantages:
- Less efficient (a larger sample size is needed to reject a false H_0)
- Less specific
- Make minimal use of the information in the distribution

Inherent Characteristics of Nonparametric Methods
- Nonparametric methods deal with the ranks of the observations rather than their values.
- Computation is simple.

Wilcoxon Rank-Sum Test (I)
- The Wilcoxon Rank-Sum Test is used to test whether two population distributions differ.
- It corresponds to the t test for two independent sample means.
- No assumptions about the shape of the population distributions are required.

Wilcoxon Rank-Sum Test (II)
- H_0: There is no difference between the two population distributions.
- H_1: There is a difference between the two population distributions.
- Test statistic: a Z test based on the sum of the ranks.

Wilcoxon Rank-Sum Test (III)
The test statistic Z can be calculated by:

Z = (W_1 - W_e) / σ_W

where:
- W_1 is the sum of the ranks of the first sample
- W_e is the expected sum of the ranks assuming H_0 is true, found using W_e = n_1(n_1 + n_2 + 1) / 2
- σ_W is the standard error, found using σ_W = sqrt[ n_1 n_2 (n_1 + n_2 + 1) / 12 ]
- n_1 and n_2 are the numbers of observations in the two samples, respectively

Wilcoxon Rank-Sum Test (IV)
Decision rule: At α = 0.05, reject H_0 if Z is above 1.96 or below -1.96. At α = 0.01, reject H_0 if Z is above 2.58 or below -2.58.
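
To make the computation concrete, here is a minimal Python sketch of the rank-sum Z statistic described above. The data, the function name, and the use of scipy.stats.rankdata to assign midranks to ties are illustrative assumptions, not part of the original slides.

```python
import numpy as np
from scipy.stats import rankdata, norm

def wilcoxon_rank_sum_z(sample1, sample2):
    """Normal (Z) approximation to the Wilcoxon rank-sum test for two independent samples."""
    n1, n2 = len(sample1), len(sample2)
    ranks = rankdata(np.concatenate([sample1, sample2]))  # tied values get their average rank
    w1 = ranks[:n1].sum()                                 # W_1: sum of ranks of sample 1
    we = n1 * (n1 + n2 + 1) / 2                           # W_e: expected rank sum under H_0
    sigma_w = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)       # standard error of W_1
    z = (w1 - we) / sigma_w
    p_value = 2 * norm.sf(abs(z))                         # two-tailed p-value
    return z, p_value

# Hypothetical data for two independent groups (not from the textbook)
group_a = [12.1, 14.3, 11.8, 15.2, 13.9, 12.7]
group_b = [16.4, 15.8, 17.1, 14.9, 16.0, 15.5]
z, p = wilcoxon_rank_sum_z(group_a, group_b)
print(f"Z = {z:.3f}, p = {p:.4f}")  # reject H_0 at alpha = 0.05 if |Z| > 1.96
```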

Kruskal-Wallis One-Way ANOVA by Ranks (I)
- The nonparametric equivalent of the one-way ANOVA (discussed in Chapter 10).
- Appropriate when the underlying populations are not normally distributed or the samples do not have equal variances.
- Appropriate when the data are ordinal.

Kruskal-Wallis One-Way ANOVA by Ranks (II)
- H_0: There are no differences among the population distributions of the k (more than two) groups.
- H_1: At least one group has a population distribution that differs from the others.
- Test statistic: an H test based on the sums of the ranks.

Kruskal-Wallis One-Way ANOVA by Ranks (III)
The test statistic H can be calculated by:

H = [12 / (N(N + 1))] * Σ_j (R_j² / n_j) - 3(N + 1), summing over j = 1, ..., k

where:
- k = the number of groups
- n_j = the number of observations in the jth group
- N = the total number of observations in all groups
- R_j = the sum of the ranks in the jth group

Kruskal-Wallis One-Way ANOVA by Ranks (IV)
- Decision rule: Reject H_0 when the calculated H exceeds the critical H, which can be found in Appendix F (textbook pg. 298).
- Tied observations influence H somewhat; a correction term introduced in the denominator adjusts for this effect (pg. 230).
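
The sketch below computes H directly from the formula above; the data and function name are hypothetical, midranks are used for ties, and no tie correction is applied. scipy.stats.kruskal is shown only as a cross-check (it applies a tie correction, so the two values match exactly only when there are no ties, as in this example).

```python
import numpy as np
from scipy.stats import rankdata, kruskal

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no correction for ties)."""
    pooled = np.concatenate(groups)
    n_total = len(pooled)                        # N: total number of observations
    ranks = rankdata(pooled)                     # ranks over the pooled data
    h, start = 0.0, 0
    for g in groups:
        nj = len(g)                              # n_j: observations in group j
        rj = ranks[start:start + nj].sum()       # R_j: rank sum of group j
        h += rj ** 2 / nj
        start += nj
    return 12 / (n_total * (n_total + 1)) * h - 3 * (n_total + 1)

# Hypothetical data for three groups (not from the textbook)
g1 = [27, 31, 29, 35, 33]
g2 = [40, 38, 44, 41, 39]
g3 = [30, 36, 34, 37, 32]
print("H =", round(kruskal_wallis_h(g1, g2, g3), 3))          # compare with critical H from Appendix F
print("scipy H =", round(kruskal(g1, g2, g3).statistic, 3))   # tie-corrected version
```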

Spearman Rank-Order Correlation Coefficient (r_s)
- Appropriate when two interval-ratio variables deviate from the normal distribution.
- Appropriate when we deal with two ordinal variables that have a broad range of categories, since using Gamma would be somewhat inconvenient.

Spearman Rank-Order Correlation Coefficient (r_s)
- r_s may take on values from -1 to +1. Values close to ±1 indicate a strong correlation; values close to zero indicate a weak association. The sign of r_s indicates the direction of the association.
- r_s² represents the proportional reduction in errors of prediction when predicting rank on one variable from rank on the other variable, as compared to predicting rank while ignoring the other variable.

Calculation of r_s
r_s can be calculated by:

r_s = 1 - (6 Σ d_i²) / (n(n² - 1))

where:
- d_i is the difference between the paired ranks
- n is the number of pairs
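
A minimal Python sketch of this formula, using hypothetical paired data and midranks for any ties; scipy.stats.spearmanr is included only as a cross-check (it uses a tie-aware computation, so results can differ slightly when ties are present).

```python
import numpy as np
from scipy.stats import rankdata, spearmanr

def spearman_rs(x, y):
    """Spearman rank-order correlation: r_s = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(x)
    d = rankdata(x) - rankdata(y)   # d_i: differences between paired ranks
    return 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))

# Hypothetical paired observations (not from the textbook)
x = [3.1, 4.5, 2.2, 5.0, 3.8, 4.1]
y = [8.0, 9.2, 6.5, 9.9, 8.4, 8.1]
print("r_s =", round(spearman_rs(x, y), 3))
rho, p = spearmanr(x, y)            # cross-check against scipy
print("scipy r_s =", round(rho, 3))
```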

Is r_s "statistically significant"?
- If the sample size is at least 10, and x and y represent randomly selected and independent pairs of ranks,
- we can use a t test of the hypotheses:
  - H_0: ρ_s = 0
  - H_1: ρ_s ≠ 0

Is r_s "statistically significant"?
The t test statistic is calculated by:

t = r_s * sqrt[ (n - 2) / (1 - r_s²) ]

with n - 2 degrees of freedom.
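
A small sketch of this significance test under hypothetical inputs (the r_s value and the number of pairs are made up); it converts r_s to a t statistic with n - 2 degrees of freedom and looks up a two-tailed p-value.

```python
import math
from scipy.stats import t as t_dist

def spearman_t_test(rs, n):
    """Two-tailed t test of H_0: rho_s = 0, with n - 2 degrees of freedom."""
    df = n - 2
    t_stat = rs * math.sqrt(df / (1 - rs ** 2))
    p_value = 2 * t_dist.sf(abs(t_stat), df)   # two-tailed p-value from the t distribution
    return t_stat, df, p_value

# Hypothetical values: r_s = 0.72 computed from n = 12 pairs of ranks
t_stat, df, p = spearman_t_test(0.72, 12)
print(f"t = {t_stat:.3f}, df = {df}, p = {p:.4f}")  # reject H_0 at alpha = 0.05 if p < 0.05
```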