Non-parametric Tests Research II MSW PT Class 8
Key Terms
Power of a test: the probability of rejecting a false null hypothesis (i.e., of detecting a relationship when it exists).
Power efficiency: the power of a test relative to that of its most powerful alternative. For example, if the power efficiency of a certain nonparametric test for a difference of means is 0.9 at a sample size of 10, then, when interval-scale measurement and normality can be assumed (so the more powerful test applies), the t-test achieves the same power with a sample size of 9.
Choice of nonparametric test
The choice depends on the level of measurement obtained (nominal, ordinal, or interval), the power of the test, whether the samples are related or independent, the number of samples, and the availability of software support (e.g., SPSS).
Related samples usually refer to matched-pair samples (formed using randomization) or before-after samples. Other cases are usually treated as independent samples. For instance, in a survey using random sampling, a sub-sample of males and a sub-sample of females can be considered independent samples because all respondents are randomly selected.
One-sample case
Binomial – tests whether the observed distribution of a dichotomous variable (a variable that has only two values) is the same as that expected from a given binomial distribution. The default value of p is 0.5, but you can change it. For example, if a couple has given birth to 8 baby girls in a row and you would like to test whether their probability of giving birth to a girl is greater than 0.6 or 0.7, you can test the hypothesis by changing the default value of p in the SPSS procedure.
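Outside SPSS, the same one-sample binomial test can be sketched in Python with scipy.stats.binomtest (available in scipy 1.7+); the 8-girls figures below are the hypothetical ones from the slide.

from scipy.stats import binomtest  # requires scipy >= 1.7

# 8 girls out of 8 births; test H0: p = 0.5 against H1: p > 0.5
result = binomtest(k=8, n=8, p=0.5, alternative='greater')
print(result.pvalue)  # about 0.0039, so 8 girls in a row is unlikely if p = 0.5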
Chi-square – tests whether the observed distribution is the same as a certain hypothesized distribution. The default null hypothesis is an even (uniform) distribution across categories.
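For illustration, a minimal Python sketch of the same idea using scipy.stats.chisquare, which likewise defaults to a uniform expected distribution; the category counts are invented.

from scipy.stats import chisquare

observed = [30, 22, 18, 30]    # hypothetical counts in four categories
stat, p = chisquare(observed)  # expected frequencies default to an even split
print(stat, p)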
Kolmogorov-Smirnov – compares the distribution of a variable with a uniform, normal, Poisson, or exponential distribution. Null hypothesis: the observed values were sampled from a distribution of that type.
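A hedged Python sketch of the one-sample test against a normal distribution (scipy.stats.kstest); the data values and the hypothesized mean and SD are made up.

import numpy as np
from scipy.stats import kstest

data = np.array([2.1, 3.4, 2.8, 3.0, 2.5, 3.9, 2.7, 3.2])
# H0: the data were sampled from a normal distribution with mean 3.0 and SD 0.5
stat, p = kstest(data, 'norm', args=(3.0, 0.5))
print(stat, p)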
Runs
A run is defined as a sequence of cases on the same side of the cut point (an uninterrupted course of some state or condition, e.g. a run of good luck). You should use the Runs Test procedure when you want to test the hypothesis that the values of a variable are ordered randomly with respect to a cut point of your choosing (default cut point: the median).
E.g. suppose you ask 20 students how well they understood a lecture on a scale from 1 to 5, and the class median is 3. If the first 10 students give values higher than 3 and the second 10 give values lower than 3, there are only 2 runs: 5445444545 2222112211. In a random situation there should be more runs, but the number should not be close to 20 either, because that would mean the values alternate exactly (a value below 3 followed by one above it, and vice versa), e.g. 2, 4, 1, 5, 1, 4, 2, 5, 1, 4, 2, 4. The Runs Test is often used as a precursor to tests that compare the means of two or more groups, including the Independent-Samples T Test, One-Way ANOVA, Two-Independent-Samples Tests, and Tests for Several Independent Samples procedures.
Note: In this data set, the 80 social workers (coded 1) are listed together, followed by the 120 non-social workers (coded 2); obviously, the order is not random. Since there are more non-social workers, the median is 2. There are only 2 runs: one of values below the median (2) and one of values greater than or equal to it.
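To make the idea of a run concrete, here is a small Python sketch that simply counts runs around a cut point (just the counting step, not the SPSS significance test); the ratings are the 20 hypothetical student responses from the earlier example.

# Count runs relative to a cut point (the class median of 3 in the example)
ratings = [5,4,4,5,4,4,4,5,4,5, 2,2,2,2,1,1,2,2,1,1]
cut = 3

sides = ['high' if x >= cut else 'low' for x in ratings]
runs = 1 + sum(1 for a, b in zip(sides, sides[1:]) if a != b)
print(runs)  # 2 runs: all the high values first, then all the low values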
Two-sample case (related samples)
McNemar – tests whether the changes in proportions are the same for pairs of dichotomous variables. McNemar's test is computed like the usual chi-square test, but only the two cells in which the classifications don't match are used. Null hypothesis: people are equally likely to fall into the two contradictory classification categories.
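As a sketch outside SPSS, McNemar's test can be run with statsmodels (mcnemar in statsmodels.stats.contingency_tables); the 2 x 2 before/after table below is invented.

import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired yes/no classifications:
# rows = before (yes, no), columns = after (yes, no)
table = np.array([[30, 12],
                  [ 5, 23]])
result = mcnemar(table, exact=True)  # uses only the two discordant cells (12 and 5)
print(result.statistic, result.pvalue)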
Sign test – tests whether the numbers of differences (+ve or -ve) between two samples are approximately the same. Each pair of scores (before and after) is compared: when "after" > "before" it is a + sign, when smaller a - sign, and when both are the same it is a tie. The sign test does not use all the information available (the size of the difference), but it requires fewer assumptions about the sample and can avoid the influence of outliers.
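One way to compute the sign test by hand (a sketch, not the SPSS procedure): count the + and - signs, drop the ties, and apply a binomial test with p = 0.5. The before/after scores below are made up.

from scipy.stats import binomtest

before = [3, 5, 2, 4, 3, 4, 2, 5, 3, 4]
after  = [4, 5, 3, 5, 2, 5, 3, 5, 4, 4]

plus  = sum(a > b for a, b in zip(after, before))  # + signs
minus = sum(a < b for a, b in zip(after, before))  # - signs (ties are dropped)
print(binomtest(plus, plus + minus, p=0.5).pvalue)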
Example: to test the association between the following two perceptions, "Social workers help the disadvantaged" and "Social workers bring hope to those in adverse situations."
Wilcoxon matched-pairs signed-ranks test – similar to the sign test, but takes into consideration the ranks of the magnitudes of the differences among the pairs of values. (The sign test considers only the direction of the difference, not its magnitude.) The test requires that the differences (of the true values) be a sample from a symmetric distribution, but it does not require normality. It's helpful to run a stem-and-leaf plot of the differences first.
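A minimal sketch with scipy.stats.wilcoxon, reusing the hypothetical before/after scores from the sign-test sketch (in a real analysis you would first check that the differences look roughly symmetric).

from scipy.stats import wilcoxon

before = [3, 5, 2, 4, 3, 4, 2, 5, 3, 4]
after  = [4, 5, 3, 5, 2, 5, 3, 5, 4, 4]
stat, p = wilcoxon(after, before)  # ranks the paired differences; zero differences dropped by default
print(stat, p)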
Two-sample case (independent samples)
Mann-Whitney U – similar to the Wilcoxon matched-pairs signed-ranks test, except that the samples are independent rather than paired. It's the most commonly used alternative to the independent-samples t test. Null hypothesis: the population means are the same for the two groups. The actual computation of the Mann-Whitney test is simple: you rank the combined data values for the two groups, then find the average rank in each group. Requirement: the population variances for the two groups must be the same, but the shape of the distribution does not matter.
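A sketch with scipy.stats.mannwhitneyu on two small hypothetical independent groups:

from scipy.stats import mannwhitneyu

group_a = [12, 15, 11, 18, 14, 16]
group_b = [10,  9, 13,  8, 11, 12]
stat, p = mannwhitneyu(group_a, group_b, alternative='two-sided')
print(stat, p)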
Kolmogorov-Smirnov Z – tests whether two distributions are different. It is used when only a few values are available on the ordinal scale. The K-S test is more powerful than the Mann-Whitney U test when the two distributions differ in dispersion rather than in central tendency.
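The two-sample version can be sketched with scipy.stats.ks_2samp (the ordinal ratings are illustrative):

from scipy.stats import ks_2samp

ratings_a = [1, 2, 2, 3, 3, 3, 4, 5]
ratings_b = [2, 3, 3, 3, 3, 4, 4, 4]
stat, p = ks_2samp(ratings_a, ratings_b)  # compares the two empirical distributions
print(stat, p)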
Wald-Wolfowitz runs – based on the number of runs within each group when the cases are placed in rank order. Moses test of extreme reactions – tests whether the range (excluding the lowest 5% and the highest 5%) of an ordinal variable is the same in the two groups.
K-sample case (independent samples)
Kruskal-Wallis one-way ANOVA – more powerful than the chi-square test when an ordinal scale can be assumed. It is computed exactly like the Mann-Whitney test, except that there are more groups. The data must be independent samples from populations with the same shape (but not necessarily normal).
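A sketch with scipy.stats.kruskal on three hypothetical independent groups:

from scipy.stats import kruskal

group_1 = [3, 4, 2, 5, 4]
group_2 = [2, 3, 3, 2, 4]
group_3 = [5, 4, 5, 4, 5]
stat, p = kruskal(group_1, group_2, group_3)
print(stat, p)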
K related samples
Friedman two-way ANOVA – tests whether the k related samples could have come from the same population with respect to mean rank.
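A sketch with scipy.stats.friedmanchisquare, where each list holds one rating per subject under one condition (made-up ratings for six subjects under three related conditions):

from scipy.stats import friedmanchisquare

cond_1 = [3, 4, 2, 5, 4, 3]
cond_2 = [2, 3, 3, 4, 3, 2]
cond_3 = [5, 5, 4, 5, 4, 4]
stat, p = friedmanchisquare(cond_1, cond_2, cond_3)
print(stat, p)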
Cochran Q – determines whether it is likely that the k related samples could have come from the same population with respect to the proportion or frequency of "successes" in the various samples. In other words, it compares only dichotomous variables. Let's try this in class.
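A sketch using statsmodels (cochrans_q in statsmodels.stats.contingency_tables), with an invented subjects-by-items matrix of 0/1 outcomes:

import numpy as np
from statsmodels.stats.contingency_tables import cochrans_q

# Rows = 8 hypothetical respondents, columns = 3 dichotomous items (1 = "success")
data = np.array([[1, 1, 0],
                 [1, 0, 0],
                 [1, 1, 1],
                 [0, 0, 0],
                 [1, 1, 0],
                 [1, 0, 1],
                 [0, 1, 0],
                 [1, 1, 0]])
result = cochrans_q(data)
print(result.statistic, result.pvalue)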