
2013/12/10

 Kendall’s tau is another non-parametric correlation coefficient.  Let x_1, …, x_n be a sample for random variable x and let y_1, …, y_n be a sample for random variable y of the same size n. There are C(n, 2) possible ways of selecting distinct pairs (x_i, y_i) and (x_j, y_j). For any such pair of pairs, classify it as concordant, discordant or neither as follows:

 concordant if (x_i > x_j and y_i > y_j) or (x_i < x_j and y_i < y_j)  discordant if (x_i > x_j and y_i < y_j) or (x_i < x_j and y_i > y_j)  neither if x_i = x_j or y_i = y_j (i.e. ties are not counted).

 Now let C = the number of concordant pairs and D = the number of discordant pairs. Then define tau as τ = (C − D) / C(n, 2).

 To easily calculate C − D, it is best to first put all the x data elements in ascending order. If x and y are perfectly correlated, then all the values of y will be in ascending order too. Otherwise there will be some inversions. For each i, count the number of j > i for which y_j < y_i. This sum is D. If there are no ties, then C = C(n, 2) − D.
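The pairwise definition above can be sketched in Python (this code and its function name are illustrative additions, not part of the original slides):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau: (C - D) / C(n, 2), with tied pairs counted as neither."""
    n = len(x)
    c = d = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            c += 1   # concordant: x and y move in the same direction
        elif s < 0:
            d += 1   # discordant: x and y move in opposite directions
        # s == 0 means a tie in x or y; counted as neither
    return (c - d) / (n * (n - 1) / 2)

print(kendall_tau([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0 (perfectly concordant)
print(kendall_tau([1, 2, 3, 4], [40, 30, 20, 10]))  # -1.0 (perfectly discordant)
```

The brute-force scan over all C(n, 2) pairs is O(n²); the inversion-count shortcut described above gives the same C − D after sorting by x.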

 The value of τ lies between −1 and +1.  This is a result of the fact that there are C(n, 2) pairings, so |C − D| ≤ C(n, 2).  If there are a large number of ties, then C(n, 2) in the denominator should be replaced by √((C(n, 2) − n_x)(C(n, 2) − n_y)), where n_x is the number of tied pairs involving x and n_y is the number of tied pairs involving y.

 The calculation of n_y is similar to that of D given above, namely for each i, count the number of j > i for which y_i = y_j. This sum is n_y. Calculating n_x is similar, although easier since the x_i are in ascending order. Once D, n_x and n_y are determined, C = C(n, 2) − D − n_x − n_y. This works well assuming that there are no values of i and j for which both x_i = x_j and y_i = y_j.
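A brute-force Python sketch of the tie-corrected calculation, assuming the √((C(n,2) − n_x)(C(n,2) − n_y)) denominator described above (the code and names are ours, not from the slides):

```python
from itertools import combinations
import math

def kendall_tau_ties(x, y):
    """Tau with the tie correction: divide C - D by
    sqrt((C(n,2) - n_x) * (C(n,2) - n_y))."""
    n = len(x)
    pairs = n * (n - 1) // 2   # C(n, 2)
    c = d = nx = ny = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        if xi == xj:
            nx += 1            # tied pair in x
        if yi == yj:
            ny += 1            # tied pair in y
        s = (xi - xj) * (yi - yj)
        if s > 0:
            c += 1
        elif s < 0:
            d += 1
    return (c - d) / math.sqrt((pairs - nx) * (pairs - ny))

print(kendall_tau_ties([1, 1, 2, 3], [1, 2, 2, 3]))  # 0.8
```

In the small example, 4 of the 6 pairs are concordant, none discordant, and one pair is tied in each variable, giving 4/√(5·5) = 0.8.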

 There is a commonly accepted measure of standard error for Kendall’s tau, namely s_τ = √(2(2n + 5) / (9n(n − 1))).  For sufficiently large n (generally n ≥ 10), the statistic z = τ / s_τ has an approximately standard normal distribution and so can be used for testing the null hypothesis of zero correlation.
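A small Python sketch (not from the slides) of this normal approximation, using s_τ = √(2(2n + 5)/(9n(n − 1))):

```python
import math

def tau_se(n):
    """Standard error of Kendall's tau under the null hypothesis of no association."""
    return math.sqrt(2 * (2 * n + 5) / (9 * n * (n - 1)))

def tau_z(tau, n):
    """z statistic for H0: tau = 0; approximately standard normal for n >= 10."""
    return tau / tau_se(n)

# For the smoking/longevity example below, n = 15:
print(round(tau_se(15), 3))  # 0.192
```

Note that n = 15 reproduces the s_τ ≈ 0.192 used in the confidence interval later in the example.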

 For smaller values of n, the table of critical values found in the Kendall’s Tau Table can be used.

 A study is designed to check the relationship between smoking and longevity. A sample of 15 men aged fifty and older was taken, and the average number of cigarettes smoked per day and the age at death were recorded, as summarized in the table in the figure. Can we conclude from the sample that longevity is independent of smoking?

 We begin by sorting the original data in ascending order by longevity and then creating entries for inversions and ties as described above.  Take a look at how we calculate the value in cell C8, i.e. the number of inversions for the data element in row 8. Since the number of cigarettes smoked by that person is 14 (the value in cell B8), we count the entries in column B below B8 that have a value smaller than 14. This is 5, since only the entries in cells B10, B14, B15, B16 and B18 have smaller values. We carry out the same calculation for each of the rows and sum the results to get 76 (the value in cell C19).

 This calculation is carried out by putting the formula =COUNTIF(B5:B18,"<"&B4) in cell C4.  Ties are handled in a similar way, using, for example, the formula =COUNTIF(B5:B18,"="&B4) in cell E4.  Since the p-value < α, the null hypothesis is rejected, and so we conclude there is a negative correlation between smoking and longevity.

 We can also establish a 95% confidence interval for tau as follows: τ ± z_crit ∙ s_τ = −0.472 ± (1.96)(0.192) = (−0.848, −0.096)


 Two-sample comparison of means testing can be turned into a correlation problem by combining the two samples into one (random variable x) and setting the random variable y (the dichotomous variable) to 0 for elements in one sample and to 1 for elements in the other sample.  It turns out that the two-sample analysis using the t-test is equivalent to the analysis of the correlation coefficient using the t-test.

 To investigate the effect of a new hay fever drug on driving skills, a researcher studies 24 individuals with hay fever: 12 who have been taking the drug and 12 who have not. All participants then entered a simulator and were given a driving test which assigned a score to each driver, as summarized in the figure. Calculate the correlation coefficient r for x and y, and then test the null hypothesis H0: ρ = 0.

H0: μ_control = μ_drug  Since t = 0.1 < 2.07 = t_crit (p-value > α = 0.05), we retain the null hypothesis; i.e. we are 95% confident that any difference between the two groups is due to chance.

 The values for the p-value and t are exactly the same as those that result from the t-test in the Example; again we conclude that the hay fever drug did not offer any significant improvement in driving results as compared to the control.

A rough guide to interpreting the size of the correlation coefficient r:

correlation coefficient r    correlation degree
0.8 and above                very high
0.6 – 0.8                    high
0.4 – 0.6                    normal
0.2 – 0.4                    low
0.2 and below                very low

 A variable is dichotomous if it takes only two values (usually coded 0 and 1).  The point-biserial correlation coefficient is simply the Pearson product-moment correlation coefficient computed when one or both of the variables are dichotomous.

 The point-biserial correlation coefficient can be computed from the t statistic as r = t / √(t² + df), where t is the test statistic for two-means hypothesis testing of variables x1 and x2 with t ~ T(df), x is the combination of x1 and x2, and y is the dichotomous variable.
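The identity r = t/√(t² + df) can be checked numerically. The following Python sketch uses made-up scores (the data, group labels, and function names are hypothetical, not from the slides):

```python
import math
from statistics import mean

def pearson_r(x, y):
    """Plain Pearson product-moment correlation."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def pooled_t(x1, x2):
    """Two-sample t statistic with pooled variance (equal variances assumed)."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = mean(x1), mean(x2)
    ss = sum((a - m1) ** 2 for a in x1) + sum((a - m2) ** 2 for a in x2)
    sp2 = ss / (n1 + n2 - 2)   # pooled variance
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

x1 = [10, 12, 9, 14, 11]   # hypothetical scores, group 1
x2 = [8, 7, 11, 6, 9]      # hypothetical scores, group 2
t = pooled_t(x1, x2)
df = len(x1) + len(x2) - 2

# Point-biserial r: Pearson r on the combined sample with a 0/1 group indicator
x = x1 + x2
y = [1] * len(x1) + [0] * len(x2)
r = pearson_r(x, y)
print(abs(r - t / math.sqrt(t * t + df)) < 1e-12)  # True
```

The match is exact (up to floating point), which is the sense in which the two-sample t-test and the correlation t-test are equivalent.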

 The effect size for the comparison of two means is given by d = (x̄_1 − x̄_2) / s_pooled.  This means that the difference between the average memory recall scores of the control group and the sleep-deprived group is only about 4.1% of a standard deviation. Note that this is the same effect size that was calculated in the earlier Example.

 Alternatively, we can use φ (phi) as a measure of effect size. Phi is nothing more than r. For this example φ = r ≈ 0.068. Since r² ≈ 0.0046, we know that 0.46% of the variation in the memory recall scores is based on the amount of sleep.  A rough estimate of effect size is that r = 0.5 represents a large effect size (explains 25% of the variance), r = 0.3 represents a medium effect size (explains 9% of the variance), and r = 0.1 represents a small effect size (explains 1% of the variance).


 In Independence Testing we used the chi-square test to determine whether two variables were independent. We now revisit that Example using dichotomous variables.

 A researcher wants to know whether there is a significant difference between two therapies for curing patients of cocaine dependence (defined as not taking cocaine for at least 6 months). She tests 132 patients and obtains the results in the figure. Calculate the point-biserial correlation coefficient for the data using dichotomous variables.

 In its standard two-group form, the point-biserial correlation coefficient is r = ((x̄_A − x̄_B) / s_t) ∙ √(p_A p_B), where x̄_A is the average of group A, s_t is the standard deviation of groups A and B combined, and p_A and p_B are the proportions of observations falling in groups A and B.

Chi-square tests for independence

 This time let x = 1 if the patient is cured and x = 0 if the patient is not cured, and let y = 1 if therapy 1 is used and y = 0 if therapy 2 is used. Thus for 28 patients x = 1 and y = 1, for 10 patients x = 0 and y = 1, for 48 patients x = 1 and y = 0, and for 46 patients x = 0 and y = 0. If we list all 132 pairs of x and y in a column, we can calculate the correlation coefficient to get r ≈ 0.207.

 If ρ = 0 (the null hypothesis), then the statistic χ² = n ∙ r² has an approximately chi-square distribution with one degree of freedom.  This property provides an alternative method for carrying out chi-square tests such as the one we did in Independence Testing.
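This property can be verified in Python with the 2×2 counts from the example (the variable names and cell layout are ours):

```python
import math

# 2x2 counts from the example (x = cured, y = therapy 1):
a, b = 28, 10   # therapy 1: cured, not cured
c, d = 48, 46   # therapy 2: cured, not cured
n = a + b + c + d  # 132

# phi coefficient = Pearson r of the two dichotomous variables
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
chi2 = n * phi ** 2  # the property: chi-square statistic = n * r^2
print(round(phi, 3), round(chi2, 2))  # 0.207 5.67
```

This reproduces both the correlation r ≈ 0.207 and the chi-square value 5.67 used in the test below.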

 Using Property 1 with Example 1, determine whether there is a significant difference between the two therapies for curing patients of cocaine dependence based on the data in the figure.

 Since the p-value = CHIDIST(5.67, 1) = 0.017 < α = 0.05, we again reject the null hypothesis and conclude there is a significant difference between the two therapies.

 If we calculate the value of χ² for independence as in Independence Testing, from the previous observation we conclude that r² = χ² / n, i.e. r = √(χ²/n) = √(5.67/132) ≈ 0.207.  This gives us a way to measure the effect size of the chi-square test of independence.

 There is clearly an important difference between the two therapies (not just a significant difference), but if we look at r² we see that only 4.3% of the variance is explained by the choice of therapy.

 We calculated the correlation coefficient of x with y by listing all 132 values and then using Excel’s correlation function CORREL. The following is an alternative approach for calculating r which is especially useful if n is very large. (Figure: data needed for calculating r.)

 First we repeat the data from the figure in Example 1 using the dummy variables x and y (in range F4:H7). Essentially this is a frequency table. We then calculate the means of x and y. E.g. the mean of x (in cell F10) is calculated by the formula =SUMPRODUCT(F4:F7,H4:H7)/H8.

 Next we calculate Σf(x − x̄)(y − ȳ), Σf(x − x̄)² and Σf(y − ȳ)² (in cells L8, M8 and N8), where f is the frequency of each (x, y) combination. E.g. the first of these terms is calculated by the formula =SUMPRODUCT(L4:L7,O4:O7). Now the point-biserial correlation coefficient is the first of these terms divided by the square root of the product of the other two, i.e. r = Σf(x − x̄)(y − ȳ) / √(Σf(x − x̄)² ∙ Σf(y − ȳ)²).
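The frequency-table shortcut can be sketched in Python, mirroring the SUMPRODUCT formulas (the row layout is ours; the counts are from Example 1):

```python
import math

# Frequency-table version of the correlation; rows are (x, y, frequency)
rows = [(1, 1, 28), (0, 1, 10), (1, 0, 48), (0, 0, 46)]
n = sum(f for _, _, f in rows)
mx = sum(x * f for x, _, f in rows) / n   # mean of x (SUMPRODUCT analogue)
my = sum(y * f for _, y, f in rows) / n   # mean of y

# Frequency-weighted sums of squares and cross products
sxy = sum(f * (x - mx) * (y - my) for x, y, f in rows)
sxx = sum(f * (x - mx) ** 2 for x, _, f in rows)
syy = sum(f * (y - my) ** 2 for _, y, f in rows)
r = sxy / math.sqrt(sxx * syy)
print(round(r, 3))  # 0.207
```

Because each distinct (x, y) combination is weighted by its frequency, the loop touches only 4 rows instead of all 132 observations, which is the point of the alternative approach when n is very large.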