Inference on Categorical Data


Chapter 12 Inference on Categorical Data

Section 12.1 Goodness-of-Fit Test

Objective Perform a goodness-of-fit test

Characteristics of the Chi-Square Distribution It is not symmetric. The shape of the chi-square distribution depends on the degrees of freedom, just like Student’s t-distribution. As the number of degrees of freedom increases, the chi-square distribution becomes more nearly symmetric. The values of χ² are nonnegative; that is, the values of χ² are greater than or equal to 0.
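
The short sketch below is not part of the slides; it is a minimal illustration, assuming Python with SciPy is available, of the characteristics listed above: the chi-square distribution puts no probability below 0, and its skewness shrinks toward 0 (more nearly symmetric) as the degrees of freedom grow.

```python
# Illustrative sketch (assumes SciPy): chi-square shape depends on df.
from scipy.stats import chi2

for df in (1, 3, 10, 30, 100):
    mean, var, skew = chi2.stats(df, moments="mvs")
    # skewness = sqrt(8/df), so it decreases toward 0 as df increases
    print(f"df={df:3d}  mean={float(mean):6.1f}  variance={float(var):7.1f}  skewness={float(skew):.3f}")

# All probability mass lies at or above 0.
print(chi2.cdf(0, df=5))   # 0.0
```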

A goodness-of-fit test is an inferential procedure used to determine whether a frequency distribution follows a specific distribution.

Expected Counts Suppose that there are n independent trials of an experiment with k ≥ 3 mutually exclusive possible outcomes. Let p1 represent the probability of observing the first outcome and E1 represent the expected count of the first outcome; p2 represent the probability of observing the second outcome and E2 represent the expected count of the second outcome; and so on. The expected counts for each possible outcome are given by Ei = μi = npi for i = 1, 2, …, k.

Parallel Example 1: Finding Expected Counts A sociologist wishes to determine whether the distribution for the number of years care-giving grandparents are responsible for their grandchildren is different today than it was in 2000. According to the United States Census Bureau, in 2000, 22.8% of grandparents have been responsible for their grandchildren less than 1 year; 23.9% of grandparents have been responsible for their grandchildren for 1 or 2 years; 17.6% of grandparents have been responsible for their grandchildren 3 or 4 years; and 35.7% of grandparents have been responsible for their grandchildren for 5 or more years. If the sociologist randomly selects 1,000 care-giving grandparents, compute the expected number within each category assuming the distribution has not changed from 2000.

Solution Step 1: The probabilities are the relative frequencies from the 2000 distribution: p<1yr = 0.228, p1-2yr = 0.239, p3-4yr = 0.176, p≥5yr = 0.357.

Solution Step 2: There are n = 1,000 trials of the experiment, so the expected counts are: E<1yr = np<1yr = 1000(0.228) = 228, E1-2yr = np1-2yr = 1000(0.239) = 239, E3-4yr = np3-4yr = 1000(0.176) = 176, E≥5yr = np≥5yr = 1000(0.357) = 357.
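
As a minimal sketch (assuming Python, not part of the slides), Step 2 is just the multiplication Ei = n·pi applied to each category:

```python
# Expected counts for Parallel Example 1: E_i = n * p_i under the 2000 distribution.
n = 1000
p_2000 = {"<1 yr": 0.228, "1-2 yr": 0.239, "3-4 yr": 0.176, "5+ yr": 0.357}

for years, p in p_2000.items():
    print(years, round(n * p))   # 228, 239, 176, 357 -- matching the values above
```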

Test Statistic for Goodness-of-Fit Tests Let Oi represent the observed counts of category i, Ei represent the expected counts of category i, k represent the number of categories, and n represent the number of independent trials of an experiment. Then the test statistic χ² = Σ (Oi − Ei)² / Ei approximately follows the chi-square distribution with k − 1 degrees of freedom, provided that all expected frequencies are greater than or equal to 1 (all Ei ≥ 1) and no more than 20% of the expected frequencies are less than 5.

CAUTION! Goodness-of-fit tests are used to test hypotheses regarding the distribution of a variable based on a single population. If you wish to compare two or more populations, you must use the tests for homogeneity presented in Section 12.2.

The Goodness-of-Fit Test To test the hypotheses regarding a distribution, we use the steps that follow. Step 1: Determine the null and alternative hypotheses. H0: The random variable follows a certain distribution. H1: The random variable does not follow a certain distribution.

Step 2: Decide on a level of significance, α, depending on the seriousness of making a Type I error.

Step 3: (a) Calculate the expected counts for each of the k categories. The expected counts are Ei = npi for i = 1, 2, …, k, where n is the number of trials and pi is the probability of the ith category, assuming that the null hypothesis is true.

Step 3: (b) Verify that the requirements for the goodness-of-fit test are satisfied. All expected counts are greater than or equal to 1 (all Ei ≥ 1). No more than 20% of the expected counts are less than 5. (c) Compute the test statistic: χ² = Σ (Oi − Ei)² / Ei. Note: Oi is the observed count for the ith category.

CAUTION! If the requirements in Step 3(b) are not satisfied, one option is to combine two or more of the low-frequency categories into a single category.

Classical Approach Step 4: Determine the critical value. All goodness-of-fit tests are right-tailed tests, so the critical value is χ²α with k − 1 degrees of freedom.

Classical Approach Step 5: Compare the critical value to the test statistic. If χ² > χ²α, reject the null hypothesis.

P-Value Approach Step 4: Use Table VII to obtain an approximate P-value by determining the area under the chi-square distribution with k-1 degrees of freedom to the right of the test statistic.

P-Value Approach Step 5: If the P-value < α, reject the null hypothesis.

Step 6: State the conclusion.
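
The sketch below is not from the slides; it is a minimal walk-through of Steps 3 through 5, assuming Python with NumPy and SciPy, applied to made-up counts from 120 rolls of a die claimed to be fair. The data and variable names are hypothetical placeholders for illustration only.

```python
# Goodness-of-fit procedure sketch (hypothetical die-roll data).
import numpy as np
from scipy.stats import chi2

observed = np.array([18, 22, 16, 25, 24, 15])   # hypothetical observed counts O_i
p_null = np.full(6, 1 / 6)                      # p_i under H0: the die is fair
alpha = 0.05

n = observed.sum()
expected = n * p_null                           # Step 3(a): E_i = n p_i

# Step 3(b): all E_i >= 1 and no more than 20% of the E_i below 5
assert (expected >= 1).all() and (expected < 5).mean() <= 0.20

chi_sq = ((observed - expected) ** 2 / expected).sum()   # Step 3(c): test statistic
df = observed.size - 1                                   # k - 1 degrees of freedom

critical = chi2.ppf(1 - alpha, df)   # Step 4 (classical): right-tailed critical value
p_value = chi2.sf(chi_sq, df)        # Step 4 (P-value): area to the right of the statistic

# Step 5: decision
print(f"chi-square = {chi_sq:.3f}, critical value = {critical:.3f}, P-value = {p_value:.3f}")
print("reject H0" if chi_sq > critical else "fail to reject H0")
```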

Parallel Example 2: Conducting a Goodness-of-Fit Test A sociologist wishes to determine whether the distribution for the number of years care-giving grandparents are responsible for their grandchildren is different today than it was in 2000. According to the United States Census Bureau, in 2000, 22.8% of grandparents have been responsible for their grandchildren less than 1 year; 23.9% of grandparents have been responsible for their grandchildren for 1 or 2 years; 17.6% of grandparents have been responsible for their grandchildren 3 or 4 years; and 35.7% of grandparents have been responsible for their grandchildren for 5 or more years. The sociologist randomly selects 1,000 care-giving grandparents and obtains the following data.
Number of Years   Observed Counts
<1                252
1-2               255
3-4               162
≥5                331

Test the claim that the distribution is different today than it was in 2000 at the α = 0.05 level of significance.

Solution Step 1: We want to know if the distribution today is different than it was in 2000. The hypotheses are then: H0: The distribution for the number of years care-giving grandparents are responsible for their grandchildren is the same today as it was in 2000. H1: The distribution for the number of years care-giving grandparents are responsible for their grandchildren is different today than it was in 2000.

Solution Step 2: The level of significance is α = 0.05. Step 3: (a) The expected counts were computed in Example 1.
Number of Years   Observed Counts   Expected Counts
<1                252               228
1-2               255               239
3-4               162               176
≥5                331               357

Solution Step 3: (b) Since all expected counts are greater than or equal to 5, the requirements for the goodness-of-fit test are satisfied. (c) The test statistic is χ² = (252 − 228)²/228 + (255 − 239)²/239 + (162 − 176)²/176 + (331 − 357)²/357 ≈ 6.605.

Solution: Classical Approach Step 4: There are k = 4 categories, so we find the critical value using 4 − 1 = 3 degrees of freedom. The critical value is χ²0.05 = 7.815.

Solution: Classical Approach Step 5: Since the test statistic, χ² ≈ 6.605, is less than the critical value, χ²0.05 = 7.815, we fail to reject the null hypothesis.

Solution: P-Value Approach Step 4: There are k = 4 categories. The P-value is the area under the chi-square distribution with 4 − 1 = 3 degrees of freedom to the right of χ² ≈ 6.605. Thus, P-value ≈ 0.09.

Solution: P-Value Approach Step 5: Since the P-value ≈ 0.09 is greater than the level of significance α = 0.05, we fail to reject the null hypothesis.

Solution Step 6: There is insufficient evidence to conclude that the distribution for the number of years care-giving grandparents are responsible for their grandchildren is different today than it was in 2000 at the α = 0.05 level of significance.
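
Not part of the slides: a minimal check, assuming SciPy is available, that scipy.stats.chisquare reproduces the statistic and P-value of Parallel Example 2.

```python
# Verify Parallel Example 2 with SciPy's built-in goodness-of-fit test.
from scipy.stats import chisquare

observed = [252, 255, 162, 331]
expected = [228, 239, 176, 357]   # from Parallel Example 1

result = chisquare(f_obs=observed, f_exp=expected)
print(result.statistic)   # roughly 6.60
print(result.pvalue)      # roughly 0.086 > 0.05, so fail to reject H0
```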

Section 12.2 Tests for Independence and the Homogeneity of Proportions

Objectives Perform a test for independence Perform a test for homogeneity of proportions

Objective 1 Perform a Test for Independence

The chi-square test for independence is used to determine whether there is an association between a row variable and column variable in a contingency table constructed from sample data. The null hypothesis is that the variables are not associated; in other words, they are independent. The alternative hypothesis is that the variables are associated, or dependent.

“In Other Words” In a chi-square independence test, the null hypothesis is always H0: The variables are independent. The alternative hypothesis is always H1: The variables are not independent.

The idea behind testing these types of claims is to compare actual counts to the counts we would expect if the null hypothesis were true (if the variables are independent). If a significant difference between the actual counts and expected counts exists, we would take this as evidence against the null hypothesis.

If two events are independent, then P(A and B) = P(A)P(B). We can use the Multiplication Rule for independent events to obtain the expected proportion of observations within each cell under the assumption of independence, and then multiply this result by n, the sample size, to obtain the expected count within each cell.

Parallel Example 1: Determining the Expected Counts in a Test for Independence In a poll, 883 males and 893 females were asked “If you could have only one of the following, which would you pick: money, health, or love?” Their responses are presented in the table below. Determine the expected counts within each cell assuming that gender and response are independent. Source: Based on a Fox News Poll conducted in January, 1999
        Money   Health   Love
Men       82      446     355
Women     46      574     273

Solution Step 1: We first compute the row and column totals:
                Money   Health   Love   Row Totals
Men               82      446     355      883
Women             46      574     273      893
Column totals    128     1020     628     1776

Solution Step 2: Next, compute the relative marginal frequencies for the row variable and the column variable:
                     Money              Health               Love                Relative Frequency
Men                    82                 446                  355               883/1776 ≈ 0.4972
Women                  46                 574                  273               893/1776 ≈ 0.5028
Relative Frequency   128/1776 ≈ 0.0721   1020/1776 ≈ 0.5743   628/1776 ≈ 0.3536   1

Solution Step 3: Assuming gender and response are independent, we use the Multiplication Rule for Independent Events to compute the proportion of observations we would expect in each cell.
        Money    Health   Love
Men     0.0358   0.2855   0.1758
Women   0.0362   0.2888   0.1778

Solution Step 4: We multiply the expected proportions from Step 3 by 1776, the sample size, to obtain the expected counts under the assumption of independence.
        Money                    Health                    Love
Men     1776(0.0358) ≈ 63.5808   1776(0.2855) ≈ 507.048    1776(0.1758) ≈ 312.2208
Women   1776(0.0362) ≈ 64.2912   1776(0.2888) ≈ 512.9088   1776(0.1778) ≈ 315.7728

Expected Frequencies in a Chi-Square Test for Independence To find the expected frequencies in a cell when performing a chi-square independence test, multiply the row total of the row containing the cell by the column total of the column containing the cell and divide this result by the table total. That is, Expected frequency = (row total)(column total) / (table total).
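
Not part of the slides: a minimal sketch, assuming NumPy is available, of the (row total)(column total)/(table total) formula applied to the poll data above.

```python
# Expected frequencies for the poll data via the row/column-total formula.
import numpy as np

observed = np.array([[82, 446, 355],    # men:   money, health, love
                     [46, 574, 273]])   # women: money, health, love

row_totals = observed.sum(axis=1)       # [883, 893]
col_totals = observed.sum(axis=0)       # [128, 1020, 628]
table_total = observed.sum()            # 1776

expected = np.outer(row_totals, col_totals) / table_total
print(expected.round(2))
# [[ 63.64 507.13 312.23]
#  [ 64.36 512.87 315.77]]
# These differ slightly from Steps 2-4 above, which rounded the marginal
# proportions to four decimal places before multiplying by 1776.
```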

Test Statistic for the Test of Independence Let Oi represent the observed number of counts in the ith cell and Ei represent the expected number of counts in the ith cell. Then χ² = Σ (Oi − Ei)² / Ei approximately follows the chi-square distribution with (r − 1)(c − 1) degrees of freedom, where r is the number of rows and c is the number of columns in the contingency table, provided that (1) all expected frequencies are greater than or equal to 1 and (2) no more than 20% of the expected frequencies are less than 5.

Chi-Square Test for Independence To test hypotheses regarding the association between (or independence of) two variables in a contingency table, we use the steps that follow. Step 1: Determine the null and alternative hypotheses. H0: The row variable and column variable are independent. H1: The row variable and column variable are dependent.

Step 2: Choose a level of significance, α, depending on the seriousness of making a Type I error.

Step 3: a) Calculate the expected frequencies (counts) for each cell in the contingency table. b) Verify that the requirements for the chi-square test for independence are satisfied: All expected frequencies are greater than or equal to 1 (all Ei ≥ 1). No more than 20% of the expected frequencies are less than 5.

Step 3: c) Compute the test statistic: χ² = Σ (Oi − Ei)² / Ei. Note: Oi is the observed count for the ith cell.

Classical Approach Step 4: Determine the critical value. All chi-square tests for independence are right-tailed tests, so the critical value is χ²α with (r − 1)(c − 1) degrees of freedom, where r is the number of rows and c is the number of columns in the contingency table.

Classical Approach Step 5: Compare the critical value to the test statistic. If χ² > χ²α, reject the null hypothesis.

P-Value Approach Step 4: Use Table VII to determine an approximate P-value by determining the area under the chi-square distribution with (r-1)(c-1) degrees of freedom to the right of the test statistic.

P-Value Approach Step 5: If the P-value < α, reject the null hypothesis.

Step 6: State the conclusion.
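
Not part of the slides: a minimal step-by-step sketch of this independence-test procedure, assuming Python with NumPy and SciPy, using a small hypothetical 2x2 contingency table (the counts are made up for illustration).

```python
# Chi-square test for independence, following the steps above (hypothetical data).
import numpy as np
from scipy.stats import chi2

observed = np.array([[30, 20],     # hypothetical 2x2 table: rows = groups,
                     [15, 35]])    # columns = response categories
alpha = 0.05

# Step 3(a): expected counts E = (row total)(column total) / (table total)
row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
expected = row_totals @ col_totals / observed.sum()

# Step 3(b): all expected counts >= 1 and at most 20% of them below 5
assert (expected >= 1).all() and (expected < 5).mean() <= 0.20

# Step 3(c): test statistic
chi_sq = ((observed - expected) ** 2 / expected).sum()
r, c = observed.shape
df = (r - 1) * (c - 1)

critical = chi2.ppf(1 - alpha, df)   # Step 4 (classical)
p_value = chi2.sf(chi_sq, df)        # Step 4 (P-value)

# Step 5: decision
print(f"chi-square = {chi_sq:.3f}, critical value = {critical:.3f}, P-value = {p_value:.4f}")
print("reject H0 (dependent)" if chi_sq > critical else "fail to reject H0")
```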

Parallel Example 2: Performing a Chi-Square Test for Independence In a poll, 883 males and 893 females were asked “If you could have only one of the following, which would you pick: money, health, or love?” Their responses are presented in the table below. Test the claim that gender and response are independent at the α = 0.05 level of significance. Source: Based on a Fox News Poll conducted in January, 1999
        Money   Health   Love
Men       82      446     355
Women     46      574     273

Solution Step 1: We want to know whether gender and response are dependent or independent, so the hypotheses are: H0: gender and response are independent. H1: gender and response are dependent. Step 2: The level of significance is α = 0.05.

Solution Step 3: (a) The expected frequencies were computed in Example 1 and are given in parentheses in the table below, along with the observed frequencies.
        Money           Health            Love
Men     82 (63.5808)    446 (507.048)     355 (312.2208)
Women   46 (64.2912)    574 (512.9088)    273 (315.7728)

Solution Step 3: (b) Since none of the expected frequencies are less than 5, the requirements for the chi-square test for independence are satisfied. (c) The test statistic is χ² = Σ (Oi − Ei)² / Ei ≈ 36.82.

Solution: Classical Approach Step 4: There are r = 2 rows and c = 3 columns, so we find the critical value using (2 − 1)(3 − 1) = 2 degrees of freedom. The critical value is χ²0.05 = 5.991.

Solution: Classical Approach Step 5: Since the test statistic, χ² ≈ 36.82, is greater than the critical value, χ²0.05 = 5.991, we reject the null hypothesis.

Solution: P-Value Approach Step 4: There are r = 2 rows and c = 3 columns, so we find the P-value using (2 − 1)(3 − 1) = 2 degrees of freedom. The P-value is the area under the chi-square distribution with 2 degrees of freedom to the right of χ² ≈ 36.82, which is approximately 0.

Solution: P-Value Approach Step 5: Since the P-value is less than the level of significance α = 0.05, we reject the null hypothesis.

Solution Step 6: There is sufficient evidence to conclude that gender and response are dependent at the α = 0.05 level of significance.
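
Not part of the slides: a minimal check, assuming SciPy is available, that scipy.stats.chi2_contingency reproduces Parallel Example 2 (it computes the expected counts, the statistic, and the P-value in one call).

```python
# Verify the gender/response independence test with SciPy.
from scipy.stats import chi2_contingency

observed = [[82, 446, 355],    # men:   money, health, love
            [46, 574, 273]]    # women: money, health, love

stat, p_value, dof, expected = chi2_contingency(observed)
print(round(stat, 2), dof)   # chi-square of roughly 36.8 with 2 degrees of freedom
print(p_value)               # essentially 0, so reject H0: the variables are dependent
```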

To see the relation between response and gender, we draw bar graphs of the conditional distributions of response by gender. Recall that a conditional distribution lists the relative frequency of each category of a variable, given a specific value of the other variable in a contingency table.

Parallel Example 3: Constructing a Conditional Distribution and Bar Graph Find the conditional distribution of response by gender for the data from the previous example. Source: Based on a Fox News Poll conducted in January, 1999

Solution We first compute the conditional distribution of response by gender.
        Money             Health            Love
Men     82/883 ≈ 0.0929   446/883 ≈ 0.5051   355/883 ≈ 0.4020
Women   46/893 ≈ 0.0515   574/893 ≈ 0.6428   273/893 ≈ 0.3057

Solution (continued): Bar graphs of the conditional distributions of response by gender.
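
Not part of the slides: a minimal sketch, assuming NumPy and Matplotlib are available, that reproduces the conditional distribution above and draws the side-by-side bar graphs described in the solution.

```python
# Conditional distribution of response by gender, plus bar graphs.
import numpy as np
import matplotlib.pyplot as plt

responses = ["Money", "Health", "Love"]
observed = np.array([[82, 446, 355],     # men
                     [46, 574, 273]])    # women

# Divide each row by its row total to condition on gender.
conditional = observed / observed.sum(axis=1, keepdims=True)
print(conditional.round(4))
# [[0.0929 0.5051 0.402 ]
#  [0.0515 0.6428 0.3057]]

x = np.arange(len(responses))
plt.bar(x - 0.2, conditional[0], width=0.4, label="Men")
plt.bar(x + 0.2, conditional[1], width=0.4, label="Women")
plt.xticks(x, responses)
plt.ylabel("Relative frequency")
plt.legend()
plt.show()
```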

Objective 2 Perform a Test for Homogeneity of Proportions

In a chi-square test for homogeneity of proportions, we test whether different populations have the same proportion of individuals with some characteristic.

The procedures for performing a test of homogeneity are identical to those for a test of independence.

Parallel Example 5: A Test for Homogeneity of Proportions The following question was asked of a random sample of individuals in 1992, 2002, and 2008: “Would you tell me if you feel being a teacher is an occupation of very great prestige?” The results of the survey are presented below. Test the claim that the proportion of individuals who feel being a teacher is an occupation of very great prestige is the same for each year at the α = 0.01 level of significance. Source: The Harris Poll
      1992   2002   2008
Yes    418    479    525
No     602    541    485

Solution Step 1: The null hypothesis is a statement of “no difference,” so the proportions of individuals who feel that being a teacher is an occupation of very great prestige are equal for each year. We state the hypotheses as follows: H0: p1 = p2 = p3. H1: At least one of the proportions is different from the others. Step 2: The level of significance is α = 0.01.

Solution Step 3: (a) The expected frequencies are found by multiplying the appropriate row and column totals and then dividing by the total sample size. They are given in parentheses in the table below, along with the observed frequencies.
      1992            2002            2008
Yes   418 (475.554)   479 (475.554)   525 (470.892)
No    602 (544.446)   541 (544.446)   485 (539.108)

Solution Step 3: (b) Since none of the expected frequencies are less than 5, the requirements are satisfied. (c) The test statistic is χ² = Σ (Oi − Ei)² / Ei ≈ 24.74.

Solution: Classical Approach Step 4: There are r = 2 rows and c = 3 columns, so we find the critical value using (2 − 1)(3 − 1) = 2 degrees of freedom. The critical value is χ²0.01 = 9.210.

Solution: Classical Approach Step 5: Since the test statistic, χ² ≈ 24.74, is greater than the critical value, χ²0.01 = 9.210, we reject the null hypothesis.

Solution: P-Value Approach Step 4: There are r = 2 rows and c = 3 columns, so we find the P-value using (2 − 1)(3 − 1) = 2 degrees of freedom. The P-value is the area under the chi-square distribution with 2 degrees of freedom to the right of χ² ≈ 24.74, which is approximately 0.

Solution: P-Value Approach Step 5: Since the P-value is less than the level of significance α = 0.01, we reject the null hypothesis.

Solution Step 6: There is sufficient evidence to reject the null hypothesis at the α = 0.01 level of significance. We conclude that the proportion of individuals who believe that teaching is a very prestigious career is different for at least one of the three years.
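
Not part of the slides: a minimal check, assuming SciPy is available, of Parallel Example 5. Because the test for homogeneity of proportions uses the same computation as the test for independence, scipy.stats.chi2_contingency applies directly.

```python
# Verify the homogeneity-of-proportions example with SciPy.
from scipy.stats import chi2_contingency

observed = [[418, 479, 525],    # "Yes" responses in 1992, 2002, 2008
            [602, 541, 485]]    # "No" responses in 1992, 2002, 2008

stat, p_value, dof, expected = chi2_contingency(observed)
print(round(stat, 2), dof)   # chi-square of roughly 24.7 with 2 degrees of freedom
print(p_value)               # far below 0.01, so reject H0: the proportions differ
```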