Chi-squared Tests

We want to test the "goodness of fit" of a particular theoretical distribution to an observed distribution. The procedure is:

1. Set up the null and alternative hypotheses and select the significance level.
2. Draw a random sample of observations from a population or process.
3. Derive expected frequencies under the assumption that the null hypothesis is true.
4. Compare the observed frequencies and the expected frequencies.
5. If the discrepancy between the observed and expected frequencies is too great to attribute to chance fluctuations at the selected significance level, reject the null hypothesis.

Example 1: Five brands of coffee are taste-tested by 1000 people with the results below. Test at the 5% level the hypothesis that, in the general population, there is no difference in the proportions preferring each brand (i.e., H0: pA = pB = pC = pD = pE versus H1: not all the proportions are the same).

Brand preference | Observed frequency f_o
A | 210
B | 312
C | 170
D | 85
E | 223
Total | 1,000

If all the proportions were the same, we'd expect about 200 people in each group, since we have a total of 1000 people.

Brand preference | f_o | f_t
A | 210 | 200
B | 312 | 200
C | 170 | 200
D | 85 | 200
E | 223 | 200

We next compute the differences between the observed and theoretical frequencies.

Brand preference | f_o | f_t | f_o - f_t
A | 210 | 200 | 10
B | 312 | 200 | 112
C | 170 | 200 | -30
D | 85 | 200 | -115
E | 223 | 200 | 23

Then we square each of those differences.

Brand preference | f_o | f_t | f_o - f_t | (f_o - f_t)²
A | 210 | 200 | 10 | 100
B | 312 | 200 | 112 | 12,544
C | 170 | 200 | -30 | 900
D | 85 | 200 | -115 | 13,225
E | 223 | 200 | 23 | 529

Then we divide each of the squares by the expected frequency and add the quotients. The resulting statistic has a chi-squared (χ²) distribution.

Brand preference | f_o | f_t | f_o - f_t | (f_o - f_t)² | (f_o - f_t)²/f_t
A | 210 | 200 | 10 | 100 | 0.500
B | 312 | 200 | 112 | 12,544 | 62.720
C | 170 | 200 | -30 | 900 | 4.500
D | 85 | 200 | -115 | 13,225 | 66.125
E | 223 | 200 | 23 | 529 | 2.645
Total | 1,000 | 1,000 | 0 | | χ² = 136.490

The chi-squared (χ²) distribution is skewed to the right (i.e., it has the bump on the left and the tail on the right).

In these goodness-of-fit problems, the number of degrees of freedom is

dof = (number of categories) - (number of restrictions) - (number of parameters estimated).

In the current problem, we have 5 categories (the 5 brands). We have 1 restriction: when we determined our expected frequencies, we restricted our numbers so that the total would be the same as the total for the observed frequencies (1000). We didn't estimate any parameters in this particular problem. So dof = 5 - 1 - 0 = 4.

Large values of the χ² statistic indicate big discrepancies between the observed and theoretical frequencies. So when the χ² statistic is large, we reject the hypothesis that the theoretical distribution is a good fit. That means the critical region consists of the large values, the right tail; the rest of the distribution, to its left, is the acceptance region.

From the χ² table, we see that for a 5% test with 4 degrees of freedom, the cut-off point is 9.488. In the current problem, our χ² statistic had a value of 136.49, far inside the critical region. So we reject the null hypothesis and conclude that the proportions preferring each brand were not the same.
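The arithmetic above can be double-checked in Python. This is a sketch using scipy, which is not part of the original slides; note that brand E's count of 223 is inferred from the 1,000-person total, since the slide lists only A through D explicitly.

```python
from scipy import stats

# Observed preferences for the five brands; E's 223 is inferred
# from the total of 1000 tasters (the slide lists A-D explicitly).
observed = [210, 312, 170, 85, 223]
expected = [200] * 5  # equal preference under H0

chi2_stat, p_value = stats.chisquare(observed, f_exp=expected)
critical = stats.chi2.ppf(0.95, df=4)  # 5% test, dof = 5 - 1 - 0 = 4

print(round(chi2_stat, 2), round(critical, 3), chi2_stat > critical)
```

Since 136.49 far exceeds the 9.488 cut-off, the equal-proportions hypothesis is rejected, matching the slide's conclusion.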

Example 2: A diagnostic test of mathematics is given to a group of 1000 students. The administrator analyzing the results wants to know if the scores of this group differ significantly from those of the past. Test at the 10% level.

[Table: Grade | Historical rel. freq. | Expected abs. freq. f_t | Current obs. freq. f_o | f_o - f_t | (f_o - f_t)²; five grade categories, n = 1000 (numeric entries not recovered).]

Based on the historical relative frequencies, we determine the expected absolute frequencies, restricting their total to the total for the current observed frequencies (1000).

We subtract the theoretical frequency from the observed frequency.

We square those differences.

We divide each square by the theoretical frequency and sum the quotients.

We have 5 categories (the 5 grade groups). We have 1 restriction. We restricted our expected frequencies so that the total would be the same total as for the observed frequencies (1000). We didn’t estimate any parameters in this particular problem. So dof = 5 – 1 – 0 = 4.

From the χ² table, we see that for a 10% test with 4 degrees of freedom, the cut-off point is 7.779. In the current problem, our χ² statistic had a value of 125. So we reject the null hypothesis and conclude that the grade distribution is NOT the same as it was historically.
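The same procedure, with expected frequencies taken from historical proportions, can be sketched as follows. The numbers here are hypothetical stand-ins, since the slide's own table entries did not survive.

```python
import numpy as np
from scipy import stats

# Hypothetical numbers (the slide's figures were lost): historical
# relative frequencies for five grade bands, and current observed counts.
rel_freq = np.array([0.10, 0.20, 0.40, 0.20, 0.10])
observed = np.array([80, 150, 400, 240, 130])

n = observed.sum()           # 1000 students
expected = rel_freq * n      # restrict the f_t total to the f_o total

chi2_stat = ((observed - expected) ** 2 / expected).sum()
critical = stats.chi2.ppf(0.90, df=len(observed) - 1)  # 10% test, dof = 4

print(round(chi2_stat, 2), round(critical, 3))
```

With these stand-in numbers the statistic (33.5) exceeds the 7.779 cut-off, so the historical distribution would be rejected, as it was on the slide.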

Example 3: Test at the 5% level whether the demand for a particular product, as listed below, has a Poisson distribution.

[Table: # of units demanded per day x | Observed # of days f_o | x·f_o | prob. f(x) | Expected # of days f_t | f_o - f_t | (f_o - f_t)²; numeric entries not recovered.]

Multiplying the number of days on which each amount was sold by the amount sold on that day, and then adding those products, we find that the total number of units sold on the 200 days is 600. So the mean number of units sold per day is 600/200 = 3.

We use the 3 as the estimated mean for the Poisson distribution. Then, using the Poisson table, we determine the probability f(x) for each x value.

Then we multiply the probabilities by 200 to compute f_t, the expected number of days on which each number of units would be sold. By multiplying by 200, we restrict the f_t total to be the same as the f_o total.

When the f_t's are small (less than 5), the test is not reliable. So we group small f_t values. In this example, we group the last 4 categories into one.

Next we subtract the theoretical frequencies f_t from the observed frequencies f_o.

Then we square the differences …

… divide by the theoretical frequencies, and sum up.

We have 8 categories (after grouping the small ones). We have 1 restriction. We restricted our expected frequencies so that the total would be the same total as for the observed frequencies (200). We estimated 1 parameter, the mean for the Poisson distribution. So dof = 8 – 1 – 1 = 6.

From the χ² table, we see that for a 5% test with 6 degrees of freedom, the cut-off point is 12.592. In the current problem, our χ² statistic fell below that cut-off. So we accept the null hypothesis that the Poisson distribution is a reasonable fit for the product demand.
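The whole Example 3 workflow (estimate the mean, build expected counts, group cells with f_t below 5, compare to the 5% cut-off) can be sketched as below. The daily-demand tallies are hypothetical stand-ins chosen to total 200 days and 600 units, so the estimated mean is 3 as on the slide; here the grouping merges the last 3 cells rather than 4.

```python
import numpy as np
from scipy import stats

# Hypothetical tallies: f_obs[x] = number of days with demand x.
f_obs = np.array([11, 30, 44, 43, 35, 20, 10, 4, 2, 1])
x = np.arange(len(f_obs))

n_days = f_obs.sum()                 # 200 days
mean = (x * f_obs).sum() / n_days    # 600 units / 200 days = 3.0

# Expected counts: pmf for x = 0..8, plus the whole upper tail in the
# last cell, so the f_t total is restricted to n_days.
probs = stats.poisson.pmf(x[:-1], mean)
probs = np.append(probs, 1 - probs.sum())
f_exp = probs * n_days

# Group trailing cells until every expected count is at least 5.
while f_exp[-1] < 5:
    f_exp = np.append(f_exp[:-2], f_exp[-2] + f_exp[-1])
    f_obs = np.append(f_obs[:-2], f_obs[-2] + f_obs[-1])

chi2_stat = ((f_obs - f_exp) ** 2 / f_exp).sum()
dof = len(f_obs) - 1 - 1             # one restriction, one estimated parameter
critical = stats.chi2.ppf(0.95, dof)
```

After grouping, 8 cells remain, so dof = 8 - 1 - 1 = 6, matching the slide's count.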

Example 4: Test at the 10% level whether the following exam grades are from a normal distribution. Note: This is a very long problem.

Grade interval | Midpoint X | f_o | X·f_o
[50, 60) | | 14 |
[60, 70) | | 18 |
[70, 80) | | 36 |
[80, 90) | | 18 |
[90, 100] | | 14 |
Total | | 100 |

If the distribution is normal, we need to estimate its mean and standard deviation.

To estimate the mean, we first determine the midpoints of the grade intervals.

Grade interval | Midpoint X | f_o
[50, 60) | 55 | 14
[60, 70) | 65 | 18
[70, 80) | 75 | 36
[80, 90) | 85 | 18
[90, 100] | 95 | 14

We then multiply these midpoints by the observed frequencies of the intervals, add the products, and divide the sum by the number of observations. The resulting mean is 7500/100 = 75.

Grade interval | Midpoint X | f_o | X·f_o
[50, 60) | 55 | 14 | 770
[60, 70) | 65 | 18 | 1,170
[70, 80) | 75 | 36 | 2,700
[80, 90) | 85 | 18 | 1,530
[90, 100] | 95 | 14 | 1,330
Total | | 100 | 7,500

Next we need to calculate the standard deviation. We begin by subtracting the mean of 75 from each midpoint, and squaring the differences.

Grade interval | Midpoint X | f_o | X - 75 | (X - 75)²
[50, 60) | 55 | 14 | -20 | 400
[60, 70) | 65 | 18 | -10 | 100
[70, 80) | 75 | 36 | 0 | 0
[80, 90) | 85 | 18 | 10 | 100
[90, 100] | 95 | 14 | 20 | 400

We multiply by the observed frequencies and sum up. Dividing by n - 1, or 99, the sample variance is s² = 149.49. The square root is the sample standard deviation, s = 12.23.

Grade interval | f_o | (X - 75)² | (X - 75)²·f_o
[50, 60) | 14 | 400 | 5,600
[60, 70) | 18 | 100 | 1,800
[70, 80) | 36 | 0 | 0
[80, 90) | 18 | 100 | 1,800
[90, 100] | 14 | 400 | 5,600
Total | 100 | | 14,800

We will use the 75 and the 12.23 as the mean μ and the standard deviation σ of our proposed normal distribution. We now need to determine what the expected frequencies would be if the grades were from that normal distribution.

Start with our lowest grade category, under 60. Its boundary corresponds to z = (60 - 75)/12.23 ≈ -1.23, and Pr(Z < -1.23) = 0.1093. We then expect that 10.93% of our 100 observations, or about 11 grades, would be in the lowest grade category. So 11 will be one of our f_t values. We need to do similar calculations for our other grade categories.

The next grade category is [60, 70). Here z = (70 - 75)/12.23 ≈ -0.41, so Pr(-1.23 < Z < -0.41) = 0.3409 - 0.1093 = 0.2316. So 23.16% of our 100 observations, or about 23 grades, are expected to be in that grade category.

The next grade category is [70, 80). Its boundaries correspond to z = -0.41 and z = 0.41, so Pr(-0.41 < Z < 0.41) = 2 × 0.1591 = 0.3182. So 31.82% of our 100 observations, or about 32 grades, are expected to be in that grade category.

The next grade category is [80, 90). By symmetry with [60, 70), Pr(0.41 < Z < 1.23) = 0.2316. So 23.16% of our 100 observations, or about 23 grades, are expected to be in that grade category.

The highest grade category is 90 and over. By symmetry with the lowest category, Pr(Z > 1.23) = 0.1093. So 10.93% of our 100 observations, or about 11 grades, are expected to be in that grade category.

Now we can finally compute our χ² statistic. We put in the observed frequencies that we were given and the theoretical frequencies that we just calculated.

Grade category | f_o | f_t
under 60 | 14 | 11
[60, 70) | 18 | 23
[70, 80) | 36 | 32
[80, 90) | 18 | 23
90 and up | 14 | 11

We subtract the theoretical frequencies from the observed frequencies, square the differences, divide by the theoretical frequencies, and sum up. The resulting χ² statistic is 4.31.

Grade category | f_o | f_t | f_o - f_t | (f_o - f_t)² | (f_o - f_t)²/f_t
under 60 | 14 | 11 | 3 | 9 | 0.818
[60, 70) | 18 | 23 | -5 | 25 | 1.087
[70, 80) | 36 | 32 | 4 | 16 | 0.500
[80, 90) | 18 | 23 | -5 | 25 | 1.087
90 and up | 14 | 11 | 3 | 9 | 0.818
Total | 100 | 100 | 0 | | χ² = 4.31

We have 5 categories (the 5 grade groups). We have 1 restriction: we restricted our expected frequencies so that the total would be the same total as for the observed frequencies (100). We estimated two parameters, the mean and the standard deviation. So dof = 5 - 1 - 2 = 2.

From the χ² table, we see that for a 10% test with 2 degrees of freedom, the cut-off point is 4.605. In the current problem, our χ² statistic had a value of 4.31. So we accept the null hypothesis that the normal distribution is a reasonable fit for the grades.
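Example 4's full calculation, estimating μ and σ from the grouped data and then testing the fit, can be sketched in Python (scipy-based; the slides work from printed tables instead).

```python
import numpy as np
from scipy import stats

mid = np.array([55, 65, 75, 85, 95])      # interval midpoints
f_obs = np.array([14, 18, 36, 18, 14])    # observed frequencies
n = f_obs.sum()                           # 100 grades

mu = (mid * f_obs).sum() / n                              # 75.0
s = np.sqrt(((mid - mu) ** 2 * f_obs).sum() / (n - 1))    # ~12.23

# Expected share of a N(mu, s) in each interval; the two end
# categories are open-ended tails.
edges = np.array([60, 70, 80, 90])
cdf = stats.norm.cdf(edges, loc=mu, scale=s)
probs = np.diff(np.concatenate(([0.0], cdf, [1.0])))
f_exp = probs * n

chi2_stat = ((f_obs - f_exp) ** 2 / f_exp).sum()
dof = len(f_obs) - 1 - 2        # one restriction, two estimated parameters
critical = stats.chi2.ppf(0.90, dof)
```

The statistic comes out slightly different from the slide's 4.31 because the slide rounds each f_t to a whole number of grades, but it still falls below the 4.605 cut-off, so the normal fit is accepted either way.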

We can also use the χ² statistic to test whether two variables are independent of each other.

Example 5: Given the following frequencies for a sample of 10,000 households, test at the 1% level whether the number of phones and the number of cars for a household are independent of each other.

[Observed table: rows are # of phones (0, 1, 2 or more), columns are # of cars (0, 1, 2); individual cell counts not recovered; total 10,000 households.]

We first compute the row and column totals.

| 0 cars | 1 car | 2 cars | row total
0 phones | | | | 2,000
1 phone | | | | 4,600
2 or more | | | | 3,400
column total | 3,000 | 6,000 | 1,000 | 10,000

and the row and column percentages (marginal probabilities).

| 0 cars | 1 car | 2 cars | row total | %
0 phones | | | | 2,000 | 0.20
1 phone | | | | 4,600 | 0.46
2 or more | | | | 3,400 | 0.34
column total | 3,000 | 6,000 | 1,000 | 10,000 | 1.00
% | 0.30 | 0.60 | 0.10 | 1.00 |

Recall that if 2 variables X and Y are independent of each other, then Pr(X=x and Y=y) = Pr(X=x) Pr(Y=y)

We can use our row and column percentages as marginal probabilities, and multiply to determine the probabilities and numbers of households we would expect to see in the center of the table if the numbers of phones and cars were independent of each other.

First calculate the expected probability. For example, Pr(0 phones & 0 cars) = Pr(0 phones) Pr(0 cars) = (0.20)(0.30) = 0.06. So we expect 6% of our 10,000 households, or 600 households, to have 0 phones and 0 cars.

Pr(0 phones & 1 car) = Pr(0 phones) Pr(1 car) = (0.20)(0.60) = 0.12. So we expect 12% of our 10,000 households, or 1,200 households, to have 0 phones and 1 car.

Pr(0 phones & 2 cars) = Pr(0 phones) Pr(2 cars) = (0.20)(0.10) = 0.02. So we expect 2% of our 10,000 households, or 200 households, to have 0 phones and 2 cars.

Notice that when we add the 3 numbers that we just calculated (600 + 1,200 + 200), we get the same total for the row (2,000) that we had observed. The row and column totals should be the same for the observed and expected tables.

Continuing, we get the following numbers for the 2nd and 3rd rows.

| 0 cars | 1 car | 2 cars | row total | %
0 phones | 600 | 1,200 | 200 | 2,000 | 0.20
1 phone | 1,380 | 2,760 | 460 | 4,600 | 0.46
2 or more | 1,020 | 2,040 | 340 | 3,400 | 0.34
column total | 3,000 | 6,000 | 1,000 | 10,000 | 1.00

The column totals are also the same as for the observed table.
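Because independence means Pr(X=x and Y=y) = Pr(X=x) Pr(Y=y), the whole expected table is just the outer product of the two marginal vectors, scaled by n. A quick numpy sketch (numpy is not used in the slides, and the 0.46 middle-row share is inferred from the other two rows):

```python
import numpy as np

p_phones = np.array([0.20, 0.46, 0.34])  # 0, 1, 2-or-more phones (0.46 inferred)
p_cars = np.array([0.30, 0.60, 0.10])    # 0, 1, 2 cars
n = 10_000

# Outer product of the marginals, scaled by n, gives every expected
# cell count at once.
expected = np.outer(p_phones, p_cars) * n
print(expected)
```

This reproduces the nine expected counts derived cell by cell above (600, 1,200, 200; 1,380, 2,760, 460; 1,020, 2,040, 340).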

Now we set up the same type of table that we did for our earlier χ² goodness-of-fit tests. We put in the f_o column the observed frequencies and in the f_t column the expected frequencies that we calculated.

# of cars | # of phones | f_o | f_t
0 | 0 | | 600
0 | 1 | | 1,380
0 | 2 or more | | 1,020
1 | 0 | | 1,200
1 | 1 | | 2,760
1 | 2 or more | | 2,040
2 | 0 | | 200
2 | 1 | | 460
2 | 2 or more | 400 | 340

(Only the last observed count, 400, was recovered.)

Then we subtract the theoretical frequencies from the observed frequencies, square the differences, divide by the theoretical frequencies, and sum to get our χ² statistic.

In these tests of independence, the number of degrees of freedom is dof = (r - 1)(c - 1), where r and c are the numbers of rows and columns. In our example, we have 3 rows and 3 columns. So dof = (3 - 1)(3 - 1) = (2)(2) = 4.

From the χ² table, we see that for a 1% test with 4 degrees of freedom, the cut-off point is 13.277. In the current problem, our χ² statistic exceeded that cut-off. So we reject the null hypothesis and conclude that the number of phones and the number of cars in a household are not independent.
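scipy's chi2_contingency automates the whole independence test. Since the slide's observed cell counts did not survive, the counts below are hypothetical stand-ins chosen only to match its row totals (2,000; 4,600; 3,400) and column totals (3,000; 6,000; 1,000), so the expected table comes out exactly as on the slides.

```python
import numpy as np
from scipy import stats

# Hypothetical observed counts with the slide's margins.
observed = np.array([[1000,  800,  200],   # 0 phones
                     [1500, 2700,  400],   # 1 phone
                     [ 500, 2500,  400]])  # 2 or more phones

chi2_stat, p_value, dof, expected = stats.chi2_contingency(observed)
critical = stats.chi2.ppf(0.99, dof)   # 1% test, dof = (3-1)(3-1) = 4
```

The returned expected table is the same outer-product-of-marginals table built by hand above, and dof is computed as (r - 1)(c - 1) automatically.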

Yates Correction

In testing for independence in 2×2 tables, the chi-squared statistic has only (r - 1)(c - 1) = 1 degree of freedom. In these cases, it is often recommended that the value of the statistic be "corrected" so that its discrete distribution will be better approximated by the continuous chi-squared distribution.
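The Yates correction shrinks each |f_o - f_t| by 0.5 before squaring; scipy applies it to 2×2 tables by default. A small hypothetical table (every expected cell is 15) shows its effect:

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table; all four expected cells equal 15.
table = np.array([[10, 20],
                  [20, 10]])

# scipy applies the Yates continuity correction to 2x2 tables by default.
stat_corrected = stats.chi2_contingency(table, correction=True)[0]
stat_plain = stats.chi2_contingency(table, correction=False)[0]
print(stat_plain, stat_corrected)
```

The corrected statistic (5.4) is smaller than the uncorrected one (6.67), so the correction makes the test slightly more conservative.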

The Hypothesis Test for the Variance or Standard Deviation This test is another one that uses the chi-squared distribution.

Sometimes it is important to know the variance or standard deviation of a variable. For example, medication often needs to be extremely close to the specified dosage. If the dosage is too low, the medication may be ineffective and a patient may die from inadequate treatment. If the dosage is too high, the patient may die from an overdose. So you may want to make sure that the variance is a very small amount.

If the data are normally distributed, the chi-squared test for the variance or standard deviation is appropriate. The statistic is

χ² = (n - 1)s² / σ²,

where n is the sample size, s² is the sample variance, and σ² is the hypothesized population variance. The number of degrees of freedom is n - 1.

Example: Suppose you want to test at the 5% level whether the population standard deviation for a particular medication is 0.5 mg. Based on a sample of 25 capsules, you determine the sample standard deviation to be 0.6 mg. Perform the test.

χ² = (n - 1)s² / σ² = (24)(0.6)² / (0.5)² = 34.56

Now we need to determine the critical region for the test.

Because the chi-squared distribution is not symmetric, you need to look up the two critical values for a two-tailed test separately. For a 5% test with 24 degrees of freedom, the lower critical value is 12.401 and the upper critical value is 39.364; the acceptance region lies between them, with a critical region in each tail. You can find the two numbers either by looking under "Cumulative Probabilities" at 0.025 and 0.975, or under "Upper-Tail Areas" at 0.975 and 0.025. Recall that the value of the test statistic was 34.56, which is in the acceptance region. So we cannot rule out the null hypothesis, and we therefore conclude that the population standard deviation is 0.5 mg.
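The two-tailed variance test can be sketched as follows; scipy's chi2.ppf supplies the two table look-ups.

```python
from scipy import stats

n = 25           # capsules in the sample
s = 0.6          # sample standard deviation (mg)
sigma0 = 0.5     # hypothesized population standard deviation (mg)

chi2_stat = (n - 1) * s ** 2 / sigma0 ** 2     # (24)(0.36)/(0.25) = 34.56
lo = stats.chi2.ppf(0.025, n - 1)              # lower 2.5% cut-off, ~12.401
hi = stats.chi2.ppf(0.975, n - 1)              # upper 2.5% cut-off, ~39.364
in_acceptance = lo < chi2_stat < hi
print(chi2_stat, in_acceptance)
```

Since 12.401 < 34.56 < 39.364, the statistic lands in the acceptance region, matching the slide's conclusion.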