
CHI SQUARE DISTRIBUTION

The Chi-Square (  2 ) Distribution The chi-square distribution is the probability distribution of the sum of several independent, squared standard normal random variables.

The Chi-Square (  2 ) Distribution The chi-square random variable cannot be negative, so it is bound by zero on the left. The chi-square distribution is skewed to the right. The chi-square distribution approaches a normal as the degrees of freedom increase.

Values and Probabilities of Chi-Square Distributions: the chi-square table lists critical values indexed by degrees of freedom (df) and by the area in the right tail (equivalently, the area in the left tail).
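Such tabulated critical values can be reproduced with the chi-square inverse CDF; a short sketch, assuming SciPy, for df = 10 and a tail area of 0.05:

```python
from scipy import stats

df = 10

# Critical value with area 0.05 in the right tail (95% of the distribution to its left)
right_tail = stats.chi2.ppf(1 - 0.05, df)   # approximately 18.31

# Critical value with area 0.05 in the left tail
left_tail = stats.chi2.ppf(0.05, df)        # approximately 3.94

print(left_tail, right_tail)
```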

Test of Goodness of Fit: It enables us to determine how good the fit is between observed and expected frequencies, where the former are the outcomes of samples and the latter are determined from the hypothesized population from which the sample is taken. Conventionally, the null hypothesis states that there is no difference between the observed and expected frequencies, i.e. that there is a good fit between observed and expected.

Test of Independence: Here we test whether two different attributes of the same population are statistically independent of each other. Conventionally, the null hypothesis states that there is no difference between the observed and expected frequencies; in other words, the attributes under study are independent of each other.

The Chi-Square Distribution  The chi-square  ² distribution depends on the number of degrees of freedom  A chi-square point  ² α is the point under a chi-square distribution that gives right-hand tail area 

A Chi-Square Test for Goodness of Fit. Steps in a chi-square analysis:
1. Formulate the null and alternative hypotheses.
2. Compute the frequencies of occurrence that would be expected if the null hypothesis were true (the expected cell counts).
3. Note the actual, observed cell counts.
4. Use the differences between expected and actual cell counts to find the chi-square statistic: χ² = Σ (O - E)² / E.
5. Compare the chi-square statistic with critical values from the chi-square distribution (with k - 1 degrees of freedom) to test the null hypothesis.

Goodness-of-Fit Test. The null and alternative hypotheses:
H0: The probabilities of occurrence of events E1, E2, ..., Ek are given by p1, p2, ..., pk
H1: The probabilities of the k events are not as specified in the null hypothesis
Assuming equal probabilities, p1 = p2 = p3 = p4 = 0.25 and n = 80, so each expected count is np = 80 × 0.25 = 20. The table lists, for each preference category (Tan, Brown, Maroon, Black, and the Total), the observed count, the expected count (np), and the difference (O - E).
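The arithmetic of this test is short enough to carry out directly. Below is a sketch assuming SciPy, with hypothetical observed counts, since the original slide's counts are not reproduced above; only the equal-probability setup with n = 80 and expected counts of 20 is taken from the text.

```python
import numpy as np
from scipy import stats

observed = np.array([12, 40, 8, 20])   # hypothetical counts for Tan, Brown, Maroon, Black
expected = np.full(4, 80 * 0.25)       # under H0, np = 20 in every category

# Chi-square statistic: sum over categories of (O - E)^2 / E
chi2_stat = ((observed - expected) ** 2 / expected).sum()
critical = stats.chi2.ppf(0.95, len(observed) - 1)   # k - 1 = 3 degrees of freedom
print(chi2_stat, critical, chi2_stat > critical)

# The same test via SciPy's built-in routine
stat, p_value = stats.chisquare(observed, f_exp=expected)
print(stat, p_value)
```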

Goodness-of-Fit for the Normal Distribution.
1. Use the table of the standard normal distribution to determine an appropriate partition of the standard normal distribution into ranges with approximately equal percentages: P(Z < -1) ≈ 0.1587, P(-1 < Z < -0.44) ≈ 0.1713, P(-0.44 < Z < 0) ≈ 0.1700, P(0 < Z < 0.44) ≈ 0.1700, P(0.44 < Z < 1) ≈ 0.1713, P(Z > 1) ≈ 0.1587.
2. Given the z boundaries, the x boundaries can be determined from the inverse standard normal transformation x = μ + σz.
3. Compare with the critical value of the χ² distribution with k - 3 degrees of freedom.

Solution: For each interval i, the table computes Oi, Ei, Oi - Ei, (Oi - Ei)², and (Oi - Ei)²/Ei; the sum of the last column gives χ². Because the critical value χ²(0.10, k-3) exceeds the computed χ², H0 is not rejected at the 0.10 level.
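As an illustration of this normal goodness-of-fit procedure, the sketch below (NumPy/SciPy assumed, with simulated data standing in for a real sample) bins standardized observations at the z boundaries -1, -0.44, 0, 0.44, 1, computes expected counts from the standard normal probabilities, and compares the statistic with the χ² critical value on k - 3 degrees of freedom.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=10, scale=2, size=300)   # simulated sample standing in for real data

# Standardize using the sample estimates of the mean and standard deviation
z = (x - x.mean()) / x.std(ddof=1)

# Bin at the z boundaries -1, -0.44, 0, 0.44, 1 (six intervals in total)
edges = np.array([-1.0, -0.44, 0.0, 0.44, 1.0])
observed = np.bincount(np.searchsorted(edges, z), minlength=6)

# Expected counts from the standard normal probabilities of each interval
probs = np.diff(np.concatenate(([0.0], stats.norm.cdf(edges), [1.0])))
expected = probs * len(x)

chi2_stat = ((observed - expected) ** 2 / expected).sum()
critical = stats.chi2.ppf(0.90, len(observed) - 3)   # k - 3: two parameters were estimated
print(chi2_stat, critical, chi2_stat <= critical)    # True means H0 is not rejected at the 0.10 level
```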

Test for Independence

Contingency Table Analysis: A Chi-Square Test for Independence.
Null and alternative hypotheses:
H0: The two classification variables are independent of each other
H1: The two classification variables are not independent
Chi-square test statistic for independence: χ² = Σ (O - E)² / E, summed over all cells of the table.
Degrees of freedom: df = (r - 1)(c - 1)
Expected cell counts: A and B are independent if P(A ∩ B) = P(A) · P(B). If the first and second classification categories are independent, Eij = (Ri)(Cj)/n, where Ri and Cj are the row and column totals and n is the total sample size.
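These formulas can be applied to a small hypothetical 2×2 table. The sketch below (NumPy/SciPy assumed, with made-up counts) computes the expected counts Eij = RiCj/n and the chi-square statistic directly, then checks the result against scipy.stats.chi2_contingency with the continuity correction turned off so it matches the plain formula.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table of observed counts (rows and columns are the two classifications)
observed = np.array([[28, 12],
                     [22, 38]])

row_totals = observed.sum(axis=1, keepdims=True)   # R_i, shape (2, 1)
col_totals = observed.sum(axis=0, keepdims=True)   # C_j, shape (1, 2)
n = observed.sum()

# Expected counts under independence: E_ij = R_i * C_j / n
expected = row_totals @ col_totals / n

chi2_stat = ((observed - expected) ** 2 / expected).sum()
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)
print(chi2_stat, dof)

# SciPy reproduces the same statistic when the 2x2 continuity correction is turned off
stat, p_value, dof, expected = stats.chi2_contingency(observed, correction=False)
print(stat, p_value)
```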

Contingency Table Analysis: Solution. For each cell (i, j), the table computes O, E, O - E, (O - E)², and (O - E)²/E; the sum of the last column gives χ². Because the computed χ² exceeds the critical value χ²(0.01, (2-1)(2-1)) = 6.635, H0 is rejected at the 0.01 level and it is concluded that the two variables are not independent.

F-Distribution: When independent samples of size n1 and n2 are drawn from two normal populations, the ratio F = (s1²/σ1²) / (s2²/σ2²) follows an F-distribution with n1 - 1 and n2 - 1 degrees of freedom, where s1² and s2² are the sample variances and σ1² and σ2² are the population variances.
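A small sketch (NumPy/SciPy assumed, with simulated samples of arbitrary sizes) of forming this ratio under the null hypothesis of equal population variances, where F reduces to s1²/s2², and comparing it with an F critical value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x1 = rng.normal(scale=3.0, size=25)   # sample of size n1 from the first normal population
x2 = rng.normal(scale=3.0, size=30)   # sample of size n2 from the second normal population

s1_sq = np.var(x1, ddof=1)            # sample variances
s2_sq = np.var(x2, ddof=1)

# Under H0: sigma1^2 = sigma2^2, the ratio (s1^2/sigma1^2)/(s2^2/sigma2^2) reduces to s1^2/s2^2
f_ratio = s1_sq / s2_sq
critical = stats.f.ppf(0.95, len(x1) - 1, len(x2) - 1)   # F critical value with n1-1 and n2-1 df
print(f_ratio, critical, f_ratio > critical)
```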
