Comparing k Populations


Comparing k Populations Means – One way Analysis of Variance (ANOVA)

The F test – for comparing k means Situation: We have k normal populations. Let μi and σi denote the mean and standard deviation of population i, i = 1, 2, 3, …, k. Note: we assume that the standard deviation for each population is the same: σ1 = σ2 = … = σk = σ.

We want to test H0: μ1 = μ2 = … = μk against HA: at least one pair of means are different.

The data: Assume we have collected data from each of the k populations. Let xi1, xi2, xi3, … denote the ni observations from population i, i = 1, 2, 3, …, k. Let x̄i and si denote the sample mean and standard deviation of the observations from population i.

One possible solution (incorrect): Choose the populations two at a time, then perform a two sample t test of H0: μi = μj against HA: μi ≠ μj. Repeat this for every possible pair of populations.

The flaw with this procedure is that you are performing a collection of tests rather than a single test. If each test is performed with α = 0.05, then the probability that any single test makes a type I error is 5%, but the probability that the group of tests makes at least one type I error could be considerably higher than 5%. That is, suppose there is no difference in the means of the populations; the chance that this procedure declares a significant difference could be considerably higher than 5%.

The Bonferroni inequality: If N independent tests are performed, each with significance level α, then P[group of N tests makes a type I error] ≤ 1 − (1 − α)^N. Example: suppose α = 0.05 and N = 10; then P[group of N tests makes a type I error] ≤ 1 − (0.95)^10 = 0.40.
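The arithmetic above is easy to check. The following Python sketch (a minimal illustration, not part of the original slides) computes 1 − (1 − α)^N for N = 10 independent tests at α = 0.05, together with the cruder Bonferroni bound Nα:

```python
# Familywise error rate when N independent tests are each run at level alpha.
alpha = 0.05
N = 10
fwer = 1 - (1 - alpha) ** N       # probability at least one test rejects by chance

# Bonferroni's simpler (and more conservative) bound: FWER <= N * alpha
bonferroni_bound = N * alpha
```

For these values fwer is about 0.40, already eight times the nominal 5% level of any single test.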

For this reason we are going to consider a single test for testing H0: μ1 = μ2 = … = μk against HA: at least one pair of means are different. Note: If k = 10, the number of pairs of means (and hence the number of tests that would have to be performed) is "10 choose 2" = 45.
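The number of pairs here is the binomial coefficient "k choose 2"; a one-line check in Python:

```python
# Number of pairwise comparisons among k means: k choose 2.
from math import comb

k = 10
n_pairs = comb(k, 2)   # 45 two-sample t tests would be required for k = 10
```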

The F test

To test H0: μ1 = μ2 = … = μk against HA: at least one pair of means are different, use the test statistic F = MSBetween / MSWithin, defined below.

The statistic n1(x̄1 − x̄)² + n2(x̄2 − x̄)² + … + nk(x̄k − x̄)², where x̄ is the overall mean of all the observations, is called the Between Sum of Squares and is denoted by SSBetween. It measures the variability between samples. k − 1 is known as the Between degrees of freedom, and MSBetween = SSBetween / (k − 1) is called the Between Mean Square.

The statistic (n1 − 1)s1² + (n2 − 1)s2² + … + (nk − 1)sk² is called the Within Sum of Squares and is denoted by SSWithin. It measures the variability within samples. N − k, where N = n1 + n2 + … + nk, is known as the Within degrees of freedom, and MSWithin = SSWithin / (N − k) is called the Within Mean Square.

Then F = MSBetween / MSWithin.

The computing formulas for F: Compute
1) Ti = xi1 + xi2 + … + xini, the total of the ni observations in sample i
2) G = T1 + T2 + … + Tk, the grand total
3) Σi Σj xij², the sum of squares of all the observations
4) Σi Ti²/ni
5) G²/N, where N = n1 + n2 + … + nk

Then
1) SSBetween = Σi Ti²/ni − G²/N
2) SSWithin = Σi Σj xij² − Σi Ti²/ni
3) F = [SSBetween/(k − 1)] / [SSWithin/(N − k)] = MSBetween / MSWithin
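As a sketch of computing steps 1)–5) above, the following Python code applies them to a small hypothetical data set (the numbers are made up; this is not the diet data from the slides) and cross-checks the result against scipy.stats.f_oneway:

```python
import numpy as np
from scipy import stats

# Hypothetical data (made up for illustration): k = 3 groups
groups = [np.array([12.0, 15.0, 11.0, 14.0]),
          np.array([18.0, 20.0, 17.0, 19.0]),
          np.array([10.0, 9.0, 13.0, 12.0])]

k = len(groups)
N = sum(len(g) for g in groups)
T = [g.sum() for g in groups]                                  # 1) sample totals Ti
G = sum(T)                                                     # 2) grand total G
sum_sq = sum((g ** 2).sum() for g in groups)                   # 3) sum of squared observations
between_term = sum(t ** 2 / len(g) for t, g in zip(T, groups)) # 4) sum of Ti^2 / ni
CM = G ** 2 / N                                                # 5) G^2 / N

SS_between = between_term - CM
SS_within = sum_sq - between_term
F = (SS_between / (k - 1)) / (SS_within / (N - k))

F_scipy, p_scipy = stats.f_oneway(*groups)  # cross-check against scipy
```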

The critical region for the F test: We reject H0 if F ≥ Fα, where Fα is the critical point under the F distribution with ν1 = k − 1 degrees of freedom in the numerator and ν2 = N − k degrees of freedom in the denominator.
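The critical point Fα can be read from the F distribution's quantile function; a sketch using scipy (the k = 6, N = 60 values match the diet example that follows):

```python
from scipy import stats

# Upper-tail critical value F_alpha with nu1 = k - 1, nu2 = N - k degrees of freedom.
alpha = 0.05
k, N = 6, 60
F_crit = stats.f.ppf(1 - alpha, k - 1, N - k)  # about 2.386 for (5, 54) d.f.
# reject H0 when the observed F exceeds F_crit
```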

Example In the following example we are comparing weight gains resulting from the following six diets Diet 1 - High Protein , Beef Diet 2 - High Protein , Cereal Diet 3 - High Protein , Pork Diet 4 - Low protein , Beef Diet 5 - Low protein , Cereal Diet 6 - Low protein , Pork

Hence, computing the sums of squares for these data gives SSBetween = 4612.933 and SSWithin = 11586.000.

Thus F = MSBetween / MSWithin = 922.587 / 214.556 = 4.3. Since F = 4.3 > 2.386 we reject H0.

The ANOVA Table: a convenient method for displaying the calculations for the F-test.

ANOVA Table

Source     d.f.     Sum of Squares    Mean Square    F-ratio
Between    k − 1    SSBetween         MSBetween      MSB / MSW
Within     N − k    SSWithin          MSWithin
Total      N − 1    SSTotal

The Diet Example

Source     d.f.     Sum of Squares    Mean Square    F-ratio
Between    5        4612.933          922.587        4.3 (p = 0.0023)
Within     54       11586.000         214.556
Total      59       16198.933
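The entries in this table can be re-derived from the sums of squares and degrees of freedom alone; a quick Python check:

```python
from scipy import stats

# Re-deriving the mean squares, F-ratio and p-value of the diet ANOVA table.
SS_between, df_between = 4612.933, 5
SS_within, df_within = 11586.000, 54

MS_between = SS_between / df_between      # about 922.587
MS_within = SS_within / df_within         # about 214.556
F = MS_between / MS_within                # about 4.3
p = stats.f.sf(F, df_between, df_within)  # upper-tail p-value
```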

Equivalence of the F-test and the t-test when k = 2

The F-test: when k = 2, the test statistic F = MSBetween / MSWithin can be written in terms of the pooled two-sample t statistic.

Hence F = t², and for k = 2 the F-test is equivalent to the two-tailed two-sample t test.
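This equivalence is easy to verify numerically; a sketch on hypothetical data (scipy's ttest_ind uses the pooled variance by default):

```python
import numpy as np
from scipy import stats

# With k = 2 groups, the one-way ANOVA F statistic equals the square of the
# pooled two-sample t statistic. Hypothetical data, made up for illustration.
x = np.array([5.1, 4.8, 5.6, 5.0, 4.7])
y = np.array([5.9, 6.1, 5.4, 6.3, 5.8])

t, p_t = stats.ttest_ind(x, y)   # pooled-variance two-sample t test
F, p_F = stats.f_oneway(x, y)    # one-way ANOVA with k = 2
# F == t**2 and the two p-values coincide
```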

Using SPSS Note: The use of another statistical package such as Minitab is similar to using SPSS

Assume the data is contained in an Excel file

Each variable is in a column Weight gain (wtgn) diet Source of protein (Source) Level of Protein (Level)

After starting the SPSS program the following dialogue box appears:

If you select Opening an existing file and press OK the following dialogue box appears

The following dialogue box appears:

If the variable names are in the file, ask it to read the names. If you do not specify the Range, the program will identify the Range. Once you click OK, two windows will appear.

One that will contain the output:

The other containing the data:

To perform ANOVA select Analyze->General Linear Model-> Univariate

The following dialog box appears

Select the dependent variable and the fixed factors Press OK to perform the Analysis

The Output

Comments: The F-test tests H0: μ1 = μ2 = μ3 = … = μk against HA: at least one pair of means are different. If H0 is accepted we conclude that all means are equal (not significantly different). If H0 is rejected we conclude that at least one pair of means is significantly different. The F-test gives no information as to which pairs of means are different. One can now use two sample t tests to determine which pairs of means are significantly different.

Fisher's LSD (least significant difference) procedure: Test H0: μ1 = μ2 = μ3 = … = μk against HA: at least one pair of means are different, using the ANOVA F-test. If H0 is accepted we conclude that all means are equal (not significantly different), and we stop. If H0 is rejected we conclude that at least one pair of means is significantly different; follow this by using two sample t tests to determine which pairs of means are significantly different.
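A sketch of the LSD procedure in Python, on hypothetical data (the group values are made up for illustration): the overall F test runs first, and pairwise t tests based on the pooled within-sample variance follow only if it rejects.

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Hypothetical data: k = 3 groups (made up for illustration)
groups = [np.array([12.0, 15.0, 11.0, 14.0]),
          np.array([18.0, 20.0, 17.0, 19.0]),
          np.array([10.0, 9.0, 13.0, 12.0])]
alpha = 0.05
k = len(groups)
N = sum(len(g) for g in groups)

F, p = stats.f_oneway(*groups)
significant_pairs = []
if p < alpha:                                # step 1: overall F test rejects
    # pooled within-sample variance MSWithin, with N - k degrees of freedom
    MS_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - k)
    t_crit = stats.t.ppf(1 - alpha / 2, N - k)
    for i, j in combinations(range(k), 2):   # step 2: pairwise t tests
        gi, gj = groups[i], groups[j]
        se = np.sqrt(MS_within * (1 / len(gi) + 1 / len(gj)))
        t_ij = (gi.mean() - gj.mean()) / se
        if abs(t_ij) > t_crit:
            significant_pairs.append((i, j))
```

For these made-up numbers, group 1 (mean 18.5) differs significantly from groups 0 and 2, while groups 0 and 2 do not differ significantly from each other.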

Example In the following example we are comparing weight gains resulting from the following six diets Diet 1 - High Protein , Beef Diet 2 - High Protein , Cereal Diet 3 - High Protein , Pork Diet 4 - Low protein , Beef Diet 5 - Low protein , Cereal Diet 6 - Low protein , Pork

Hence, as before, the sums of squares are SSBetween = 4612.933 and SSWithin = 11586.000.

Thus F = 922.587 / 214.556 = 4.3.

The ANOVA Table

Source     d.f.     Sum of Squares    Mean Square    F-ratio
Between    5        4612.933          922.587        4.3 (p = 0.0023)
Within     54       11586.000         214.556
Total      59       16198.933

Thus since F = 4.3 > 2.386 we reject H0. Conclusion: There are significant differences amongst the k = 6 means.

Now we want to perform t tests to compare the k = 6 means with t0.025 = 2.005 for 54 d.f.

Table of means t test results Critical value t0.025 = 2.005 for 54 d.f. t values that are significant are indicated in bold.

Conclusions: There is no significant difference between diet 1 (high protein, beef) and diet 3 (high protein, pork). There are no significant differences amongst diets 2, 4, 5 and 6 (i.e. high protein, cereal (diet 2) and the low protein diets (diets 4, 5 and 6)). There are significant differences between diets 1 and 3 (high protein, meat) and the other diets (2, 4, 5, and 6). Major conclusion: High protein diets result in a higher weight gain, but only if the source of protein is a meat source.

These are similar to the conclusions made using exploratory techniques, such as examining box-plots.

Box-plots of weight gain for the six diets: High Protein (Beef, Cereal, Pork) and Low Protein (Beef, Cereal, Pork)

Conclusions: Weight gain is higher for the high protein meat diets. Increasing the level of protein increases weight gain, but only if the source of protein is a meat source. Carrying out the F-test and Fisher's LSD ensures the significance of the conclusions: differences observed with exploratory methods alone could have occurred by chance.

Comparing k Populations – Proportions: the χ² test for independence

The two sample test for proportions. The data can be displayed in the following table:

            Population 1    Population 2    Total
Success     x1              x2              x1 + x2
Failure     n1 − x1         n2 − x2         n1 + n2 − (x1 + x2)
Total       n1              n2              n1 + n2

This problem can be extended in two ways: increasing the number of populations (columns) from 2 to k (or c), and increasing the number of categories (rows) from 2 to r.

            1      2      …      c      Total
1           x11    x12    …      x1c    R1
2           x21    x22    …      x2c    R2
⋮
r           xr1    xr2    …      xrc    Rr
Total       C1     C2     …      Cc     N

The χ² test for independence

Situation We have two categorical variables R and C. The number of categories of R is r. The number of categories of C is c. We observe n subjects from the population and count xij = the number of subjects for which R = i and C = j. R = rows, C = columns

Example: Both Systolic Blood Pressure (C) and Serum Cholesterol (R) were measured for a sample of n = 1237 subjects. The categories for Blood Pressure are: <126, 127–146, 147–166, 167+. The categories for Cholesterol are: <200, 200–219, 220–259, 260+.

Table: two-way frequency

The χ² test for independence: Define Eij = RiCj / n = expected frequency in the (i,j)th cell in the case of independence.

Justification for Eij = RiCj / n in the case of independence: Let pij = P[R = i, C = j]. In the case of independence, pij = P[R = i] P[C = j] = ρi γj. Estimating ρi by Ri/n and γj by Cj/n gives the expected frequency in the (i,j)th cell: Eij = n(Ri/n)(Cj/n) = RiCj / n.
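A sketch of the computation Eij = RiCj / n on a hypothetical 2 × 3 table of counts (the numbers are made up for illustration, not the blood pressure data):

```python
import numpy as np

# Expected frequencies under independence, E_ij = R_i * C_j / n,
# computed from the row and column totals of a hypothetical table.
observed = np.array([[20, 30, 50],
                     [30, 20, 50]])
R = observed.sum(axis=1)       # row totals R_i
C = observed.sum(axis=0)       # column totals C_j
n = observed.sum()             # grand total n
expected = np.outer(R, C) / n  # E_ij = R_i C_j / n
```

Note that the expected table has the same row and column totals as the observed one.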

Then to test H0: R and C are independent against HA: R and C are not independent, use the test statistic χ² = Σi Σj (xij − Eij)² / Eij, where Eij = expected frequency in the (i,j)th cell in the case of independence, and xij = observed frequency in the (i,j)th cell.

Sampling distribution of the test statistic when H0 is true: the χ² distribution with degrees of freedom ν = (r − 1)(c − 1). Critical and Acceptance Region: Reject H0 if χ² ≥ χ²α; accept H0 if χ² < χ²α.
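A sketch of the statistic and the decision rule on a hypothetical 2 × 3 table (counts made up for illustration; scipy.stats.chi2_contingency serves as a cross-check):

```python
import numpy as np
from scipy import stats

# Chi-square statistic sum over cells of (x_ij - E_ij)^2 / E_ij.
observed = np.array([[20, 30, 50],
                     [30, 20, 50]])
R = observed.sum(axis=1)
C = observed.sum(axis=0)
n = observed.sum()
expected = np.outer(R, C) / n

chi2 = ((observed - expected) ** 2 / expected).sum()
r, c = observed.shape
df = (r - 1) * (c - 1)
chi2_crit = stats.chi2.ppf(0.95, df)   # reject H0 if chi2 >= chi2_crit

# Cross-check with scipy (Yates correction off; it applies only to 2 x 2 tables)
chi2_scipy, p, df_scipy, _ = stats.chi2_contingency(observed, correction=False)
```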

Standardized residuals: rij = (xij − Eij) / √Eij, so that the test statistic is χ² = Σi Σj rij². Degrees of freedom ν = (r − 1)(c − 1) = 9. Reject H0 using α = 0.05.
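The standardized residuals can be computed directly; a sketch on the same style of hypothetical table (counts made up for illustration):

```python
import numpy as np

# Standardized residuals (x_ij - E_ij) / sqrt(E_ij); cells with large
# |residual| are the ones driving any departure from independence.
observed = np.array([[20, 30, 50],
                     [30, 20, 50]])
expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
residuals = (observed - expected) / np.sqrt(expected)
# the chi-square statistic is the sum of the squared residuals
```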

Another Example This data comes from a Globe and Mail study examining the attitudes of the baby boomers. Data was collected on various age groups

One question with responses Are there differences in weekly consumption of alcohol related to age?

Table: Expected frequencies

Table: Residuals Conclusion: There is a significant relationship between age group and weekly alcohol use

Examining the Residuals allows one to identify the cells that indicate a departure from independence Large positive residuals indicate cells where the observed frequencies were larger than expected if independent Large negative residuals indicate cells where the observed frequencies were smaller than expected if independent

Another question with responses In an average week, how many times would you surf the internet? Are there differences in weekly internet use related to age?

Table: Expected frequencies

Table: Residuals Conclusion: There is a significant relationship between age group and weekly internet use

Echo (Age 20 – 29)

Gen X (Age 30 – 39)

Younger Boomers (Age 40 – 49)

Older Boomers (Age 50 – 59)

Pre Boomers (Age 60+)

Regression and Correlation: Estimation by confidence intervals, Hypothesis Testing