Comparing k Population Means – One-way Analysis of Variance (ANOVA)


1 Comparing k Population Means – One-way Analysis of Variance (ANOVA)

2 The F test – for comparing k means. Situation: we have k normal populations. Let μi and σi denote the mean and standard deviation of population i, i = 1, 2, 3, …, k. Note: we assume that the standard deviation is the same for each population: σ1 = σ2 = … = σk = σ.

3 We want to test H0: μ1 = μ2 = … = μk (all the population means are equal) against HA: at least one pair of means differs.

4 The data. Assume we have collected data from each of the k populations. Let xi1, xi2, xi3, … denote the ni observations from population i, i = 1, 2, 3, …, k. Let x̄i and si denote the sample mean and sample standard deviation of sample i, and let N = n1 + n2 + … + nk denote the total number of observations.

5 One possible solution (incorrect): choose the populations two at a time, then perform a two-sample t test of H0: μi = μj against HA: μi ≠ μj. Repeat this for every possible pair of populations.

6 The flaw with this procedure is that you are performing a collection of tests rather than a single test. If each test is performed with α = 0.05, then the probability that each individual test makes a type I error is 5%, but the probability that the group of tests makes at least one type I error could be considerably higher than 5%. That is, even if there is no difference in the means of the populations, the chance that this procedure declares a significant difference could be considerably higher than 5%.

7 The Bonferroni inequality. If N tests are performed, each with significance level α, then P[group of N tests makes at least one type I error] ≤ 1 – (1 – α)^N. Example: suppose α = 0.05 and N = 10; then P[group of N tests makes at least one type I error] ≤ 1 – (0.95)^10 ≈ 0.40.
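
As a quick numerical check of this bound, here is a minimal Python sketch; the values of α and N are the ones from the example above, and nothing beyond the standard language is needed:

```python
# Probability bound for a group of N tests, each at level alpha:
# 1 - (1 - alpha)**N.
alpha = 0.05
N = 10
bound = 1 - (1 - alpha) ** N
print(f"P[at least one type I error] is about {bound:.2f}")  # about 0.40
```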

8 For this reason we are going to consider a single test of H0: μ1 = μ2 = … = μk against HA: at least one pair of means differs. Note: if k = 10, the number of pairs of means (and hence the number of tests that would have to be performed) is 10(9)/2 = 45.

9 The F test

10 To test H0: μ1 = μ2 = … = μk against HA: at least one pair of means differs, use the test statistic F = [Σi ni(x̄i − x̄)² / (k − 1)] / [Σi Σj (xij − x̄i)² / (N − k)], where x̄ is the overall (grand) mean.

11 Σi ni(x̄i − x̄)² is called the Between Sum of Squares and is denoted by SS_Between. It measures the variability between samples. The statistic k – 1 is known as the Between degrees of freedom, and SS_Between / (k – 1) is called the Between Mean Square and is denoted by MS_Between.

12 Σi Σj (xij − x̄i)² is called the Within Sum of Squares and is denoted by SS_Within. The statistic N – k is known as the Within degrees of freedom, and SS_Within / (N – k) is called the Within Mean Square and is denoted by MS_Within.

13 Then the test statistic is F = MS_Between / MS_Within.

14 The computing formula for F. Compute: 1) Ti = Σj xij, the total for sample i; 2) G = Σi Ti, the grand total; 3) N = n1 + n2 + … + nk, the total number of observations; 4) Σi Σj xij², the sum of the squared observations; 5) Σi Ti²/ni.

15 Then: 1) SS_Total = Σi Σj xij² − G²/N; 2) SS_Between = Σi Ti²/ni − G²/N; 3) SS_Within = SS_Total − SS_Between; and F = [SS_Between/(k − 1)] / [SS_Within/(N − k)].
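
A minimal sketch of these computing formulas in Python follows; the three small samples are hypothetical illustration data, and only numpy is assumed:

```python
import numpy as np

# Hypothetical data: k = 3 samples (one array per population).
samples = [np.array([12.0, 15.0, 14.0, 11.0]),
           np.array([18.0, 20.0, 17.0, 19.0]),
           np.array([13.0, 14.0, 16.0, 12.0])]

k = len(samples)
n = np.array([len(s) for s in samples])        # sample sizes n_i
N = n.sum()                                    # total number of observations
T = np.array([s.sum() for s in samples])       # sample totals T_i
G = T.sum()                                    # grand total
sum_sq = sum((s ** 2).sum() for s in samples)  # sum of squared observations

SS_total = sum_sq - G ** 2 / N
SS_between = (T ** 2 / n).sum() - G ** 2 / N
SS_within = SS_total - SS_between

MS_between = SS_between / (k - 1)
MS_within = SS_within / (N - k)
F = MS_between / MS_within
print(SS_between, SS_within, F)
```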

16 The critical region for the F test: we reject H0 if F ≥ Fα, where Fα is the critical point under the F distribution with ν1 = k − 1 degrees of freedom in the numerator and ν2 = N − k degrees of freedom in the denominator.
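
The critical point Fα can be read from an F table or computed; a sketch using scipy's F distribution (the values of k and N here are hypothetical):

```python
from scipy.stats import f

# Hypothetical design: k = 3 samples with N = 12 observations in total.
k, N = 3, 12
alpha = 0.05
F_crit = f.ppf(1 - alpha, k - 1, N - k)  # critical point with k-1 and N-k d.f.
print(F_crit)  # reject H0 if the observed F is at least this large
```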

17 Example. In the following example we are comparing weight gains resulting from the following six diets: 1. Diet 1 - High Protein, Beef; 2. Diet 2 - High Protein, Cereal; 3. Diet 3 - High Protein, Pork; 4. Diet 4 - Low Protein, Beef; 5. Diet 5 - Low Protein, Cereal; 6. Diet 6 - Low Protein, Pork.

18 [Data table of the observed weight gains for the six diets; not reproduced in this transcript]

19 Hence the summary quantities 1)–5) are computed for the diet data [values not reproduced in this transcript].

20 Thus SS_Between = 4612.933, SS_Within = 11586.000, MS_Between = 922.587, MS_Within = 214.556, and F = 922.587 / 214.556 ≈ 4.3. Since F > 2.386 we reject H0.

21 The ANOVA Table A convenient method for displaying the calculations for the F-test

22 The ANOVA Table
Source  | d.f.  | Sum of Squares | Mean Square | F-ratio
Between | k - 1 | SS_Between     | MS_Between  | MS_Between / MS_Within
Within  | N - k | SS_Within      | MS_Within   |
Total   | N - 1 | SS_Total       |             |

23 The Diet Example
Source  | d.f. | Sum of Squares | Mean Square | F-ratio
Between | 5    | 4612.933       | 922.587     | 4.3 (p = 0.0023)
Within  | 54   | 11586.000      | 214.556     |
Total   | 59   | 16198.933      |             |
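
The F-ratio and p-value in this table can be reproduced from the sums of squares and degrees of freedom shown; a sketch assuming scipy for the F distribution:

```python
from scipy.stats import f

# Values taken from the ANOVA table above.
SS_between, df_between = 4612.933, 5
SS_within, df_within = 11586.000, 54

MS_between = SS_between / df_between            # about 922.587
MS_within = SS_within / df_within               # about 214.556
F_ratio = MS_between / MS_within                # about 4.3
p_value = f.sf(F_ratio, df_between, df_within)  # upper-tail p-value, about 0.0023
print(F_ratio, p_value)
```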

24 Equivalence of the F-test and the t-test when k = 2. The t-test uses the statistic t = (x̄1 − x̄2) / [s_pooled √(1/n1 + 1/n2)], where s_pooled² is the pooled estimate of variance.

25 The F-test uses the statistic F = MS_Between / MS_Within, which for k = 2 has 1 and N − 2 degrees of freedom.

26 For k = 2, SS_Between = n1(x̄1 − x̄)² + n2(x̄2 − x̄)² = (x̄1 − x̄2)² / (1/n1 + 1/n2), and MS_Within = s_pooled².

27 Hence F = (x̄1 − x̄2)² / [s_pooled² (1/n1 + 1/n2)] = t², so for k = 2 the F-test and the (two-sided) t-test are equivalent.
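
The equivalence is easy to verify numerically; a sketch with two hypothetical samples, assuming scipy's f_oneway and ttest_ind:

```python
import numpy as np
from scipy.stats import f_oneway, ttest_ind

# Two hypothetical samples (k = 2).
x1 = np.array([10.1, 12.3, 11.7, 9.8, 10.9])
x2 = np.array([13.0, 12.5, 14.1, 13.6, 12.9])

F, p_F = f_oneway(x1, x2)   # one-way ANOVA F-test
t, p_t = ttest_ind(x1, x2)  # pooled two-sample t-test (equal variances assumed)

print(F, t ** 2)            # F equals t squared
print(p_F, p_t)             # and the two p-values agree
```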

28 Using SPSS Note: The use of another statistical package such as Minitab is similar to using SPSS

29 Assume the data is contained in an Excel file

30 Each variable is in a column: 1. Weight gain (wtgn); 2. Diet (diet); 3. Source of protein (Source); 4. Level of protein (Level).

31 After starting the SPSS program the following dialogue box appears:

32 If you select Opening an existing file and press OK, the following dialogue box appears:

33 The following dialogue box appears:

34 If the variable names are in the file, ask it to read the names. If you do not specify the Range, the program will identify the Range. Once you click OK, two windows will appear.

35 One that will contain the output:

36 The other containing the data:

37 To perform ANOVA select Analyze->General Linear Model-> Univariate

38 The following dialog box appears

39 Select the dependent variable and the fixed factors. Press OK to perform the analysis.

40 The Output

41 Comments. The F-test tests H0: μ1 = μ2 = μ3 = … = μk against HA: at least one pair of means is different. If H0 is accepted we conclude that all means are equal (not significantly different). If H0 is rejected we conclude that at least one pair of means is significantly different. The F-test gives no information about which pairs of means are different. One can then use two-sample t tests to determine which pairs of means are significantly different.

42 Fisher's LSD (least significant difference) procedure: 1. Test H0: μ1 = μ2 = μ3 = … = μk against HA: at least one pair of means is different, using the ANOVA F-test. 2. If H0 is accepted we conclude that all means are equal (not significantly different) and stop. 3. If H0 is rejected we conclude that at least one pair of means is significantly different, and follow up with two-sample t tests to determine which pairs of means are significantly different.
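
A sketch of step 3, the LSD follow-up comparisons based on the pooled MS_Within from the ANOVA (the samples are hypothetical; numpy and scipy are assumed):

```python
from itertools import combinations

import numpy as np
from scipy.stats import t

# Hypothetical samples, one array per population.
samples = [np.array([12.0, 15.0, 14.0, 11.0]),
           np.array([18.0, 20.0, 17.0, 19.0]),
           np.array([13.0, 14.0, 16.0, 12.0])]

k = len(samples)
n = np.array([len(s) for s in samples])
N = n.sum()
means = np.array([s.mean() for s in samples])

# Pooled within-sample variance = MS_Within from the one-way ANOVA.
SS_within = sum(((s - s.mean()) ** 2).sum() for s in samples)
MS_within = SS_within / (N - k)

alpha = 0.05
t_crit = t.ppf(1 - alpha / 2, N - k)  # two-sided critical value on N - k d.f.

# Pairwise two-sample t tests using the pooled MS_Within.
for i, j in combinations(range(k), 2):
    se = np.sqrt(MS_within * (1 / n[i] + 1 / n[j]))
    t_ij = (means[i] - means[j]) / se
    verdict = "significant" if abs(t_ij) > t_crit else "not significant"
    print(f"populations {i + 1} vs {j + 1}: t = {t_ij:.2f} ({verdict})")
```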

43 Example. In the following example we are comparing weight gains resulting from the following six diets: 1. Diet 1 - High Protein, Beef; 2. Diet 2 - High Protein, Cereal; 3. Diet 3 - High Protein, Pork; 4. Diet 4 - Low Protein, Beef; 5. Diet 5 - Low Protein, Cereal; 6. Diet 6 - Low Protein, Pork.

44 [Data table of the observed weight gains for the six diets; not reproduced in this transcript]

45 Hence the summary quantities are computed for the diet data [values not reproduced in this transcript].

46 Thus MS_Between = 4612.933 / 5 = 922.587, MS_Within = 11586.000 / 54 = 214.556, and F = 922.587 / 214.556 ≈ 4.3.

47 The ANOVA Table
Source  | d.f. | Sum of Squares | Mean Square | F-ratio
Between | 5    | 4612.933       | 922.587     | 4.3 (p = 0.0023)
Within  | 54   | 11586.000      | 214.556     |
Total   | 59   | 16198.933      |             |
Thus since F > 2.386 we reject H0. Conclusion: there are significant differences amongst the k = 6 means.

48 Now we want to perform t tests to compare the k = 6 means. For each pair (i, j) we use the two-sample t statistic based on the pooled estimate MS_Within: t = (x̄i − x̄j) / √[MS_Within (1/ni + 1/nj)], with t0.025 = 2.005 for 54 d.f.

49 Table of means and t test results [not reproduced in this transcript]. Critical value t0.025 = 2.005 for 54 d.f.; t values that are significant are indicated in bold.

50 Conclusions: 1. There is no significant difference between diet 1 (high protein, beef) and diet 3 (high protein, pork). 2. There are no significant differences amongst diets 2, 4, 5 and 6 (i.e. high protein, cereal (diet 2) and the low protein diets (diets 4, 5 and 6)). 3. There are significant differences between diets 1 and 3 (the high protein meat diets) and the other diets (2, 4, 5, and 6). Major conclusion: high protein diets result in a higher weight gain, but only if the source of protein is a meat source.

51 These are similar conclusions to those made using exploratory techniques, e.g. examining box-plots.

52 [Box-plots of weight gain by diet, grouped by protein level (High Protein / Low Protein) and source (Beef, Cereal, Pork); not reproduced in this transcript]

53 Conclusions. Weight gain is higher for the high protein meat diets. Increasing the level of protein increases weight gain, but only if the source of protein is a meat source. Carrying out the F-test and Fisher's LSD ensures the significance of the conclusions; differences observed with exploratory methods alone could have occurred by chance.

54 Comparing k Population Proportions – the χ² test for independence

55 The two-sample test for proportions. The data can be displayed in the following table:
Population | 1       | 2       | Total
Success    | x1      | x2      | x1 + x2
Failure    | n1 - x1 | n2 - x2 | n1 + n2 - (x1 + x2)
Total      | n1      | n2      | n1 + n2

56 This problem can be extended in two ways: 1. Increasing the number of populations (columns) from 2 to k (or c). 2. Increasing the number of categories (rows) from 2 to r.
       | 1   | 2   | … | c   | Total
1      | x11 | x12 | … | x1c | R1
2      | x21 | x22 | … | x2c | R2
…      | …   | …   | … | …   | …
r      | xr1 | xr2 | … | xrc | Rr
Total  | C1  | C2  | … | Cc  | N

57 The χ² test for independence

58 Situation We have two categorical variables R and C. The number of categories of R is r. The number of categories of C is c. We observe n subjects from the population and count x ij = the number of subjects for which R = i and C = j. R = rows, C = columns

59 Example. Both systolic blood pressure (C) and serum cholesterol (R) were measured for a sample of n = 1237 subjects. The categories for blood pressure are: <126, 127-146, 147-166, 167+. The categories for cholesterol are: <200, 200-219, 220-259, 260+.

60 Table: two-way frequency

61 The χ² test for independence. Define Eij = (Ri Cj)/n = the expected frequency in the (i,j)th cell in the case of independence.

62 Justification for Eij = (Ri Cj)/n in the case of independence: let πij = P[R = i, C = j]. In the case of independence, πij = P[R = i] P[C = j] = πi πj. Estimating πi by Ri/n and πj by Cj/n, the expected frequency in the (i,j)th cell is n πij ≈ n (Ri/n)(Cj/n) = (Ri Cj)/n.

63 To test H0: R and C are independent against HA: R and C are not independent, use the test statistic χ² = Σi Σj (xij − Eij)² / Eij, where xij = observed frequency in the (i,j)th cell and Eij = expected frequency in the (i,j)th cell in the case of independence.
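
A sketch of the expected-frequency and test-statistic calculation for a small hypothetical r x c table, using only numpy:

```python
import numpy as np

# Hypothetical 3 x 4 table of observed counts x_ij.
x = np.array([[20, 15, 10,  5],
              [30, 25, 20, 15],
              [10, 20, 25, 30]])

R = x.sum(axis=1, keepdims=True)      # row totals R_i
C = x.sum(axis=0, keepdims=True)      # column totals C_j
n = x.sum()                           # grand total

E = R * C / n                         # expected frequencies under independence
chi2_stat = ((x - E) ** 2 / E).sum()  # chi-square test statistic
print(chi2_stat)
```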

64 Sampling distribution of the test statistic when H0 is true: the χ² distribution with degrees of freedom = (r - 1)(c - 1). Critical and acceptance region: reject H0 if χ² ≥ χ²α; accept H0 if χ² < χ²α.
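
The decision can then be made against the chi-square critical point; a sketch using scipy's chi-square distribution, continuing the hypothetical 3 x 4 table above:

```python
from scipy.stats import chi2

r, c = 3, 4                          # dimensions of the hypothetical table above
alpha = 0.05
df = (r - 1) * (c - 1)
chi2_crit = chi2.ppf(1 - alpha, df)  # chi-square critical point
print(df, chi2_crit)                 # reject H0 if the computed statistic >= chi2_crit
```

In practice, scipy.stats.chi2_contingency(x) returns the statistic, p-value, degrees of freedom and expected frequencies in a single call.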

65 [Observed and expected frequencies for the blood pressure / cholesterol example; not reproduced in this transcript]

66 Standardized residuals rij = (xij − Eij) / √Eij. Degrees of freedom = (r - 1)(c - 1) = 9. The computed test statistic [value not reproduced in this transcript] exceeds the critical value, so we reject H0 using α = 0.05.
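
A sketch of the standardized residuals (xij − Eij)/√Eij for the same kind of table (hypothetical counts; numpy only):

```python
import numpy as np

# The same hypothetical table of observed counts.
x = np.array([[20, 15, 10,  5],
              [30, 25, 20, 15],
              [10, 20, 25, 30]])

E = x.sum(axis=1, keepdims=True) * x.sum(axis=0, keepdims=True) / x.sum()
residuals = (x - E) / np.sqrt(E)  # standardized residuals (x_ij - E_ij) / sqrt(E_ij)
print(np.round(residuals, 2))     # large absolute values flag departures from independence
```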

67 Another Example This data comes from a Globe and Mail study examining the attitudes of the baby boomers. Data was collected on various age groups

68 One question with responses Are there differences in weekly consumption of alcohol related to age?

69 Table: Expected frequencies

70 Table: Residuals Conclusion: There is a significant relationship between age group and weekly alcohol use

71 Examining the residuals allows one to identify the cells that indicate a departure from independence. Large positive residuals indicate cells where the observed frequencies were larger than expected under independence. Large negative residuals indicate cells where the observed frequencies were smaller than expected under independence.

72 Another question with responses Are there differences in weekly internet use related to age? In an average week, how many times would you surf the internet?

73 Table: Expected frequencies

74 Table: Residuals Conclusion: There is a significant relationship between age group and weekly internet use

75 Echo (Age 20 – 29)

76 Gen X (Age 30 – 39)

77 Younger Boomers (Age 40 – 49)

78 Older Boomers (Age 50 – 59)

79 Pre Boomers (Age 60+)

80 Regression and Correlation: estimation by confidence intervals, hypothesis testing

