
Copyright © 2013, 2009, and 2007, Pearson Education, Inc. Chapter 14 Comparing Groups: Analysis of Variance Methods Section 14.1 One-Way ANOVA: Comparing.


2 Chapter 14 Comparing Groups: Analysis of Variance Methods. Section 14.1 One-Way ANOVA: Comparing Several Means

3 Analysis of Variance. The analysis of variance method compares means of several groups. Let g denote the number of groups. Each group has a corresponding population of subjects. The means of the response variable for the g populations are denoted by μ1, μ2, …, μg.

4 Hypotheses and Assumptions for the ANOVA Test Comparing Means. The analysis of variance is a significance test of the null hypothesis of equal population means: H0: μ1 = μ2 = … = μg. The alternative hypothesis is Ha: at least two of the population means are unequal.

5 Hypotheses and Assumptions for the ANOVA Test Comparing Means. The assumptions for the ANOVA test comparing population means are as follows: The population distributions of the response variable for the g groups are normal, with the same standard deviation for each group. Randomization (depends on data collection method): in a survey sample, independent random samples are selected from each of the g populations; for an experiment, subjects are randomly assigned separately to the g groups.

6 Example: Tolerance of Being on Hold? An airline has a toll-free telephone number for reservations. Often the call volume is heavy, and callers are placed on hold until a reservation agent is free to answer. The airline hopes a caller remains on hold until the call is answered, so as not to lose a potential customer.

7 Example: Tolerance of Being on Hold? The airline recently conducted a randomized experiment to analyze whether callers would remain on hold longer, on the average, if they heard: an advertisement about the airline and its current promotion; Muzak ("elevator music"); or classical music.

8 Example: Tolerance of Being on Hold? The company randomly selected one out of every 1000 calls in a week. For each call, they randomly selected one of the three recorded messages. They measured the number of minutes that the caller stayed on hold before hanging up (these calls were purposely not answered).

9 Example: Tolerance of Being on Hold? Table 14.1 Telephone Holding Times by Type of Recorded Message. Each observation is the number of minutes a caller remained on hold before hanging up, rounded to the nearest minute.

10 Example: Tolerance of Being on Hold? Denote the holding time means for the populations that these three random samples represent by: μ1 = mean for the advertisement, μ2 = mean for the Muzak, μ3 = mean for the classical music.

11 Example: Tolerance of Being on Hold? The hypotheses for the ANOVA test are: H0: μ1 = μ2 = μ3. Ha: at least two of the population means are different.

12 Example: Tolerance of Being on Hold? Here is a display of the sample means: Figure 14.1 Sample Means of Telephone Holding Times for Callers Who Hear One of Three Recorded Messages. Question: Since the sample means are quite different, can we conclude that the population means differ?

13 Example: Tolerance of Being on Hold? As you can see from the output on the previous page, the sample means are quite different. But even if the population means are equal, we expect the sample means to differ somewhat because of sampling variability. This alone is not sufficient evidence to enable us to reject H0.

14 Variability Between Groups and Within Groups Is the Key to Significance. The ANOVA method is used to compare population means. It is called analysis of variance because it uses evidence about two types of variability.

15 Variability Between Groups and Within Groups Is the Key to Significance. Two examples of data sets with equal means but unequal variability: Figure 14.2 Data from Table 14.1 in Figure 14.2a and Hypothetical Data in Figure 14.2b That Have the Same Means but Less Variability Within Groups.

16 Variability Between Groups and Within Groups Is the Key to Significance. Which case do you think gives stronger evidence against H0? What is the difference between the data in these two cases?

17 Variability Between Groups and Within Groups Is the Key to Significance. In both cases the variability between pairs of means is the same. In Case b the variability within each sample is much smaller than in Case a. The fact that Case b has less variability within each sample gives stronger evidence against H0.

18 ANOVA F-Test Statistic. The analysis of variance (ANOVA) F-test statistic is: F = (between-groups variability)/(within-groups variability). The larger the variability between groups relative to the variability within groups, the larger the F test statistic tends to be.

19 ANOVA F-Test Statistic. The test statistic for comparing means has the F distribution. The larger the F-test statistic value, the stronger the evidence against H0.

20 SUMMARY: ANOVA F-test for Comparing Population Means of Several Groups. 1. Assumptions: independent random samples; normal population distributions with equal standard deviations. 2. Hypotheses: H0: μ1 = μ2 = … = μg. Ha: at least two of the population means are different.

21 SUMMARY: ANOVA F-test for Comparing Population Means of Several Groups. 3. Test statistic: F = (between-groups estimate of variance)/(within-groups estimate of variance). The F sampling distribution has df1 = g − 1 and df2 = N − g (total sample size − number of groups).

22 SUMMARY: ANOVA F-test for Comparing Population Means of Several Groups. 4. P-value: right-tail probability above the observed F value. 5. Conclusion: If a decision is needed, reject H0 if P-value ≤ significance level (such as 0.05).
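The F statistic summarized above can be computed by hand. The following sketch does so for made-up data shaped like the holding-time experiment (three groups of five observations; these numbers are illustrative, not the textbook's Table 14.1):

```python
# One-way ANOVA F statistic computed from scratch (illustrative sketch).

def one_way_anova_f(groups):
    """Return (F, df1, df2) for a list of samples, one list per group."""
    g = len(groups)                       # number of groups
    n_total = sum(len(s) for s in groups)
    grand_mean = sum(sum(s) for s in groups) / n_total

    # Numerator: between-groups variance estimate (group means vs. grand mean)
    ss_between = sum(len(s) * (sum(s) / len(s) - grand_mean) ** 2 for s in groups)
    ms_between = ss_between / (g - 1)

    # Denominator: within-groups variance estimate (observations vs. own group mean)
    ss_within = sum(sum((y - sum(s) / len(s)) ** 2 for y in s) for s in groups)
    ms_within = ss_within / (n_total - g)

    return ms_between / ms_within, g - 1, n_total - g

# Hypothetical holding times (minutes) for three recorded messages
advert    = [5, 1, 11, 2, 8]
muzak     = [0, 1, 4, 6, 3]
classical = [13, 9, 8, 15, 7]

F, df1, df2 = one_way_anova_f([advert, muzak, classical])
print(round(F, 2), df1, df2)
```

The P-value is then the right-tail probability above this F value for the F(df1, df2) distribution, obtained from software or a table.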

23 The Variance Estimates and the ANOVA Table. Let σ denote the standard deviation for each of the g population distributions. One assumption for the ANOVA F-test is that each population has the same standard deviation, σ. The F-test statistic is the ratio of two estimates of σ², the population variance for each group.

24 The Variance Estimates and the ANOVA Table. The estimate of σ² in the denominator of the F-test statistic uses the variability within each group. The estimate of σ² in the numerator uses the variability between each sample mean and the overall mean for all the data.

25 The Variance Estimates and the ANOVA Table. Computer software displays the two estimates of σ² in an ANOVA table similar to the tables displayed in regression. The MS column contains the two estimates, which are called mean squares. The ratio of the two mean squares is the F-test statistic, and this F statistic has a P-value.

26 Example: Telephone Holding Times. This example is a continuation of the previous example, in which an airline conducted a randomized experiment to analyze whether callers would remain on hold longer, on the average, if they heard: an advertisement about the airline and its current promotion; Muzak ("elevator music"); or classical music.

27 Example: Telephone Holding Times. Denote the holding time means for the populations that these three random samples represent by: μ1 = mean for the advertisement, μ2 = mean for the Muzak, μ3 = mean for the classical music.

28 Example: Telephone Holding Times. The hypotheses for the ANOVA test are: H0: μ1 = μ2 = μ3. Ha: at least two of the population means are different.

29 Example: Telephone Holding Times. Table 14.2 ANOVA Table for F Test Using Data From Table 14.1.

30 Example: Telephone Holding Times. Since the P-value < 0.05, there is sufficient evidence to reject H0. We conclude that a difference exists among the three types of messages in the population mean amount of time that customers are willing to remain on hold.

31 Assumptions and the Effects of Violating Them. 1. Population distributions are normal: moderate violations of the normality assumption are not serious. 2. These distributions have the same standard deviation: moderate violations are also not serious. 3. The data resulted from randomization.

32 Assumptions and the Effects of Violating Them. You can construct box plots or dot plots of the sample data distributions to check for extreme violations of normality. Misleading results may occur with the F-test if the distributions are highly skewed and the sample size N is small.

33 Assumptions and the Effects of Violating Them. Misleading results may also occur with the F-test if there are relatively large differences among the standard deviations (the largest sample standard deviation more than double the smallest). The ANOVA methods presented here are for independent samples; for dependent samples, other techniques must be used.
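The rule of thumb above (largest sample standard deviation no more than double the smallest) is easy to check before running the F test. A minimal sketch with hypothetical samples:

```python
# Quick check of the equal-standard-deviation rule of thumb: the F test may
# mislead if the largest sample SD is more than double the smallest.
# The three samples below are made up for illustration.
from statistics import stdev

samples = {
    "group1": [5, 1, 11, 2, 8],
    "group2": [0, 1, 4, 6, 3],
    "group3": [13, 9, 8, 15, 7],
}

sds = {name: stdev(s) for name, s in samples.items()}
ratio = max(sds.values()) / min(sds.values())
print({k: round(v, 2) for k, v in sds.items()}, "ratio:", round(ratio, 2))
```

Here the ratio is below 2, so the equal-standard-deviation assumption is not a concern for these data.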

34 Using One F Test or Several t Tests to Compare the Means. Why not use multiple t-tests? When there are several groups, using the F test instead of multiple t tests allows us to control the probability of a Type I error. If separate t tests are used, the significance level applies to each individual comparison, not to the overall Type I error rate for all the comparisons. However, the F test does not tell us which groups differ or how different they are.

35 Chapter 14 Comparing Groups: Analysis of Variance Methods. Section 14.2 Estimating Differences in Groups for a Single Factor

36 Confidence Intervals Comparing Pairs of Means. Follow-up to an ANOVA F-test: when an analysis of variance F-test has a small P-value, the test does not specify which means are different or how different they are. We can estimate differences between population means with confidence intervals.

37 SUMMARY: Confidence Interval Comparing Means. For two groups i and j, with sample means ȳi and ȳj having sample sizes ni and nj, the 95% confidence interval for μi − μj is: (ȳi − ȳj) ± t.025 · s · √(1/ni + 1/nj), where s is the square root of the within-groups mean square (MS error). The t-score has df = total sample size − number of groups.

38 Confidence Intervals Comparing Pairs of Means. In the context of follow-up analyses after the ANOVA F test, when we form this confidence interval to compare a pair of means, some software (such as MINITAB) refers to this method as the Fisher method. When the confidence interval does not contain 0, we can infer that the population means are different. The interval shows just how different the means may be.

39 Example: Number of Good Friends and Happiness. A recent GSS study asked: "About how many good friends do you have?" The study also asked each respondent to indicate whether they were 'very happy,' 'pretty happy,' or 'not too happy.'

40 Example: Number of Good Friends and Happiness. Let the response variable y = number of good friends, and let the categorical explanatory variable x = happiness level.

41 Example: Number of Good Friends and Happiness. Table 14.3 Summary of ANOVA for Comparing Mean Number of Good Friends for Three Happiness Categories. The analysis is based on GSS data.

42 Example: Number of Good Friends and Happiness. Construct a 95% CI to compare the population mean number of good friends for the three pairs of happiness categories: very happy with pretty happy, very happy with not too happy, and pretty happy with not too happy. 95% CI formula: (ȳi − ȳj) ± t.025 · s · √(1/ni + 1/nj).

43 Example: Number of Good Friends and Happiness. First, use the output to find s, the square root of the MS error. With df = 828, use software or a table to find the t-value of 1.963.

44 Example: Number of Good Friends and Happiness. For comparing the very happy and pretty happy categories, we form the confidence interval for μ1 − μ2. Since the CI contains only positive numbers, this suggests that, on average, people who are very happy have more good friends than people who are pretty happy.
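The Fisher confidence interval calculation can be sketched directly from the formula. The summary statistics below are invented for illustration (they are not the GSS output values); the t multiplier 1.963 is the df = 828 value quoted in the text:

```python
# 95% CI for a difference of two group means using the pooled estimate s
# (square root of MS error). All summary numbers here are hypothetical.
from math import sqrt

def mean_diff_ci(ybar_i, ybar_j, n_i, n_j, s, t=1.963):
    """Fisher-style CI: (ybar_i - ybar_j) +/- t * s * sqrt(1/n_i + 1/n_j)."""
    half_width = t * s * sqrt(1 / n_i + 1 / n_j)
    diff = ybar_i - ybar_j
    return diff - half_width, diff + half_width

# Hypothetical summary statistics: very happy vs. pretty happy
lo, hi = mean_diff_ci(ybar_i=10.4, ybar_j=7.4, n_i=276, n_j=468, s=17.8)
print(round(lo, 2), round(hi, 2))
```

Because this (hypothetical) interval contains only positive values, it would suggest the first group's population mean is larger; an interval containing 0 would leave the direction undetermined.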

45 The Effects of Violating Assumptions. The t confidence intervals have the same assumptions as the ANOVA F test: (1) normal population distributions, (2) identical standard deviations, and (3) data obtained from randomization. When the sample sizes are large and the ratio of the largest standard deviation to the smallest is less than 2, these procedures are robust to violations of these assumptions. If the ratio of the largest standard deviation to the smallest exceeds 2, use the confidence interval formulas that use separate standard deviations for the groups.

46 Controlling Overall Confidence with Many Confidence Intervals. The confidence interval method just discussed is mainly used when g is small or when only a few comparisons are of main interest. The confidence level of 0.95 applies to any particular confidence interval that we construct.

47 Controlling Overall Confidence with Many Confidence Intervals. How can we construct the intervals so that the 95% confidence extends to the entire set of intervals rather than to each single interval? Methods that control the probability that all confidence intervals will contain the true differences in means are called multiple comparison methods. For these methods, all intervals are designed to contain the true parameters simultaneously with an overall fixed probability.

48 Controlling Overall Confidence with Many Confidence Intervals. The method that we will use is called the Tukey method. It is designed to give an overall confidence level very close to the desired value (such as 0.95). This method is available in most software packages.
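The Tukey intervals themselves come from software, but the core idea of a multiple comparison method, trading a wider per-interval margin for an overall guarantee, can be illustrated with the simpler Bonferroni approach (not the Tukey method itself): split the overall error probability equally among the comparisons.

```python
# Bonferroni sketch of the multiple-comparison idea: to keep overall error at
# 0.05 across all pairwise intervals, run each interval at level 0.05/(number
# of pairs). This is a simpler, more conservative alternative to Tukey.
from math import comb

g = 3                            # number of groups (e.g., happiness categories)
n_pairs = comb(g, 2)             # number of pairwise comparisons
overall_error = 0.05
per_comparison_error = overall_error / n_pairs
per_interval_confidence = 1 - per_comparison_error
print(n_pairs, round(per_interval_confidence, 4))
```

With 3 groups there are 3 pairwise intervals, each built at about 98.3% confidence so that all three hold simultaneously with at least 95% confidence; Tukey achieves the same overall guarantee with slightly narrower intervals.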

49 Example: Number of Good Friends. Table 14.4 Multiple Comparisons of Mean Good Friends for Three Happiness Categories. An asterisk (*) indicates a significant difference, with the confidence interval not containing 0.

50 ANOVA and Regression. ANOVA can be presented as a special case of multiple regression by using indicator variables to represent the factors. For example, with 3 groups we need 2 indicator variables to indicate group membership. The first indicator variable is x1 = 1 for observations from the first group, x1 = 0 otherwise.

51 ANOVA and Regression. The second indicator variable is x2 = 1 for observations from the second group, x2 = 0 otherwise. The indicator variables identify the group to which an observation belongs as follows: group 1 has x1 = 1, x2 = 0; group 2 has x1 = 0, x2 = 1; group 3 has x1 = 0, x2 = 0.

52 ANOVA and Regression. The multiple regression equation for the mean of y is μy = α + β1x1 + β2x2. Table 14.5 Interpretation of Coefficients of Indicator Variables in Regression Model. The indicator variables represent a categorical predictor with three categories specifying three groups.

53 Using Regression for the ANOVA Comparison of Means. For three groups, the null hypothesis for the ANOVA F test is H0: μ1 = μ2 = μ3. In the multiple regression model μy = α + β1x1 + β2x2, the group means are μ1 = α + β1, μ2 = α + β2, and μ3 = α. If H0 is true, then β1 = 0 and β2 = 0. Thus, the ANOVA hypothesis is equivalent to H0: β1 = β2 = 0 in the regression model.
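The equivalence above can be checked numerically: fitting μy = α + β1x1 + β2x2 by least squares reproduces the group means exactly, with α = mean of group 3, β1 = mean1 − mean3, β2 = mean2 − mean3. A self-contained sketch with made-up data:

```python
# One-way ANOVA as regression with indicator variables. Data are invented;
# group means are 6, 3, and 10, so we expect alpha=10, beta1=-4, beta2=-7.

groups = {1: [5.0, 7.0, 6.0], 2: [2.0, 4.0, 3.0], 3: [9.0, 11.0, 10.0]}

# Design-matrix rows [1, x1, x2] and response vector
X, y = [], []
for grp, values in groups.items():
    for v in values:
        X.append([1.0, 1.0 if grp == 1 else 0.0, 1.0 if grp == 2 else 0.0])
        y.append(v)

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Normal equations (X'X) b = X'y
XtX = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(3)] for i in range(3)]
Xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(3)]
alpha, beta1, beta2 = solve3(XtX, Xty)
print(round(alpha, 2), round(beta1, 2), round(beta2, 2))  # 10.0 -4.0 -7.0
```

Since β1 and β2 are the differences of groups 1 and 2 from group 3, testing H0: β1 = β2 = 0 in this regression is exactly the ANOVA test of equal means.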

54 Chapter 14 Comparing Groups: Analysis of Variance Methods. Section 14.3 Two-Way ANOVA

55 Types of ANOVA. One-way ANOVA is a bivariate method: it has a quantitative response variable and one categorical explanatory variable. Two-way ANOVA is a multivariate method: it has a quantitative response variable and two categorical explanatory variables.

56 Example: Amounts of Fertilizer and Manure. A recent study at Iowa State University: A field was partitioned into 20 equal-size plots. Each plot was planted with the same amount of corn seed. The goal was to study how the yield of corn later harvested depended on the levels of use of nitrogen-based fertilizer and manure. Each factor (fertilizer and manure) was measured in a binary manner.

57 Example: Amounts of Fertilizer and Manure. There are four treatments you can compare with this experiment, found by cross-classifying the two binary factors: fertilizer level and manure level. Table 14.7 Four Groups for Comparing Mean Corn Yield. These result from the two-way cross-classification of fertilizer level with manure level.

58 Example: Amounts of Fertilizer and Manure. Inference about effects in two-way ANOVA: a null hypothesis states that the population means are the same in each category of one factor, at each fixed level of the other factor. We could test H0: mean corn yield is equal for plots at the low and high levels of fertilizer, for each fixed level of manure.

59 Example: Amounts of Fertilizer and Manure. We could also test H0: mean corn yield is equal for plots at the low and high levels of manure, for each fixed level of fertilizer. The effects of individual factors tested with these two null hypotheses are called the main effects.

60 Assumptions for the Two-Way ANOVA F-test. The population distribution for each group is normal. The population standard deviations are identical. The data result from a random sample or randomized experiment.

61 SUMMARY: F-test Statistics in Two-Way ANOVA. For testing the main effect for a factor, the test statistic is the ratio of mean squares: F = (MS for the factor)/(MS error). The MS for the factor is a variance estimate based on between-groups variation for that factor. The MS error is a within-groups variance estimate that is always unbiased.

62 SUMMARY: F-test Statistics in Two-Way ANOVA. When the null hypothesis of equal population means for the factor is true, the F-test statistic values tend to fluctuate around 1. When it is false, they tend to be larger. The P-value is the right-tail probability above the observed F-value.
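For a balanced two-factor design, the main-effect F statistics summarized above can be computed directly from the cell data. The sketch below uses an invented balanced 2×2 layout (three replicates per cell; these are not the Table 14.9 yields):

```python
# Main-effect F statistics for a balanced 2x2 two-way ANOVA, from scratch.
# Yields are invented for illustration.

cells = {  # (fertilizer, manure) -> replicate yields
    ("low",  "low"):  [10, 12, 14],
    ("low",  "high"): [13, 15, 17],
    ("high", "low"):  [16, 18, 20],
    ("high", "high"): [19, 21, 23],
}

r = 3                                    # replicates per cell
all_y = [y for ys in cells.values() for y in ys]
grand = sum(all_y) / len(all_y)
cell_mean = {k: sum(v) / r for k, v in cells.items()}
# Marginal means for each factor level (averaging over the other factor)
a_mean = {a: sum(cell_mean[(a, b)] for b in ("low", "high")) / 2 for a in ("low", "high")}
b_mean = {b: sum(cell_mean[(a, b)] for a in ("low", "high")) / 2 for b in ("low", "high")}

# Between-groups sums of squares for each factor (2r observations per level)
ss_a = 2 * r * sum((m - grand) ** 2 for m in a_mean.values())   # fertilizer
ss_b = 2 * r * sum((m - grand) ** 2 for m in b_mean.values())   # manure
# Within-cells (error) mean square
ss_err = sum((y - cell_mean[k]) ** 2 for k, ys in cells.items() for y in ys)
ms_err = ss_err / (len(all_y) - 4)       # df = N - number of cells

f_fert, f_man = ss_a / ms_err, ss_b / ms_err   # each main effect has df1 = 1
print(f_fert, f_man)
```

Each F is compared to the F distribution with df1 = 1 (two levels per factor) and df2 = N − 4 to obtain its P-value.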

63 Example: Corn Yield. Data and sample statistics for each group: Table 14.9 Corn Yield by Fertilizer Level and Manure Level.

64 Example: Corn Yield. Output from two-way ANOVA: Table 14.10 Two-Way ANOVA for Corn Yield Data in Table 14.9.

65 Example: Corn Yield. First consider the hypothesis H0: mean corn yield is equal for plots at the low and high levels of fertilizer, for each fixed level of manure. From the output, you can obtain the F-test statistic of 6.33 with its corresponding P-value of 0.022. The small P-value indicates strong evidence that the mean corn yield depends on fertilizer level.

66 Example: Corn Yield. Next consider the hypothesis H0: mean corn yield is equal for plots at the low and high levels of manure, for each fixed level of fertilizer. From the output, you can obtain the F-test statistic of 6.88 with its corresponding P-value of 0.018. The small P-value indicates strong evidence that the mean corn yield depends on manure level.

67 Exploring Interaction between Factors in Two-Way ANOVA. No interaction between two factors means that the effect of either factor on the response variable is the same at each category of the other factor.
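The definition above has a concrete check: with two binary factors, no interaction means the difference between cell means for one factor is the same at each level of the other (parallel lines in the means plot). A minimal sketch with hypothetical cell means:

```python
# "No interaction" check: compare the fertilizer effect (difference in cell
# means) at each manure level. Equal differences mean parallel lines, i.e.
# no interaction in these means. Cell means are made up for illustration.

cell_means = {            # (fertilizer, manure) -> mean yield
    ("low",  "low"):  12.0,
    ("high", "low"):  18.0,
    ("low",  "high"): 15.0,
    ("high", "high"): 21.0,
}

effect_at_low_manure  = cell_means[("high", "low")]  - cell_means[("low", "low")]
effect_at_high_manure = cell_means[("high", "high")] - cell_means[("low", "high")]
print(effect_at_low_manure, effect_at_high_manure)  # both 6.0: no interaction
```

If the two differences were unequal, the lines in a plot like Figure 14.6 would not be parallel, signaling interaction.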

68 Exploring Interaction between Factors in Two-Way ANOVA. Figure 14.5 Mean Corn Yield, by Fertilizer and Manure Levels, Showing No Interaction.

69 Exploring Interaction between Factors in Two-Way ANOVA. A graph showing interaction: Figure 14.6 Mean Corn Yield, by Fertilizer and Manure Levels, Displaying Interaction.

70 Testing for Interaction. In conducting a two-way ANOVA, before testing the main effects, it is customary to test a third null hypothesis stating that there is no interaction between the factors in their effects on the response.

71 Testing for Interaction. The test statistic providing the sample evidence of interaction is: F = (MS for interaction)/(MS error). When H0 is false, the F-statistic tends to be large.

72 Example: Corn Yield Data. ANOVA table for a model that allows interaction: Table 14.14 Two-Way ANOVA of Mean Corn Yield by Fertilizer Level and Manure Level, Allowing Interaction.

73 Example: Corn Yield Data. The test statistic for H0: no interaction is F = (MS for interaction)/(MS error) = 3.04/2.78 = 1.10. The ANOVA table reports a corresponding P-value of 0.311. There is not much evidence of interaction. We would not reject H0 at the usual significance levels, such as 0.05.

74 Check Interaction Before Main Effects. In practice, in two-way ANOVA, you should first test the hypothesis of no interaction. It is not meaningful to test the main effects hypotheses when there is interaction.

75 Check Interaction Before Main Effects. If the evidence of interaction is not strong (that is, if the P-value is not small), then test the main effects hypotheses and/or construct confidence intervals for those effects.

76 Check Interaction Before Main Effects. If important evidence of interaction exists, plot and compare the cell means for a factor separately at each category of the other factor.

77 Why Not Instead Perform Two Separate One-Way ANOVAs? When you have two factors, you could perform two separate one-way ANOVAs rather than a two-way ANOVA, but: you learn more with a two-way ANOVA, since it indicates whether there is interaction; it is more cost-effective to study the variables together rather than running two separate experiments; and the residual variability tends to decrease, so we get better predictions, larger test statistics, and hence greater power for rejecting false null hypotheses.

78 Factorial ANOVA. The methods of two-way ANOVA can be extended to the analysis of several factors. A multifactor ANOVA with observations from all combinations of the factors is called factorial ANOVA. For example, with three factors, three-way ANOVA considers main effects for all three factors as well as possible interactions.

79 Use Regression With Categorical and Quantitative Predictors. In practice, when you have several predictors, both categorical and quantitative, it is sensible to build a multiple regression model containing both types of predictors.

