McGraw-Hill/Irwin Copyright © 2007 by The McGraw-Hill Companies, Inc. All rights reserved. Experimental Design and Analysis of Variance Chapter 10

10-2 Experimental Design and Analysis of Variance
10.1 Basic Concepts of Experimental Design
10.2 One-Way Analysis of Variance
10.3 The Randomized Block Design
10.4 Two-Way Analysis of Variance

10-3 Experimental Design #1
Up until now, we have considered only two ways of collecting and comparing data: independent random samples and paired (or matched) samples
Often, data are collected as the result of an experiment designed to systematically study how one or more factors (variables) influence the variable being studied

10-4 Experimental Design #2
In an experiment, there is strict control over the factors contributing to the experiment
The values or levels of the factors are called treatments
For example, in testing a medical drug, the experimenters decide which participants in the test get the drug and which ones get the placebo, instead of leaving the choice to the subjects
The objective is to compare and estimate the effects of the different treatments on the response variable

10-5 Experimental Design #3
The different treatments are assigned to objects (the test subjects) called experimental units
When a treatment is applied to more than one experimental unit, the treatment is said to be replicated
A designed experiment is an experiment in which the analyst controls which treatments are used and how they are applied to the experimental units

10-6 Experimental Design #4
In a completely randomized experimental design, independent random samples of experimental units are assigned to the treatments
For example, suppose three experimental units are to be assigned to each of five treatments
For a completely randomized experimental design, randomly pick three experimental units for one treatment, randomly pick three different experimental units from those remaining for the next treatment, and so on (see the sketch below)
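As an illustration of this assignment procedure, here is a minimal Python sketch (the unit labels and treatment names are hypothetical, not from the text) that randomly allocates three of fifteen experimental units to each of five treatments:

```python
import random

units = list(range(1, 16))                  # 15 hypothetical experimental units, labeled 1..15
treatments = ["T1", "T2", "T3", "T4", "T5"]

random.shuffle(units)                       # put the units in a random order
# take successive groups of three shuffled units, one group per treatment
assignment = {trt: units[3 * i:3 * i + 3] for i, trt in enumerate(treatments)}

for trt, group in assignment.items():
    print(f"Treatment {trt}: units {group}")
```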

10-7 Experimental Design #5
Once the experimental units are assigned and the experiment is performed, a value of the response variable is observed for each experimental unit
This yields a sample of values of the response variable for each treatment

10-8 Experimental Design #6
In a completely randomized experimental design, it is presumed that each sample is a random sample from the population of all possible values of the response variable that could be observed when using the specific treatment
The samples are independent of each other
This is reasonable because the completely randomized design ensures that each sample results from different measurements being taken on different experimental units
We can also say that an independent samples experiment is being performed

10-9 One-Way Analysis of Variance
We want to study the effects of all p treatments on a response variable
For each treatment, consider the mean and standard deviation of all possible values of the response variable when using that treatment
For treatment i, this mean is the treatment mean μ_i
One-way analysis of variance (one-way ANOVA) estimates and compares the effects of the different treatments on the response variable by estimating and comparing the treatment means μ_1, μ_2, …, μ_p

10-10 ANOVA Notation
n_i denotes the size of the sample randomly selected for treatment i
x_ij is the jth value of the response variable using treatment i
x̄_i is the average of the sample of n_i values for treatment i, and is the point estimate of the treatment mean μ_i
s_i is the standard deviation of the sample of n_i values for treatment i, and is the point estimate of the treatment (population) standard deviation σ_i
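A minimal sketch in Python (with made-up sample values) of how these per-treatment quantities would be computed:

```python
import numpy as np

# hypothetical samples of the response variable for p = 3 treatments
samples = {
    "T1": np.array([25.0, 27.0, 26.0, 28.0]),
    "T2": np.array([30.0, 29.0, 31.0, 32.0]),
    "T3": np.array([22.0, 24.0, 23.0, 25.0]),
}

for name, x in samples.items():
    n_i = len(x)              # sample size n_i
    xbar_i = x.mean()         # point estimate of the treatment mean mu_i
    s_i = x.std(ddof=1)       # point estimate of the treatment standard deviation sigma_i
    print(f"{name}: n = {n_i}, mean = {xbar_i:.2f}, s = {s_i:.3f}")
```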

10-11 One-Way ANOVA Assumptions
Completely randomized experimental design: we assume that a sample has been selected randomly for each of the p treatments on the response variable by using a completely randomized experimental design
Constant variance: the p populations of values of the response variable (associated with the p treatments) all have the same variance

10-12 One-Way ANOVA Assumptions Continued
Normality: the p populations of values of the response variable all have normal distributions
Independence: the samples of experimental units are randomly selected, independent samples

10-13 Testing for Significant Differences Between Treatment Means
Are there any statistically significant differences between the sample (treatment) means?
The null hypothesis is that the means of all p treatments are the same: H_0: μ_1 = μ_2 = … = μ_p
The alternative is that some (or all, but at least two) of the p treatments have different effects on the mean response: H_a: at least two of μ_1, μ_2, …, μ_p differ

10-14 Testing for Significant Differences Between Treatment Means Continued
Compare the between-treatment variability to the within-treatment variability
Between-treatment variability is the variability of the sample means from sample to sample
Within-treatment variability is the variability of the values within each sample

10-15 Partitioning the Total Variability in the Response
Total variability = Between-treatment variability + Within-treatment variability
Total sum of squares = Treatment sum of squares + Error sum of squares
SSTO = SST + SSE
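The partition can be checked numerically. The sketch below (hypothetical samples of the same kind as above) computes SST, SSE, and SSTO directly from their definitions and confirms SSTO = SST + SSE:

```python
import numpy as np

samples = [np.array([25., 27., 26., 28.]),
           np.array([30., 29., 31., 32.]),
           np.array([22., 24., 23., 25.])]

all_values = np.concatenate(samples)
grand_mean = all_values.mean()

# treatment (between-group) sum of squares: n_i * (xbar_i - grand mean)^2 summed over treatments
sst = sum(len(x) * (x.mean() - grand_mean) ** 2 for x in samples)
# error (within-group) sum of squares: (x_ij - xbar_i)^2 summed over all observations
sse = sum(((x - x.mean()) ** 2).sum() for x in samples)
# total sum of squares
ssto = ((all_values - grand_mean) ** 2).sum()

print(f"SST = {sst:.3f}, SSE = {sse:.3f}, SST + SSE = {sst + sse:.3f}, SSTO = {ssto:.3f}")
```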

10-16 Mean Squares
The treatment mean square is MST = SST / (p - 1)
The error mean square is MSE = SSE / (n - p)

10-17 F Test for Difference Between Treatment Means
Suppose that we want to compare p treatment means
The null hypothesis is that all treatment means are the same: H_0: μ_1 = μ_2 = … = μ_p
The alternative hypothesis is that they are not all the same: H_a: at least two of μ_1, μ_2, …, μ_p differ

10-18 F Test for Difference Between Treatment Means #2
Define the F statistic: F = MST / MSE
The p-value is the area under the F curve to the right of F, where the F curve has p - 1 numerator and n - p denominator degrees of freedom
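A sketch of the full test on hypothetical data, computing MST, MSE, the F statistic, and the right-tail p-value with SciPy:

```python
import numpy as np
from scipy import stats

samples = [np.array([25., 27., 26., 28.]),
           np.array([30., 29., 31., 32.]),
           np.array([22., 24., 23., 25.])]

p = len(samples)                          # number of treatments
n = sum(len(x) for x in samples)          # total number of observations
grand_mean = np.concatenate(samples).mean()

sst = sum(len(x) * (x.mean() - grand_mean) ** 2 for x in samples)
sse = sum(((x - x.mean()) ** 2).sum() for x in samples)

mst = sst / (p - 1)                       # treatment mean square
mse = sse / (n - p)                       # error mean square
f_stat = mst / mse
p_value = stats.f.sf(f_stat, p - 1, n - p)   # area under the F curve to the right of F

print(f"F = {f_stat:.3f}, p-value = {p_value:.4f}")
# the same F and p-value are returned by scipy.stats.f_oneway(*samples)
```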

10-19 F Test for Difference Between Treatment Means #3
Reject H_0 in favor of H_a at the α level of significance if F > F_α or if the p-value < α
F_α is based on p - 1 numerator and n - p denominator degrees of freedom

10-20 Pairwise Comparisons, Individual Intervals
Individual 100(1 - α)% confidence interval for μ_i - μ_h:
(x̄_i - x̄_h) ± t_{α/2} √(MSE (1/n_i + 1/n_h))
t_{α/2} is based on n - p degrees of freedom
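A small helper (hypothetical names and numbers, not from the text) that computes this individual interval from two treatment samples and the MSE of the one-way ANOVA:

```python
import numpy as np
from scipy import stats

def individual_ci(x_i, x_h, mse, n_minus_p, alpha=0.05):
    """Individual 100(1 - alpha)% CI for mu_i - mu_h using the pooled MSE from the ANOVA."""
    diff = np.mean(x_i) - np.mean(x_h)
    half = stats.t.ppf(1 - alpha / 2, n_minus_p) * np.sqrt(mse * (1 / len(x_i) + 1 / len(x_h)))
    return diff - half, diff + half

# hypothetical samples for treatments i and h; mse and n - p would come from the ANOVA table
x_i = [25.0, 27.0, 26.0, 28.0]
x_h = [22.0, 24.0, 23.0, 25.0]
print(individual_ci(x_i, x_h, mse=1.67, n_minus_p=9, alpha=0.05))
```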

10-21 Pairwise Comparisons, Simultaneous Intervals
Tukey simultaneous 100(1 - α)% confidence interval for μ_i - μ_h:
(x̄_i - x̄_h) ± q_α √(MSE / m)
q_α is the upper α percentage point of the studentized range for p and (n - p) from Table A.9
m denotes the common sample size
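In practice the studentized-range critical value is usually supplied by software rather than read from Table A.9. As one option, the sketch below (hypothetical data) uses statsmodels' pairwise_tukeyhsd, which produces simultaneous intervals for every treatment pair; with unequal sample sizes it applies the Tukey-Kramer adjustment.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# hypothetical response values for p = 3 treatments, m = 4 observations each
values = np.array([25., 27., 26., 28., 30., 29., 31., 32., 22., 24., 23., 25.])
groups = np.repeat(["T1", "T2", "T3"], 4)

result = pairwise_tukeyhsd(values, groups, alpha=0.05)
print(result.summary())   # simultaneous 95% intervals for each pairwise treatment difference
```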

10-22 The Randomized Block Design
A randomized block design compares p treatments (for example, production methods) on each of b blocks (experimental units or sets of units; for example, machine operators)
Each block is used exactly once to measure the effect of each and every treatment
The order in which each treatment is assigned to a block should be random
A generalization of the paired difference design, this design controls for variability in experimental units by comparing each treatment on the same (not independent) experimental units
Differences in the treatments are not hidden by differences in the experimental units (the blocks)

10-23 Randomized Block Design
Define:
x_ij = the value of the response variable observed when block j uses treatment i
x̄_i· = the mean of the b values of the response variable observed when using treatment i = the treatment i mean
x̄_·j = the mean of the p values of the response variable observed when using block j = the block j mean
x̄ = the mean of all bp values of the response variable observed in the experiment = the overall mean

10-24 The ANOVA Table, Randomized Blocks
Treatments: degrees of freedom = p - 1, sum of squares = SST, mean square MST = SST / (p - 1), F(trt) = MST / MSE
Blocks: degrees of freedom = b - 1, sum of squares = SSB, mean square MSB = SSB / (b - 1), F(blk) = MSB / MSE
Error: degrees of freedom = (p - 1)(b - 1), sum of squares = SSE, mean square MSE = SSE / [(p - 1)(b - 1)]
Total: degrees of freedom = pb - 1, sum of squares = SSTO
where SSTO = SST + SSB + SSE
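For reference, a randomized block analysis can be reproduced with statsmodels by fitting an additive model with treatment and block as categorical factors; the resulting ANOVA table has the rows shown above. The data frame below is hypothetical (p = 3 treatments, b = 4 blocks).

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# hypothetical data: each of 3 treatments measured once in each of 4 blocks
data = pd.DataFrame({
    "response":  [20, 22, 21, 23, 25, 27, 26, 28, 18, 19, 17, 20],
    "treatment": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
    "block":     ["b1", "b2", "b3", "b4"] * 3,
})

# additive model (no interaction term): response depends on treatment and block effects
model = smf.ols("response ~ C(treatment) + C(block)", data=data).fit()
print(anova_lm(model))   # sums of squares, mean squares, and F statistics for treatments and blocks
```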

10-25 Sum of Squares
SST measures the amount of between-treatment variability
SSB measures the amount of variability due to the blocks
SSTO measures the total amount of variability
SSE measures the amount of variability due to error
SSE = SSTO - SST - SSB

10-26 F Test for Treatment Effects
H_0: No difference between treatment effects
H_a: At least two treatment effects differ
Reject H_0 if F(trt) > F_α or p-value < α
F_α is based on p - 1 numerator and (p - 1)(b - 1) denominator degrees of freedom

10-27 F Test for Block Effects
H_0: No difference between block effects
H_a: At least two block effects differ
Reject H_0 if F(blk) > F_α or p-value < α
F_α is based on b - 1 numerator and (p - 1)(b - 1) denominator degrees of freedom

10-28 Estimation of Treatment Differences Under Randomized Blocks, Individual Intervals
Individual 100(1 - α)% confidence interval for μ_i· - μ_h·:
(x̄_i· - x̄_h·) ± t_{α/2} √(MSE (2 / b))
t_{α/2} is based on (p - 1)(b - 1) degrees of freedom
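A small helper (hypothetical names and numbers) for this individual interval, where MSE is taken from the randomized block ANOVA table:

```python
import numpy as np
from scipy import stats

def rbd_individual_ci(xbar_i, xbar_h, mse, p, b, alpha=0.05):
    """Individual 100(1 - alpha)% CI for a treatment difference in a randomized block design."""
    df_error = (p - 1) * (b - 1)                  # error degrees of freedom
    half = stats.t.ppf(1 - alpha / 2, df_error) * np.sqrt(mse * 2 / b)
    return xbar_i - xbar_h - half, xbar_i - xbar_h + half

# hypothetical treatment means and MSE, with p = 3 treatments and b = 4 blocks
print(rbd_individual_ci(xbar_i=26.0, xbar_h=21.5, mse=1.8, p=3, b=4))
```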

10-29 Estimation of Treatment Differences Under Randomized Blocks, Simultaneous Intervals
Tukey simultaneous 100(1 - α)% confidence interval for μ_i· - μ_h·:
(x̄_i· - x̄_h·) ± q_α √(MSE / b)
q_α is the upper α percentage point of the studentized range for p and (p - 1)(b - 1) from Table A.9

10-30 Two-Way Analysis of Variance
A two-factor factorial design compares the mean response for a levels of factor 1 (for example, display height) and b levels of factor 2 (for example, display width)
A treatment is a combination of a level of factor 1 and a level of factor 2
x_ijk = the response for the kth experimental unit (k = 1, …, m) assigned to the ith level of factor 1 and the jth level of factor 2

10-31 Two-Way ANOVA Table
Factor 1: degrees of freedom = a - 1, sum of squares = SS(1), mean square MS(1) = SS(1) / (a - 1), F(1) = MS(1) / MSE
Factor 2: degrees of freedom = b - 1, sum of squares = SS(2), mean square MS(2) = SS(2) / (b - 1), F(2) = MS(2) / MSE
Interaction: degrees of freedom = (a - 1)(b - 1), sum of squares = SS(int), mean square MS(int) = SS(int) / [(a - 1)(b - 1)], F(int) = MS(int) / MSE
Error: degrees of freedom = ab(m - 1), sum of squares = SSE, mean square MSE = SSE / [ab(m - 1)]
Total: degrees of freedom = abm - 1, sum of squares = SSTO
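A two-way ANOVA with replication can likewise be fit with statsmodels; the formula term C(f1) * C(f2) expands into both main effects plus their interaction, reproducing the table rows above. The data below are hypothetical (a = 2, b = 2, m = 3).

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# hypothetical balanced design: 2 levels of factor 1, 2 levels of factor 2, 3 replicates per cell
data = pd.DataFrame({
    "y":  [10, 11, 9, 14, 15, 13, 12, 13, 11, 20, 21, 19],
    "f1": ["low"] * 6 + ["high"] * 6,
    "f2": (["narrow"] * 3 + ["wide"] * 3) * 2,
})

# main effects for each factor plus their interaction
model = smf.ols("y ~ C(f1) * C(f2)", data=data).fit()
print(anova_lm(model, typ=2))   # SS(1), SS(2), SS(int), and SSE with F statistics
```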

10-32 Estimation of Treatment Differences Under Two-Way ANOVA, Factor 1
Individual 100(1 - α)% confidence interval for μ_i· - μ_i'·:
(x̄_i· - x̄_i'·) ± t_{α/2} √(MSE (2 / (bm)))
t_{α/2} is based on ab(m - 1) degrees of freedom
Tukey simultaneous 100(1 - α)% confidence interval for μ_i· - μ_i'·:
(x̄_i· - x̄_i'·) ± q_α √(MSE / (bm))
q_α is the upper α percentage point of the studentized range for a and ab(m - 1) from Table A.9

10-33 Estimation of Treatment Differences Under Two-Way ANOVA, Factor 2
Individual 100(1 - α)% confidence interval for μ_·j - μ_·j':
(x̄_·j - x̄_·j') ± t_{α/2} √(MSE (2 / (am)))
t_{α/2} is based on ab(m - 1) degrees of freedom
Tukey simultaneous 100(1 - α)% confidence interval for μ_·j - μ_·j':
(x̄_·j - x̄_·j') ± q_α √(MSE / (am))
q_α is the upper α percentage point of the studentized range for b and ab(m - 1) from Table A.9