Design and Analysis of Experiments
Dr. Tai-Yue Wang, Department of Industrial and Information Management, National Cheng Kung University, Tainan, TAIWAN, ROC

Analysis of Variance
Dr. Tai-Yue Wang, Department of Industrial and Information Management, National Cheng Kung University, Tainan, TAIWAN, ROC
Outline (1/2): Example; The ANOVA; Analysis of the Fixed Effects Model; Model Adequacy Checking; Practical Interpretation of Results; Determining Sample Size

Outline (2/2): Discovering Dispersion Effects; Nonparametric Methods in the ANOVA
What If There Are More Than Two Factor Levels? The t-test does not directly apply. There are lots of practical situations where there are either more than two levels of interest, or several factors of simultaneous interest. The ANalysis Of VAriance (ANOVA) is the appropriate analysis "engine" for these types of experiments. The ANOVA was developed by Fisher in the early 1920s and initially applied to agricultural experiments; it is used extensively today for industrial experiments.
An Example (1/6): the single-wafer plasma etching tool studied in this experiment.

An Example (2/6): An engineer is interested in investigating the relationship between the RF power setting and the etch rate for this tool. The objective of an experiment like this is to model the relationship between etch rate and RF power, and to specify the power setting that will give a desired target etch rate. The response variable is etch rate.
An Example (3/6): The engineer is interested in a particular gas (C2F6) and gap (0.80 cm), and wants to test four levels of RF power: 160W, 180W, 200W, and 220W. She decided to test five wafers at each level of RF power, so the experiment is replicated 5 times, with the runs made in random order.
An Example -- Data: the etch-rate observations, five wafers at each of the four RF power levels.
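The individual observations are not reproduced in this transcript. Below is a minimal sketch of how the data could be laid out for the analyses that follow, assuming the standard textbook etch-rate values; these reproduce the group means, standard deviations, and sums of squares reported in the Minitab output later in the deck.

```python
import numpy as np

# Assumed etch-rate observations (angstroms/min), five wafers per RF power level.
# These values reproduce the group means (551.2, 587.4, 625.4, 707.0) and the
# pooled standard deviation (18.27) shown in the Minitab output later on.
etch = {
    160: np.array([575, 542, 530, 539, 570]),
    180: np.array([565, 593, 590, 579, 610]),
    200: np.array([600, 651, 610, 637, 629]),
    220: np.array([725, 700, 715, 685, 710]),
}

for power, y in etch.items():
    print(f"{power} W: mean = {y.mean():.1f}, sd = {y.std(ddof=1):.2f}")
```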
An Example – Data Plot: scatter plot of etch rate versus RF power.

An Example – Questions: Does changing the power change the mean etch rate? Is there an optimum level for power? We would like to have an objective way to answer these questions. The t-test really doesn't apply here, since there are more than two factor levels.
The Analysis of Variance: In general, there will be a levels of the factor, or a treatments, and n replicates of the experiment, run in random order: a completely randomized design (CRD) with N = an total runs. We consider the fixed effects case; the random effects case will be discussed later. The objective is to test hypotheses about the equality of the a treatment means.
The Analysis of Variance: data layout. The observation y ij denotes the jth observation taken under treatment i, for i = 1, 2, …, a and j = 1, 2, …, n.
The Analysis of Variance: The name "analysis of variance" stems from a partitioning of the total variability in the response variable into components that are consistent with a model for the experiment.
The Analysis of Variance: The basic single-factor ANOVA model is y ij = μ + τ i + ε ij, for i = 1, 2, …, a and j = 1, 2, …, n, where μ is the overall mean, τ i is the ith treatment effect, and ε ij is a random error term with mean zero.
Models for the Data: There are several ways to write a model for the data. The means model is y ij = μ i + ε ij, where μ i = μ + τ i is the mean of the ith treatment; the effects model is the form given above. This is also known as the one-way or single-factor ANOVA model.
Models for the Data: Fixed or random factor? The a treatments could have been specifically chosen by the experimenter. In this case, the results may apply only to the levels considered in the analysis: fixed-effects models.

Models for the Data: The a treatments could instead be a random sample from a larger population of treatments. In this case, we should be able to extend the conclusions to all treatments in the population: random-effects models.
Analysis of the Fixed Effects Model: Recall the single-factor ANOVA model for the fixed-effects case. Define y i. as the total of the observations under treatment i, ȳ i. = y i./n as the average of the observations under treatment i, y.. as the grand total of all observations, and ȳ.. = y../N as the grand average, where N = an.
Analysis of the Fixed Effects Model: The hypotheses are H0: μ1 = μ2 = … = μa versus H1: μ i ≠ μ j for at least one pair (i, j).

Analysis of the Fixed Effects Model: Since μ i = μ + τ i, the equivalent hypotheses are H0: τ1 = τ2 = … = τa = 0 versus H1: τ i ≠ 0 for at least one i.
Analysis of the Fixed Effects Model-Decomposition: Total variability is measured by the total sum of squares SS T = Σ i Σ j (y ij - ȳ..)². The basic ANOVA partitioning is SS T = SS Treatments + SS E.
Analysis of the Fixed Effects Model-Decomposition: In detail, Σ i Σ j (y ij - ȳ..)² = n Σ i (ȳ i. - ȳ..)² + Σ i Σ j (y ij - ȳ i.)² + 2 Σ i Σ j (ȳ i. - ȳ..)(y ij - ȳ i.), and the cross-product term equals zero.
Analysis of the Fixed Effects Model-Decomposition: Thus SS T = SS Treatments + SS E, where SS Treatments = n Σ i (ȳ i. - ȳ..)² is the sum of squares due to treatments (between treatments) and SS E = Σ i Σ j (y ij - ȳ i.)² is the sum of squares due to error (within treatments).
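A short numerical sketch of the decomposition, using only the summary statistics reported in the Minitab output later in the deck (group means, group standard deviations, n = 5 replicates); the resulting sums of squares agree with the 66871 and 5339 in the ANOVA table, up to rounding of the reported standard deviations.

```python
import numpy as np

n = 5                                           # replicates per treatment
means = np.array([551.2, 587.4, 625.4, 707.0])  # treatment averages (etch-rate example)
sds   = np.array([20.02, 16.74, 20.53, 15.25])  # treatment standard deviations
a = len(means)
N = a * n

grand_mean = means.mean()                           # balanced design, so a simple average
ss_treat = n * np.sum((means - grand_mean) ** 2)    # SS_Treatments = n * sum (ybar_i. - ybar..)^2
ss_error = np.sum((n - 1) * sds ** 2)               # SS_E = sum (n - 1) * S_i^2
ss_total = ss_treat + ss_error                      # by the partitioning, SS_T = SS_Treatments + SS_E

ms_treat = ss_treat / (a - 1)
ms_error = ss_error / (N - a)
f0 = ms_treat / ms_error
print(ss_treat, ss_error, ss_total, f0)             # approximately 66871, 5339, 72210, F0 ~ 66.8
```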
Analysis of the Fixed Effects Model-Decomposition: A large value of SS Treatments reflects large differences in treatment means; a small value of SS Treatments likely indicates no differences in treatment means. The formal statistical hypotheses are H0: μ1 = μ2 = … = μa versus H1: at least one mean is different.
Analysis of the Fixed Effects Model-Decomposition: For SS E, recall that SS E = Σ i Σ j (y ij - ȳ i.)² = Σ i (n - 1) S i², where S i² is the sample variance of the observations in the ith treatment.
Analysis of the Fixed Effects Model-Decomposition: Combining the a sample variances, [(n - 1)S1² + (n - 1)S2² + … + (n - 1)Sa²] / [(n - 1) + (n - 1) + … + (n - 1)] = SS E / (N - a). This is a pooled estimate of the common variance σ² within each of the a treatments.
Analysis of the Fixed Effects Model-Mean Squares: Define MS Treatments = SS Treatments / (a - 1) and MS E = SS E / (N - a), where a - 1 and N - a are the corresponding degrees of freedom (df).
Analysis of the Fixed Effects Model-Mean Squares: Taking expectations, E(MS E) = σ² and E(MS Treatments) = σ² + n Σ i τ i² / (a - 1). That is, MS E estimates σ²; if there are no differences in treatment means (all τ i = 0), MS Treatments also estimates σ².
Analysis of the Fixed Effects Model-Statistical Analysis: Cochran's Theorem. Let Z i be NID(0,1) for i = 1, 2, …, ν and suppose Σ i Z i² = Q1 + Q2 + … + Qs, where s ≤ ν and Q i has ν i degrees of freedom (i = 1, 2, …, s). Then Q1, Q2, …, Qs are independent chi-square random variables with ν1, ν2, …, νs degrees of freedom, respectively, if and only if ν = ν1 + ν2 + … + νs.
Analysis of the Fixed Effects Model-Statistical Analysis: Cochran's Theorem implies that SS Treatments / σ² and SS E / σ² are independently distributed chi-square random variables. Thus, if the null hypothesis is true, the ratio F0 = MS Treatments / MS E is distributed as F with a - 1 and N - a degrees of freedom.
Analysis of the Fixed Effects Models-- Summary Table: Source of variation, sum of squares, degrees of freedom, mean square, and F0. Treatments: SS Treatments, a - 1, MS Treatments, F0 = MS Treatments / MS E. Error: SS E, N - a, MS E. Total: SS T, N - 1. The reference distribution for F0 is the F a-1, a(n-1) distribution. Reject the null hypothesis (equal treatment means) if F0 > F α, a-1, a(n-1).
Analysis of the Fixed Effects Models-- Example: Recall the etch-rate example. The hypotheses are H0: μ160 = μ180 = μ200 = μ220 versus H1: at least one mean is different.
Analysis of the Fixed Effects Models-- Example: ANOVA table. Power: df = 3, SS = 66871, MS = 22290, F0 = 66.80. Error: df = 16, SS = 5339, MS = 334. Total: df = 19, SS = 72210.
Analysis of the Fixed Effects Models-- Example: Rejection region. Reject H0 if F0 > F 0.05, 3, 16 = 3.24; since F0 = 66.80 far exceeds this value, the null hypothesis is rejected.
Analysis of the Fixed Effects Models-- Example: P-value. The P-value for F0 = 66.80 with 3 and 16 degrees of freedom is extremely small (Minitab reports P = 0.000), so the hypothesis of equal means is rejected.
Analysis of the Fixed Effects Models-- Example: Minitab output.

One-way ANOVA: Etch Rate versus Power

Source  DF     SS     MS      F      P
Power    3  66871  22290  66.80  0.000
Error   16   5339    334
Total   19  72210

S = 18.27   R-Sq = 92.61%   R-Sq(adj) = 91.22%

Individual 95% CIs For Mean Based on Pooled StDev
Level  N    Mean  StDev
160    5  551.20  20.02
180    5  587.40  16.74
200    5  625.40  20.53
220    5  707.00  15.25

Pooled StDev = 18.27
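The F statistic and P-value above can be cross-checked with SciPy's one-way ANOVA, assuming the textbook per-wafer data introduced earlier.

```python
from scipy import stats

# Assumed textbook etch-rate data (matches the group means in the Minitab output above).
y160 = [575, 542, 530, 539, 570]
y180 = [565, 593, 590, 579, 610]
y200 = [600, 651, 610, 637, 629]
y220 = [725, 700, 715, 685, 710]

f0, p = stats.f_oneway(y160, y180, y200, y220)
print(f"F0 = {f0:.2f}, p-value = {p:.2e}")   # F0 ~ 66.8, p far below 0.05
```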
Analysis of the Fixed Effects Model-Statistical Analysis: Coding the observations (for example, subtracting a constant) will not change the results. Even without the normality assumption, the ANOVA F test can be viewed as an approximation to the randomization test.
Analysis of the Fixed Effects Model- Estimation of the model parameters: Reasonable estimates of the overall mean and the treatment effects for the single-factor model are μ̂ = ȳ.. and τ̂ i = ȳ i. - ȳ.., for i = 1, 2, …, a.
Analysis of the Fixed Effects Model- Estimation of the model parameters: A 100(1 - α)% confidence interval for μ i is ȳ i. ± t α/2, N-a sqrt(MS E / n); a confidence interval for μ i - μ j is (ȳ i. - ȳ j.) ± t α/2, N-a sqrt(2 MS E / n).
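A sketch of both intervals using the quantities from the ANOVA table above (MS E = 334 with 16 error degrees of freedom, n = 5) and SciPy's t quantile; the interval for μ220 - μ200 should agree with the Fisher interval shown later.

```python
import numpy as np
from scipy import stats

ms_e, df_e, n = 334.0, 16, 5          # from the ANOVA table: MS_E, error df, replicates
t_crit = stats.t.ppf(0.975, df_e)     # two-sided 95% critical value

# 95% CI on a single treatment mean, e.g. mu at 220 W (ybar = 707.0)
half = t_crit * np.sqrt(ms_e / n)
print(f"mu_220: {707.0 - half:.1f} to {707.0 + half:.1f}")

# 95% CI on a difference of two means, e.g. mu_220 - mu_200 (ybars 707.0 and 625.4)
diff = 707.0 - 625.4
half_diff = t_crit * np.sqrt(2 * ms_e / n)
print(f"mu_220 - mu_200: {diff - half_diff:.1f} to {diff + half_diff:.1f}")
```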
Analysis of the Fixed Effects Model- Unbalanced data: For unbalanced data (n i observations under treatment i, N = Σ i n i), the computing formulas become SS T = Σ i Σ j y ij² - y..²/N and SS Treatments = Σ i y i.²/n i - y..²/N.
A little (very little) humor…
Model Adequacy Checking: Assumptions on the model. The errors are normally and independently distributed with mean zero and constant but unknown variance σ². Define the residual e ij = y ij - ŷ ij, where ŷ ij = ȳ i. is the estimate of y ij under the one-way model.
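A minimal residual computation, assuming the textbook etch-rate data; the Shapiro-Wilk test is one numerical complement to the normal probability plot on the next slide.

```python
import numpy as np
from scipy import stats

# Assumed textbook etch-rate data, one row per power level (160, 180, 200, 220 W).
y = np.array([[575, 542, 530, 539, 570],
              [565, 593, 590, 579, 610],
              [600, 651, 610, 637, 629],
              [725, 700, 715, 685, 710]], dtype=float)

fitted = y.mean(axis=1, keepdims=True)   # y-hat_ij = ybar_i. for the one-way model
residuals = y - fitted                   # e_ij = y_ij - ybar_i.

w, p = stats.shapiro(residuals.ravel())  # numerical check of the normality assumption
print(f"Shapiro-Wilk W = {w:.3f}, p-value = {p:.3f}")
```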
Model Adequacy Checking -- Normality: normal probability plot of the residuals.

Model Adequacy Checking -- Normality: Minitab four-in-one residual plot.

Model Adequacy Checking -- Plot of residuals in time sequence: residuals versus run order.

Model Adequacy Checking -- Residuals vs fitted: residuals versus fitted values.
Model Adequacy Checking -- Residuals vs fitted: Typical defect patterns are a horn (funnel) shape or a moon (bow) shape, both of which suggest nonconstant variance. A formal test for equal variances is Bartlett's test.
Model Adequacy Checking -- Test for equal variances (Bartlett's test), Minitab output:

Test for Equal Variances: Etch Rate versus Power
95% Bonferroni confidence intervals for standard deviations

Power  N    Lower    StDev    Upper
160    5  10.5675  20.0175  83.0477
180    5   8.8384  16.7422  69.4591
200    5  10.8357  20.5256  85.1557
220    5   8.0496  15.2480  63.2600

Bartlett's Test (Normal Distribution): Test statistic = 0.43, p-value = 0.933
Levene's Test (Any Continuous Distribution): Test statistic = 0.20, p-value = 0.898
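Both tests in the output above are available in SciPy; a sketch assuming the textbook etch-rate data (scipy.stats.levene defaults to the median-centered, Brown-Forsythe variant).

```python
from scipy import stats

groups = [[575, 542, 530, 539, 570],   # 160 W
          [565, 593, 590, 579, 610],   # 180 W
          [600, 651, 610, 637, 629],   # 200 W
          [725, 700, 715, 685, 710]]   # 220 W

stat_b, p_b = stats.bartlett(*groups)                  # assumes normality
stat_l, p_l = stats.levene(*groups, center='median')   # more robust to non-normality
print(f"Bartlett: {stat_b:.2f} (p = {p_b:.3f}); Levene: {stat_l:.2f} (p = {p_l:.3f})")
```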
Model Adequacy Checking -- Test for equal variances: Bartlett's test (Minitab graphical output).
Model Adequacy Checking -- Variance-stabilizing transformations: To deal with nonconstant variance, transform the response. If the observations follow a Poisson distribution, use the square-root transformation; if the observations follow a lognormal distribution, use the logarithmic transformation.

Model Adequacy Checking -- Variance-stabilizing transformations: If the observations are binomial data, use the arcsine transformation. For other cases, choose a transformation by examining the relationship between the spread of the observations and the mean.
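A sketch of the three transformations as they would be applied in NumPy; the arrays y and p are hypothetical counts and binomial proportions, used only for illustration.

```python
import numpy as np

y = np.array([3.0, 7.0, 12.0, 20.0])     # hypothetical count-like responses
p = np.array([0.10, 0.25, 0.40, 0.55])   # hypothetical binomial proportions

y_sqrt   = np.sqrt(y)             # Poisson-like counts: square-root transformation
y_log    = np.log(y)              # lognormal-like responses: logarithmic transformation
p_arcsin = np.arcsin(np.sqrt(p))  # binomial proportions: arcsine (angular) transformation
```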
Practical Interpretation of Results – Regression Model: The one-way ANOVA model is a regression model, similar to the simple linear regression model y = β0 + β1 x + ε with RF power as the regressor.
Practical Interpretation of Results – Regression Model: computer results.

Regression Analysis: Etch Rate versus Power
The regression equation is Etch Rate = 138 + 2.53 Power

Predictor    Coef  SE Coef      T      P
Constant   137.62    41.21   3.34  0.004
Power      2.5270   0.2154  11.73  0.000

S = 21.5413   R-Sq = 88.4%   R-Sq(adj) = 87.8%

Analysis of Variance
Source          DF     SS     MS       F      P
Regression       1  63857  63857  137.62  0.000
Residual Error  18   8352    464
Total           19  72210

Unusual Observations
Obs  Power  Etch Rate     Fit  SE Fit  Residual  St Resid
 11    200     600.00  643.02    5.28    -43.02    -2.06R
R denotes an observation with a large standardized residual.
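The same straight-line fit can be reproduced with SciPy's linregress, again assuming the textbook per-wafer data.

```python
import numpy as np
from scipy import stats

power = np.repeat([160, 180, 200, 220], 5)          # 20 runs, 5 per level
etch  = np.array([575, 542, 530, 539, 570,
                  565, 593, 590, 579, 610,
                  600, 651, 610, 637, 629,
                  725, 700, 715, 685, 710], dtype=float)

fit = stats.linregress(power, etch)
print(f"Etch Rate = {fit.intercept:.0f} + {fit.slope:.2f} * Power, R^2 = {fit.rvalue**2:.3f}")
# Expected to be close to the Minitab equation: Etch Rate = 138 + 2.53 Power
```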
The Regression Model: plot of the fitted regression model for etch rate versus RF power.
Practical Interpretation of Results – Comparison of Means: The analysis of variance tests the hypothesis of equal treatment means. Assume that residual analysis is satisfactory. If that hypothesis is rejected, we don't know which specific means are different. Determining which specific means differ following an ANOVA is called the multiple comparisons problem.
Practical Interpretation of Results – Comparison of Means: There are lots of ways to do this. We will use pairwise t-tests on means, sometimes called Fisher's Least Significant Difference (or Fisher's LSD) method.
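A sketch of the LSD computation from the ANOVA quantities above (MS E = 334, 16 error degrees of freedom, n = 5 per level, alpha = 0.05); any pair of treatment averages differing by more than the LSD is declared significantly different.

```python
import numpy as np
from itertools import combinations
from scipy import stats

means = {160: 551.2, 180: 587.4, 200: 625.4, 220: 707.0}   # treatment averages
ms_e, df_e, n, alpha = 334.0, 16, 5, 0.05

lsd = stats.t.ppf(1 - alpha / 2, df_e) * np.sqrt(2 * ms_e / n)   # least significant difference
print(f"LSD = {lsd:.2f}")

for (p1, m1), (p2, m2) in combinations(means.items(), 2):
    diff = abs(m1 - m2)
    verdict = "significant" if diff > lsd else "not significant"
    print(f"{p1} W vs {p2} W: |difference| = {diff:.1f} -> {verdict}")
```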
Practical Interpretation of Results – Comparison of Means:

Fisher 95% Individual Confidence Intervals
All Pairwise Comparisons among Levels of Power
Simultaneous confidence level = 81.11%

Power = 160 subtracted from:
Power   Lower  Center   Upper
180     11.71   36.20   60.69
200     49.71   74.20   98.69
220    131.31  155.80  180.29

Power = 180 subtracted from:
Power   Lower  Center   Upper
200     13.51   38.00   62.49
220     95.11  119.60  144.09

Power = 200 subtracted from:
Power   Lower  Center   Upper
220     57.11   81.60  106.09
Practical Interpretation of Results – Graphical Comparison of Means.
Practical Interpretation of Results – Contrasts: A contrast is a linear combination of parameters, Γ = Σ i c i μ i, with coefficients satisfying Σ i c i = 0. So the hypothesis becomes H0: Γ = Σ i c i μ i = 0 versus H1: Γ ≠ 0.
Practical Interpretation of Results – Contrasts: Examples. For instance, the hypothesis H0: μ1 = μ2 can be tested with the contrast coefficients (1, -1, 0, 0), and the hypothesis H0: μ1 + μ3 = μ2 + μ4 with the coefficients (1, -1, 1, -1).
Practical Interpretation of Results – Contrasts: Testing with a t-test. The contrast average is C = Σ i c i ȳ i., with variance V(C) = (σ²/n) Σ i c i². The test statistic is t0 = Σ i c i ȳ i. / sqrt((MS E / n) Σ i c i²), which is compared with t α/2, N-a.
Practical Interpretation of Results – Contrasts: Testing with an F-test. The test statistic is F0 = t0² = (Σ i c i ȳ i.)² / ((MS E / n) Σ i c i²).

Practical Interpretation of Results – Contrasts: Testing with an F-test. Reject the hypothesis that the contrast equals zero if F0 > F α, 1, N-a.
Practical Interpretation of Results – Contrasts: Confidence interval. A 100(1 - α)% confidence interval on the contrast Σ i c i μ i is Σ i c i ȳ i. ± t α/2, N-a sqrt((MS E / n) Σ i c i²).
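A sketch for one illustrative contrast, comparing the average of the two low power settings with the average of the two high settings (coefficients 1, 1, -1, -1); the summary quantities are taken from the ANOVA output above.

```python
import numpy as np
from scipy import stats

means = np.array([551.2, 587.4, 625.4, 707.0])   # ybar_i. for 160, 180, 200, 220 W
c     = np.array([1, 1, -1, -1])                 # illustrative contrast: low vs high power
ms_e, n, df_e, alpha = 334.0, 5, 16, 0.05

C  = np.dot(c, means)                            # estimated contrast
se = np.sqrt(ms_e * np.sum(c**2) / n)            # standard error of the contrast
t0 = C / se                                      # t statistic with N - a df
f0 = t0**2                                       # equivalent F statistic with 1 and N - a df

t_crit = stats.t.ppf(1 - alpha / 2, df_e)
lo, hi = C - t_crit * se, C + t_crit * se
print(f"C = {C:.1f}, t0 = {t0:.2f}, F0 = {f0:.2f}, 95% CI = ({lo:.1f}, {hi:.1f})")
```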
Practical Interpretation of Results – Contrasts: Standardized contrasts. When more than one contrast is of interest, it is helpful to standardize them so that each has variance σ²; the standardized contrast uses the coefficients c i* = c i / sqrt((1/n) Σ j c j²).
Practical Interpretation of Results – Contrasts: Unequal sample sizes. The contrast coefficients must satisfy Σ i n i c i = 0, and the t statistic becomes t0 = Σ i c i ȳ i. / sqrt(MS E Σ i (c i²/n i)).

Practical Interpretation of Results – Contrasts: Unequal sample sizes. The contrast sum of squares becomes SS C = (Σ i c i ȳ i.)² / Σ i (c i²/n i).
Practical Interpretation of Results – Orthogonal Contrasts: Two contrasts with coefficients {c i} and {d i} are orthogonal if Σ i c i d i = 0, or, for an unbalanced design, if Σ i n i c i d i = 0.
Practical Interpretation of Results – Orthogonal Contrasts: Why use orthogonal contrasts? For a treatments, a set of a - 1 orthogonal contrasts partitions the sum of squares due to treatments into a - 1 independent single-degree-of-freedom components; tests performed on orthogonal contrasts are independent.
Practical Interpretation of Results – Orthogonal Contrasts: Example. For a = 4 treatments, one mutually orthogonal set of coefficients is (1, -1, 0, 0), (0, 0, 1, -1), and (1, 1, -1, -1).

Practical Interpretation of Results – Orthogonal Contrasts: Example for contrasts with the etch-rate data; see the sketch below.
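The sketch below checks orthogonality and splits SS Treatments into single-degree-of-freedom components for the illustrative coefficient sets given above; with a balanced design and a complete orthogonal set, the contrast sums of squares add up to SS Treatments.

```python
import numpy as np

means = np.array([551.2, 587.4, 625.4, 707.0])   # treatment averages
n = 5
contrasts = np.array([[1, -1,  0,  0],
                      [0,  0,  1, -1],
                      [1,  1, -1, -1]])          # illustrative mutually orthogonal set

# Orthogonality check: every pair of coefficient vectors has zero dot product.
gram = contrasts @ contrasts.T
assert np.all(gram[~np.eye(3, dtype=bool)] == 0)

ss_sum = 0.0
for c in contrasts:
    ss_c = (c @ means) ** 2 / (np.sum(c**2) / n)   # single-df contrast sum of squares
    ss_sum += ss_c
    print(f"{c}: SS = {ss_c:.1f}")
print(f"Sum of contrast SS = {ss_sum:.1f}  (approximately SS Treatments = 66871)")
```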
Practical Interpretation of Results – Scheffé's method for comparing all contrasts: This method compares any and all possible contrasts between treatment means. Suppose that a set of m contrasts in the treatment means, Γ u = c 1u μ1 + c 2u μ2 + … + c au μa (u = 1, 2, …, m), is of interest. The corresponding contrast in the treatment averages is C u = c 1u ȳ1. + c 2u ȳ2. + … + c au ȳa..

Practical Interpretation of Results – Scheffé's method for comparing all contrasts: The standard error of this contrast is S Cu = sqrt(MS E Σ i (c iu²/n i)). The critical value against which C u should be compared is S α,u = S Cu sqrt((a - 1) F α, a-1, N-a). If |C u| > S α,u, the hypothesis that the contrast Γ u equals zero is rejected.
Practical Interpretation of Results – Scheffé's method for comparing all contrasts: The simultaneous confidence intervals with type I error α are C u - S α,u ≤ Γ u ≤ C u + S α,u.
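A sketch of Scheffé's procedure for the same illustrative low-versus-high-power contrast, using MS E = 334 and the F quantile with a - 1 = 3 and N - a = 16 degrees of freedom.

```python
import numpy as np
from scipy import stats

means = np.array([551.2, 587.4, 625.4, 707.0])
c = np.array([1, 1, -1, -1])          # illustrative contrast in the treatment averages
ms_e, n, a, N, alpha = 334.0, 5, 4, 20, 0.05

C_u = c @ means
se_u = np.sqrt(ms_e * np.sum(c**2 / n))                           # standard error of the contrast
S_crit = se_u * np.sqrt((a - 1) * stats.f.ppf(1 - alpha, a - 1, N - a))

print(f"|C| = {abs(C_u):.1f}, Scheffe critical value = {S_crit:.1f}")
print(f"Simultaneous CI: ({C_u - S_crit:.1f}, {C_u + S_crit:.1f})")
# Reject H0 (contrast = 0) whenever |C| exceeds the critical value.
```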
Practical Interpretation of Results – example for Scheffé's method: Two contrasts of interest are specified in the treatment means, and their numerical values C1 and C2 are computed from the treatment averages.
Practical Interpretation of Results – example for Scheffé's method: Compute the standard errors and the one percent critical values S 0.01,1 and S 0.01,2. Since |C1| > S 0.01,1 and |C2| > S 0.01,2, both contrast hypotheses should be rejected.
Practical Interpretation of Results – comparing pairs of treatment means: Tukey's test, Fisher's Least Significant Difference (LSD) method, and Hsu's MCB method.
Practical Interpretation of Results – comparing pairs of treatment means (computer output):

One-way ANOVA: Etch Rate versus Power

Source  DF     SS     MS      F      P
Power    3  66871  22290  66.80  0.000
Error   16   5339    334
Total   19  72210

S = 18.27   R-Sq = 92.61%   R-Sq(adj) = 91.22%

Individual 95% CIs For Mean Based on Pooled StDev
Level  N    Mean  StDev
160    5  551.20  20.02
180    5  587.40  16.74
200    5  625.40  20.53
220    5  707.00  15.25

Pooled StDev = 18.27
Practical Interpretation of Results – comparing pairs of treatment means (computer output):

Grouping Information Using Tukey Method
Power  N    Mean  Grouping
220    5  707.00  A
200    5  625.40  B
180    5  587.40  C
160    5  551.20  D
Means that do not share a letter are significantly different.

Tukey 95% Simultaneous Confidence Intervals
All Pairwise Comparisons among Levels of Power
Individual confidence level = 98.87%

Power = 160 subtracted from:
Power   Lower  Center   Upper
180      3.11   36.20   69.29
200     41.11   74.20  107.29
220    122.71  155.80  188.89

Power = 180 subtracted from:
Power   Lower  Center   Upper
200      4.91   38.00   71.09
220     86.51  119.60  152.69

Power = 200 subtracted from:
Power   Lower  Center   Upper
220     48.51   81.60  114.69
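The Tukey comparisons above can be reproduced with statsmodels, assuming the textbook per-wafer data.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

power = np.repeat([160, 180, 200, 220], 5)
etch  = np.array([575, 542, 530, 539, 570,
                  565, 593, 590, 579, 610,
                  600, 651, 610, 637, 629,
                  725, 700, 715, 685, 710], dtype=float)

result = pairwise_tukeyhsd(endog=etch, groups=power, alpha=0.05)
print(result)   # pairwise differences, adjusted p-values, simultaneous 95% intervals
```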
Practical Interpretation of Results – comparing pairs of treatment means (computer output):

Hsu's MCB (Multiple Comparisons with the Best)
Family error rate = 0.05
Critical value = 2.23
Intervals for level mean minus largest of other level means

Level    Lower   Center   Upper
160    -181.53  -155.80    0.00
180    -145.33  -119.60    0.00
200    -107.33   -81.60    0.00
220       0.00    81.60  107.33
Practical Interpretation of Results – comparing pairs of treatment means (computer output):

Grouping Information Using Fisher Method
Power  N    Mean  Grouping
220    5  707.00  A
200    5  625.40  B
180    5  587.40  C
160    5  551.20  D
Means that do not share a letter are significantly different.

Fisher 95% Individual Confidence Intervals
All Pairwise Comparisons among Levels of Power
Simultaneous confidence level = 81.11%

Power = 160 subtracted from:
Power   Lower  Center   Upper
180     11.71   36.20   60.69
200     49.71   74.20   98.69
220    131.31  155.80  180.29

Power = 180 subtracted from:
Power   Lower  Center   Upper
200     13.51   38.00   62.49
220     95.11  119.60  144.09

Power = 200 subtracted from:
Power   Lower  Center   Upper
220     57.11   81.60  106.09
Practical Interpretation of Results – comparing treatment means with a control: Dunnett's method. The control is the treatment against which the others are compared, so there are a - 1 comparisons in total.

Dunnett's comparisons with a control
Family error rate = 0.05
Individual error rate = 0.0196
Critical value = 2.59
Control = level (160) of Power
Intervals for treatment mean minus control mean

Level   Lower  Center   Upper
180      6.25   36.20   66.15
200     44.25   74.20  104.15
220    125.85  155.80  185.75
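A sketch of the Dunnett interval arithmetic using the critical value 2.59 reported by Minitab above, with 160 W as the control; recent SciPy releases also provide scipy.stats.dunnett, but the manual calculation keeps the example tied to the numbers on the slide.

```python
import numpy as np

means = {160: 551.2, 180: 587.4, 200: 625.4, 220: 707.0}
control = 160
ms_e, n = 334.0, 5
d_crit = 2.59                        # Dunnett critical value reported by Minitab (3 comparisons, 16 df)

half = d_crit * np.sqrt(2 * ms_e / n)
for level, m in means.items():
    if level == control:
        continue
    diff = m - means[control]
    print(f"{level} W minus control: {diff:.1f}  interval ({diff - half:.2f}, {diff + half:.2f})")
```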
Determining Sample Size -- Minitab (menu: Stat > Power and Sample Size > One-Way ANOVA; number of levels = 4, sample size left blank, maximum difference = 75, power value = 0.9, SD = 25):

One-way ANOVA
Alpha = 0.01   Assumed standard deviation = 25   Factors: 1   Number of levels: 4

Maximum Difference  Sample Size  Target Power  Actual Power
75                  6            0.9           0.915384

The sample size is for each level.
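A comparable calculation with statsmodels. The effect size is Cohen's f for the worst-case configuration in which two means are separated by the maximum difference of 75 and the rest sit at the grand mean; nobs in FTestAnovaPower is assumed here to be the total sample size across the four levels.

```python
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

a, sigma, max_diff, alpha, target_power = 4, 25.0, 75.0, 0.01, 0.90

# Worst-case effect size: two extreme means separated by max_diff, the rest at the center.
group_means = np.array([-max_diff / 2, max_diff / 2, 0.0, 0.0])
f_effect = np.sqrt(np.mean(group_means**2)) / sigma      # Cohen's f

n_total = FTestAnovaPower().solve_power(effect_size=f_effect, alpha=alpha,
                                        power=target_power, k_groups=a)
n_per_group = int(np.ceil(n_total / a))
print(f"Cohen's f = {f_effect:.2f}, n per group = {n_per_group}")  # expected near Minitab's 6 per level
```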
Dispersion Effects: The standard ANOVA addresses location effects (differences in means). When different factor levels affect the variability of the response, there are dispersion effects. Example: both the average and the standard deviation of a response variable are measured.
Dispersion Effects: The ANOVA on the averages found no location effects. Transform each standard deviation to y = ln(s); the transformed values for the four ratio control algorithms across the six observations are:

Ratio Control Algorithm   Obs 1      Obs 2      Obs 3      Obs 4      Obs 5      Obs 6
1                        -2.99573   -3.21888   -2.99573   -2.81341   -3.50656   -2.99573
2                        -3.21888   -3.91202   -3.50656   -2.99573   -3.50656   -2.40795
3                                   -2.04022   -2.20727   -1.89712   -2.52573   -2.12026
4                        -3.50656   -3.21888   -2.99573              -3.50656   -3.91202
Dispersion Effects: The ANOVA on y = ln(s) found dispersion effects.

One-way ANOVA: y = ln(s) versus Algorithm
Source     DF      SS      MS      F      P
Algorithm   3  6.1661  2.0554  21.96  0.000
Error      20  1.8716  0.0936
Total      23  8.0377

S = 0.3059   R-Sq = 76.71%   R-Sq(adj) = 73.22%
Nonparametric Methods in the ANOVA: When the normality assumption is not justified, use the Kruskal-Wallis test. Rank the observations y ij in ascending order and replace each observation by its rank R ij; in case of ties, assign the average rank to the tied observations. The test statistic is given below.
Regression and ANOVA:

Regression Analysis: Etch Rate versus Power
The regression equation is Etch Rate = 138 + 2.53 Power

Predictor    Coef  SE Coef      T      P
Constant   137.62    41.21   3.34  0.004
Power      2.5270   0.2154  11.73  0.000

S = 21.5413   R-Sq = 88.4%   R-Sq(adj) = 87.8%

Analysis of Variance
Source          DF     SS     MS       F      P
Regression       1  63857  63857  137.62  0.000
Residual Error  18   8352    464
Total           19  72210

Unusual Observations
Obs  Power  Etch Rate     Fit  SE Fit  Residual  St Resid
 11    200     600.00  643.02    5.28    -43.02    -2.06R
R denotes an observation with a large standardized residual.
Nonparametric Methods in the ANOVA: The Kruskal-Wallis test statistic is H = (1/S²) [ Σ i R i.²/n i - N(N + 1)²/4 ], where R i. is the sum of the ranks in the ith treatment and S² = (1/(N - 1)) [ Σ i Σ j R ij² - N(N + 1)²/4 ].
Nonparametric Methods in the ANOVA: Kruskal-Wallis test. If H > χ² α, a-1, the null hypothesis of equal treatment effects is rejected.
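SciPy's kruskal implements this test, including the tie correction; a sketch assuming the textbook etch-rate data.

```python
from scipy import stats

groups = [[575, 542, 530, 539, 570],   # 160 W
          [565, 593, 590, 579, 610],   # 180 W
          [600, 651, 610, 637, 629],   # 200 W
          [725, 700, 715, 685, 710]]   # 220 W

h, p = stats.kruskal(*groups)
print(f"H = {h:.2f}, p-value = {p:.4f}")
# Compare H with the chi-square critical value with a - 1 = 3 degrees of freedom,
# or simply reject the null hypothesis when the p-value is below the chosen alpha.
```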