1
CHAPTER 13 Design and Analysis of Single-Factor Experiments:
The Analysis of Variance
2
Learning Objectives
Design and conduct engineering experiments
Understand how the analysis of variance is used to analyze the data
Use multiple comparison procedures
Make decisions about sample size
Understand the difference between fixed and random factors
Estimate variance components
Understand the blocking principle
Design and conduct experiments involving the randomized complete block design
3
Engineering Experiments
Experiments are a natural part of the engineering decision-making process
They are designed to improve the performance of a process
A process can be described in terms of controllable variables, and the experiment determines which subset of these variables has the greatest influence on the response
Such analysis can lead to:
Improved process yield
Reduced variability in the process and closer conformance to nominal or target requirements
Reduced cost of operation
4
Steps In Experimental Design
Experiments are usually designed sequentially:
First, determine which variables are most important
Then refine that information to identify the critical variables for improving the process
Finally, determine which variable settings result in the best process performance
5
Single Factor Experiment
Assume a single factor of interest
The experiment consists of making up several specimens in two samples and analyzing them with the hypothesis-testing methods of earlier chapters
This is a single-factor experiment with two levels of the factor
The levels are called treatments
Each treatment has n observations, or replicates
6
Designing Engineering Experiments
This chapter considers experiments with more than two levels of the factor
It introduces the ANalysis Of VAriance (ANOVA)
It also discusses randomization of the experimental runs
The next chapter shows how to design and analyze experiments with several factors
7
Linear Statistical Model
The observations follow the linear model
Yij = µ + τi + εij,  i = 1, 2, …, a and j = 1, 2, …, n
Yij is the (ij)th observation
µ is the overall mean
τi is the ith treatment effect
εij is a random error component with mean zero and variance σ²
Each treatment defines a population with mean µi = µ + τi, the overall mean plus the treatment effect (see Fig. 13-1b, p. 471)
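A minimal simulation of this model can make the notation concrete; the sketch below uses made-up values for µ, the τi, and σ, with a = 3 treatments and n = 4 replicates.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

mu = 10.0                         # overall mean (illustrative value)
tau = np.array([-2.0, 0.5, 1.5])  # treatment effects tau_i, a = 3 (illustrative)
sigma = 1.0                       # error standard deviation (illustrative)
n = 4                             # replicates per treatment

# Y_ij = mu + tau_i + eps_ij, with eps_ij ~ N(0, sigma^2)
y = mu + tau[:, None] + rng.normal(0.0, sigma, size=(len(tau), n))
print(y)               # a x n array: row i holds the n observations of treatment i
print(y.mean(axis=1))  # sample treatment means, estimating mu + tau_i
```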
8
Completely Randomized Design
The table shows the data layout for the underlying model
The observations are taken in random order, so that the environment in which the treatments are applied is as uniform as possible
This is called a completely randomized design
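A completely randomized run order can be generated by shuffling the list of treatment labels; the sketch below assumes placeholder values a = 3 and n = 4.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

a, n = 3, 4                               # placeholder: 3 treatments, 4 replicates each
runs = np.repeat(np.arange(1, a + 1), n)  # treatment label for each of the a*n runs
rng.shuffle(runs)                         # random order in which the runs are performed
print(runs)                               # e.g., order of treatments applied to successive units
```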
9
Fixed-Effects and Random-Effects Models
The a treatments can be chosen in two different ways:
If the experimenter specifically chooses the a treatments, the model is called the fixed-effects model
If the a treatments are a random sample from a larger population of treatments, the model is called the random-effects model
10
Development of ANOVA
yi. = Σj yij and ȳi. = yi./n are the total and the average of the observations under the ith treatment
y.. = Σi Σj yij and ȳ.. = y../N are the grand total and the grand mean of all observations
N = an is the total number of observations
The “dot” subscript notation implies summation over the subscript it replaces
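The dot-subscript quantities are simple row and overall sums; a sketch with a placeholder a x n array y (rows are treatments, columns are replicates):

```python
import numpy as np

# placeholder data: a = 3 treatments, n = 4 replicates (rows are treatments)
y = np.array([[ 9.1,  8.7, 10.2,  9.5],
              [11.0, 10.4, 11.8, 10.9],
              [ 8.2,  7.9,  8.8,  8.4]])
a, n = y.shape
N = a * n                   # total number of observations, N = a*n

y_i_dot = y.sum(axis=1)     # y_i. : treatment totals
ybar_i  = y.mean(axis=1)    # ybar_i. : treatment averages
y_dotdot = y.sum()          # y.. : grand total
ybar_dotdot = y.mean()      # ybar.. : grand mean
print(y_i_dot, ybar_i, y_dotdot, ybar_dotdot)
```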
11
Hypothesis Testing
We are interested in testing the equality of the a treatment means µ1, µ2, …, µa
Equivalently:
H0: τ1 = τ2 = … = τa = 0
H1: τi ≠ 0 for at least one i
If the null hypothesis is true, changing the levels of the factor has no effect on the mean response
12
Components of Total Variability
The total variability in the data is described by the total sum of squares, SST
The ANOVA partitions this total variability into two parts: SST = SSTreatments + SSE
SSTreatments measures the differences between treatments
SSE measures the random error within treatments
13
Computational Formulas
Mean square for treatments: MSTreatments = SSTreatments/(a - 1)
Error mean square: MSE = SSE/[a(n - 1)]
Efficient computing formulas:
Total sum of squares: SST = Σi Σj yij² - y..²/N
Treatment sum of squares: SSTreatments = Σi yi.²/n - y..²/N
Error sum of squares: SSE = SST - SSTreatments
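These computing formulas translate directly into code; the sketch below applies the shortcut formulas to a placeholder a x n array y.

```python
import numpy as np

# placeholder data: rows are treatments, columns are replicates
y = np.array([[ 9.1,  8.7, 10.2,  9.5],
              [11.0, 10.4, 11.8, 10.9],
              [ 8.2,  7.9,  8.8,  8.4]])
a, n = y.shape
N = a * n

# shortcut (computational) formulas
ss_total      = (y**2).sum() - y.sum()**2 / N
ss_treatments = (y.sum(axis=1)**2).sum() / n - y.sum()**2 / N
ss_error      = ss_total - ss_treatments

ms_treatments = ss_treatments / (a - 1)   # MS_Treatments = SS_Treatments/(a-1)
ms_error      = ss_error / (a * (n - 1))  # MS_E = SS_E/[a(n-1)]
f0 = ms_treatments / ms_error             # test statistic for equal treatment means
print(ss_total, ss_treatments, ss_error, f0)
```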
14
ANOVA TABLE
Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square     F0
Treatments            SSTreatments     a - 1                MSTreatments    MSTreatments/MSE
Error                 SSE              a(n - 1)             MSE
Total                 SST              an - 1
15
Using Computer Software
Many statistical software packages can analyze data from designed experiments
The text presents the output from the Minitab one-way analysis of variance routine
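Minitab output is not reproduced here; as a stand-in, SciPy's one-way ANOVA routine computes the same F statistic and P-value. A sketch with placeholder samples:

```python
from scipy import stats

# placeholder samples, one list per treatment level
level_1 = [ 9.1,  8.7, 10.2,  9.5]
level_2 = [11.0, 10.4, 11.8, 10.9]
level_3 = [ 8.2,  7.9,  8.8,  8.4]

f0, p_value = stats.f_oneway(level_1, level_2, level_3)  # one-way ANOVA
print(f"F0 = {f0:.2f}, P = {p_value:.4f}")
```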
16
Example The tensile strength of a synthetic fiber is of interest to the manufacturer. It is suspected that strength is related to the percentage of cotton in the fiber. Five levels of cotton percentage are used, and five replicates are run in random order, resulting in the data below. Use α=0.05. a) Does cotton percentage affect breaking strength?
17
Solution: Use the general steps in hypothesis testing
1. The parameter of interest is the effect of cotton percentage on breaking strength
2. H0: τ1 = τ2 = τ3 = τ4 = τ5 = 0
3. H1: τi ≠ 0 for at least one i
4. α = 0.05
5. Test statistic: F0 = MSTR/MSE
6. Reject H0 if f0 > fα, a-1, a(n-1) = f0.05,4,20
7. Computations (next slides)
18
Initial calculations: compute the treatment totals and averages (last two columns)

Cotton %   Observations (5 per level)   Total yi.   Average ȳi.
15         7, 11, 9, …                  49          9.8
20         12, 17, 18, …                77          15.4
25         14, 19, …                    88          17.6
30         22, 23, …                    108         21.6
35         10, …                        54          10.8
                                        y.. = 376
19
Solution - Cont.
7. Computations: compute SST, SSTR, SSE, MSTR, and MSE
SST = Σi Σj yij² - y..²/N = (7)² + (7)² + … - (376)²/25 = 636.9
SSTR = Σi yi.²/n - y..²/N = [(49)² + (77)² + … + (54)²]/5 - (376)²/25 = 475.7
SSE = SST - SSTR = 636.9 - 475.7 = 161.2
MSTR = SSTR/(a - 1) = 475.7/4 = 118.9
MSE = SSE/[a(n - 1)] = 161.2/[5(5 - 1)] = 8.06
Hence, the test statistic is f0 = MSTR/MSE = 118.9/8.06 = 14.75
8. Since f0 = 14.75 > f0.05,4,20 = 2.87, reject H0
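The critical value f0.05,4,20 and the P-value for the observed f0 = 14.75 can also be obtained from SciPy's F distribution; a small sketch:

```python
from scipy import stats

f_crit = stats.f.ppf(0.95, dfn=4, dfd=20)   # upper 5% point, f_{0.05,4,20} ≈ 2.87
p_value = stats.f.sf(14.75, dfn=4, dfd=20)  # P(F > 14.75) for the observed statistic
print(f_crit, p_value)                      # reject H0 since 14.75 > f_crit
```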
20
Solution - ANOVA results
Source    DF    SS       MS       F        P
COTTON     4    475.7    118.9    14.75    0.000
Error     20    161.2      8.06
Total     24    636.9
Reject H0 and conclude that cotton percentage affects breaking strength
21
Multiple Comparisons Following the ANOVA
When H0: τ1 = τ2 = … = τa = 0 is rejected, we know that some of the treatment means are different
The ANOVA does not identify which means are different
Methods for comparing individual treatment means are called multiple comparison methods
One such procedure is Fisher's least significant difference (LSD) method
22
Fisher’s Least Significant Difference (LSD) Method
Compares all pairs of means with the null hypotheses H0: µi = µj for all i ≠ j
Test statistic: t0 = (ȳi. - ȳj.)/√(2MSE/n)
The pair of means µi and µj would be declared different if |ȳi. - ȳj.| > LSD
The least significant difference is LSD = tα/2, a(n-1) √(2MSE/n)
23
Example: Use Fisher's LSD method with α = 0.05 to analyze the means of the five levels of cotton percentage in the previous example
Recall that H0 was rejected and we concluded that cotton percentage affects the breaking strength
Apply Fisher's LSD method to determine which treatment means are different
24
Solution: a = 5 treatments, n = 5, MSE = 8.06, and t0.025,20 = 2.086
The treatment means are ȳ1. = 9.8, ȳ2. = 15.4, ȳ3. = 17.6, ȳ4. = 21.6, ȳ5. = 10.8
25
Solution - Cont.
LSD = t0.025,20 √(2MSE/n) = 2.086 √(2(8.06)/5) = 3.74
Comparisons (a pair is different if the absolute difference exceeds 3.74):
5 vs. 1: |10.8 - 9.8| = 1.0
5 vs. 2: |10.8 - 15.4| = 4.6 > 3.74
5 vs. 3: |10.8 - 17.6| = 6.8 > 3.74
5 vs. 4: |10.8 - 21.6| = 10.8 > 3.74
4 vs. 1: |21.6 - 9.8| = 11.8 > 3.74
4 vs. 2: |21.6 - 15.4| = 6.2 > 3.74
4 vs. 3: |21.6 - 17.6| = 4.0 > 3.74
3 vs. 1: |17.6 - 9.8| = 7.8 > 3.74
3 vs. 2: |17.6 - 15.4| = 2.2
2 vs. 1: |15.4 - 9.8| = 5.6 > 3.74
From this analysis, we see that there are significant differences between all pairs of means except 5 vs. 1 and 3 vs. 2
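The same LSD comparisons can be scripted; the sketch below takes the five treatment means, MSE = 8.06, n = 5, and the t quantile from the slides above.

```python
from itertools import combinations
from math import sqrt
from scipy import stats

means = {1: 9.8, 2: 15.4, 3: 17.6, 4: 21.6, 5: 10.8}  # treatment averages from the example
mse, n, a = 8.06, 5, 5

t_crit = stats.t.ppf(0.975, a * (n - 1))  # t_{0.025,20} ≈ 2.086
lsd = t_crit * sqrt(2 * mse / n)          # LSD ≈ 3.74

for i, j in combinations(sorted(means), 2):
    diff = abs(means[i] - means[j])
    flag = "different" if diff > lsd else "not significantly different"
    print(f"{i} vs. {j}: |{means[i]} - {means[j]}| = {diff:.1f} -> {flag}")
```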
26
C.I. on Treatment Means
A 100(1 - α)% confidence interval on the mean of the ith treatment, µi:
ȳi. - tα/2, a(n-1) √(MSE/n) ≤ µi ≤ ȳi. + tα/2, a(n-1) √(MSE/n)
A 100(1 - α)% confidence interval on the difference in two treatment means, µi - µj:
ȳi. - ȳj. - tα/2, a(n-1) √(2MSE/n) ≤ µi - µj ≤ ȳi. - ȳj. + tα/2, a(n-1) √(2MSE/n)
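A sketch of both intervals, using MSE = 8.06, n = 5, and 20 error degrees of freedom from the earlier example; the choice of treatments 4 and 1 is only for illustration.

```python
from math import sqrt
from scipy import stats

mse, n, a = 8.06, 5, 5                    # from the tensile-strength example
t_crit = stats.t.ppf(0.975, a * (n - 1))  # t_{0.025,20}

# 95% CI on a single treatment mean, e.g. mu_4 with ybar_4. = 21.6
ybar_4 = 21.6
half = t_crit * sqrt(mse / n)
print(ybar_4 - half, ybar_4 + half)

# 95% CI on a difference of two treatment means, e.g. mu_4 - mu_1
ybar_1 = 9.8
half_diff = t_crit * sqrt(2 * mse / n)
print(ybar_4 - ybar_1 - half_diff, ybar_4 - ybar_1 + half_diff)
```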
27
Determining Sample Size
The choice of sample size is an important part of designing the experiment
Operating characteristic (OC) curves provide guidance in making this selection
The power of the ANOVA test is
1 - β = P(reject H0 | H0 is false) = P(F0 > fα, a-1, a(n-1) | H0 is false)
The OC curves plot β against the parameter Φ, where Φ² = nΣτi²/(aσ²)
28
Sample OC Curves
29
Example: Suppose that four normal populations have common variance σ² = 25 and means µ1 = 50, µ2 = 60, µ3 = 50, and µ4 = 60. How many observations should be taken on each population so that the probability of rejecting the hypothesis of equality of means is at least 0.90? Use α = 0.05.
30
Solution: The overall mean is µ = (50 + 60 + 50 + 60)/4 = 55, so the treatment effects are τ1 = -5, τ2 = 5, τ3 = -5, τ4 = 5 and Στi² = 100
Then Φ² = nΣτi²/(aσ²) = n(100)/(4 × 25) = n
Trying various choices of n and reading β from the OC curves with a - 1 = 3 and a(n - 1) degrees of freedom shows that n = 5 is the smallest sample size giving power of at least 0.90
Therefore, n = 5 is needed
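As an alternative to reading OC curves, the power for each candidate n can be computed directly from the noncentral F distribution (noncentrality parameter λ = nΣτi²/σ², which equals aΦ²); a sketch using the values of this example:

```python
from scipy import stats

a, sigma2, alpha = 4, 25.0, 0.05
tau = [-5.0, 5.0, -5.0, 5.0]               # treatment effects from the example
sum_tau2 = sum(t**2 for t in tau)          # sum of tau_i^2 = 100

for n in range(2, 8):                      # candidate sample sizes per treatment
    df1, df2 = a - 1, a * (n - 1)
    lam = n * sum_tau2 / sigma2            # noncentrality parameter (= a * Phi^2)
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    power = stats.ncf.sf(f_crit, df1, df2, lam)  # P(F0 > f_crit | these means)
    print(n, round(power, 3))              # the example concludes n = 5 meets power >= 0.90
```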
31
The Random-effects Model
The factor of interest has a large number of possible levels
The experimenter randomly selects a of these levels from the population of factor levels
This is called the random-effects model
Conclusions are valid for the entire population of factor levels
32
Linear Statistical Model
The observations follow the linear model
Yij = µ + τi + εij,  i = 1, 2, …, a and j = 1, 2, …, n
Yij is the (ij)th observation
τi and εij are independent random variables
The model is identical in structure to the fixed-effects case, but the parameters have a different interpretation
εij has mean zero and variance σ²
τi has mean zero and variance στ²
33
Testing the Hypothesis
Testing the hypothesis that the individual treatment effects are zero is meaningless
It is appropriate instead to test a hypothesis about the variance of the treatment effects:
H0: στ² = 0 vs. H1: στ² > 0
If στ² = 0, all treatments are identical; if στ² > 0, there is variability between them
Total variability: SST = SSTreatments + SSE
Expected values of the mean squares: E(MSTreatments) = σ² + nστ² and E(MSE) = σ²
The computational procedure and construction of the ANOVA table are identical to the fixed-effects case
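Because E(MSTreatments) = σ² + nστ² and E(MSE) = σ², the usual ANOVA (method-of-moments) estimates of the variance components are σ̂² = MSE and σ̂τ² = (MSTreatments - MSE)/n; a sketch with placeholder mean squares:

```python
# placeholder mean squares and replicate count, for illustration only
ms_treatments = 118.94
ms_error = 8.06
n = 5

sigma2_hat = ms_error                            # estimate of sigma^2 (error variance)
sigma2_tau_hat = (ms_treatments - ms_error) / n  # estimate of sigma_tau^2
print(sigma2_hat, sigma2_tau_hat)                # a negative estimate is usually set to zero
```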
34
Randomized Complete Block Design
We often want to design an experiment so that the variability arising from a nuisance factor can be controlled
Recall the paired t-test: when all experimental runs cannot be made under homogeneous conditions, pairing reduces the noise in the experiment by blocking out the nuisance factor effect
The randomized complete block design can be viewed as an extension of the paired t-test to the case where the factor of interest has more than two levels, that is, when more than two treatments must be compared
35
Randomized Complete Block Design
The general procedure for a randomized complete block design consists of selecting b blocks and running a complete replicate of the experiment in each block
The table shows the data that result from running a randomized complete block design for investigating a single factor with a levels and b blocks
36
Linear Statistical Model
The observations follow the linear model
Yij = µ + τi + βj + εij,  i = 1, 2, …, a and j = 1, 2, …, b
Yij is the (ij)th observation
µ is the overall mean
τi is the ith treatment effect
βj is the effect of the jth block
εij is a random error component with mean zero and variance σ²
37
Hypothesis Testing
We are interested in testing the equality of the a treatment means µ1, µ2, …, µa
Equivalently:
H0: τ1 = τ2 = … = τa = 0
H1: τi ≠ 0 for at least one i
If the null hypothesis is true, changing the levels of the factor has no effect on the mean response
38
Displaying Data
39
Components of Total Variability
The total variability in the data is measured by the total sum of squares, SST
The ANOVA partitions this total variability into three parts
Symbolically, SST = SSTreatments + SSBlocks + SSE
40
Computational Formulas
Computing formulas for the sums of squares:
SST = Σi Σj yij² - y..²/N
SSTreatments = Σi yi.²/b - y..²/N
SSBlocks = Σj y.j²/a - y..²/N
Error sum of squares: SSE = SST - SSTreatments - SSBlocks
A computer software package will usually be used to perform the analysis of variance
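A sketch of these randomized complete block computations on a placeholder a x b array (rows are treatments, columns are blocks):

```python
import numpy as np
from scipy import stats

# placeholder data: a = 3 treatments (rows) observed in b = 4 blocks (columns)
y = np.array([[ 9.3,  9.4,  9.6, 10.0],
              [ 9.4,  9.3,  9.8,  9.9],
              [ 9.2,  9.4,  9.5,  9.7]])
a, b = y.shape
N = a * b

corr = y.sum()**2 / N                                # correction term y..^2 / N
ss_total      = (y**2).sum() - corr
ss_treatments = (y.sum(axis=1)**2).sum() / b - corr  # from treatment totals over b blocks
ss_blocks     = (y.sum(axis=0)**2).sum() / a - corr  # from block totals over a treatments
ss_error      = ss_total - ss_treatments - ss_blocks

ms_treatments = ss_treatments / (a - 1)
ms_error      = ss_error / ((a - 1) * (b - 1))
f0 = ms_treatments / ms_error                        # test for equal treatment means
p  = stats.f.sf(f0, a - 1, (a - 1) * (b - 1))
print(f0, p)
```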
41
Analysis of Variance
Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square     F0
Treatments            SSTreatments     a - 1                MSTreatments    MSTreatments/MSE
Blocks                SSBlocks         b - 1                MSBlocks
Error                 SSE              (a - 1)(b - 1)       MSE
Total                 SST              ab - 1
42
Next Agenda
This ends our discussion of the analysis of variance when there are more than two levels of a single factor
In the next chapter, we will show how to design and analyze experiments with several factors