Chapter 10: Design of Experiments and Analysis of Variance
Elements of a Designed Experiment
- Response variable: also called the dependent variable
- Factors (quantitative and qualitative): also called the independent variables
- Factor levels
- Treatments
- Experimental unit
Designed vs. observational experiments: in a designed experiment, the analyst determines the treatments and the method of assigning experimental units to treatments. In an observational experiment, the analyst observes treatments and responses but does not determine the treatments. Many experiments are a mix of designed and observational.
[Figure: a single-factor experiment. A sample is drawn from the population of interest; the independent variable (factor) defines the treatments, and the dependent (response) variable is measured on each sampled unit.]
[Figure: a two-factor experiment.]
The Completely Randomized Design
The completely randomized design is achieved when the samples of experimental units for each treatment are random and independent of each other. The design is used to compare the k treatment means:

H_0: \mu_1 = \mu_2 = \cdots = \mu_k
H_a: at least two of the treatment means differ
The hypotheses are tested by comparing the differences between the treatment means to the amount of sampling variability present. The test statistic is calculated from two quantities: a measure of the variability between treatment groups and a measure of the variability within treatment groups.
Sum of Squares for Treatments (SST): a measure of the total variation between the treatment means, with k treatments. Calculated by

SST = \sum_{i=1}^{k} n_i (\bar{x}_i - \bar{x})^2

where n_i is the number of units receiving treatment i, \bar{x}_i is the sample mean for treatment i, and \bar{x} is the overall sample mean.
Sum of Squares for Error (SSE): a measure of the variability around the treatment means that is attributable to sampling error. Calculated by

SSE = \sum_{i=1}^{k} \sum_{j=1}^{n_i} (x_{ij} - \bar{x}_i)^2

After substituting the sample variances s_i^2, SSE can be rewritten as

SSE = (n_1 - 1)s_1^2 + (n_2 - 1)s_2^2 + \cdots + (n_k - 1)s_k^2
Mean Square for Treatments (MST): a measure of the variability among the treatment means,

MST = SST / (k - 1)

Mean Square for Error (MSE): a measure of the sampling variability within the treatments,

MSE = SSE / (n - k)
F-statistic: the ratio of MST to MSE,

F = MST / MSE

Values of F close to 1 suggest that the population means do not differ. Values far above 1 suggest that the variation among treatment means exceeds the variation within treatments, which supports H_a.
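To make these formulas concrete, here is a minimal Python sketch (with hypothetical data for k = 3 treatments) that computes SST, SSE, MST, MSE, and F by hand and cross-checks the result against scipy.stats.f_oneway:

```python
import numpy as np
from scipy import stats

# Hypothetical samples for k = 3 treatments in a completely randomized design
groups = [np.array([10.2, 9.8, 11.1, 10.5]),
          np.array([12.0, 11.4, 12.7, 11.9]),
          np.array([9.1, 9.9, 8.7, 9.5])]

k = len(groups)
n = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()

# SST: variation between treatment means
sst = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# SSE: variation within treatments, equal to the sum of (n_i - 1) * s_i^2
sse = sum(((g - g.mean()) ** 2).sum() for g in groups)

mst = sst / (k - 1)  # Mean Square for Treatments
mse = sse / (n - k)  # Mean Square for Error
f_stat = mst / mse
p_value = stats.f.sf(f_stat, k - 1, n - k)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")

# Cross-check against SciPy's one-way ANOVA
f_check, p_check = stats.f_oneway(*groups)
assert np.isclose(f_stat, f_check) and np.isclose(p_value, p_check)
```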
Conditions required for a valid ANOVA F-test: completely randomized design
- The samples are independently and randomly selected.
- All sampled populations have distributions that are approximately normal.
- The k population variances are equal.
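These conditions can be screened with standard tests; the sketch below (SciPy, reusing the hypothetical groups from the previous example) applies the Shapiro-Wilk test of normality to each group and Levene's test of equal variances. Both tests have limited power with small samples, so they supplement rather than replace graphical checks.

```python
from scipy import stats

groups = [[10.2, 9.8, 11.1, 10.5],
          [12.0, 11.4, 12.7, 11.9],
          [9.1, 9.9, 8.7, 9.5]]

# Shapiro-Wilk test of normality within each treatment group
for i, g in enumerate(groups, start=1):
    w, p = stats.shapiro(g)
    print(f"treatment {i}: Shapiro-Wilk p = {p:.3f}")

# Levene's test of equal variances across the k groups
stat, p = stats.levene(*groups)
print(f"Levene p = {p:.3f}")  # a large p gives no evidence against equal variances
```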
A format for an ANOVA summary table:

Source       df     SS         MS                 F
Treatments   k - 1  SST        MST = SST/(k - 1)  MST/MSE
Error        n - k  SSE        MSE = SSE/(n - k)
Total        n - 1  SS(Total)
[Figure: an example ANOVA summary table produced by Minitab.]
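A comparable summary table can be produced in Python with statsmodels (a sketch under hypothetical long-format data; the slides themselves show Minitab and SPSS output):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one row per experimental unit
df = pd.DataFrame({
    "y": [10.2, 9.8, 11.1, 10.5, 12.0, 11.4, 12.7, 11.9, 9.1, 9.9, 8.7, 9.5],
    "treatment": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
})

model = ols("y ~ C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model))  # columns: df, sum_sq, mean_sq, F, PR(>F)
```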
Conducting an ANOVA for a completely randomized design:
1. Ensure the randomness of the design and the independence and randomness of the samples.
2. Check the normality and equal-variance assumptions.
3. Create the ANOVA summary table.
4. Conduct multiple comparisons for pairs of means as necessary or desired.
5. If H_0 is not rejected, consider possible explanations, keeping in mind the possibility of a Type II error.
Multiple Comparisons of Means
A significant F-test in an ANOVA tells you that the treatment means, as a group, are statistically different; it does not tell you which pairs of means differ statistically from each other. With k treatment means, there are c different pairs of means that can be compared, with c calculated as

c = k(k - 1) / 2
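For example, k = 5 treatment means give c = 5(4)/2 = 10 pairs. A quick enumeration in Python (hypothetical k):

```python
from itertools import combinations
from math import comb

k = 5  # hypothetical number of treatments
means = [f"mu{i}" for i in range(1, k + 1)]

pairs = list(combinations(means, 2))  # every pair of treatment means
print(len(pairs), comb(k, 2))         # both equal k(k - 1)/2 = 10
```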
Three techniques are widely used for making multiple comparisons of a set of treatment means: Tukey, Bonferroni, and Scheffé. In each technique, confidence intervals are constructed around the differences between means to facilitate comparison of pairs of means. The selection of a technique is based on the experimental design and the comparisons of interest. Most statistical packages offer the analyst a choice among the three procedures when calculating confidence intervals for differences between treatment means.
Choosing among the procedures: Tukey applies to pairwise comparisons of means and requires equal treatment sample sizes; Bonferroni applies to pairwise comparisons with equal or unequal sample sizes; Scheffé applies to general contrasts of means.
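For pairwise comparisons, Tukey's procedure is available in statsmodels; a minimal sketch reusing the hypothetical three-treatment data from the earlier examples:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical responses and their treatment labels (long format)
y = np.array([10.2, 9.8, 11.1, 10.5, 12.0, 11.4, 12.7, 11.9, 9.1, 9.9, 8.7, 9.5])
labels = np.repeat(["A", "B", "C"], 4)

# Simultaneous 95% confidence intervals for all pairwise mean differences
result = pairwise_tukeyhsd(endog=y, groups=labels, alpha=0.05)
print(result.summary())
```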
The Randomized Block Design
Two-step procedure for the randomized block design:
1. Form b blocks (matched sets of experimental units) of k units each, where k is the number of treatments.
2. Randomly assign one unit from each block to each treatment, as sketched below. (Total responses: n = bk.)
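The randomization in step 2 amounts to shuffling the treatment labels independently within each block; a minimal sketch (hypothetical block, unit, and treatment names):

```python
import random

treatments = ["T1", "T2", "T3"]          # k = 3 treatments (hypothetical)
blocks = {"block1": ["u1", "u2", "u3"],  # b = 3 blocks of k matched units each
          "block2": ["u4", "u5", "u6"],
          "block3": ["u7", "u8", "u9"]}

assignment = {}
for block, units in blocks.items():
    order = treatments[:]
    random.shuffle(order)                # independent shuffle per block
    for unit, treatment in zip(units, order):
        assignment[unit] = treatment

print(assignment)  # each treatment appears exactly once in every block
```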
Partitioning the total sum of squares:

SS(Total) = SST + SSB + SSE

where SST is the sum of squares for treatments, SSB is the sum of squares for blocks, and SSE is the sum of squares for error.
Calculating the mean squares:

MST = SST / (k - 1),  MSB = SSB / (b - 1),  MSE = SSE / (n - b - k + 1)

Setting the hypotheses:

H_0: the k treatment means are equal
H_a: at least two of the treatment means differ

Hypothesis testing: the test statistic is F = MST / MSE, with rejection region F > F_\alpha, where F_\alpha is based on (k - 1) and (n - b - k + 1) degrees of freedom.
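A randomized block ANOVA can be fit in Python by modeling the block as a second, additive factor; a sketch with hypothetical data (the residual degrees of freedom come out to n - b - k + 1):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data: b = 3 blocks, k = 3 treatments, one response per cell
df = pd.DataFrame({
    "y":         [10.1, 11.8, 9.3, 10.6, 12.2, 9.8, 9.9, 11.5, 9.0],
    "treatment": ["A", "B", "C"] * 3,
    "block":     ["b1"] * 3 + ["b2"] * 3 + ["b3"] * 3,
})

# Additive model: treatment + block, no interaction term
model = ols("y ~ C(treatment) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model))  # here the residual df is 9 - 3 - 3 + 1 = 4
```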
Conditions required for a valid ANOVA F-test: randomized block design
- The b blocks are randomly selected, and all k treatments are randomly applied to each block.
- The distributions of all bk block-treatment combinations are approximately normal.
- The bk distributions have equal variances.
Conducting an ANOVA for a randomized block design:
1. Ensure the design consists of blocks and that treatments are randomly assigned to the units within each block.
2. Check the normality and equal-variance assumptions.
3. Create the ANOVA summary table.
4. Conduct multiple comparisons for pairs of means as necessary or desired.
5. If H_0 is not rejected, consider possible explanations, keeping in mind the possibility of a Type II error.
6. If desired, conduct a test of H_0 that the block means are equal.
Factorial Experiments
Complete factorial experiment: every factor-level combination is utilized as a treatment.
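Enumerating the treatments of a complete factorial experiment is just the Cartesian product of the factor levels; a sketch with hypothetical factors:

```python
from itertools import product

# Hypothetical factors: A at 2 levels, B at 3 levels, so 2 x 3 = 6 treatments
factor_a = ["low", "high"]
factor_b = ["method1", "method2", "method3"]

treatments = list(product(factor_a, factor_b))
for t in treatments:
    print(t)
print(f"{len(treatments)} treatments")  # every factor-level combination
```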
Partitioning the total sum of squares (usually done with a statistical package):

SS(Total) = SS(A) + SS(B) + SS(AB) + SSE

where SS(A) and SS(B) are the main-effect sums of squares for factors A and B, and SS(AB) is the sum of squares for their interaction.
Conducting an ANOVA for a factorial design:
1. Partition the total sum of squares into treatment and error components.
2. Test H_0 that the treatment means are equal. If H_0 is rejected, proceed to step 3.
3. Partition the treatment sum of squares into main-effect and interaction sums of squares.
4. Test H_0 that factors A and B do not interact. If H_0 is rejected, go to step 6; if H_0 is not rejected, go to step 5.
5. Test for the main effects of factor A and factor B.
6. Compare the treatment means.
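Steps 3 through 5 correspond to the rows of a two-way ANOVA table with an interaction term; a minimal statsmodels sketch with hypothetical data:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical 2 x 2 factorial with two replicates per factor-level combination
df = pd.DataFrame({
    "y": [10.1, 10.5, 12.0, 12.4, 9.2, 9.6, 14.1, 14.5],
    "A": ["low"] * 4 + ["high"] * 4,
    "B": ["b1", "b1", "b2", "b2"] * 2,
})

# C(A) * C(B) expands to both main effects plus the A x B interaction
model = ols("y ~ C(A) * C(B)", data=df).fit()
print(sm.stats.anova_lm(model))
# Inspect the C(A):C(B) row first: a significant interaction means the
# factor effects cannot be interpreted separately (step 4 above).
```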
[Figure: SPSS ANOVA output for a factorial experiment.]