1-Way Analysis of Variance - Completely Randomized Design

Chapter 8: 1-Way Analysis of Variance - Completely Randomized Design

Comparing t > 2 Groups - Numeric Responses
Extension of methods used to compare 2 groups
Independent samples and paired data designs
Normal and non-normal data distributions

Completely Randomized Design (CRD)
Controlled Experiments - Subjects assigned at random to one of the t treatments to be compared
Observational Studies - Subjects are sampled from t existing groups
Statistical model: yij is the measurement on the jth subject from group i:
yij = μ + αi + εij = μi + εij
where μ is the overall mean, αi is the effect of treatment i, εij is a random error, and μi = μ + αi is the population mean for group i
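As an illustration (not part of the original slides), the CRD model can be simulated directly; the overall mean, treatment effects, error standard deviation, and group size below are arbitrary assumptions:

import numpy as np

rng = np.random.default_rng(0)

mu = 100.0                      # overall mean (assumed value)
alpha = [-5.0, 0.0, 5.0]        # treatment effects alpha_i, summing to zero (assumed)
sigma = 8.0                     # common error standard deviation (assumed)
n_per_group = 10                # subjects randomized to each treatment (assumed)

# yij = mu + alpha_i + eps_ij, with eps_ij ~ N(0, sigma^2)
samples = [mu + a + rng.normal(0.0, sigma, n_per_group) for a in alpha]
for i, y in enumerate(samples, start=1):
    print(f"group {i}: mean = {y.mean():.2f}, sd = {y.std(ddof=1):.2f}, n = {y.size}")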

1-Way ANOVA for Normal Data (CRD)
For each group, obtain the mean, standard deviation, and sample size
Obtain the overall mean and sample size

Analysis of Variance - Sums of Squares
Total Variation: TSS = ΣiΣj (yij - ȳ..)²  (df = N - 1)
Between Group (Sample) Variation: SST = Σi ni(ȳi. - ȳ..)²  (df = t - 1)
Within Group (Sample) Variation: SSE = ΣiΣj (yij - ȳi.)²  (df = N - t)
TSS = SST + SSE
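A minimal sketch of the decomposition, using small illustrative data sets (not from the slides); it checks that the total sum of squares equals the between-group plus within-group pieces:

import numpy as np

# Illustrative data only (not from the slides)
groups = [np.array([23.0, 25.0, 28.0, 22.0]),
          np.array([30.0, 33.0, 29.0, 31.0]),
          np.array([26.0, 24.0, 27.0, 25.0])]

all_y = np.concatenate(groups)
grand_mean = all_y.mean()

ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)   # SST
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)              # SSE
ss_total = ((all_y - grand_mean) ** 2).sum()                              # TSS

print(ss_between, ss_within, ss_total)   # ss_total = ss_between + ss_within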

Analysis of Variance Table and F-Test
Assumption: All distributions normal with common variance
H0: No differences among group means (α1 = ... = αt = 0)
HA: Group means are not all equal (not all αi are 0)
Test statistic: Fobs = MST/MSE, compared to the F distribution with t - 1 and N - t degrees of freedom
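A sketch of the F-test using SciPy's scipy.stats.f_oneway, reusing the illustrative data above; the critical value comes from the F distribution with t - 1 and N - t degrees of freedom:

from scipy import stats

g1 = [23.0, 25.0, 28.0, 22.0]   # illustrative data
g2 = [30.0, 33.0, 29.0, 31.0]
g3 = [26.0, 24.0, 27.0, 25.0]

f_obs, p_value = stats.f_oneway(g1, g2, g3)

t, N = 3, 12                                  # number of groups, total sample size
f_crit = stats.f.ppf(0.95, t - 1, N - t)      # reject H0 at alpha = 0.05 if f_obs > f_crit
print(f_obs, p_value, f_crit)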

Expected Mean Squares
Model: yij = μ + αi + εij with εij ~ N(0, σ²), Σαi = 0
E(MSE) = σ²
E(MST) = σ² + (Σi ni αi²)/(t - 1), which equals σ² when H0 is true

Expected Mean Squares
3 factors affect the magnitude of the F-statistic (for fixed t):
True group effects (α1,...,αt)
Group sample sizes (n1,...,nt)
Within group variance (σ²)
Fobs = MST/MSE
When H0 is true (α1=...=αt=0), E(MST)/E(MSE) = 1
Marginal effects of each factor (all other factors fixed):
As the spread in (α1,...,αt) increases, E(MST)/E(MSE) increases
As (n1,...,nt) increase, E(MST)/E(MSE) increases (when H0 is false)
As σ² increases, E(MST)/E(MSE) decreases (when H0 is false)

A) μ=100, τ1=-20, τ2=0, τ3=20, σ = 20
B) μ=100, τ1=-20, τ2=0, τ3=20, σ = 5
C) μ=100, τ1=-5, τ2=0, τ3=5, σ = 20
D) μ=100, τ1=-5, τ2=0, τ3=5, σ = 5
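A simulation sketch (not part of the slides) contrasting the four scenarios; the per-group sample size is an arbitrary assumption. Scenarios with larger effect spread and smaller σ should tend to produce larger F-statistics:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
scenarios = {"A": (100, (-20, 0, 20), 20),
             "B": (100, (-20, 0, 20), 5),
             "C": (100, (-5, 0, 5), 20),
             "D": (100, (-5, 0, 5), 5)}
n = 10                                   # subjects per group (assumed)

for label, (mu, taus, sigma) in scenarios.items():
    samples = [rng.normal(mu + tau, sigma, n) for tau in taus]
    f_obs, p = stats.f_oneway(*samples)
    print(f"{label}: F = {f_obs:7.2f}, p = {p:.4f}")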

CRD with Non-Normal Data: Kruskal-Wallis Test
Extension of the Wilcoxon Rank-Sum Test to k > 2 groups
Procedure:
Rank the observations across groups from smallest (1) to largest (N = n1+...+nk), adjusting for ties
Compute the rank sums for each group: T1,...,Tk. Note that T1+...+Tk = N(N+1)/2
Compute the test statistic H = [12/(N(N+1))] Σi Ti²/ni - 3(N+1), compared to the chi-square distribution with k - 1 degrees of freedom

Kruskal-Wallis Test
H0: The k population distributions are identical (μ1=...=μk)
HA: Not all k distributions are identical (not all μi are equal)
An adjustment to H is suggested when there are many ties in the data. The formula is given on page 344 of O&L.
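A sketch using scipy.stats.kruskal, which ranks across groups and computes H with the tie correction mentioned above; the data are illustrative:

from scipy import stats

g1 = [23.0, 25.0, 28.0, 22.0]   # illustrative data
g2 = [30.0, 33.0, 29.0, 31.0]
g3 = [26.0, 24.0, 27.0, 25.0]

h_stat, p_value = stats.kruskal(g1, g2, g3)   # H, with the tie correction applied
print(h_stat, p_value)                        # compare H to chi-square with k-1 df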

Post-hoc Comparisons of Treatments
If the F-test detects differences among the group means, researchers typically want to compare pairs of groups. Three popular methods include:
Fisher’s LSD - Upon rejecting the null hypothesis of no differences in group means, the LSD method is equivalent to doing pairwise comparisons among all pairs of groups as in Chapter 6.
Tukey’s Method - Specifically compares all t(t-1)/2 pairs of groups. Utilizes a special table (Table 11, p. 701).
Bonferroni’s Method - Adjusts individual comparison error rates so that all conclusions will be correct at the desired confidence/significance level. Any number of comparisons can be made. This very general approach can be applied to any inferential problem.

Fisher’s Least Significant Difference Procedure
Protected version: apply the method only after a significant result in the overall F-test
For each pair of groups, compute the least significant difference (LSD) that the sample means must differ by to conclude the population means are not equal:
LSDij = tα/2,ν √(MSE(1/ni + 1/nj)), where ν = N - t
Conclude μi ≠ μj if |ȳi. - ȳj.| ≥ LSDij
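A minimal sketch of the LSD computation for one pair of groups; the MSE, error degrees of freedom, and sample sizes below are illustrative assumptions:

from math import sqrt
from scipy import stats

mse, df_error = 6.5, 9          # MSE and error df from the ANOVA table (assumed values)
n_i, n_j = 4, 4                 # sample sizes of the two groups being compared (assumed)
alpha = 0.05

t_crit = stats.t.ppf(1 - alpha / 2, df_error)
lsd = t_crit * sqrt(mse * (1 / n_i + 1 / n_j))
print(lsd)     # conclude the two population means differ if |ybar_i - ybar_j| >= lsd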

Tukey’s W Procedure
More conservative than Fisher’s LSD (the minimum significant difference and confidence interval width are larger).
Derived so that the probability that at least one false difference is detected is α (the experimentwise error rate)
For equal sample sizes n per group: W = qα(t, ν) √(MSE/n), where qα(t, ν) is the upper-α point of the studentized range distribution and ν = N - t
Conclude μi ≠ μj if |ȳi. - ȳj.| ≥ W
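A sketch using statsmodels' pairwise_tukeyhsd, one common implementation of Tukey's procedure; the data and group labels are illustrative:

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

y = np.array([23, 25, 28, 22, 30, 33, 29, 31, 26, 24, 27, 25], dtype=float)
group = np.repeat(["A", "B", "C"], 4)          # 3 groups of 4 observations each

result = pairwise_tukeyhsd(y, group, alpha=0.05)
print(result.summary())                        # all t(t-1)/2 pairwise comparisons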

Bonferroni’s Method (Most General)
Wish to make C comparisons of pairs of groups with simultaneous confidence intervals or 2-sided tests
When all pairs of treatments are to be compared, C = t(t-1)/2
Want the overall confidence level for all intervals to be 95%, or the overall Type I error rate for all tests to be 0.05
For confidence intervals, construct (1-(0.05/C))100% CIs for the difference in each pair of group means (wider than 95% CIs)
Conduct each test at the α = 0.05/C significance level (rejection region cut-offs more extreme than when α = 0.05)
Critical t-values are given in the table on the class website; we will use the notation tα/2,C,ν where C = # comparisons and ν = error df
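A sketch of the Bonferroni adjustment using the ordinary t distribution in place of the class website table; the MSE, error df, and group sizes are illustrative assumptions:

from math import sqrt
from scipy import stats

t_groups = 3
C = t_groups * (t_groups - 1) // 2     # number of pairwise comparisons
alpha = 0.05

mse, df_error = 6.5, 9                 # MSE and error df from the ANOVA table (assumed)
n_i = n_j = 4                          # group sample sizes (assumed)

t_crit = stats.t.ppf(1 - alpha / (2 * C), df_error)   # plays the role of t_{alpha/2,C,df}
half_width = t_crit * sqrt(mse * (1 / n_i + 1 / n_j))
print(C, t_crit, half_width)           # each CI: (ybar_i - ybar_j) +/- half_width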

Bonferroni’s Method (Most General)
Simultaneous CIs: (ȳi. - ȳj.) ± tα/2,C,ν √(MSE(1/ni + 1/nj))
Conclude μi ≠ μj if the interval does not contain 0