Conducting ANOVAs. I. Why? A. More than two groups to compare.

Presentation transcript:

Conducting ANOVAs

I. Why? A. More than two groups to compare.

What's the problem? Three treatments: D. putrida at low density, D. putrida at high density, and D. putrida with D. tripunctata.

What was our solution? Three pairwise U tests (U, U, U).

What's the problem? We tested each contrast at p = 0.05. Probability of being correct in rejecting each H0: 0.95. Probability of being correct in rejecting all H0: 0.95 × 0.95 × 0.95 = 0.86. So, the Type I error rate has increased from 0.05 to 0.14.

Hmmmm… What can we do to maintain a 0.05 level across all contrasts? Right. Adjust the comparison-wise error rate.

Simplest: the Bonferroni correction. Comparison-wise p = experiment-wise p / n, where n = number of contrasts. Experiment-wise p = 0.05; comparison-wise p = 0.05/3 = 0.0167. So, confidence per contrast = 0.983.

Probability of being correct in rejecting all H0: 0.983 × 0.983 × 0.983 = 0.95. So, the Type I error rate is now 0.05.
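The arithmetic above can be checked with a few lines of Python (a minimal sketch, not part of the original slides):

```python
# Family-wise error rate for k independent contrasts, and the Bonferroni fix.
alpha = 0.05
k = 3  # number of pairwise contrasts

# Each test run at alpha = 0.05: chance that all three reject correctly
p_all_correct = (1 - alpha) ** k            # 0.95^3 = 0.857
familywise_error = 1 - p_all_correct        # 0.14, not 0.05

# Bonferroni: run each comparison at alpha / k
alpha_bonf = alpha / k                      # 0.0167
p_all_correct_bonf = (1 - alpha_bonf) ** k  # back to roughly 0.95
```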

Conducting ANOVAs I. Why? A. More than two groups to compare. What's the problem? - Multiple comparisons inflate the experiment-wide Type I error rate. - Bonferroni adjustments treat the contrasts as independent, but they are all part of the same experiment…

1 vs. 2 – not significant. 1 vs. 3 – significant. So, interspecific competition is more important than intraspecific competition. But our test of 1 vs. 2 should consider the variation in all treatments that were part of this experiment, especially if we are interpreting the differences between mean comparisons as meaningful.


Conducting ANOVAs I. Why? A. More than two groups to compare. B. Complex designs with multiple factors: - blocks - nested terms - interaction effects - correlated variables (covariates) - multiple responses

Conducting ANOVAs I. Why? II. How? A. Variance redux: of a population, of a sample

s² = sum of squares / (n − 1)

“Sum of squares” = SS = Σx² − (Σx)²/n, so s² = SS / (n − 1)

MS = SS / (n − 1)
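The computational formulas above translate directly into code. This is a small sketch; the function names are mine, not from the slides:

```python
def sum_of_squares(xs):
    """SS = sum(x^2) - (sum(x))^2 / n, the computational form of sum((x - mean)^2)."""
    n = len(xs)
    return sum(x * x for x in xs) - sum(xs) ** 2 / n

def mean_square(xs):
    """s^2 = MS = SS / (n - 1)."""
    return sum_of_squares(xs) / (len(xs) - 1)

# Example: for [4, 6, 8], SS = 116 - 18^2/3 = 8, so s^2 = 8/2 = 4
```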

Conducting ANOVAs I. Why? II. How? A. Variance redux. B. The ANOVA table: Source of Variation, df, SS, MS, F, p

Weight gain in mice fed different diets: Group A (Control), Group B (Junk Food), Group C (Health Food), with 10 mice per group. For each group, compute the group sums Σx, Σx², and (Σx)²/n, plus the overall totals across all 30 mice.

Correction term = CT = (Σx)²/N = (310.7)²/30

Source of Variation  df  SS  MS  F  p

TOTAL: df = N − 1 = 29; SS_total = Σx² − CT

GROUP: df = k − 1 = 2; SS_group = Σ[(group Σx)²/n] − CT; MS_group = SS_group / 2

“ERROR” (within): df = N − k = 27; SS_error = SS_total − SS_group; MS_error = SS_error / 27 = 0.789

GOOD GRIEF !!!

F = variance (MS) between groups / variance (MS) within groups

F = MS_group / MS_error = 41.81, p < 0.05
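Putting the pieces together, the whole one-way ANOVA table can be built from raw scores. The data below are hypothetical (the mouse weights themselves are not reproduced here); the partitioning follows the CT / SS_total / SS_group / SS_error steps above:

```python
def one_way_anova_f(groups):
    """Return the F ratio for a one-way (between-subjects) ANOVA."""
    all_x = [x for g in groups for x in g]
    N, k = len(all_x), len(groups)
    ct = sum(all_x) ** 2 / N                                   # correction term (sum x)^2 / N
    ss_total = sum(x * x for x in all_x) - ct                  # total SS
    ss_group = sum(sum(g) ** 2 / len(g) for g in groups) - ct  # between-groups SS
    ss_error = ss_total - ss_group                             # within-groups SS
    ms_group = ss_group / (k - 1)                              # df_group = k - 1
    ms_error = ss_error / (N - k)                              # df_error = N - k
    return ms_group / ms_error                                 # F = between / within

# Three hypothetical groups with clearly separated means:
F = one_way_anova_f([[1, 2, 3], [4, 5, 6], [7, 8, 9]])  # F = 27.0
```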

Conducting ANOVAs I. Why? II. How? III. Comparing means: “post-hoc” mean comparison tests – after the ANOVA. TUKEY: CV = q √(MS_error / n), where q (from table A.7) = 3.53 and n = n per group (10). CV = 3.53 × √(0.789/10) = 0.99

Means: Health Food 8.58 (a), Control 10.29 (b), Junk Food 12.25 (c)

C − H = 1.71, J − C = 1.96, J − H = 3.67. All greater than 0.99, so all mean comparisons are significantly different at an experiment-wide error rate of 0.05.
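The Tukey critical value and the three pairwise comparisons can be verified with a short sketch, using q = 3.53, MS_error = 0.789, and n = 10 from the slides (the pairing with itertools is my addition):

```python
import math
from itertools import combinations

def tukey_cv(q, ms_error, n_per_group):
    """Tukey HSD critical value: CV = q * sqrt(MS_error / n)."""
    return q * math.sqrt(ms_error / n_per_group)

cv = tukey_cv(3.53, 0.789, 10)  # ~0.99
means = {"Health Food": 8.58, "Control": 10.29, "Junk Food": 12.25}

# Every pairwise difference exceeds cv, so all three means differ
# significantly at an experiment-wide alpha of 0.05.
significant = {(a, b): abs(means[a] - means[b]) > cv
               for a, b in combinations(means, 2)}
```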