Effect Sizes & Power Analyses for k-group Designs

Effect Size Estimates for k-group ANOVA designs
Power Analysis for k-group ANOVA designs
Effect Size Estimates for k-group X² designs
Power Analysis for k-group X² designs

k-BG Effect Sizes

When you have more than 2 groups, it is possible to compute an effect size for "the whole study." Enter the F-value and the df (both for the effect and the error), and click the button for the type of design you have (BG or WG). However, this type of effect size is not very helpful, because:
-- you don't know which pairwise comparison(s) make up the r
-- it can only be compared to other designs with exactly the same combination of conditions
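As a sketch of the arithmetic behind that button: the slides delegate this to the Computator, so the function name below is hypothetical, and my assumption is that it uses the standard omnibus conversion eta² = (F · df_effect) / (F · df_effect + df_error).

```python
from math import sqrt

def whole_study_eta(F, df_effect, df_error):
    """Whole-study effect size (eta) from the omnibus F.

    Assumes the standard conversion:
        eta^2 = (F * df_effect) / (F * df_effect + df_error)
    which is what you would compute from F and the two df values
    the slide says to enter.
    """
    eta_sq = (F * df_effect) / (F * df_effect + df_error)
    return sqrt(eta_sq)
```

With df_effect = 1 this reduces to the familiar r = √(F / (F + df_error)) for a 2-group design.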

k-BG Effect Sizes

Just as RH: for k-group designs involve comparing 2 groups at a time (pairwise comparisons)…

The most useful effect sizes for k-group designs are computed as the effect size for 2 groups (effect sizes for pairwise comparisons).

Since you won't have F-values for the pairwise comparisons, you will use the Computator to complete a 2-step computation, using info from the SPSS output:

d = (M1 - M2) / √MSerror

r = √( d² / (d² + 4) )
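The 2-step computation above can be sketched directly (the function name is hypothetical; the formulas are exactly the two steps just shown):

```python
from math import sqrt

def pairwise_effect_size_bg(m1, m2, ms_error):
    """Two-step pairwise effect size for a k-BG design.

    Step 1: Cohen's d from the two condition means, using the ANOVA
            error term (MSerror) as the pooled variance estimate.
    Step 2: convert d to r.
    """
    d = (m1 - m2) / sqrt(ms_error)     # d = (M1 - M2) / sqrt(MSerror)
    r = sqrt(d**2 / (d**2 + 4))        # r = sqrt(d^2 / (d^2 + 4))
    return d, r
```

For example, means of 10 and 8 with MSerror = 4 give d = 1.0 and r ≈ .45.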

Pairwise effect size computation for k-BG designs

[SPSS Descriptives output: N, Mean, and Std. Deviation for the outcome variable (larger scores are better) in the no therapy, weekly therapy, and daily therapy conditions, plus the Total row.]

For no therapy vs. weekly therapy … For a BG design, be sure to press the BG button.

k-WG Effect Sizes

Just as RH: for k-group designs involve comparing 2 groups at a time (pairwise comparisons)…

Effect sizes for k-group designs are computed as the effect size for 2 groups (effect sizes for pairwise comparisons).

Since you won't have F-values for the pairwise comparisons, you will use the Computator to complete a 3-step computation, using info from the SPSS output:

d = (M1 - M2) / √(MSerror * 2)

dw = d * 2

r = √( dw² / (dw² + 4) )
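The 3-step WG computation can be sketched the same way (hypothetical function name, implementing exactly the three steps above):

```python
from math import sqrt

def pairwise_effect_size_wg(m1, m2, ms_error):
    """Three-step pairwise effect size for a k-WG design."""
    d = (m1 - m2) / sqrt(ms_error * 2)   # step 1: d from means & MSerror
    dw = d * 2                            # step 2: adjust d for the WG design
    r = sqrt(dw**2 / (dw**2 + 4))         # step 3: convert dw to r
    return dw, r
```

For example, means of 10 and 8 with MSerror = 2 give dw = 2.0 and r ≈ .71.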

Pairwise effect size computation for k-WG designs

[SPSS Descriptive Statistics output: Mean, Std. Deviation, and N for INTAKE, MID, and FINAL.]

For intake vs. mid … For a WG design, be sure to press the WG button.

Determining the power you need…

For a 2-condition design, the omnibus-F is sufficient -- retain or reject, you're done! You can easily determine the sample size needed to test any expected effect size with a given amount of power.

For a k-condition design, the power of the omnibus-F isn't what matters!
-- a significant omnibus-F only tells you that the "two most different" means are significantly different
-- follow-up (pairwise) analyses will be needed to test whether the pattern of the mean differences matches the RH:
-- you don't want a "pattern of results" that is really just a "pattern of differential statistical power"
-- you need to assure that you have sufficient power for the smallest pairwise effect needed to test your specific RH:

k-group Power Analyses

As before, there are two kinds of power analyses:

A priori power analyses -- conducted before the study is begun; start with r & desired power to determine the needed N.

Post hoc power analyses -- conducted after retaining H0:; start with r & N and determine power & the Type II error probability.

Power Analyses for k-BG designs

Important symbols:
S is the total # of participants in that pairwise comparison
n = S / 2 is the # of participants in each condition of that pairwise comparison
N = n * k is the total number of participants in the study

Example: the smallest pairwise effect size for a 3-BG study was r = .25
with r = .25 and 80% power, S = 120
for each of the 2 conditions, n = S / 2 = 120 / 2 = 60
for the whole study, N = n * k = 60 * 3 = 180
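The bookkeeping from S to n to N can be sketched as a small helper (the function name is mine; S itself still comes from a power table for the smallest pairwise r, as the slide describes):

```python
def bg_sample_sizes(S, k):
    """Translate S (total participants needed for one pairwise comparison,
    read from a power table for the smallest pairwise r) into the
    per-condition and whole-study sample sizes for a k-BG design."""
    n = S // 2    # participants in each condition of the pairwise comparison
    N = n * k     # total participants in the k-group study
    return n, N
```

Using the slide's example: S = 120 and k = 3 gives n = 60 per condition and N = 180 overall.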

Power Analyses for k-WG designs

Important symbols:
S is the total # of participants in that pairwise comparison
For WG designs, every participant is in every condition, so S is also the number of participants in each condition.

Example: the smallest pairwise effect size for a 3-WG study was r = .20
with r = .20 and 80% power, S = 191
for each condition of a WG design, n = S = 191
for the whole study, N = S = 191

Combining LSD & r

[Table: means, pairwise mean differences (M dif), and effect sizes (r) for the Cx (M = 20.3), Tx1, and Tx2 conditions; * indicates the mean difference is significant based on the LSD criterion (min dif = 6.1).]

Something to notice … The effect size of Cx vs. Tx1 is substantial (Cohen calls .30 a "medium effect"), but the difference is not significant, suggesting we should check the power of the study for testing an effect of this size.

k-group X² Effect Sizes

When you have more than 2 groups, it is possible to compute an effect size for "the whole study." Enter the X², the total N, and click the button for df > 1. However, this type of effect size is not very helpful, because:
-- you don't know which pairwise comparison(s) make up the r
-- it can only be compared to other designs with exactly the same combination of conditions

Pairwise X² Effect Sizes

Just as RH: for k-group designs involve comparing 2 groups at a time (pairwise comparisons)…

The most useful effect sizes for k-group designs are computed as the effect size for 2 groups (effect sizes for pairwise comparisons).

The effect size Computator calculates the effect size for each pairwise X² it computes.
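The slides leave this conversion to the Computator; a common formula for a 2×2 pairwise X² is the phi coefficient, r = √(X² / N). Whether the tool uses exactly this conversion is my assumption, and the function name is hypothetical.

```python
from math import sqrt

def pairwise_chi_square_r(chi_sq, n):
    """Effect size r (phi coefficient) for a pairwise 2x2 chi-square:
    r = sqrt(chi^2 / N), where N is the total count across the two
    conditions being compared. (Standard formula; assumed here.)"""
    return sqrt(chi_sq / n)
```

For example, a pairwise X² of 4.0 on N = 100 gives r = .20.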

k-group Power Analyses

As before, there are two kinds of power analyses:

A priori power analyses -- conducted before the study is begun; start with r & desired power to determine the needed N.

Post hoc power analyses -- conducted after retaining H0:; start with r & N and determine power & the Type II error probability.

Power Analyses for k-group X² designs

Important symbols:
S is the total # of participants in that pairwise comparison
n = S / 2 is the # of participants in each condition of that pairwise comparison
N = n * k is the total number of participants in the study

Example: the smallest pairwise X² effect size for a 3-BG study was r = .25
with r = .25 and 80% power, S = 120
for each of the 2 conditions, n = S / 2 = 120 / 2 = 60
for the whole study, N = n * k = 60 * 3 = 180