Analyses of K-Group Designs: Analytic Comparisons & Trend Analyses

Analyses of K-Group Designs: Analytic Comparisons & Trend Analyses

Analytic Comparisons
– Simple comparisons
– Complex comparisons
– Trend Analyses
Errors & Confusions when interpreting Comparisons
Comparisons using SPSS
Orthogonal Comparisons
Effect sizes for Analytic & Trend Analyses

Analytic Comparisons -- techniques for making specific comparisons among condition means. There are two types…

Simple Analytic Comparisons -- compare the means of two IV conditions at a time

Rules for assigning weights:
1. Assign a weight of "0" to any condition not involved in the RH
2. Assign weights to reflect the comparison of interest
3. Weights must sum to zero

Means:  Tx2 = 40   Tx1 = 10   C = 40
E.g. #1  RH: Tx1 < C  (is 10 < 40?)
E.g. #2  RH: Tx2 < Tx1  (is 40 < 10?)

How do Simple Analytic Comparisons & Pairwise Comparisons differ? Usually there are only k-1 analytic comparisons (one for each df of the IV).

So, what happens with these weights?

The formula:  SS_comp = n(Σ w·mean)² / Σ w²   and   F = SS_comp / MS_error

The important part is the Σ w·mean -- multiply each mean by its weight and add the weighted means together:
– if a group is weighted 0, that group is "left out" of the SS_comp
– if the groups in the analysis have the same means, SS_comp = 0
– the more different the means of the groups in the analysis, the larger SS_comp will be

Means:  Tx2 = 40   Tx1 = 10   C = 40
Σ w·mean = (-1·40) + (0·10) + (1·40) = 0
Σ w·mean = (-1·40) + (1·10) + (0·40) = -30
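The SS_comp arithmetic above can be sketched in Python. The per-group n of 10 and the MS_error of 100 are hypothetical values for illustration -- the slide gives only the three means.

```python
# Contrast sums of squares, as defined above:
#   SS_comp = n * (sum of w*mean)^2 / (sum of w^2),  F = SS_comp / MS_error
# Group order matches the slide: Tx2, Tx1, C with means 40, 10, 40.

def ss_comp(means, weights, n):
    numerator = sum(w * m for w, m in zip(weights, means)) ** 2
    return n * numerator / sum(w * w for w in weights)

means = [40, 10, 40]                      # Tx2, Tx1, C

# E.g. #1 -- Tx1 vs C (Tx2 "left out" with weight 0)
print(ss_comp(means, [0, 1, -1], n=10))   # 10 * (-30)^2 / 2 = 4500.0

# E.g. #2 -- Tx2 vs Tx1 (C left out)
print(ss_comp(means, [-1, 1, 0], n=10))   # also 4500.0

# equal means in the contrast -> SS_comp = 0 (the Tx2 vs C case)
print(ss_comp(means, [-1, 0, 1], n=10))   # 0.0

MS_error = 100.0                          # hypothetical error term
print(ss_comp(means, [0, 1, -1], 10) / MS_error)   # F = 45.0
```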

Complex Analytic Comparisons -- compare two "groups" of IV conditions, where a "group" is sometimes one condition and sometimes two or more conditions that are "combined" and represented by their average mean.

Rules for assigning weights:
1. Assign a weight of "0" to any condition not involved in the RH
2. Assign weights to reflect the group comparison of interest
3. Weights must sum to zero

Means:  Tx2 = 40   Tx1 = 10   C = 40
RH: Control is higher than the average of the Tx conditions (is 40 > 25?)

Careful!!! Notice the difference between the proper interpretation of this complex comparison and of the corresponding set of simple comparisons:
RH: Both Tx conditions are lower than Control (is 40 < 40? is 10 < 40?)
The complex comparison & the set of simple comparisons have different interpretations!
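The complex comparison above can be written as a single contrast. The weights (-1, -1, 2) are one standard choice for "Control vs. the average of the two Tx conditions" (a sketch using the slide's means).

```python
# Complex contrast: C vs. the average of Tx1 & Tx2.
# Weights -1, -1, 2 sum to zero, as the rules require.
means = {"Tx2": 40, "Tx1": 10, "C": 40}
weights = {"Tx2": -1, "Tx1": -1, "C": 2}

contrast = sum(weights[g] * means[g] for g in means)
# 2*40 - (40 + 10) = 30; dividing by 2 recovers the difference of means:
# C - average(Tx1, Tx2) = 40 - 25 = 15, so the RH direction (40 > 25) holds
print(contrast / 2)   # 15.0
```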

Criticisms of Complex Analytic Comparisons

– Complex comparisons are seldom useful for testing research hypotheses!! (Most RH are addressed by the proper set of simple comparisons!)
– Complex comparisons require assumptions about the comparability of the IV conditions (i.e., those combined into a "group") that should themselves be treated as research hypotheses!!
– Why would you run two (or more) separate IV conditions, being careful to follow their different operational definitions, only to "collapse" them together in a complex comparison?
– Complex comparisons are often misinterpreted as if they were a set of simple comparisons

Orthogonal and nonorthogonal sets of analytics

Orthogonal means independent or unrelated -- the idea of a set of orthogonal analytic comparisons is that each provides statistically independent information.

The way to determine whether a pair of comparisons is orthogonal is to sum the products of the corresponding weights. If that sum is zero, the pair of comparisons is orthogonal.

Weights over Tx1, Tx2, C:
Non-orthogonal pair:  (1, -1, 0) & (1, 0, -1)  -- sum of products = 1
Orthogonal pair:      (1, -1, 0) & (1, 1, -2)  -- sum of products = 0

For a "set" of comparisons to be orthogonal, each pair must be!
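The orthogonality check described above is a one-liner: sum the products of the corresponding weights and compare to zero. The specific weight pairs are illustrative.

```python
# A pair of contrasts is orthogonal iff the sum of the products of their
# corresponding weights is zero.

def orthogonal(w1, w2):
    return sum(a * b for a, b in zip(w1, w2)) == 0

# Non-orthogonal pair: both contrasts involve Tx1 in the same direction
print(orthogonal((1, -1, 0), (1, 0, -1)))   # False (sum of products = 1)

# Orthogonal pair: Tx1 vs Tx2, then their average vs C
print(orthogonal((1, -1, 0), (1, 1, -2)))   # True  (sum of products = 0)
```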

Advantages and Disadvantages of Orthogonal Comparison Sets

Advantages:
– each comparison gives statistically independent information, so the orthogonal set gives the most information possible for that number of comparisons
– it is a mathematically elegant way of expressing the variation among the IV conditions -- SS_IV is partitioned among the comparisons

Disadvantages:
– "separate research questions" often don't translate into "statistically orthogonal comparisons"
– you can only have as many orthogonal comparisons as df_IV
– the comparisons included in an orthogonal set rarely address the set of research hypotheses one has (e.g., sets of orthogonal analyses usually include one or more complex comparisons)
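The partitioning point can be checked directly: for an orthogonal set, the SS_comp values add up to the between-groups SS. A sketch using the slide's means; the n of 10 is hypothetical.

```python
# SS_IV partitioned by an orthogonal set of contrasts (means: Tx2, Tx1, C).

def ss_comp(means, weights, n):
    numerator = sum(w * m for w, m in zip(weights, means)) ** 2
    return n * numerator / sum(w * w for w in weights)

means, n = [40, 10, 40], 10
grand = sum(means) / len(means)
ss_between = n * sum((m - grand) ** 2 for m in means)

# an orthogonal set for k = 3 (df_IV = 2): Tx2 vs Tx1, then their average vs C
ortho_set = [(1, -1, 0), (1, 1, -2)]

print(ss_between)                                    # 6000.0
print(sum(ss_comp(means, w, n) for w in ortho_set))  # 4500 + 1500 = 6000.0
```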

Trend Analyses -- the shape of the IV-DV relationship

Trend analyses can be applied whenever the IV is quantitative. There are three basic types of trend (with two versions of each):

Linear Trends -- positive or negative
Quadratic Trends (requires at least 3 IV conditions) -- U-shaped or inverted-U-shaped
Cubic Trends (requires at least 4 IV conditions)

Note: Trend analyses are computed the same way as analytic comparisons -- using weights (coefficients) taken from a table of orthogonal polynomial coefficients (valid only for equal n & equal spacing of the IV conditions).

Note: A set of trend analyses is orthogonal -- each trend provides separate information.
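A sketch of how trend contrasts use the tabled orthogonal-polynomial coefficients, shown for k = 4 equally spaced, equal-n conditions; the means and n below are hypothetical.

```python
# Trend contrasts are ordinary contrasts whose weights come from the standard
# table of orthogonal polynomial coefficients (here, the k = 4 row).

def ss_comp(means, weights, n):
    numerator = sum(w * m for w, m in zip(weights, means)) ** 2
    return n * numerator / sum(w * w for w in weights)

linear    = (-3, -1,  1,  3)
quadratic = ( 1, -1, -1,  1)
cubic     = (-1,  3, -3,  1)

# the three trend contrasts are mutually orthogonal:
pairs = [(linear, quadratic), (linear, cubic), (quadratic, cubic)]
print(all(sum(a * b for a, b in zip(w1, w2)) == 0 for w1, w2 in pairs))  # True

means = [10, 20, 30, 40]              # hypothetical: a purely linear pattern
print(ss_comp(means, linear, n=5))    # 2500.0 -- all the between-group SS
print(ss_comp(means, quadratic, n=5)) # 0.0 -- no quadratic component
```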

Not only is it important to distinguish between the two different versions of each basic trend, it is also important to identify shapes that are combinations of trends (and of the different kinds).

Here are two different kinds of "linear + quadratic" that would have very different interpretations:
– positive linear & U-shaped quadratic ("accelerating returns" curve)
– positive linear & inverted-U-shaped quadratic ("diminishing returns" curve)

Here is a common combination of positive linear & cubic: the "learning curve".

"How to mess up interpreting analytic comparisons"

Simple Comparisons:
– ignoring the direction of the simple difference (remember, you must have a difference in the correct direction)

Complex Comparisons:
– ignoring the direction of the difference (remember, you must have a difference in the correct direction)
– misinterpreting a complex comparison as if it were a set of simple comparisons

Trend Analyses:
– ignoring the specific pattern of the trend (remember, you must have a shape in the correct direction or pattern)
– misinterpreting a trend as if it were a set of simple comparisons
– ignoring combinations of trend (e.g., the RH of a linear trend "really means" that there is a significant linear trend, and no significant quadratic or cubic trend)
– performing trend analyses on non-quantitative IV conditions

Effect Sizes for the k-BG or k-WG Omnibus F

The effect size formula must take into account both the size of the sample (represented by df_error) and the size of the design (represented by df_effect):

r = √[ (df_effect · F) / ((df_effect · F) + df_error) ]

The effect size estimate for a k-group design can only be compared to effect sizes from other studies with designs having exactly the same set of conditions.

There is no "d" for k-group designs -- you can't reasonably take the "difference" among more than 2 groups.
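The omnibus formula above, sketched in Python; the F and df values are hypothetical.

```python
# r = sqrt( (df_effect * F) / ((df_effect * F) + df_error) )
import math

def r_omnibus(F, df_effect, df_error):
    return math.sqrt((df_effect * F) / (df_effect * F + df_error))

# e.g., a 3-group design with F(2, 57) = 4.50
print(round(r_omnibus(4.50, 2, 57), 3))   # 0.369
```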

Effect Sizes for the k-BG or k-WG Analytic Comparisons (Simple, Complex & Trend Analyses)

Since all three kinds of analytic comparisons always have df_effect = 1, we can use the same effect size formula for them all (the same one we used for 2-group designs):

r = √[ F / (F + df_error) ]   or   r = √[ t² / (t² + df) ]

Effect size estimates from simple & complex comparisons can be compared with effect sizes from other studies whose designs include the same compared conditions (no matter what other, differing conditions are in the two designs).

Effect size estimates from trend analyses can only be compared with effect sizes from other studies with designs having the same set of conditions.
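A sketch of the df_effect = 1 formulas; since F = t² for a single contrast, the two forms give the same r. The F, t, and df values are hypothetical.

```python
# r = sqrt(F / (F + df_error))  or  r = sqrt(t^2 / (t^2 + df))
import math

def r_from_F(F, df_error):
    return math.sqrt(F / (F + df_error))

def r_from_t(t, df):
    return math.sqrt(t * t / (t * t + df))

print(round(r_from_F(9.0, 36), 3))   # F(1, 36) = 9.00 -> 0.447
print(round(r_from_t(3.0, 36), 3))   # t(36) = 3.00 -> same r, since t^2 = F
```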