1
Meta-Analysis
Matthew Burns, University of Missouri
2
Overview Resources
3
School Psychologists and Research
Keith (2008): three research activities
Conducting Research
Consuming Research
Synthesizing Research
4
Most Famous Articles in our Field?
6
Types of Errors in Narratively Synthesizing Research
Selectivity
Erroneous detailing
Nonrecognition of faulty author conclusions
Suppression of contrary findings
The larger the literature, the less useful narrative reviews become
7
Center et al. (1995) study of Reading Recovery
[Table: Reading Recovery vs. control group, pretest and posttest scores on the Clay Book Level Test, Word Reading Test, Neale Analysis of Reading (passage reading), Diagnostic Spelling Test, Phonemic Awareness Test, Cloze Test, and Word Attack Skills Test; values not shown]
8
Significance = Size of Sample × Size of Effect
Effect size: a standardized estimate of the difference between experimental and control groups
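A minimal sketch of that relationship (not from the presentation): the same effect size reaches significance only when the sample is large enough. It assumes two equal groups of size n, so t = d × √(n/2) with df = 2n − 2; the function name and numbers are illustrative.

from math import sqrt
from scipy import stats

def p_from_d(d, n_per_group):
    # two-sample t statistic for equal group sizes: t = d * sqrt(n/2)
    t = d * sqrt(n_per_group / 2)
    df = 2 * n_per_group - 2
    return 2 * stats.t.sf(abs(t), df)  # two-tailed p value

print(p_from_d(0.50, 20))   # same medium effect, n = 20 per group: p around .12
print(p_from_d(0.50, 100))  # same effect, n = 100 per group: p well under .01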
9
d = (X̄1 − X̄2) / [(s1 + s2) / 2]
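A minimal sketch of that computation in Python, using the slide's denominator (the simple average of the two standard deviations rather than a pooled SD); the data are invented.

from statistics import mean, stdev

def cohens_d(group1, group2):
    # d = (M1 - M2) / average of the two standard deviations
    return (mean(group1) - mean(group2)) / ((stdev(group1) + stdev(group2)) / 2)

treatment = [12, 15, 14, 16, 13]
control = [10, 11, 12, 9, 13]
print(cohens_d(treatment, control))  # positive d favors the treatment group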
10
d = 2r / √(1 − r²)
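A sketch of that conversion and its inverse (the inverse assumes equal group sizes; values are illustrative).

from math import sqrt

def r_to_d(r):
    # d = 2r / sqrt(1 - r^2)
    return 2 * r / sqrt(1 - r ** 2)

def d_to_r(d):
    # reverse conversion, assuming equal group sizes
    return d / sqrt(d ** 2 + 4)

print(r_to_d(0.37))  # roughly d = .80
print(d_to_r(0.80))  # roughly r = .37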
11
Hedges' g
13
Hedges' g is upwardly biased for small sample sizes
Correct by multiplying g by (1 − 3/[4N − 9]), where N = total sample size
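A quick sketch of that correction (the example values are invented; N is the combined sample size).

def hedges_g(d, n_total):
    # small-sample correction: multiply by 1 - 3 / (4N - 9)
    return d * (1 - 3 / (4 * n_total - 9))

print(hedges_g(0.76, 20))   # N = 20: the correction shrinks the estimate noticeably
print(hedges_g(0.76, 200))  # N = 200: the correction is negligible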
14
Effect Size Websites
15
Interpreting Effect Sizes: Cohen (1988)
.20 = small, .50 = medium, .80 = large
Somewhat arbitrarily selected: “More (was) to be gained than lost by supplying a common conventional frame of reference” (p. 25)
19
Post-test d = -0.18; pre- to post-test d = 0.76
20
Synthesizing
21
Meta-Analysis: a methodology for systematically examining a body of research (Glass, 1976)
Examine the effect of variables on the phenomenon of interest
Exhaustive search
Established and explicit inclusion criteria
Report an empirically derived effect size
Integrate research to create stronger generalizations
22
Meta-Analysis of Reading Recovery
[Table: number of studies (k) and mean effect size (d) for Reading Recovery (all students, "discontinued," "not discontinued") and for interventions for first graders (all, other than Reading Recovery); values not shown. *Elbaum, Vaughn, Hughes, & Moody (2000)]
23
Interventions for Children with LD
[Table: mean effect sizes by intervention; reading comprehension d = 1.13; direct instruction, psycholinguistic training, modality instruction, diet, and perceptual training values not shown; Kavale & Forness, 2000]
24
Contributions to Learning – Hattie 2009
The student d = .40
The school d = .23
The teacher d = .49
The curriculum d = .45
25
Hattie (2009) Meta-Analysis: Curriculum
Vocabulary Instruction d = .67
Phonics Instruction d = .60
Comprehension Instruction d = .58
Whole Language d = .06
Perceptual-Motor d = .08
26
Teacher Roles (Hattie, 2009)
Activator: Feedback d = .72, Meta-cognition d = .67, Direct Instruction d = .59, Mastery Learning d = .57, Formative Assessment d = .46; Total d = .60
Facilitator: Simulation/gaming d = .32, Inquiry-based d = .31, Class size d = .21, Problem-based d = .15, Inductive teaching d = .06; Total d = .17
27
Why Conduct Meta-Analytic Research?
Conflicting results
Clarity in a large literature
Quantify relationships between variables across studies to build theory
28
Conflicting Results: Problem-Solving Teams (Mean Effect Sizes)
29
Clarity: Disability Simulations
30
Theory
31
Need for Meta-Analyses
32
Process
33
Beginning
Identify your research questions, broadly stated
Identify key words used to describe relevant concepts in the literature
Develop inclusion criteria (For what are you looking?)
34
Search Terms: Comprehensive
Limit false positives, BUT almost eliminate false negatives
Include synonyms
Include vocabulary based on the literature
IOA (interobserver agreement) on the search terms
35
Searching the Literature
Identify sources for the electronic search
Conduct the search
Eliminate based on abstract
Eliminate based on full article
Search reference lists
Contact authors of identified articles
36
Learner control AND reading AND math
Learner control AND reading AND math* AND literacy AND science AND social studies AND history AND language AND English language arts AND writing AND English language learner AND computer assisted learning AND classroom AND computers AND educational technology AND school
39
Codes
Four general codes:
Substantive features of what is being studied
Research methods
Characteristics of the researcher and research context
How the study is reported
Terms must be based on the literature and clearly described in the paper
Coding manual: pilot and IOA
42
Type of Meta-Analysis: Fixed Effects or Random Effects
43
Fixed Effects Meta-Analysis
Very similar studies (same population, etc.)
One true effect size
Use when all studies are essentially identical and all have been identified
Examples: five drug-company studies; six studies on Reading Recovery
44
Random Effects Model
Articles are similar but vary somehow
There is a distribution of effect sizes
Combined effect: fixed effects = the one common effect; random effects = mean of the population of true effects
Effects weighting (more on that later)
Random effects are not as influenced by extremely small or large studies (each is part of a population)
45
Variance for Random Effects
v = (n1 + n2) / (n1 × n2) + g² / [2 × (n1 + n2)]
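A short sketch of that variance and the inverse-variance weight it produces (the numbers are invented).

def variance_of_g(g, n1, n2):
    # v = (n1 + n2) / (n1 * n2) + g^2 / (2 * (n1 + n2))
    return (n1 + n2) / (n1 * n2) + g ** 2 / (2 * (n1 + n2))

g, n1, n2 = 0.45, 30, 28
v = variance_of_g(g, n1, n2)
print(v, 1 / v)  # smaller variance -> larger weight in the meta-analysis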
46
Weighting Random Effects
Q: a measure of total variance across studies, used to test homogeneity
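As a sketch of how Q feeds into random-effects weighting, the function below uses the common DerSimonian-Laird estimator of between-study variance (the presentation does not name a specific estimator); the effect sizes and variances are invented.

def random_effects_pool(effects, variances):
    # fixed-effect weights and Q: variability of effects around the fixed-effect mean
    w = [1 / v for v in variances]
    mean_fixed = sum(wi * es for wi, es in zip(w, effects)) / sum(w)
    q = sum(wi * (es - mean_fixed) ** 2 for wi, es in zip(w, effects))
    # DerSimonian-Laird estimate of between-study variance (tau^2)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # random-effects weights add tau^2 to each study's within-study variance
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * es for wi, es in zip(w_re, effects)) / sum(w_re)
    return pooled, q, tau2

print(random_effects_pool([0.20, 0.55, 0.80], [0.04, 0.02, 0.05]))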
48
Single-Case Design (SCD) Effect Sizes
Cohen's d is not appropriate for SCD
No-assumptions effect size (Busk & Serlin, 1992): autocorrelation is a concern
Percentage of non-overlapping data (PND; Scruggs, Mastropieri, & Casto, 1987): no confidence intervals; affected by outlying data
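A minimal sketch of PND for an outcome where higher scores are better (data invented).

def pnd(baseline, intervention):
    # percentage of intervention points that exceed the highest baseline point
    ceiling = max(baseline)
    above = sum(1 for x in intervention if x > ceiling)
    return 100 * above / len(intervention)

print(pnd([3, 5, 4, 6], [7, 6, 9, 8, 10]))  # 80.0: the 6 only ties the baseline ceiling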
49
[Figure: multiple-baseline data illustrating autocorrelation; panel autocorrelations r = .54, .57, .48, and .72; total autocorrelation r = .57]
51
Non-Overlap of All Pairs (NAP)
Parker & Vannest (2009)
Based on area under the curve (AUC)
Compares each data point in phase A with each data point in phase B
NAP = number of comparison pairs with no overlap divided by the total number of pairs
Can be converted to other effect sizes
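A sketch of that pairwise comparison, counting ties as half an overlap per the usual AUC framing (data invented).

def nap(phase_a, phase_b):
    # share of (A, B) pairs where the B point improves on the A point; ties count .5
    pairs = [(a, b) for a in phase_a for b in phase_b]
    score = sum(1.0 if b > a else 0.5 if b == a else 0.0 for a, b in pairs)
    return score / len(pairs)

print(nap([3, 5, 4, 6], [7, 6, 9, 8, 10]))  # 0.975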
55
How Large? ROC analysis against visual analysis of multiple-baseline data suggests a large effect at NAP = .96 and a small effect at .90 (Petersen-Brown, Karich, & Symons, 2012)
56
Phi
Remember: PND, PAND, and NAP have unknown distributions and cannot be converted to confidence intervals
Phi is a commonly used effect size (Cohen, 1988)
58
Converting to Phi
NAP (an AUC) can be converted to phi using the formula from Ruscio (2008)
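The presentation does not give the formula, but one way to sketch such a conversion assumes Ruscio's normal-model link AUC = Φ(d/√2) and the usual d-to-correlation conversion for equal group sizes; treat the result as an approximation.

from math import sqrt
from scipy.stats import norm

def auc_to_phi(auc):
    # assumed link: AUC = Phi(d / sqrt(2)), so d = sqrt(2) * Phi^-1(AUC)
    d = sqrt(2) * norm.ppf(auc)
    # convert d to a correlation (equal group sizes assumed)
    return d / sqrt(d ** 2 + 4)

print(auc_to_phi(0.975))  # a NAP near 1 maps onto a large correlation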
59
Interpret Phi as an Effect Size
.10 = small, .30 = medium, .50 = large
Derived from large-group designs by converting d values of .20, .50, and .80 to correlations
63
Threats to Validity
Nonindependent effects: exclude repeats or average them
Gaps in the literature
Apples and oranges
File drawer problem
65
Fail-Safe N (Orwin, 1983)
Nfs = N0 × (d0 − dc) / dc
N0 = number of studies
d0 = mean of observed effects
dc = criterion d (e.g., .20)
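A one-line sketch of that computation (the study count and effects are invented).

def orwin_fail_safe_n(n_studies, mean_d, criterion_d):
    # null-result studies needed to pull the mean effect down to the criterion value
    return n_studies * (mean_d - criterion_d) / criterion_d

print(orwin_fail_safe_n(12, 0.58, 0.20))  # 22.8: about 23 unpublished null studies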