Reading and interpreting quantitative intervention research syntheses: an introduction. Steve Higgins, Durham University; Robert Coe, Durham University; Mark Newman, EPPI Centre, IoE, London University; James Thomas, EPPI Centre, IoE, London University; Carole Torgerson, IEE, York University.


Reading and interpreting quantitative intervention research syntheses: an introduction Steve Higgins, Durham University Robert Coe, Durham University Mark Newman, EPPI Centre, IoE, London University James Thomas, EPPI Centre, IoE, London University Carole Torgerson, IEE, York University Part 1

Acknowledgements This presentation is an outcome of the work of the ESRC-funded Researcher Development Initiative: “Training in the Quantitative Synthesis of Intervention Research Findings in Education and Social Sciences”. The training was designed by Steve Higgins and Rob Coe (Durham University), Carole Torgerson (Birmingham University), and Mark Newman and James Thomas (Institute of Education, London University). The team acknowledges the support of Mark Lipsey, David Wilson and Herb Marsh in the preparation of some of the materials, particularly Lipsey and Wilson’s (2001) “Practical Meta-analysis” and David Wilson’s slides (accessed 9/3/11). The materials are offered to the wider academic and educational community under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License: you should use the materials only for educational, not-for-profit purposes and you should acknowledge the source in any use.

Background Training funded by the ESRC’s “Researcher Development Initiative” Collaboration between the Universities of Durham, York and the Institute of Education, University of London

Support provided Level 1 –Introduction to the use of meta-analysis of intervention research in education and social science research –Interpreting meta-analyses of impact studies Level 2 –Introduction to the statistical techniques involved Level 3 –Advanced seminars in techniques and issues Workshops for doctoral students in Education Resources and support materials

National initiative Training venues (level 1 and 2) –Round 1: Durham, Edinburgh London –Round 2: York, Cardiff, Belfast, Sussex Level 3: Edinburgh, York, London Doctoral support: British Educational Research Association (BERA) conferences

Aims To support understanding of meta-analysis of intervention research findings in education and social sciences more broadly; To develop understanding of reviewing quantitative research literature; To describe the techniques and principles of meta-analysis involved to support understanding of its benefits and limitations; To provide references and examples to support further work.

Learning outcomes To understand the role of research synthesis in identifying messages about ‘what works’ from intervention research findings To understand the concept of effect size as a metric for comparing intervention research findings To be able to read and understand a forest plot of the results To be able to read a meta-analysis of intervention research findings, interpret the results, and draw conclusions.

Overview of the day
10.00 Arrival / Registration / Coffee
Introduction to research synthesis
Main concepts and features of quantitative synthesis or ‘meta-analysis’
Lunch
1.30 Reading and interpreting a meta-analysis
Overview of challenges to effective meta-analysis
3.00 Break
3.15 Summary, discussion and evaluation
4.00 Finish

Introductions Introduce yourself to the others at your table What is your interest in meta-analysis?

Synthesis of research findings How do we use findings from previous research? What counts as evidence? How do we ensure it is cumulative? How do we know it is applicable?

Schulze, R. (2007) The state and the art of meta-analysis. Zeitschrift für Psychologie / Journal of Psychology, 215(2), 87–89. Reproduced by kind permission from Zeitschrift für Psychologie / Journal of Psychology 2007; Vol. 215(2): 87–89. © 2007 Hogrefe & Huber Publishers. Please do not reproduce without seeking your own permission from the publisher.

Source: Professor Herb Marsh, Oxford University, online search of ISI database, Feb

Scenario Imagine you're in a school governors’ meeting and you are discussing the school's homework strategy. Someone waves around a review of research, found on the internet, which says that children should not be set more than half an hour of homework per night. What questions would you have about how the review was done in order to know how it can help decisions about a homework strategy for the school?

What if the topic was about the adoption of school uniforms?

Key issues about reviews and evidence Applicability of the evidence to the question –Breadth –Scope –Scale Robustness of the evidence –Research quality

Stages of synthesis Stages in the conduct of most reviews or syntheses: –Review question and conceptual framework –Initial organization of data –Identifying and exploring patterns in the data –Integration of the data (synthesis) –Checking the synthesis But the process should not be seen as linear

Stages of synthesis
–What is the question? Theories and assumptions in the review question
–What data are available? Addressing the review question according to the conceptual framework
–What are the patterns in the data? Including study, intervention, outcome and participant characteristics
–How does integrating the data answer the question? To address the question (including theory testing or development)
–How robust is the synthesis? For quality, sensitivity, coherence and relevance
–What is the result? What does the result mean (conclusions)? What new research questions emerge? Can the conceptual framework be developed?
Cooper, H. M. (1982) Scientific Guidelines for Conducting Integrative Research Reviews. Review of Educational Research, 52, 291. See also: Popay et al. (2006) Guidance on the Conduct of Narrative Synthesis in Systematic Reviews. Lancaster: Institute for Health Research, Lancaster University.

What is a systematic review? Some labels include... research synthesis, research review, systematic review, integrative review, quantitative review, and meta-analysis. NB the term “meta-analysis” sometimes refers only to quantitative summaries and sometimes is used more broadly.

Systematic reviewing
Stages:
–Key question
–Search protocol
–Inclusion/exclusion criteria
–Coding and mapping
–In-depth review (sub-question)
–Techniques for systematic synthesis
Questions addressed: What is the question? What data are available? What patterns are in the data? What are the results? How robust is the synthesis?

Advantages A systematic review uses explicit, replicable methods to identify relevant studies, then uses established or transparent techniques to analyze those studies; and it aims to limit bias in the identification and evaluation of studies and in the integration or synthesis of information applicable to a specific research question.

Underpinning bias in systematic reviews? Research and policy focus Specific reviews to answer particular questions –What works? - impact and effectiveness research with a resulting tendency to focus on quantitative and experimental designs

Meta-analysis as synthesis Quantitative data from –Experimental research studies –Correlational research studies Methodological assumptions from quantitative approaches (both epistemological and mathematical)

Literature reviews - conceptual relations Systematic reviews Meta-analyses Narrative reviews

Meta-analysis or quantitative synthesis Synthesis of quantitative data –Cumulative –Comparative –Correlational “Surveys” educational research (Lipsey and Wilson, 2001)

Origins 1952: Hans J. Eysenck concluded that there were no favorable effects of psychotherapy, starting a raging debate which 25 years of evaluation research and hundreds of studies failed to resolve. 1978: To prove Eysenck wrong, Gene V. Glass statistically aggregated the findings of 375 psychotherapy outcome studies. Glass (and colleague Smith) concluded that psychotherapy did indeed work: “the typical therapy trial raised the treatment group to a level about two-thirds of a standard deviation on average above untreated controls; the average person who received therapy finished the experiment in a position that exceeded the 75th percentile in the control group on whatever outcome measure happened to be taken” (Glass, 2000). Glass called the method “meta-analysis”. (adapted from Lipsey & Wilson, 2001)

Historical background Underpinning ideas can be identified earlier: –K. Pearson (1904) Averaged correlations for typhoid mortality after inoculation across 5 samples –R. A. Fisher (1944) “When a number of quite independent tests of significance have been made … although few or none can be claimed individually as significant, yet the aggregate gives an impression that the probabilities are on the whole lower than would often have been obtained by chance” (p. 99). Source of the idea of cumulating probability values –W. G. Cochran (1953) Discusses a method of averaging means across independent studies Set out much of the statistical foundation for meta-analysis (e.g., inverse variance weighting and homogeneity testing) ( adapted from Lipsey & Wilson, 2001 and Hedges, 1984)
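The inverse-variance weighting and homogeneity testing mentioned above can be sketched in a few lines of Python. The study effects and variances below are made-up numbers for illustration, not data from any review cited in these slides:

```python
import math

def pool_fixed_effect(effects, variances):
    """Fixed-effect inverse-variance pooling: each study is weighted by
    1/variance, so more precise studies count for more."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    # Cochran's Q homogeneity statistic: do the studies appear to share
    # one underlying effect, or do they vary more than chance allows?
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    return pooled, se, q

# Three hypothetical studies (illustrative values only)
effects = [0.30, 0.50, 0.40]
variances = [0.02, 0.04, 0.01]
pooled, se, q = pool_fixed_effect(effects, variances)
```

Under homogeneity, Q is compared against a chi-square distribution with k − 1 degrees of freedom (here k = 3 studies); a large Q suggests the studies do not share a single true effect.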

Meta-analysis
Stages:
–Key question
–Search protocol
–Inclusion/exclusion criteria
–Coding
–Statistical exploration of findings: mean and range, distribution, sources of variance, ‘sensitivity’
Questions addressed: What is the question? What data are available? How robust is the synthesis?

Intervention research Usually evaluation of policies, practices or programmes Usually based on experiments (RCTs, quasi-experimental designs) Answering impact questions –Does it work? –Is it better than…?

Impact questions Causal –Does X work better than Y? Homework intervention studies Not correlational –Rather than associational Do schools with homework do better?

Kinds of questions… Identify an area of research you are interested in Discuss what kind of questions could be answered by a)Interventions b)Correlational studies

Literature reviews - conceptual relations Systematic reviews Meta-analyses Narrative reviews Meta-analyses of intervention research

Comparing quantitative studies The need for a common measure across research studies –Identifying a comparable measure –Using this effectively –Interpreting this appropriately

Significance versus effect size Traditional test is of statistical ‘significance’ The difference is unlikely to have occurred by chance –However it may not be: Large Important, or even Educationally ‘significant’

The rationale for using effect sizes Traditional reviews focus on statistical significance testing –Highly dependent on sample size –Null finding does not carry the same “weight” as a significant finding Meta-analysis focuses on the direction and magnitude of the effects across studies –From “Is there a difference?” to “How big is the difference?” –Direction and magnitude represented by “effect size”
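The dependence of significance on sample size is easy to demonstrate. The sketch below uses a normal approximation (two equal groups, unit standard deviation; the effect of 0.3 and the sample sizes are invented for illustration): the same effect size is 'non-significant' in a small study and 'significant' in a large one.

```python
from statistics import NormalDist
import math

def two_sided_p(d, n_per_group):
    """Approximate two-sided p-value for a standardised mean difference d
    observed in two equal groups of size n (normal approximation)."""
    z = d * math.sqrt(n_per_group / 2)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Identical effect size, very different verdicts from significance testing:
for n in (20, 100, 500):
    print(n, round(two_sided_p(0.3, n), 4))
```

With n = 20 per group the p-value is well above 0.05; with n = 100 or 500 it falls below it, even though the effect size, and hence the practical magnitude of the difference, is unchanged.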

[Figure: two overlapping (standardised) score distributions, one showing the average score of a person taught ‘normally’ and one the average score of a person taught by the experimental method; the standardised gap between the two averages is the effect size.]

Effect size = (Mean of experimental group − Mean of control group) / Standard deviation

Effect size is the difference between the two groups, relative to the standard deviation.

Effect size From: Marzano, R. J. (1998) A Theory-Based Meta-Analysis of Research on Instruction. Aurora, Colorado, Mid-continent Regional Educational Laboratory. Available at: (accessed 2/9/08).

Comparison of impact Same AND different measures Significance vs effect size –Does it work? vs How well does it work? Effect size Wilkinson, L., & APA Task Force on Statistical Inference. (1999) Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54,

Effect sizes Standardised way of looking at difference –Different methods for calculation Correlational (e.g. Pearson’s r) Odds ratio (binary/dichotomous outcomes) Standardised mean difference –Difference between control and intervention group as proportion of the dispersion of scores

Effect size
The difference between the two means, expressed as a proportion of the standard deviation:
ES = (Me − Mc) / SD
Issues:
–Which standard deviation?
–Statistical significance?
–Margin of error?
–Normal distribution?
–Restricted range
–Reliability

Main approaches Cohen's d (but which SD?) Glass's Δ (sd of control) Hedges' g (weighted for sample size)
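As a rough sketch of how the three approaches differ, the hypothetical helper below computes all three from two groups of scores, using the standard textbook formulas (pooled SD for Cohen's d, control-group SD for Glass's Δ, and a small-sample correction factor for Hedges' g):

```python
import math
from statistics import mean, stdev

def effect_sizes(experimental, control):
    """Standardised mean difference three ways; which SD you divide by
    is exactly the choice the slide raises."""
    m_e, m_c = mean(experimental), mean(control)
    s_e, s_c = stdev(experimental), stdev(control)
    n_e, n_c = len(experimental), len(control)
    # Pooled SD across both groups (Cohen's d)
    s_p = math.sqrt(((n_e - 1) * s_e**2 + (n_c - 1) * s_c**2) / (n_e + n_c - 2))
    d = (m_e - m_c) / s_p
    delta = (m_e - m_c) / s_c          # Glass's delta: control-group SD only
    j = 1 - 3 / (4 * (n_e + n_c) - 9)  # small-sample correction factor
    g = j * d                          # Hedges' g: bias-corrected d
    return d, delta, g
```

With small samples g is noticeably smaller than d; as the groups grow, the correction factor approaches 1 and the two converge.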

Examples of Effect Sizes: ES = 0.2 “Equivalent to the difference in heights between 15 and 16 year old girls” 58% of control group below mean of experimental group Probability you could guess which group a person was in = 0.54 Change in the proportion above a given threshold: from 50% to 58% or from 75% to 81%

“Equivalent to the difference in heights between 14 and 18 year old girls” 69% of control group below mean of experimental group Probability you could guess which group a person was in = 0.60 ES = 0.5 Change in the proportion above a given threshold: from 50% to 69% or from 75% to 88%

“Equivalent to the difference in heights between 13 and 18 year old girls” 79% of control group below mean of experimental group Probability you could guess which group a person was in = 0.66 ES = 0.8 Change in the proportion above a given threshold: from 50% to 79% or from 75% to 93%
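The translations on these three slides follow directly from the normal curve. A short sketch, assuming normal distributions with equal spread; note that the Φ(d/2) "guessing" convention below is inferred from the slide's own figures, and other authors use slightly different formulas for that quantity:

```python
from statistics import NormalDist

phi = NormalDist().cdf  # standard normal cumulative distribution function

def interpret(d):
    """Translate an effect size d into the slide's two interpretations."""
    below = phi(d)      # share of the control group below the experimental mean
    guess = phi(d / 2)  # chance of correctly guessing a person's group
    return round(below, 2), round(guess, 2)

for d in (0.2, 0.5, 0.8):
    print(d, interpret(d))
```

This reproduces the slides' figures: 58%/0.54 for d = 0.2, 69%/0.60 for d = 0.5, and 79%/0.66 for d = 0.8.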

Rank (or guess) some effect sizes…
–Learning styles
–ICT/Educational technology
–Homework
–Providing feedback
–Direct instruction
(Cohen's benchmarks: “small” ≤ 0.2; “medium” ≈ 0.5; “large” ≥ 0.8)

0.79 Providing feedback (Hattie & Timperley, 2007) 0.6 Direct instruction (Sipe & Curlette, 1997) 0.37 ICT/Ed Tech (Hattie, 2008) 0.29 Homework (Hattie, 2008) 0.15 Learning styles (Kavale & Forness, 1987; cf Slemmer 2002) Rank order of effect sizes

Interpreting effect sizes –a “small” effect may be important in an intervention which is cheap or easy to implement –a “small” effect may be meaningful if used across an entire population (prevention programs for school children) –“small” effects may be more achievable for serious or intractable problems –but Cohen’s categories correspond with the broad distribution of effects across meta-analyses found by Lipsey and Wilson (1993), Sipe and Curlette (1997) and Hattie (2008)

Confidence intervals Robustness of the effect –Shows the range within which a presumed actual effect is likely to lie Smaller studies: larger confidence intervals Larger studies: smaller confidence intervals –If a confidence interval includes zero, the intervention is not statistically significantly different from the control –Does not avoid issues of bias in the synthesis

Confidence intervals By convention set at the 95% level –95 times out of 100 the population effect will be within the range of the confidence interval (in the context of estimation and assuming the same population) –Allows us to look at statistically non-significant results –Is a large effect with a wide confidence interval the same as a small effect and a narrow confidence interval?
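A minimal sketch of the small-study/large-study contrast, using the standard large-sample approximation for the standard error of a standardised mean difference (the effect of 0.4 and the sample sizes are invented for illustration):

```python
import math

def ci95(d, n1, n2):
    """Approximate 95% confidence interval for a standardised mean
    difference d from two groups of size n1 and n2."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - 1.96 * se, d + 1.96 * se

# The same observed effect in a small study and a large study:
lo_small, hi_small = ci95(0.4, 20, 20)    # small n -> wide interval
lo_large, hi_large = ci95(0.4, 200, 200)  # large n -> narrow interval
```

The small study's interval straddles zero (so the effect is not statistically significant), while the large study's interval excludes zero, even though the observed effect size is identical.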

Some recent findings from meta-analysis in education
Bernard et al.: distance education and classroom instruction studies, 688 effects; wide range of effects (‘heterogeneity’); asynchronous DE more effective than synchronous.
Pearson et al.: research articles, 89 effects ‘related to digital tools and learning environments to enhance literacy acquisition’; weighted effect size of 0.49, indicating technology can have a positive impact on reading comprehension.
Klauer & Phye: studies with 3,600 children; training in inductive reasoning improves academic performance (0.69) more than intelligence test performance (0.52).
Gersten et al.: maths interventions for low attainers, 42 studies; teaching heuristics and explicit instruction particularly beneficial.
