Statistical Analysis Plans EEF Evaluators’ Conference 2016 Dr Ben Styles Head of NFER’s Education Trials Unit.

The replicability crisis
Many research findings are false or exaggerated (Ioannidis, 2014).
Suggestions to make more research findings true: a replication culture, reproducibility practices, better statistical methods and standardisation of definitions and analyses.
Ioannidis JPA (2014) How to Make More Published Research True. PLoS Med 11(10): e1001747. doi:10.1371/journal.pmed.1001747

Education trials
Potentially even more vulnerable: median effect size of 0.1 (Sanders and Chonaire, 2015); very little replication; diversity of approaches to design and analysis.
[Figure: distribution of effect sizes in education research – Sanders and Chonaire (2015)]

EEF approach
Strategy to maximise replicability, and the EEF approach to each:
Replication culture: independent evaluation; wider availability of trial data and analysis code; occasional re-trial
Better statistical methods: analysis guidance
Standardisation of analyses: R package and forthcoming peer review of SAPs

What happens without a SAP?
At all levels of randomisation:
Which outcome variable? Which covariates?
How much missing data prompts multiple imputation (MI)/sensitivity analyses? What type of MI/sensitivity analysis? (sketched below)
Which comparisons when there are more than two arms?
Interim analyses? More than one follow-up?
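To make this concrete, here is a minimal sketch in Python with statsmodels (R would be more typical for EEF analyses; the data file, column names and the 5% missingness threshold are all hypothetical) of how a SAP pins these choices down in advance: one pre-specified outcome and covariate set for the complete-case ITT model, and a pre-specified multiple-imputation sensitivity analysis run only if missingness exceeds the stated threshold.

```python
# Minimal sketch only: trial_data.csv, the column names and the 5% threshold
# are hypothetical, not taken from the presentation.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.imputation import mice

pupils = pd.read_csv("trial_data.csv")        # pupil-level data (hypothetical file)
FORMULA = "posttest ~ treatment + pretest"    # outcome and covariates fixed in the SAP

# Primary analysis: intention to treat, complete cases.
primary = smf.ols(FORMULA, data=pupils).fit()
print(primary.summary())

# Pre-specified sensitivity analysis: multiple imputation by chained equations,
# triggered only if missingness on the outcome exceeds the SAP threshold.
if pupils["posttest"].isna().mean() > 0.05:
    imputed = mice.MICEData(pupils[["posttest", "treatment", "pretest"]])
    pooled = mice.MICE(FORMULA, sm.OLS, imputed).fit(n_burnin=10, n_imputations=20)
    print(pooled.summary())
```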

What happens without a SAP?
Randomised block/multi-site: fixed or random effect for school? School-by-treatment interaction?
Cluster randomised: school means, MLM, GEE or Huber-White standard errors? (two of these options are sketched below)
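For the cluster-randomised case, the sketch below (again Python/statsmodels with hypothetical column names) puts two of the options the slide lists side by side: a multilevel model with a random intercept for school, and a pupil-level OLS model with Huber-White standard errors clustered on school. Without a SAP, the analyst is free to report whichever turns out more favourable.

```python
# Minimal sketch only: the file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

pupils = pd.read_csv("trial_data.csv").dropna(subset=["posttest", "pretest"])

# Option 1: multilevel model (MLM) with a random intercept for school.
mlm = smf.mixedlm("posttest ~ treatment + pretest",
                  data=pupils, groups=pupils["school_id"]).fit()
print(mlm.summary())

# Option 2: pupil-level OLS with Huber-White standard errors clustered on school.
ols_cluster = smf.ols("posttest ~ treatment + pretest", data=pupils).fit(
    cov_type="cluster", cov_kwds={"groups": pupils["school_id"]})
print(ols_cluster.summary())
```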

What happens without a SAP? = possible cherry picking!

Why SAPs
To limit 'cherry picking'
To distinguish between planned and exploratory analyses
To provide the opportunity for developer/external challenge before the analysis is carried out
Analysis represents a small percentage of the overall trial budget yet the decisions a statistician makes are critical
Clinical trials have SAPs

Opposition (concern: riposte)
SAPs kill innovation: SAPs do not preclude exploratory analyses
Unnecessary bureaucratic hurdle: the analysis has to be written up anyway; EEF provides a template
Damaging inflexibility (e.g. what if a less biased/more efficient analysis method is proposed in the interim?): both analyses are performed and it is explained why one method is superior
Green DP (2015) Pre-Analysis Plans: Pros and Cons of Limiting Discretion in the Analysis of RCTs. RCTs in the Social Sciences Conference 2015: keynote address

Suggested headings for EEF SAP template
Introduction
Trial design
Randomisation
Sample size calculations
Extent of follow-up
Primary outcome
Secondary outcomes
Protocol changes
Primary outcome analysis (ITT)
Effect size calculation
Missing data analysis strategy
On-treatment/per-protocol analysis
Secondary outcome analyses
Subgroup analyses
Additional analyses
Interim analyses
Analysis of multiple follow-ups
Sample representation
School/student characteristics
Test reliability
MDES calculation on the basis of parameters seen (effect size and MDES calculations are sketched after this list)
Baseline effect size for analysed groups (re attrition)
Student characteristics of analysed groups (re attrition)
List of tables and figures
Format of tables and figures
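For the 'Effect size calculation' and 'MDES calculation on the basis of parameters seen' headings, the sketch below (Python; the parameter values are illustrative only, not from any EEF trial) shows the usual ingredients: Hedges' g with its small-sample correction, and an approximate MDES for a two-level cluster-randomised design using a multiplier of roughly 2.8 for 80% power at a two-sided 5% significance level.

```python
# Minimal sketch only: all numbers below are illustrative assumptions.
import math

def hedges_g(mean_t, mean_c, sd_pooled, n_t, n_c):
    """Standardised mean difference with the small-sample correction."""
    d = (mean_t - mean_c) / sd_pooled
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)
    return d * correction

def mdes_clustered(n_schools, pupils_per_school, icc,
                   r2_school=0.0, r2_pupil=0.0, p_treat=0.5, multiplier=2.8):
    """Approximate MDES for a two-level cluster-randomised design."""
    denom = p_treat * (1 - p_treat) * n_schools
    between = icc * (1 - r2_school) / denom
    within = (1 - icc) * (1 - r2_pupil) / (denom * pupils_per_school)
    return multiplier * math.sqrt(between + within)

# Illustrative values of the kind seen in education trials.
print(round(hedges_g(mean_t=101.2, mean_c=100.0, sd_pooled=12.0,
                     n_t=1200, n_c=1200), 3))
print(round(mdes_clustered(n_schools=100, pupils_per_school=25,
                           icc=0.15, r2_school=0.5, r2_pupil=0.5), 3))
```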

SAP sign-off
EEF (through peer review)
Developer

Next steps
Write your SAP at least three months before analysis
EEF will publish it as an addendum to the protocol
Volunteer with EEF to review SAPs (small payment; Durham workshop in autumn)
In the future: analysis syntax pre-written

Workshop on analysis

NFER provides evidence for excellence through its independence and insights, the breadth of its work, its connections, and a focus on outcomes.
National Foundation for Educational Research, The Mere, Upton Park, Slough, Berks SL1 2DQ