Adapting Designs
Professor David Torgerson, University of York
Professor Carole Torgerson, Durham University

Trial design
Numerous trial designs are available to answer different questions. Sometimes the same question could be answered using different designs. Trade-off between:
» Statistical efficiency (including contamination);
» Post-randomisation bias;
» Generalisability;
» Cost.

Numerous trial designs
- Individual randomisation;
- Cluster randomisation.

Individual allocation
- “Standard” RCT (Summer schools);
- Waiting list RCT:
  » Within school year waiting list (ECC);
  » Outside school year waiting list;
- Factorial;
- Combined with regression discontinuity (SHINE);
- Incomplete block design.

Cluster randomisation
- School cluster (Calderdale);
- Class cluster (Grammar for writing);
- Year cluster (Third Space);
- Waiting list (Third Space outside school);
- Stepped wedge;
- Partial split plot (Grammar for writing);
- Full split plot.

ECC & Online Maths
In this session we will discuss two RCTs and their designs:
» Every Child Counts (ECC) evaluation;
» Third Space (online maths) evaluation (EEF-funded study).

Independent evaluation of the Every Child Counts intervention ‘Numbers Count’
Effectiveness research question: Is the ECC numeracy intervention ‘Numbers Count’ better at improving mathematics achievement than normal classroom teaching in numeracy?
Participants: Year 2 pupils at risk in numeracy.
Intervention: one-to-one teaching, focused on number, every day for 12 weeks.
Control: usual classroom teaching in number and other mathematical concepts.
Torgerson, C.J., Wiggins, A., Torgerson, D.J., Ainsworth, H., Hewitt, C. Testing policy effectiveness using a randomized controlled trial, designed, conducted and reported to CONSORT standards. Journal of Research in Mathematics Education, March 2013. Funded by the Department for Education (£305K).

Design of experiment
- 12 children in each of 44 schools were selected as eligible for the ‘Numbers Count’ intervention;
- Maths pre-test (Sandwell test) at the beginning of the autumn term (administered by teachers);
- Random allocation of the 12 children to term of delivery: autumn, spring or summer (a ‘waiting list’ design; see the sketch after this slide);
- Intervention group: autumn children. Control group: spring and summer children;
- Maths post-test (Progress in Maths test) after 12 weeks (administered by independent testers);
- Simple analysis: compare the mean maths post-test score of intervention children with that of control children, and conclude whether ‘Numbers Count’ is more effective than usual teaching;
- Rigorous design: excludes some alternative explanations for the results.
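As an illustration of the ‘waiting list’ allocation described above, here is a minimal Python sketch that randomly splits one school's 12 eligible pupils across the three terms of delivery. It assumes equal numbers per term (which the slide does not specify), and the pupil identifiers are hypothetical; the actual trial used an independent randomisation procedure.

```python
# Minimal sketch of the 'waiting list' allocation: within one school,
# 12 eligible pupils are randomly assigned to a term of delivery.
# Autumn pupils form the intervention group at post-test; spring and
# summer pupils act as waiting-list controls. Equal numbers per term
# are ASSUMED; pupil IDs are hypothetical.
import random

def allocate_terms(pupils, seed=None):
    """Randomly assign one school's 12 pupils to terms (4 per term)."""
    terms = ["autumn"] * 4 + ["spring"] * 4 + ["summer"] * 4
    if len(pupils) != len(terms):
        raise ValueError("expected exactly 12 eligible pupils per school")
    rng = random.Random(seed)
    rng.shuffle(terms)
    return dict(zip(pupils, terms))

pupils = [f"pupil_{i:02d}" for i in range(1, 13)]
for pupil, term in allocate_terms(pupils, seed=2013).items():
    print(pupil, term)
```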

Design features that increased internal validity and acceptability
- Randomisation: the intervention and control groups are equivalent at the start, so the design controls for history, maturation, regression to the mean and selection bias;
- Large sample size: excludes chance findings;
- The intervention and control conditions are both numeracy interventions, and both last for 30 minutes per day for 12 weeks: the comparison is a ‘fair’ one;
- Independent ‘blinded’ testing: eliminates the possibility of tester bias;
- ‘Waiting list’ design, so all eligible pupils received the intervention;
- A small number of ‘wild cards’ allowed.

Results (Progress in Maths, PIM 6, scored 0-30)
- Intervention group: mean 15.8 (SD 4.9), N = …
- Control group: mean … (SD 4.5), N = …
- Effect size: … (95% CI 0.12 to 0.53)
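The effect size reported above is a standardised mean difference. For reference, here is how such a figure and its 95% confidence interval follow from group summary statistics. Because several entries in the table are garbled in this transcript, the group sizes and control mean below are hypothetical placeholders, not the trial's actual values.

```python
# Cohen's d (pooled-SD standardised mean difference) with an
# approximate 95% CI, computed from group summaries. The Ns and the
# control mean below are HYPOTHETICAL placeholders, not the trial's.
import math

def cohens_d_ci(m1, sd1, n1, m2, sd2, n2, z=1.96):
    """Return (d, lower, upper) for a two-group comparison."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, d - z * se, d + z * se

# Hypothetical: intervention 15.8 (SD 4.9, n=150) vs control 14.2 (SD 4.5, n=300)
print(cohens_d_ci(15.8, 4.9, 150, 14.2, 4.5, 300))
```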

Design limitations: Generalisability
ECC schools were identified by the policy-makers/funders of the programme as part of an education policy ‘roll out’ in England, i.e., schools in disadvantaged areas.
Ideally, a random sample of all primary schools in England should have been approached and asked to take part.

Design limitations: Intervention
One-to-one teaching, with intervention children withdrawn from the classroom.
Problem of attribution: was the effect due to the NC intervention, or to one-to-one teaching per se?
The design could have included an additional one-to-one arm.

Design limitations: ‘Contamination’/‘spill-over’ effects
Children withdrawn from usual classroom teaching may have benefited the remaining children; teachers using the programme may have applied it to some control children.
Instead of randomising individual children, the design could have randomised by school (cluster randomisation, with the school as the cluster) to avoid these problems.

Design limitations: Long-term effects
The waiting-list design prevented long-term follow-up; effects may have ‘washed out’ soon after the intervention finished.
- Could have used cluster randomisation;
- Could have recruited 3 additional children above the threshold and randomised these to intervention or control for long-term follow-up;
- All of the options above were rejected by the funder.

Conclusions
The design and conduct warranted the conclusion that NC (as delivered) was more effective than usual classroom teaching.
BUT, because of design limitations, the trial couldn't answer some really important questions.
These questions could have been answered if a different experimental design had been used:
- cluster randomisation (randomisation of schools);
- long-term follow-up (a control group that didn't receive the intervention);
- a one-to-one control group (literacy or other numeracy).

Online maths evaluation
- EEF have funded Third Space to deliver one school year of face-to-face online maths tuition, from tutors based in India, to 600 children;
- York Trials Unit, with Durham University, have designed a trial to evaluate this intervention;
- Several design options are possible.

Individual randomisation
- 600 children randomised to tuition and 600 allocated to nothing would give 80% power to show a 0.11 ES difference (pre-post correlation 0.70; see the calculation sketch after this slide);
- Unequal allocation (600 to tuition, 1200 to control) would increase efficiency, allowing a 0.10 difference to be shown;
- Problems:
  » Resentful demoralisation among control children;
  » Difficulty in getting schools to take part.
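These power figures can be reproduced with the standard formula for the minimum detectable effect size, deflating the variance by (1 − r²) for an analysis that adjusts for a pre-test correlated r with the outcome. The Python sketch below is a reconstruction under the stated assumptions (two-sided α = 0.05, 80% power), not the evaluators' own calculation.

```python
# Minimum detectable effect size (MDES) for a two-arm, individually
# randomised trial whose analysis adjusts for a baseline test
# correlated r with the outcome (residual variance scaled by 1 - r^2).
# A reconstruction of the slide's figures, not the trial's own code.
from scipy.stats import norm

def mdes(n1, n2, r=0.70, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)   # ~2.80 for 80% power
    return z * ((1 / n1 + 1 / n2) * (1 - r**2)) ** 0.5

print(round(mdes(600, 600), 3))    # ~0.116: the slide's 0.11
print(round(mdes(600, 1200), 3))   # ~0.100: unequal allocation
print(round(mdes(300, 300), 3))    # ~0.163: the waiting-list design's 0.16
```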

Waiting list
- We could instead randomise 600 children such that all could receive the intervention: 300 in term one and 300 in term two (similar to the ECC evaluation);
- Power: 80% to show a 0.16 ES;
- Problem:
  » Lack of long-term follow-up: we don't know if the intervention's effects will be sustained.

Cluster trial
- We could randomise schools, which would avoid resentful demoralisation at the child level;
- 600 children (assuming 10 per school; ICC 0.19; pre/post correlation 0.70) would give us 80% power to show a 0.19 ES difference;
- Problem:
  » Schools in the control group may be more likely to drop out, introducing attrition bias.

Cluster/wait-list design
- We could randomise schools to offer the intervention to this year's Year 6 children, with the wait-list schools receiving the intervention for the following year's Year 6 pupils;
- Prevents school-level drop-out;
- Allows long-term follow-up;
- Problem:
  » Lower efficiency than the previous design (0.26 ES detectable), but lower risk of bias.

What has actually happened?
- We aimed to recruit 60 schools with an average of 10 pupils per school;
- However, we over-recruited to 72 schools, so we are recruiting 8 pupils per school;
- This improves our efficiency, so that we can now detect an effect size of 0.25 rather than 0.26 (see the sketch after this slide).
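The cluster-based figures follow from the same calculation inflated by the design effect 1 + (m − 1) × ICC, where m is the number of pupils per school. The sketch below is again a reconstruction: reading the cluster-trial slide's "600 children" as 600 per arm reproduces its 0.19, and the as-recruited numbers give 0.25; our 300-per-arm figure for the cluster/wait-list design comes out at about 0.27 against the slide's 0.26, so the exact assumptions there may differ slightly.

```python
# MDES for a cluster-randomised comparison: the individually
# randomised variance is inflated by the design effect
# DE = 1 + (m - 1) * ICC, where m is the number of pupils per school.
# A reconstruction under stated assumptions, not the trial's own code.
from scipy.stats import norm

def mdes_cluster(n_per_arm, m, icc=0.19, r=0.70, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    de = 1 + (m - 1) * icc                        # design effect
    return z * ((2 / n_per_arm) * (1 - r**2) * de) ** 0.5

print(round(mdes_cluster(600, 10), 2))  # 0.19: cluster trial (600 per arm assumed)
print(round(mdes_cluster(300, 10), 2))  # ~0.27: cluster/wait-list (slide: 0.26)
print(round(mdes_cluster(288, 8), 2))   # 0.25: as recruited, 72 schools x 8 pupils
```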

Activity
In small groups, discuss your EEF trials where the trial design has been adapted to increase:
- acceptability or implementation of the intervention;
- internal validity; or
- external validity.
Select the most interesting/significant example for feedback to the whole group.