

Randomised controlled trials Peter John

Causation in policy evaluation
[Diagram: the outcome is shaped by the intervention, other agency actions, and the external environment]

Do not do this: the before-and-after study (with no control group, change cannot be attributed to the intervention)

A Randomised Controlled Trial
An RCT is an experiment in which one or more treated groups of subjects are compared with a control group that does not get the treatment
Randomisation between treated and control subjects ensures that there are no outcome differences between the groups other than those caused by the intervention
Focus on comparing average outcomes in each group
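As a toy illustration of the comparison an RCT makes – the difference in average outcomes between the groups – here is a minimal sketch in Python. The outcome data below are invented for illustration only:

```python
# Difference in group means: the simplest RCT effect estimate.
# All outcome values here are made up for illustration.
from statistics import mean

def average_treatment_effect(treated_outcomes, control_outcomes):
    """Return the difference in average outcomes, treatment minus control."""
    return mean(treated_outcomes) - mean(control_outcomes)

# e.g. online renewal coded as 1, postal renewal as 0 (invented data)
treated = [1, 1, 0, 1, 1, 0, 1, 1]
control = [0, 1, 0, 0, 1, 0, 1, 0]
print(average_treatment_effect(treated, control))  # 0.75 - 0.375 = 0.375
```

Because allocation is random, this simple difference in means is an unbiased estimate of the intervention's effect.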

Figure 7.1: Simple Random Allocation Design

Policy experiments
A distinct form of RCT in which the policy-maker delivers the intervention
Goes back to the 1920s – e.g. the school milk experiment (sabotaged by teachers)
Policy-maker randomises, researcher evaluates
Different from participation experiments, such as GOTV, and the various forms of experiment in NNTT
Can be part of an official evaluation, but there is a new model whereby it is done informally

The new nirvana?
Promoted by the UK Behavioural Insights Team – it had little money, but look at what it did with it: RCTs on tax letters, texting for court fines, charitable giving, DVLA reminders, organ donation with the DVLA, job centres, work with firms
Massive expansion in the development field
US interest – federal education funding, social policy, welfare to work
Use of experiments by economists (e.g. Duflo)
Now experiments in local government

Example: Essex County Council Blue Badge Renewals: Channel Shift Peter John and Toby Blume

Essex County Council

Research design
We set out to test whether local authorities can ‘nudge’ residents to renew their Blue Badge permits online rather than by post – channel shift
Three types of message were tested on Blue Badge permit holders over a period of two months (December 2014 – January 2015)
Two months to go

Treatments and sample
2798 renewal letters were sent out: 699 allocated to control, 698 to the simplified condition, 698 to the incentive group and 699 to the messenger group
Treatments:
– Simplification
– Messenger – the messenger treatment included a picture of, and a testimonial from, another Blue Badge holder describing how they renewed online and encouraging others to do the same
– Incentive – appealing to the common good

Incentive letter

Results
Simplification significantly increased online renewal rates, by 7.3 percentage points compared with the control group
Offering an intrinsic incentive – one that offers no personal benefit but rather a collective one, so the individual is prompted to act for the benefit of others – also significantly increased renewal rates, by 7.3 percentage points compared with the control group
The use of a peer messenger – another Blue Badge holder – to encourage online renewal did not appear to have an effect

Results

The seven things you need to plan to do an RCT
1. Establish a question that can be answered by a trial
2. Work out the units
3. Determine the sample size
4. Specify the treatment arm or arms
5. Set out the randomisation procedure
6. Specify outcomes and their measurement
7. Address ethical issues and data protection

Step 1: Establish a question that can be answered by a trial
Not all questions can be
It needs to be a good question – one to which you do not know the answer
Is the theory plausible (is there a mechanism you expect)? Theory of change (ToC)
Can you intervene in a way that changes behaviour or outcomes (or attitudes) in a set time period?

Step 2: Work out the units
Can be individuals, households, streets, communities, larger areas, firms, other organisations, classes
Can be nested units, such as individuals in streets, or students in classes

More on units
Governed by a set of practical questions about availability
Data records will drive this
Costs of measurement (a segue to sample size)
Recruitment – there will be drop-out, which risks selection bias

Step 3: Work out the sample size
A topic of its own, involving the use of statistics
The basic message is that RCTs need large groups
Rough minimum of 400 per group
Avoid trials that are too small
Sample size calculators can help (be careful with the online ones)

More on size
To calculate the right sample size, you need an idea of the expected effect, usually expressed as the difference between the two groups (e.g. 0.5 and 0.6 for a ten percentage point difference)
You need to specify in advance what probability level you are prepared to accept – conventionally a two-sided test at 0.05
You need to set the statistical power that is desirable, usually 80 per cent
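The calculation above can be sketched in Python using only the standard library. This is a minimal sketch: the slides do not name a method, so the arcsine (Cohen's h) approximation for comparing two proportions is an assumption on my part:

```python
# Sample size per arm for comparing two proportions, using the arcsine
# (Cohen's h) approximation with a two-sided z-test. Assumed defaults
# match the slides: alpha = 0.05, power = 80 per cent.
import math
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate n per group to detect a p1 vs p2 difference."""
    # Cohen's h: standardised difference between two proportions
    h = abs(2 * math.asin(math.sqrt(p2)) - 2 * math.asin(math.sqrt(p1)))
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # power requirement
    return math.ceil(2 * ((z_alpha + z_beta) / h) ** 2)

print(sample_size_per_group(0.5, 0.6))  # roughly 390 per group
```

For the slide's example (0.5 vs 0.6, i.e. a ten percentage point difference) this gives just under 400 subjects per group, consistent with the "rough minimum of 400" above.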

Step 4: Specify the treatment arm or arms
Need to ensure the treatment reflects what you want to measure
Need for precision in delivery
Ensure the treatment is not doing too many things at once, or else break up the different elements into separate treatment arms

Step 5: Set out the randomisation procedure
Allocate subjects using random numbers, or create a random number seed, e.g. in Excel
It is a good idea to ask an outside person to randomise, to ensure it is implemented effectively
You can test for balance across treatment (T) and control (C)
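The allocation step can be sketched as follows; a minimal illustration in Python, where the subject IDs, group labels and seed are all invented for the example:

```python
# Seeded random allocation of subjects to trial arms: shuffle once with a
# fixed seed (so the allocation is reproducible), then deal the subjects
# round-robin into the arms so group sizes differ by at most one.
import random

def randomise(subjects, arms=("control", "treatment"), seed=42):
    """Return a dict mapping each arm to its randomly allocated subjects."""
    rng = random.Random(seed)   # fixed seed -> the allocation can be audited
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    allocation = {arm: [] for arm in arms}
    for i, subject in enumerate(shuffled):
        allocation[arms[i % len(arms)]].append(subject)
    return allocation

groups = randomise([f"id{n:03d}" for n in range(10)])
print({arm: len(members) for arm, members in groups.items()})
```

Recording the seed (or having an outside person run the allocation, as the slide suggests) means the randomisation can be reproduced and checked for balance afterwards.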

Step 6: Specify outcomes and their measurement
Need something measurable that is standard across the units and across T and C, e.g. achievement, volunteering
Outcomes can be interval (in equal units, such as money donated), categorical (e.g. volunteer or not volunteer), or even ordered (e.g. who arrives first); they can be measured over time, or before and after (desirable but not essential)

Step 7: Address ethical and data protection issues
All research needs to be guided by ethical principles – consent, avoidance of harm, respect for participants (e.g. right to withdraw, privacy), maintaining safety
Trials are often thought to be vulnerable because randomisation may ‘deny’ benefits or impose costs (in practice these happen anyway)
They may also involve deception, which needs to be justified