Criteria for Assessing The Feasibility of RCTs

RCTs in Social Science: York, September 2006

Today’s Headlines: “Drugs education is not working”
“Having reviewed research from across the world, a committee of doctors and scientists on the ACMD* concluded that the success of school-based schemes was ‘slight or non-existent’ and could even be ‘counter-productive’.”
* ACMD: Advisory Council on the Misuse of Drugs

Why are we here?
We share the goal of improving well-being for society and individuals
Interventions which we support as a society should benefit individuals and society, and should not cause harm
We value evidence

Who am I?
Economist
Managing Director of Matrix Research & Consultancy Ltd
Member of the Campbell Crime and Justice Group
Member of the Cochrane Campbell Economics Methods Group
Advocate of improving the use of evidence to inform decisions

Who are you?
Politicians? Policy advisers? Civil servants? Practitioners? Researchers? Economists…?

Important evaluation questions (adapted: Bain 1999)
Should it work? → Theory
Can it work? → Implementation
Does it work? → Impact
Is it worth it? → Value

Sources of interventions
Historical practice (“we have always done it this way”)
Taught practice (“we teach people to do it this way”)
Innovative practice (“I try new ideas”)
Research (“I examine the theories”)
→ Which wins the popular vote?

Role of trials for new medical interventions
Idea
Basic science
Laboratory trials
Clinical trials
Licence
Approval (e.g. NICE)
→ Average number of years?

Particular challenges for trials in social science (adapted: Farrington 1983)
Pace of idea to practice
Theory base of treatment
Definition and scope of treatment
Context complexity
Community vs individual interventions
Hawthorne effects
Contamination
Outcome measures & measurement
Duration and decay

Arguments for trials
Interventions can do harm
If undertaken well → trials minimize the risk of bias
Basic hypothesis: “can we identify an effect with confidence?”
If results are insignificant then either:
– there isn’t an effect; or
– there is an effect but we haven’t detected it with confidence.
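The second reading of an insignificant result only matters if the trial had little power to begin with. As an illustration (not part of the original slides), here is a minimal sketch in Python, assuming a two-arm trial with equal allocation, a standardised effect size, and a two-sided z-test approximation; the numbers are hypothetical and chosen only to show how easily a real but modest effect can go undetected in a small trial.

```python
from statistics import NormalDist

def approx_power(n_per_arm: int, effect_size: float, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided z-test comparing two equal-sized arms.

    effect_size is the standardised difference between arms (Cohen's d).
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)            # critical value for a two-sided test
    ncp = effect_size * (n_per_arm / 2) ** 0.5   # expected size of the test statistic
    return 1 - z.cdf(z_crit - ncp)               # P(reject H0 | the effect is real)

# With 50 participants per arm and a modest effect (d = 0.3),
# power is only about 0.32 - a "no significant effect" finding tells us very little.
print(round(approx_power(50, 0.3), 2))
```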

Arguments against trials
Epistemological: “the world doesn’t work like this”
Analytical: “there are too many analytical constraints”
Ethical: “you can’t deny treatment”
Legal: “you might be challenged if you deny treatments”
Logistical: “there are too many practical constraints”

Research on Feasibility
Research partners:
– Matrix Research & Consultancy Ltd
– The Jerry Lee Centre, University of Pennsylvania
Research base:
– Collective research experience
– Home Office funded study for OBPs

Analytical challenges: a hierarchy
1. Internal validity: can we attribute the effect to the intervention?
2. Statistical power: can we measure the effect with confidence?
3. External validity: can we generalise the results?

Hurdles to ensure Internal Validity
Participant selection and targeting
Completion and attrition rates
Inconsistent treatments and treatment measurement
Multiple outcomes and inconsistent outcome measurement
Independent, contamination-free alternative treatments or no treatment

Hurdles for Statistical Power
Understanding the expected/required effect size
→ And, given this…
How do you maximize statistical power, given:
– the heterogeneity of the sample?
– completion rates?
– attrition rates?
→ And, given this…
How many participants do you need?
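To make the sequence on this slide concrete, here is a minimal sketch (my illustration, not from the original deck), assuming a two-arm trial with equal allocation, a standardised effect size, and simple inflation for non-completion and loss to follow-up; the completion and retention figures are hypothetical. It uses the standard normal-approximation formula for the number to be analysed per arm and then grosses up to the number that would need to be recruited.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Participants to *analyse* per arm for a two-sided comparison of two means,
    via the normal approximation: n = 2 * ((z_{1-a/2} + z_power) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

def n_to_recruit(n_analysed: int, completion_rate: float, retention_rate: float) -> int:
    """Gross up for participants who never complete treatment or are lost to follow-up."""
    return ceil(n_analysed / (completion_rate * retention_rate))

analysed = n_per_arm(effect_size=0.3)                      # about 175 per arm for d = 0.3
recruited = n_to_recruit(analysed, completion_rate=0.7,    # 30% never complete treatment
                         retention_rate=0.8)               # 20% lost before outcome measurement
print(analysed, recruited)                                 # 175 313 (per arm)
```

The point of the sketch is the direction of the arithmetic: smaller expected effects and leakier programmes push the required recruitment up sharply, which is exactly the hurdle the slide describes.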

Hurdles for External Validity
Context / characteristics of:
– the local service provider
– participants
– the community
Tension between correcting for these and the increased challenge of achieving internal validity and statistical power.

Ethical Hurdles
“Denial of treatment is unethical”
“Validity of informed consent”

Legal Hurdles
E.g. for offenders → tension between random assignment and sentencing requirements
Scope of legal challenge:
– from those denied the intervention
– where programmes exist and are perceived to reduce risk to the public/stakeholders

Logistical Hurdles
Quality of trial staff
Cost of a trial relative to the cost of the intervention and the value of the expected effect
Sample size required when completion rates are low and post-completion attrition is high
Avoidance of the Hawthorne effect

Are there solutions?
Focus on particular, well-defined interventions
Appoint independent, trained research teams
Randomise early
Separate control groups
Pick control treatments which enable effects to be measured
Manage and monitor implementation
Ensure programme stability
Determine sample size from the desired effect size – look at the theory and related research
Measure outcomes consistently
Identify a cost-effective study design
Educate and inform participants and stakeholders
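Two of these points – “randomise early” and “separate control groups” – come down to generating the allocation independently of the people delivering the programme. As an illustration only (not part of the original deck), here is a minimal sketch of permuted-block randomisation, assuming participant IDs are known at referral and a block size of 4; the IDs and seed are hypothetical. Blocking keeps the two arms close in size even if recruitment stops early.

```python
import random

def block_randomise(participant_ids, block_size=4, seed=2006):
    """Assign participants to 'treatment' or 'control' in permuted blocks,
    keeping group sizes roughly balanced throughout recruitment."""
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)          # fixed seed so the allocation list is reproducible
    allocation = {}
    block = []
    for pid in participant_ids:
        if not block:                  # start a new block: half treatment, half control, shuffled
            block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
            rng.shuffle(block)
        allocation[pid] = block.pop()
    return allocation

# Example: allocate ten referrals (IDs are purely illustrative)
print(block_randomise([f"P{i:03d}" for i in range(1, 11)]))
```

In practice the allocation list would be generated and held by an independent research team rather than by programme staff, which is the separation the slide is arguing for.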

If these aren’t available?
Only then ask yourself: “What is the next best alternative… and how can I minimize the risk of getting it wrong?”

Final reflections
Researchers should recognise:
– decisions need to be made in real time
– all information can provide evidence
Users should recognise:
– the risk of making the wrong decision increases as the quality of the evidence base declines
In social science we are addressing the needs of people and communities who are vulnerable.