Randomized Control Trials

Randomized Control Trials: Difficulties to Consider

Costs
- The cost of the intervention itself is often not difficult to justify: it provides goods/services, and only promising possibilities are included.
- New data, however, are expensive, and the quality of the evaluation depends on the quality of the data.
- More money spent on data means less money available to extend the intervention to more people.

Creating the Control Group
- Is it politically feasible to deny treatment to some people?
- How important is it to measure how well the intervention works? This is a question of trade-offs.
- The ethics are less contested if:
  - budget constraints would have prevented everyone from receiving the intervention anyway, or
  - everyone eventually receives the intervention and the control group is only denied it initially (a phased-in rollout; see the sketch below).
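A phased-in rollout can itself be randomized by assigning units to waves at random. A minimal sketch in Python, assuming village-level rollout; the function name, village labels, and number of waves are illustrative, not from the slides:

```python
import random

def assign_rollout_waves(unit_ids, n_waves, seed=0):
    """Randomly assign units to rollout waves.

    Shuffles units with a fixed seed for reproducibility, then deals them
    into waves round-robin so wave sizes stay balanced.
    """
    rng = random.Random(seed)
    ids = list(unit_ids)
    rng.shuffle(ids)
    return {uid: i % n_waves for i, uid in enumerate(ids)}

villages = [f"village_{i}" for i in range(12)]  # hypothetical units
waves = assign_rollout_waves(villages, n_waves=3)
# Wave 0 is treated first; waves 1 and 2 serve as the control group
# until their turn, so everyone eventually receives the intervention.
print(waves)
```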

Does Everyone Benefit?
- It is necessary to deny the control group the intervention, but we don't want to actively hurt them:
  - we can't deceive them, and
  - we can't make them worse off than they'd otherwise be.
- Some sort of small gift or compensation is typical; be careful not to turn this into a second treatment.
- Promises must be honored (e.g., the phased-in rollout must actually happen).

Internal Validity
- Was the control group valid?
  - Did the randomization work, i.e., are the intended treatment and control groups balanced on baseline characteristics (see the balance-check sketch below)?
  - Are the actual treatment and control groups the same as intended?
  - Was there contamination from spillovers?
- Was the intervention consistent in all treatment areas? This is easier to guarantee in some cases than in others.
- Do data exist to rule out alternative hypotheses?
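A standard first diagnostic is to compare baseline covariate means between the treatment and control groups. A minimal sketch, assuming baseline survey data are available as NumPy arrays; the covariate names and simulated values are illustrative:

```python
import numpy as np
from scipy import stats

def balance_table(covariates_treat, covariates_control, names):
    """Print mean comparisons for baseline covariates.

    Under successful randomization, differences should be small and
    p-values should not be systematically tiny.
    """
    for name, xt, xc in zip(names, covariates_treat, covariates_control):
        _, p_val = stats.ttest_ind(xt, xc, equal_var=False)  # Welch's t-test
        print(f"{name}: treat={np.mean(xt):.2f}, control={np.mean(xc):.2f}, p={p_val:.3f}")

# Simulated baseline data for two covariates (illustrative only)
rng = np.random.default_rng(0)
age_t, age_c = rng.normal(35, 10, 500), rng.normal(35, 10, 500)
inc_t, inc_c = rng.normal(200, 50, 500), rng.normal(200, 50, 500)
balance_table([age_t, inc_t], [age_c, inc_c], ["age", "weekly income"])
```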

External Validity
- Will the subjects in the experiment be representative of the entire population that will eventually receive the intervention?
- Logistically, it is much easier to collect data in a restricted area, but the results are then less likely to generalize to the entire country.

Data Quality
- Sensitive and subjective questions: how can we encourage subjects to give honest and complete answers?
- Self-reported vs. quantitative measures: self-reports suffer from "recall error" (see the sketch below).
- Hawthorne effect: people behave differently when they know they're being watched. It might be desirable to follow subjects closely to collect more data, but that can make these biases worse.
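To see why self-reports and quantitative measures can diverge, consider a toy simulation in which respondents systematically under-report and add recall noise. All numbers here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
true_spend = rng.lognormal(mean=4.0, sigma=0.5, size=1000)  # "true" weekly spending
# Hypothetical recall error: systematic under-reporting plus random noise
reported = 0.85 * true_spend + rng.normal(0.0, 5.0, size=1000)
print(f"true mean: {true_spend.mean():.1f}, reported mean: {reported.mean():.1f}")
```

The systematic component biases measured levels, not just their precision, which is one reason quantitative measures are often preferred when they are affordable.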

Cost-Effectiveness Comparisons
- Resources are scarce, so we need to pick the most effective programs.
- That requires converting impacts from various projects into one set of units: how do we compare an improvement in nutrition to a reduction in malaria? (See the sketch below.)
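One common approach is to express each program's impact in a shared unit, such as disability-adjusted life years (DALYs) averted, and then rank programs by cost per unit. A minimal sketch; the program names and figures are invented for illustration:

```python
def cost_per_daly(total_cost, dalys_averted):
    """Cost-effectiveness ratio: dollars spent per DALY averted."""
    return total_cost / dalys_averted

# Hypothetical programs with hypothetical costs and impacts
programs = {
    "nutrition supplement": cost_per_daly(total_cost=120_000, dalys_averted=800),
    "bed-net distribution": cost_per_daly(total_cost=90_000, dalys_averted=1_200),
}
for name, ratio in sorted(programs.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${ratio:,.0f} per DALY averted")
```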

Scaling Up
- Can the intervention be implemented identically at scale? If not, is the RCT still informative?
- Will the economy at large respond to the intervention at scale ("general equilibrium effects")?
  - Prices might go down (economies of scale).
  - Prices might go up (insufficient supply).
  - Spillover effects could set in.

Final Thoughts
- The RCT is the gold standard for identifying causality, but many complications arise during implementation.
- Weigh the theoretical advantages against the practicalities: is it really the best method for the question at hand?