Impact Evaluation Designs for Male Circumcision
Sandi McCoy, University of California, Berkeley
Male Circumcision Evaluation Workshop and Operations Meeting

Our Objective: Estimate the CAUSAL effect (impact) of intervention P (male circumcision) on outcome Y (HIV incidence). Since we can never actually know what would have happened, comparison groups allow us to estimate the counterfactual
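Written in potential-outcomes notation (a standard formalization added here for clarity, not part of the original slides):

\[ \text{Impact} = E[Y_i(1)] - E[Y_i(0)] \]

where \(Y_i(1)\) is individual \(i\)'s HIV outcome with the intervention \(P\) and \(Y_i(0)\) the outcome without it. Only one of the two is ever observed for a given individual, which is why a comparison group is needed to stand in for the unobserved term.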

Evaluation Designs for MC IE Study designs: Cluster, Stepped wedge, Selective promotion, Dose-Response –Cluster and stepped wedge designs exploit supply variation: not everyone has access to the intervention at the same time –Selective promotion and dose-response designs apply when the program is available to everyone (universal access or already rolled out)

Cluster Evaluation Designs Unit of analysis is a group (e.g., communities, districts) Usually prospective [Schematic: clusters allocated to Intervention vs. Comparison]

Cluster Evaluation Designs Case Study: Progresa/Oportunidades Program National anti-poverty program in Mexico –Eligibility based on poverty index Cash transfers –conditional on school and health care attendance 506 communities –320 randomly allocated to receive the program –186 randomly allocated to serve as controls Program evaluated for effects on health and welfare
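As a concrete illustration of cluster allocation, a minimal sketch (hypothetical code, not the Progresa evaluation's actual procedure; community labels and function names are illustrative):

```python
import random

def assign_clusters(cluster_ids, n_treatment, seed=2011):
    """Randomly allocate whole clusters (e.g., communities) to the intervention.

    Every cluster has the same probability of selection, and all individuals
    in a cluster share its assignment.
    """
    rng = random.Random(seed)
    treated = set(rng.sample(cluster_ids, n_treatment))
    return {c: ("intervention" if c in treated else "comparison") for c in cluster_ids}

# Progresa-style illustration: 506 communities, 320 allocated to the program
communities = ["community_%d" % i for i in range(1, 507)]
assignment = assign_clusters(communities, n_treatment=320)
```

Because assignment happens at the community level, outcomes should also be analyzed with the community as the unit (or with standard errors clustered at that level).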

Stepped Wedge or Phased-In Clusters [Schematic: rows are clusters, columns are time periods; in each successive time period one more cluster crosses over from comparison to the Program, until all clusters are receiving it] Brown CA, Lilford RJ. BMC Medical Research Methodology, 2006.

Stepped Wedge or Phased-In Case Study: Rwanda Pay-for-Performance Performance-based health care financing –Increase quantity & quality of health services provided –Increase health worker motivation Financial incentives to providers to see more patients and provide higher quality of care Phased rollout at the district level –8 randomly allocated to receive the program immediately –8 randomly allocated to receive the program later
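A minimal sketch of generating a phased-in schedule (hypothetical code, not the Rwanda evaluation's actual procedure): the order in which clusters cross over is randomized, and by the final period every cluster is receiving the program.

```python
import random

def stepped_wedge_schedule(clusters, n_periods, seed=1):
    """Randomize crossover order; return {cluster: period in which it starts the program}.

    Period 1 is an untreated baseline; start times are spread evenly over
    periods 2..n_periods, so every cluster is treated by the final period.
    """
    rng = random.Random(seed)
    order = list(clusters)
    rng.shuffle(order)
    starts = [2 + (i * (n_periods - 1)) // len(order) for i in range(len(order))]
    return dict(zip(order, starts))

# Rwanda-style illustration: 16 districts, half crossing over in period 2, half in period 3
schedule = stepped_wedge_schedule(["district_%d" % i for i in range(1, 17)], n_periods=3)
```

The comparison at any point in time is between clusters that have already crossed over and those still waiting, so the design only identifies impact while some clusters remain untreated.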

Selective Promotion Common scenarios: –National program with universal eligibility –Voluntary inscription in program Comparing enrolled to not enrolled introduces selection bias One solution: provide additional promotion, encouragement or incentives to a sub-sample: –Information –Encouragement (small gift or prize) –Transport

Selective Promotion [Schematic: with universal eligibility, a randomly selected sub-sample receives Promotion and the rest receives No Promotion; Enrollment is then higher in the promoted group]

Selective Promotion [Schematic: the population divides into Never Enroll, Enroll if Encouraged, and Always Enroll groups] Not Encouraged: 4% incidence Encouraged: 3.5% incidence Δ Effect: 0.5% POPULATION IMPACT: 2% incidence reduction
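To make the arithmetic on this slide explicit, a minimal sketch of the encouragement-design estimator (only the 4%, 3.5%, 0.5% and 2% figures come from the slide; the enrollment rates below are assumed for illustration):

```python
def encouragement_effect(incidence_promoted, incidence_not_promoted,
                         enroll_rate_promoted, enroll_rate_not_promoted):
    """Scale the intention-to-treat difference by the enrollment that promotion induced.

    The result is the effect of the program on those who enroll only if
    encouraged (the "Enroll if Encouraged" group on the slide).
    """
    itt = incidence_not_promoted - incidence_promoted              # 4.0 - 3.5 = 0.5 points
    induced_enrollment = enroll_rate_promoted - enroll_rate_not_promoted
    return itt / induced_enrollment

# A 2-point effect is consistent with promotion raising enrollment by 25 percentage points
effect = encouragement_effect(3.5, 4.0,
                              enroll_rate_promoted=0.55,
                              enroll_rate_not_promoted=0.30)       # -> 2.0
```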

Selective Promotion Necessary conditions: Promoted and non-promoted groups are comparable –Promotion not correlated with population characteristics –Guaranteed by randomization Promoted group has higher enrollment in the program Promotion does not affect outcomes directly

Selective Promotion Case Study: Malawi VCT Respondents in rural Malawi were offered a free door-to-door HIV test Some were given randomly assigned vouchers between zero and three dollars, redeemable upon obtaining their results at a nearby VCT center

Dose-Response Evaluations Suitable when a program is already in place everywhere Examine differences in exposures (doses) or intensity across program areas Compare the impact of the program across varying levels of program intensity [Hypothetical map of program implementation levels]

Dose-Response Evaluations Example for MC: All clinics in a region offer MC, but their capacity is limited and there are queues Some towns are visited by mobile clinics that help the fixed clinic rapidly increase MC coverage
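A minimal sketch of the comparison this design implies (area coverage and incidence figures below are hypothetical): relate the outcome to the program "dose" each area received.

```python
import numpy as np

# Hypothetical MC coverage ("dose") and HIV incidence (per 100 person-years) by area
coverage = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
incidence = np.array([2.1, 1.9, 1.6, 1.4, 1.1])

# Slope of the fitted line: change in incidence associated with a unit change in coverage
slope, intercept = np.polyfit(coverage, incidence, deg=1)
```

The association is causal only if areas with different intensities are otherwise comparable, which is the key assumption (and weakness) of dose-response designs.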

Design Variations for MC IE Each study design (Cluster, Stepped wedge, Selective promotion, Dose-Response) can be combined with an allocation method: Randomization, Matching, or Enrolled vs. not Enrolled

Random Allocation Each unit has the same probability of selection –for who receives the benefit, or –who receives the benefit first Helps obtain comparability between those who did and did not receive the intervention –On observed and unobserved factors Ensures transparency and fairness

Unit of Randomization Individuals, groups, communities, districts, etc.

Matching Pick a comparison group that “matches” the treatment group based on similarities in observed characteristics [Schematic: units in Region A (Treatment) paired with similar units in Region B (Comparison)]

Matching Matching helps control for observable heterogeneity Cannot control for factors that are unobserved Matching can be done at baseline (more efficient) OR in the analysis
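A minimal sketch of nearest-neighbor matching on an observed characteristic (the matching variable and values are hypothetical; real evaluations typically match on several covariates or on a propensity score):

```python
import numpy as np

def nearest_neighbor_match(treated_x, comparison_x):
    """For each treated unit, return the index of the closest comparison unit."""
    treated_x = np.asarray(treated_x, dtype=float)
    comparison_x = np.asarray(comparison_x, dtype=float)
    # Absolute distance on a single matching variable (e.g., a poverty index)
    distances = np.abs(treated_x[:, None] - comparison_x[None, :])
    return distances.argmin(axis=1)

matches = nearest_neighbor_match([0.42, 0.55, 0.61], [0.40, 0.50, 0.58, 0.70])
# -> array([0, 2, 2]): matching with replacement; any unobserved differences remain
```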

Enrolled versus Not Enrolled Consider a school-based pregnancy prevention program 10 schools in the district are asked if they would like to participate

Enrolled versus Not Enrolled 5 schools decline participation 5 schools elect to participate in the program Pregnancy Prevention Program No intervention

Enrolled versus Not Enrolled Schools enrolled in the Pregnancy Prevention Program: pregnancy rate = 2 per 100 student-years Schools with no intervention: pregnancy rate = 3 per 100 student-years Schools in the program had fewer adolescent pregnancies… Can we attribute this difference to the program? The observed effect might be due to differences in unobservable factors which led to differential selection into the program (“selection bias”)

This selection method compares “apples to oranges” The reason for not enrolling might be correlated with the outcome –You can statistically “control” for observed factors –But you cannot control for factors that are “unobserved” Estimated impact erroneously mixes the effect of different factors
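A small simulation (hypothetical numbers) of why this comparison misleads: if schools with more motivated staff are both more likely to enroll and more likely to have low pregnancy rates anyway, the naive enrolled-vs-not-enrolled gap mixes that pre-existing difference into the estimate.

```python
import random

random.seed(0)
true_program_effect = -0.5      # true change in pregnancies per 100 student-years

gaps = []
for _ in range(1000):
    schools = []
    for _ in range(10):
        motivation = random.random()            # unobserved factor
        enrolls = motivation > 0.5              # motivated schools opt in
        baseline = 3.0 - 1.0 * motivation       # motivated schools already have lower rates
        rate = baseline + (true_program_effect if enrolls else 0.0)
        schools.append((enrolls, rate))
    enrolled = [r for e, r in schools if e]
    not_enrolled = [r for e, r in schools if not e]
    if enrolled and not_enrolled:
        gaps.append(sum(enrolled) / len(enrolled) - sum(not_enrolled) / len(not_enrolled))

naive_estimate = sum(gaps) / len(gaps)
# naive_estimate is roughly -1.0, double the true effect of -0.5, because the
# unobserved motivation difference is attributed to the program
```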

Choosing Your Methods Two decisions determine the design: the study design (Cluster, Stepped wedge, Selective promotion, Dose-Response) and the allocation method (Randomization, Matching, or Enrolled vs. not Enrolled)

Choosing Your Methods Identify the “best” possible design given the context Best design = fewest risks for error Have we controlled for “everything”? –Internal validity Is the result valid for “everyone”? –External validity –Local versus global treatment effect

Consider Randomization First Minimizes selection bias Balances known and unknown confounders Most efficient (smaller Ns) Simpler analyses Transparency Decision makers understand (and believe) the results
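The "smaller Ns" point can be made concrete with the standard two-proportion sample-size calculation; a minimal sketch for an individually randomized comparison (the incidence figures are illustrative only):

```python
from scipy.stats import norm

def n_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate sample size per arm for detecting a difference between two proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return (z_alpha + z_beta) ** 2 * variance / (p_control - p_treatment) ** 2

# e.g., detecting a drop in annual HIV incidence from 4% to 3.5%
n = n_per_arm(0.04, 0.035)      # about 22,700 participants per arm
```

Non-experimental designs need additional observations (and assumptions) to adjust away confounding, which is one reason randomized designs can reach the same precision with smaller samples.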

Choosing Your Methods To identify an IE design for your program, consider: –Prospective/retrospective –Eligibility rules –Roll-out plan (pipeline): is the universe of eligibles larger than available resources at a given point in time? –Who controls implementation? –Budget and capacity constraints? –Excess demand for program? –Eligibility criteria? –Geographic targeting?

Thank you

Dose-Response Evaluations Case Study: Global Fund Evaluation 18 countries categorized on magnitude of Global Fund disbursements and duration of programming [Table: Country; Global Fund HIV Grants Funds (US$ M); Time elapsed (yrs), with rows for Benin, Cambodia, Ethiopia, Malawi, Mozambique, Rwanda] Source: The Global Fund 5 Year Evaluation