1
Notes on program evaluation and scope for cooperation
Oriana Bandiera, British Academy Workshop with GSR, July 2017
2
Cooperation
- Academic researchers need data to test theories and, broadly, to understand the cause of things; they have a comparative advantage in methods
- Policy makers need to know whether policies “work”, i.e. whether they achieve their aim; they have access to data and institutional knowledge
- Unexploited gains from trade
3
Yet collaborations are the exception rather than the rule
Need to understand why the “market” for collaborations fails in order to fix it
- Prices cannot be used to clear the market: academics cannot get paid (more later)
- The “good” is not homogeneous: matching is required, and systems to match interests are scarce
4
Understanding incentives: Academics
“Data! data! data!” he cried impatiently. “I can’t make bricks without clay.” (A. C. Doyle, The Adventure of the Copper Beeches)
High-quality research is made of good questions and good data
Academics maximise the quality of research; this confers both financial and non-financial benefits
5
Understanding incentives: Academics
The collaboration is a mutually advantageous knowledge exchange, not a consultancy contract
- Academics are not paid for the evaluation
- It is not free consultancy either
6
Understanding incentives: Policy Makers
PMs want to know if a policy works, but are they willing to publicise when it does not? Or when it has backfired in unpredicted ways?
Commitment to publish results, however they come out, is key for a successful collaboration
7
Examples
- Tax collectors’ incentives that increase bribes
- Teachers’ incentives that increase kids’ consumption of sugary drinks
- A&E incentives that result in ambulance scarcity
- Anti-corruption rules that create monopolies
8
Ingredients for a successful collaboration
- Find a common interest
- Start from the beginning: a proper evaluation design needs to be incorporated into the policy design; ex-post evaluations are generally weak
- Keep going past the end: the effects of many policies outlive the policy itself, yet longitudinal evaluations are very rare
- Embed researchers: it is nearly impossible to foresee what will go wrong, and researchers cannot understand mechanisms without knowing the context
- Go by the principle of comparative advantage
9
An example from my own work
PM (HR Director, MoH, GoZ): should community nurses receive career benefits? Or will this attract people who don’t care about the community?
Researchers (Ashraf & I): a key question in org econ: do incentives affect selection? And what’s the effect on performance?
1. Common interest ✔️
2. Use different recruitment messages for the first round of hires: policy & evaluation jointly designed ✔️
3. Collect data for 2+ years: evaluate long-run responses ✔️
4. Country team & regular visits: embedded researchers ✔️
5. Close collaboration with MoH staff: comparative advantage ✔️
Results: career benefits attract more talented applicants who work harder and significantly improve health outcomes
Policy in action: GoZ offers career incentives in all recruitment rounds
10
To randomise or not to randomise?
Evaluation requires finding an appropriate counterfactual, i.e. what would have happened without the policy
- a parallel universe is a useful benchmark
RCTs create a counterfactual by randomly selecting out a group of eligible beneficiaries
- this breaks the link between receiving the policy and all the individual traits that make people keen to receive the policy but also affect their outcomes
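A minimal simulation sketch, not from the talk (the outcome model, names, and all numbers are illustrative assumptions), of why a coin-flip assignment recovers the true effect while a naive comparison of people who opt in against those who do not fails to:

```python
# Illustrative sketch: self-selection vs random assignment.
# The data-generating process and all numbers are assumptions for exposition.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
motivation = rng.normal(size=n)      # trait that drives both take-up and outcomes
true_effect = 2.0

# Self-selection: keen (high-motivation) people opt into the programme
selected = motivation > 0
y_sel = 10 + 3 * motivation + true_effect * selected + rng.normal(size=n)
naive = y_sel[selected].mean() - y_sel[~selected].mean()

# RCT: treatment assigned by coin flip, independent of motivation
treated = rng.random(n) < 0.5
y_rct = 10 + 3 * motivation + true_effect * treated + rng.normal(size=n)
rct = y_rct[treated].mean() - y_rct[~treated].mean()

print(f"true effect: {true_effect:.2f}  naive comparison: {naive:.2f}  RCT estimate: {rct:.2f}")
```

The naive comparison mixes the programme's effect with the motivation gap between joiners and non-joiners; random assignment removes that gap by construction.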
11
What’s the value added of researchers?
RCTs only ensure that beneficiaries do not choose whether to be treated: a good start, but...
- randomisation only ensures balance in large samples
- uptake can still be endogenous
- drop-out can still be endogenous
- spillover effects can contaminate the control group
- experiments are expensive and often politically unfeasible (a randomised roll-out is more palatable)
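A small sketch of the first point (illustrative numbers, not from the slides): across repeated randomisations, a small sample can show large chance gaps in a baseline trait between treatment and control, while a large sample barely can:

```python
# Illustrative sketch: randomisation balances baseline traits only on average,
# so small samples can be badly imbalanced purely by chance.
import numpy as np

rng = np.random.default_rng(1)

def worst_gap(n, draws=2000):
    """Largest treatment-control gap in a baseline trait over repeated randomisations."""
    gaps = []
    for _ in range(draws):
        trait = rng.normal(size=n)
        treated = rng.random(n) < 0.5
        if treated.any() and (~treated).any():
            gaps.append(abs(trait[treated].mean() - trait[~treated].mean()))
    return max(gaps)

print("n = 20:   worst chance gap ≈", round(worst_gap(20), 2))
print("n = 2000: worst chance gap ≈", round(worst_gap(2000), 2))
```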
12
These challenges can be addressed
- Stratification on key determinants increases statistical power
- Using the eligible and interested as the starting sample allays take-up and drop-out concerns
- Randomisation at a higher level of aggregation helps minimise spillovers
- Randomising the roll-out rather than the policy itself is politically more feasible
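A sketch of stratified assignment (the strata and sample size are hypothetical): randomising within each stratum makes the treatment share identical across strata by construction, rather than balanced only on average:

```python
# Illustrative sketch of stratified randomisation; strata and sizes are made up.
import numpy as np

rng = np.random.default_rng(2)
regions = np.array(["North"] * 6 + ["South"] * 6)   # hypothetical stratifier
treated = np.zeros(len(regions), dtype=bool)

# assign exactly half of each stratum to treatment
for r in np.unique(regions):
    idx = np.flatnonzero(regions == r)
    chosen = rng.choice(idx, size=len(idx) // 2, replace=False)
    treated[chosen] = True

for r in np.unique(regions):
    print(r, treated[regions == r].mean())   # 0.5 in every stratum
```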
13
The elusive average beneficiary
Most evaluations report average treatment effects
These are often the average of very different effects
Key to know whether, e.g., a 20% increase in the probability of finding a job after a training program comes from
- a uniform 20% increase for all beneficiaries, or
- a 10% decrease for half and a 50% increase for the other half
Distributional effects are key to understanding why the program works or doesn’t work
And this helps transport the results to other settings
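A worked check of the slide's figures (the ten hypothetical beneficiaries are mine): both distributions below average to the same +20-point effect, which is why the average alone cannot tell them apart:

```python
# Same average treatment effect, very different distributional stories.
uniform = [0.20] * 10                  # +20 points for every beneficiary
split = [-0.10] * 5 + [0.50] * 5       # -10 points for half, +50 for the other half

print(sum(uniform) / len(uniform))     # 0.20
print(sum(split) / len(split))         # 0.20
```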
14
FAQs
Is randomisation always necessary? No, but a valid counterfactual is
Is a pilot enough? For troubleshooting, yes; for evaluation, no: scaled-up interventions have general equilibrium effects
Aren't qualitative methods more informative? Interviews are a good way to uncover mechanisms and are complementary to systematic data collection; they are not a substitute