Notes on program evaluation and scope for cooperation

Oriana Bandiera
British Academy Workshop with GSR, July 2017

Cooperation

Academic researchers need data to test theories and, more broadly, to understand the causes of things; they have a comparative advantage in methods.
Policy makers need to know whether policies “work”, i.e. whether they achieve their aims; they have access to data and institutional knowledge.
There are unexploited gains from trade.

Yet collaborations are the exception rather than the rule

We need to understand why the “market” for collaborations fails in order to fix it:
Prices cannot be used to clear the market, because academics cannot get paid (more on this later).
The “good” is not homogeneous, so matching is required, and systems to match interests are scarce.

Understanding incentives: Academics

“Data! Data! Data!” he cried impatiently. “I can’t make bricks without clay.” (A. C. Doyle, The Adventure of the Copper Beeches)
High-quality research is made of good questions and good data.
Academics maximise the quality of their research; this confers both financial and non-financial benefits.

Understanding incentives: Academics

The collaboration is a mutually advantageous knowledge exchange, not a consultancy contract.
Academics are not paid for the evaluation, but it is not free consultancy either.

Understanding incentives: Policy Makers

Policy makers want to know if a policy works, but are they willing to publicise when it does not, or when it has backfired in unpredicted ways?
A commitment to publish the results, however they come out, is key for a successful collaboration.

Examples

Tax collectors’ incentives that increase bribes.
Teachers’ incentives that increase kids’ consumption of sugary drinks.
A&E incentives that result in ambulance scarcity.
Anti-corruption rules that create monopolies.

Ingredients for a successful collaboration

Find a common interest.
Start from the beginning: a proper evaluation design needs to be incorporated into the policy design; ex-post evaluations are generally weak.
Keep going past the end: the effects of many policies outlive the policy itself, yet longitudinal evaluations are very rare.
Embed researchers: it is nearly impossible to foresee what will go wrong, and researchers cannot understand mechanisms without knowing the context.
Go by the principle of comparative advantage.

An example from my own work

Policy maker (HR Director, Ministry of Health, Government of Zambia): should community nurses receive career benefits, or will this attract people who don’t care about the community?
Researchers (Ashraf & I): a key question in organisational economics: do incentives affect selection, and what is the effect on performance?
1. Common interest ✔️
2. Policy and evaluation jointly designed: different recruitment messages used for the first round of hires ✔️
3. Long-run responses evaluated: data collected for 2+ years ✔️
4. Embedded researchers: country team and regular visits ✔️
5. Comparative advantage: close collaboration with MoH staff ✔️
Results: career benefits attract more talented applicants, who work harder and significantly improve health outcomes.
Policy in action: GoZ offers career incentives in all recruitment rounds.

To randomise or not to randomise?

Evaluation requires finding an appropriate counterfactual, i.e. what would have happened without the policy; a parallel universe is a useful benchmark.
RCTs create a counterfactual by randomly selecting out a group of eligible beneficiaries.
This breaks the link between receiving the policy and all the individual traits that make people keen to receive the policy but also affect their outcomes.
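
A minimal simulation sketch of this point (Python, entirely hypothetical data): under self-selection the naive treated-vs-untreated comparison is confounded by the unobserved trait that drives take-up, while random assignment recovers the true effect.

```python
# Sketch with hypothetical data: why randomisation matters.
# Under self-selection, the naive comparison is confounded by the
# unobserved trait driving take-up; random assignment breaks that link.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
motivation = rng.normal(size=n)   # unobserved trait that drives both
true_effect = 1.0                 # take-up and outcomes

# Self-selection: the motivated opt in, and motivation also raises outcomes.
opted_in = motivation + rng.normal(size=n) > 0
y_sel = 2.0 * motivation + true_effect * opted_in + rng.normal(size=n)
naive = y_sel[opted_in].mean() - y_sel[~opted_in].mean()

# Randomisation: assignment is independent of motivation by construction.
assigned = rng.random(n) < 0.5
y_rct = 2.0 * motivation + true_effect * assigned + rng.normal(size=n)
rct = y_rct[assigned].mean() - y_rct[~assigned].mean()

print(f"true effect:             {true_effect:.2f}")
print(f"naive, self-selected:    {naive:.2f}")   # badly upward-biased
print(f"RCT difference in means: {rct:.2f}")     # close to the truth
```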

What’s the value added of researchers?

RCTs only ensure that beneficiaries do not choose whether to be treated: a good start, but…
Randomisation only ensures balance in large samples (see the sketch below).
Uptake can still be endogenous.
Drop-out can still be endogenous.
Spillover effects can contaminate the control group.
Experiments are expensive and often politically infeasible (a randomised roll-out is more palatable).
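
A small sketch of the first caveat (hypothetical data, a single prognostic baseline trait assumed): randomisation balances covariates only in expectation, and chance imbalance shrinks only as the sample grows.

```python
# Sketch with hypothetical data: balance is a large-sample property.
# The expected treated-control gap in a baseline trait is zero, but its
# spread shrinks only as the sample grows (roughly with 1/sqrt(n)).
import numpy as np

rng = np.random.default_rng(1)

def covariate_gap(n):
    x = rng.normal(size=n)      # prognostic baseline trait
    t = rng.random(n) < 0.5     # pure random assignment
    if t.all() or (~t).all():   # degenerate draw (everyone in one arm)
        return covariate_gap(n)
    return x[t].mean() - x[~t].mean()

for n in (20, 200, 20_000):
    gaps = [abs(covariate_gap(n)) for _ in range(2_000)]
    print(f"n={n:>6}: mean |treated-control gap| = {np.mean(gaps):.3f}")
```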

These challenges can be addressed

Stratification on key determinants increases statistical power (a sketch follows below).
Using the eligible and interested as the starting sample allays take-up and drop-out concerns.
Randomisation at a higher level of aggregation helps minimise spillovers.
Randomising the roll-out rather than the policy itself is politically more feasible.
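
A minimal sketch of the first fix (hypothetical units and strata): randomising within strata defined by key determinants of the outcome guarantees balance on those determinants by construction.

```python
# Sketch with hypothetical units: stratified random assignment.
# Randomising within strata (here region x facility size) guarantees
# balance on the stratifiers and improves statistical power.
import random
from collections import defaultdict

random.seed(2)
units = [{"id": i,
          "region": random.choice(["North", "South"]),
          "size": random.choice(["small", "large"])}
         for i in range(40)]

strata = defaultdict(list)
for u in units:
    strata[(u["region"], u["size"])].append(u)

for members in strata.values():
    random.shuffle(members)
    half = len(members) // 2
    for u in members[:half]:
        u["arm"] = "treatment"
    for u in members[half:]:
        u["arm"] = "control"   # odd strata leave the extra unit in control

# Every (region, size) cell now splits (almost) evenly across arms; the
# same shuffle applied to roll-out phases instead of arms gives a
# randomised roll-out.
```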

The elusive average beneficiary

Most evaluations report average treatment effects, but these are often the average of very different effects.
It is key to know whether, e.g., a 20% increase in the probability of finding a job after a training programme comes from a uniform 20% increase for all beneficiaries, or from a 10% decrease for half and a 50% increase for the other half (a quick check follows below).
Distributional effects are key to understanding why the programme does or does not work, and this helps transport the results to other settings.
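
A quick check of the slide’s arithmetic (hypothetical effects): the two scenarios share the same average effect while describing very different programmes.

```python
# The slide's example in two lines: a uniform +20% for everyone and a
# split of -10% for half / +50% for the other half share the same average.
uniform = [0.20] * 10
mixed = [-0.10] * 5 + [0.50] * 5

print(sum(uniform) / len(uniform))   # 0.2
print(sum(mixed) / len(mixed))       # 0.2 -- same ATE, different programme
```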

FAQs

Is randomisation always necessary? No, but a valid counterfactual is.
Is a pilot enough? For troubleshooting, yes; for evaluation, no: scaled-up interventions have general equilibrium effects.
Aren’t qualitative methods more informative? Interviews are a good way to uncover mechanisms and are complementary to systematic data collection; they are not a substitute.