Social Experimentation & Randomized Evaluations
Hélène Giacobino, Director, J-PAL Europe
DG EMPLOI, Brussels, November 2011
World Bank, Bratislava, December 2011

Why Are Programs Not Evaluated?
Partisans want to show their program had an impact
– Further funding, re-election, and promotion depend on it
Evaluation is difficult
– Participants are often very different from non-participants
– Programs are often implemented in specific areas, at a specific moment, for a specific reason
– Volunteers are often more motivated, or better informed, or…
– And we cannot observe the same person both exposed and not exposed to the program

How to evaluate the impact of an idea?
Implement it on the ground in the form of a real program, and see if it works
To know the impact of a program, we must be able to answer the counterfactual question:
– How would individuals have fared without the program?
– But we can't observe the same individual both with and without the program
We need an adequate comparison group: individuals who, except for the fact that they were not beneficiaries of the program, are similar to those who received it
Common approaches:
– Before and after (but many things happen over time)
– Participants vs. non-participants (but they may be different: more motivated, living in a different region, …; see the sketch below)
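To make the selection problem concrete, here is a minimal simulation (hypothetical numbers, not from the presentation) in which an unobserved trait, motivation, drives both enrollment and the outcome. The naive participants-vs-non-participants gap then overstates the program's true impact.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
motivation = rng.normal(0, 1, n)                  # unobserved trait
enrolls = motivation + rng.normal(0, 1, n) > 0    # motivated people opt in
true_effect = 2.0
outcome = 10 + 3 * motivation + true_effect * enrolls + rng.normal(0, 1, n)

# Naive comparison: mean outcome of participants vs. non-participants.
naive = outcome[enrolls].mean() - outcome[~enrolls].mean()
print(f"true effect: {true_effect:.2f}, naive participant gap: {naive:.2f}")
# The gap far exceeds 2.0 because it also captures the motivation gap.
```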

Basic setup of a randomized evaluation (based on Orr, 1999)
[Diagram: Target Population → Sample Population (others not in evaluation) → Random Assignment → Treatment Group / Control Group]

Randomized assignment
This method works because of the law of large numbers:
– Both groups (treatment and control) have, on average, the same characteristics, except for the program
– Differences in outcomes can therefore confidently be attributed to the program (see the sketch below)
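A short sketch, reusing the hypothetical setup from the previous example, of why this works: a coin flip is unrelated to motivation, so with enough units the two groups are balanced even on unobserved traits, and the difference in mean outcomes recovers the true impact.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
motivation = rng.normal(0, 1, n)
treated = rng.random(n) < 0.5     # a coin flip, not self-selection
true_effect = 2.0
outcome = 10 + 3 * motivation + true_effect * treated + rng.normal(0, 1, n)

# Balance check on the unobserved trait, then the impact estimate.
balance = motivation[treated].mean() - motivation[~treated].mean()
estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"balance on motivation: {balance:+.3f} (close to 0 in large samples)")
print(f"estimated impact: {estimate:.2f} vs. true effect {true_effect:.2f}")
```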

When to do a randomized evaluation?
– As soon as there is an important question whose answer will help fight poverty
– Not too early and not too late…
– When the program is not too specific to one setting, so the lessons can be used in many other contexts
– To be really effective: time, talent, and… some money are needed

How to introduce randomness
Level of randomization:
– Individual: better for detecting very small effects
– Group (class, clinic, village): sometimes easier to organize
Different ways to randomize (two are sketched below):
– Organize a lottery
– Randomly assign multiple different treatments
– Randomly encourage some people more than others
– Randomize the order of phase-in of a program
– And more!
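A minimal sketch (with hypothetical village names) of two designs from this list: a lottery run at the group level, and a randomized phase-in order in which later waves serve as temporary comparison groups for earlier ones.

```python
import random

random.seed(42)
villages = [f"village_{i:02d}" for i in range(20)]
shuffled = random.sample(villages, k=len(villages))

# Lottery at the group level: half the villages get the program now.
treatment, control = shuffled[:10], shuffled[10:]

# Phase-in: every village is eventually treated, but the random order
# means waves 2 and 3 act as controls while wave 1 is running.
waves = {f"wave_{w + 1}": shuffled[w * 7:(w + 1) * 7] for w in range(3)}
print(treatment, control, waves, sep="\n")
```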

Advantages
Changes how research is done:
– Researchers and partners have to work closely together on the ground
The results are:
– Transparent
– Easy to understand and to communicate
– Difficult to manipulate
– Helpful for avoiding long debates and fighting fads
Very effective for convincing donors and policy makers

Limits
Not appropriate when:
– the impact to be measured is a macro-level impact
– scaling up the pilot would substantially change the impact
– the experiment itself modifies the reality it measures
– the beneficiaries are in an emergency situation
It also takes a lot of time, which is sometimes difficult to reconcile with political timetables

When NOT to do a randomized evaluation
– When it is too early and the program still needs to be improved
– If the project is too small (not enough participants)
– If it has already been demonstrated that the program has a positive impact: use the money for more beneficiaries!
– If the program has already started

The question of ethics
– Resources are very limited anyway
– Random assignment is usually seen as very fair
– Every project has to go through an ethics committee
– J-PAL never works on projects where the cost of the evaluation means fewer beneficiaries
– After the evaluation is over, if necessary, the control group also gets the program

Difficulties in the field
We face some resistance:
– some people simply cannot accept random assignment, which they perceive as unjust
– take-up is sometimes much lower than anticipated
– the cultural differences between researchers and partners are not always easy to overcome
– very decentralized organizations are not easy to work with
– the evaluation sometimes goes against financial or political interests
Be very careful in choosing the program!

More information

Many thanks!

Conclusions
Policy needs experimentation…
– Too many errors are made all the time; too much time is lost; too much money is wasted
– Without experimentation, the stakes are too high: what has been scaled up cannot afford to be wrong, and there is no incentive to learn and progress
Experimentation needs to be serious…
– We need to be rigorous in determining success or failure
– Otherwise, everyone is free to defend their pet project
Experimentation needs to be creative…
– If we are just trying things out and accept the possibility of failure, we are free to think outside the box
– This mindset could revolutionize social policy