Presentation transcript:

wifi: Nesta Guest / password: flourish01
Follow us on the official event hashtag #sparkEC
Testing social policy innovation
Wednesday 12th February 2014

Simon Flemington, Chief Executive Officer, LSE Enterprise

Hélène Giacobino, Executive Director, J-PAL Europe

SPARK London, Feb 12, 2014
Improving Policies and Building Knowledge: the Role of Creative Experimentation
Hélène Giacobino, Executive Director, J-PAL Europe
Support services for social policy experimentation in the EU

The Need for Rigorous Evidence
- In a context of economic downturn, it is important to optimize public expenditure.
- Promoting the implementation of rigorous evaluation methods to test the effects of new policies is a good way to achieve this goal.
- Evidence-based policy is possible and highly effective.
- As long as we learn from our successes and mistakes, this is fine: even a failed program helps us understand what went wrong.
- Without rigorous evaluation, everybody can favor their own pet project, and even the lessons from successes are lost.

Social Policy Experimentation
Social policy experimentation tests the validity of policies by collecting evidence about the real impact of interventions on people.
The goal is:
- to bring innovative answers to social needs,
- to test the impact of small-scale interventions,
- to scale up those whose results are convincing.

How to do SPE?
SPE can be undertaken through different methods. Some of the most common non-randomized methods are:
- Pretest-posttest (before and after)
- Difference-in-differences (see the sketch below)
- Regression discontinuity
- Statistical matching
These methods always rely on strong assumptions.
Randomized Controlled Trials (RCTs) give the most rigorous results (internal validity).
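As an illustration of how a non-randomized method rests on assumptions, here is a minimal sketch of a difference-in-differences estimate in Python. The group means are toy numbers invented for illustration, not figures from the presentation, and the estimate is only valid under the parallel-trends assumption the slide alludes to.

```python
# Hypothetical pre/post outcome means for a treated group and a comparison group
# (toy numbers for illustration only).
treated_pre, treated_post = 10.0, 14.0   # treated group, before and after the policy
control_pre, control_post = 9.0, 11.0    # comparison group, before and after

# Difference-in-differences: the change in the treated group minus the change in
# the comparison group. This identifies the policy effect only if both groups
# would have followed parallel trends in the absence of the intervention.
did_estimate = (treated_post - treated_pre) - (control_post - control_pre)
print(f"Estimated impact (DiD): {did_estimate:+.1f}")   # -> +2.0
```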

RCTs: a long history
- Experimental psychology: late 19th century; education: early 20th century.
- Experimental sociology, early 20th century:
  - rural health education,
  - social effects of public housing,
  - recreation programs for delinquent boys.
- Large-scale randomized clinical trials: a norm since the 1962 Drug Amendments; they went through substantial debates but are widely accepted today.
- A boom in the 1960s in the USA (around 250 RCTs).
- 1990s: J-PAL introduces RCTs in development economics (today: 450 RCTs).

BUT… not every intervention can be evaluated
A well-designed SPE should include:
- an explicit and relevant policy question,
- a valid identification strategy,
- a well-powered sample (see the sketch below),
- high-quality data.
Evaluation is not appropriate when:
- the sample size is too small,
- the impact to measure is a macro-level impact,
- scaling up the pilot would substantially change the impact,
- the beneficiaries are in an emergency context.
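To make the "well-powered sample" requirement concrete, here is a minimal sketch of a standard two-arm sample-size calculation. The effect size, standard deviation, and significance/power levels are illustrative assumptions, not figures from the presentation.

```python
from scipy.stats import norm

def sample_size_per_arm(effect_size, sd, alpha=0.05, power=0.8):
    """Approximate sample size per arm for a two-arm RCT comparing means
    (two-sided test, equal allocation, individual-level randomization)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) * sd / effect_size) ** 2

# Illustrative assumption: detect a 0.2 standard-deviation effect.
n = sample_size_per_arm(effect_size=0.2, sd=1.0)
print(f"Roughly {n:.0f} participants per arm")   # about 392 per arm
```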

AND… not every intervention warrants an impact evaluation
- Investing in some key rigorous impact evaluations is essential to guide policy decisions and to allocate public resources responsibly.
- Rigorous impact evaluations should be used to test key influential, strategically relevant, or novel interventions.
- There is no added value in evaluating policies or programs that will benefit a very limited number of people, or in testing a policy question that has already been rigorously evaluated in a similar context (except within an explicit validation program).

Ethical Issues
- SPE should be designed to follow the ethical principles applicable to evaluations and research projects involving human subjects (even if there are no such rules in the country of experimentation) and be approved by an IRB.
- SPE should include rigorous protection of individual data, which is paramount.
- SPE can use different designs to secure fairness, impartiality, and transparency (and cost-effectiveness!).

Financial Issues
- Rigorous SPE needs good-quality data: if the sample is large and no existing data are available, this can be costly.
  - But financing large-scale interventions without knowing their effects can result in a waste of resources!
- The cost depends on the program being evaluated: measuring long-term impact is more expensive.

The "Black Box" Issue
One common critique: "RCTs can tell us whether an intervention was effective, and even measure the size of the impact, but they cannot answer the question of how or why the impact came about."
This is true only if final outcomes alone are measured. RCTs that collect intermediate indicators mapping to the theory of change, and that also use qualitative methods, can help us understand the how and the why.

Randomization is not a substitute for theory
- Although randomization guarantees the internal validity of the estimate, you need a theoretical framework to interpret the result.
- The extent to which findings do or do not generalize beyond a specific context depends on theory: only theory tells you what is likely to matter in that context and guides replications.
- The value of randomization is that you can more easily be surprised:
  - You cannot doubt the results once you have found them: if they are surprising, instead of shelving them, you (and others) have to think about what happened.

External Validity: an issue for all types of evaluation
- External validity: the extent to which we can be confident that results found in one context will generalize to other contexts.
- External validity is a function of the program being evaluated (where it is implemented, how replicable the program model is, how much implementation depends on context or interaction with the community), not of the evaluation methodology!
- Non-randomized impact evaluations are also undertaken on a specific program in a specific location, with the added downside that they do not control for selection bias, and therefore have weaker internal validity.
- A randomized evaluation tests a particular question in a specific location, at a specific time, and at a specific scale. If properly conducted, it has strong internal validity.
- If we cannot be confident that an evaluation measures the true impact of the program in its specific context, then we can be even less confident in generalizing its conclusions to another context.

External validity: an issue for all types of evaluation
The scope of an evaluation's true external validity depends on how the evaluation sample is determined: if the sample is representative of the target population (randomly drawn from a larger population), then the results are generalizable to that population (see the sketch below).
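A minimal sketch of the distinction this slide draws: random sampling from the target population (which supports generalizing results back to that population) followed by random assignment within the sample (which supports internal validity). The population size and group sizes are hypothetical.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical target population of individual IDs.
population = list(range(10_000))

# Step 1: random sampling -> the evaluation sample represents the population,
# which is what allows generalizing results to it (external validity).
evaluation_sample = random.sample(population, k=1_000)

# Step 2: random assignment within the sample -> treatment and control groups
# are comparable in expectation, which underpins internal validity.
random.shuffle(evaluation_sample)
treatment = evaluation_sample[:500]
control = evaluation_sample[500:]

print(len(treatment), len(control))  # 500 500
```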

External validity: solutions
- Replication is a strategy that enables us to understand how an intervention functions in various settings.
- Because SPE forces researchers to pay attention to context, details, and realities on the ground, it allows us to test broad theories in a credible way and produces evidence that can then feed back into our theories.
- Theory and process evaluation data can help us understand the mechanisms through which the impact was produced, and therefore to scale up programs more confidently.

Conclusions
- Social policy experimentation is an important tool to help improve social programs.
- It is important to strategically allocate evaluation resources to key influential, strategically relevant, or novel interventions:
  - to facilitate scale-up,
  - to encourage replication of the policy in different contexts,
  - to provide valuable information for future policymaking.
- Yet not every intervention can be evaluated.
- And not every intervention warrants an impact evaluation.
- Experimentation needs to be creative:
  - If we are just trying, and accept the possibility of failure, we do not need to think inside the box.
  - This mindset could revolutionize social policy.

Phil Sooben, The Economic & Social Research Council
Dr Simona Milio, LSE Enterprise
Hélène Giacobino, J-PAL Europe
Jonathan Breckon, Nesta
Arnaud Vaganay, LSE Enterprise