Methodological Problems Faced in Evaluating Welfare-to-Work Programmes
Alex Bryson, Policy Studies Institute
Launch of the ESRC National Centre for Research Methods, St. Anne’s College, Oxford University, 20th June 2005

The usual suspects: questions posed in W-to-W evaluation

Does ‘it’ work?
– Causal identification (see the notation sketch below)
– With or without experiments
– With cross-sectional or longitudinal data

How are we going to make it run according to plan?
– Operational assessment

How did people feel about it?
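
A minimal sketch of the causal identification problem behind ‘does it work?’, in standard potential-outcomes notation, assuming a binary programme indicator D. Let Y_i(1) and Y_i(0) denote person i’s outcome with and without the programme:

\[ \Delta_i = Y_i(1) - Y_i(0), \qquad \text{where only one of } Y_i(1) \text{ and } Y_i(0) \text{ is ever observed.} \]

Under random assignment, D is independent of the potential outcomes, so the average impact is identified by a simple difference in means:

\[ \mathbb{E}[\Delta] = \mathbb{E}[Y \mid D = 1] - \mathbb{E}[Y \mid D = 0]. \]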

Harder Questions Not Always Posed

How much did it cost?
– Net benefits, opportunity costs?

Who did it work for?
– Heterogeneous treatment effects (see the subgroup sketch below)

Why did it (not) work?

Did it disadvantage others?

Under what conditions will we see the same results?
– How is it likely to work in the future?
– Policy transfer (from pilot to national roll-out; across areas; across providers; interacting with other policies)
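
A minimal sketch of how ‘who did it work for?’ is typically answered with subgroup impacts from a randomised pilot. The dataset, file name and column names (in_work, assigned, age_band) are hypothetical, used only to illustrate the calculation:

import pandas as pd

def subgroup_impacts(df: pd.DataFrame, outcome: str, treated: str, group: str) -> pd.DataFrame:
    """Difference in mean outcomes (treated minus control) within each subgroup.

    Assumes the treatment column is coded 1 for programme entrants, 0 for controls.
    """
    means = df.groupby([group, treated])[outcome].mean().unstack(treated)
    means["impact"] = means[1] - means[0]
    return means

# Hypothetical usage with an invented pilot extract:
# pilot = pd.read_csv("pilot_extract.csv")
# print(subgroup_impacts(pilot, outcome="in_work", treated="assigned", group="age_band"))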

Potential for misleading results because:

1. Wrong methodology for the policy issue at hand, e.g. estimating the effect of treatment on the treated (TT) rather than the average treatment effect (ATE) when looking to extend a programme nationally (the two parameters are defined below).
2. Correct methodology but implemented poorly – either on the ground (e.g. securing random assignment) or through poor data (e.g. early on in EMA).
3. Impacts shift in the longer run. Few studies address long-term impacts, but it is clear that they are often very different from shorter-term impacts – often reversing earlier results. E.g. GAIN, where the ‘work first’ advantage faded relative to human capital investment, as you might expect.
4. General equilibrium effects are a big deal – that is, the programme has big effects on non-participants, e.g. where helping programme participants proves to be at the expense of other disadvantaged groups beyond the programme. This depends on the size of the programme, how effective it is in benefiting participants, and the extent to which participants are capable of taking the jobs of non-participants.
5. Blind empiricism: trusting the numbers too much. We need THEORY as well as numbers.
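
For point 1, the two parameters have standard definitions (potential-outcomes notation as above, with D = 1 indicating programme participation):

\[ \text{ATE} = \mathbb{E}[Y(1) - Y(0)], \qquad \text{TT} = \mathbb{E}[Y(1) - Y(0) \mid D = 1]. \]

TT answers what the programme did for those who actually entered it; ATE is closer to what a national roll-out to the whole eligible population would deliver. With heterogeneous effects and selective entry the two can differ markedly, which is why reporting TT alone can mislead when a pilot is scaled up.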

The big issue…

Evaluations can never ‘prove’ that certain policies work or don’t work, because pilots can NEVER give a once-and-for-all answer. Effects differ:
– With programme ageing
– With the size/composition of entry cohorts
– With changes in the external environment, e.g. the business cycle
– With interactions with other policies

Therefore always ask the big questions:
– A priori, why do we think policies are going to have a particular impact?
– What happens when similar policies, evaluated at different points in time or across regions/countries, produce different results?
– How can we learn from these differences? How can we understand what generates them?

Practical steps…

1. Increasing evaluator knowledge of evaluation processes and mechanisms – particularly important in understanding which treatment parameter is appropriate for the policy at hand.
2. Getting government and evaluators to understand what data and practical measures are needed in advance to secure the appropriate evaluation methodology.
3. More triangulation with alternative techniques/methods addressing the same question:
– laboratory experiments
– qualitative data
– purpose-built surveys alongside administrative data
4. Replication studies to validate initial findings, understand bias in estimates and review impacts over time:
– same data, same methods
– same data, alternative methods
– extensions to the data, same methods
– extensions to the data, alternative methods
5. More meta-analyses (see the pooling sketch below).
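
For point 5, a minimal sketch of what pooling replicated impact estimates involves, using a fixed-effect (inverse-variance) meta-analysis. The impact estimates and standard errors below are invented purely for illustration:

import math

def pool_fixed_effect(estimates, std_errors):
    """Inverse-variance weighted pooled impact and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Invented percentage-point employment impacts from three replication studies:
impacts = [3.2, 1.1, 2.4]
standard_errors = [1.0, 0.8, 1.5]
print(pool_fixed_effect(impacts, standard_errors))  # pooled estimate and its standard error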

A Footnote: What Does ‘it worked’ Mean?

1. What works economically competes with what works politically – signalling to the electorate (‘selling’ welfare reform) is clearly important.
2. Economic outcomes (poverty reduction, increasing employment/quality of employment) are largely uncontested. But these ‘goods’ – which are both private and public goods – come at a cost.
3. The issue here is: benefits to whom, and at what cost (a) to the Exchequer, (b) to others (substitution etc.), and (c) what are the opportunity costs of not spending the money on other potential interventions?
4. Finally, it is inherently more difficult to get at distributional outcomes than at mean outcomes – a point worth noting for a government interested in the distribution of outcomes.