
Quantifying the Impact of Social Science Development Research: Is It Possible? Kunal Sen, IDPM and BWPI, University of Manchester. Based on paper: Literature Review on Rates of Return to Research, available on the DFID R4D website.

Quantifying the impact of research: the rate of return to research Like any other public-sector investment, research is expected to yield benefits in excess of its costs. The rate of return to research is one important way to measure the net benefits of funding research. To calculate it, the present value of the current and future benefits of the research is compared with the total costs of the research, and the internal rate of return is found as the discount rate that equates the benefit stream with the cost outlays. This internal rate of return is the rate of return to research. The higher the rate of return, the higher the expected net payoff from research, and the stronger the case for investing in research as compared with other types of public investment, or for investing in one type of research versus another.
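The calculation described above can be sketched in a few lines of code. This is an illustrative sketch, not from the presentation: the cash-flow figures are made up, and the bisection search simply finds the discount rate at which the net present value of benefits minus costs is zero.

```python
# Hypothetical sketch: the internal rate of return (IRR) to a research
# investment is the discount rate at which the present value of benefits
# equals the present value of costs, i.e. where NPV = 0.

def npv(rate, cash_flows):
    """Net present value of a stream of net cash flows (benefits - costs),
    where cash_flows[t] occurs in year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Find the rate where NPV = 0 by bisection.

    Assumes a conventional flow pattern (costs first, benefits later),
    so NPV is strictly decreasing in the rate and the root is unique."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative (made-up) figures: a research programme costing 100 in
# year 0 that yields benefits of 30 per year in years 1-5.
flows = [-100] + [30] * 5
print(round(irr(flows), 3))  # → 0.152, i.e. a ~15% rate of return
```

A rate of return computed this way can then be compared across research programmes, or against the return on other public investments, as the slide suggests.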

TWO QUESTIONS WHAT DO WE KNOW ABOUT THE RATE OF RETURN TO DIFFERENT TYPES OF SOCIAL SCIENCE DEVELOPMENT RESEARCH? TO WHAT EXTENT IS IT POSSIBLE TO CALCULATE RATES OF RETURN TO DIFFERENT TYPES OF DEVELOPMENT RESEARCH?

The Causal Chain from Research to Impact 1. Did the research influence policy thinking/decisions/processes? (the attribution problem) 2. Did the policy intervention/change/reform lead to the observed outcome? (the identification problem) 3. Can the benefits of the outcome(s) be quantified? (the measurement problem)

The Attribution Problem The attribution problem can be broken down into the following components: a) how well defined is the set of research users? b) the counterfactual: would the policy change have occurred without the research taking place? c) how important are contextual factors and exogenous events in influencing policy, independent of the research?

The Identification Problem Since developmental outcomes may occur for many reasons, and a policy intervention is only one possible cause among many, it is often difficult to identify precisely whether the policy intervention can be causally related to the outcome in question. There are three different aspects to the identification problem: a) selection bias; b) omitted variable bias; c) reverse causality.

The Measurement Problem An important requirement in applying the rate of return approach is that all benefits, past, present and future, can be quantified and expressed in the same unit of value. This leads to five problems in the measurement of these benefits: a) valuing multiple outputs; b) valuing intangible outcomes; c) the time-scale of measurement; d) the degree of uncertainty about the size of the impact; e) measuring effects where there are macro-level changes or strong spillover effects.

Methodologies to quantify the impact of policy change/intervention: simulation models; regression-based methods; case studies; randomised control trials.
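Of these methodologies, a randomised control trial is the most direct answer to the identification problem: random assignment makes treated and control groups comparable, so the difference in mean outcomes estimates the causal effect of the intervention. The simulation below is an illustrative sketch (not from the presentation); the outcome model and the effect size of 2.0 are assumptions for demonstration only.

```python
# Illustrative sketch: why randomisation identifies a causal effect.
# We simulate a population, randomly assign half to a policy intervention
# with a known (assumed) true effect, and recover it from the difference
# in mean outcomes between the two groups.
import random
import statistics

random.seed(0)

TRUE_EFFECT = 2.0  # assumed effect of the intervention (for illustration)

population = range(1000)
treated = set(random.sample(population, 500))  # random assignment

def outcome(i):
    baseline = random.gauss(10, 3)  # outcome the unit would have anyway
    return baseline + (TRUE_EFFECT if i in treated else 0.0)

y = {i: outcome(i) for i in population}
effect = (statistics.mean(y[i] for i in treated)
          - statistics.mean(y[i] for i in population if i not in treated))
print(f"estimated effect: {effect:.2f}")  # close to TRUE_EFFECT
```

Because assignment is random, selection bias and omitted variable bias from the previous slide do not contaminate the comparison; regression-based methods and simulation models are instead needed when randomisation is infeasible.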

The Results Chain for Different Types of Research

The Results Chain for Different Types of Research – contd.

What do we know about the rates of return to different types of social science research? Usable rates of return to research (RORs) exist: agriculture and health research. Proxy RORs do not exist, but there are credible ways to calculate RORs: infrastructure research, economic and social policy research. Proxy RORs do not exist, and there are no credible ways to calculate RORs: governance research, climate change research.

So can we calculate the rates of return to different types of social science research? A non-starter for research which leads to intangible outcomes, where the time-scale of outcomes is very long, and where the identification problem is particularly challenging: governance and climate change research. Possible for economic and social policy research, but the informational requirements for doing so are very high. Already exists for agriculture and health research.

How to improve our ability to measure the impact of research In general, there is a need to invest in improved methodologies that tackle the identification problem (but not necessarily with a focus on randomised control trials only). Investing in monitoring and evaluation processes at the start of the research programme to address the attribution problem: creating baselines and using case studies to track the impact of research. Looking at best practice on how to address the attribution problem, e.g. Fred Carden's work at IDRC. Making limited use of methodologies such as willingness-to-pay, where research has clear tangible benefits, to address the measurement problem.