Impact Evaluations and Development
Draft NONIE Guidance on Impact Evaluation
Cairo Conference: Perspectives on Impact Evaluation, Tuesday, March 31, 2009
Frans Leeuw, Maastricht University & WODC
Jos Vaessen, Maastricht University & University of Antwerp
Outline
1. Introduction
2. Methodological and conceptual issues for impact evaluation
3. Managing impact evaluations
Introduction: Drafting the NONIE Guidance
NONIE uses the OECD-DAC definition of impacts: "The positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended. These effects can be economic, sociocultural, institutional, environmental, technological or of other types" (OECD-DAC, 2002: 24).
Three basic premises:
– No single method is best for addressing the variety of questions and aspects that might be part of IE
– However, there is a 'logic of comparative advantages'
– Particular methods or perspectives complement each other in providing a more complete 'picture' of impact
Six methodological and conceptual issues
1. Identify the type and scope of the intervention
2. Agree on the objectives of the intervention that are valued
3. Articulate the theories linking interventions to results
4. Address the attribution problem
5. Build on existing knowledge relevant to the impact of interventions
6. Use a mixed methods approach: the logic of comparative advantages
1. Identify the type and scope of the intervention
Impact of what vs. impact on what
Impact of what:
– Continuum of interventions
– 'Holistic' vs. deconstruction
Impact on what:
– Complexity of processes of change
– Levels of impact: institutional vs. beneficiary level
2. Agree on the objectives of the intervention that are valued
What to evaluate should be a balance between what stakeholders find important and the empirical reality of (processes of) change
Intended vs. unintended effects
Short-term vs. long-term effects
Sustainability of effects
Translate objectives into measurable indicators, but at the same time do not lose track of aspects that are difficult to measure
3. Articulate the theories linking interventions to results
Interventions are theories: opening up the 'black box'
Theories are partly 'hidden' and require reconstruction
Theory-based IE offers a continuum of options, ranging from telling the causal story to using the reconstructed theory as a benchmark for formal testing of causal assumptions
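To make the intervention-as-theory idea slightly more tangible, here is a minimal sketch (Python, with entirely hypothetical intervention and assumption names) of a reconstructed programme theory written down as an explicit chain of cause-effect links, each carrying the assumption an evaluation would need to probe. It illustrates the notion of opening the 'black box'; it is not a tool prescribed by the NONIE guidance.

```python
# Minimal, hypothetical sketch of a reconstructed programme theory:
# each link in the intervention-to-impact chain is made explicit,
# together with the assumption that an evaluation would need to test.

from dataclasses import dataclass

@dataclass
class CausalLink:
    cause: str
    effect: str
    assumption: str  # what must hold for this step of the theory to work

# Illustrative chain for an invented training intervention.
programme_theory = [
    CausalLink("training sessions", "improved farming knowledge",
               "farmers attend and the content is relevant"),
    CausalLink("improved farming knowledge", "adoption of new practices",
               "farmers have the inputs and incentives to act on it"),
    CausalLink("adoption of new practices", "higher yields and income",
               "markets and weather do not wipe out the gains"),
]

for link in programme_theory:
    print(f"{link.cause} -> {link.effect}  [assumes: {link.assumption}]")
```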
4. Address the attribution problem
Attribution problem: to what extent can results of interest be attributed to an intervention?
Importance of counterfactual analysis
[Chart: value of a target variable over time, 'before' and 'after' the intervention, with curves a, b and c]
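A minimal numeric sketch of why the counterfactual matters (all values are invented for illustration): the naive before-after change mixes the intervention's effect with everything else that changed over time, whereas comparing the observed 'after' value with an estimated counterfactual isolates the part attributable to the intervention.

```python
# All numbers are invented for illustration of the attribution problem.

before_treated = 40.0          # target variable before, intervention group
after_treated = 55.0           # target variable after, intervention group
after_counterfactual = 47.0    # estimated 'after' value had there been no intervention
                               # (e.g. from a comparison group or a trend projection)

naive_change = after_treated - before_treated                 # credits the intervention with everything
attributable_effect = after_treated - after_counterfactual    # effect net of other changes over time

print(f"Naive before-after change: {naive_change}")
print(f"Effect attributable to the intervention: {attributable_effect}")
```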
4. Address the attribution problem (continued)
Experimental, quasi-experimental and regression-based techniques have a comparative advantage in addressing the issue of attribution:
– Counterfactual analysis
– Systematic treatment of threats to the validity of claims is possible (and should be done!)
Limitations in applicability
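As an illustration of the regression-based route, the sketch below simulates data with a known treatment effect and recovers it from the coefficient on a treatment dummy while controlling for a baseline covariate. The data, effect size and variable names are assumptions made for the example, not part of the guidance.

```python
# Hedged sketch of a regression-based impact estimate on synthetic data.

import numpy as np

rng = np.random.default_rng(0)
n = 500
baseline = rng.normal(10, 2, n)        # pre-intervention covariate (invented)
treated = rng.integers(0, 2, n)        # 1 = received the intervention
# Outcome generated with a true treatment effect of 3.0 plus noise.
outcome = 2.0 + 0.8 * baseline + 3.0 * treated + rng.normal(0, 1, n)

# Ordinary least squares: intercept, baseline covariate, treatment dummy.
X = np.column_stack([np.ones(n), baseline, treated])
coeffs, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"Estimated treatment effect: {coeffs[2]:.2f}")  # should be close to 3.0
```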
5. Build on existing knowledge relevant to the impact of interventions
Most interventions are not 'new': they rely on similar mechanisms of change
Examples of types of mechanisms:
– Situational mechanisms
– Action-formation mechanisms
– Transformational mechanisms
Systematic review and synthesis approaches are useful tools for learning about the existing evidence on interventions
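As a small illustration of the synthesis idea, the sketch below pools invented effect estimates from hypothetical studies using inverse-variance (fixed-effect) weighting, the basic calculation behind many quantitative systematic reviews; it is a generic example, not drawn from the NONIE guidance itself.

```python
# Inverse-variance (fixed-effect) pooling of effect sizes; all figures invented.

import numpy as np

effects = np.array([0.30, 0.45, 0.10, 0.25])    # effect estimates from four hypothetical studies
std_errors = np.array([0.10, 0.15, 0.08, 0.12]) # their standard errors

weights = 1.0 / std_errors**2                    # more precise studies get more weight
pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled_effect:.3f} (SE {pooled_se:.3f})")
```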
6. Use a mixed methods approach: the logic of comparative advantages
Particular methods have comparative advantages in addressing specific aspects of impact
Conceptual framework by Campbell, Cook and Shadish:
– Internal validity: is there a causal relationship between intervention and effects?
– External validity: can we generalize findings to other settings?
– Construct validity: do the variables that we are measuring adequately represent the phenomena we are interested in?
6. Use a mixed methods approach: the logic of comparative advantages (continued)
Example of how the logic works: impact of incentives on land-use (LU) change and farmer livelihoods
– A randomized experiment can test the effectiveness of different incentives on LU change and/or the socio-economic effects of these changes (internal validity)
– Survey data and case studies can tell us how incentives have different effects on particular types of farm households (strengthens internal validity and increases external validity of findings)
– Semi-structured interviews and focus group conversations can tell us more about the nature of effects in terms of production, consumption, poverty, etc. (construct validity)
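To give the quantitative leg of such a design a concrete shape, the sketch below simulates a randomized comparison of two hypothetical incentive arms on a land-use outcome and tests the difference in means; the data and group labels are invented. Randomization of this kind speaks mainly to internal validity, which is why the deck pairs it with surveys, case studies and interviews for external and construct validity.

```python
# Hedged sketch of the randomized-experiment component on simulated data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hectares converted to the promoted land use per farm household (invented).
cash_incentive = rng.normal(1.8, 0.6, 120)     # hypothetical treatment arm A
in_kind_incentive = rng.normal(1.5, 0.6, 120)  # hypothetical treatment arm B

t_stat, p_value = stats.ttest_ind(cash_incentive, in_kind_incentive)
print(f"Mean difference: {cash_incentive.mean() - in_kind_incentive.mean():.2f} ha")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```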
Managing impact evaluations
– Determine if an IE is feasible and worth the cost
– Start early: getting the data
– Front-end planning is important
THANK YOU jos.vaessen@metajur.unimaas.nl jos.vaessen@ua.ac.be