1
Evaluation of Value Chain Interventions: the LEI approach in practice
Giel Ton, Agricultural Economics Research Institute (LEI), Wageningen UR
3 November 2011, The Hague
2
Introduction
Each researcher at LEI has their own expertise and methodological wish-list, so there are multiple ways to do evaluations:
Models/scenarios
Econometrics
Case studies
Stakeholder processes
Steps taken in the Impact Evaluation theme group at LEI (2009-2011):
We adopted theory-based evaluations
We improve our research designs in a peer-to-peer process of quality checks and mixed-method design
We generated a track record of credible and rigorous methods (LEI’s ‘selling point’)
3
Improving rigour
4
Core steps
Making value chain impacts researchable:
Define what is considered to be the ‘intervention’
Define the relevant ‘outcome indicators’ (the ‘intervention logic’)
Choose a core design (considering the ‘counterfactual’)
Find additional methods to reduce validity threats
Anticipate possible implementation failures of the core method
5
Making impacts of interventions researchable…
6
Checking for validity threats
We propose to check the core research method design against the most obvious threats to validity, exploring the issue from four angles:
a) Statistical conclusion validity – when using statistics, do it properly
b) Internal validity – resolve the issue of causality/attribution
c) Construct validity – are the concepts used properly defined and operationalized?
d) External validity – under what conditions/settings does the conclusion/recommendation apply?
Source: Shadish, W. R., T. D. Cook, et al. (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Houghton Mifflin, Boston, MA.
7
Income impacts of micro-irrigation technology
8
Intervention Logic
9
Method
Intervention: micro-irrigation technology for horticulture
Core method: ‘pipeline design’ with a retrospective baseline
Comparing income streams between yearly customer cohorts
Asking in each interview also about the respondent’s agricultural system ‘before’ and ‘after’ adoption
Added mixed methods:
On non-monetary outcomes (‘wellbeing’): livelihood impact case studies
On context: sector studies on dynamics in markets and the institutional environment
On methodological assumptions:
Recall bias – repeating measurements in the same households with different recall periods
Selection bias – applying a matching procedure to reduce context differences between respondents in the compared cohorts
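The pipeline design above treats later customer cohorts as the counterfactual for earlier ones, with matching to reduce selection bias. A minimal sketch of that idea, assuming hypothetical household records and an illustrative matching covariate (`land_ha`):

```python
# Hypothetical sketch of a pipeline-design comparison: an early adopter
# cohort is compared with matched households from a later cohort that
# has not yet benefited. Field names and data are illustrative only.

def nearest_neighbour_match(treated, pool, covariate):
    """For each treated household, pick the pool household with the
    closest covariate value (matching with replacement)."""
    return [min(pool, key=lambda h: abs(h[covariate] - t[covariate]))
            for t in treated]

def mean(xs):
    return sum(xs) / len(xs)

def cohort_effect(early, late, covariate="land_ha"):
    """Difference in mean income between the early cohort and its
    matched comparison households from the later cohort."""
    matched = nearest_neighbour_match(early, late, covariate)
    return (mean([h["income"] for h in early])
            - mean([h["income"] for h in matched]))

# Toy data: the 2009 cohort has used the technology longer than 2011's.
cohort_2009 = [{"land_ha": 1.0, "income": 520},
               {"land_ha": 2.0, "income": 610}]
cohort_2011 = [{"land_ha": 1.1, "income": 450},
               {"land_ha": 2.2, "income": 500},
               {"land_ha": 5.0, "income": 900}]

print(cohort_effect(cohort_2009, cohort_2011))
```

Note how matching drops the dissimilar 5-hectare household from the comparison, which is the point of the procedure: reducing context differences between the compared cohorts.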
11
Impacts of certification schemes
12
Intervention Logic
13
Method
Intervention: training on Good Agricultural Practices coupled with niche market access
Core method: difference-in-differences; impact on farmers’ knowledge and practices
Added mixed methods:
On differences in context:
Qualitative case studies on differences between tea factories (e.g. history of training, additional stimuli to lead farmers)
Inclusion of questions to check differences in households’ ‘access conditions’ for some ‘necessary’ equipment/resources
On wider impacts:
Qualitative studies on how certification influenced/contributed to sector-wide policies (e.g. ‘child labour’, ‘traceability systems’, ‘internal control systems’)
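The difference-in-differences estimator compares the change among trained farmers with the change among comparison farmers, netting out trends common to both groups. A minimal sketch, with hypothetical knowledge-test scores:

```python
# Minimal difference-in-differences sketch for the certification case.
# The scores and group sizes are illustrative, not the study's data.

def mean(xs):
    return sum(xs) / len(xs)

def diff_in_diff(treat_before, treat_after, ctrl_before, ctrl_after):
    """DiD estimate: (change in treated) - (change in comparison)."""
    return ((mean(treat_after) - mean(treat_before))
            - (mean(ctrl_after) - mean(ctrl_before)))

# Knowledge-test scores (0-100) before/after the GAP training round.
trained_before, trained_after = [40, 50, 45], [60, 72, 63]
control_before, control_after = [40, 50, 45], [45, 55, 50]

print(diff_in_diff(trained_before, trained_after,
                   control_before, control_after))
```

Here both groups improve (the comparison group by 5 points, reflecting a common trend), so the estimated training effect is the trained group's gain beyond that trend.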
14
Impacts of innovation grants on collective marketing groups
15
Intervention Logic
16
Method
Intervention: support to value addition through collective processing
Core method: difference-in-differences and time series; largely qualitative interviews on a random sample of enterprises to explore patterns (“for whom does it work, under what conditions?”)
Additional mixed methods:
To enhance learning (in a network/platform of organisations): use the data to compile training material on illustrative learning experiences (resolving tensions in collective action)
To understand context: household survey on key variables in the geographical areas where these enterprises operate (rich/poor, trust levels, support context)
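The “for whom does it work, under what conditions?” question amounts to tabulating outcomes across case conditions. A sketch of that tabulation, assuming entirely hypothetical enterprise cases and field names:

```python
# Illustrative tabulation of an outcome by a context condition across
# sampled enterprise cases. All field names and data are hypothetical.
from collections import defaultdict

def outcome_by_condition(cases, condition, outcome):
    """Share of cases with a positive outcome, per condition value."""
    groups = defaultdict(list)
    for c in cases:
        groups[c[condition]].append(c[outcome])
    return {k: sum(v) / len(v) for k, v in groups.items()}

cases = [
    {"prior_collective_experience": True,  "value_addition_sustained": 1},
    {"prior_collective_experience": True,  "value_addition_sustained": 1},
    {"prior_collective_experience": True,  "value_addition_sustained": 0},
    {"prior_collective_experience": False, "value_addition_sustained": 0},
    {"prior_collective_experience": False, "value_addition_sustained": 1},
    {"prior_collective_experience": False, "value_addition_sustained": 0},
]

print(outcome_by_condition(cases, "prior_collective_experience",
                           "value_addition_sustained"))
```

Such a cross-tabulation is only the quantitative skeleton; the qualitative interviews supply the causal narrative behind the pattern.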
18
Challenges
INTERVENTION LOGICS – time-consuming process to align different stakeholders on the logic, questions and indicators
OUTCOME INDICATORS – move away from ‘nitty-gritty’ immediate outcome indicators and ‘far-away’ ultimate outcomes (MDGs): need for simpler intermediate outcome indicators that allow benchmarking, can be (partly) attributed to the intervention, and are still ‘researchable’
RIGOUR IN RESEARCH METHODS – budget, time and political constraints inherent to contracted research