Slide 1: The added value of evaluation
Julian Barr, UK Evaluation Society & Itad Ltd
British Academy and Government Social Research Network
Evaluation: impact, challenges and complexity
12th July 2017
Slide 2: Frame
- Evaluation as a field of practice
- Evaluation: quality and rigour
- The value of evaluation
- Challenges in evaluation
Slide 3: Evaluation as a field of practice
- “Evaluation determines the merit, worth, or value of things. …” (Scriven, 1991)
  - Worth is complex, personal and political; evaluation is inherently political
- “Evaluation is an objective process of understanding how a policy or other intervention was implemented, what effects it had, for whom, how and why.” (Magenta Book, 2011)
  - Considers a range of outcomes and impacts (not just achievement of positive planned ones)
  - Explores causality through a range of methods
  - Concerned equally with how impacts are achieved and how much impact is achieved
  - Pays attention to power dynamics and socio-economic winners and losers
Slide 4: Evaluation as a field of practice
Characteristics of a distinct field:
- Development of evaluation theories and approaches
- Emergence of evaluation associations and societies, nationally and internationally (102 listed)
- Global coherence: IOCE, UN Year of Evaluation
- Journals and conferences devoted entirely to evaluation
- Academic positions in evaluation
A profession:
- Evaluators come from many different primary disciplines and professions
- Generation of standards, ethics of practice and capabilities specific to the field
- Specific training in evaluation methods and approaches
Multi-discipline / trans-discipline:
- Evaluation draws on many disciplines’ concepts and methods, but adapted to the social and political practice of evaluation.
Slide 5: Evaluation as a field of practice
Evaluation and research: a spectrum of views
- Purposively different:
  - Evaluation informs decision making in policy and practice
  - Research contributes to knowledge in specific subjects
- Much shared methodological territory: both are empirically based
- Some consider evaluation an applied social science, one that draws evaluative conclusions about the delivery and effectiveness of interventions that have social and economic objectives
- Others are clear it is not simply the application of social science methods to solve social problems.
Slide 6: Evaluation quality and rigour
What type of evaluation is best?
- One size does not fit all
- Evaluation is pluralistic: a range of designs, methods and approaches can be appropriate
- The validity of an evaluation needs to be judged relative to its purpose
Slide 7: Evaluation quality and rigour
Many ways to establish quality and rigour
- Trustworthy evaluation needs designs and methods that are unbiased, precise, conceptually sound, and repeatable
- Strong advocacy for quantitative (quasi-)experimental designs and a focus on counterfactual causality, e.g. use of the Maryland Scale (counterfactual strength, from 1: before/after or with/without comparison, to 5: RCT)
- But quality and rigour are not methodologically dependent, nor absolute: ‘right rigour’
  - Non-experimental methods can be rigorous
  - Mixed methods may confer added rigour
Slide 8: Evaluation quality and rigour
Causal inference: the counterfactual to counterfactuals
- Generative frameworks: theory-based approaches that test and confirm causal processes; Theories of Change, Contribution Analysis, Process Tracing, Realist Evaluation
- Comparative frameworks: case-based approaches that compare causal factors across and within cases; QCA, meta-ethnography
- Participatory frameworks: validation by participants of the effect caused by an intervention
- Complexity: theory-based approaches, developmental evaluation, systems thinking, modelling, Bayesian analysis

Realist evaluation draws on a generative notion of causation, involving an iterative process of theory building, testing and refinement that allows causal statements about attribution to be made. Evaluation findings should demonstrate what worked, for whom, how, and in what circumstances.

The need for small n approaches arises when data are available for only one or a few units of assignment, so that experiments or quasi-experiments, which test for statistical differences in outcomes between treatment and comparison groups, are not possible. For large n analyses, experiments provide a powerful tool for attributing cause and effect. The basis for experimental causal inference is the manipulation of one (or more) putative causal variables and the subsequent comparison of observed outcomes for the group receiving the intervention (the treatment group) with those for a control group that is similar in all respects except that it has not received the intervention (Duflo et al., 2008; White, 2011).

The small n approaches outlined above draw on a different basis for causal inference. They set out to explain social phenomena by examining the underlying processes or mechanisms that lie between cause and effect. Whereas experimental approaches infer causality by identifying the outcomes resulting from manipulated causes, a mechanism-based approach searches for the causes of observed outcomes. Theory-based and case-based approaches are especially suited to unpicking ‘causal packages’ (how causal factors combine) and to establishing what the contribution of an intervention might be; however, they are not good at estimating the quantity or extent of that contribution. Their overarching aim is to build a credible case demonstrating a causal relationship between an intervention and observed outcomes. Mohr (1999) suggests that a medical diagnosis, or a detective investigating a case, is a good analogy for the process of elimination and accumulation of evidence by which causal conclusions are reached: multiple causal hypotheses are investigated and critically assessed, and evidence is built up to demonstrate the different connections in the causal chain, with the ultimate goal of providing sufficient proof to demonstrate a plausible association, as in Contribution Analysis, or to substantiate a causal claim “beyond reasonable doubt”.
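To make the two logics of causal inference concrete, here is a minimal, hypothetical sketch (not from the presentation; all data and probabilities are invented for illustration). The first snippet shows the large-n experimental logic: the effect is estimated as the difference in mean outcomes between a treatment group and a control group. The second shows the small-n, evidence-accumulation logic in a simple Bayesian form, in the spirit of the detective analogy: each piece of evidence updates confidence in a causal claim.

```python
# Sketch of the large-n experimental logic; all data are hypothetical.
import random

random.seed(42)

n = 1000
# Hypothetical trial: each unit is randomly assigned to treatment (1) or control (0).
assignment = [random.randint(0, 1) for _ in range(n)]

# Hypothetical outcomes: a baseline plus noise, plus a true effect of +2.0 for treated units.
outcomes = [10 + random.gauss(0, 3) + (2.0 if t == 1 else 0.0) for t in assignment]

treated = [y for y, t in zip(outcomes, assignment) if t == 1]
control = [y for y, t in zip(outcomes, assignment) if t == 0]

# The experimental estimate of the average treatment effect is simply the
# difference in mean outcomes between the two groups.
ate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Estimated average treatment effect: {ate:.2f} (true effect used to generate the data: 2.0)")
```

```python
# Sketch of the small-n, evidence-accumulation logic (e.g. Bayesian process tracing);
# the probabilities are hypothetical and purely illustrative.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of the causal claim after observing one piece of evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

claim_probability = 0.5  # start agnostic about the causal claim
# Hypothetical evidence items: (probability of seeing this evidence if the claim is true,
# probability of seeing it if the claim is false).
evidence = [(0.9, 0.4), (0.8, 0.3), (0.7, 0.5)]

for p_true, p_false in evidence:
    claim_probability = bayes_update(claim_probability, p_true, p_false)

print(f"Confidence in the causal claim after all evidence: {claim_probability:.2f}")
```

Neither snippet is a real evaluation design; they only illustrate why the two approaches answer different questions: the first estimates how much effect an intervention had, the second builds a graded level of confidence in whether and how it contributed.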
Slide 9: The Value of Evaluation
Not just based on rigour
- Cost:benefit of evaluation: maximise the marginal value of new knowledge
- Benefit is greatest where:
  - uncertainty is high
  - the evidence levers high numbers (people benefitted, £s budgeted)
  - uptake pathways are clear (comms, audience, opportunity)
- Proportionate cost
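As a hypothetical illustration of this cost:benefit logic (the figures below are invented, not from the presentation), an evaluation is most worth commissioning when uncertainty is high and the findings could shift a large budget:

```python
# Back-of-the-envelope sketch; all figures are hypothetical.
evaluation_cost = 200_000            # cost of the evaluation (GBP)
programme_budget = 20_000_000        # spend the findings could influence (GBP)
p_findings_change_decision = 0.3     # higher where uncertainty is high and uptake pathways are clear
gain_if_decision_changes = 0.05      # assumed improvement as a share of the budget

expected_benefit = programme_budget * p_findings_change_decision * gain_if_decision_changes
print(f"Expected benefit: £{expected_benefit:,.0f} vs evaluation cost: £{evaluation_cost:,.0f}")
print("Expected benefit exceeds cost:", expected_benefit > evaluation_cost)
```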
Slide 10: Challenges in Evaluation
- Rigour across methods and approaches:
  - Open debate on quality in evaluation
  - A more evaluation-oriented version of the Maryland Scale
  - Method wars? Pax methodologica?
- More complex evaluands
- Funding for evaluation is patchy (pros & cons)
- Maximising the value of evaluation: communication & uptake
Slide 11: Resources
Alternative approaches to causality:
- The DFID-funded ‘Stern report’ on methods for impact assessment [
- White & Phillips, 3ie, ‘Attribution in small n impact evaluations’ [ paper_15.pdf]