Evaluability Assessments
An overview. Rick Davies, Budapest, 11 May 2015.
DME Objectives re Evaluability Assessment
To use Evaluability Assessments to identify:
- which of the measures planned in the operational programmes would be suitable for an evaluation,
- what additional information is necessary for us to be able to start the evaluation work,
- what methodology would be the most suitable to apply in the evaluation.
Seek clarifications on this statement.
Davies – Background experience
- M&E consultant since 1990, in international development aid
- Online consultation on criteria for evaluability of Theories of Change, 2012
- Evaluability Assessment of two DFID portfolios of projects
- Literature review of Evaluability Assessment, 2013
- Quality assurance of one Evaluability Assessment
- Evaluation journal article: Reflections on the review, 2015
Background to the literature review
- A DFID-funded review
- Following experiences with an EA of two portfolios of projects (VGAW, E&A)
- Online literature search in Nov 2012: 133 documents, published since 1979; 44% by international agencies
- Review focused on the latter
- Skype interviews with AusAID, IDRC, DFID, GAVI, IADB, NDC, USAID, UNEG
The Evaluability Assessment report
- Includes 18 recommendations about the use of Evaluability Assessments
- 3 checklists to aid Evaluability Assessment
- Outline structure for Terms of Reference for an Evaluability Assessment
- Report available online
- Bibliography available online
Other resources
- Davies, R., Payne, L. (2015). Evaluability Assessments: Reflections on a review of the literature. Evaluation 21, 216–231.
- Predicting evaluability: An example application of Decision Tree models. "Rick on the Road" blog, 2013.
- Criteria for Assessing the Evaluability of Theories of Change. "Rick on the Road" blog, 2011.
What is evaluability?
OECD definition: "The extent to which an activity or project can be evaluated in a reliable and credible fashion."
An Evaluability Assessment should examine evaluability:
- In principle, given the nature of the project design
- In practice, given the availability of data to carry out an evaluation and the systems able to provide it
It should also examine the likely usefulness of an evaluation, and its practicality.
Exercise 1: in pair discussions
- Identify projects you think should not have been evaluated, either at all or at that time
- Identify the reasons why you think this is so
- Share results with other pairs on the same table
How is it different from an evaluation?
- Judgements are not made about what has been achieved
- Assessments are made about the possibility of making such judgements, and their likely utility
- It is a meta-analysis: an analysis of the possibility of an analysis
Why bother?
- Avoid doing evaluations that will provide little added value
- Improve the value of evaluations that are carried out
When to do an Evaluability Assessment?
Four options:
- Prior to project approval (e.g. IADB)
- After project approval, prior to the M&E framework (e.g. AusAID)
- Prior to evaluations (DFID, USAID, and others)
- As a first step within an evaluation (USAID, DFID)
2012 Survey by Monk
What kinds of outputs are expected?
- Specific recommendations on:
  - Project design
  - M&E framework
  - Evaluation programming: if or when to evaluate
  - Design of specific evaluations
- Supporting analysis for Terms of Reference for:
  - M&E framework development
  - Evaluation proposals
Expectations of an EA need to be managed: it is not an evaluation and should not be confused with one. It is very important to emphasise the different expected outputs, and to be very clear about what you get out of an EA.
Management options
- Outsourcing: most common, but IADB is an exception
- Contracting: a separate contract from evaluations, but USAID has had exceptions
- Expertise: evaluation and subject-matter expertise most common; short desk studies the exception
- Coverage:
  - Full coverage – in specific domains only
  - Sampled coverage – IADB for a decade
  - Purposeful selection – e.g. large complex projects
Time and cost
Two basic approaches:
- Desk-based: 2 to 5 days per project
- On-site: 2 weeks or more per project
- Sets of projects can take up to 6 months
At what cost?
- Little data available in reports, but £4,000 minimum
- The ratio between the cost of the Evaluability Assessment and the cost of the evaluation is what matters: if the EA costs 10% of the evaluation, then only a small improvement in evaluation quality is needed to cover that cost (see the sketch below).
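A minimal sketch of this break-even logic; the budget figures below are illustrative assumptions, not data from the review:

```python
# Break-even logic for an Evaluability Assessment (EA).
# All figures are illustrative assumptions, not data from the review.

evaluation_budget = 40_000  # cost of the planned evaluation (GBP), assumed
ea_cost = 4_000             # cost of the EA (GBP), i.e. a 10% ratio

cost_ratio = ea_cost / evaluation_budget
print(f"EA/evaluation cost ratio: {cost_ratio:.0%}")

# The EA pays for itself if it improves the value delivered by the
# evaluation by more than this ratio, e.g. by steering it away from
# unanswerable questions.
print(f"Break-even improvement in evaluation value: {cost_ratio:.0%}")
```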
Staging
Lots of different stage models: see Annex D for 12 of them.
- Preparation:
  - Identify relevant stakeholders, and boundaries
  - Identify relevant documents, and boundaries
- Analysis of:
  - Theory of Change / project design
  - Data availability
  - Demand
  - Practicality
- Summarise key issues and recommendations
Analysis using checklists
- Widely used (11 of 19 agencies)
- They encourage:
  - comprehensive coverage of relevant issues,
  - visibility of those that are not covered.
- They can:
  - be used as stand-alone tools along with ratings, or
  - be supported by comment and analysis, or
  - play a more background role, informing the coverage of a detailed narrative report.
- Checklists can take the questions listed, separate them out, and provide a score and a note-taking space for answers to each, sometimes with annotations to advise on scoring.
Analysis using checklists and scoring
- Aggregation of judgements into a total score enables comparisons across projects and across time, and lessons to be learned (see the scoring sketch below)
- The use of minimum threshold scores is not advisable unless there are very good grounds for defining such a threshold
- Weighting should be explicit, and can be applied pre- or post-assessment
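A minimal sketch of how explicit weighting and score aggregation might work; the criteria, weights, and ratings below are illustrative assumptions, not the checklists from the report:

```python
# Sketch of weighted checklist scoring for an Evaluability Assessment.
# Criteria, weights, and ratings are illustrative assumptions only.

criteria_weights = {
    "clear_theory_of_change": 2.0,  # weights declared explicitly up front
    "baseline_data_available": 1.5,
    "stakeholder_demand": 1.0,
}

def weighted_score(ratings: dict) -> float:
    """Aggregate 0-4 criterion ratings into a single 0-100 score."""
    total = sum(criteria_weights[c] * r for c, r in ratings.items())
    maximum = 4 * sum(criteria_weights.values())
    return 100 * total / maximum

project_a = {"clear_theory_of_change": 3, "baseline_data_available": 1,
             "stakeholder_demand": 4}
project_b = {"clear_theory_of_change": 2, "baseline_data_available": 3,
             "stakeholder_demand": 2}

# Scores are comparable across projects and, if repeated, across time.
print(f"Project A: {weighted_score(project_a):.0f}/100")
print(f"Project B: {weighted_score(project_b):.0f}/100")
```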
Suggested Evaluability criteria
For Project Design
Suggested Evaluability criteria
For Information availability
Suggested Evaluability criteria
For Practicality & Demand
This is where engaging with stakeholders is especially important: not only on practicality issues but also on evaluation demand – potential usefulness – what sort of evaluation questions are stakeholders interested in?
[Reflect on and use results of first exercise]
- Which of these criteria do you think would be relevant if you were doing an Evaluability Assessment of ….
- Which of the relevant criteria would you give a higher than average weighting to, and why?
- What criteria are missing, and why are they important?
Case example 1: SIDA human rights project portfolio
- Focused on the logframe (i.e. Theory of Change) only
- Systematic rating of all 28 projects on 13 evaluability criteria
- Rating data made available in the report
- …enabling development of a simple predictive model of the evaluability status of a project (see the sketch below)
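A hedged sketch of how such a predictive model could be built from published ratings; the ratings and criterion names below are placeholders, and scikit-learn's DecisionTreeClassifier stands in for whatever tool was actually used:

```python
# Sketch: fit a decision tree to predict a project's evaluability status
# from its criterion ratings. The data here is invented for illustration;
# the real model used the 28 projects x 13 criteria ratings in the report.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row = one project's 0-4 ratings on three (of 13) criteria.
X = [
    [4, 3, 2],
    [1, 0, 2],
    [3, 4, 4],
    [0, 1, 1],
    [2, 2, 3],
    [1, 1, 0],
]
# 1 = evaluability higher than average, 2 = lower (as on the tree slides)
y = [1, 2, 1, 2, 1, 2]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[
    "clear_outputs", "causal_links_explicit", "indicators_defined"]))
```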
Evaluability Decision Tree
- Leaves (boxes) = predicted and actual evaluability status: 1 = predicted to be higher than average, 2 = lower
- Numbers underneath = actual number of observed cases
Evaluability Decision Tree
Numbers of projects in each category
One half of the original data set
Case example 2: DFID Empowerment & Accountability projects
- A policy area with no specific boundaries
- Very diverse interventions and contexts
- High-level Theory of Change with little detail about expected causal linkages
- High reliance on secondary data with variable coverage
- Most stakeholders were inaccessible
Case example 2: DFID Empowerment & Accountability projects
Conclusion: this portfolio is not currently evaluable.
Recommendations:
- Develop a database
- Identify sub-sets of projects with common outcomes
- Use inductive methods to identify causal configurations in those sub-sets
Risks and Concerns
- Conflation and confusion of purpose
- Evaluation overload
- Delay
- Additional cost
- Ineffectiveness
- Challenges to a basic project design that has already been approved
Additional material
Contents page of an Evaluability Assessment report
- Objectives of the Evaluability Assessment
- Scope of the inquiry
- Documents and people consulted
- Limitations
- Analysis:
  - Project design
  - Information availability
  - Demand
  - Practicality
- Recommendations: design, data, demand, practicalities
- Annexes:
  - Checklists and ratings data
  - Bibliography and persons consulted
Ex-Ante Evaluation overlaps with Evaluability Assessment
- Linkage between supported actions, expected outputs and results
- Relevance and clarity of proposed programme indicators
- Quantified baseline and target values
- Suitability of milestones
- Administrative capacity, data collection procedures and evaluation
- Consistency of financial allocations