From Monitoring Through Evaluation To Impact Assessment
The case of NHDRs and vulnerable groups
Andrey Ivanov, Human Development Adviser, Bratislava RSC
Main issues in this presentation
- Measuring what?
- Measuring how?
- The role of quantitative data
- The case of NHDRs
- Links to poverty measurements
Some terminological clarity
Three distinct (but often confused) concepts:
- Monitoring
- Evaluation
- Impact assessment
All three relate to quality, but to the quality of different aspects of the development process, and each has a different purpose, applicability and scope.
Measuring what?
- Monitoring: primarily of progress. Relatively easy if the major components are clear and traceable.
- Evaluation: of the process, but also of outputs and outcomes. Feasible if these elements are replicable and provide grounds for comparability.
- Impact assessment (IA): of long-term changes in the development situation. Difficult because of correlations and mutual influences.
Applicability: monitoring
Applicable to the process (NHDR elaboration) and to implementation (of both programs and projects). For that purpose the following are needed:
- Progress indicators related both to outputs and outcomes (reaching benchmarks)
- Consistency of procedures (administrative, accounting)
- Consistency of the process (participation, consultation of stakeholders, etc.)
Applicability: evaluation
Applicable to the process, but even more so to its outcomes (the NHDR itself). For that purpose the following are needed:
- Measurable outputs and outcomes (linked to benchmarks). But what is an "NHDR outcome"?
- Measurable inputs (to assess efficiency)
- Again, consistency of procedures and process (participation, consultation of stakeholders, etc.)
One common trap: aligning the evaluation to whatever indicators happen to be available, which biases it towards measuring inputs instead of outputs.
Applicability: impact assessment
Applicable only to outcomes (but what is the outcome: the NHDR itself, or the process?). "Sexy" but difficult to achieve.
- Applicable at different levels: that of programs (long-term change in the development situation) but also of projects (small-scale development interventions)
- The higher the level, the stronger the correlations
- Outcomes are not always quantifiable (what is quantifiable usually is not an outcome). And when an outcome is quantifiable, the data are often not available (disaggregated poverty or unemployment figures)
The common trap: broadly used as a term, but with vague conceptual justification and argumentation.
Why impact assessment in different areas?
Of legislation (regulatory IA):
- Could outline consistency with other pieces of legislation
- Could suggest additional areas needing legislative involvement
Of policies:
- Could outline consistency with priorities
- Could measure advances in strategic areas
In these two areas, IA serves as a kind of "advance warning" scenario building.
Of projects:
- Could measure the change after the intervention (and not just count the inputs)
- Could measure the efficiency of the process (to what extent better alternative solutions existed)
Impact assessment: NHDRs
What is the outcome and impact?
- A book? A number of pages?
- The process? A change in the paradigm?
Possible indicators:
- Press coverage?
- Content analysis of political documents?
- Policies implemented?
- Projects implemented?
- "Idea leakages"?
Impact assessment: the Roma Report
What is the outcome and impact?
- Again, a book? A knowledge site?
- The database behind it?
- A new attitude to quantitative information in Roma-targeted programs and projects?
- Again, a change in the paradigm?
Possible indicators (behind a "Policy Impact" Award):
- HR paradigm influenced
- Policies influenced
- Ideas "leaked"
What is the (often) existing practice?
When policies are concerned:
- Governments are usually convinced they possess "universal wisdom".
- Impact assessment is usually done post factum by opponents, and is politically biased.
When projects are concerned:
- Usually focused on "compliance with the budget"
- "Properly" defined objectives (easy to monitor)
- Confusion between means and objectives, outputs and outcomes
- A "static" approach is often applied
- "Corporate interest": often nobody really needs an adequate assessment
How to get there?
- Clearly defined objectives, clear and realistic targets
- Involvement of the target group and other beneficiaries
- Measurable baseline indicators
- Consistent and adequate data
- Constituencies genuinely interested in measuring the impact
- Taking externalities into consideration, both positive and negative
Example: employment generation projects
- Inputs: training courses, presentations, practical exercises
- Outputs: number of people who passed a requalification course
- Outcome: number of formerly unemployed people who found jobs
- Sustainability: duration of the jobs
- Impact: household incomes increased, poverty indicators improved
- Positive externalities: reduced drop-out rates, reduced societal fragmentation
Minimum necessary data:
- Employment status, unemployment rates, incidence of poverty, income levels, qualification levels
- The costs of "reaching the beneficiary", the costs of alternative approaches, opportunity costs
- The costs of no intervention at all
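The input-output-outcome-impact chain above can be sketched as a small computation over beneficiary records. This is a minimal illustration only: the field names and all figures are invented, not taken from the presentation or from any real project.

```python
# Sketch: computing output, outcome and sustainability indicators
# for a hypothetical employment generation project.
# All records and thresholds below are illustrative assumptions.

beneficiaries = [
    # (passed_course, found_job, months_employed, hh_income_before, hh_income_after)
    (True,  True,  14, 210, 340),
    (True,  True,   3, 180, 260),
    (True,  False,  0, 195, 195),
    (False, False,  0, 170, 175),
]

n = len(beneficiaries)
output = sum(1 for b in beneficiaries if b[0])           # requalified (output)
outcome = sum(1 for b in beneficiaries if b[1])          # found a job (outcome)
sustained = sum(1 for b in beneficiaries if b[2] >= 12)  # job held 12+ months (sustainability)
income_change = sum(b[4] - b[3] for b in beneficiaries) / n  # impact proxy

print(f"output (requalified):  {output}/{n}")
print(f"outcome (employed):    {outcome}/{n}")
print(f"sustained 12+ months:  {sustained}/{n}")
print(f"avg HH income change:  {income_change:.2f}")
```

The point of the sketch is that each level of the chain needs its own variable in the data: counting only `output` (people trained) says nothing about `outcome` or impact.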
Example: employment generation projects (data sources)
Possible sources:
- Targeted small-sample surveys
- Community-level data collection (not sample-based)
- Interviewing the beneficiaries about the specific intervention and how it influenced them
- Interviewing the other actors involved
- Comparing the trends within the group with the overall trends
In the end, the data should allow building alternative scenarios to compare against, and measuring the efficiency of the specific project.
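Comparing the trend within the beneficiary group against the overall trend amounts to a simple difference-in-differences calculation. A minimal sketch, with all rates invented for illustration:

```python
# Sketch: difference-in-differences comparison of the beneficiary
# group's employment trend against the overall trend.
# All percentages are invented assumptions, not real data.

# employment rates before and after the project, in percent
group_before, group_after = 22.0, 35.0        # beneficiary community
overall_before, overall_after = 40.0, 44.0    # overall population

group_trend = group_after - group_before         # change in the group
overall_trend = overall_after - overall_before   # general trend

# the part of the group's change not explained by the general trend
attributable = group_trend - overall_trend
print(f"change attributable to the project: {attributable:.1f} percentage points")
```

This is the "alternative scenario" logic in miniature: the overall trend stands in for what would likely have happened to the group without the intervention.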
Main conclusions
- A clear understanding of the differences between monitoring, evaluation and IA is a must
- The three should not be treated as substitutes for one another
- None of them should be susceptible to short-term political agendas
- Data is crucial. The quantitative components feeding the progress indicators should be embedded from the very beginning of the project
- A wide range of stakeholders should be involved
- Composite indices should be used carefully, for the evaluation of complex, multidimensional processes
What can we offer?
- Statistical capacity development support (the "Measuring Human Development" manual and KM)
- Targeted data collection (the "Vulnerable Groups" survey)
1. Methodological aspects of vulnerability research