1
Monitoring and Evaluation of Peacekeeping and Peacebuilding
Evaluating Peace Operations: Theoretical Perspectives and Operational Challenges
Benjamin de Carvalho, NUPI
Presentation at the Monitoring and Evaluation of Peacekeeping and Peacebuilding workshop, co-organized by the International Peace Institute (IPI) and the Norwegian Institute of International Affairs (NUPI)
New York, 7-8 May 2009
2
Overview of the Presentation
Why Evaluate Peacekeeping and Peacebuilding?
Perspectives on Evaluation
How, When and What to Evaluate in Peace Operations
Evaluating Peacebuilding in Complex and Unstable Environments: Challenges
3
Working definitions: Evaluation and Monitoring
Evaluation: "Systematic assessment of policies, programs or institutions with respect to their conception and implementation as well as the impact and utilization of their results" (Source: Paffenholz and Reychler)
Monitoring: "a continuing function that uses systematic collection of data on specified indicators" (Source: OECD)
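As a concrete illustration of the monitoring definition, the following minimal Python sketch tracks a single indicator over time against a baseline and a target. The indicator name, dates and figures are invented for illustration and are not from the presentation.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Indicator:
    # One monitored indicator: a baseline, a target, and dated observations.
    name: str
    baseline: float
    target: float
    observations: dict = field(default_factory=dict)  # maps date -> measured value

    def record(self, when: date, value: float) -> None:
        # Monitoring as a "continuing function": add one more data point.
        self.observations[when] = value

    def progress_to_target(self) -> float:
        # Share of the distance from baseline to target covered by the latest observation.
        if not self.observations:
            return 0.0
        latest = self.observations[max(self.observations)]
        return (latest - self.baseline) / (self.target - self.baseline)

# Hypothetical indicator and figures, for illustration only.
schools = Indicator(name="schools reopened in province X", baseline=10, target=60)
schools.record(date(2009, 1, 1), 22)
schools.record(date(2009, 4, 1), 35)
print(f"{schools.progress_to_target():.0%} of the way from baseline to target")  # 50%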
4
1. Why Evaluate and Monitor Peacekeeping and Peacebuilding?
5
Peacekeeping and Peacebuilding Evaluation: Some Historical Perspectives
Distinguish between evaluations of peacekeeping and peacebuilding (evaluating peacekeeping is less complex)
Peacebuilding from the late 1990s
Evolution of the peacebuilding field: after the 1990s, focus on 'lessons learned'
From more ad hoc in the 1990s to more professionalization today
Peacebuilding evaluation borrows heavily from the development field
6
Peacekeeping and Peacebuilding Evaluation: Some Historical Perspectives (cont’d)
Donors' increased focus on accountability, as they are reluctant to fund activities which do not have clear impact
More focus on impact and effectiveness
New Public Management
UN Security Council focuses increasingly on indicators of progress and figures
UN system: increased focus on joint planning and indicators of progress (IMPP)
7
2. Perspectives on Evaluation
8
What do we evaluate?
Evaluating Policies, Outcome(s) and Impact: effective if outcomes are defined and met
Evaluating Internal Process(es): effective if internal processes work
Symbolic Effect: effective if perceived as effective
System-wide Impact: effective if the system as a whole is progressing
9
What do we Evaluate? The results chain (Source: Spurk, IAM and OECD)
Impact (effects): overall effect
Outcome(s) (results): immediate results
Output(s): delivery of planned activities
Activities
Input
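The chain can also be read as an ordered structure to which indicators are attached at each level. The short Python sketch below does this; the glosses for Activities and Input and all example indicators are editorial assumptions, not from the slide.

# The results chain from the slide, from overall effect down to inputs,
# with one invented example indicator per level, purely for illustration.
RESULTS_CHAIN = [
    ("Impact",     "overall effect",                 "level of violence in the mission area"),
    ("Outcome(s)", "immediate results",              "change in reported incidents in trained districts"),
    ("Output(s)",  "delivery of planned activities", "number of police officers trained"),
    ("Activities", "what is actually done",          "training sessions held"),
    ("Input",      "resources committed",            "budget and staff assigned"),
]

for level, meaning, example_indicator in RESULTS_CHAIN:
    print(f"{level:<11} {meaning:<32} e.g. {example_indicator}")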
10
(i) Evaluating Outcome(s) and Impact
Most common form of evaluation
Sees activities and organizations as discrete activities and processes
Organization or activity is effective if it has clear objectives and if these are met
Focuses largely on statistical analysis
Can include cost-benefit analysis
11
(i) Challenges to Evaluating Outcome(s)
Outcomes and impact are often difficult to measure
Overly focused on statistical analysis?
Outcomes are not necessarily the result of the activity
Difficult to point at causality and a clear link between input and outcome/impact
12
(ii) Evaluating Internal Process(es)
Focus on processes within organizations
Organizations and processes as discrete entities
Effectiveness depends on whether internal processes work
Indicators of process: description of the process rather than documented effects
Often easier than evaluating outcomes
13
(ii) Evaluating Internal Process(es): Challenges
Assumes a clear link between internal processes, activities and outcomes
Can lead to too much emphasis on process at the expense of outcomes
Does not take into account coherence and coordination with other actors and processes
14
(iii) Assessing Symbolic Effect(s)
Sees organization and process as interacting with their environment
Effectiveness is about mastering signals and symbols, rather than substance
Internally: evaluation as a way of boosting morale (is what we are doing important?)
Externally: effectiveness depends on whether the organization or process is perceived as effective and legitimate
What to evaluate? Higher-order impact?
15
(iii) Assessing Symbolic Effect(s): Challenges
Little focus on actual change
Can lead to misguided resource allocation (if successes lead to continued activity at the expense of other activities)
People's impressions can be misleading
16
(iv) System-wide Evaluation
Evaluation of the response by the whole system (higher-order impact)
Generally aggregates findings from a series of evaluations
Focus on the system as a whole
Effective if most outcomes and impacts are met?
Needs to be informed by a clear, theoretically founded "story" or "angle" in order to interpret findings
17
(iv) System-wide Evaluation: Challenges
Complex and must be undertaken as a joint evaluation
Actors have different timeframes, methodologies and methods, measures of effectiveness and benchmarks
Which actors to include?
How do we weight different or contradictory perspectives?
How do we take into account the self-interest of other evaluators?
18
3. Methods: When, How and What do we Evaluate?
19
When?
Before:
Baseline study
Ex-ante evaluation
During:
Real-Time Evaluation (RTE): peer review of a fast-evolving operation, undertaken at an early phase
Formative evaluation
Mid-term evaluation
After:
Summative evaluation
Ex-post evaluation
(Source: OECD 2002)
20
What Criteria should We use?
DAC Criteria for Evaluating Development Assistance:
Relevance
Efficiency
Effectiveness
Impact
Sustainability
ALNAP/OECD adaptation for conflict prevention and peacebuilding:
Relevance/Appropriateness
Connectedness
Coherence
Coverage
Efficiency
Effectiveness
Impact
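One way to operationalize these criteria is as a simple checklist or rubric that an evaluation team fills in criterion by criterion. The Python sketch below is a minimal illustration; the rating scale and the sample entry are hypothetical, not part of the DAC or ALNAP/OECD frameworks.

# The two sets of criteria as simple checklists; rubric entries are blank
# placeholders for an evaluation team to fill in.
DAC_CRITERIA = ["Relevance", "Efficiency", "Effectiveness", "Impact", "Sustainability"]
ALNAP_OECD_CRITERIA = [
    "Relevance/Appropriateness", "Connectedness", "Coherence",
    "Coverage", "Efficiency", "Effectiveness", "Impact",
]

def empty_rubric(criteria):
    # One entry per criterion, each with a rating and an evidence note to be filled in.
    return {c: {"rating": None, "evidence": ""} for c in criteria}

rubric = empty_rubric(ALNAP_OECD_CRITERIA)
# Hypothetical entry, for illustration only.
rubric["Coverage"]["rating"] = 3  # e.g. on a 1-5 scale defined by the evaluation team
rubric["Coverage"]["evidence"] = "field visits covered four of six districts"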
21
Considerations of Method
An overall initial understanding of the context and starting point is central, as getting it wrong will make it more difficult to evaluate
Evaluation must be planned at an early stage
Risk of being ad hoc and 'going through the motions'
Relevance of evaluation: an early focus on benchmarks is central. Benchmarks must relate to the 'angle', perspective or theory, and to the context
22
What to evaluate?
Process evaluation: an evaluation of the internal dynamics of implementing organizations
Programme evaluation: an evaluation of a set of time-bound interventions, marshalled to attain specific objectives
Cluster evaluation: an evaluation of a set of related activities, projects and/or programmes
System-wide evaluation: an evaluation of the response by the whole system to a particular disaster event or complex emergency, or of the system-wide impact of activities
(Source: OECD)
23
4. Challenges to the Evaluation of Peace Operations
24
Some technical challenges
Which methods should one apply?
How does one operationalize?
Difficult and expensive to gather data
How to understand effectiveness?
How to set goals?
How does one ensure learning?
25
Evaluating in complex and volatile settings
Baseline? Methods? Scope?
Indicators, evaluation criteria?
Purpose? Follow-up, lessons learned?
Are wide-ranging quantitative methods borrowed from the development field possible?
26
Setting a baseline
Is information about the status ex ante available?
Compatibility of information from other actors?
Is there time to conduct a baseline study?
Does the conflict environment allow for a wide-ranging study?
Formal vs. informal processes? Services and the economy are often informal during conflicts
Is data available when the provision of services has broken down? E.g. infant mortality, SGBV
Existing figures are often outdated
27
Methodological Issues
Quantitative vs. qualitative methods
Can we evaluate impacts (e.g. democracy) or should we stick to outcomes?
Are the underlying "theories of change" robust enough?
Can we expect evaluators to understand the conflict and context?
28
Do we evaluate too much or too little?
Do we evaluate too much? A focus on evaluation can come at the expense of activities and processes:
UNICEF: 2-5% of Country Program Funds
OCHA: goal of at least 1% of funding
UNHCR: almost 1% of total budget
Do we evaluate too little? Do we have too little information about the effectiveness of activities?
UN DPKO: 0.1% of total budget
UN DPA: 0.03%
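The gap between these shares is easiest to see as simple arithmetic on budget shares. The Python sketch below uses an entirely hypothetical total budget; only the percentages come from the slide.

def evaluation_share(evaluation_spend, total_budget):
    # Evaluation spending as a share of the total budget.
    return evaluation_spend / total_budget

# Entirely hypothetical figures, chosen only to illustrate the scale of the
# percentages quoted above (UNICEF 2-5%, OCHA at least 1%, DPKO ~0.1%, DPA ~0.03%).
total_budget = 100_000_000  # assumed USD 100 million total budget
print(f"{evaluation_share(2_000_000, total_budget):.2%}")  # 2.00%, the UNICEF lower bound
print(f"{evaluation_share(100_000, total_budget):.2%}")    # 0.10%, a DPKO-level share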
29
What happens to evaluations?
To what extent are they useful?
To what extent are they used to improve the process?
Are they used to legitimize decisions already taken?
Are they used as a bulwark against policy-makers and donors?
30
Broader Considerations
Many tools for evaluating impact come from the development field. Are these adequate for activities which are often more medium-term?
Evaluation of peacebuilding must rely on theories of change, which are at best educated guesses about peaceful development, and often wrong.
Peacebuilding is a young field. Do we have enough comparative material?
Do we set our expectations too high?