Ten years of Evaluability at the IDB Yuri Soares, Alejandro Pardo, Veronica Gonzalez and Sixto Aquino Paris, 16 November, 2010.


Preliminaries Evaluability is the “ability of an intervention to demonstrate in measurable terms the results it intends to deliver” (IDB, 2002, 2006, 2010). In terms of Development Effectiveness, the Bank’s capacity to manage for results depends to a great extent on having operations with the characteristics needed for results to be measured, as well as on an understanding of the main factors affecting the process by which those results are generated.

IDB Experience The IDB’s Evaluation Office (OVE) has produced three assessments of project evaluability: 2001, 2005, and 2009. These exercises systematically reviewed the universe of IDB projects approved during those years, so the institution now has longitudinal data on project evaluability. The 2005 Evaluability Report recommended that evaluability standards be introduced as a criterion for project approval. This recommendation was adopted and included in the institution’s mandate as part of its Ninth General Capital Increase (2010). The institution is currently moving toward implementing this mandate, and OVE has been instructed to perform evaluability assessments on a yearly basis.

Evaluability Dimensions Evaluability is assessed along nine dimensions, grouped into substantive and formal dimensions. Substantive dimensions assess the proper identification of, and linkages between, the conceptual elements of an intervention. Formal dimensions cover the “classical” measures of evaluability, such as the identification of indicators and baselines.
Substantive: diagnosis, objectives, logic, assumptions and risks.
Formal: output baselines, outcome baselines, output indicators, outcome indicators, monitoring and evaluation.

Substantive Dimensions
- Diagnosis: evidence-based identification of the problem and its root causes.
- Objectives: identification of what the project expects to achieve. Objectives must be S.M.A.R.T. (Specific, Measurable, Agreed upon, Realistic, Temporal).
- Logic: why this particular intervention and not others? Causal chain: components → create conditions → produce outputs → achieve outcomes.
- Risks: quality of the analysis identifying assumptions and risks, and of risk evaluation, follow-up, and mitigation.

Formal Dimensions
- Outcome indicators: measures of the results expected during and/or at the end of the project.
- Output indicators: measures of the products expected to be delivered as part of the operation.
- Indicators must be mutually exclusive, valid, adequate, and reliable.
- Baselines for outcomes: ex-ante assessments of the conditions expected to change as a result of the project.
- Baselines for outputs: ex-ante assessments of the goods and services present prior to the project.
- Monitoring and evaluation: identification of the systems and resources for data collection.

Protocol To ensure the proper application of the exercise, a protocol was designed and implemented, consisting of three steps:
- Write-up of findings. Project assessments are done by peer review. Reviewers meet, discuss the proposed operation, and produce a note reporting findings on each evaluability dimension.
- Collegial review. The findings of the note are then discussed by a collegial group in the office. The same group reviews all projects to ensure consistency across projects.
- Scoring. Once a final draft is produced, the team and the collegial group agree on a score for each dimension, on a 1-4 scale with two adequate and two inadequate categories, based on a scoring guide.
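For illustration only, here is a minimal sketch in Python of how the nine dimension scores from this protocol might be recorded and classified. The dimension names follow the slides; the mapping of scores 3-4 to "adequate" and 1-2 to "inadequate" is an assumption about how the two adequate and two inadequate categories align with the scale, not OVE's actual scoring guide.

    # Sketch of the 1-4 evaluability scoring described above; the
    # adequacy cutoff (>= 3) is an assumption, not the official guide.
    DIMENSIONS = [
        "diagnosis", "objectives", "logic", "assumptions_and_risks",   # substantive
        "output_baselines", "outcome_baselines", "output_indicators",  # formal
        "outcome_indicators", "monitoring_and_evaluation",
    ]

    def classify(scores):
        """Label each dimension's 1-4 score as adequate or inadequate."""
        labels = {}
        for dim in DIMENSIONS:
            score = scores[dim]
            if not 1 <= score <= 4:
                raise ValueError(f"{dim}: score must be 1-4, got {score}")
            labels[dim] = "adequate" if score >= 3 else "inadequate"
        return labels

    # Hypothetical project: strong overall, weak on outcome baselines.
    scores = {d: 3 for d in DIMENSIONS}
    scores["outcome_baselines"] = 2
    print(classify(scores)["outcome_baselines"])  # -> inadequate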

Principles The method adheres to a series of principles:
- Quality. Reviews are done by peers, including staff with knowledge of the sector and the problem being addressed.
- Independence. To avoid conflicts of interest, staff who may have had involvement in a project do not participate in its review.
- Accountability. A manager oversees the overall exercise and is responsible for its quality.
- Consistency. All reviews are validated by the same collegial group, so as to ensure consistency across different project reviews.

Results

An operations microscope The review tracks the evaluability of the Bank’s principal production function: the design of lending operations. This can provide insights into specific problems encountered in projects in order to improve them. For example:
- Identify what efforts are needed to ensure that problem situations are accurately dimensioned and that the Bank has evidence regarding the aptness of its intervention models.
- Identify overlooked risks that can affect the viability of interventions, in order to ensure better risk management.
- Address and quantify “classical” evaluability questions, such as whether outcome indicators are adequately defined and sufficient for the scope of the operation’s objectives, and whether they have baseline data. These questions have clear implications for the institution’s monitoring and data collection efforts.

An institution microscope Our experience with evaluability shows that it can also be a valuable tool for looking at how the Bank operates. We have done this by analyzing the link between evaluability trends and:
- changes in the quality review process,
- variations in the incentives faced by teams and managers,
- the impact of organizational changes on the quality of projects,
- the allocation of staff and resources, and
- fluctuations in the lending portfolio.

Oversight example. In 2005 and in 2009, the IDB analyzed the functioning of the institution’s quality control function. Findings included that managers, Bank committees, and peer review instances were not providing sufficient guidance to project teams, and in almost no case were they providing oversight of evaluability-related issues of project design. Divisions formally responsible for quality control were also mostly absent from the review process.
National systems example. The 2005 assessment found evidence that the institution’s bureaucratic incentives were not aligned with those of borrowers. OVE recommended using evaluability to determine the Bank’s value added in project design, in order to inform decisions on increased use of national planning systems.
Risk example. In 2005 and in 2009, the evaluability assessment looked at how the institution managed risk at the project level. It found that for sovereign-guaranteed operations, issues of risk were rarely identified, while for private sector operations risks were always front and center but were focused exclusively on financial risks, responding to repayment-related incentives.

Concluding remarks Evaluability provides CEDs with an opportunity to play an important role in assessing and influencing the way development institutions discharge their mandates. Evaluability assesses critical elements of the quality of an organization’s work as it relates to its capacity to manage for results. Evaluability can also be used as a tool for looking at how institutions organize themselves and for steering critical institutional improvement processes.