EVAL 6000: Foundations of Evaluation

EVAL 6000: Foundations of Evaluation Final lecture!

(Semi) In-Depth Examination of Five Evaluation Approaches
Utilization-focused evaluation
Participatory evaluation
Theory-driven/theory-based evaluation
CIPP model for evaluation
Consumer-oriented evaluation (Scriven's Key Evaluation Checklist approach)

To facilitate a clearer understanding of these evaluation approaches, we will use Heifer Project International (HPI) as a case example to provide context and to discuss how these approaches might be applied in practice

Heifer Project International (HPI)
Aim is to reduce poverty, hunger, and social inequities through strategies aimed at creating self-reliance rather than providing short-term relief
"Passing on the gift" is one of the unique attributes that sets Heifer apart from other international development initiatives

HPI Goals, Values, Cornerstones, and Indicators
Goals: Food & Income Security; Resource Sharing (POG); Environmental Protection; Education & Empowerment; Policy, Practice, & System Change; Relationships; Fostering Cornerstones
Values: Basic Needs; Livestock Care & Management; Environment Care & Management; Education; Empowerment; System & Policy Improvement
Cornerstones: Passing on the Gift; Accountability; Sharing & Caring; Sustainability & Self-Reliance; Improved Animal Management; Nutrition & Income; Gender & Family Focus; Genuine Need & Justice; Improved Environment; Full Participation; Training & Education; Spirituality
Indicators: Food Security; Income; Gender Equity; Organizing and Action for Social Change; Strengthening Communities; Policy Change

Utilization-Focused Evaluation (UFE)
Evaluation done for and with specific intended primary users for specific, intended uses
Premised on the assertion that evaluations should be judged by their utility and actual use

Utilization-Focused Evaluation (UFE)
Evaluator is charged with giving careful consideration to how everything that is done, from beginning to end, will affect use
Is personal and situational, with strong emphasis on the "personal factor"

Utilization-Focused Evaluation (UFE)
Does not give primacy to any specific method, model, approach, or ideological orientation (with the exception of an emphasis on use)
Does emphasize The Program Evaluation Standards as a basis for accountability and quality assurance

Utilization-Focused Evaluation (UFE)
Advance organizers
- What decisions, if any, are the evaluation findings expected to influence?
- When will decisions be made? By whom? When, then, must the evaluation findings be presented to be timely and influential?
- What is at stake in the decisions? For whom? What controversies or issues surround the decision?
- What is the history and context of the decision-making process?
- What other factors (values, politics, personalities, promises already made) will affect the decision making?

Utilization-Focused Evaluation (UFE)
Advance organizers, continued
- How much influence do you expect the evaluation to have, realistically?
- To what extent has the outcome of the decision already been determined?
- What data and findings are needed to support decision making?
- What needs to be done to achieve that level of influence?
- How will we know afterward if the evaluation was used as intended?

Participatory Evaluation
An extension of the more restrictive stakeholder-based approach (with elements of UFE)
Emphasis on increasing use through participation
Includes aspects of organizational learning and capacity building through stakeholder participation

Participatory Evaluation
Evaluator is a coordinator and responsible for technical support, training, and quality control
Ultimately, the evaluator works collaboratively/in partnership with a select group of intended users

Participatory Evaluation
Two primary forms
- Practical participatory evaluation (PPE): utilization-oriented (with an emphasis on formative evaluation)
- Transformative participatory evaluation (TPE): democratic, emancipatory, empowerment-oriented

Participatory Evaluation
- Who controls? Technical decision making (evaluator vs. stakeholder)
- Who is selected for participation? Stakeholders selected for participation (diverse vs. limited)
- How deep? Stakeholder participation (involved in all aspects of inquiry vs. involved as a source for consultation)

Original dimensions of PPE

Modified dimensions of PPE (Cullen, 2010)

Theory-Driven/Based Evaluation
Any evaluation strategy or approach that explicitly integrates and uses stakeholder theory, social science theory, other types of theories, or some combination of these in conceptualizing, designing, conducting, interpreting, and applying an evaluation

Theory-Driven/Based Evaluation
Sometimes referred to as program-theory evaluation, theory-based evaluation, theory-guided evaluation, theory-of-action, theory-of-change, program logic, logical frameworks, outcomes hierarchies, realist or realistic evaluation, and program theory-driven evaluation science

Theory-Driven/Based Evaluation
All, in some form or another, aim to determine how, why, when, and for whom a program works and under what conditions (i.e., causal explanation)

Core Principles and Subprinciples of Theory-Driven Evaluation
1. Theory-driven evaluations/evaluators should formulate a plausible program theory
   a. Formulate program theory from existing theory and research (e.g., social science theory)
   b. Formulate program theory from implicit theory (e.g., stakeholder theory)
   c. Formulate program theory from observation of the program in operation/exploratory research (e.g., emergent theory)
   d. Formulate program theory from a combination of any of the above (i.e., mixed/integrated theory)
2. Theory-driven evaluations/evaluators should formulate and prioritize evaluation questions around a program theory
   a. Formulate evaluation questions around program theory
   b. Prioritize evaluation questions
3. Program theory should be used to guide planning, design, and execution of the evaluation under consideration of relevant contingencies
   a. Design, plan, and conduct evaluation around a plausible program theory
   b. Design, plan, and conduct evaluation considering relevant contingencies (e.g., time, budget, use)
   c. Determine whether evaluation is to be tailored (i.e., only part of the program theory) or comprehensive
4. Theory-driven evaluations/evaluators should measure constructs postulated in program theory
   a. Measure process constructs postulated in program theory
   b. Measure outcome constructs postulated in program theory
   c. Measure contextual constructs postulated in program theory
5. Theory-driven evaluations/evaluators should identify breakdowns, side effects, determine program effectiveness (or efficacy), and explain cause-and-effect associations between theoretical constructs
   a. Identify breakdowns, if they exist (e.g., poor implementation, unsuitable context, theory failure)
   b. Identify anticipated (and unanticipated), unintended outcomes (both positive and negative) not postulated by program theory
   c. Describe cause-and-effect associations between theoretical constructs (i.e., causal description)
   d. Explain cause-and-effect associations between theoretical constructs (i.e., causal explanation)
      i. Explain differences in direction and/or strength of relationship between program and outcomes attributable to moderating factors/variables
      ii. Explain the extent to which one construct (e.g., intermediate outcome) accounts for/mediates the relationship between other constructs
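To make principles 1 and 4 more concrete, the sketch below shows one way a simplified program theory for a case like HPI could be written down as a small data structure and then used to list the constructs an evaluation would need to measure. It is a minimal, hypothetical illustration: the ProgramTheory and Construct classes, the construct names, and the causal links are assumptions made for this example, not HPI's actual program theory or a standard evaluation tool.

# Minimal sketch (Python): representing a simplified, hypothetical program theory.
# Construct names and links below are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class Construct:
    name: str
    kind: str  # "process", "outcome", or "contextual"

@dataclass
class ProgramTheory:
    constructs: dict = field(default_factory=dict)
    links: list = field(default_factory=list)  # (cause, effect) pairs

    def add(self, name, kind):
        self.constructs[name] = Construct(name, kind)

    def link(self, cause, effect):
        self.links.append((cause, effect))

    def measurement_plan(self):
        # Principle 4: every construct postulated in the theory should be measured
        return {kind: [c.name for c in self.constructs.values() if c.kind == kind]
                for kind in ("process", "outcome", "contextual")}

hpi = ProgramTheory()
hpi.add("Livestock and training provided", "process")
hpi.add("Passing on the gift", "process")
hpi.add("Household income", "outcome")
hpi.add("Food security", "outcome")
hpi.add("Local market conditions", "contextual")

hpi.link("Livestock and training provided", "Household income")
hpi.link("Household income", "Food security")
hpi.link("Passing on the gift", "Food security")

print(hpi.measurement_plan())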

CIPP Model for Evaluation
The model's core concepts are denoted by the acronym CIPP, which stands for evaluations of an entity's context, inputs, processes, and products
Generally targeted toward program managers and other decision makers
http://www.wmich.edu/evalctr/archive_checklists/cippchecklist_mar07.pdf

CIPP Model for Evaluation
Context evaluations are applied to assess needs, problems, assets, and opportunities, plus relevant contextual conditions and dynamics, to help decision makers define goals and priorities and to help the broader group of users judge goals, priorities, and outcomes
Input evaluations serve program planning by helping identify and then assess alternative approaches, competing action plans, staffing plans, and budgets for their feasibility and potential cost-effectiveness to meet targeted needs and achieve defined goals

CIPP Model for Evaluation
Process evaluations are used to assess the implementation of plans to help staff carry out activities and later to help the broad group of users judge program implementation and expenditures and also interpret outcomes
Product evaluations are used to identify and assess costs and outcomes (intended and unintended, short-term and long-term) and may be divided into assessments of impact, effectiveness, sustainability, and transportability

The Relevance of Four Evaluation Types to Formative and Summative Evaluation Roles
Formative evaluation (prospective application of CIPP information to assist decision making and quality assurance):
- Context: Guidance for determining areas for improvement and for choosing and ranking goals (based on assessing needs, problems, assets, and opportunities, plus contextual dynamics)
- Input: Guidance for choosing a program strategy (based on identifying and assessing alternative strategies and resource allocation plans); examination of the work plan
- Process: Guidance for implementing the operational plan (based on monitoring and judging activities and delivering periodic evaluative feedback)
- Product: Guidance for continuing, modifying, adopting, or terminating the effort (based on assessing outcomes and side effects)
Summative evaluation (retrospective use of CIPP information to sum up the effort's merit, worth, probity, equity, feasibility, efficiency, safety, cost, and significance):
- Context: Comparison of goals and priorities to assessed needs, problems, assets, opportunities, and relevant contextual dynamics
- Input: Comparison of the program's strategy, design, and budget to those of critical competitors and to goals and targeted needs of beneficiaries
- Process: Full description of the actual process and record of costs; comparison of the designed and actual processes and costs
- Product: Comparison of outcomes and side effects to goals and targeted needs and, as feasible, to results of competitive programs; interpretation of results against the effort's assessed context, inputs, and processes

Consumer-Oriented Evaluation
Predicated on "values" and "valuing"
Values (a.k.a. criteria and standards) brought to bear are derived from multiple sources (e.g., definitional, needs of impacted population, legal, ethical, functional/logical)
Targeted toward those affected by programs (i.e., consumers)
http://www.wmich.edu/evalctr/archive_checklists/kec_feb07.pdf

Consumer-Oriented Evaluation
Requires evaluators to investigate values in terms of process, outcomes, costs, comparisons, and generalizability under the "Subevaluations" checkpoints in the Key Evaluation Checklist (KEC)
Explicit integration of empirical "facts" with values (i.e., the fact-value synthesis) as well as the integration of multiple values (i.e., the value synthesis)

Consumer-Oriented Evaluation
Organized around 15 checkpoints
Preliminaries
- Executive summary
- Preface
- Methodology
Foundations
- Background and context
- Descriptions and definitions
- Consumers (impactees)
- Resources (a.k.a. "strengths assessment")
- Values

Consumer-Oriented Evaluation
Checkpoints, continued
Subevaluations
- Process
- Outcomes
- Costs
- Comparisons
- Generalizability
Conclusions & Implications
- Synthesis
- Recommendations & Explanations (possible)
- Responsibility & Justification (possible)
- Report & Support
- Metaevaluation