When are Impact Evaluations (IE) Appropriate and Feasible?
Michele Tarsilla, Ph.D.
InterAction IE Workshop, May 13, 2013

A Few Initial Remarks
- Impact evaluation is only one of several types of international development evaluation.
- Definitions of IE vary (e.g., between the OECD-DAC and the "randomistas" movement), and clear specifications are often lacking.
- There is disagreement within the development community over the most appropriate design or method to use in an IE.
- IE are sometimes feasible and appropriate, yet the questions they would answer are not relevant.

When are IE Appropriate?
- When there is a real need for new and original information on an intervention's effects. Examples: the extent to which an untested hypothesis holds (e.g., for an innovative or pilot project); which combination of activities, or which dosage of the intervention, contributes most to the intended impacts (see the sketch after this list).
- When the time elapsed between baseline and follow-up is sufficient (it does not always need to be 4-5 years).
- Not appropriate when essential goods or services would be denied to the comparison group.
- When data quality review is ensured throughout data collection.
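To make the combination/dosage point concrete, here is a minimal sketch in Python, assuming a continuous outcome measured in three hypothetical arms (low dose, high dose, combined package). All arm names and numbers are illustrative assumptions, not figures from the workshop.

```python
# Illustrative sketch only: comparing a hypothetical outcome across three
# intervention arms ("dosage"/"combination" options). All names and data
# are assumptions, not figures from the workshop.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical outcome scores for three arms
low_dose = rng.normal(loc=50, scale=10, size=200)
high_dose = rng.normal(loc=54, scale=10, size=200)
combined = rng.normal(loc=57, scale=10, size=200)

# One-way ANOVA: do mean outcomes differ across arms at all?
f_stat, p_anova = stats.f_oneway(low_dose, high_dose, combined)
print(f"ANOVA across arms: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise comparison of the two strongest arms
t_stat, p_pair = stats.ttest_ind(combined, high_dose)
print(f"Combined vs. high dose: t = {t_stat:.2f}, p = {p_pair:.4f}")
```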

When are IE Appropriate?
- When the right sampling frame is in place and the target population has been adequately identified (IE often miss the effects of an intervention on the most marginalized, invisible, or mobile populations).
- When both a communication strategy and a dissemination strategy are in place: because IE strive to contribute to public knowledge, effects and results need to make sense to the general public (effect size vs. statistical significance; see the sketch after this list).
- When the intervention being evaluated is linear and not too complex or multi-level (recursive logic models or multiple causal pathways for each individual).
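The effect size vs. statistical significance contrast in the second bullet can be shown with a quick calculation. The sketch below uses hypothetical numbers (not workshop data) to show how a very large sample yields a tiny p-value even when the standardized effect size is negligible for practical or public-communication purposes.

```python
# Illustrative sketch: with a very large sample, a practically negligible
# difference can still be "statistically significant". All numbers are
# hypothetical and chosen only to make the contrast visible.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 50_000                                           # very large sample per group
control = rng.normal(loc=100.0, scale=15.0, size=n)
treated = rng.normal(loc=100.5, scale=15.0, size=n)  # tiny true difference

t_stat, p_value = stats.ttest_ind(treated, control)

# Cohen's d: standardized mean difference, a common effect-size measure
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

print(f"p-value   = {p_value:.1e}  (statistically 'significant')")
print(f"Cohen's d = {cohens_d:.3f}  (a negligible effect in practical terms)")
```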

When are IE Appropriate?
- When a good monitoring system is in place (monitoring data allow opening the so-called "black box": the how and why of the observed results).
- When selection bias and other threats to validity have been adequately addressed (see the sketch after this list).
- When triangulation is pursued systematically in the course of data collection.
- When no major deviation from the implementation strategy is envisaged and no major adjustments are expected during implementation.
- When the context and other environmental factors influencing the impact are qualified.
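One technique often used to address selection bias when randomization is not possible is difference-in-differences. The sketch below is not the presenter's method, just one common option, and all numbers are hypothetical group means invented for illustration.

```python
# Illustrative sketch of difference-in-differences, one common way to net
# out selection bias when groups differ at baseline. All numbers are
# hypothetical means (e.g., household income), not workshop data.

treated_baseline, treated_followup = 40.0, 55.0
comparison_baseline, comparison_followup = 48.0, 58.0

# A naive post-only comparison confounds impact with pre-existing
# differences between the groups (selection bias)
naive_difference = treated_followup - comparison_followup             # -3.0

# Difference-in-differences removes time-invariant group differences
did_estimate = ((treated_followup - treated_baseline)
                - (comparison_followup - comparison_baseline))         # 15 - 10 = 5.0

print(f"Naive post-only difference: {naive_difference:+.1f}")
print(f"Difference-in-differences:  {did_estimate:+.1f}")
```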

When are IE Appropriate?
- Some exceptions to this exist (e.g., Familias en Acción in Colombia).
- When there is sufficient time for the findings the evaluation is likely to yield to inform the decisions that will need to be made.
- When there are rival plausible explanations for the observed results.

When are IE Feasible?
- When you have sufficient resources (money and technical expertise, ideally independent).
- When you have built sufficient support among local and national authorities.
- When implementing partners have a good understanding of the evaluation rationale, timeline, and practical implications (otherwise the risk is that the control or comparison group will sabotage the activity).
- When you have a baseline, or a baseline can be reconstructed (e.g., from medical records or data collected by other donors in the target areas of interest; see the sketch after this list).
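As a purely illustrative example of what "reconstructing" a baseline can mean in practice, the sketch below merges pre-existing records onto an evaluation sample. The column names, merge key, and values are assumptions made for illustration, not anything referenced in the workshop.

```python
# Illustrative sketch: "reconstructing" a baseline by merging pre-existing
# records (e.g., a clinic registry or another donor's survey) onto the
# evaluation sample. Toy data; column names and the merge key are
# assumptions made for illustration.
import pandas as pd

# Evaluation sample observed at follow-up
sample = pd.DataFrame({
    "household_id": [101, 102, 103],
    "outcome_followup": [62.0, 55.5, 70.1],
})

# Pre-intervention records obtained from an external source
records = pd.DataFrame({
    "household_id": [101, 102, 104],
    "outcome_baseline": [48.3, 50.0, 44.7],
})

# A left merge keeps the evaluation sample and attaches whatever baseline
# values can be recovered; unmatched households reveal the method's limits
reconstructed = sample.merge(records, on="household_id", how="left")
print(reconstructed)
```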

When are IE Feasible?
- When you have identified and tested for alternative explanations of the differences between the treatment and the comparison/control group (see the sketch after this list).
- When a clear theory of change is available, agreed upon, and understood by both the program/project team and the evaluation team.
- When an evaluability assessment has been conducted.
- When RFPs for evaluators and implementers clearly call for joint planning, joint data collection, and collaboration on evaluation-related activities.
- When incentives for implementers and evaluators are aligned with each other.
- Not feasible when the intervention is universal and no untreated comparison/control group can be identified.
- Not feasible when randomization is not possible (e.g., infrastructure projects).
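One simple way to probe alternative explanations is a baseline "balance check" between groups. The sketch below is a minimal illustration with hypothetical covariates and data; it is one possible check, not a procedure taken from the presentation.

```python
# Illustrative sketch: a baseline "balance check" between treatment and
# comparison groups, one simple way to probe rival explanations for any
# differences observed later. Covariate names and data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical baseline covariates (treatment group, comparison group)
baseline = {
    "household_size":  (rng.normal(5.1, 1.5, 300), rng.normal(5.0, 1.5, 300)),
    "years_schooling": (rng.normal(6.2, 2.0, 300), rng.normal(5.4, 2.0, 300)),
}

for covariate, (treatment, comparison) in baseline.items():
    t_stat, p_value = stats.ttest_ind(treatment, comparison)
    verdict = "possible imbalance" if p_value < 0.05 else "roughly balanced"
    diff = treatment.mean() - comparison.mean()
    print(f"{covariate:16s} diff = {diff:+.2f}  p = {p_value:.3f}  ({verdict})")
```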

Thank you! Contact info: Michele Tarsilla