Failing Forward Global Overview

Presentation transcript:

Failing Forward Global Overview

What are we looking at? 114 CARE evaluations, 2015-2018. What evaluators told us went wrong. Qualitative coding with MAXQDA. Our meta-analysis is based on what evaluators reported as challenges, gaps, and lessons learned across 114 of CARE’s project evaluations, covering 113 projects across regions, sectors, and member partners. We selected final evaluations across CARE’s body of work written from 2015 to 2018, although a couple of mid-term reports were included where no final evaluation was available. We coded the information in MAXQDA based on what the evaluators told us went wrong in a project, then exported it to Excel for further analysis, visualization, and dissemination.
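
To make the Excel side of this workflow concrete, here is a minimal Python sketch of computing how many evaluations contain each failure category. The file name and the column names ("document" for the evaluation, "category" for the overarching failure category) are assumptions about the export layout, not CARE's actual files.

```python
import pandas as pd

# One row per coded segment; "document" identifies the evaluation and
# "category" the overarching failure category (assumed column names).
segments = pd.read_excel("coded_segments.xlsx")

n_evaluations = segments["document"].nunique()

# Share of evaluations containing at least one segment per category.
prevalence = (
    segments.groupby("category")["document"]
    .nunique()
    .div(n_evaluations)
    .mul(100)
    .round(1)
    .sort_values(ascending=False)
)
print(prevalence)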

Grouped into Overarching Categories Given the descriptive and varying nature of CARE’s evaluations, we felt that a qualitative analysis best captures project failure to promote learning. We initially looked at 7 overarching categories, each with a number of feeder codes, as well as failures around technology, failures indicating that CARE caused harm, and a miscellaneous label for anything else we identified. As you can see here, failures around implementation were present in 87.7% of the evaluations we looked at.

Each category made up of specific codes “While interventions around input challenges have been well-designed, more follow up was needed during the project to ensure the success of every intervention to its full potential.” Looking a little further into what that means, there were 10 feeder codes under implementation. Failures around quality of implementation were present in 60.5% of evaluations, and evaluators noted that we were missing key necessary activities in just over half of the evaluations we analyzed.

MAXQDA Coding For anyone not familiar with MAXQDA, this is what it looks like: it’s very simple to use; you just highlight a segment and drag the appropriate code onto it. As you can see here, the same segment could be, and frequently was, coded for multiple failures. When questions or reflections arose, it was simple to insert a memo for quick communication between team members. We merged codes every couple of weeks, and at the end of coding, before analysis, documents were organized into sets based on region and sector for ease of analysis. Additionally, after finishing coding, we identified which failures happened during the design phase and added the “design” code to these segments. 92.1% of evaluations analyzed indicated a failure during the design phase. After this code was added, we reached a total of 3,808 coded segments in our evaluations.
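
As an illustration of that post-hoc "design" tagging step, here is a hedged sketch done on the exported table rather than inside MAXQDA, which is where the team actually applied the code. The column names and the example code list are hypothetical.

```python
import pandas as pd

segments = pd.read_excel("coded_segments.xlsx")

# Hypothetical set of specific codes judged to reflect design-phase
# failures; in practice the team assigned the "design" code by hand.
design_phase_codes = {"example design code 1", "example design code 2"}
segments["design"] = segments["code"].isin(design_phase_codes)

# Share of evaluations with at least one design-phase failure.
share = (
    segments.loc[segments["design"], "document"].nunique()
    / segments["document"].nunique()
)
print(f"{share:.1%} of evaluations include a design-phase failure")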

Top 10 Failures Also at the global level, here were the top ten specific failures, color-coded by their overarching category. It’s interesting to note that all of these were present in over 40% of the analyzed evaluations.
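
A sketch of how such a color-coded top-ten chart could be rebuilt from the export, assuming the same hypothetical columns as above; the palette choice is ours, not the original deck's.

```python
import pandas as pd
import matplotlib.pyplot as plt

segments = pd.read_excel("coded_segments.xlsx")
n_evals = segments["document"].nunique()

# Share of evaluations per specific code, keeping its parent category.
top10 = (
    segments.groupby(["category", "code"])["document"]
    .nunique()
    .div(n_evals)
    .mul(100)
    .sort_values(ascending=False)
    .head(10)
    .reset_index(name="pct")
)

# One color per overarching category so bars inherit their family's hue.
palette = dict(zip(top10["category"].unique(), plt.cm.tab10.colors))
plt.barh(top10["code"], top10["pct"],
         color=[palette[c] for c in top10["category"]])
plt.xlabel("% of evaluations with this failure")
plt.gca().invert_yaxis()  # largest share at the top
plt.tight_layout()
plt.show()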

MEAL “The baseline data received by the Evaluation Team does not provide any information on the average income level at the project start.”

Regional Breakdowns

Sectoral Breakdowns

CMP Breakdowns

What do you want to see? Implementation, Partnership, Gender, MEAL, Scale, Caused Harm, Technology, Human Resources, Budget, Regional. Go to the slide based on what people want to see: Implementation – 16, MEAL – 11, Partnership – 12, Scale – 13, Gender – 14, HR – 15, Budget – 16, Repercussions.

Implementation “While interventions around input challenges have been well-designed, more follow up was needed during the project to ensure the success of every intervention to its full potential.”

Limitations & Lessons Learned Evaluator bias. Coder bias. Refine the codebook. We’re subject to both evaluator bias and coder bias. Reporting bias was evident when comparing the qualitative and quantitative reports from the same project: for one project where we analyzed both, the qualitative report yielded 65 coded segments for failure across 6 master buckets, as well as design failures, whereas the quantitative report had only 5 coded segments, all within the MEAL category. These two evaluations were coded by the same person on the same day, eliminating discrepancy from intercoder variability and limiting the influence of intracoder variability. Final evaluations are subject to the bias and focus of the evaluators, an important consideration when reviewing failure results. The project could have benefited from more time at the end to review our codes and our work as a team and collectively reach consensus on codes to establish more intercoder reliability. To refine the codebook: have all coders code the same set of evaluations, discuss the discrepancies, and edit the codebook where necessary. Establishing this clear foundation before starting requires a larger time contribution at the onset but will help mitigate confusion and discrepancies during the analysis stage.
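
The transcript recommends establishing intercoder reliability but does not name a statistic. One standard option is Cohen's kappa, sketched here with scikit-learn on made-up decisions by two coders about whether a given failure code applies to each segment.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-segment decisions by two coders for one failure
# code: 1 = code applies, 0 = it does not.
coder_a = [1, 0, 1, 1, 0, 1, 0, 0]
coder_b = [1, 0, 0, 1, 0, 1, 1, 0]

# Agreement corrected for chance: 1.0 = perfect, 0.0 = chance level.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")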

What’s Next? What do we do next with this? Who should see this? Where do we share this? Action steps?

“Blame is not for failure, it is for failing to help or ask for help.” – Jorgen Vig Knudstorp, CEO of Lego

The Web of Codes Here is a web of those codes, color-coded by region. It offers an interactive map, letting you zoom in on different failure categories or different projects and explore their connections.
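
One plausible way to derive such a web from the coded segments is a co-occurrence graph: codes applied to the same segment get an edge, weighted by how often they pair up. A sketch with networkx, assuming a hypothetical "segment_id" column; the original interactive map may have been built differently.

```python
from itertools import combinations

import networkx as nx
import pandas as pd

segments = pd.read_excel("coded_segments.xlsx")

# Codes applied to the same segment co-occur; count each pairing.
G = nx.Graph()
for _, codes in segments.groupby("segment_id")["code"]:
    for a, b in combinations(sorted(set(codes)), 2):
        weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)

# Strongest co-occurrences first.
edges = sorted(G.edges(data="weight"), key=lambda e: -e[2])
print(edges[:10])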

What does it take? 3 months once MAXQDA was introduced. 215 hours of coding data. 50-60 hours merging, organizing, and cleaning data. Once MAXQDA was introduced, the coding process took 3 months to complete, with roughly 215 hours of raw coding split between three coders; however, one coder had previously read many of the reports, decreasing her raw coding time. Merging, organizing, and cleaning the data took an additional 50-60 hours over the course of the project. We intended to review all codes as a team to decrease intercoder subjectivity, but due to time constraints we were not able to do this, and we reviewed only categories where we anticipated discrepancies.

Partnership “Sadly, at endline the relationship was severely stressed due to untimely resource distribution and misunderstandings regarding the reasons for delayed payments.”

Scale “Thus, for the majority of the villages in the study, access to school reverted back to conditions resembling those prior to the arrival of the … program. This is a troubling finding.”

Gender “Any future project should also improve the gender-responsive delivery by planning for mitigating activities to support women who face initial resistance from their spouse when joining the VSLG.”

HR “The overall team spirit and morale were low, and this was apparent in interviews, meetings and discussions.”

Budget “Delayed funds disbursal and hence recruitment of coordinators delayed commencement of activities.”