1 Failing Forward Global Overview

2 What are we looking at? 114 CARE evaluations, 2015-2018
What evaluators told us went wrong; qualitative coding with MAXQDA.
Our meta-analysis is based on what evaluators reported as challenges, gaps, and lessons learned across 114 of CARE's project evaluations, covering 113 projects across regions, sectors, and member partners. We selected final evaluations across CARE's body of work written from 2015 to 2018, although a couple of mid-term reports were included where no final evaluation was available. We coded the information based on what the evaluators told us went wrong in a project using MAXQDA, then exported the results to Excel for further analysis, visualization, and dissemination.
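To give a sense of what the Excel-side analysis can look like, here is a minimal Python sketch that computes how often each code appears across evaluations from an exported segment list. The file name and the "document"/"code" column names are hypothetical stand-ins, not MAXQDA's actual export layout.

```python
# Minimal sketch of the post-export prevalence analysis.
# Column names ("document", "code") are hypothetical placeholders;
# the real MAXQDA export layout depends on the options chosen.
import pandas as pd

# Each row is one coded segment: the evaluation it came from and the code applied.
segments = pd.read_excel("coded_segments.xlsx")

n_evaluations = segments["document"].nunique()

# Share of evaluations in which each code appears at least once.
prevalence = (
    segments.groupby("code")["document"]
    .nunique()
    .div(n_evaluations)
    .mul(100)
    .sort_values(ascending=False)
)
print(prevalence.round(1).head(10))  # e.g. a "top 10 failures" view
```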

3 Grouped into Overarching Categories
Given the descriptive and varying nature of CARE's evaluations, we felt that a qualitative analysis best captures project failure to promote learning. We initially looked at 7 overarching categories, each with a number of feeder codes, as well as failures around technology, failures indicating that CARE caused harm, and anything else we identified, which we labeled miscellaneous. As you can see here, failures around implementation were present in 87.7% of the evaluations we looked at.

4 Each category made up of specific codes
"While interventions around input challenges have been well-designed, more follow up was needed during the project to ensure the success of every intervention to its full potential."
Looking a little further into what that means, there were 10 feeder codes under Implementation. Failures around quality of implementation were present in 60.5% of evaluations, and evaluators noted that we were missing key necessary activities in just over half of the evaluations we analyzed.

5 MAXQDA Coding
For anyone not familiar with MAXQDA, this is what it looks like: it's very simple to use, you just highlight a segment and drag the appropriate code onto it. As you can see here, the same segments could be, and frequently were, coded for multiple failures. When questions or reflections arose, it was simple to insert a memo for quick communication between team members. We merged codes every couple of weeks, and at the end of coding, before analysis, documents were organized into sets based on region and sector for ease of analysis. Additionally, after finishing coding, we identified which failures happened during the design phase and added the "design" code to those segments; 92.1% of evaluations analyzed indicated a failure during the design phase. After this code was added, we reached a total of 3,808 coded segments in our evaluations.
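As a rough illustration of the post-coding organization described above, the sketch below merges hypothetical region/sector metadata onto exported segments and flags design-phase codes. The metadata file, column names, and the list of codes treated as design-phase failures are all placeholders, not the team's actual mapping.

```python
# Sketch of the post-coding step: grouping documents into region/sector
# sets and flagging design-phase segments. All file and column names
# are hypothetical placeholders.
import pandas as pd

segments = pd.read_excel("coded_segments.xlsx")
meta = pd.read_excel("evaluation_metadata.xlsx")  # columns: document, region, sector

segments = segments.merge(meta, on="document")

# Hypothetical mapping of which codes count as design-phase failures.
DESIGN_CODES = {"missing key activities", "poor targeting"}
segments["design"] = segments["code"].isin(DESIGN_CODES)

# Headline figure: share of evaluations with at least one design-phase failure.
design_share = (
    segments.loc[segments["design"], "document"].nunique()
    / segments["document"].nunique() * 100
)
print(f"design-phase failures: {design_share:.1f}% of evaluations")
print(f"total coded segments: {len(segments)}")
```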

6 Top 10 Failures Still looking at the global overview, here are the top ten specific failures, color-coded by their overarching category. It's interesting to note that all of these were present in over 40% of the analyzed evaluations.

7 MEAL “The baseline data received by the Evaluation Team does not provide any information on the average income level at the project start.”

8 Regional Breakdowns

9 Regional Breakdowns

10

11

12 Sectoral Breakdowns

13 CMP Breakdowns

14 What do you want to see? Implementation Partnership Gender MEAL Scale
Caused Harm Technology Human Resources Budget Regional
Go to the slide based on what people want to see: Implementation – 15, MEAL – 7, Partnership – 21, Scale – 22, Gender – 23, HR – 24, Budget – 25.

15 Implementation “While interventions around input challenges have been well-designed, more follow up was needed during the project to ensure the success of every intervention to its full potential.”

16 Limitations & Lessons Learned
Evaluator bias; coder bias (qualitative report: 65 coded segments over 6 categories; quantitative report: 5 coded segments, all in MEAL); refine the codebook.
Bias: we're subject to both evaluator bias and coder bias. Reporting bias was evident when comparing the qualitative and quantitative reports from the same project. For one project where we analyzed both, the qualitative report yielded 65 coded segments for failure over 6 master buckets, as well as design failures, whereas the quantitative report had only 5 coded segments, all within the MEAL category. These two evaluations were coded by the same person on the same day, eliminating discrepancy from intercoder variability and limiting the influence of intracoder variability. Final evaluations are subject to the bias and focus of the evaluators, an important consideration when reviewing failure results. The project could have benefited from more time at the end to review our codes and our work as a team and collectively reach consensus on codes to establish more intercoder reliability.
Refine the codebook: have all coders code the same set of evaluations, discuss the discrepancies, and edit the codebook where necessary. Establishing this clear foundation before starting requires a larger time contribution at the onset but will help mitigate confusion and discrepancies during the analysis stage.
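The intercoder-reliability exercise suggested above could also be quantified. Below is a minimal sketch using Cohen's kappa on two coders' per-segment decisions for a single code; this is an illustrative approach we did not actually run, and the data shown are made up.

```python
# Sketch: quantify agreement between two coders on whether each segment
# carries a given code (1 = code applied, 0 = not). Data are illustrative.
from sklearn.metrics import cohen_kappa_score

coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

kappa = cohen_kappa_score(coder_a, coder_b)
# Kappa corrects raw agreement for chance; values above ~0.6 are
# commonly read as substantial agreement.
print(f"Cohen's kappa: {kappa:.2f}")
```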

17 What’s Next? What do we do next with this? Who should see this?
1 3 What do we do next with this? Who should see this? 2 4 Where do we share this? Action steps?

18 "Blame is not for failure, it is for failing to help or ask for help." – Jørgen Vig Knudstorp, CEO of Lego

19 The Web of Codes Here is a web of those codes, color-coded by region. It offers an interactive map that lets you zoom in on different failure categories or different projects and explore their connections.
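For readers curious how such a web could be derived from the coded data, here is a minimal sketch that builds a code co-occurrence network from exported segments. The original visual was produced interactively, not necessarily this way, and the "segment_id"/"code" column names are the same hypothetical stand-ins used in the earlier sketches.

```python
# Sketch: build a code co-occurrence web from exported segments.
# An edge links two codes applied to the same segment; edge weight
# counts how often they co-occur. Column names are hypothetical.
from itertools import combinations

import networkx as nx
import pandas as pd

segments = pd.read_excel("coded_segments.xlsx")

G = nx.Graph()
for _, group in segments.groupby("segment_id"):
    for a, b in combinations(sorted(set(group["code"])), 2):
        w = G.edges[a, b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

# Strongest co-occurrences first.
top = sorted(G.edges(data=True), key=lambda e: -e[2]["weight"])[:10]
for a, b, d in top:
    print(f"{a} <-> {b}: {d['weight']}")
```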

20 What does it take? 3 months of coding once MAXQDA was introduced, 215 hours of coding data, 50-60 hours merging, organizing, and cleaning data
Once MAXQDA was introduced, the coding process took 3 months to complete, with roughly 215 hours of raw coding split between three coders. However, one coder had previously read many of the reports, decreasing her raw coding time. Merging, organizing, and cleaning the data took an additional 50-60 hours over the course of the project. We intended to review all codes as a team to decrease intercoder subjectivity; however, due to time constraints, we were not able to do this and reviewed only categories where we anticipated discrepancies.

21 Partnership “Sadly, at endline the relationship was severely stressed due to untimely resource distribution and misunderstandings regarding the reasons for delayed payments.”

22 Scale “Thus, for the majority of the villages in the study, access to school reverted back to conditions resembling those prior to the arrival of the … program. This is a troubling finding.”

23 Gender “Any future project should also improve the gender-responsive delivery by planning for mitigating activities to support women who face initial resistance from their spouse when joining the VSLG.”

24 HR “The overall team spirit and morale were low, and this was apparent in interviews, meetings and discussions.”

25 Budget “Delayed funds disbursal and hence recruitment of coordinators delayed commencement of activities.”

