1 Lessons learnt in evaluating literacy projects
Zenex Literacy Symposium, 24 October 2018, Crowne Plaza, Rosebank

2 Introduction
Asked to briefly highlight lessons learnt during recent literacy evaluations
These lessons apply beyond literacy and education, but are particularly important in these contexts
The fundamental "lesson" is that M&E is not adding sufficient value to the system or fulfilling the important role that it could
Let's start by defining some basic evaluation concepts…

3 Evaluations in terms of change theory (simplified)
Results chain: Inputs → Activities → Outputs → Outcomes → Impacts
Examples: funding, HR etc. (inputs); train teachers, distribute LTSM (activities); trained teachers, resourced classrooms (outputs); improved teaching practices (outcomes); improved learner performance (impacts)
Process evaluations, outcomes evaluations and impact evaluations each focus on different stages of this chain (see the sketch below)
There are also formative evaluations that inform the design
Speaker notes: sometimes there can be a long chain of outcomes; the decision is often how far along this chain we want to go
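As a purely illustrative aside (not part of the presentation), the results chain and the stage each evaluation type concentrates on can be written down as a small data structure; the stage-to-evaluation mapping below is an assumption made for the sketch, not a definition taken from the slide.

```python
# Illustrative sketch only: encodes the simplified results chain from the slide.
# The stage-to-evaluation mapping is an assumption for illustration.
RESULTS_CHAIN = ["inputs", "activities", "outputs", "outcomes", "impacts"]

EXAMPLES = {
    "inputs": "funding, HR etc.",
    "activities": "train teachers, distribute LTSM",
    "outputs": "trained teachers, resourced classrooms",
    "outcomes": "improved teaching practices",
    "impacts": "improved learner performance",
}

# Roughly where each evaluation type concentrates along the chain (assumed).
EVALUATION_FOCUS = {
    "process evaluation": ["activities", "outputs"],
    "outcomes evaluation": ["outcomes"],
    "impact evaluation": ["impacts"],
}

for evaluation, stages in EVALUATION_FOCUS.items():
    detail = "; ".join(f"{stage}: {EXAMPLES[stage]}" for stage in stages)
    print(f"{evaluation} -> {detail}")
```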

4 More on impact evaluations
An impact evaluation aims to estimate the causal impact of the programme
Randomised control trials, natural experiments or quasi-experimental designs
Most suitable when the delivery model is clearly specified and delivered consistently
The implementer must buy in to consistent delivery over time and across beneficiaries
Impact evaluations of literacy programmes tend to be expensive
They often need (1) primary in-school data collection, (2) a control group, (3) a baseline and endline, and (4) large samples (illustrated in the sketch below)
But this expense should be weighed against the cost of scaling up or indefinitely funding unproven programmes: relatively few programmes result in meaningful learner improvements, and education programmes use layered, complex theories of change with many assumptions
An adaptive, learn-as-you-go approach to programme design is often appropriate in early programme phases, but is not suitable for impact evaluations
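To make the "large samples" point concrete, the following is a minimal back-of-the-envelope sketch (not from the presentation); the effect size, cluster size and intra-class correlation are hypothetical assumptions chosen only to show the arithmetic behind why school-randomised trials become expensive.

```python
# A back-of-the-envelope sketch only: all parameter values are hypothetical
# assumptions used to illustrate why school-randomised literacy trials need
# large samples; they are not figures from the presentation.
from scipy.stats import norm


def learners_per_arm(effect_sd, alpha=0.05, power=0.80,
                     learners_per_school=25, icc=0.20):
    """Approximate learners needed per arm in a two-arm, school-randomised trial.

    Uses the standard normal-approximation formula for comparing two means,
    inflated by the design effect for clustering of learners within schools.
    """
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for a two-sided test
    z_power = norm.ppf(power)           # critical value for the desired power
    n_unclustered = 2 * (z_alpha + z_power) ** 2 / effect_sd ** 2
    design_effect = 1 + (learners_per_school - 1) * icc
    return n_unclustered * design_effect


m = 25                                   # assumed learners tested per school
n = learners_per_arm(effect_sd=0.2, learners_per_school=m)  # 0.2 SD target gain
print(f"~{n:.0f} learners (~{n / m:.0f} schools) per arm, before attrition")
```

With these assumed values the calculation comes to roughly 2,300 learners (about 90 schools) per arm before attrition, which helps explain why the primary in-school data collection at baseline and endline makes impact evaluations so costly.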

5 Lesson 1: Evaluation type should match programme stage
Evaluation design should fit the programme stage
E.g. process evaluations can provide valuable implementation insights, can be done earlier and are (generally) cheaper
But impact evaluations are (eventually) necessary
Given the number of programmes implemented, we're not learning enough
If large programmes are to be scalable and replicable, they should reach a level of maturity where an impact evaluation can be run
However, if a programme forever remains in the realm of experimentation and learning through doing, and never finalises a narrow, precise model of delivery, it is dangerous to try to scale it up

6 Lesson 2: Make M&E design part of programme design
Evaluations are often designed after programme implementation, and hence cannot evaluate key programme components
Lack of innovation: e.g. phased roll-outs or testing tweaks to programmes
Not enough formative evaluations
Too often evaluations surface issues that monitoring should have picked up
M&E concepts and thinking often add value to programme design
Evaluators sometimes put too much emphasis on what is easily measurable, but implementers often benefit from the clarity, rigour and focus introduced

7 Measurement of educational quality is inherently difficult
Many programmes aim to improve school or teaching quality, but quality is not easily measurable
Improvement in learner performance is a good indicator of quality, but measuring it isn't always possible, and it doesn't necessarily tell us about the mechanisms of change
Other quality indicators have shortcomings, for example: perception data is important, but is subjective and often biased; lesson observations or teacher competency testing are often not feasible
1. Impartiality: Respondents (e.g. teachers or principals) are not truly impartial observers. As they have directly benefitted from the programme, they often do not want these benefits to be taken away (as a result of negative feedback) or might simply not want to seem ungrateful. Many educators also suffer from a degree of status quo bias, which in dysfunctional schools can result in low expectations for change or results.
2. Principal-agent problem: While learners are usually the final programme beneficiaries, they are often too remote from the interventions and/or too young to be reliable respondents. Officials (e.g. teachers) are thus called on to speak on behalf of learners. What results is a type of principal-agent problem, where the principal (the learner) has no say in ensuring that the agent (the teacher) responds in their best interests.
3. Resistance: Educators are often understandably resistant to any interference or observation in their work (a view reinforced by the protective stance frequently adopted by teacher unions) and might simply give the expected response to avoid deeper scrutiny of their work.
4. Confidentiality: Despite researchers' best attempts to assure respondents that answers are recorded anonymously, respondents are often still wary of expressing negative or controversial opinions, fearing that such opinions might have repercussions for them or their school.
5. Social desirability bias: Respondents often tell interviewers the answer that they think researchers want to hear or what they believe is socially expected of them. This bias results from the human tendency to please and conform, and hence affects virtually all surveys regardless of context. However, the factors mentioned in the preceding points make educational research particularly susceptible to this bias.

8 Lesson 3: Use a range of measures / data sources
We must employ a range of measures to form a rounded picture of quality, since most quality indicators are at best proxies
These tools could include: case studies; learner workbooks and other forms of structured documentary analysis; structured observation tools (e.g. training observation)
Some other useful techniques: consistently structured interview questions across respondents; validating interview responses with supporting document requests

9 Summary of lessons
We can increase the value and contribution of evaluations if we:
Ensure the evaluation type matches the programme stage
Make M&E design part of programme design
Use a range of tools / data sources to provide textured findings


