
1 Monitoring and evaluation
16 July 2009
Michael Samson msamson@epri.org.za
UNICEF/IDS Course on Social Protection

2 Overview of monitoring and evaluation (M&E)
• Why M&E?
• What issues can M&E address?
• Key methodological options
• Good practices
  – An 8-step implementation framework
  – Key “good practice” issues
• Conclusions

3 What is monitoring?
• Monitoring: is the programme progressing as planned?
• Routine collection of administrative data
• Mostly input and output indicators, but sometimes outcome indicators
• Operational indicators
• Monitoring complements impact assessment (evaluation)
• Points to what should be researched further
• Administrative data can be used in impact evaluation analysis
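Routine indicators of this kind can typically be computed straight from the programme's management information system (MIS). A minimal sketch in Python of the idea; the table, column names, and figures below are illustrative assumptions, not programme data:

```python
import pandas as pd

# Hypothetical MIS extract: one row per beneficiary in a payment cycle.
mis = pd.DataFrame({
    "district": ["North", "North", "South", "South", "South"],
    "enrolled": [1, 1, 1, 1, 1],   # input: on the beneficiary roll
    "paid":     [1, 0, 1, 1, 1],   # output: transfer actually delivered
    "on_time":  [1, 0, 1, 0, 1],   # operational: delivered within the cycle
})

# Output indicator: share of enrolled beneficiaries actually paid, by district.
payment_rate = mis.groupby("district")["paid"].mean()

# Operational indicator: share of delivered payments made on time.
timeliness = mis.loc[mis["paid"] == 1, "on_time"].mean()

print(payment_rate)
print(f"On-time share of payments: {timeliness:.0%}")
```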

4 What is evaluation (impact assessment)?
• Impact evaluation: what changes in outcomes (and of what size) can be attributed to the programme, and only the programme?
[Figure: an indicator plotted over years 1–5 under “with project” and “without project” scenarios; the gap between the two lines (15 percentage points) is the programme impact. SOURCE: Regalia 2007]

5 Why M&E? There are three major motivations:
• To serve strategic objectives: are social protection instruments achieving the main policy goals?
• To serve operational objectives: how can one further improve implementation and delivery?
• To serve learning objectives: what can you learn from the programmes?
PLUS ONE MORE: to mobilise political will to sustain and expand the programme.

6 Specifically…
• To analyse alternative designs and intervention schemes
• To learn what works and make the best use of limited budget resources (measure the effectiveness of alternatives)
• To improve the programme’s design and operation
  – Sequential learning: programmes are in constant evolution
  – An impact evaluation (IE) is of little use if it is an ex-post, retrospective, one-shot “judgment”
• To ensure the programme’s sustainability
  – Secure adequate budget allocations through rigorous evidence
  – Ride out changes in administration

7 Why M&E? A concrete test. M&E may be appropriate if the answer is “YES” to any of the following questions:
• Is the programme of strategic relevance for national public policy?
• Will the evaluation contribute to improving the implementation or development of the programme?
• Can the evaluation results influence the future design of the programme and of other programmes?

8 M&E activities can be designed to address a broad range of questions. Some examples include:
• Does the programme reach the intended beneficiaries?
• How does the programme affect the beneficiaries?
• Does the programme generate the desired outcomes?
• What is the impact on the rest of the population?
• Are there better ways to design the programme?
• Can the programme be managed more efficiently?
• Are the allocated resources being spent efficiently?
• Were the costs of setting up and running the programme justified?

9 Types of questions IE can address:
• Distributions of gains (and losses): do transfers increase per capita food expenditures (improve school progression/health outcomes) more for the extremely poor than the poor (females vs. males, rural vs. urban)?
• Alternative designs: how would programme outcomes change under alternative designs?
  – different size or design of the transfers
  – different recipients (males vs. females)
  – alternative delivery mechanisms for basic services in CCTs
• Cost–benefit analysis

10 Does the programme reach the intended beneficiaries? A framework for analysing success and error:
[Figure: the framework matrix is not reproduced in the transcript.]
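A framework like this is commonly summarised by two targeting-error rates. As a hedged illustration (not the slide's own matrix), a minimal sketch computing them from hypothetical household survey data; all names and figures are assumptions:

```python
import pandas as pd

# Hypothetical household survey: eligibility status and programme receipt.
hh = pd.DataFrame({
    "eligible": [1, 1, 1, 0, 0, 1, 0, 1],
    "receives": [1, 0, 1, 1, 0, 1, 0, 0],
})

# 2x2 matrix: successes on the diagonal, targeting errors off it.
print(pd.crosstab(hh["eligible"], hh["receives"]))

# Exclusion error: eligible households the programme fails to reach.
exclusion_error = hh.loc[hh["eligible"] == 1, "receives"].eq(0).mean()

# Inclusion error ("leakage"): recipients who are not eligible.
inclusion_error = hh.loc[hh["receives"] == 1, "eligible"].eq(0).mean()

print(f"Exclusion error: {exclusion_error:.0%}, inclusion error: {inclusion_error:.0%}")
```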

11 Qualitative methods
• Participant observation involves field researchers spending an extended amount of time in residence with a programme community.
• Case studies involve detailed or broad studies of a specific intervention, with open-ended questioning and the recording of personal stories.
• Participatory learning and action involves a facilitator assisting the active involvement of those who have a stake in the programme.
• Logical framework analysis involves identifying inputs, outputs, outcomes, and impacts; their causal relationships; indicators; and the assumptions or risks that may influence success and failure.

12 Quantitative methods
• Randomised experimental design
  – The need for a control group
  – The advantages
  – The disadvantages
  – The ethical dilemma
• Quasi-experimental design
  – Alternatives to a control group: the credible comparison group
  – Propensity scoring
  – The advantages: practicality and ethics
  – The disadvantages: “unobserved heterogeneity”

13 Complementing other evaluations
• Quantitative IEs are complemented by other evaluations.
• Rigorous qualitative evaluations are critical to complement quantitative IE results and to understand them better.
  – They require very specific (and rare) skills.
  – Qualitative results can be very useful for operational purposes.
  – They should be carried out on the same sample as the quantitative IE.
• Operational and “process” evaluations
  – Based on MIS data and field observations
  – Should be carried out by social transfer specialists in close coordination with the programme executing agency
• Routine analysis of administrative data from the MIS

14 Types of design: experimental design
• An experimental design is possible when programme placement is done randomly.
• This process guarantees that the treatment and control (comparison) groups have the same observable and unobservable characteristics before the treatment.
• Therefore, differences in outcomes after the programme can be attributed to the programme.
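Under random assignment, a simple difference in mean outcomes between the treatment and control groups estimates the programme impact. A minimal simulated sketch; the impact size, sample size, and noise level are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000

# Random assignment: treatment is independent of all household characteristics.
treated = rng.integers(0, 2, size=n)

# Hypothetical outcome: baseline of 100 plus a true impact of 15 for the treated.
outcome = 100 + 15 * treated + rng.normal(0, 20, size=n)

# With randomisation, the difference in means is an unbiased impact estimate.
impact = outcome[treated == 1].mean() - outcome[treated == 0].mean()
t_stat, p_value = stats.ttest_ind(outcome[treated == 1], outcome[treated == 0])

print(f"Estimated impact: {impact:.1f} (t = {t_stat:.2f}, p = {p_value:.3f})")
```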

15 Control groups
• Simply comparing eligible participants with eligible non-participants can lead to very biased estimates if programme participation is voluntary (as it usually is with social transfers).
  – Eligible non-participants might differ from eligible participants in many observable and unobservable ways.
  – Reasons for non-participation might be correlated with outcomes.
  – We can control for observables, but we are left with the unobservables.
• A comparison (control) group needs to be built.
  – The comparison group needs to be as identical as possible, in both observable and unobservable dimensions, to the “treatment” group receiving the programme (and should not be “contaminated”).

16 Constructing control and treatment groups
• The pace of roll-out determines whether the evaluation will estimate short-, medium-, or long-term impacts.
[Figure: two-stage randomisation. First stage: a random sample is drawn from the eligible population (external validity); second stage: the sample is randomly assigned to treatment and control groups (internal validity).]
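A hedged sketch of the two stages in the diagram: first draw a random sample from the eligible population (external validity), then randomly assign the sample to treatment and control groups (internal validity). Population and sample sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# First stage: random sampling from the eligible population (external validity).
eligible_ids = np.arange(10_000)                 # hypothetical eligible population
sample = rng.choice(eligible_ids, size=2_000, replace=False)

# Second stage: random assignment within the sample (internal validity).
shuffled = rng.permutation(sample)
treatment_group, control_group = shuffled[:1_000], shuffled[1_000:]

print(len(treatment_group), len(control_group))  # 1000 1000
```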

17 Problems with experimental design
• Experimental design can be unfeasible due to ethical and political considerations: how do you justify excluding a person or family who needs a social protection programme?
• Experimental design does not suit universal policies, where the entire eligible population receives the programme.
• Implementation of this design requires great care: contamination, drop-outs, and cream-skimming may threaten the random nature of both groups (and the design’s internal validity).

18 Counterfactual: what would have happened to the same household if it did not receive the transfer?
• The same household is never observed at the same point in time both with and without the programme…
• …so a before-and-after comparison is not a good counterfactual (especially for social transfer programmes).
[Figure: average impact on per capita food expenditures (C$), SSN Nicaragua: Phase I treatment group vs. Phase I control / Phase II treatment group.]
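Where a phased roll-out provides a control group, comparing the change for the treated with the change for the control (a difference-in-differences) nets out the common trend that a simple before/after comparison would wrongly attribute to the programme. A minimal sketch; the figures are illustrative assumptions, not the Nicaragua results:

```python
# Hypothetical mean per capita food expenditure (C$) by group and phase.
treat_before, treat_after = 420.0, 510.0   # Phase I treatment group
ctrl_before, ctrl_after = 415.0, 455.0     # Phase I control group

# A naive before/after comparison attributes the whole change to the programme.
before_after = treat_after - treat_before                                 # 90.0

# Difference-in-differences nets out the common trend seen in the control group.
diff_in_diff = (treat_after - treat_before) - (ctrl_after - ctrl_before)  # 50.0

print(f"Before/after: {before_after:.0f}, diff-in-diff: {diff_in_diff:.0f}")
```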

19 Quasi-experimental design
• A quasi-experimental design attempts to build a comparison group that was not generated randomly.
• There are different quasi-experimental methods, with different assumptions and different results. The methods can be quite complex, which makes them difficult for a broader audience to understand.
• Advantages: it is generally cheaper and quicker to implement, and it can be done when the programme has already started (ex-post evaluation).
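Propensity scoring (listed on slide 12) is one such method: model the probability of participation from observable characteristics, then match each participant to the non-participant with the closest score. A minimal sketch with simulated data; the covariates, sample size, and true impact are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 2_000

# Simulated data with self-selection: participation depends on an observable.
x = rng.normal(size=(n, 2))                          # observable covariates
d = (x[:, 0] + rng.normal(size=n) > 0).astype(int)   # voluntary participation
y = 50 + 10 * d + 5 * x[:, 0] + rng.normal(size=n)   # outcome; true impact = 10

# Step 1: estimate each unit's propensity score, P(participate | covariates).
pscore = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]

# Step 2: match each participant to the nearest non-participant on the score.
nn = NearestNeighbors(n_neighbors=1).fit(pscore[d == 0].reshape(-1, 1))
_, match_idx = nn.kneighbors(pscore[d == 1].reshape(-1, 1))

# Step 3: average treatment effect on the treated, from matched differences.
att = (y[d == 1] - y[d == 0][match_idx.ravel()]).mean()
print(f"Naive difference: {y[d == 1].mean() - y[d == 0].mean():.1f}")  # biased
print(f"Matched ATT estimate: {att:.1f}")                              # roughly 10
```

Note the "unobserved heterogeneity" caveat from slide 12: matching only removes bias from the observables used in the score.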

20 Regression discontinuity
[Figure: two panels, “RD: baseline” and “RD: post intervention”.]
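The idea behind those two panels: units just below an eligibility cut-off receive the programme while those just above do not, so a jump in outcomes at the cut-off after the intervention estimates the impact. A minimal local-linear sketch with simulated data; the cut-off, bandwidth, and impact size are illustrative assumptions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 3_000

# Running variable, e.g. a poverty score; households below the cut-off are eligible.
score = rng.uniform(0, 100, size=n)
cutoff = 50.0
treated = (score < cutoff).astype(int)

# Hypothetical outcome: smooth in the score, plus a jump of 8 at the cut-off.
y = 20 + 0.3 * score + 8 * treated + rng.normal(0, 4, size=n)

# Local linear regression within a bandwidth around the cut-off.
bandwidth = 10.0
near = np.abs(score - cutoff) < bandwidth
design = sm.add_constant(np.column_stack([treated[near], score[near] - cutoff]))
fit = sm.OLS(y[near], design).fit()

print(f"Estimated jump at the cut-off: {fit.params[1]:.1f}")  # roughly 8
```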

21 Issues in considering methodological options
• The absence of a universal framework
• Constraints in implementing M&E activities
• Two basic classes of methodologies: quantitative and qualitative
• The importance of a comprehensive (mixed) approach

22 How to implement an evaluation of a social transfer programme in 8 steps:
• STEP 1: Decide whether or not to evaluate the programme.
• STEP 2: Make clear the evaluation objectives.
• STEP 3: Identify a truly independent and qualified evaluation team.
• STEP 4: Fully design the evaluation.
• STEP 5: Mobilise the required data.
• STEP 6: Analyse the data.
• STEP 7: Report the results.
• STEP 8: Most importantly, reflect the results in improved programme delivery.

23 Knowledge transfer
• Adequate staffing and mix of skills in the social transfer executing agency
• Evaluation implemented in close collaboration with programme executors
• The impact evaluation team should ensure:
  – systematisation of processes and data documentation
  – adequate transfer of knowledge
• The social transfer executing agency could periodically convene an external committee of experts in charge of quality control of the impact evaluation process.

24 Lessons of international “good practice”
• The importance of a thorough understanding of the administrative and institutional details of the programme
• The importance of an in-depth understanding of the social and policy context
• Be open-minded about your sources of data.
• Be careful about simply comparing outcomes for programme participants and non-participants.
• Effective evaluations must be adequately resourced.

25 Conclusions (M&E)
• M&E is important:
  – to inform policy-makers about the strategic impact of social transfers,
  – to improve the delivery of social transfer programmes,
  – to provide an evidence base for better policy-making.
• M&E can address many issues, depending on the needs of policy-makers.
• Policy-makers face a range of methodological options, but no one model works best in every case.
• The 8-step framework provides a starting point for implementing an M&E process.

