1
Randomized Evaluation: Dos and Don'ts
An example from Peru
Tania Alfonso, Training Director, IPA
2
Outline
Design
Implementation
Analysis
3
Outline
Design
– Research question
– Power
– Randomization
– Sampling
Implementation
Analysis
4
Research question
Do make sure the research question is policy-relevant
Do make sure your indicators answer your research question
5
Power
Don't conduct an under-powered evaluation
– What does it mean to be under-powered?
– Sample size and power
6
Power
Do power calculations first
– Effect size
– Sample size
– Getting data
– What will take-up be?
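A minimal sketch of such a power calculation, assuming a two-arm, individually randomized trial and an effect size stated in standard-deviation units; the function name and the illustrative numbers are ours, not the lecture's:

```python
# Per-arm sample size for a two-arm individually randomized trial.
from scipy.stats import norm

def n_per_arm(mde_sd, alpha=0.05, power=0.80, take_up=1.0):
    """n per arm to detect `mde_sd` (effect in SD units) at given alpha/power.
    Incomplete take-up dilutes the intention-to-treat effect to
    take_up * mde_sd, so required n grows by 1 / take_up**2."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value, two-sided test
    z_power = norm.ppf(power)          # quantile for the desired power
    diluted = take_up * mde_sd         # effect actually visible in the ITT
    return 2 * (z_alpha + z_power) ** 2 / diluted ** 2

print(n_per_arm(0.20))                # ~392 per arm (round up in practice)
print(n_per_arm(0.20, take_up=0.5))   # ~1570: halving take-up quadruples n
```

The last line is why take-up belongs in the power calculation: an intervention that only half the treatment group adopts needs roughly four times the sample.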
7
Power
Do cluster your standard errors when doing power calculations
– Bad example: randomizing two districts covering 10,000 people, which is effectively two independent observations, not 10,000
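A minimal sketch of why clustering matters, using the Kish design effect; the intra-cluster correlation (ICC) and cluster sizes below are illustrative assumptions:

```python
def design_effect(cluster_size, icc):
    """Kish design effect: 1 + (m - 1) * ICC for clusters of size m."""
    return 1 + (cluster_size - 1) * icc

# Two districts of 5,000 people each look like n = 10,000, but with an
# assumed ICC of 0.05 the design effect is enormous and the effective
# sample size collapses to n / DEFF. And with only two clusters there
# is no basis for inference at all, however many people are surveyed.
deff = design_effect(5000, 0.05)
print(deff)          # 250.95
print(10000 / deff)  # ~40 effective observations
```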
8
Randomization
Do ensure balance
– Stratification
– Re-randomizing
– Costs and benefits
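A minimal sketch of stratified assignment, which guarantees balance on the stratifying variables by construction; the column names and strata are hypothetical:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)  # fixed seed: the assignment is reproducible

def stratified_assign(df, strata_cols):
    """Randomize to treatment within each stratum, half and half."""
    out = df.copy()
    out["treat"] = 0
    for _, stratum in out.groupby(strata_cols):
        shuffled = rng.permutation(stratum.index.to_numpy())  # shuffle within stratum
        out.loc[shuffled[: len(shuffled) // 2], "treat"] = 1  # first half treated
    return out

# Hypothetical sample: 200 people in region x gender strata
sample = pd.DataFrame({
    "region": rng.choice(["north", "south"], size=200),
    "female": rng.choice([0, 1], size=200),
})
assigned = stratified_assign(sample, ["region", "female"])
print(assigned.groupby(["region", "female"])["treat"].mean())  # ~0.5 in every cell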
9
Sampling
Do make sure your sampling frame is as close to your target population as possible
– Effect size
10
Outline
Design
Implementation
– Measurement
– Monitoring
– Attrition
Analysis
11
Measurement
Don't collect data differently for treatment and control groups
– Introducing bias
12
Measurement
Don't use as your primary indicator something that may change with the intervention even when the true outcome does not
– e.g., self-reports can shift because treatment changes awareness or reporting, not the underlying outcome
13
Monitoring
Do monitor your intervention to ensure the treatment groups are receiving the treatment and the control groups are not
– Contamination
14
Monitoring
Do collect process indicators to unpack the black box
15
Attrition
Do whatever it takes to minimize attrition
– Attrition bias
16
Outline
Design
Implementation
Analysis
– Treatment integrity
– Attrition
– Final outcomes
– Subgroup analyses
– Covariates
17
Integrity of design
"Once in treatment, always in treatment"
Don't switch treatment or control status based on compliance
– Intention to Treat (ITT)
– Treatment on the Treated (ToT)
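A minimal sketch of the two estimands, assuming one-sided noncompliance (only the assigned can take up treatment); the simulation and variable names are illustrative:

```python
import numpy as np

def itt_and_tot(y, assigned, treated):
    """ITT compares means by random *assignment*; ToT rescales ITT by the
    take-up gap (the Wald/IV estimator). Nobody is reassigned by compliance."""
    itt = y[assigned == 1].mean() - y[assigned == 0].mean()
    take_up_gap = treated[assigned == 1].mean() - treated[assigned == 0].mean()
    return itt, itt / take_up_gap

rng = np.random.default_rng(0)
assigned = rng.integers(0, 2, size=10_000)              # random assignment
treated = assigned * rng.binomial(1, 0.6, size=10_000)  # 60% take-up, one-sided
y = 2.0 * treated + rng.normal(size=10_000)             # true effect of 2.0 on the treated
print(itt_and_tot(y, assigned, treated))                # ITT ~ 1.2, ToT ~ 2.0
```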
18
Attrition
"Once in sample, always in sample"
Don't ignore attritors
– Attrition bias
19
Attrition
Don't relax just because rates of attrition are the same in treatment and control groups
– How do we test whether attrition is differential?
– How do we know it is not selective?
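A minimal sketch of the two checks, assuming a pandas DataFrame with hypothetical columns treat, attrited, and y_baseline; the synthetic data is only so the snippet runs:

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "treat": rng.integers(0, 2, size=500),
    "attrited": rng.binomial(1, 0.1, size=500),
    "y_baseline": rng.normal(size=500),
})
t, c = df[df.treat == 1], df[df.treat == 0]

# (a) do attrition *rates* differ across arms?
print(stats.ttest_ind(t.attrited, c.attrited))

# (b) equal rates are not enough: among attritors, do baseline outcomes
# differ by arm? If so, different *kinds* of people left each arm.
print(stats.ttest_ind(t.loc[t.attrited == 1, "y_baseline"],
                      c.loc[c.attrited == 1, "y_baseline"]))
```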
20
Final outcomes
Don't run regressions on 20 different outcomes and report only the 1 or 2 "significant impacts"
Do report on all outcomes
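A minimal sketch of reporting every outcome with a multiple-testing adjustment; the p-values are made up, and the Benjamini-Hochberg correction is our choice of method, not the lecture's:

```python
from statsmodels.stats.multitest import multipletests

pvals = [0.003, 0.04, 0.20, 0.45, 0.01]  # one per pre-specified outcome (illustrative)
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for p, pa, r in zip(pvals, p_adj, reject):
    print(f"raw p={p:.3f}  adjusted p={pa:.3f}  significant={r}")
```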
21
Sub-group analysis
Don't run regressions on 20 different subgroups and report only the 1 or 2 "significant impacts"
22
Covariates
Do specify the regression(s) you plan to run beforehand
Do include covariates that you stratified on and those helpful for absorbing variance
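A minimal sketch of such a pre-specified regression, with fixed effects for the stratification cells and a baseline outcome to absorb variance; the data and column names are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "treat": rng.integers(0, 2, size=n),
    "stratum": rng.choice(["A", "B", "C"], size=n),  # the cells you stratified on
    "y_baseline": rng.normal(size=n),
})
df["y"] = 0.3 * df["treat"] + 0.5 * df["y_baseline"] + rng.normal(size=n)

# Strata fixed effects plus the baseline outcome, with
# heteroskedasticity-robust standard errors.
result = smf.ols("y ~ treat + C(stratum) + y_baseline", data=df).fit(cov_type="HC1")
print(result.params["treat"], result.bse["treat"])  # report the treat coefficient
```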
23
External validity
Do be modest about the external validity of your results
– Consider the context (needs assessment)
– Consider the process (process evaluation)
24
Cost effectiveness
Do draw on Iqbal's lecture on cost-effectiveness analysis