

1 Impact Evaluation “Randomized Evaluations” Jim Berry Asst. Professor of Economics Cornell University

2 Randomized Evaluations. Also known as: Random Assignment Studies, Randomized Field Trials, Social Experiments, Randomized Controlled Trials (RCTs), Randomized Controlled Experiments.

3 Start with the simple case: take a sample of program applicants and randomly assign them to either a Treatment Group, which is offered the treatment, or a Control Group, which is not allowed to receive the treatment during the evaluation period. What does the term "random" mean? See the sketch below.
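Operationally, "random" means every applicant has the same chance of landing in either group. A minimal sketch in Python, using a hypothetical applicant list (the names and sample size are placeholders, not from the presentation):

    import random

    # Hypothetical applicant list; in practice this comes from program records.
    applicants = [f"applicant_{i}" for i in range(100)]

    random.seed(42)             # fixed seed so the assignment is reproducible
    random.shuffle(applicants)  # every ordering is equally likely

    half = len(applicants) // 2
    treatment = applicants[:half]  # offered the treatment
    control = applicants[half:]    # not offered it during the evaluation period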

4 Key advantage of experiments: because members of the treatment and control groups do not differ systematically at the outset of the experiment, any difference that subsequently arises between them can be attributed to the treatment rather than to other factors.

5 Relative to results from non-experimental studies, results from experiments are: less subject to methodological debates; easier to convey; and more likely to be convincing to program funders and/or policymakers.

6 (image-only slide; no transcript text)

7 Despite the great methodological advantages of experiments, they are also potentially subject to threats to their validity. For example: internal validity (e.g., survey non-response, no-shows, crossovers, duration bias) and external validity (e.g., are the results generalizable to other populations?). It is important to realize that some of these threats also affect the validity of non-experimental studies.

8 Design the study carefully. Randomly assign people to treatment or control. Collect baseline data. Verify that the assignment looks random (see the balance-check sketch below). Monitor the process so that the integrity of the experiment is not compromised.
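A standard way to verify that assignment looks random is a baseline balance check: compare pre-treatment covariates across the two groups. A hedged sketch in Python, where the file name and the columns "treated", "age", and "baseline_score" are placeholders rather than anything from the presentation:

    import pandas as pd
    from scipy import stats

    df = pd.read_csv("baseline.csv")  # hypothetical baseline survey data

    for covariate in ["age", "baseline_score"]:
        treat = df.loc[df["treated"] == 1, covariate]
        ctrl = df.loc[df["treated"] == 0, covariate]
        _, p = stats.ttest_ind(treat, ctrl, equal_var=False)  # Welch's t-test
        print(f"{covariate}: treatment mean = {treat.mean():.2f}, "
              f"control mean = {ctrl.mean():.2f}, p = {p:.3f}")

    # Large p-values are consistent with successful randomization; when many
    # covariates are tested, an occasional small p-value is expected by chance.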

9 Collect follow-up data for both the treatment and control groups in identical ways. Estimate program impacts by comparing mean outcomes of the treatment group with mean outcomes of the control group. Assess whether program impacts are statistically significant and practically significant (a sketch of this calculation follows).
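Continuing the sketch above, the basic impact estimate is the difference in mean follow-up outcomes, with a two-sample t-test for statistical significance. File and column names are again hypothetical:

    import pandas as pd
    from scipy import stats

    df = pd.read_csv("followup.csv")  # hypothetical follow-up survey data
    treat = df.loc[df["treated"] == 1, "outcome"]
    ctrl = df.loc[df["treated"] == 0, "outcome"]

    impact = treat.mean() - ctrl.mean()  # estimated program impact
    _, p = stats.ttest_ind(treat, ctrl, equal_var=False)
    print(f"Estimated impact: {impact:.2f} (p = {p:.3f})")

    # Statistical significance: is p below the chosen threshold (e.g., 0.05)?
    # Practical significance: is the impact large enough to matter for policy?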

10 Basic setup of a randomized evaluation: Target Population → Potential Participants → Evaluation Sample, which is then randomly split into a Treatment Group (Participants and No-Shows) and a Control Group.

11 What I haven't told you: in Standard 4, balsakhis were randomly assigned to schools: half got the balsakhi, half didn't. Strategy: compare low-performing students in balsakhi standards with low-performing students in non-balsakhi standards. How might this help? It answers the question: what is the impact of a balsakhi on low-performing children?

12 When assignment is random, standards without a balsakhi provide an unbiased control group. Let's repeat the exercise, comparing low-performing children in balsakhi schools to low-performing children in non-balsakhi schools.

13 Average post-test score for low-scoring students:
In balsakhi grades: 45.57
In non-balsakhi grades: 34.97
Difference: 10.60

14 Another estimate of impact: having a balsakhi in your grade vs. not having a balsakhi in your grade, whether or not you actually learned from the balsakhi (an intent-to-treat comparison).
Mean score for children with a balsakhi: 50.00
Mean score for children without a balsakhi: 43.54
Difference: 6.45

15 Summary table. Source: Arceneaux, Gerber, and Green (2004).

Method                              Estimate
1 - Pre-Post                        26.42
2 - Simple Difference               -5.05
3 - Difference-in-Differences       6.82
4 - Regression                      2.3
5 - Randomized Simple Difference    10.60
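To make the mechanics behind these estimators concrete, here is a hedged sketch of how the first three methods would be computed, assuming a hypothetical data set with columns "treated", "pre", and "post" (not the study's actual data):

    import pandas as pd

    df = pd.read_csv("scores.csv")  # hypothetical pre/post test scores

    t_pre = df.loc[df["treated"] == 1, "pre"].mean()
    t_post = df.loc[df["treated"] == 1, "post"].mean()
    c_pre = df.loc[df["treated"] == 0, "pre"].mean()
    c_post = df.loc[df["treated"] == 0, "post"].mean()

    pre_post = t_post - t_pre                    # 1: change over time, treated only
    simple_diff = t_post - c_post                # 2: treated vs. untreated, post period
    diff_in_diff = pre_post - (c_post - c_pre)   # 3: nets out the common time trend

    print(pre_post, simple_diff, diff_in_diff)

    # With random assignment, the simple post-period difference (method 5 in
    # the table) is already unbiased; methods 1-4 are not in general.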

16 If properly designed and conducted, social experiments provide the most credible assessment of the impact of a program. Results from social experiments are easy to understand and much less subject to methodological quibbles. Credibility + ease of understanding => more likely to convince policymakers and funders of the effectiveness (or lack thereof) of a program.

17 However, these advantages are present only if social experiments are well designed and properly conducted. We must assess the validity of experiments in the same way we assess the validity of any other study, and we must be aware of the limitations of experiments.

