
1 TRANSLATING RESEARCH INTO ACTION What is Randomized Evaluation? Why Randomize? J-PAL South Asia, April 29, 2011

2 Life Cycle of an Evaluation
Other types of evaluations: Review / Cost-benefit analysis / Cost-effectiveness analysis

3 Balsakhi Program

4 What was the issue?
– Class sizes are large
– Many children in 3rd and 4th standard are not even at the 1st standard level of competency
– Social distance between the teacher and some of the students is large

5 Proposed solution
– Hire local women (balsakhis) from the community
– Train them to teach remedial competencies: basic literacy, numeracy
– Identify the lowest-performing 3rd and 4th standard students
– Take these students out of class (2 hours/day); the balsakhi teaches them basic competencies

6 Possible outcomes
Pros:
– Reduced class size
– Teaching at appropriate level
– Reduced social distance
– Improved learning for lower-performing students
– Improved learning for higher-performers
Cons:
– Balsakhis are less qualified
– Teacher resentment
– Reduced interaction with higher-performing peers

7 Study design
We want to look at impact. What do we do?

8 What is the impact of the balsakhi? What would have happened without the balsakhi program?

9 Pre-post (Before vs. After)
Look at the average change in test scores over the school year for the balsakhi children.

10 Before vs. After
What was the impact of the balsakhi program?
[Chart: average test scores in 2002 vs. 2003]
Impact = 26.42 points?
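To make the pre-post idea concrete, here is a minimal sketch in Python; the scores are invented for illustration and are not the study's data.

# Illustrative pre-post (before vs. after) estimate with made-up scores.
pre_scores = [24.0, 31.5, 18.0, 27.5]    # balsakhi children's test scores in 2002
post_scores = [51.0, 58.5, 42.0, 55.5]   # the same children's scores in 2003

pre_mean = sum(pre_scores) / len(pre_scores)
post_mean = sum(post_scores) / len(post_scores)

# The pre-post "impact" is just the average change over the school year.
# It attributes the entire change to the program, ignoring normal learning growth.
pre_post_estimate = post_mean - pre_mean
print(f"Pre-post estimate: {pre_post_estimate:.2f} points")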

11 Impact: What is it?
[Diagram, built up over slides 11-13: the primary outcome plotted over time, showing the intervention, the counterfactual trajectory, and the impact as the gap between the observed outcome and the counterfactual]

14 How to measure impact?
Impact is defined as a comparison between:
1. the outcome some time after the program has been introduced, and
2. the outcome at that same point in time had the program not been introduced (the "counterfactual")
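In symbols (a common potential-outcomes shorthand, not notation taken from the slides), the comparison can be written as:

\text{Impact}_t \;=\; Y_t^{\text{with program}} \;-\; Y_t^{\text{without program (counterfactual)}}

The second term is never observed for the children who actually received the program, so every evaluation method is, in effect, a strategy for approximating it.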

15 Impact evaluation methods
1. Randomized Experiments. Also known as:
– Random Assignment Studies
– Randomized Field Trials
– Social Experiments
– Randomized Controlled Trials (RCTs)
– Randomized Controlled Experiments

16 Impact evaluation methods
2. Non- or Quasi-Experimental Methods
a. Pre-Post
b. Simple Difference
c. Differences-in-Differences
d. Multivariate Regression
e. Statistical Matching
f. Interrupted Time Series
g. Instrumental Variables
h. Regression Discontinuity

17 Other Methods
There are more sophisticated non-experimental methods to estimate program impacts:
– Multivariable Regression
– Matching
– Instrumental Variables
– Regression Discontinuity
These methods rely on being able to "mimic" the counterfactual under certain assumptions.
Problem: the assumptions are not testable.
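As a rough illustration of the regression-adjustment idea only (not the study's actual specification), the sketch below assumes a hypothetical dataset with columns post_score, treated, and baseline_score, and uses pandas and statsmodels.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data -- in practice this would come from the school surveys.
df = pd.DataFrame({
    "post_score": [52, 47, 60, 41, 55, 38, 63, 45],
    "treated": [1, 0, 1, 0, 1, 0, 1, 0],              # 1 = school with a balsakhi
    "baseline_score": [30, 28, 35, 22, 33, 20, 37, 25],
})

# Regression adjustment: the coefficient on `treated` is the estimated program
# effect, provided treated and untreated schools are comparable once we control
# for the covariates -- an assumption the data cannot verify.
model = smf.ols("post_score ~ treated + baseline_score", data=df).fit()
print(model.params["treated"])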

18 Methods to estimate impacts
We constructed results from the different ways of estimating the impacts, using the data from the schools that got a balsakhi:
1. Pre-post (before vs. after)
2. Simple difference
3. Difference-in-difference
4. Other non-experimental methods
5. Randomized experiment
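A minimal sketch of how the first three estimators differ, using made-up average scores rather than the study's numbers (the actual estimates appear in the summary table on slides 31-33).

# Hypothetical average test scores (NOT the study's numbers).
balsakhi_pre, balsakhi_post = 25.0, 51.0       # schools that got a balsakhi
comparison_pre, comparison_post = 30.0, 49.0   # schools that did not

# 1. Pre-post: change over time in the balsakhi schools only.
pre_post = balsakhi_post - balsakhi_pre

# 2. Simple difference: balsakhi vs. non-balsakhi schools after the program.
simple_diff = balsakhi_post - comparison_post

# 3. Difference-in-difference: change in balsakhi schools minus change in comparison schools.
diff_in_diff = (balsakhi_post - balsakhi_pre) - (comparison_post - comparison_pre)

print(pre_post, simple_diff, diff_in_diff)   # 26.0 2.0 7.0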

19 TRANSLATING RESEARCH INTO ACTION II – What is a randomized experiment? The Balsakhi Example povertyactionlab.org

20 The basics
Start with the simple case: take a sample of schools and randomly assign them to either:
– Treatment group: is offered the treatment
– Control group: not allowed to receive the treatment (during the evaluation period)
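A minimal sketch of that assignment step in Python, assuming a hypothetical list of school IDs; the seed is fixed only so the illustration is reproducible.

import random

# Hypothetical school identifiers; in the study these were actual schools.
schools = [f"school_{i}" for i in range(1, 99)]

random.seed(42)              # fixed seed so the example assignment is reproducible
random.shuffle(schools)      # put the schools in random order

half = len(schools) // 2
treatment_group = schools[:half]   # offered the balsakhi program
control_group = schools[half:]     # not offered it during the evaluation period

print(len(treatment_group), len(control_group))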

21 Suppose we evaluated the balsakhi program using a randomized experiment.
QUESTION #1: What would this entail? How would we do it?
QUESTION #2: What would be the advantage of using this method to evaluate the impact of the balsakhi program?
(6 – Randomized Experiment. Source: www.theoryofchange.org)

22 Random assignment in Vadodara
[Chart: 2006 baseline test scores for the "Treat" and "Compare" groups]

23 Key advantage of experiments
Because members of the groups (treatment and control) do not differ systematically at the outset of the experiment, any difference that subsequently arises between them can be attributed to the program rather than to other factors.

24 Constraints of design
– Pratham had received funding to introduce balsakhis in all schools
– Principals wanted balsakhis in their school
– Teachers wanted balsakhis in their class
– If denied a balsakhi, they may block data collection efforts

25 Random assignment

26 Rotation design
Round 1: Red = Std. 3, Blue = Std. 4
Round 2: Std. 3 from Round 1 → Std. 4 in Round 2; Std. 4 from Round 1 → Std. 3 in Round 2

27 Rotation design
Round 1: Red = Std. 3, Blue = Std. 4
Comparison: Red = Treatment, Blue = Control (shown separately for Std. 3 and Std. 4)

28 The counterfactual:
School Groups | Year 1 (2001-2002): Grade 3 / Grade 4 | Year 2 (2002-2003): Grade 3 / Grade 4
Group A | Balsakhi / No Balsakhi | No Balsakhi / Balsakhi
Group B | No Balsakhi / Balsakhi | Balsakhi / No Balsakhi

29 Key steps in conducting an experiment
1. Design the study carefully
2. Collect baseline data
3. Randomly assign people to treatment or control
4. Verify that assignment looks random (one common check is sketched below)
5. Monitor the process so that the integrity of the experiment is not compromised
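For step 4, a common verification (an illustration, not necessarily what this study did) is to test whether baseline averages differ between the groups; the sketch below uses hypothetical baseline scores and scipy.

from scipy import stats

# Hypothetical baseline test scores collected in step 2.
treatment_baseline = [28, 31, 25, 30, 27, 33, 26, 29]
control_baseline = [29, 30, 24, 31, 28, 32, 27, 28]

# If assignment was truly random, baseline means should differ only by chance.
t_stat, p_value = stats.ttest_ind(treatment_baseline, control_baseline)
print(f"baseline difference p-value: {p_value:.2f}")   # a large p-value suggests the groups are balanced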

30 Key steps in conducting an experiment (cont.)
6. Collect follow-up data for both the treatment and control groups
7. Estimate program impacts by comparing mean outcomes of the treatment group vs. mean outcomes of the control group (steps 7-8 are sketched below)
8. Assess whether program impacts are statistically significant and practically significant
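A minimal sketch of steps 7 and 8, again with hypothetical follow-up scores rather than the study's data.

from statistics import mean
from scipy import stats

# Hypothetical follow-up test scores from step 6.
treatment_followup = [52, 47, 60, 55, 49, 58, 51, 46]
control_followup = [46, 44, 53, 50, 45, 51, 47, 43]

# Step 7: estimated impact = difference in mean outcomes.
impact = mean(treatment_followup) - mean(control_followup)

# Step 8: statistical significance of that difference (practical significance
# is a judgment about whether the effect size matters, not a test).
t_stat, p_value = stats.ttest_ind(treatment_followup, control_followup)
print(f"estimated impact: {impact:.2f} points (p = {p_value:.3f})")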

31-33 Impact of Balsakhi - Summary
Method                       | Impact Estimate
(1) Pre-post                 | 26.42*
(2) Simple Difference        | -5.05*
(3) Difference-in-Difference | 6.82*
(4) Regression               | 1.92
(5) Randomized Experiment    | 5.87*
*: Statistically significant at the 5% level
Bottom Line: Which method we use matters!

34 Results
– This program improved average test scores by nearly 10%
– The lowest-performing kids in balsakhi schools improved the most (relative to the control group)
– Highly cost-effective: the intervention cost less than Rs. 100 per child and the results far outweigh the cost
– Can be scalable, especially in light of the RTE Act (remedial education)

