Girls’ scholarship program

- Education research often finds small or no impacts on actual learning
  - Inputs (textbooks, flipcharts) have little impact on learning
  - De-worming affected attendance but not test scores
- What is often most important in education policies and programs? Incentives
- What happens if we offer direct incentives for student learning?
- What happens if we offer this only to a disadvantaged subgroup? Girls

- The debate over cash incentives
  - "Pros"
    - Incentives to exert effort
    - Helps with self-control problems
    - Externalities to effort
  - Possible "cons"
    - May exacerbate inequality
    - May weaken intrinsic motivation (short or long run)
    - Gaming the system (cramming, cheating)
- Merit awards could affect
  - Eligible students' own effort
  - Other students' effort and teacher effort, which could be either complements or substitutes

The Girls' Scholarship Program
- Randomized evaluation in Kenyan primary schools: 63 treatment and 64 comparison schools
- Treatment groups balanced at baseline
- Announced an award for girls in treatment schools, based on end-of-year standardized test scores
- Top 15% of grade 6 girls in program schools win the award:
  - 1,000 KSh (US$12.80) for the winner and her family
  - 500 KSh (US$6.40) for school fees
  - Public recognition at an award ceremony
- Two cohorts of scholarship winners, 2001 and 2002
- Survey data on attendance, study habits, and attitudes

- Program implemented in two districts: Teso and Busia
- Randomization and awards stratified by district
  - Historical and ethnic differences between the two districts
  - NGOs have poor relations with some Teso communities
  - Tragic lightning incident in early 2001
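Stratifying the randomization by district means schools are assigned to treatment and comparison separately within Teso and within Busia, so each district's treatment share is balanced by construction. A minimal sketch of that assignment step; the school identifiers and the even split within each stratum are illustrative assumptions, not details from the study:

```python
import random

def stratified_assignment(schools, seed=0):
    """Randomly assign schools to treatment/comparison within each district
    (stratum), so treatment shares are balanced across districts.

    schools: list of (school_id, district) pairs.
    Returns a dict mapping school_id -> "treatment" or "comparison".
    """
    rng = random.Random(seed)  # fixed seed for a reproducible assignment
    assignment = {}
    by_district = {}
    for school, district in schools:
        by_district.setdefault(district, []).append(school)
    for district, members in by_district.items():
        rng.shuffle(members)
        half = len(members) // 2  # assign half of each stratum to treatment
        for s in members[:half]:
            assignment[s] = "treatment"
        for s in members[half:]:
            assignment[s] = "comparison"
    return assignment
```

Because the shuffle happens within each district's list, no draw of the randomization can leave one district over-represented in the treatment arm.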

- School attrition: five Teso schools pulled out in 2001
- Test attrition: share of students with complete 2001 test data, treatment vs. comparison:
  - Teso 54% vs. 66%; Busia 77% vs. 77%
- Differential test attrition: significantly more high-achieving students took the 2001 exam in comparison schools than in program schools, which likely biases estimated program impacts toward zero in Teso

How to deal with attrition?
- Make sure that no one drops out of your original treatment and control groups
- If there is still attrition:
  - Check that it does not differ between treatment and control
  - Also check that it is not correlated with observables
- If there is differential attrition:
  1. Impute the outcome variable based on baseline covariates
  2. Bounds: run the analysis under "best-case" and "worst-case" scenarios, assuming that either the best or the worst students are the ones who drop out, at a rate equal to the rate of differential attrition
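The bounding idea in step 2 can be sketched in a few lines: fill in every missing outcome with the most pessimistic value for one bound and the most optimistic for the other. This is a minimal Manski-style worst/best-case sketch under the assumption that missing scores could be anywhere in the observed score range; it is illustrative, not the paper's exact procedure:

```python
def attrition_bounds(treat_scores, control_scores, treat_n, control_n):
    """Worst/best-case bounds on a treatment effect under attrition.

    treat_scores / control_scores: observed outcomes in each arm.
    treat_n / control_n: number of students originally assigned to each arm
    (so n - len(scores) students are missing outcomes).
    Returns (worst_case_effect, best_case_effect).
    """
    lo = min(treat_scores + control_scores)
    hi = max(treat_scores + control_scores)

    def mean(xs):
        return sum(xs) / len(xs)

    def filled(obs, n, value):
        # impute the same extreme value for every missing observation
        return obs + [value] * (n - len(obs))

    # Worst case for the program: missing treatment students scored the
    # minimum, missing control students scored the maximum.
    worst = mean(filled(treat_scores, treat_n, lo)) - mean(filled(control_scores, control_n, hi))
    # Best case: the reverse imputation.
    best = mean(filled(treat_scores, treat_n, hi)) - mean(filled(control_scores, control_n, lo))
    return worst, best
```

If the bounds bracket zero, differential attrition alone could explain the estimated effect; if even the worst case is positive, the finding is robust to attrition.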

- Estimate total effects and district-specific effects
- Estimate effects for treatment schools (T):
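The regression equation itself did not survive transcription. A standard intention-to-treat specification for a school-randomized design like this one, written here as a reconstruction rather than the slide's exact equation, would be:

```latex
% ITT specification (assumed reconstruction, not the slide's own equation)
% Test_{is}: test score of student i in school s
% T_s: indicator for assignment to a treatment (scholarship) school
% Teso_s: district indicator, reflecting the stratified design
Test_{is} = \alpha + \beta \, T_s + \gamma \, Teso_s + \varepsilon_{is}
```

Here \(\beta\) is the intention-to-treat effect of being offered the scholarship, with standard errors clustered at the school level since randomization was at the school level.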

- Cheating is likely not a concern
- Evidence of "learning": consistent effects over two years and two cohorts
- No effect on tutoring, household textbook purchases, self-esteem, attitudes toward school, or amount of chores at home
- Teachers report more parental support in Busia
- Student and teacher attendance increased

- Important to think through programmatic issues when designing interventions: incentives must be aligned (teachers, parents, students, ...)
- Randomizing by school can help pick up within-class/school externalities
- Things can go wrong: attrition needs to be monitored
- Large and persistent gains in learning are possible to achieve

- What if, instead of rewarding students for their performance, we made the teachers responsible?
- Randomized evaluation in Kenya (Glewwe, Ilias, and Kremer (2004))
- Offered teachers prizes based on schools' average test scores
  - Top-scoring schools and most-improved schools (relative to baseline)
  - In each category, 1st, 2nd, 3rd, and 4th prizes were awarded (21% to 43% of a teacher's monthly salary)
  - Penalized teachers for dropouts by assigning low scores to students who did not take the exam

What was affected:
- Treatment scores 0.14 sd above control
- Effects strongest for geography, history, and religion (the subjects relying most on memorization)
- Exam participation rose
- More extra exam-prep sessions

What was NOT affected:
- Dropout/repetition rates
- Teacher attendance
- Homework assignment or pedagogy
- Lasting test score gains

Conclusion: teachers' effort concentrated on improving short-run outcomes rather than stimulating long-run learning
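Effects like the 0.14 sd figure above are the treatment-control difference in mean scores expressed in standard deviations of the control-group distribution. A minimal sketch of that calculation (illustrative helper, not the paper's code):

```python
def effect_in_sd(treat_scores, control_scores):
    """Treatment effect in control-group standard deviation units.

    treat_scores / control_scores: lists of individual test scores.
    Returns (mean(treat) - mean(control)) / sd(control), using the
    sample standard deviation of the control group.
    """
    def mean(xs):
        return sum(xs) / len(xs)

    mc = mean(control_scores)
    # sample variance of the control group (n - 1 denominator)
    var = sum((x - mc) ** 2 for x in control_scores) / (len(control_scores) - 1)
    return (mean(treat_scores) - mc) / var ** 0.5
```

Standardizing by the control group's dispersion is what makes effect sizes comparable across different exams and grading scales.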

- Busia: overall effect of 0.18-0.20 s.d.
  - Persistent effect for girls the next year
  - Spillover effect for boys
- Teso: scholarship less successful; either no significant program effect or unreliable estimates
- Merit-based scholarships can motivate students to exert effort
  - Test score and attendance gains among girls in the medium run
- Positive classroom externalities
  - For initially low-achieving girls, boys, and teachers
- Possible multiple equilibria in classroom culture
- A cost-effective way to boost test scores
- Equity concerns: may wish to restrict the program to particular areas or populations