Carina Omoeva, FHI 360 Wael Moussa, FHI 360


Integrating Equity into Assessment of Learning Outcomes: A Structured Approach to Equity Analysis
The Education Equity Research Initiative is a collaborative partnership focused on strengthening data and research evidence for equity in education
It is premised on the idea that a better understanding of equity requires taking responsibility for producing better, deeper data

The Structured Approach to Equity Analysis
Developed by the Education Equity Research Initiative as a process for analyzing education outcomes with a consistent eye towards equity
Consists of a series of questions, each formulated to examine an aspect of the data, and coded in a Stata tool
Designed to provide a common framework for analysis across equity categories
The final aim of the sequence is not only to capture disparities in outcomes, but also to identify the change in equity produced by the program or policy
One of the first steps taken by the Initiative to develop a common framework that can be applied to existing and future data
Addresses the question: what do we need to start looking at in order to begin examining equity in our programs?
Preview of the panel: FHI 360 - framework; Save the Children - within-school and between-school analysis; RTI

Analytical framework: level of analysis, dimension of analysis, and questions

Student level (descriptive analysis to examine variability across key equity dimensions):
- Individual: What is the overall shape of the distribution of outcomes?
- Background: Are the lowest performing students (in the bottom decile/quintile of the distribution) substantively different from higher performing students by gender, socioeconomic status, ethnicity/race, language, urban/rural residence, geographic location, or disability status?

School level:
- Cluster (school, district, subnational unit): What is the overall school-level distribution of outcomes? How large is the variability in outcomes between clusters (schools, districts, regions)? What is the student composition, along gender, socioeconomic, ethnic, and other student subgroups, among low performing schools?

Impact evaluation with a lens on equity:
- Program/Intervention: What is the effect of the program on outcomes of interest? Is it homogeneous or heterogeneous across the different student subgroups? Does the program/policy have an equity-building effect?

Inputs (resource and input allocation):
- Classroom: What are pupil-teacher ratios for different schools or clusters? How large is the disparity between high performing and low performing schools? What are the average teacher characteristics in low performing schools versus high performing schools?
- Resources: How are public resources/program resources allocated between schools?

Other: Adding a time dimension

Overview of the sequence
1. Identify equity dimensions and population composition relevant to the project context
2. Examine disparities in outcomes between individual students as well as between groups of students (equity groupings)
3. Group students into performance categories
4. Construct statistical profiles for each category along observable student characteristics (including equity group membership)
5. Generate statistics on performance metrics and analyze distributions at aggregated levels (school, district, etc.)
6. Estimate program impacts, overall and for each group
7. Test for heterogeneity of estimated impacts

What are the equity dimensions present in the sample?
As a first step, we perform a simple tabulation of equity dimensions - the social and demographic characteristics predictive of outcomes
Recommended dimensions include gender, ethnicity (proxied by language), socioeconomic status (SES), and disability
A crosstab helps determine whether subgroup sample sizes are adequate for group-level inference
Additional dimensions may be added depending on context - see the Initiative's practical recommendations
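The tabulation step above can be sketched in a few lines. The Initiative's tool is written in Stata; this is an illustrative Python equivalent on synthetic data, with made-up column names (gender, ses, language) and an arbitrary minimum cell size.

```python
import pandas as pd

# Synthetic student roster; the column names and threshold are
# illustrative assumptions, not taken from the Initiative's Stata tool.
df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "ses":      ["low", "low", "high", "high", "low", "high", "low", "high"],
    "language": ["national"] * 6 + ["international"] * 2,
})

# Cross-tabulate two equity dimensions to check subgroup cell sizes
cells = pd.crosstab(df["gender"], df["ses"])
print(cells)

# Flag cells too small for group-level inference (threshold is arbitrary here)
too_small = cells[cells < 2].stack()
print(too_small)
```

In practice the same crosstab would be repeated for each pair of dimensions to spot subgroups too thin to support separate impact estimates.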

What is the overall distribution of outcomes?
The overall shape of the distribution provides a gauge of the disparity across individuals and clusters of learners
Distributional plots are useful for examining disparities between groups, but also within groups
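Before plotting, decile cut points give a quick numeric summary of the distribution's shape. A minimal sketch on synthetic scores (the score scale and sample size are invented for illustration):

```python
import random
import statistics

# Illustrative synthetic scores; in practice these would be, e.g., oral
# reading fluency results loaded from the assessment dataset.
random.seed(1)
scores = [max(0.0, random.gauss(45, 15)) for _ in range(500)]

# Decile cut points summarize the shape of the distribution
deciles = statistics.quantiles(scores, n=10)

# A large gap between the bottom decile and the median signals a long
# low-performing tail worth examining by equity group
bottom_gap = statistics.median(scores) - deciles[0]
print([round(d, 1) for d in deciles], round(bottom_gap, 1))
```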

Separating the lowest performing students from the rest of the sample
We divide students into performance groups to construct statistical profiles for each
With a reading assessment, levels are based on reading ability: students are identified as non-readers, non-fluent readers, and fluent readers
Cutscores are determined using oral reading fluency (ORF) and reading comprehension performance
Alternatively, the lowest performers can be identified as the bottom 10% or bottom 25% of learners
(Charts: pre-treatment and post-treatment distributions)
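The quantile-based variant of this grouping can be sketched as follows. The three-way split mirrors the non-reader / non-fluent / fluent categories, but here the cutscores are sample percentiles rather than ORF benchmarks (an assumption for illustration; the group labels are placeholders).

```python
import statistics

def performance_group(score, cut_low, cut_high):
    """Assign a three-way performance category from two cutscores.

    Hypothetical helper: mirrors the non-reader / non-fluent / fluent
    split, but with quantile cutoffs instead of ORF benchmarks.
    """
    if score <= cut_low:
        return "lowest"
    if score <= cut_high:
        return "middle"
    return "highest"

scores = [5, 12, 18, 22, 30, 35, 41, 47, 55, 63]

# Bottom-10% and bottom-25% cutoffs from the empirical distribution
q = statistics.quantiles(scores, n=100)
cut_low, cut_high = q[9], q[24]   # 10th and 25th percentiles

groups = [performance_group(s, cut_low, cut_high) for s in scores]
```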

Are the lowest performing learners different on equity profile?
Values shown as Non-Reader / Non-Fluent Reader / Fluent Reader; cells lost in transcription are shown as "…"

Student characteristics:
Age: 9.51 / 9.24* / 9.07*
Region 1: 0.25 / 0.35* / 0.41*
Region 2: 0.48 / 0.42* / 0.38*
Region 3: 0.28 / 0.23 / 0.21
Female: 0.42 / 0.51* / …
Male: 0.58 / 0.49* / …
Home Language - National: 0.82 / 0.84 / …
Home Language - International: 0.18 / 0.16 / …
High SES: … / … / 0.56*
Low SES: 0.52 / 0.44* / …
Observations: 474 / 688 / 211

Classroom and Teacher Characteristics:
Multi-Grade Classroom: 0.02 / 0.01 / …
Length of Period: 38.96 / 39.53 / 37.66
Class Size: 31.96 / 32.31 / 32.74
Teacher is Male: 0.53 / 0.59* / 0.64*
Teacher is Female: 0.47 / 0.36* / …
Teacher - Secondary Education or Lower: 0.69 / 0.65 / 0.60*
Teacher - Postsecondary Education: 0.31 / 0.35 / 0.40*
Teacher Experience: 11.57 / 10.03* / 10.69
Teacher is Salaried: 0.87 / 0.89 / 0.96*

* p < 0.10

What is the distribution of lowest performers across clusters (schools)?
Using the performance categories, we can examine the share of non-readers between and within schools
Whether non-performers are clustered within a few schools or spread evenly across all schools is potentially relevant for implementation
The analysis can be repeated for larger units (e.g., districts)
In this example, comparing baseline and endline, only 20 percent of schools have less than 10% non-readers
(Charts: pre-treatment and post-treatment distributions)
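The between/within-school breakdown above reduces to a grouped mean. A minimal sketch, assuming a per-student non-reader flag on synthetic data (school IDs and values are invented):

```python
import pandas as pd

# Synthetic student records; school IDs and the is_nonreader flag are
# illustrative, not drawn from a real dataset.
df = pd.DataFrame({
    "school":       ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "is_nonreader": [1,   1,   0,   0,   0,   0,   1,   0,   0],
})

# Within-school share of non-readers
share = df.groupby("school")["is_nonreader"].mean()

# Between-school view: what fraction of schools keep non-readers under 10%?
frac_low = (share < 0.10).mean()
print(share.to_dict(), frac_low)
```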

What is the effect of the program on outcomes?
In the case of a true randomized experiment, simple differences in means can be attributed to the program
From an equity perspective, we must disaggregate program effects by equity dimension to test for impact heterogeneity
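The disaggregated difference in means can be sketched directly. The data and group labels below are synthetic; under true randomization each difference is an unbiased effect estimate for that subgroup.

```python
from statistics import mean

# Synthetic endline scores under assumed true randomization;
# (group, score) pairs are invented for illustration.
treated = [("F", 52.0), ("M", 48.0), ("F", 55.0), ("M", 47.0)]
control = [("F", 46.0), ("M", 45.0), ("F", 47.0), ("M", 44.0)]

def diff_in_means(t, c, group=None):
    """Treatment-control mean difference, optionally for one subgroup."""
    tv = [y for g, y in t if group is None or g == group]
    cv = [y for g, y in c if group is None or g == group]
    return mean(tv) - mean(cv)

overall = diff_in_means(treated, control)
by_group = {g: diff_in_means(treated, control, g) for g in ("F", "M")}
print(overall, by_group)  # the subgroup gap hints at impact heterogeneity
```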

Examining Program Impact by Equity Dimension
A more rigorous method of estimating program impacts is regression analysis
With longitudinal student data covering two time periods, we apply a difference-in-differences approach with multi-level effects (school and student)
Under true randomization of the treatment, HLM (random effects) is the preferred model specification; otherwise, student-level fixed effects
To test for heterogeneity of impact, we interact the Treat × Post variable with equity group indicators I_g, as follows:
Y_ist = α·Treat + β·Post + Σ_g γ_g·I_g + Σ_g δ_g·(Treat × Post × I_g) + υ_i + υ_s + ε_ist
where υ_i and υ_s are the student and school effects and ε_ist is the error term
In this case, the impact parameters are δ_1, δ_2, …, δ_g
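With two periods and exhaustive group indicators, each δ_g reduces numerically to a per-group 2×2 difference-in-differences of cell means. The sketch below computes that reduction on synthetic data; it is a stand-in for the interacted regression on the slide, which would normally be estimated with a mixed- or fixed-effects package to recover the school and student effects.

```python
from collections import defaultdict
from statistics import mean

# Records: (group, treat, post, score) - synthetic longitudinal data.
rows = [
    ("F", 1, 0, 40), ("F", 1, 1, 50), ("F", 0, 0, 41), ("F", 0, 1, 44),
    ("M", 1, 0, 38), ("M", 1, 1, 45), ("M", 0, 0, 39), ("M", 0, 1, 42),
]

def did_by_group(rows):
    """Per-group difference-in-differences from the four cell means.

    Numerical stand-in for the delta_g coefficients in the interacted
    regression (no student/school effects are estimated here).
    """
    cells = defaultdict(list)
    for g, treat, post, y in rows:
        cells[(g, treat, post)].append(y)
    m = {k: mean(v) for k, v in cells.items()}
    groups = {g for g, _, _ in cells}
    return {
        g: (m[(g, 1, 1)] - m[(g, 1, 0)]) - (m[(g, 0, 1)] - m[(g, 0, 0)])
        for g in groups
    }

impacts = did_by_group(rows)
print(impacts)
```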

Program Impact by Equity Dimension
(1) Overall program impact: +4.26***
(2) Effect on Males: +3.76***; Effect on Females: +4.79***
(3) Effect on National Home Language: +4.39***; Effect on International Home Language: +3.58**
(4) Effect on High SES: +5.29***a; Effect on Low SES: +2.97*a
In this example, the overall program impact is estimated at +4.26 WPM
Differences in coefficients are testable using simple t-tests available in most standard statistical software
The same principle applies to any other educational outcome
Note: *** p < .01, ** p < .05, * p < .10; a denotes a statistically significant difference in impact estimates between groups
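The between-group test mentioned above can be sketched as a normal-approximation test on the coefficient difference. This simplified version assumes the two estimates are independent; with estimates from a single interacted regression, the covariance term would come from the model's variance-covariance matrix. The standard errors below are invented, chosen only to echo the SES contrast on the slide.

```python
import math

def impact_difference_test(d1, se1, d2, se2):
    """z-statistic and two-sided p-value for the difference between two
    impact estimates, assuming the estimates are independent."""
    se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
    z = (d1 - d2) / se_diff
    # Two-sided p-value from the normal approximation
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Illustrative numbers in the spirit of the SES contrast;
# the standard errors (0.9, 1.1) are hypothetical.
z, p = impact_difference_test(5.29, 0.9, 2.97, 1.1)
print(round(z, 3), round(p, 3))
```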

Does the program have an equity-building effect?

Does the program have an equity-building effect? - Another example
In this example, the program built equity along the gender dimension but worsened it along the wealth dimension

Additional analyses Examining resource inputs and financing across schools Additional time dimensions to examine change in trajectory

Concluding Remarks
The proposed structured approach to equity analysis can be applied to different outcomes, and to both exploratory/descriptive and impact studies
The Stata tool provides standardized code that can be adapted to any dataset with individual-level data
The approach is also flexible in the number of equity dimensions and in the degree of fragmentation of the student population
Additional steps should include examining resource inputs and financing across schools, and adding time dimensions to examine changes in trajectory

THANK YOU
LEARN MORE: www.educationequity2030.org
FOLLOW US: @equity2030 | #equity2030
SUPPORT US: educationequity@fhi360.org