Nature gives us correlations… Evaluation Research (8521), Prof. Jesse Lecy, Lecture 0
The Program Evaluation Mindset: Input → Policy / Program (black box: something happens here) → Outcome
The Program Evaluation Mindset: the outcome is some function of the program and the amount of inputs into the process. It can sometimes be represented by a simple input-output equation. The slope tells us how much impact we expect a program to have when we spend one additional unit of input.
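The input-output idea on this slide can be written as a simple linear model (a sketch; the slide does not specify a functional form):

```latex
\text{Outcome} = \beta_0 + \beta_1 \cdot \text{Input} + \varepsilon,
\qquad
\beta_1 = \frac{\Delta \text{Outcome}}{\Delta \text{Input}}
```

Here the slope coefficient β1 is the expected change in the outcome when we spend one additional unit of input.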
Effect
[Figure: heart rate under treatment (caffeine) vs. control (no caffeine)]
Radiolab clip: http://www.radiolab.org/2010/oct/08/its-alive/ (from 4:15)
How do we know when the interpretation of an effect is causal?
NATURE GIVES US CORRELATIONS
Example #1: [diagram of variables x, y, and z]
Example #2: [diagram of variables x, y, and z]
Example #3: [diagram of variables x, y, and z]
Examples of Poor Causal Inference
1. Ice cream consumption causes polio
2. Investments in public buildings create economic growth
3. Early retirement causes health decline
4. Hormone replacement therapy and heart disease: In a widely studied example, numerous epidemiological studies showed that women who were taking combined hormone replacement therapy (HRT) also had a lower-than-average incidence of coronary heart disease (CHD), leading doctors to propose that HRT was protective against CHD. But randomized controlled trials showed that HRT caused a small but statistically significant increase in risk of CHD. Re-analysis of the data from the epidemiological studies showed that women undertaking HRT were more likely to be from higher socio-economic groups (ABC1), with better than average diet and exercise regimes. The use of HRT and the decreased incidence of coronary heart disease were coincident effects of a common cause (i.e. the benefits associated with a higher socioeconomic status), rather than cause and effect as had been supposed. (Wikipedia)
Examples of Complex Causal Inference
MODERN PROGRAM EVAL
To Experiment or Not to Experiment: http://www.youtube.com/watch?v=exBEFCiWyW0
CASE STUDY – EDUCATION REFORM
Classroom Size and Performance (http://www.publicschoolreview.com/articles/19)

State Laws Limiting Class Size

Notwithstanding the ongoing debate over the pros and cons of reducing class sizes, a number of states have embraced the policy of class size reduction, and they have approached it in a variety of ways. Some have started with pilot programs rather than state-wide mandates. Some states have specified optimum class sizes, while others have enacted mandatory maximums. Some have limited class size reduction initiatives to certain grades or certain subjects. Here are three examples of the diversity of state law provisions respecting class size reduction.

California – The state of California became a leader in promoting class size reduction in 1996, when it commenced a large-scale class size reduction program with the goal of reducing class size in all kindergarten through third grade classes from 30 to 20 students or fewer. The cost of the program was $1 billion annually.
Classroom Size and Performance (http://www.publicschoolreview.com/articles/19)

Florida – Florida residents in 2002 voted to amend the Florida Constitution to set the maximum number of students in a classroom. The maximum number varies according to the grade level. For prekindergarten through third grade, fourth grade through eighth grade, and ninth grade through 12th grade, the constitutional maximums are 18, 22, and 25 students, respectively. Schools that are not already in compliance with the maximum levels are required to make progress in reducing class size so that the maximum is not exceeded by 2010. The Florida legislature enacted corresponding legislation, with additional rules and guidelines for schools to achieve the goals by 2010.

Georgia – Maximum class sizes depend on the grade level and the class subject. For kindergarten, the maximum class size is 18 or, if there is a full-time paraprofessional in the classroom, 20. Funding is available to reduce kindergarten class sizes to 15 students. For grades one through three, the maximum is 21 students; funding is available to reduce the class size to 17 students. For grades four through eight, 28 is the maximum for English, math, science, and social studies. For fine arts and foreign languages in grades K through eight, however, the maximum is 33 students. Maximums of 32 and 35 students are set for grades nine through 12, depending on the subject matter of the course. Local school boards that do not comply with the requirements are subject to losing funding for the entire class or program that is out of compliance.
Class Size Case Study - The Theory:
Scenario 1: Class Size → Test Scores
Scenario 2: SES → Class Size and SES → Test Scores (is the Class Size → Test Scores link causal?)
Example: Classroom Size. The regression coefficient represents a slope, ∆Y/∆X. In policy we think of the slope as an input-output formula: if I decrease class size (input), standardized test scores increase (output). Note the changes in slopes and standard errors when you add variables.
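A minimal sketch of the slope-as-input-output idea, using simulated data (all numbers here are invented for illustration, not from the case study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: class size (input) and test scores (output).
class_size = rng.uniform(15, 30, size=500)
scores = 90 - 0.8 * class_size + rng.normal(0, 3, size=500)  # true slope: -0.8

# Fit a simple regression; the coefficient on class_size is the slope dY/dX.
slope, intercept = np.polyfit(class_size, scores, deg=1)
print(round(slope, 2))  # negative: each additional student lowers the expected score
```

Read the slope as an input-output formula: removing one student from the class raises the expected test score by about |slope| points, under this (made-up) data-generating process.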
The Naïve Model:
With Teacher Skill as a Control:
Add SES to the Model:
Example: Classroom Size. Why are slopes and standard errors changing when we add "control" variables?
How do we interpret results causally? [Causal diagram: Class Size → Test Scores, with SES and Teacher Skill as potential confounders; the Class Size → Test Scores arrow is in question]
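The scenario above can be sketched with simulated data (the data-generating process below is made up for illustration): when SES drives both class size and test scores, the naive slope on class size absorbs part of the SES effect, and controlling for SES moves the slope back toward the true value.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical data-generating process:
# higher-SES districts have smaller classes AND higher scores.
ses = rng.normal(0, 1, n)
class_size = 25 - 3 * ses + rng.normal(0, 2, n)
scores = 70 - 0.5 * class_size + 4 * ses + rng.normal(0, 3, n)  # true effect: -0.5

# Naive model: scores ~ class_size
X_naive = np.column_stack([np.ones(n), class_size])
b_naive = np.linalg.lstsq(X_naive, scores, rcond=None)[0]

# Controlled model: scores ~ class_size + ses
X_ctrl = np.column_stack([np.ones(n), class_size, ses])
b_ctrl = np.linalg.lstsq(X_ctrl, scores, rcond=None)[0]

print(round(b_naive[1], 2))  # biased: picks up part of the SES effect
print(round(b_ctrl[1], 2))   # close to the true class-size effect
```

This is exactly why the slopes (and standard errors) on the previous slides change as controls are added: the naive coefficient is a mix of the program effect and the omitted confounder's effect.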
COURSE OUTLINE
The Origins of Modern Program Evaluation. The "Great Society" introduced unprecedented levels of spending on social services and marks the dawn of the modern welfare state. Econometrics also came of age, creating tools that provide the opportunity for rigorous analysis of social programs.
We need effective programs, not expensive programs.
Modern Program Evaluation
Course Objectives:
1. Understand why regressions are biased

The Seven Deadly Sins of Regression:
1. Multicollinearity
2. Omitted variable bias
3. Measurement error
4. Selection / attrition
5. Misspecification
6. Population heterogeneity
7. Simultaneity
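The first of these sins, multicollinearity, is easy to see in a short simulation (a sketch with made-up data; the `ols_se` helper is defined here for illustration and is not part of the course materials): when two predictors are nearly copies of each other, the coefficient standard errors balloon even though the overall fit is fine.

```python
import numpy as np

def ols_se(X, y):
    """Return OLS coefficient standard errors for design matrix X (with intercept)."""
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)          # residual variance estimate
    cov = sigma2 * np.linalg.inv(X.T @ X)     # classical OLS covariance matrix
    return np.sqrt(np.diag(cov))

rng = np.random.default_rng(2)
n = 500
x1 = rng.normal(size=n)

x2_lo = rng.normal(size=n)                    # low collinearity: independent of x1
x2_hi = x1 + rng.normal(scale=0.05, size=n)   # high collinearity: nearly a copy of x1

y_lo = 1 + 2 * x1 + 2 * x2_lo + rng.normal(size=n)
y_hi = 1 + 2 * x1 + 2 * x2_hi + rng.normal(size=n)

se_lo = ols_se(np.column_stack([np.ones(n), x1, x2_lo]), y_lo)
se_hi = ols_se(np.column_stack([np.ones(n), x1, x2_hi]), y_hi)

print(se_lo[1], se_hi[1])  # the standard error on x1 balloons under collinearity
```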
Modern Program Evaluation
Course Objectives:
2. Understand the tools of program evaluation
– Fixed effect models
– Instrumental variables
– Matching
– Regression discontinuity
– Time series
– Survival analysis
Modern Program Evaluation
Course Objectives:
3. How to talk to economists (and other bullies)
Modern Program Evaluation
Course Objectives:
4. Correctly apply and critique evaluation designs

Experiments
– Pretest-posttest control group
– Posttest only control group

Quasi-Experiments
– Pretest-posttest comparison group
– Posttest only comparison group
– Interrupted time series with comparison group

Reflexive Designs
– Pretest-posttest design
– Simple time series
Course Structure
First half: Understanding bias
– No text; course notes online
– Weekly homework
Second half: Evaluation design
– Text is required
– Campbell Scores
policy-research.net/programevaluation
Evaluating Internal Validity: The Campbell Scores
A Competing Hypothesis Framework

Omitted Variable Bias
– Selection
– Nonrandom Attrition

Trends in the Data
– Maturation
– Secular Trends

Study Calibration
– Testing
– Regression to the Mean
– Seasonality
– Study Time-Frame

Contamination Factors
– Intervening Events
– Measurement Error
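One of these competing hypotheses, regression to the mean, is easy to demonstrate in simulation (hypothetical data, not from the course): if students are selected into a program because of an extreme pretest score, their posttest scores drift back toward the mean even when no treatment occurs at all.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

true_ability = rng.normal(50, 10, n)
pretest = true_ability + rng.normal(0, 5, n)   # noisy measurement of ability
posttest = true_ability + rng.normal(0, 5, n)  # NO treatment between the two tests

# "Enroll" the lowest-scoring students, as a remedial program might.
enrolled = pretest < 40
gain = posttest[enrolled].mean() - pretest[enrolled].mean()
print(round(gain, 1))  # positive "improvement" despite there being no program effect
```

A reflexive pretest-posttest design on this group would credit the program with a gain that is purely an artifact of selecting on a noisy extreme score.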
Homework Policy
Homework problems each week during the 1st half of the semester
– Graded pass/fail
– Submit via D2L, please
– Working in groups is strongly encouraged
Campbell Scores are due each class during the second half of the semester
Midterm Exam (30%): confidence intervals, standard error of regression, bias
Final Exam (20%): covers evaluation design and internal validity
No late homework accepted! Homework counts for 50% of the final grade. See the syllabus for the policy on turning in by email.
[Summary statistics for two groups]

          Group 1   Group 2
1st Qu.   75.5      82
Median    89        86
Mean      83.47     87.21
3rd Qu.   96        97.5