Learning vs. accountability
What is (are) the purpose(s) of evaluation?
Alberto Martini
Learning how to spend effectively is a different task from being accountable for the money spent.
Being accountable for how the money was spent:
- Looks at the past
- Is rarely used for future decisions
- Does not cumulate over time
- Lots of data, little to say
- Makes people feel good
Learning how to spend effectively:
- Is much more difficult
- Looks at the past but is used for the future
- Does cumulate over time
- Makes people feel bad most of the time, especially politicians
So: learn what works, for whom, and why. This is not yet much emphasized by the European Commission, but impact evaluation receives more attention now than it did just three years ago. Certainly, impact evaluation involves fundamentally different cognitive tasks than monitoring for accountability.
An example of a widely used policy: giving grants to private enterprises to invest or to innovate. Is this an effective use of the money?
Is it enough to compare firms that get the subsidy with those that do not even apply for it? No.
Is it enough to compare firms before they get the subsidy with where they stand two years later? No.
R&D expenditures among the firms receiving grants:

                 AVERAGE   N
PRE              65,000    2,400
POST             75,000    2,400
OBSERVED CHANGE  10,000

Is 10,000 the true average impact of the grant?
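A minimal sketch of this before-after comparison in Python (the figures are the slide's illustrative averages; the variable names are my own):

    # Before-after comparison on the treated firms only.
    # Figures are the slide's illustrative averages in euro, not real data.
    pre_mean = 65_000    # average R&D expenditure before the grant
    post_mean = 75_000   # average R&D expenditure two years later

    observed_change = post_mean - pre_mean  # 10,000 euro
    print(observed_change)

The computation is trivial; the problem is interpretive: it attributes to the grant everything that happened to the firms in between.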
Things change over time through “natural dynamics”. How do we disentangle the change due to the policy from the myriad changes that would have occurred anyway?
Post-intervention R&D expenditures, treated vs. non-treated firms:

                   AVERAGE   N
NON-TREATED (T=0)  60,000    2,600
TREATED (T=1)      75,000    2,400
DIFFERENCE TREATED - NON-TREATED: +15,000

Is 15,000 the true impact of the policy?
WITH-WITHOUT
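A matching sketch of the with-without comparison (same assumptions as above):

    # With-without comparison of post-intervention means, in euro.
    treated_mean = 75_000      # average over the 2,400 treated firms
    non_treated_mean = 60_000  # average over the 2,600 non-treated firms

    naive_impact = treated_mean - non_treated_mean  # 15,000 euro
    print(naive_impact)  # impact plus any pre-existing difference between groups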
We cannot run experiments with firms, for political and practical reasons. There are, however, many non-experimental counterfactual methods.
The problem here is that treated and non-treated firms are different after the intervention, but they were different before it as well.
[Charts: the POST DIFFERENCE and the PRE DIFFERENCE between treated and non-treated firms]
Impact = POST DIFFERENCE - PRE DIFFERENCE = 15,000 - 10,000 = 5,000
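Combining the two comparisons gives the difference-in-differences estimate. A sketch using the four group means from the slides (the non-treated PRE mean of 55,000 is implied by the stated PRE difference of 10,000):

    # Difference-in-differences with the four observed group means (euro).
    treated_pre, treated_post = 65_000, 75_000
    non_treated_pre, non_treated_post = 55_000, 60_000

    post_difference = treated_post - non_treated_post   # 15,000
    pre_difference = treated_pre - non_treated_pre      # 10,000
    impact = post_difference - pre_difference           # 5,000
    print(impact)

This estimate is valid only under the parallelism assumption: absent the policy, the treated firms would have changed by the same amount as the non-treated ones.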
How do we know the parallelism assumption is true? With only four observed means, we cannot. The parallelism becomes testable if we have two additional pre-intervention data points (PRE-PRE).
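One simple way to use the extra pre-intervention points is a placebo difference-in-differences computed entirely before the policy: under parallelism it should come out near zero. A sketch with hypothetical figures (none of them are from the presentation):

    # Placebo DiD on two pre-intervention periods (t = -2 and t = -1).
    # All figures here are invented, for illustration only.
    treated_pre2, treated_pre1 = 62_000, 65_000
    non_treated_pre2, non_treated_pre1 = 52_000, 55_000

    placebo_impact = (treated_pre1 - non_treated_pre1) \
                   - (treated_pre2 - non_treated_pre2)
    print(placebo_impact)  # a value near 0 supports the parallelism assumption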
An Italian case: Law 488, grants to support investment
- 3 billion euro
- 6,000 firms, located in distressed areas
- 500,000 euro average grant value
What employment effect? About 2 jobs created per firm.
AVERAGE COST PER JOB CREATED: 250,000 EURO
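The cost-per-job arithmetic, spelled out with the figures stated on the slide:

    # Law 488 back-of-the-envelope calculation.
    total_budget = 3_000_000_000   # euro
    n_firms = 6_000
    jobs_per_firm = 2              # estimated jobs created per firm

    avg_grant = total_budget / n_firms        # 500,000 euro per firm
    cost_per_job = avg_grant / jobs_per_firm  # 250,000 euro per job
    print(avg_grant, cost_per_job)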
DOES THE IMPACT VARY BY THE SIZE OF THE GRANT?
What have we learned?
- It takes time and effort to learn
- Effort pays off in the long run
- Overall average impacts are not terribly informative
- If impacts vary across beneficiaries, subgroup analysis can provide useful information for targeting (see the sketch below)
- In every case, rationing is important to have
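A sketch of what such a subgroup analysis might look like, with hypothetical grant-size bands and impacts (none of these numbers come from the presentation):

    # Hypothetical subgroup analysis: cost per job by grant-size band.
    # All values are invented, for illustration only.
    subgroups = {
        "small grants":  {"avg_grant": 150_000,   "jobs_per_firm": 2.5},
        "medium grants": {"avg_grant": 500_000,   "jobs_per_firm": 2.0},
        "large grants":  {"avg_grant": 1_000_000, "jobs_per_firm": 1.5},
    }
    for name, g in subgroups.items():
        cost_per_job = g["avg_grant"] / g["jobs_per_firm"]
        print(f"{name}: {cost_per_job:,.0f} euro per job")

If the cost per job differs this much across bands, shifting the budget toward the cheaper bands creates more jobs for the same money.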