ALMP
Volume of spending on ALMP
- Total spending on ALMPs was 66.6 billion euros for the EU15 in 2003 (Graph 1 from Kluve JLE 2010)
Jochen Kluve, 2010. The effectiveness of European active labor market programs, Labour Economics, 17, 904–918.
https://ec.europa.eu/info/sites/info/files/european-semester_thematic-factsheet_active-labour-market-policies_en.pdf
Basic typology of ALMPs
- Subsidized employment in the private sector
- Start-up grants (entrepreneurship)
- Subsidized employment in the public sector (public jobs)
- Training (vocational, firm-specific, general), on-site or in-class
- Job search assistance (counselling, monitoring, sanctions, guidance)
- Targeting specific groups (disabled, youth, older workers, women with children)
- Targeting specific problems (skills mismatch, spatial mismatch, long-term unemployment (LTU))
- Table 1 from Kluve JLE 2010

'Black Box' approach: contracting-out of public employment services
- Rewarding contractors: outcome-based payment, with emphasis on what service providers have achieved rather than on the process and its costs (UK, 1980s)
- Space for innovation and efficiency improvements
- Competition and lower costs
- Matching contextual conditions
- Greater flexibility to tailor services to individual clients
- Criticism: under-servicing of some jobseekers
- Criticism: agency profiteering
Heckman, J., LaLonde, R., Smith, J. The economics and econometrics of active labor market programs. Handbook of Labor Economics, Vol. III (HBLE III).
Key questions
- Do participants benefit from these programs?
- Are these programs worthwhile social investments (cost-benefit)?

Typology of outcomes (Table 4 by Card in EJ 2010)
- Employment / unemployment
- Duration: the time to exit from registered unemployment (recycling)
- Earnings
- Welfare effects
- Medium- and long-term effects
- Other?
David Card, Jochen Kluve and Andrea Weber, 2010. Active labour market policy evaluations: a meta-analysis, The Economic Journal, 120 (November), F452–F477.
Empirical methodology
Intro - basics
- Parametric cross-sectional estimator
- Before-and-after estimator
- Difference-in-differences estimator
- IV
- Matching
- Discontinuity design
- Duration analysis

Experimental methods - randomised programme evaluations are also troublesome:
- The stage of randomization matters: eligibility, application, or acceptance into a program (e.g. randomizing eligibility)
- Drop-outs and attrition lead to an "intent to treat" effect
- Substitution: alternative treatments may be available to the control (D=0) group
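The difference-in-differences idea above can be sketched on synthetic data: compare the before/after change for the treated group with the before/after change for the controls, so that group fixed effects and a common time trend cancel out. Everything below (sample size, effect size, variable names) is an illustrative assumption, not a number from any study cited in these slides.

```python
# Minimal difference-in-differences sketch on simulated data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

treated = rng.integers(0, 2, n)   # D: programme participation group
post = rng.integers(0, 2, n)      # t: before (0) / after (1) the programme

# Outcome = group fixed effect + common time trend
#           + true programme effect of 2.0 for treated units post-programme + noise.
y = (1.5 * treated + 0.8 * post + 2.0 * treated * post
     + rng.normal(0, 1, n))

# DiD estimate: (after - before) for treated minus (after - before) for controls.
did = ((y[(treated == 1) & (post == 1)].mean()
        - y[(treated == 1) & (post == 0)].mean())
       - (y[(treated == 0) & (post == 1)].mean()
          - y[(treated == 0) & (post == 0)].mean()))
print(round(did, 2))   # close to the true effect of 2.0
```

Note that the group fixed effect (1.5) and the time trend (0.8) drop out of the estimate; the identifying assumption is that, absent the programme, both groups would have followed the same trend.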
Common methodology problems 1
- There is no inherent method of choice for conducting program evaluations
- The better the data, the simpler the methodology can be
- The key problem is the unobservability of counterfactuals: we have to infer them from particular means, based on some assumptions
- Difficulty of controlling for selection biases that may lead to specious positive or negative programme effects
- The choice of an appropriate estimator should be guided by the economics underlying the problem, the data that are available, and the evaluation question being addressed
- Assumptions needed for identification of average treatment effects are not statistically testable
- No objective measure exists that could describe how good or appropriate the approach chosen by an evaluation study is
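The counterfactual and selection-bias points above can be made concrete with a small simulation: when an unobserved trait drives both programme participation and outcomes, the naive treated-vs-untreated comparison is biased even though we built the data with a known true effect. All numbers and variable names below are illustrative assumptions.

```python
# Sketch of selection bias: naive comparison vs. a known true effect (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

ability = rng.normal(0, 1, n)     # unobserved by the evaluator

# Higher-ability individuals are less likely to enter the programme
# (negative selection into treatment).
d = (rng.normal(0, 1, n) - ability > 0).astype(int)

true_effect = 1.0
y = 2.0 * ability + true_effect * d + rng.normal(0, 1, n)

# Naive cross-sectional comparison of means: treated minus untreated.
naive = y[d == 1].mean() - y[d == 0].mean()
print(round(naive, 2))   # negative here, far below the true effect of 1.0
```

The comparison is contaminated because E[ability | D=1] differs from E[ability | D=0]; the missing counterfactual is what treated individuals would have earned without the programme, and no estimator recovers it without assumptions.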
Common methodology problems 2
- Potential source of bias: changes in behavior before entry into a program (so-called "threat effects")
- Individuals' valuation of outcomes is not necessarily the same as the outcome we measure
- Short duration of observed spells (disappearance)
- Evaluating ongoing programs disrupts bureaucratic procedures, so the program evaluated is no longer the ongoing program one seeks to evaluate; the threat of disruption leads local bureaucrats to oppose evaluations
- Programs affect both participants and non-participants, but the "treatment effect" ignores the indirect effects on non-participants, assuming they are negligible
- Most studies lack a cost-benefit analysis
- The choice of an evaluation method depends on the question being asked and on the economic model generating participation and outcomes
- Sample attrition (in both D=1 and D=0 groups)
- Publication bias
Data sources
Existing data sets vs. ad-hoc collection (costs and time)
- LFS, SILC: small samples, little or no information about ALMP
- Own data collection: own sampling scheme, own choice of variables and period; costly and time consuming; problems getting support for sampling; retrospective (recall errors) and follow-up (delay) data collection
- Social security records (linked unemployment – welfare – work – tax records): large samples; cheap; little demographic information; legal restrictions, anonymization; Scandinavia and US
- (Un)employment office registry data (duration, past unemployment history, repeated spells): problem of multiple jobs and sources of earnings
- HBLE III: pp. 1992–1998
Empirical evidence 1
- Despite low expenditures, US programs have been evaluated more extensively
- Germany (45 estimates), Denmark (26), Sweden (19) and France (14): Table 2 by Card in EJ 2010
- Table 2 from Kluve JLE 2010 shows the absence of post-communist countries
- Existing evaluations are inconclusive and heterogeneous; impact depends a lot on who is treated
- Little consensus on whether ALMPs actually reduce unemployment or raise the number of employed workers, on which type of program seems most promising, or on what a given country can learn from ALMP experiences in another country
- Displacement of non-participants
Empirical evidence 2 - typical findings (Table 5 by Card in EJ 2010)
- Longer-term evaluations tend to be more favourable than short-term evaluations
- Statistically insignificant differences in the distribution of positive, negative and insignificant programme estimates between experimental and non-experimental evaluations, and between published and unpublished studies
- ALMP programmes do not appear to have differential effects on men versus women
- Traditional training programs have a modest likelihood of generating a significant positive impact on post-program employment rates
- Relative to training, both private sector incentive programs (subsidies and start-ups) and Services and Sanctions (counselling) show significantly better performance
- Direct employment programs (public works) are less likely to show a significant positive impact on post-program employment outcomes
- Programs targeting youths are significantly less likely to be effective
- ALMPs are more likely to work when the unemployment rate is higher, in particular in the case of training programs
- More: HBLE III, pp. 2069–2080, findings from European evaluations