Evaluating the effectiveness of innovation policies: Lessons from the evaluation of Latin American Technology Development Funds
Micheline Goedhuys

Presentation transcript:

Evaluating the effectiveness of innovation policies: Lessons from the evaluation of Latin American Technology Development Funds
Micheline Goedhuys

DEIP, Amman, June

Structure of presentation
1. Introduction to the policy evaluation studies: policy background; features of TDFs; evaluation setup (outcomes to be evaluated, data sources)
2. Evaluation methodologies: the evaluation problem; addressing selection bias
3. Results from the Latin American TDF evaluation: example of results, summary of results, concluding remarks

A. Introduction: Policy background
Constraints to performance in Latin America:
- S&T falling behind in relative terms: a small and declining share of world R&D investment, an increasing gap with developed countries, and a loss of ground to other emerging economies.
- Low participation of the productive sector in R&D investment: lack of a skilled workforce with technical knowledge, macro volatility, financial constraints, weak IPR, low quality of research institutes, lack of mobilized government resources, and a rentier mentality.

A. Introduction: Policy background
Policy response: a shift in policy
- From a focus on the promotion of scientific research activities in public research institutes, universities and state-owned enterprises (SOEs)
- To (1990-…) the needs of the productive sector, with instruments that foster the demand for knowledge by end users and support the transfer of know-how to firms
TDFs emerged as an instrument of S&T policy.

A. Introduction: Policy background
IDB: evaluating the impact of a sample of IDB S&T programmes, focusing on two frequently used instruments:
- Technology Development Funds (TDF): stimulate innovation activities in the productive sector through R&D subsidies
- Competitive research grants (CRG)
OVE coordinated and compiled the results of the TDF evaluations in Argentina, Brazil, Chile and Panama (Colombia).

B. Introduction: Selected TDFs

Country and Period | Name | Tools
Argentina | FONTAR-TMP I | Targeted Credit
Argentina | FONTAR ANR | Matching Grants
Brazil | ADTEN | Targeted Credit
Brazil | FNDCT | Matching Grants
Chile | FONTEC-line 1 | Matching Grants
Panama | FOMOTEC | Matching Grants

B. Introduction: Features of TDFs
- Demand driven
- Subsidy
- Co-financing
- Competitive allocation of resources
- Execution by a specialised agency

C. Introduction: Evaluation setup
Evaluation of TDFs at the recipient (firm) level. Impact on:
- R&D input additionality
- Behavioural additionality
- Innovative output
- Performance: productivity, employment and their growth


Indicator | Data source
Input additionality: amount invested by beneficiaries in R&D | Firm balance sheets; innovation surveys; industrial surveys
Behavioural additionality: product/process innovation, linkages with other agents in the NIS | Innovation surveys
Innovative outputs: patents; sales due to new products | Patent databases; innovation surveys
Performance: total factor productivity; labour productivity; growth in sales, exports and employment | Firm balance sheets; innovation surveys; industrial surveys; labour surveys

A. The evaluation problem (in words)
To measure the impact of a program, the evaluator is interested in the counterfactual question: what would have happened to the beneficiaries if they had not had access to the program? This counterfactual is not observed. We can only observe the performance of non-beneficiaries and compare it with the performance of beneficiaries.

A. The evaluation problem (in words)
This comparison, however, is not sufficient to tell us the impact of the program: it captures correlation rather than causality. Why not? Because a range of characteristics may affect both the probability of accessing the program AND performance on the outcome indicators (e.g. R&D intensity, productivity), for example firm size, age, or exporting.

A. The evaluation problem (in words)
This means that 'being in the treatment group or not' is not the result of a random draw: firms select into a specific group along both observable and unobservable characteristics. The effect of this selection has to be taken into account if one wants to measure the impact of the program on firm performance. More formally…

A. The evaluation problem
Define Y^T as the innovation expenditure of a firm in a specific year if the firm participates in the TDF, and Y^C as the expenditure of the same firm if it does not participate in the program. Measuring the program impact requires measuring the difference (Y^T - Y^C), which is the effect of having participated in the program for firm i.

A. The evaluation problem
Computing (Y^T - Y^C) requires knowledge of the counterfactual outcome, which is not empirically observable, since a firm cannot be observed simultaneously as a participant and as a non-participant.

A. The evaluation problem
By comparing data on participating (D=1) and non-participating (D=0) firms, we can compute an average effect of program participation:
E[Y^T | D=1] - E[Y^C | D=0]
Subtracting and adding E[Y^C | D=1] decomposes this observed difference into
E[Y^T - Y^C | D=1] + ( E[Y^C | D=1] - E[Y^C | D=0] ),
i.e. the average effect of treatment on the treated plus a selection bias term.

A. The evaluation problem
Only if there is no selection bias does the average difference between participants and non-participants give an unbiased estimate of the program impact. There is no selection bias if participating and non-participating firms are similar with respect to the dimensions that are likely to affect both the level of innovation expenditures and TDF participation, e.g. size, age, exporting, solvency…, which affect both R&D expenditures and the decision to apply for a grant.
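The slides contain no numerical illustration, but the following small simulation (all numbers and variable names are invented for this sketch) makes the decomposition concrete: because both potential outcomes are generated artificially, the normally unobservable selection bias term can be computed directly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
size = rng.normal(0, 1, n)                            # observable characteristic driving selection
d = rng.binomial(1, 1 / (1 + np.exp(-2 * size)))      # larger firms are more likely to participate
y_c = 1.0 + 0.8 * size + rng.normal(0, 1, n)          # Y^C: innovation spending without the program
y_t = y_c + 0.5                                       # Y^T: assumed true impact of 0.5
y_obs = np.where(d == 1, y_t, y_c)                    # in reality only one outcome is observed per firm

naive = y_obs[d == 1].mean() - y_obs[d == 0].mean()   # E[Y^T | D=1] - E[Y^C | D=0]
att = (y_t - y_c)[d == 1].mean()                      # E[Y^T - Y^C | D=1]
bias = y_c[d == 1].mean() - y_c[d == 0].mean()        # E[Y^C | D=1] - E[Y^C | D=0]
print(f"naive difference {naive:.2f} = ATT {att:.2f} + selection bias {bias:.2f}")
```

Here the naive comparison substantially overstates the true effect, purely because larger firms both participate more often and would have spent more on innovation anyway.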

B. The evaluation problem avoided
Incorporating randomized evaluation into programme design: random assignment of treatment (participation in the program) implies that there are no systematic pre-existing differences between treated and non-treated firms, so the selection bias is zero. This is, however, hard to implement for certain types of policy instruments.

B. Controlling for selection bias
Controlling for observable differences: develop a statistically robust control group of non-beneficiaries by identifying comparable participating and non-participating firms, conditional on a set of observable variables X; in other words, control for the pre-existing observable differences using econometric techniques, e.g. propensity score matching.

B. Propensity score matching (PSM)
If only one dimension (e.g. size) affected both treatment (participation in the TDF) and the outcome (R&D intensity), it would be relatively simple to find pairs of matching firms. When treatment and outcome are determined by a multidimensional vector of characteristics (size, age, industry, location...), this becomes problematic. The solution is to find pairs of firms that have an equal or very similar probability of being treated (receiving TDF support).

B. PSM
Using probit or logit analysis on the whole sample of beneficiaries and non-beneficiaries, we estimate the probability (propensity) that a firm participates in the program:
P(D=1) = F(X), where X is a vector of observable characteristics.
Purpose: to find, for each participant (D=1), at least one non-participant with an equal or very similar probability of participating, which is then selected into the control group.
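A minimal sketch of this step, assuming a hypothetical firm-level dataset (the column names and the logit specification are illustrative, not those of the actual evaluations): a logit for P(D=1) = F(X) is fitted with statsmodels and the fitted probability is stored as the propensity score.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical firm-level data; in the actual evaluations the covariates would
# come from innovation/industrial surveys and balance sheets.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "log_size": rng.normal(3, 1, 500),
    "age": rng.integers(1, 40, 500),
    "exporter": rng.binomial(1, 0.3, 500),
})
df["tdf"] = rng.binomial(1, 1 / (1 + np.exp(3 - 0.8 * df["log_size"])))  # selection on size

# Logit for P(D=1) = F(X): the propensity that a firm participates in the program.
X = sm.add_constant(df[["log_size", "age", "exporter"]])
df["pscore"] = sm.Logit(df["tdf"], X).fit(disp=0).predict(X)
print(df.groupby("tdf")["pscore"].mean())   # beneficiaries should have higher scores on average
```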

B. PSM
PSM reduces the multidimensional problem of several matching criteria to a single measure of distance. There are several ways of matching on this measure, e.g. nearest neighbour, matching within a predefined range, kernel-based matching...

B. PSM
Estimating the impact (the average effect of treatment on the treated):
ATT = E[ E(Y1 | D=1, p(x)) - E(Y0 | D=0, p(x)) | D=1 ]
where Y is the impact variable, D = {0,1} is a dummy variable for participation in the program, x is a vector of pre-treatment characteristics, and p(x) is the propensity score.
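A sketch of one-to-one nearest-neighbour matching on the propensity score and the resulting ATT, again on invented data (the data-generating process, the effect size of 0.5 and the variable names are assumptions for illustration, not results from the TDF evaluations).

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: firm size drives both TDF participation and R&D intensity.
rng = np.random.default_rng(2)
n = 2_000
size = rng.normal(0, 1, n)
d = rng.binomial(1, 1 / (1 + np.exp(-1.5 * size)))
y = 1.0 + 0.8 * size + 0.5 * d + rng.normal(0, 1, n)      # 0.5 = assumed true effect

# Step 1: propensity scores from a logit of participation on observables.
X = sm.add_constant(size)
p = sm.Logit(d, X).fit(disp=0).predict(X)

# Step 2: for each beneficiary, pick the non-beneficiary with the closest propensity score.
treated = np.where(d == 1)[0]
controls = np.where(d == 0)[0]
nearest = controls[np.abs(p[controls][None, :] - p[treated][:, None]).argmin(axis=1)]

# Step 3: ATT = average outcome gap between beneficiaries and their matched controls.
att = (y[treated] - y[nearest]).mean()
naive = y[d == 1].mean() - y[d == 0].mean()
print(f"naive difference: {naive:.2f}, matched ATT: {att:.2f} (true effect 0.50)")
```

Matching on a single scalar score is what makes the multidimensional comparison tractable; in practice one would also check covariate balance between the matched groups.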

B. Difference-in-differences (DID)
The treated and control groups of firms may also differ in unobservable characteristics, e.g. management skills. If panel data are available (observations for pre-treatment and post-treatment periods), the impact of time-invariant unobservable differences and of common time shocks can be neutralised by taking the difference-in-differences of the impact variable. Important assumption: the unobservables do not change over time. In the DID case, the impact variable is effectively a growth rate (the change between the two periods).
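A minimal DID sketch on a hypothetical matched panel (the firms, column names and figures are made up): the within-firm difference removes time-invariant unobservables, and the across-group difference removes time shocks common to both groups.

```python
import pandas as pd

# Hypothetical matched panel: one row per firm, pre- and post-programme R&D intensity (%).
panel = pd.DataFrame({
    "treated": [1, 1, 1, 0, 0, 0],
    "rd_pre":  [0.9, 1.1, 1.0, 0.8, 1.0, 0.9],
    "rd_post": [1.4, 1.5, 1.3, 0.9, 1.0, 1.0],
})

# First difference within firms removes time-invariant unobservables (e.g. management skills).
panel["delta"] = panel["rd_post"] - panel["rd_pre"]

# Second difference across groups removes time shocks common to treated and control firms.
did = (panel.loc[panel["treated"] == 1, "delta"].mean()
       - panel.loc[panel["treated"] == 0, "delta"].mean())
print(f"DID estimate of the programme impact: {did:.2f} percentage points")
```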

Example of results
Impact of ADTEN (Brazil) on (private) R&D intensity. Single difference in 2000 after PSM, 92 observations in each group:
[(R&D/sales 2000, beneficiaries) - (R&D/sales 2000, control)]
Beneficiaries: 1.18%
Control group: 0.52%
Difference: 0.66%
A positive and significant impact, net of the subsidy.

Example of results
Impact of FONTAR-ANR (Argentina) on (public + private) R&D intensity (= R&D expenditures/sales). Difference-in-differences with PSM, 37 observations in each group:
[(R&D intensity after ANR, beneficiaries - R&D intensity before ANR, beneficiaries) - (R&D intensity after ANR, control - R&D intensity before ANR, control)]
Beneficiaries ( ) = 0.12
Control group ( ) =
DID: 0.19
A positive and significant impact, gross of the subsidy.

Results: summary
The impact of the programs on firm behaviour and outcomes becomes progressively weaker the further one moves from the immediate target of the policy instrument: there is clear evidence of a positive impact on R&D, weaker evidence of some behavioural effects, and almost no evidence of an immediate positive impact on new product sales or patents. This is to be expected, given the relatively short time span over which the impacts were measured.

Results
There is no clear evidence that the TDFs can significantly affect firms' productivity and competitiveness within a five-year period, although there is a suggestion of positive impacts. These outcomes, however, which are often the general objective of the programs, are more likely to materialise over a longer policy horizon. The evaluation also does not take into account potential positive externalities that may result from the TDFs.

Results
The evaluation design should clearly identify:
- the rationale;
- short, medium and long run expected outcomes;
- periodic collection of primary data on the programs' beneficiaries and on a group of comparable non-beneficiaries;
- repetition of the evaluation on the same sample, so that long run impacts can be clearly identified;
- periodic repetition of the impact evaluation on new samples, to identify potential needs for re-targeting the policy tools.

Concluding remarks
The data needs of this type of evaluation are evident. The involvement and commitment of statistical offices is needed to merge the survey data that make these analyses possible. The merging and accessibility of several data sources create unprecedented opportunities for the evaluation and monitoring of policy instruments. Thank you!