Davide Azzolini, FBK-IRVAPP
Enrico Rettore, FBK-IRVAPP
Antonio Schizzerotto, FBK-IRVAPP
MENTEP Kick-Off Meeting, Brussels, 13-14 April 2015

The MENTEP evaluation: a quick overview (Day 1)

The evaluation question Does the Technology-Enhanced Teaching Self-Assessment Tool (TET-SAT) actually improve teachers' capacity to self-assess their technology-enhanced teaching (TET) and, ultimately, increase their TET competencies? NB: the aim is to evaluate the tool, NOT the teachers or the schools!

The basics of the MENTEP evaluation design
Counterfactual approach: compare a group of teachers who use the TET-SAT with an equivalent group of teachers who do not use it. The two groups of teachers will be identified in such a way that:
- the two groups are truly comparable (no «apples vs oranges» comparison!);
- the "no one forced, no one denied" principle is satisfied.
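In standard potential-outcomes notation (our restatement, not on the slide), with D = 1 for teachers who use the TET-SAT and Y_1, Y_0 a teacher's outcome with and without the tool, the observed difference in means decomposes as

E[Y \mid D=1] - E[Y \mid D=0]
  = \underbrace{E[Y_1 - Y_0 \mid D=1]}_{\text{causal effect of the tool}}
  + \underbrace{E[Y_0 \mid D=1] - E[Y_0 \mid D=0]}_{\text{selection bias (apples vs oranges)}}

Only if the two groups are formed so that the selection-bias term is zero – which random assignment guarantees in expectation – does the comparison identify the causal effect of the tool.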

a) Sampling
A random sample of schools will be selected from the relevant population; a random sample of teachers will then be drawn within each school.
Reference population: only schools with an adequate level of ICT equipment will be considered; the education level is still to be decided.
Target sample size: approximately 1,000 teachers per country, spread over no fewer than 50 schools (ad hoc solutions for small countries).
Incentives for participation in the experiment will be offered (e.g., a lottery awarding one teacher per country a trip to Brussels).
Schools and teachers will be oversampled in order to cope with possible refusals. Moreover, administrative information (whenever available) will be used to check whether schools/teachers that refuse to cooperate are systematically different from the others.
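As an illustration only, the two-stage draw (schools first, then teachers within schools, with a margin for refusals) could look like the following sketch; the target numbers, data structure and oversampling rate are assumptions for the example, not MENTEP parameters.

import random

def draw_two_stage_sample(schools, n_schools=50, teachers_per_school=20,
                          oversampling=0.2, seed=2016):
    # schools: dict {school_id: [teacher_id, ...]}, already restricted to
    # schools with an adequate level of ICT equipment (illustrative structure).
    rng = random.Random(seed)
    # Stage 1: draw more schools than strictly needed, to cope with refusals.
    k_schools = min(int(n_schools * (1 + oversampling)), len(schools))
    sampled_schools = rng.sample(sorted(schools), k_schools)
    # Stage 2: within each sampled school, draw teachers at random, again with a margin.
    k_teachers = int(teachers_per_school * (1 + oversampling))
    return {s: rng.sample(schools[s], min(k_teachers, len(schools[s])))
            for s in sampled_schools}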

b) Benchmark and follow-up surveys
Benchmark survey (within the first two months of school year 2016/2017): before the intervention, all sampled teachers complete an online benchmark survey to (a) assess their TET competencies and (b) collect a rich set of information on their educational and professional experience.
Follow-up survey (by the end of school year 2016/2017): after the intervention, sampled teachers re-assess their TET competencies. This re-assessment provides the outcome variable by which the intervention will be evaluated.
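A minimal sketch of how the two waves could be combined to build the outcome variable; the file names, identifier and score columns are hypothetical.

import pandas as pd

# Hypothetical exports of the two online surveys.
benchmark = pd.read_csv("benchmark_2016.csv")   # teacher_id, tet_score, background variables
follow_up = pd.read_csv("follow_up_2017.csv")   # teacher_id, tet_score

# Keep teachers observed in both waves; the follow-up score (or its change
# from the benchmark) is the outcome on which the tool is evaluated.
panel = benchmark.merge(follow_up, on="teacher_id", suffixes=("_t0", "_t1"))
panel["tet_change"] = panel["tet_score_t1"] - panel["tet_score_t0"]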

c) Encouragement letters and estimation method
The sampled schools are randomly split into two halves: 'encouraged' and 'not encouraged'. A random subgroup of the teachers working in the 'encouraged' schools receives a set of encouragement letters explaining how to use the tool and why they should use it. All other teachers – both in the 'encouraged' and in the 'not encouraged' schools – receive no information.
Since not all teachers who receive the letters will make use of the tool, while some teachers who do not receive them presumably will, a naive comparison of the two groups does not identify the causal effect of the tool. The encouragement letter is therefore used as an instrumental variable: it raises the probability of using the tool among those who receive it, and this allows the impact of the tool to be estimated.
The peer effect of the intervention can be assessed by comparing teachers in the 'encouraged' schools who did not receive the letters with teachers in the 'not encouraged' schools.
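A minimal sketch of the implied instrumental-variable (Wald) estimator, assuming teacher-level arrays for the outcome, actual tool use and encouragement status; the variable names are hypothetical, and the sketch ignores covariates and school-level clustering.

import numpy as np

def wald_estimate(y, d, z):
    # y: follow-up TET outcome, d: 1 if the teacher used the TET-SAT,
    # z: 1 if the teacher received the encouragement letters (randomly assigned).
    y, d, z = (np.asarray(a, dtype=float) for a in (y, d, z))
    itt_outcome = y[z == 1].mean() - y[z == 0].mean()  # effect of the letters on the outcome
    itt_take_up = d[z == 1].mean() - d[z == 0].mean()  # effect of the letters on tool use
    return itt_outcome / itt_take_up                   # effect of tool use on 'compliers'

In practice the same ratio would be obtained from a two-stage least squares regression that also includes the benchmark covariates and clusters standard errors at the school level.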

Thank you for your attention. Contact: