Quality indicators for measuring and enhancing the composition of survey response Q2008 – Special topic session, July 9 Jelke Bethlehem and Barry Schouten.

Session programme
- Enhancing response by differentiated data collection strategies (Barry Schouten and Jelke Bethlehem)
- Management tools for enhancing the composition of survey response (Anne Sundvoll and Øyvin Kleven)
- Use of process data to determine the number of call attempts in a telephone survey (Annica Isaksson, Peter Lundquist and Daniel Thorburn)
- An indicator of nonresponse bias derived from call-back analysis (Paul Biemer)
- Discussion

Enhancing response by differentiated data collection strategies Q2008 – Quality indicators for measuring and enhancing the composition of survey response Barry Schouten and Jelke Bethlehem

Enhancing the composition of response The session papers share a common aim: to minimize the effects of nonresponse by using auxiliary information and process information (paradata) to change either the data collection strategy or the adjustment strategy. In this paper we propose to differentiate data collection strategies as a function of auxiliary information and paradata.

Differentiated data collection? Responsive designs, Groves & Heeringa (2006): "The ability to monitor continually the streams of process data and survey data creates the opportunity to alter the design during the course of data collection to improve survey cost efficiency and to achieve more precise, less biased estimates."
Responsive design:
- Learning period during fieldwork (phases)
- Especially useful for new or infrequent surveys with little available auxiliary information
- Data collection differentiated at the macro level

Differentiated data collection? What if a survey has been running for a long time and auxiliary information is available beforehand? Differentiate data collection at the micro level: different strategies for different households/persons/businesses.
- Static: independent of paradata
- Dynamic: depending on paradata
Goal: an optimal balance between quality and costs

What are the ingredients of differentiated strategies? We need:
1. Auxiliary information
2. Strategies
3. Optimization criteria
4. Tools
A model

Auxiliary information Auxiliary information:
- Must be available beforehand for the full sample, or
- Can be collected during fieldwork for the full sample.
Sources:
- Frame data
- Registers and administrative data
- Paradata (fieldwork staff, interviewers)
Crucial: need for models and auxiliary variables that give a strong and consistent explanation of the different types of nonresponse (non-contact, refusal).
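A minimal sketch of how such auxiliary variables might feed a nonresponse model: a weighting-class estimate of response propensities, where units sharing an auxiliary class get the observed response fraction of that class. The data and class labels are invented for illustration, not taken from the paper.

```python
from collections import defaultdict

def class_propensities(records):
    """records: iterable of (aux_class, responded) pairs built from
    frame/register variables plus fieldwork outcomes."""
    counts = defaultdict(lambda: [0, 0])  # class -> [responses, sampled]
    for aux_class, responded in records:
        counts[aux_class][0] += int(responded)
        counts[aux_class][1] += 1
    return {c: resp / n for c, (resp, n) in counts.items()}

# Hypothetical sample: auxiliary class from the frame, response outcome
# from fieldwork.
sample = [("urban", True), ("urban", False), ("urban", False),
          ("rural", True), ("rural", True), ("rural", False)]
props = class_propensities(sample)
# urban: 1 of 3 responded; rural: 2 of 3 responded
```

In practice one would model non-contact and refusal separately, as the slide stresses, since different auxiliary variables explain each.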

Data collection strategies A strategy is a series of pre-defined decision rules that depend only on auxiliary variables from the frame and registers, and possibly also on paradata. Components:
- Data collection modes (f2f, telephone, web)
- Incentives
- Advance letters
- Reminders
- Contact strategy (Isaksson, Lundquist, Thorburn)
- Refusal conversion or call-back (Biemer)
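Such a strategy can be sketched as a decision-rule function: a static part driven only by frame/register variables, and an optional dynamic part driven by paradata. The rules and variable names (urban, has_phone, failed_calls) are invented for illustration.

```python
def assign_strategy(unit, paradata=None):
    """Return a data collection strategy for one sample unit."""
    # Static part: depends only on frame/register variables.
    if unit["urban"]:
        strategy = {"mode": "web", "incentive": False}
    elif unit["has_phone"]:
        strategy = {"mode": "telephone", "incentive": False}
    else:
        strategy = {"mode": "f2f", "incentive": True}
    # Dynamic part: refine using paradata collected during fieldwork.
    if paradata and paradata.get("failed_calls", 0) >= 3:
        strategy["mode"] = "f2f"  # escalate after repeated non-contact
    return strategy

s = assign_strategy({"urban": False, "has_phone": True},
                    paradata={"failed_calls": 4})
# the dynamic rule overrides the static telephone choice
```

A static design fixes the rules before fieldwork; the dynamic branch is what makes the design depend on paradata.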

Modelling strategies and response [Diagram relating the variables X, Y, S, C, P, R and I for a sample unit and its paradata.]

Notation
Strategies: [formula]
Auxiliary: [formula]
Distribution: [formula]
Costs: [formula]
Response: [formula]
Allocation: [formula] with [formula]

Sample size, costs and response rate
Expected sample size: [formula]
Expected costs: [formula]
Expected response rate: [formula]
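The slide's formulas were not captured in this transcript, but the expectations follow directly from the allocation and response probabilities: weight each strategy's per-unit cost (or response probability) by the probability that the unit receives that strategy, then sum over units. A sketch with invented numbers:

```python
def expected_cost_and_response(alloc, unit_costs, resp_prob):
    """alloc[i][s]: probability that unit i is allocated strategy s;
    unit_costs[s]: cost of applying strategy s to one unit;
    resp_prob[i][s]: response probability of unit i under strategy s."""
    n = len(alloc)
    cost = sum(alloc[i][s] * unit_costs[s]
               for i in range(n) for s in unit_costs)
    rate = sum(alloc[i][s] * resp_prob[i][s]
               for i in range(n) for s in unit_costs) / n
    return cost, rate

# Two hypothetical strategies and two sample units.
unit_costs = {"web": 5.0, "f2f": 50.0}
alloc = [{"web": 1.0, "f2f": 0.0},   # unit 0: always web
         {"web": 0.5, "f2f": 0.5}]   # unit 1: mixed allocation
resp = [{"web": 0.3, "f2f": 0.8},
        {"web": 0.2, "f2f": 0.9}]
cost, rate = expected_cost_and_response(alloc, unit_costs, resp)
# cost = 32.5, rate = 0.425
```

Shifting allocation mass from web to f2f raises both quantities, which is exactly the quality/cost trade-off the optimization targets.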

Objective function
Criterion: [formula] conditional on [formula]
Obvious choice: MSE conditional on [formula]

Objective function; example 7th Framework Programme project RISQ. Proposal: minimize the impact under a worst-case scenario, the maximal absolute bias. Objective function: [formula], based on estimated response probabilities.
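The exact objective function on the slide was not captured, but the worst-case reasoning can be illustrated with the standard bound: the nonresponse bias of a respondent mean is approximately cov(ρ, y)/ρ̄, and |cov(ρ, y)| ≤ S(ρ)·S(y), so the maximal absolute bias is S(ρ)·S(y)/ρ̄ computed from the estimated response propensities. A sketch under this assumption:

```python
import statistics

def max_abs_bias(propensities, sd_y):
    """Worst-case absolute nonresponse bias of a respondent mean:
    bias ~ cov(rho, y) / mean(rho), and |cov(rho, y)| <= S(rho) * S(y),
    hence |bias| <= S(rho) * S(y) / mean(rho)."""
    rho_bar = statistics.mean(propensities)
    s_rho = statistics.pstdev(propensities)  # sd of estimated propensities
    return s_rho * sd_y / rho_bar

# Illustrative numbers: mean(rho) = 0.5, S(rho) = 0.1, S(y) = 10
bound = max_abs_bias([0.4, 0.6], sd_y=10.0)
# bound = 0.1 * 10 / 0.5 = 2.0
```

Minimizing S(ρ) relative to ρ̄ (i.e., making response propensities as equal as possible) shrinks this bound regardless of y, which is the rationale behind representativity indicators.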

Strategy allocation and sampling designs There is a strong resemblance to traditional sample designs if we view strategy allocation probabilities and response probabilities as inclusion probabilities:
- Response is the second stage in the "design"
- Stratification of strategy allocation
From a variance point of view it is optimal to make the product of response and strategy allocation probabilities proportional to the target variable(s) in the survey.

Tools We need tools to:
- allocate sample units to different strategies,
- translate paradata and auxiliary information into indicators that enable monitoring and controlling,
- follow the strategy during fieldwork.
These tools must be compatible with existing survey management tools and must be user-friendly for fieldwork staff. This is one of the main objectives of the FP7 project RISQ (Representativity Indicators for Survey Quality).

Discussion Given a set of candidate strategies:
- Can we predict response probabilities without bias?
- Is response behaviour sufficiently stable over time?
- Which objective function should we choose to optimize quality and costs?
- Is it realistic to employ individual cost functions?
- What demands come from fieldwork management?