Interviewer Observations in the National Survey of Family Growth: Lessons Learned and Unanswered Questions Brady T. West Survey Research Center, Institute for Social Research, University of Michigan-Ann Arbor.


Interviewer Observations in the National Survey of Family Growth: Lessons Learned and Unanswered Questions
Brady T. West, Survey Research Center, Institute for Social Research, University of Michigan-Ann Arbor
January 26, WSS: Benefits and Challenges in Using Paradata

Presentation Overview
– A summary of published research demonstrating the benefits and challenges of working with interviewer observations in the U.S. National Survey of Family Growth (NSFG)
– Directions for future research in this area / open questions for further investigation

The NSFG
– The major national fertility survey in the United States
– An important source of data on sexual activity, sexual behavior, and reproductive health for policy makers
– Target population (until Sept. 2015): ages 15-44; target population (Sept. 2015-present): ages 15-49
– Continuous sample design: four national quarter samples worked each year
– Face-to-face interviews (60-80 minutes) with one person from each household; ACASI for sensitive items

Paradata in the NSFG
Interviewer observations (our focus today):
– Segment (area) level (e.g., safety concerns?)
– Housing unit level (e.g., young children present?)
– Respondent level (e.g., is the selected R sexually active?)
– Post-survey observations (e.g., ACASI behaviors?)
The NSFG also collects a wide variety of other paradata, including call record data, case disposition outcomes, and keystroke information

Benefit #1: Nonresponse Adjustment
Interviewer observations collected on all sampled units are included in models of response propensity, which are used to adjust the survey weights. Observations related to both key NSFG outcomes and response propensity have the ability to reduce nonresponse bias (West, 2013a):
– Sexual activity
– Presence of young children
– Physical impediments to housing units
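A minimal sketch of one standard flavor of this idea: a weighting-class nonresponse adjustment, with classes formed from a binary interviewer observation ("young children present"). All data, weights, and class labels below are hypothetical illustrations, not NSFG variables, and a logistic propensity model would be the fuller version of what the slide describes.

```python
# Sketch: weighting-class nonresponse adjustment keyed to one
# binary interviewer observation. Hypothetical data throughout.
from collections import defaultdict

def weighting_class_adjustment(cases):
    """Each case is (base_weight, obs_class, responded). Respondent
    weights are divided by the weighted response rate within the
    case's observation class, preserving each class's weight total."""
    total = defaultdict(float)
    responded = defaultdict(float)
    for w, cls, r in cases:
        total[cls] += w
        if r:
            responded[cls] += w
    rate = {c: responded[c] / total[c] for c in total}
    return [(w / rate[cls], cls) for w, cls, r in cases if r]

cases = [
    (1.0, "kids", True), (1.0, "kids", True), (1.0, "kids", False),
    (1.0, "no_kids", True), (1.0, "no_kids", False), (1.0, "no_kids", False),
]
adjusted = weighting_class_adjustment(cases)
# "kids" respondents: 1 / (2/3) = 1.5; "no_kids": 1 / (1/3) = 3.0
```

Note that the adjusted respondent weights still sum to the full-sample weight total (6.0 here), which is the property that lets an observation correlated with response propensity repair the sample's composition.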

Challenge #1: Observation Quality
What if the observations are prone to error? They are (West, 2013a):
– Sexual activity: 78% "accuracy," based on survey data
– Young children: 72% "accuracy," based on household rosters
– Accuracy also varies substantially among interviewers (West and Kreuter, 2013; Sinibaldi et al., 2013; West et al., 2014); why?
Error-prone observations will hinder the effectiveness of nonresponse adjustments (West, 2013a; West, 2013b)
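The attenuation claim can be seen in a toy simulation (a far simpler setup than the one in West, 2013b; every parameter here is invented): a weighting-class adjustment built on a 30%-misclassified version of a covariate removes only part of the nonresponse bias that the error-free covariate would remove.

```python
# Toy simulation: error in an interviewer observation weakens the
# nonresponse adjustment that uses it. All parameters are illustrative.
import random

random.seed(20130101)
N = 20000
pop = []
for _ in range(N):
    x = random.random() < 0.5                         # true characteristic
    y = random.random() < (0.7 if x else 0.3)         # survey outcome
    r = random.random() < (0.8 if x else 0.4)         # responds?
    x_obs = x if random.random() >= 0.3 else (not x)  # 30% misclassified
    pop.append((x, x_obs, y, r))

true_mean = sum(y for _, _, y, _ in pop) / N

def adjusted_mean(pop, which):
    """Weighting-class adjusted respondent mean, with classes formed
    from element `which` of each tuple (0 = true value, 1 = observed)."""
    est = 0.0
    for cls in (False, True):
        in_cls = [c for c in pop if c[which] == cls]
        resp_y = [c[2] for c in in_cls if c[3]]
        est += (len(in_cls) / N) * (sum(resp_y) / len(resp_y))
    return est

bias_true_obs = adjusted_mean(pop, 0) - true_mean   # near zero
bias_noisy_obs = adjusted_mean(pop, 1) - true_mean  # residual bias remains
```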

Challenge #1: Observation Quality
So what can we do about this? One option: provide the interviewers with a list of important predictors of what they are trying to observe (West and Kreuter, 2015). This type of intervention improves error rates, and the list is now programmed to be optionally available to interviewers (if they need it)

Challenge #1: Observation Quality
Ongoing research: understand the cues and strategies that different interviewers use to make their observations, then standardize training around the most effective strategies (West and Kreuter, 2011; West et al., 2014; West et al., submitted); randomize the type of training received? Current work suggests that a focus on the features of housing units and the persons within those units is helpful (as opposed to guessing, area features, etc.), in addition to diversifying the cues used

Benefit #2: Interviewer Evaluation
The interviewer observations collected inform eligibility, contact (daily), and cooperation (daily) propensity models (Krueger and West, 2014). These models are used to compute expectations of ________ propensity at a given point in time. Interviewer performance can then be evaluated by comparing actual daily outcomes to these expectations and averaging the deviations for a given interviewer (West and Groves, 2013)
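The evaluation logic on this slide can be sketched in a few lines. This is a simplified stand-in for the PAIP score of West and Groves (2013), not their implementation: the outcomes and predicted propensities below are invented rather than the output of daily propensity models.

```python
# Sketch of a propensity-adjusted performance indicator: average the
# gap between each attempt's observed outcome (1 = contact) and its
# model-predicted propensity, within interviewer. Hypothetical data.
from collections import defaultdict

def performance_scores(attempts):
    """attempts: iterable of (interviewer_id, outcome, predicted).
    Returns mean(outcome - predicted) per interviewer; a score above
    zero means the interviewer beat expectations given case difficulty."""
    devs = defaultdict(list)
    for iwer, y, p in attempts:
        devs[iwer].append(y - p)
    return {iwer: sum(d) / len(d) for iwer, d in devs.items()}

attempts = [
    ("A", 1, 0.6), ("A", 0, 0.2), ("A", 1, 0.7),
    ("B", 0, 0.6), ("B", 0, 0.5), ("B", 1, 0.9),
]
scores = performance_scores(attempts)
# A: mean(0.4, -0.2, 0.3) = 0.5/3, above expectation
# B: mean(-0.6, -0.5, 0.1) = -1.0/3, below expectation
```

Because each deviation is benchmarked against the predicted propensity, an interviewer working hard cases is not penalized simply for drawing hard cases.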

Challenge #2: Model Specification
How do we know if a given type of propensity model has been correctly specified? Error-prone interviewer observations can once again play a role: how do we adjust for these errors when fitting the models? Replace the observations with predictions? Should random interviewer effects be included in the models (so that interviewers are evaluated against themselves)? What if paradata are missing?

Benefit #3: Response Quality
– Use post-survey interviewer observations to identify respondents who may be providing data of poor quality
– Assess self-reported interviewer behaviors that may affect responses on sensitive items during ACASI (West and Peytcheva, 2014)

Benefit #3: Response Quality
Interviewers vary substantially in terms of how often they sit close enough to see the ACASI screen. Respondents' ACASI reports of sensitive behaviors in the NSFG vary as a function of whether the interviewer says that they can see the screen
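A hedged illustration of the kind of comparison behind this finding: a pooled two-proportion z-test of ACASI reporting rates by screen visibility. The counts are invented for illustration; they are not NSFG estimates.

```python
# Two-sample z-test for a difference in proportions (pooled SE),
# applied to hypothetical reporting rates by screen visibility.
import math

def two_prop_ztest(x1, n1, x2, n2):
    """z statistic for H0: p1 = p2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 120/400 report a sensitive behavior when the interviewer
# can see the screen vs. 160/400 when the interviewer cannot.
z = two_prop_ztest(120, 400, 160, 400)  # negative: lower reporting when seen
```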

Challenge #3: Post-Survey Observations
Are post-survey interviewer observations reliable indicators of data quality (Wang et al., 2013)? Past literature has shown that these observations are a function of respondent characteristics (rather than data quality), and that there is consistent evidence of interviewer variance in them. Can these observations be combined (e.g., via latent class analysis) to reliably indicate data quality? Open question!

Benefit #4: Responsive Survey Design
The paradata collected in the NSFG are examined daily in a responsive survey design (RSD) framework to monitor field production and efficiency. Interventions are implemented when the paradata suggest that certain processes may be introducing bias or inefficiency. Example: monitoring the percentages of respondents and nonrespondents with children, and increasing interviewer focus on households with no children when respondents show a higher tendency to have children (Wagner et al., 2012)
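A minimal sketch of the daily check described above, assuming a made-up 10-point threshold (the data and the intervention rule are hypothetical, not the NSFG's actual monitoring logic): compare the share of interviewed cases observed to have young children against the share among still-active cases, and flag the imbalance once the gap grows too large.

```python
# Sketch of daily RSD monitoring on one interviewer observation.
# Threshold and data are hypothetical.
def kids_imbalance(cases, threshold=0.10):
    """cases: list of (has_kids_obs, interviewed). Returns the with-kids
    share among respondents, the share among active cases, and a flag
    suggesting an intervention (e.g., prioritizing no-kids households)."""
    resp = [k for k, done in cases if done]
    active = [k for k, done in cases if not done]
    resp_share = sum(resp) / len(resp)
    active_share = sum(active) / len(active)
    return resp_share, active_share, abs(resp_share - active_share) > threshold

cases = ([(1, True)] * 60 + [(0, True)] * 40 +
         [(1, False)] * 40 + [(0, False)] * 60)
resp_share, active_share, flag = kids_imbalance(cases)
# 0.60 with kids among respondents vs. 0.40 among active cases: flagged
```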

Challenge #4: Bias Indicators
Are response rates in these different subgroups really the best indicators of nonresponse bias, especially if the interviewer observations are prone to error? These observations, provided that they are of high enough quality, could inform a variety of possible nonresponse bias indicators (Nishimura et al., forthcoming; see also Krueger and West, 2014)
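One candidate from this literature (a hedged sketch, not necessarily the indicator adopted in the NSFG) is the R-indicator of Schouten and colleagues: R = 1 - 2 * S(p_hat), where S(p_hat) is the standard deviation of estimated response propensities. Values near 1 suggest a response that is balanced on the model's covariates, which also makes clear why error in the observations feeding the propensity model propagates directly into the indicator.

```python
# R-indicator sketch: balance of response, computed from estimated
# propensities. The propensity values below are hypothetical.
import statistics

def r_indicator(propensities):
    """1 - 2 * population standard deviation of estimated propensities."""
    return 1 - 2 * statistics.pstdev(propensities)

balanced = [0.55, 0.52, 0.50, 0.49, 0.54]  # similar propensities: R near 1
skewed = [0.90, 0.85, 0.20, 0.15, 0.50]    # dispersed propensities: R low
```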

NSFG: Summary
– The NSFG is committed to using a variety of interviewer observations to improve its survey operations and its ultimate data products
– The importance of these observations has to be clearly communicated to interviewers!
– An active program of research on paradata in general is necessary to fully understand the measurement error properties of these data
– The NSFG has a long history of collaborating with other researchers interested in these areas of research!

References
Krueger, B.S., and West, B.T. (2014, authors alphabetical). Assessing the Potential of Paradata and Other Auxiliary Information for Nonresponse Adjustments. Public Opinion Quarterly, 78(4).
Nishimura, R., Wagner, J., and Elliott, M. (2015). Alternative Indicators for the Risk of Nonresponse Bias: A Simulation Study. International Statistical Review. DOI: /insr.
Sinibaldi, J., Durrant, G.B., and Kreuter, F. (2013). Evaluating the Measurement Error of Interviewer Observed Paradata. Public Opinion Quarterly, 77.
Wagner, J., West, B.T., Kirgis, N., Lepkowski, J.M., Axinn, W.G., and Kruger-Ndiaye, S. (2012). Use of Paradata in a Responsive Design Framework to Manage a Field Data Collection. Journal of Official Statistics, 28(4).
Wang, Y., West, B.T., and Liu, M. (2013). Interviewer Perceptions of Survey Data Quality. Presented at the 38th Annual Conference of the Midwest Association for Public Opinion Research (MAPOR), Chicago, IL, 11/23/2013.
West, B.T. (2013a). An Examination of the Quality and Utility of Interviewer Observations in the National Survey of Family Growth. Journal of the Royal Statistical Society, Series A (General), 176(1).
West, B.T. (2013b). The Effects of Error in Paradata on Weighting Class Adjustments: A Simulation Study. Chapter 15 in Improving Surveys with Paradata: Analytic Use of Process Information. Wiley Publishing.
West, B.T., and Groves, R.M. (2013). The PAIP Score: A Propensity-Adjusted Interviewer Performance Indicator. Public Opinion Quarterly, 77(1).

References
West, B.T., Li, D., and Ma, Y. (under review). The Identification of Effective Observational Strategies for Survey Interviewers. Submitted to Sociological Methods and Research, January.
West, B.T., and Kreuter, F. (2011). Observational Strategies Associated with Increased Accuracy of Interviewer Observations: Evidence from the National Survey of Family Growth. In JSM Proceedings, AAPOR. Alexandria, VA: American Statistical Association.
West, B.T., and Kreuter, F. (2015). A Practical Technique for Improving the Accuracy of Interviewer Observations of Respondent Characteristics. Field Methods, 27(2).
West, B.T., and Kreuter, F. (2013). Factors Affecting the Accuracy of Interviewer Observations: Evidence from the National Survey of Family Growth (NSFG). Public Opinion Quarterly, 77(2).
West, B.T., Kreuter, F., and Trappmann, M. (2014). Is the Collection of Interviewer Observations Worthwhile in an Economic Panel Survey? New Evidence from the German Labor Market and Social Security (PASS) Study. Journal of Survey Statistics and Methodology, 2(2).
West, B.T., and Peytcheva, E. (2014). Can Interviewer Behaviors During ACASI Affect Data Quality? Survey Practice, 5(7).

Thank You!
Thanks to Donsig for the invitation! Please feel free to email me if you would like copies of any of the papers cited here. I'm open to questions and suggestions!