Discussion of Plans for Designing the Recall Period for the Consumer Expenditure Interview Survey
David Cantor, Westat and Joint Program in Survey Methodology
Prepared for the 2010 Consumer Expenditure Surveys Methods Workshop, December 8-9, 2010

Outline
- General issues
- Key aspects of the recall period
- Key tradeoffs to balance
- Best practices for designing the recall period

General Issue: Motivation
- The recall task is among the hardest for a survey respondent
  – Requires time to search memory; need to prevent premature denial of eligible events
  – May need to try different search strategies
- Motivation to complete these tasks has dramatic effects on data quality
  – Success of the Event History Calendar (Belli and Callegaro, 2008)
  – Cognitive interview debriefing

[Figure slide. Source: Kindermann, C., Lynch, J., and Cantor, D. (1997) "Effects of the Redesign on Victimization Estimates." Bureau of Justice Statistics, NCJ.]

Burden Affects Motivation
- Current CE = 1 hour?
  – Mock interview took 2.5 hours
  – Significant time is needed to review both general and specific questions
- Important to set a realistic limit
  – Depends on the methods used to promote recall (e.g., use of records) and on the use of proxy interviewing
  – 1 hour is the upper limit (probably too long), based on cognitive interview and focus group experience

General Issue: Interviewers
- Interviewers play a key role in promoting recall
- There is a natural tension between gaining cooperation and effective probing
  – Current timings: a 1-hour CE interview involves significant shortcuts on probing
  – Panel context: the respondent has already been exposed to the questionnaire, which may encourage shortcuts
- Interviewer variance studies for the CE (Cho et al., 2006; others?)

NCS Redesign: Monitoring Matters
- Biderman et al.: CATI interviews conducted by SRC (Michigan) produced victimization rates twice as high as equivalent Census field interviews
- Hubble and Wilder (1988): Census CATI interviews produced significantly higher rates than Census field interviews (see the next two slides)
- The effects seem to have carried over to the production NCVS (Cantor and Lynch, 2005)

Comparison of Field and CATI Telephone Interviews
[Table: victimization rates by type of crime (All Personal, Robbery, Assault, Theft; All Household, Burglary, Theft, Motor Vehicle Theft) under de-centralized field interviewing vs. CATI, with their ratio; the numeric cells are not recoverable here. The CATI rates were significantly higher (** p < .05) for every crime type except Motor Vehicle Theft (ns).]
Source: Hubble, D. and Wilder, B.E. (1988) "Preliminary results from the National Crime Survey CATI Experiment." Proceedings of the American Statistical Association: Survey Methods Section.

Centralized vs. Decentralized Interviewing; 3- vs. 6-Month Reference Periods

Type of Crime          Ratio: CATI /              Ratio: 3-month /
                       decentralized telephone    6-month recall period
All Personal           1.56**                     1.22**
Robbery                3.10**                     ns
Assault                1.38**                     1.28**
Theft                  1.53**                     1.19**
All Household          1.23**                     1.16**
Burglary               1.36**                     ns
Theft                  1.27**                     1.18**
Motor Vehicle Theft    ns

Sources: Hubble, D. and Wilder, B.E. (1988); Kobilarcik et al. (1983). ** p < .05

Monitor Interviewers
- Use of CARI (computer audio-recorded interviewing)
  – The Census Bureau is developing this capability
  – Provides systematic and timely feedback
- Other paradata (e.g., timings); a sketch of a timing-based flag follows this slide
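As a hedged illustration of the timings idea, here is a minimal sketch. It assumes section timings have been exported to a CSV with hypothetical columns interviewer_id and section_minutes (the file layout, column names, and cutoff are assumptions, not part of any CE or Census system), and it flags interviewers whose average timings are unusually short, a possible sign of shortcut probing.

    import csv
    import statistics
    from collections import defaultdict

    def flag_short_timings(path, z_cutoff=-2.0):
        """Flag interviewers whose mean section timing sits far below the group mean.

        Assumes a CSV with hypothetical columns 'interviewer_id' and
        'section_minutes'. Returns a list of (interviewer_id, mean_minutes).
        """
        timings = defaultdict(list)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                timings[row["interviewer_id"]].append(float(row["section_minutes"]))

        # Per-interviewer means, then a z-score against all interviewers
        # (needs at least two interviewers for a standard deviation).
        means = {iid: statistics.mean(v) for iid, v in timings.items()}
        center = statistics.mean(means.values())
        spread = statistics.stdev(means.values())

        return [(iid, m) for iid, m in sorted(means.items())
                if (m - center) / spread <= z_cutoff]

Such a flag would only prioritize interviews for CARI review or supervisor follow-up; by itself it cannot distinguish fast-but-thorough interviewers from those cutting probes.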

Key Aspects of the Recall Period
- Method of recall
  – Methods for cueing
  – Role of the interviewer
  – Use of visual aids (calendars: Belli and Callegaro, 2008; attention to format: Redline et al., 2009)
- Bounded vs. unbounded recall period
  – Effects of bounding by recall period
  – Dependent vs. independent bounding
  – Internal telescoping

Sample Size, Low-Incidence Items, and Shorter Recall Periods
- Explore which items can be collected using a longer reference period
  – There is large variation in data quality by type of item and length of period (Neter and Waksberg, 1964)
  – What is the optimum length for the main items? (A worked arithmetic sketch follows this slide.)
- Increase collection using self-administered methods to reduce the cost of collection
  – Shift more items to the Diary Survey
  – Shorten the reference period of the Interview Survey for particular types of items
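To make the sample-size side of this concrete, here is a small illustrative calculation; the incidence rates and sample size are invented, not CE figures. For a fixed household sample, a shorter reference period yields far fewer reports of low-incidence items, which is what drives the precision penalty discussed on the following slides.

    # Illustrative only: invented incidence rates and sample size, not CE figures.
    n_households = 5000

    # Hypothetical probability that a household buys the item in a given month.
    monthly_incidence = {"major appliance": 0.02, "vehicle": 0.005, "clothing": 0.60}

    for months in (1, 3, 12):
        print(f"\nReference period: {months} month(s)")
        for item, p in monthly_incidence.items():
            # Chance of at least one purchase in the window,
            # treating months as independent (a simplification).
            p_any = 1 - (1 - p) ** months
            print(f"  {item:16s} expected reporters: {n_households * p_any:7.0f}")

Under these made-up rates, the vehicle item yields about 25 expected reporters with a 1-month period but roughly 290 over 12 months, so low-incidence items are the natural candidates for the longer reference periods.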

Alternative Designs
- Two independent samples:
  – Large, salient purchases use a long, unbounded reference period
  – A separate sample with shorter, bounded reference periods for less salient purchases
- A single survey with one long reference period
  – Mix reference periods, depending on the purchase
- Single panel design: monthly interviews followed by a longer reference period

What Is the Tradeoff Between Variance and Bias?
- For fixed costs, it will be difficult to shorten the reference period without sacrificing precision (see the decomposition below)
- What are the minimum precision requirements?
  – Use prior research to assess the tradeoffs
  – Evaluate from experiments conducted as part of the redesign
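The tradeoff can be written as the usual mean squared error decomposition; the functional forms below are illustrative assumptions (variance falling with the total months covered, recall bias growing with period length), not quantities estimated from CE data:

    \mathrm{MSE}(\hat{\theta}_L) = \mathrm{Var}(\hat{\theta}_L) + \mathrm{Bias}(\hat{\theta}_L)^2,
    \qquad \mathrm{Var}(\hat{\theta}_L) \approx \frac{\sigma^2}{nL},
    \qquad \mathrm{Bias}(\hat{\theta}_L) \approx -\beta\, g(L)

where n is the number of interviews, L is the reference-period length in months, and g(L) is an increasing recall-loss function. Shortening L at fixed cost shrinks the bias term but inflates the variance term, so the L that minimizes the sum can differ by type of item.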


Other Tradeoffs to Consider
- Respondent burden increases with the length of the recall period
  – May need to ask about fewer items
  – Enhancing recall may itself increase burden
- Panel conditioning: does it increase with shorter reference periods?
  – Prior research (Silberstein and Jacobs, 1989; Cho et al., 2004) did not find large effects of time-in-sample
  – Will this change if shorter periods are used?

Best Practices: Basics
- Develop an appropriate interviewing protocol:
  – Interview structure; interviewer procedures and methods to monitor them; visual and other aids (Conrad; Peytchev; Stafford)
  – Use of proxy interviewing (Mathiowetz; Schaeffer)
- Analyze existing data; examine recency curves by type of item (Silberstein and Jacobs, 1989; Steinberg et al., 2006; Biderman and Lynch, 1981; Fay and Li, 2010)
  – Gives some guide to recall effects by type of item
- Conduct tests comparing different recall periods
  – Small-scale tests could be done in the lab (easier to get validation data)
  – Large-scale tests under field conditions (Neter and Waksberg, 1964; Sudman and Ferber, 1971; Chu et al., 1992)

Best Practices: Assess Tradeoffs
- Non-response bias vs. measurement error
  – Non-response rates are not directly related to non-response bias (Groves, 2006; Keeter et al., 2006)
  – Efforts to reduce measurement error may lead to higher non-response (e.g., use of proxy interviewing; more complete interviewer protocols)
- Differential effects of recall error by population group (Kobilarcik et al., 1983; Cantor, 1986)
- Do you need to test effects on levels vs. change?
  – Effects of the recall period may be smaller on measures of change (see the note below)
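A one-line argument for the last point, under the simplifying assumption that recall bias is roughly constant across waves: if each wave's estimate is \hat{y}_t = \mu_t + b, where b is the common recall bias, then

    \hat{y}_2 - \hat{y}_1 = (\mu_2 + b) - (\mu_1 + b) = \mu_2 - \mu_1

so a stable bias drops out of the change estimate while remaining fully present in each level estimate. The cancellation fails to the extent the bias varies across waves or with the reference period, which is why the question still needs testing.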

Validation Is Needed to Assess Tradeoffs
- Small-scale experiments:
  – Collect records
  – Intensive debriefing
- Larger scale:
  – Reverse record checks: sample from retail records (especially large purchases), perhaps combined with selective forward record-check methods (a sketch of the matching logic follows this slide)
  – Have respondents keep a diary and/or conduct short interviews on a regular basis (Rips et al., 2003; Millen et al., 2005)
- Randomization should reduce the impact of these errors on assessments of comparative validity
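As a hedged sketch of the record-check bookkeeping (not an actual CE procedure; the record layout, the 45-day window, and the 10% amount tolerance are all invented for illustration), the snippet below matches sampled purchase records against survey reports and tallies how many were reported:

    from datetime import date

    # Invented example data: (item_type, purchase_date, amount).
    records = [
        ("appliance", date(2010, 9, 14), 649.00),
        ("appliance", date(2010, 10, 2), 1199.00),
        ("furniture", date(2010, 8, 21), 350.00),
    ]
    reports = [
        ("appliance", date(2010, 9, 1), 650.00),   # slightly telescoped, amount close
        ("furniture", date(2010, 8, 21), 350.00),
    ]

    def matches(record, report, day_window=45, amount_tol=0.10):
        """Loose match: same item type, dates within a window, amounts within 10%."""
        r_type, r_date, r_amt = record
        s_type, s_date, s_amt = report
        return (r_type == s_type
                and abs((r_date - s_date).days) <= day_window
                and abs(r_amt - s_amt) <= amount_tol * r_amt)

    matched = sum(any(matches(rec, rep) for rep in reports) for rec in records)
    print(f"Reverse record check: {matched}/{len(records)} sampled records reported.")

A forward record check runs the comparison in the other direction, starting from survey reports and looking for corroborating records, which is what catches over-reporting rather than under-reporting.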

Methods of Validation: Is More Reporting Better?
- There is a general belief that expenditures are under-reported
  – Comparisons to the National Accounts
  – Prior research has made this assumption (Neter and Waksberg, 1964; Sudman and Ferber, 1971)
- The validity of the assumption is likely to vary by type of purchase
  – The assumption is dangerous for small, recurring purchases that respondents report using rule-based recall
  – Enhanced protocols may lead to more over-reporting

Summary
- Initial efforts should go into improving the interviewing protocol, including interviewer monitoring and training
- When investigating the optimum recall period:
  – Compare the tradeoffs among variance, non-response bias, and measurement error
  – Assess possible differential bias across population groups
  – Is it important to test for bias in change estimates?
- Include an external validation criterion when evaluating candidate recall periods

Thank you