Presentation transcript:

Confronting the Challenges of Household Surveys by Mixing Modes
Roger Tourangeau, Westat
Note: This presentation accompanied the Keynote Address by Dr. Tourangeau at the 2013 Federal Committee on Statistical Methodology (FCSM) Research Conference, Nov. 4, 2013.

Outline
Why mix modes of data collection?
Types of MM designs
Five issues with MM designs:
– Assessing mode effects
– Impact of choice
– Impact of order
– Getting responses via the Internet
– “Unimode” versus best practices
Conclusions

Why Mix Modes?
Improve coverage
Increase response rates
Reduce costs
Improve measurement?

Why Mix Modes—II?
Costs are rising and response rates are falling
Response rate problems are found not just in government surveys, but also in academic and commercial surveys (Tourangeau and Plewes, 2013)
Polls often have response rates in the single digits

Sample Adult Nonresponse in NHIS (Brick and Williams, 2013) [figure]

Sample Adult Nonresponse in NHES (Brick and Williams, 2013) [figure]

Costs and Return Rates: Decennial Census [figure]

Can Mixing Modes Help?
Four common designs:
1. Cross-sectional mixed modes: Start with the cheapest mode and follow up with more expensive modes to reduce NR bias; sometimes concurrent choice
– American Community Survey (mail, CATI, CAPI)
– U.S. population census, since 1969 (mail, FTF)
– Canadian census (mail/Internet, FTF)
2. Different modes for different sections (NSFG)
3. Longitudinal mixed modes: Start with a high-RR mode, then do the follow-up waves more cheaply
– Current Population Survey (FTF, with maximum telephone in Waves 2-4 and 6-9)
4. Cross-national surveys: Use different modes in different countries

Five Issues Raised by MM Designs
1. Assessing mode effects: How do you tell whether the mode affects the measurement?
2. Impact of choice: Is it useful to let people choose their method of responding?
3. Impact of order: If a sequential design is used, does the order of the options matter?
4. Getting responses via the Internet: Can one get responses via the Web, and how?
5. “Unimode” versus best practices: Should one minimize differences across modes?

Mode Effects

Non-Observation Errors
Impact of non-observation (non-coverage/non-response) for Mode A
Two components—those excluded entirely and those who might respond with different propensities
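The slide's formula did not survive transcription. As a hedged reconstruction of the standard decomposition (the notation W_A, p_i, and the "in"/"out" means are mine, not the slide's), the bias in the Mode A respondent mean splits into an excluded-entirely term and a differential-propensity term:

    % Sketch of a standard non-observation bias decomposition for a single mode A.
    % W_A: fraction of the population with any chance of responding in mode A (assumed notation).
    % p_i: response propensity of unit i among those with p_i > 0; \bar{p}: their mean propensity.
    % The first term is the "excluded entirely" (coverage-like) component; the second is the
    % familiar nonresponse-bias term driven by the covariance of propensities and the variable.
    \[
    \mathrm{Bias}(\bar{y}_{A})
      \;\approx\;
      (1 - W_A)\bigl(\bar{Y}_{\mathrm{in}} - \bar{Y}_{\mathrm{out}}\bigr)
      \;+\;
      \frac{\mathrm{Cov}(p_i,\, y_i)}{\bar{p}} .
    \]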

Non-Observation Effects—II
Two modes ought to increase coverage and reduce non-response relative to one mode
Consider the proportions with no chance of responding
The hope is that the mixed-mode design is likely to reduce the size of the excluded population as well as boost mean response propensities
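A minimal sketch of why a second mode should help, under an independence assumption that is mine (not the slide's) and is surely too strong in practice:

    % If p_i^A and p_i^B are unit i's propensities to respond in modes A and B, and the two
    % response processes were independent, the propensity under the mixed-mode design would be
    \[
    p_i^{AB} \;=\; 1 - \bigl(1 - p_i^{A}\bigr)\bigl(1 - p_i^{B}\bigr) \;\ge\; \max\bigl(p_i^{A},\, p_i^{B}\bigr),
    \]
    % so mean propensities can only go up, and the excluded group (p_i^{AB} = 0) shrinks to the
    % units who are excluded under both modes.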

Observation Errors
The overall level of measurement error depends on the average measurement effect in both modes (see the sketch below)
Whether that quantity is larger or smaller than the measurement error in the corresponding estimate from a single-mode survey depends on:
– the relative magnitudes of the errors in the two modes,
– whether they are in the same or opposite directions.
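The formula the slide pointed to was not transcribed. A plausible reconstruction, in my own notation, treats the mixed-mode measurement effect as a response-share-weighted average of the mode-specific effects:

    % q_A: share of mixed-mode respondents who answer by mode A (assumed notation).
    % \bar{e}_A, \bar{e}_B: average measurement effects (errors) in modes A and B.
    \[
    \bar{e}_{\mathrm{MM}} \;=\; q_A\,\bar{e}_A \;+\; (1 - q_A)\,\bar{e}_B ,
    \]
    % so whether the mixed-mode estimate carries more or less measurement error than a
    % single-mode estimate depends on the sizes and signs of \bar{e}_A and \bar{e}_B.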

Issue 1: Assessing Mode Effects
Three common designs:
– Mode experiments
   With true scores (Tourangeau, Groves, and Redline, 2010; Kreuter, Presser, and Tourangeau, 2008)
   Without true scores (Tourangeau and Smith, 1996)
– Mixed-mode design compared with a single-mode design (Vannieuwenhuyze and Loosveldt, 2012; Vannieuwenhuyze, Loosveldt, and Molenberghs, 2010)
   Experimental design: B with A follow-up versus A only
   Assumes coverage/nonresponse are the same in the two conditions
   Assumes A respondents would give the same answers regardless of whether they had the B option
   The first assumption is not very plausible (see the ESS data and the Dutch experiment)

Vannieuwenhuyze and colleagues (2010, 2012)
The two components (non-observation error and observation error) may offset each other
They advocate a design that compares B-then-A versus A only
Example: a survey about surveys in the Netherlands (FTF-only vs. mail with FTF follow-up), where the measurement and nonresponse effects offset each other:
– People who dislike surveys are less likely to respond by mail
– People are less likely to report negative attitudes in FTF
You reach different conclusions about the mode effect unless the FTF-only condition is included in the analysis
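Schematically (my shorthand, not the authors' estimator), the naive comparison of the mixed-mode condition with the A-only condition confounds the two components the slide describes:

    % The B-then-A condition differs from the A-only condition both in who responds and in
    % how some of them answer, and the two pieces can cancel, as in the Dutch example.
    \[
    \bar{y}_{\mathrm{MM}} - \bar{y}_{A\text{-only}}
      \;=\;
      \underbrace{\Delta_{\mathrm{selection}}}_{\text{different people respond}}
      \;+\;
      \underbrace{\Delta_{\mathrm{measurement}}}_{\text{same people answer differently}} .
    \]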

Tourangeau, Groves, and Redline (2010)
Sample of Maryland residents who are registered to vote; sample stratified by voter status
Alternative strategy:
– Experimentally varied mode
– Have true scores (from the frame) on key variables
Response rates (overall 34%) reflect incentive (44% vs. 23%) and voter status (41% vs. 26%)

Bias Estimates
Estimated Percentage of Voters in 2006 (sample sizes in parentheses; cells marked – and the two Bias columns did not survive transcription)

Subgroup            Entire Sample    Respondents     Respondents         Bias:          Bias:
                    (Frame Data)     (Frame Data)    (Survey Reports)    Nonresponse    Measurement
Overall             43.7 (2689)      57.0 (904)      76.0 (895)
Topic: Politics     42.6 (1346)      58.5 (441)      77.4 (438)
Topic: Health       44.7 (1343)      55.5 (463)      74.6 (457)
Incentive: $5       –    (1349)      54.8 (591)      75.9 (586)
Incentive: $ –      44.0 (1340)      61.0 (313)      76.0 (309)
Mode: Telephone     43.2 (1020)      57.4 (350)      79.4 (345)
Mode: Mail          43.9 (1669)      56.7 (554)      73.8 (550)
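The two Bias columns are missing above. Assuming the usual true-score decomposition for this design (nonresponse bias = respondents' frame value minus the full sample's frame value; measurement bias = respondents' survey reports minus their own frame values), a minimal sketch of how those columns follow from the figures that did survive:

    # Sketch only: recomputes the two bias components from the transcribed table columns,
    # assuming the standard decomposition used when true scores are available on the frame.
    rows = {
        # subgroup: (entire sample (frame), respondents (frame), respondents (survey reports))
        "Overall":         (43.7, 57.0, 76.0),
        "Topic: Politics": (42.6, 58.5, 77.4),
        "Topic: Health":   (44.7, 55.5, 74.6),
        "Mode: Telephone": (43.2, 57.4, 79.4),
        "Mode: Mail":      (43.9, 56.7, 73.8),
    }

    for subgroup, (full_frame, resp_frame, resp_report) in rows.items():
        nonresponse_bias = resp_frame - full_frame    # respondents voted more than the full sample
        measurement_bias = resp_report - resp_frame   # respondents over-report voting
        print(f"{subgroup:16} nonresponse {nonresponse_bias:+5.1f}  measurement {measurement_bias:+5.1f}")

    # Overall row: nonresponse bias of about +13.3 and measurement bias of about +19.0
    # percentage points in the estimated share of 2006 voters.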

Issue 2: Impact of Choice
In a cross-sectional design: Should you give people a choice? At first blush, it would seem to encourage higher response rates; let people respond by the mode they prefer
ACS experiment (Griffin, Fischer, and Morgan, 2001) showed a lower response rate for mail with an Internet option than for mail only
Medway and Fulton (2012): Examined 19 experiments comparing Web + mail versus mail only
– Choice lowers the RR by 3.8 percent on average
– Only 10.2 percent use the Web

Choice (cont’d)
Medway and Fulton did not examine two large ACS experiments
ACS 2011 test (Tancreto et al., 2012): No impact of choice per se
[Table: overall and Internet response rates, each for targeted and non-targeted strata, by treatment: Mail Only, Prominent Choice, Non-Prominent Choice, Push (Regular Schedule), Push (Accelerated Schedule); the rates did not survive transcription]

Choice (cont’d)
If you want people to respond by the cheaper mode, why give them a choice?
Does the order of modes matter?

Issue 3: Sequence of Modes
Having refused in one mode, are sample members more likely to refuse in a second mode?
Lynn (2013): Offering the low-propensity/low-cost mode first may lower the overall RR
– Messer and Dillman (in Washington State): Mail followed by Web produced a higher RR than Web followed by mail
– Holmberg, Lorenc, and Warner (2010): Web followed by mail was nearly as high as mail only, but 65 percent completed by mail
– Olson, Smyth, and Wood (2012): Web followed by mail was no different from mail followed by Web
ACS tests in 2011 suggest that sequence is not so crucial (see also Matthews et al., 2012)

Issue 4: Getting Responses via the Web
Can you get people to respond via the Web? The 2011 Canadian experience (Dolson, 2013):
– Experiment in mail-out areas (80 percent of the total population)
– 75 percent got a letter, 25 percent were mailed a questionnaire
– The majority of Canadians responded by the Internet (53+ percent)
– Four mailings: prenotification, reminder letter, mail questionnaire, voice reminder
[Table: response outcomes (Mail, Internet, Help Line, Nonresponse Follow-Up, Nonresponse) by initial contact (Letter vs. Questionnaire); the percentages did not survive transcription]

Getting Responses via the Web (cont’d)
Mag (2013): The Hungarian experience
Three lessons:
1. Don’t give people a choice
2. Don’t let them procrastinate
3. Give them an incentive

Mode     Percent
Online   18.6
Paper    16.2
FTF      65.1

Issue 5: Unimode versus Best Practices
Should one try to remove mode effects (measurement differences) by design, or attempt to reduce measurement error within each mode group?
Two issues are relevant here:
– Is the estimate an overall population estimate or an estimated difference across groups?
– Does the estimate involve an attitudinal or a factual variable?

Overall Estimates
The combined estimate from a MM survey (see the sketch below)
Minimize the error in the overall estimate by minimizing measurement error (not by maximizing comparability)
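The formula on the slide was not transcribed. A hedged sketch in my own notation: the combined estimate is a response-share-weighted mixture of the mode-specific means, so its bias is a mixture of the mode-specific biases, which is why the advice is to minimize each mode's error rather than maximize comparability:

    % q_A: share of respondents answering by mode A (assumed notation).
    \[
    \bar{y}_{\mathrm{MM}} \;=\; q_A\,\bar{y}_A + (1 - q_A)\,\bar{y}_B ,
    \qquad
    \mathrm{Bias}(\bar{y}_{\mathrm{MM}}) \;=\; q_A\,\mathrm{Bias}(\bar{y}_A) + (1 - q_A)\,\mathrm{Bias}(\bar{y}_B) .
    \]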

Making Comparisons
Minimize the difference in mode effects:
– Use the same mode of data collection within each population (e.g., satisfaction ratings from patients at two hospitals)
– If more than one mode was used to collect the data (say, a combination of Web and mail), then use the same mix of modes in each population (at each hospital)
– If neither the same mode nor the same mix of modes can be used, then use the unimode approach (designing the questions to minimize mode differences)
Differences are often in attitudinal variables

Conclusions
1) Mode differences are not themselves a form of error; instead, they reflect differences in coverage/nonresponse and differences in measurement errors
2) There is some danger that offering a choice can lower response rates
– Don’t give a concurrent choice
– Push people to the cheapest option
3) With a cross-sectional survey, it seems possible that refusing in one mode increases the likelihood of refusal in a second mode

Conclusions (cont’d)
4) People will respond by the Web—don’t give them a choice, and do multiple follow-ups
5) Minimize error, not mode differences
– With factual items and overall estimates, use the best methods in each mode
– With comparisons, especially comparisons involving attitudinal judgments (such as satisfaction ratings), reduce mode differences:
   o Use a single mode
   o Use the same mix of modes
   o Use a unimode design

Thank you!