Maintaining high response rates – is it worth the effort?


Maintaining high response rates – is it worth the effort? Patrick Sturgis, University of Southampton

Response rates are going down

British Social Attitudes Survey response rate, 1996-2011 [chart]

Low and declining response rates
- RDD even worse: in the US routinely < 10% (increasing mobile-only households + do-not-call legislation)
- Survey sponsors ask 'what are we getting for our money?'
- Is a low response rate survey better than a well-designed quota sample?

Increasing costs
- Per-achieved-interview costs are high and increasing
- Simon Jackman estimates $2,000 per complete interview in the 2012 American National Election Study
- My estimate: ~£250 per achieved interview for a PAF sample, 45-minute CAPI, n ≈ 1,500, RR ≈ 50%
- Compare ~£5 for opt-in panels
- 'A good quality survey may cost two to three times as much as an inferior version of the same thing' (Marsh)
- Marsh's example: a national quota sample of 1,000 interviews will cost £1,000; a random sample of 2,500 will cost at least £3,500, which today is £45k (RPI adjusted) or £72k (labour value)

Cost drivers
- Average number of calls increasing
- More refusal conversion
- More incentives (UKHLS, £30)
- 30%-40% of fieldwork costs can be deployed on the 20% 'hardest to get' respondents

Externalities of 'survey pressure'
- Poor data quality from 'hard to get' respondents
- Fabrication pressure on respondents (Community Life Survey)
- Fabrication pressure on interviewers (PISA)
- Ethical research practice? (Roger Thomas anecdote)

Is all this effort (and cost) worth it?

r(response rate, nonresponse bias)
- Groves (2006) finds only a weak correlation between response rate and nonresponse bias (see the sketch below)
- But some scepticism about this: a rather unusual set of surveys/estimates
- The set of estimates that have a measured criterion may be less subject to bias than, say, voting intention
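To make this concrete, here is a minimal sketch of a Groves-style check, assuming a set of surveys whose shared estimate can be compared against an external criterion; all numbers below are hypothetical illustrations, not values from the studies cited.

```python
import numpy as np

# Hypothetical inputs: one response rate per survey, plus each survey's
# estimate of the same quantity and an external benchmark for it.
response_rate = np.array([0.76, 0.59, 0.54, 0.61, 0.70, 0.57])
estimate = np.array([24.0, 18.5, 21.0, 19.2, 22.4, 20.1])  # e.g. % smokers
benchmark = 20.0  # criterion value from an external source

# Proxy for nonresponse bias: absolute deviation from the benchmark.
abs_bias = np.abs(estimate - benchmark)

# A strongly negative r would mean higher response rates buy less bias;
# the literature discussed here tends to find the correlation is weak.
r = np.corrcoef(response_rate, abs_bias)[0, 1]
print(f"r(response rate, |bias|) = {r:.2f}")
```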

Correlation of response rate and bias
- Edelman et al (2000): US Presidential exit poll
- Keeter et al (2000): compare 'standard' and 'rigorous' fieldwork procedures
- In general, the field finds weak response propensity models

Our study

Williams, Sturgis, Brunton-Smith & Moore (2016)
- Take the estimate for a variable after the first call, e.g. % smokers = 24%
- Compare to the same estimate after n calls, now % smokers = 18%
- Absolute % difference = 6%
- Relative absolute difference = 6/18 = 33%
- Do this for lots of variables over multiple surveys (see the sketch below)
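A minimal sketch of the two difference measures, using the slide's worked example (24% smokers after call 1, 18% after all calls); the function name is mine, not the paper's.

```python
def call_differences(estimate_call_1, estimate_all_calls):
    """Absolute and relative absolute percentage difference between the
    estimate available after call 1 and the final all-calls estimate."""
    abs_diff = abs(estimate_call_1 - estimate_all_calls)
    rel_abs_diff = abs_diff / estimate_all_calls
    return abs_diff, rel_abs_diff

abs_d, rel_d = call_differences(24.0, 18.0)
print(f"absolute difference = {abs_d:.0f} points")    # 6 points
print(f"relative absolute difference = {rel_d:.0%}")  # 33%
```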

The six surveys (U = unconditional incentive, C = conditional incentive):
Survey: British Crime Survey; Taking Part; British Election Study; Community Life; National Survey for Wales; Skills for Life
Population: England & Wales 16+; England 16+; Great Britain 18+; Wales 16+; England 16-65
Timing: 2011; 2010; 2013-14; 2010-11
Sample size: 46,785; 10,994; 1,811; 5,105; 9,856; 7,230
Response rate: 76%; 59%; 54%; 61%; 70%; ~57%
Incentives: Stamps (U); Stamps (U) + £5 (C); £5-10 (C); None; £10 (C)

Response rate per call number for all six surveys [chart]

Methodology
- All non-demographic variables administered to all respondents = 559 questions
- For each variable, calculate the average percentage difference in each category = 1,250 estimates at each call
- Code questions by response format: categorical; ordinal; binary; multi-coded
- Code questions by question type: behavioural; attitudinal; belief
- Multi-level meta-analysis (see the sketch after this list)
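One plausible way to set up such a multi-level meta-analysis in Python is sketched below with statsmodels: per-category differences grouped by survey, with a variance component for question. This is an illustrative specification under stated assumptions, not the authors' exact model, and the data frame is synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the real data: one row per (survey, question,
# category) giving the absolute percentage difference at a given call.
rng = np.random.default_rng(0)
n = 600
d = pd.DataFrame({
    "survey": rng.choice(["BCS", "TakingPart", "BES", "CommunityLife",
                          "NSW", "SkillsForLife"], size=n),
    "question": rng.choice([f"q{i}" for i in range(40)], size=n),
    "fmt": rng.choice(["categorical", "ordinal", "binary", "multi"], size=n),
    "qtype": rng.choice(["behavioural", "attitudinal", "belief"], size=n),
})
d["abs_diff"] = rng.gamma(shape=2.0, scale=0.8, size=n)

# Do response format and question type predict the size of the difference,
# allowing for clustering of estimates within surveys and questions?
model = smf.mixedlm(
    "abs_diff ~ C(fmt) + C(qtype)",
    data=d,
    groups=d["survey"],                          # random intercept per survey
    re_formula="1",
    vc_formula={"question": "0 + C(question)"},  # variance component per question
)
print(model.fit().summary())
```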

Results

Estimated absolute percentage difference by question, design weighted [chart, built up over four slides]
- Mean difference shrinks over successive calls: 1.6%, then 1%, then 0.7%, then 0.4%
- 54% of differences not different from zero after 1 call

Design vs calibration weighted [chart, built up over four slides; calibration weighting is sketched below]
- Design-weighted vs calibration-weighted mean differences over successive calls: 1.6% vs 1.4%; 1% vs 0.8%; 0.7% vs 0.5%; 0.4% vs 0.3%
- 54% not different from zero after 1 call
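'Calibration weighted' here means the design weights are adjusted so that weighted sample margins match known population totals. Below is a minimal iterative proportional fitting (raking) sketch of that idea, with made-up margins; it is not the weighting procedure used in the study.

```python
import pandas as pd

def rake(df, w, margins, n_iter=100, tol=1e-10):
    """Adjust weights w until the weighted margin of each variable in
    `margins` matches the supplied population totals."""
    w = w.astype(float).copy()
    for _ in range(n_iter):
        shift = 0.0
        for var, target in margins.items():
            current = w.groupby(df[var]).sum()    # weighted margin now
            factor = pd.Series(target) / current  # per-category adjustment
            new_w = w * df[var].map(factor)
            shift = max(shift, float((new_w - w).abs().max()))
            w = new_w
        if shift < tol:  # all margins matched, stop early
            break
    return w

# Toy usage: calibrate four respondents to known sex and age-band totals.
df = pd.DataFrame({"sex": ["m", "f", "f", "m"],
                   "age": ["<40", "<40", "40+", "40+"]})
design_w = pd.Series([1.0, 1.0, 1.0, 1.0])
calib_w = rake(df, design_w, {"sex": {"m": 48, "f": 52},
                              "age": {"<40": 45, "40+": 55}})
print(calib_w)
```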

Other results
- Significant differences by survey: Taking Part on average 1% higher than BCS at call 1
- Some differences by question format and type at call 1, but these disappear at later calls
- Pattern essentially the same using the relative absolute difference

Discussion
- More evidence of a weak correlation between response rate and nonresponse bias
- Weakness: not a direct measure of bias
- Strength: a broader range of surveys and variables
- Place an upper limit on calls? Make only one call? Not that simple!

Williams, Joel, Sturgis, Patrick, Brunton-Smith, Ian and Moore, Jamie (2016) Fieldwork effort, response rate, and the distribution of survey outcomes: a multi-level meta-analysis. NCRM Working Paper. National Centre for Research Methods. http://eprints.ncrm.ac.uk/3771/