An Experiment in Open End Response Quality in Relation to Text Box Length in a Web Survey
Michael Traugott and Christopher Antoun, University of Michigan
Background
- Researchers are always interested in the quality of the survey responses they collect.
- Open-end questions allow respondents freedom to answer in their own terms.
- Response quality means "the amount and type of information contained in responses" (Smyth, Dillman, Christian & McBride 2009).
- Data quality for open-end questions can be assessed by: whether a response is given at all, the length of the response, and the fullness of the response in terms of themes.
Prior Research
- Web surveys produce longer answers than paper-and-pencil surveys (Christian and Dillman 2004; Kraut 2006; Kulesa and Bishop 2006; Barrios, Villarroya, Borrego, and Ollé 2011).
- Motivation to participate, or to respond to a specific question, affects answer length (Holland and Christian 2009; Smyth, Dillman, Christian, and McBride 2009; Poncheri, Surface, Lindberg, and Thompson 2004).
- Studies of whether a larger box in a web survey stimulates longer responses have produced mixed results.
- Samples are often idiosyncratic.
Current Study
- Second wave of a panel study of faculty involved in a project to stimulate interdisciplinary research (MCubed).
- Two key open-end questions:
  1. What are the aspects of the MCubed process that you think have worked especially well to date, or that you are especially satisfied with?
  2. What are the aspects of the MCubed process that you think have not worked especially well to date, or that you are especially dissatisfied with?
Main Hypotheses
- H1: A larger text box in a web survey stimulates longer responses.
- H2: More highly educated respondents and women will provide higher quality responses.
- H3: Respondent motivation will affect response quality (e.g., stronger effects on "worked well" for funded token holders and on "didn't work well" for unfunded token holders).
Measures: Treatment
- Three box sizes: Small (2 lines), Medium (4 lines), Large (8 lines)
Measures: Dependent Variables
- Item nonresponse: no answer (blank), NA, or DK for the "worked well" and "didn't work well" questions
- Response length: number of words, trimmed at +2 standard deviations
- Complexity: number of different themes mentioned
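The response-length measure (word counts trimmed at +2 standard deviations) can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `trimmed_word_counts` is hypothetical, and the exact rule assumed here (capping each count at the sample mean plus two sample standard deviations) is one plausible reading of "trimmed to +2 s.d."

```python
import statistics

def trimmed_word_counts(responses):
    """Word counts for open-end responses, capped at the sample mean
    plus two sample standard deviations to limit outlier influence.
    Hypothetical sketch; the slides do not give the exact rule."""
    counts = [len(r.split()) for r in responses]
    cap = statistics.mean(counts) + 2 * statistics.stdev(counts)
    return [min(c, cap) for c in counts]
```

Trimming like this keeps every case in the analysis while preventing a handful of very long answers from dominating the length measure.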
Measures: Independent Variables
- Demographics: gender, education, tenure
- Motivation: funded token or not; proportion of unit faculty with a token; proportion of funded tokens in the unit
- Elapsed time to respond (median and six categories)
Multivariate Results

                     "WORKED WELL"             "DIDN'T WORK WELL"
                     INR    Length   Themes    INR    Length   Themes
Demographics
  Education          --     ns       ns        --     ns       ns
  Gender             ns     ns       ns        ns     ns       ns
Motivation
  Funded             --     ns       +         --     ns       ns
  Unit funding*      ns     ns       ns        ns     ns       ns
  Late responder     ns     ns       ns        +      --       --

* Unit funding had a significant interaction with whether the respondent was funded or not.
Conclusions
- No direct effect of box size on any of the three measures of response quality for either question (H1 disconfirmed).
- Demographics had a limited effect on response quality, which disappeared under further controls (H2 disconfirmed).
- Motivation did affect response quality (H3 mostly confirmed):
  - Funded respondents had lower item nonresponse (INR) on both questions and mentioned more themes for the "worked well" question.
  - People who were not funded and were in a unit with low funding were especially likely to skip the "didn't work well" question.
  - People who had to be coaxed to participate in the survey were least likely to provide any type of quality response.
- Sometimes respondent characteristics outweigh design features.