Tell Me More: Data Quality, Burden, and Designing for Information-Rich Web Surveys
Lilian Yahng (lyahng@Indiana.edu), Derek Wietelman (dcwietel@Indiana.edu), and Lauren Dula (ldula@Indiana.edu), Indiana University

BACKGROUND
The 2016 Survey of Interprofessional Educational (IPE) Activities was a census (N=5,941) of current faculty and volunteer practitioners in the health sciences schools at Indiana University, gauging involvement in programmatic interprofessional education over the previous three years. RR2 = 12.3% (15.2% for faculty; 9.2% for volunteer practitioners); median duration = 10.7 minutes. The 71-item Web instrument consisted chiefly of open-ended questions, with a shorter 10-item path for respondents who screened out of IPE activities.

The instrument was less a "classic survey" than a reporting tool. It used a default design skin "optimized" for mobile, yet its tasks were essentially suited to desktop or laptop completion; 14% of respondents used a smartphone and 5% a tablet. Limited time and resources meant qualitative interviews were not an option. These questionnaire challenges offered an opportunity to test the efficacy of "motivating" language, following prior research suggesting that verbal instructions can improve response quality (Holland & Christian, 2009; Smyth, Dillman, Christian, & McBride, 2009).

BURDENSOME TASKS
The tasks were difficult: detailed recall and description of hours, session frequency, participant numbers by rank by school, funding, location, objectives, and outcomes. The battery repeated for up to three IPE activities.

METHODS
Experimental treatment. Additional language was inserted before the most text-intensive item (IPE objectives) for each respondent-reported IPE, up to three:
- "Thank you very much for providing this valuable information." (first IPE)
- "Thank you again for taking the time to provide such rich data. It is most appreciated!" (second IPE)
- "This information will be most helpful in understanding the breadth of IPE activity at IU." (third IPE transition screen)
- "Once again, thank you. You are almost done!" (third IPE)

Research questions. Does motivating (or explicitly grateful) language on a burdensome survey:
- reduce breakoffs?
- reduce missing data?
- increase response length (character count)?
- increase the number of unique themes and elaborations? (in progress)

Breakoffs and missing data: spot-check for breakoffs occurring before the treatment. NB: item nonresponse could be identified only on numeric items (not open ends) because of a programming limitation. An unequal-variances test was also applied to the missing data.

Character count: paired t-tests on responses (n=187), with and without classification into early (pre-median) or late (post-median) responders. Paired t-tests were also applied to each IPE individually, with and without the time classification. A sketch of these tests appears below.
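The character-count and missing-data comparisons described above are standard paired and unequal-variances t-tests, so they can be sketched briefly in code. The snippet below is a minimal illustration, not the authors' actual analysis: the DataFrame, its column names (chars_before, chars_after, n_missing_numeric, treated, start_time), and the input file are all hypothetical.

```python
# Minimal sketch of the METHODS analyses; all column and file names
# are hypothetical placeholders, not taken from the original study.
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical response-level data: one row per respondent.
df = pd.read_csv("ipe_responses.csv", parse_dates=["start_time"])

# Classify responders as early (pre-median) or late (post-median).
df["responder"] = np.where(
    df["start_time"] > df["start_time"].median(), "late", "early"
)

# Paired t-test on character counts, one-sided in the direction of an
# increase after the treatment item (alternative="less" tests whether
# the 'before' mean is smaller than the 'after' mean).
for (treated, responder), g in df.groupby(["treated", "responder"]):
    t, p = stats.ttest_rel(
        g["chars_before"], g["chars_after"], alternative="less"
    )
    label = "treat" if treated else "control"
    print(f"{label}/{responder}: t={t:.3f}, p(one-sided)={p:.4f}, n={len(g)}")

# Welch's unequal-variances t-test on numeric item nonresponse,
# treatment vs. control.
t, p = stats.ttest_ind(
    df.loc[df["treated"] == 1, "n_missing_numeric"],
    df.loc[df["treated"] == 0, "n_missing_numeric"],
    equal_var=False,
)
print(f"numeric missing data: t={t:.2f}, p={p:.2f}")
```

Note that alternative="less" matches the sign convention in the results table below: a negative paired t-statistic corresponds to longer responses after the encouraging language.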
RESULTS
Breakoffs before the treatment (n=289) generally occurred directly after a respondent screened out of IPE (n=92). These are suspected to be largely "ineligible" cases, given that the census included non-faculty volunteers not engaged in IPE programs.

The treatment had no significant effect on numeric missing data, regardless of whether the respondent reported one IPE activity, t(177)=.15, p=.87; two IPEs, t(47)=-.62, p=.54; or all three, t(32)=-.49, p=.63.

No tests proved significant for the character-count analyses, except marginally for the late responders at the second IPE set:

Group                      | Mean total characters (before treatment) | Mean total characters (after treatment) | Difference in means | Paired t-statistic | p-value (one-sided) | n
First set: Treat, early    | 209.51 | 274.56 |  65.04 | -1.149 | .1284 | 45
First set: Treat, late     | 160.12 | 170.82 |  10.70 | -0.273 | .3928 | 57
First set: Control, early  | 183.90 | 325.66 | 141.76 | -1.168 | .1242 | 50
First set: Control, late   | 157.43 | 164.91 |   7.49 | -0.192 | .4245 | 35
Second set: Treat, early   |  56.27 |  73.56 |  17.29 | -1.306 | .0992 |
Second set: Treat, late    |  49.02 | 105.09 |  56.07 | -1.304 | .0987 |
Second set: Control, early |  57.98 |  63.16 |   5.18 | -0.294 | .3851 |
Second set: Control, late  |  49.80 |  76.54 |  26.74 | -1.817 | .0391 |
Third set: Treat, early    |  32.27 |  41.33 |   9.07 | -1.120 | .1344 |
Third set: Treat, late     |  22.00 |  24.74 |   2.74 | -0.588 | .2796 |
Third set: Control, early  |  31.96 |  22.9  |  -9.06 |  1.257 | .8926 |
Third set: Control, late   |  38.31 |  27.09 | -11.23 |  1.663 | .9473 |

CONCLUSIONS
There is some evidence that encouraging language slightly increased written response length among "late" responders, but only on the second reported IPE. There was no effect on the first reported IPE, possibly because respondents had not yet caught on to the battery's repeating pattern or otherwise did not feel overburdened. There was no effect on the third reported IPE either, perhaps because respondents were too fatigued by then for the encouragement to make a difference.

Coding is still in progress. The number of themes rose overall on the second IPE but fell on the third, even as character counts became progressively smaller; whether this reflects a treatment effect is TBD. An intriguing class of "bypasser" respondents emerged: they fall in the bottom 10% of character counts yet agree to a possible follow-up. They do not fit the typical picture of cognitive laziness or learned underreporting: "Too much detail in this survey."

REFERENCES
Dykema, J., Jones, N., & Stevenson, J. (2013). Surveying clinicians by Web. Evaluation and the Health Professions, 36(6), 352-381.
Holland, J., & Christian, L. (2009). The influence of topic interest and interactive probing on responses to open-ended questions in web surveys. Social Science Computer Review, 27(2), 196-212.
Smyth, J., Dillman, D., Christian, L., & McBride, M. (2009). Can increasing the size of answer boxes and providing extra verbal instructions improve response quality? Public Opinion Quarterly, 1-13.