1
Assessing the Pragmatics of Experiments with Crowdsourcing: The Case of Scalar Implicature
Pranav Anand, Caroline Andrews, Matthew Wagers
University of California, Santa Cruz
2
Experiments & Pragmatic Processing
Case Study: (Embedded) Implicatures
Each of the critics reviewed some of the movies. (but not all?)
Depending on the study: evidence for EIs, with different response choices, or no evidence of EIs.
Worry: How much do methodologies themselves influence judgements?
Worry: Are we adequately testing the influence of methodologies on our data?
Previous limitation: lack of subjects and money. Crowdsourcing addresses both problems.
3
Pragmatics of Experimental Situations
Worry: How much do methodologies themselves influence judgements?
The experiment itself is part of the pragmatic context.
Teleological Curiosity: subjects hypothesize the expected behavior and try to match an ideal.
Evaluation Apprehension: subjects know they are being judged.
See Rosenthal & Rosnow (1975), The Volunteer Subject.
4
Elements of Experimental Context
Worry: How much do methodologies themselves influence judgements?
Response Structure: the response choices available to the subject (e.g. True/False, Yes/No, 1-7 scale)
Prompt: the question
Protocol: social context / task specification; directions for the Response Structure
Immediate Linguistic/Visual Context
Our Goal: explore variations of these elements in a systematic way
5
Experimental Design
Example item: Some of the spices have red lids. Prompt: Is this an accurate description?
Linguistic Contexts: All Relevant, All Irrelevant, No Context
Protocol: Experimental (normal experiment instructions) vs. Annotation (checking the work of unaffiliated annotators)
Materials: 4 implicature targets, 6 some/all controls, 20 fillers
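As a concrete illustration (not part of the original slides), here is a minimal sketch of how scalar implicature (SI) rates per condition could be tabulated from trial-level responses. The column names ("subject", "context", "protocol", "item_type", "response") and the coding of an SI response as "No" to an underinformative target are assumptions for the example, not the authors' actual data format.

```python
# Hypothetical tabulation of SI rates by condition.
# Assumes a long-format table with one row per trial: "subject",
# "context" (All-Relevant / All-Irrelevant / No Context), "protocol"
# (Experiment / Annotation), "item_type" (target / control / filler),
# and "response" ("Yes", "No", "Don't Know").
import pandas as pd

def si_rates(df: pd.DataFrame) -> pd.DataFrame:
    """Proportion of 'No' responses to underinformative targets,
    i.e. trials where rejecting the sentence suggests the
    some-but-not-all implicature was computed."""
    targets = df[df["item_type"] == "target"].copy()
    targets["si"] = (targets["response"] == "No").astype(int)
    return (targets
            .groupby(["protocol", "context"])["si"]
            .mean()
            .rename("si_rate")
            .reset_index())

# Example usage (with a hypothetical responses.csv):
# data = pd.read_csv("responses.csv")
# print(si_rates(data))
```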
6
Experiment 1: Social Context
Focus on Protocol: Annotation vs. Experiment
Population: undergraduates
Conditions: All-Irrelevant, No Story, All-Relevant, crossed with Experiment vs. Annotation
Accuracy Prompt: Is this an accurate description?
Response Categories: Yes, No, Don't Know
7
Experiment 1: Social Context
Finding: social context has an effect even when linguistic context does not.
Linguistic Context: no effect
8
Experiment 1: Social Context
Finding: social context has an effect even when linguistic context does not.
Lower SI rate for the Annotation protocol (p < 0.05)
9
Experiment 2: Prompt Type
Accuracy Prompt: Is this an accurate description? Response Categories: Yes, No, Don't Know
Informativity Prompt: How informative is this sentence? Response Categories: Not Informative Enough, Informative Enough, Too Much Information, False
Population: Mechanical Turk workers
Systematic debriefing survey
10
Experiment 2: Prompt Type
Effect of Prompt
11
Experiment 2: Prompt Type
Effect of Prompt (p < 0.001)
Effect of Context (p < 0.001)
12
Experiment 2: Prompt Type
Effect of Prompt (p < 0.001)
Effect of Context (p < 0.001)
Weak interaction: Prompt x Context (p < 0.06)
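For readers who want to see how such effects could be tested, here is a hedged sketch of a plain logistic regression with a Prompt x Context interaction. This is not the authors' reported analysis (which may well have used mixed-effects models and different variable names); it only illustrates the structure of the comparison behind the p-values on this slide.

```python
# Hypothetical re-analysis sketch, not the original analysis.
# "targets" is assumed to hold one row per target trial with a binary
# "si" column (1 = implicature response) and categorical "prompt"
# (Accuracy / Informativity) and "context" columns.
import pandas as pd
import statsmodels.formula.api as smf

def fit_prompt_context(targets: pd.DataFrame):
    # Main effects of Prompt and Context plus their interaction.
    model = smf.logit("si ~ C(prompt) * C(context)", data=targets)
    return model.fit()

# result = fit_prompt_context(targets)
# print(result.summary())
```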
13
Experiment 2: Prompt Type
No effect of Protocol
14
Experiment 2: Prompt Type
Low SI rates overall
But the debriefing survey indicates that roughly 70% of participants were aware of the some/all contrast
15
Populations
Turkers: more sensitive to linguistic context; less sensitive to changes in social context / evaluation apprehension
Undergraduates: more sensitive to Protocol
16
Take-Home Points
Methodological variables should be explored alongside conventional linguistic variables
– Ideal: models of these processes (cf. Schütze 1996)
– Crowdsourcing allows for cheap/fast exploration of parameter spaces
New Normal: Don't guess, test.
– Controls, norming, confounding … all testable online
17
A Potential Check on Exuberance
Undergraduates may be WEIRD*, but crowdsourcing engenders its own weirdness
– High evaluation apprehension
– Uncontrolled backgrounds, skill sets, focus levels
– Unknown motivations
Ignorance does not necessarily mean diversity
– This requires study if we rely on such participants more
* Henrich et al. (2010), The Weirdest People in the World? BBS
18
Acknowledgments
Thanks to Jaye Padgett and to the attendees of two Semantics Lab presentations and the XPRAG conference for their comments, to the HUGRA committee for their generous award and support, and to Rosie Wilson-Briggs for stimuli construction.