Crowdsourcing: Ethics, Collaboration, Creativity
KSE 801
Uichin Lee
Occupational Hazards
- Employers who don't pay
- Staying safe online: phishing, scamming
  - Some reports from Turker Nation: "Do not do any HITs that involve: secret shopping, …; they are scams"
  - How do we moderate such instances (in a scalable way)?
- Costs of requester and admin errors are often borne by workers
  - Defective HITs, too little time to finish, etc.
  - A worker's rating can suffer because of such errors
Helping Workers?
- Augmenting M-Turk from the outside
  - Few external Turking tools exist
- Building alternative human computation platforms?
- Offering workers legal protections (human rights)?
  - Humans or machines?
  - Legal responsibilities?
  - Intellectual property?
- Offering a fair wage?
  - Minimum wage? (or fair wage?)
Collaborative Crowdsourcing
- So far, mostly "independent" tasks
  - Ex) CastingWords' podcast transcription tasks: one Turker does the initial transcription, which is split into multiple segments that are verified/improved by other Turkers (see the sketch below)
- Collaborative translation with Etherpad
  - Etherpad: an open-source platform for real-time collaborative editing
  - Turkers join an Etherpad, then add to/improve a translation of the famous Spanish poem ($0.15 for this task)
  - Within hours, more than a dozen Turkers were working on the translation interactively, seeing each other's edits reflected in real time (lots of interaction)
(Screenshots: Etherpad editing pane and chat)
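A minimal sketch of the split-then-verify pipeline described above, in the spirit of the CastingWords example. The helpers post_hit() and collect_result() are hypothetical stand-ins for whatever crowdsourcing API is actually used; the segment size and rewards are illustrative assumptions, not values from the slide.

```python
def split_into_segments(text: str, max_words: int = 50) -> list[str]:
    """Split a draft transcription into roughly fixed-size segments."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def post_hit(task_type: str, payload: str, reward: float) -> str:
    """Hypothetical: publish a HIT and return its id."""
    raise NotImplementedError

def collect_result(hit_id: str) -> str:
    """Hypothetical: block until a worker submits, then return the answer."""
    raise NotImplementedError

def transcribe_with_verification(audio_url: str) -> str:
    # Stage 1: a single worker produces the initial transcription.
    draft_id = post_hit("transcribe", audio_url, reward=1.00)
    draft = collect_result(draft_id)

    # Stage 2: each segment is independently verified/improved by another worker.
    improved = []
    for segment in split_into_segments(draft):
        verify_id = post_hit("verify_segment", segment, reward=0.15)
        improved.append(collect_result(verify_id))
    return " ".join(improved)
```

The same two-stage pattern (one worker drafts, many workers verify small pieces) also describes the collaborative translation setup, with Etherpad replacing the per-segment HITs by a shared real-time document.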
Discussion
- Understanding the motivational and reward structure of crowd workers and how it generalizes across different kinds of markets
  - Intrinsic vs. extrinsic motivations?
  - From M-Turk (pennies) to InnoCentive (tens of thousands of dollars)
  - Virtual cash (point-based reputation), e.g., Naver KiN, Facebook FarmVille
- Designing tasks based on the needs of different markets
  - M-Turk: parceling work into short, simple subtasks?
  - What else can we do other than cooperative translation?
Crowdsourcing Subjective Tasks
- How would you know whether a given answer is a user's honest opinion or whether she is just clicking randomly?
  - How can tasks be designed to enforce quality responses?
- Two criteria:
  - It should take the same amount of effort for a worker to enter an invalid but believable response as a valid one written in good faith
  - The task should signal to workers that their output will be monitored and evaluated
- Wikipedia article rating by Turkers (is it comparable with ratings by experts?)
  - Enforced quality responses with the questions below (see the sketch after this slide):
    - Asking Turkers to answer three simple questions with verifiable quantitative answers, such as how many references/images/sections the article had
    - Asking Turkers to provide between four and six keywords summarizing the article
  - Results: invalid comments dropped from 49% to 3%!
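A minimal sketch of the quality check described above: before accepting a worker's subjective rating, compare their answers to the verifiable questions (reference/image/section counts) against ground truth computed from the article itself. The field names and the tolerance value are illustrative assumptions, not details from the original study.

```python
from dataclasses import dataclass, field

@dataclass
class Submission:
    rating: int                      # subjective quality rating of the article
    keywords: list = field(default_factory=list)  # 4-6 summary keywords
    num_references: int = 0          # worker's answers to the verifiable questions
    num_images: int = 0
    num_sections: int = 0

def is_plausible(sub: Submission, truth: dict, tolerance: int = 1) -> bool:
    """Accept the subjective rating only if the verifiable answers are close
    to the true counts and the keyword count is in the requested range."""
    checks = [
        abs(sub.num_references - truth["references"]) <= tolerance,
        abs(sub.num_images - truth["images"]) <= tolerance,
        abs(sub.num_sections - truth["sections"]) <= tolerance,
        4 <= len(sub.keywords) <= 6,
    ]
    return all(checks)

# Example: a submission whose counts match the article passes the filter;
# a random clicker is unlikely to get all three counts right by chance.
truth = {"references": 12, "images": 3, "sections": 6}
good = Submission(rating=5,
                  keywords=["history", "city", "economy", "culture"],
                  num_references=12, num_images=3, num_sections=6)
print(is_plausible(good, truth))  # True
```

The point of the design is that fabricating a believable-but-invalid submission now requires skimming the article anyway, which is roughly the same effort as answering honestly.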