Crowdsourcing = Crowd + Outsourcing “soliciting solutions via open calls to large-scale communities”
Some Examples
- Call for professional help: awards of 50,000 to 1,000,000 USD for each task
- Office-work platform
- Microtask platform: over 30,000 tasks open at the same time
What Tasks are crowdsourceable?
Software Development Reward: 25,000 USD
Data Entry Reward: 4.4 USD/hour
Image Tagging Reward: 0.04 USD
Trip Advice Reward: points on Yahoo! Answers
The impact of crowdsourcing on scientific research?
Amazon Mechanical Turk
- A micro-task marketplace
- Task prices are usually between 0.01 and 1 USD
- Easy-to-use interface
Amazon Mechanical Turk
- Human Intelligence Task (HIT): a task that is hard for computers
- Developer: prepay the money, publish HITs, get the results (see the sketch below)
- Worker: complete the HITs, get paid
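To make the developer side of this flow concrete, here is a minimal sketch using the boto3 MTurk client. The task title, reward, question file, and sandbox endpoint are illustrative assumptions, not details from the talk.

```python
# Hedged sketch of the developer workflow: publish a HIT, then fetch results.
# Assumes AWS credentials are configured and question.xml holds a valid
# MTurk question XML document (details omitted here).
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint; drop endpoint_url to publish on the live marketplace.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

with open("question.xml") as f:
    question_xml = f.read()

hit = mturk.create_hit(
    Title="Tag the objects in an image",
    Description="Look at one image and list the objects you see.",
    Keywords="image, tagging, labeling",
    Reward="0.04",                      # USD per assignment (prepaid by the developer)
    MaxAssignments=3,                   # number of workers per HIT
    LifetimeInSeconds=3 * 24 * 3600,    # how long the HIT stays on the marketplace
    AssignmentDurationInSeconds=600,    # time a worker has to finish one assignment
    Question=question_xml,
)
hit_id = hit["HIT"]["HITId"]

# Later: collect submitted work and pay the workers who completed it.
assignments = mturk.list_assignments_for_hit(
    HITId=hit_id, AssignmentStatuses=["Submitted"]
)["Assignments"]
for a in assignments:
    mturk.approve_assignment(AssignmentId=a["AssignmentId"])
```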
Who are the workers?
A Survey of Mechanical Turk
- Survey of 1,000 Turkers (Turk workers)
- Two identical surveys (Oct. 2008 and Dec. 2008) with consistent results
- Source: the blog "A Computer Scientist in a Business School"
Survey results: Turker education, age, gender, and annual income (charts)
Comparison with Internet Demographics (using data from ComScore)
In summary, Turkers are:
- younger: 51% are 21-35 years old vs. 22% of internet users
- mainly female: 70% female vs. 50% female
- lower income: 65% of Turkers earn less than 60k USD/year vs. 45% of internet users
- smaller families: 55% of Turkers have no children vs. 40% of internet users
How Much Do Turkers Earn?
Why Turkers Turk?
Research Applications
Dataset Collection
Datasets are important in computer science!
- In multimedia analysis: Is there X in the image? Where is Y in the image?
- In natural language processing: What is the emotion of this sentence?
- And in lots of other applications
Dataset Collection: Utility Annotation
- By Sorokin and Forsyth at UIUC, for image analysis
- Task types: type a keyword, select examples, click on landmarks, outline figures
Example annotation tasks: 0.01 USD/task, 0.02 USD/task, 0.01 USD/task
Dataset Collection: Linguistic Annotations (Snow et al., 2008)
- Word similarity: 0.2 USD to label 30 word pairs
Dataset Collection: Linguistic Annotations (Snow et al., 2008)
- Affect recognition: 0.4 USD to label 20 headlines (140 labels)
Dataset Collection: Linguistic Annotations (Snow et al., 2008)
- Textual entailment: if "Microsoft was established in Italy in 1985", does "Microsoft was established in 1985" follow?
- Word sense disambiguation: "a bass on the line" vs. "a funky bass line"
- Temporal annotation: "ran" happens before "fell" in "The horse ran past the barn fell"
Dataset Collection
- Document relevance evaluation: Alonso et al. (2008)
- User rating collection: Kittur et al. (2008)
- Noun compound paraphrasing: Nakov (2008)
- Name resolution: Su et al. (2007)
Data Characteristics: Cost? Efficiency? Quality?
Cost and Efficiency in image annotation (Sorokin and Forsyth, 2008)
Cost and Efficiency in linguistic annotation (Snow et al., 2008)
Cheap and fast! Is it good?
Quality
- Multiple non-experts can beat experts: "three cobblers combined surpass Zhuge Liang the mastermind" (aggregation sketched below)
- Black line: agreement among Turkers
- Green line: a single expert
- Gold standard: agreement among multiple experts
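The aggregation behind "multiple non-experts can beat experts" is, in its simplest form, a per-item majority vote compared against a gold standard. The sketch below illustrates that idea on made-up toy labels; it is not the evaluation code or data from the cited studies.

```python
from collections import Counter

# Toy data: per-item labels from several non-expert workers, plus a gold label.
# These values are illustrative only.
worker_labels = {
    "item1": ["pos", "pos", "neg", "pos", "pos"],
    "item2": ["neg", "neg", "neg", "pos", "neg"],
    "item3": ["pos", "neg", "pos", "neg", "neg"],
}
gold = {"item1": "pos", "item2": "neg", "item3": "pos"}

def majority_vote(labels):
    """Return the most common label among the workers."""
    return Counter(labels).most_common(1)[0][0]

aggregated = {item: majority_vote(labels) for item, labels in worker_labels.items()}
accuracy = sum(aggregated[i] == gold[i] for i in gold) / len(gold)
print(aggregated, f"agreement with gold: {accuracy:.2f}")
```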
In addition to Dataset Collection
QoE Measurement
- QoE (Quality of Experience): a subjective measure of user perception
- Traditional approach: user studies with MOS ratings (Bad -> Excellent)
- Crowdsourcing with paired comparison: diverse user input, easy to understand, and interval-scale scores can be calculated (see the sketch below)
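The slides do not name the model used to turn paired-comparison votes into interval-scale scores; one common choice is a Bradley-Terry fit, sketched below on an assumed toy win matrix for three test conditions.

```python
import numpy as np

# wins[i, j] = number of times condition i was preferred over condition j
# (toy counts for three test conditions, e.g. three compression rates).
wins = np.array([
    [0, 8, 9],
    [2, 0, 7],
    [1, 3, 0],
], dtype=float)

def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry strengths with the standard MM update."""
    n = wins.shape[0]
    comparisons = wins + wins.T          # n_ij: total comparisons between i and j
    total_wins = wins.sum(axis=1)        # W_i: total wins of condition i
    p = np.ones(n)
    for _ in range(iters):
        denom = np.array([
            sum(comparisons[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            for i in range(n)
        ])
        p = total_wins / denom
        p /= p.sum()                     # normalize for identifiability
    return p

strengths = bradley_terry(wins)
# Log-strengths behave like interval-scale quality scores.
print(np.log(strengths))
```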
Acoustic QoE Evaluation
Which one is better? A simple paired comparison.
Optical QoE Evaluation
Interactive QoE Evaluation
Acoustic QoE results: MP3 compression rate and VoIP loss rate (plots)
Optical QoE results: video codec and packet loss rate (plots)
Iterative Task
Iterative Tasks
- TurKit: a tool for iterative tasks on MTurk
- Imperative programming paradigm with the basic elements: variables (a = b), control (if-else statements), and loops (for, while statements)
- Turns MTurk into a programming platform that integrates human brainpower
Iterative Text Improvement
- A Wikipedia-like scenario
- One Turker improves the text; other Turkers vote on whether the improvement is valid
Iterative Text Improvement: Image Description
- Instructions for the improve-HIT: "Please improve the description for this image. People will vote on whether to approve your changes. Use no more than 500 characters."
- Instructions for the vote-HIT: "Please select the better description for this image. Your vote must agree with the majority to be approved."
(The improve/vote loop is sketched below.)
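A rough Python sketch of this improve/vote control flow follows. TurKit itself is JavaScript, and post_improve_hit / post_vote_hit are hypothetical placeholders for the real HIT-posting calls, so this only illustrates the loop structure, not TurKit's actual API.

```python
def post_improve_hit(text):
    """Placeholder: publish an improve-HIT and return one worker's revision."""
    raise NotImplementedError

def post_vote_hit(old_text, new_text, num_voters=3):
    """Placeholder: publish a vote-HIT and return True if the majority
    prefers the new text over the old one."""
    raise NotImplementedError

def iterative_text_improvement(initial_text, iterations=6):
    # TurKit-style imperative loop: each pass asks one worker to improve the
    # current text, then asks other workers to vote on the change.
    text = initial_text
    for _ in range(iterations):
        candidate = post_improve_hit(text)
        if post_vote_hit(text, candidate):
            text = candidate          # keep the improvement only if approved
    return text
```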
Iterative Text Improvement: Image Description (successive versions)
- "A partial view of a pocket calculator together with some coins and a pen."
- "A view of personal items a calculator, and some gold and copper coins, and a round tip pen, these are all pocket and wallet sized item used for business, writing, calculating prices or solving math problems and purchasing items."
- "A close-up photograph of the following items: * A CASIO multi-function calculator * A ball point pen, uncapped * Various coins, apparently European, both copper and gold …Various British coins; two of £1 value, three of 20p value and one of 1p value. …"
Iterative Text Improvement: Image Description (later version)
"A close-up photograph of the following items: A CASIO multi-function, solar powered scientific calculator. A blue ball point pen with a blue rubber grip and the tip extended. Six British coins; two of £1 value, three of 20p value and one of 1p value. Seems to be a theme illustration for a brochure or document cover treating finance - probably personal finance."
Iterative Text Improvement Handwriting Recognition Version 1 You (?) (?) (?) (work). (?) (?) (?) work (not) (time). I (?) (?) a few grammatical mistakes. Overall your writing style is a bit too (phoney). You do (?) have good (points), but they got lost amidst the (writing). (signature)
Iterative Text Improvement Handwriting Recognition Version 6 “You (misspelled) (several) (words). Please spell-check your work next time. I also notice a few grammatical mistakes. Overall your writing style is a bit too phoney. You do make some good (points), but they got lost amidst the (writing). (signature)”
Cost and Efficiency
More on Methodology
Repeated Labeling
- Crowdsourcing -> multiple imperfect labelers: each worker is a labeler, and labels are not always correct
- Repeated labeling: improves supervised induction, increases single-label accuracy, and decreases the cost of acquiring training data
Repeated Labeling
Repeated labeling helps improve the overall quality when the accuracy of a single labeler is low (see the simulation sketch below).
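A small simulation makes the point concrete: with assumed per-labeler accuracies (not the paper's data), majority voting over repeated binary labels raises quality most when each individual labeler is only modestly accurate.

```python
import random

def majority_label_accuracy(labeler_acc, num_labels, trials=10000, seed=0):
    """Estimate how often a majority vote of noisy binary labels is correct."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        # Each labeler is independently correct with probability labeler_acc.
        votes = sum(rng.random() < labeler_acc for _ in range(num_labels))
        if votes > num_labels / 2:
            correct += 1
    return correct / trials

# Odd numbers of labels avoid ties; values chosen for illustration only.
for acc in (0.6, 0.8):
    for k in (1, 5, 11):
        print(f"labeler accuracy {acc}, {k} labels -> "
              f"{majority_label_accuracy(acc, k):.3f}")
```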
Selective Repeated Labeling
- Repeatedly label the most uncertain points
- Label uncertainty (LU): whether the label distribution is stable; calculated from a Beta distribution (sketched below)
- Model uncertainty (MU): whether the model has high confidence in the label; calculated from model predictions
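One way to score label uncertainty in the spirit of this slide is to put a Beta posterior over the positive-label rate and measure how much mass falls on the minority side of 0.5. This is an illustrative sketch (using SciPy), not necessarily the exact scoring function from the cited work.

```python
from scipy.stats import beta

def label_uncertainty(num_pos, num_neg):
    """Score how unsettled an example's label multiset is.

    With a uniform prior, the posterior over the positive-class rate is
    Beta(num_pos + 1, num_neg + 1); the uncertainty is the posterior mass
    on the minority side of 0.5, so evenly split labels score highest.
    """
    posterior = beta(num_pos + 1, num_neg + 1)
    mass_below_half = posterior.cdf(0.5)
    return min(mass_below_half, 1.0 - mass_below_half)

# Evenly split labels are far more uncertain than a lopsided multiset.
print(label_uncertainty(3, 3))   # high uncertainty
print(label_uncertainty(9, 1))   # low uncertainty
```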
Selective Repeated Labeling
Selective repeated labeling improves the overall quality of the crowdsourcing approach.
- GRR: no selective repeated labeling
- MU: model uncertainty
- LU: label uncertainty
- LMU: label and model uncertainty combined
Incentive vs. Performance
- Does a high financial incentive lead to high performance?
- User studies (Mason and Watts, 2009): ordering images (e.g., choose the busiest image) and solving word puzzles
Incentive vs. Performance High incentive -> high quantity, not high quality
Incentive vs. Performance
- Workers always want more: how much workers think they deserve is influenced by the amount they are paid
- Implication: pay little at first, and incrementally increase the payment
Conclusion
Crowdsourcing provides a new paradigm and a new platform for computer science research. New applications, new methodologies, and new businesses are developing quickly with the aid of crowdsourcing.