Crowdsourcing and All-Pay Auctions
Milan Vojnović, Microsoft Research
Joint work with Dominic DiPalantino
UC Berkeley, July 13, 2009
Examples of Crowdsourcing
Crowdsourcing = soliciting solutions via open calls to large-scale communities – coined in a Wired article (2006)
Taskcn – 530,000 solutions posted for 3,100 tasks
Innocentive – over $3 million awarded
oDesk – over $43 million brokered
Amazon's Mechanical Turk – over 23,000 tasks
Examples of Crowdsourcing (cont'd)
Yahoo! Answers – launched Dec 2005; 60M users / 65M answers (as of Dec 2006)
Live QnA – launched Aug 2006, closed May 2009; 3M questions / 750M answers
Wikipedia
Incentives for Contribution
Monetary – $$$
Non-monetary – social gratification and publicity, reputation points, certificates and levels
Incentives matter for both participation and quality
Incentives for Contribution (cont'd)
Example: Taskcn. [Screenshot of a Taskcn contest listing showing the reward range in RMB, contest duration, and numbers of submissions, registrants, and views; 100 RMB ≈ $15 as of July 2009.]
Incentives for Contribution (cont'd)
Example: Yahoo! Answers. [Table of points and levels. Source: http://en.wikipedia.org/wiki/Yahoo!_Answers]
Questions of Interest
Understanding incentive schemes – how do contributions relate to offered rewards?
Design of contests – how do we best design contests? How do we set rewards? How do we best suggest contests to players and rewards to contest providers?
Strategic User Behavior
From an empirical analysis of Taskcn by Yang et al. (ACM EC 08), "User Strategies on Taskcn.com": (i) users respond to incentives, and (ii) users learn better strategies over time – this suggests a game-theoretic analysis.
Outline
Model of Competing Contests
Equilibrium Analysis – Player-Specific Skills; Contest-Specific Skills
Design of Contests
Experimental Validation
Conclusion
Single Contest Competition
[Diagram: players with unit costs c_1, c_2, c_3, c_4 compete in a contest offering reward R; c_i = player i's cost per unit of effort or quality produced.]
Single Contest Competition (cont'd)
Outcome: each player i submits quality b_i at cost c_i b_i; the winner (here player 2) earns R - c_2 b_2, while the losers earn -c_1 b_1, -c_3 b_3, -c_4 b_4.
All-Pay Auction
Outcome: each bidder i with valuation v_i submits bid b_i; everyone pays their bid, so the winner (here bidder 2) earns v_2 - b_2 while the losers earn -b_1, -b_3, -b_4. Dividing the contest payoffs by c_i shows that a single contest is strategically equivalent to an all-pay auction with valuations v_i = R / c_i.
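The all-pay payoff structure can be made concrete with a small sketch. The symmetric equilibrium bid used here, b(v) = ((n-1)/n)·v^n for valuations i.i.d. uniform on [0, 1], is a standard auction-theory result chosen for illustration; it is not a formula taken from the talk.

```python
def equilibrium_bid(v, n):
    """Symmetric equilibrium bid in an all-pay auction with n bidders whose
    valuations are i.i.d. uniform on [0, 1] (standard textbook result):
    b(v) = (n - 1) / n * v ** n."""
    return (n - 1) / n * v ** n

def play_all_pay(values):
    """Realized payoffs when each bidder follows the equilibrium bid:
    everyone pays their bid; the highest bid wins the prize (worth v_i
    to bidder i), so the winner earns v_i - b_i and losers earn -b_i."""
    n = len(values)
    bids = [equilibrium_bid(v, n) for v in values]
    winner = max(range(n), key=lambda i: bids[i])
    return [values[i] - bids[i] if i == winner else -bids[i]
            for i in range(n)]
```

Since the bid function is increasing, the highest-valuation bidder always wins, and every loser's payoff is strictly negative – the defining feature of the all-pay format.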
Competing Contests
[Diagram: N users (1, 2, …, u, …, N) choose among J contests offering rewards R_1, R_2, …, R_j, …, R_J.]
Incomplete Information Assumption
Each user u knows: N = the total number of users; his own skill; and that the skills of other users are randomly drawn from F. We assume F is an atomless distribution with finite support [0, m].
Assumptions on User Skill
1) Player-specific skill: random, i.i.d. across u (e.g., contests require similar skills, or skill is determined by a player's opportunity cost)
2) Contest-specific skill: random, i.i.d. across u and j (e.g., contests require diverse skills)
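The two skill models can be sketched as follows. The uniform distribution on [0, m] with m = 1 is an illustrative choice for F; the talk only requires F to be atomless with finite support.

```python
import random

def sample_skills(n_users, n_contests, model, seed=0):
    """Draw a skill matrix v[u][j] under the two models from the talk,
    using Uniform[0, 1] as a stand-in for F.
    - "player":  every row is constant (v[u][j] = v_u for all contests j)
    - "contest": entries are i.i.d. across both users u and contests j"""
    rng = random.Random(seed)
    if model == "player":
        draws = [rng.uniform(0.0, 1.0) for _ in range(n_users)]
        return [[draws[u]] * n_contests for u in range(n_users)]
    if model == "contest":
        return [[rng.uniform(0.0, 1.0) for _ in range(n_contests)]
                for _ in range(n_users)]
    raise ValueError("model must be 'player' or 'contest'")
```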
Bayes-Nash Equilibrium
Mixed-strategy equilibrium: select a contest of highest expected profit, where the expectation is with respect to beliefs about other users' skills. A strategy specifies, for each class j, the probability of selecting a contest of that class, and a bid. Contest class = set of contests that offer the same reward.
User Expected Profit
The expected profit for a contest of class j is determined by the probability of selecting a contest of class j and the distribution of user skill conditional on having selected contest class j.
Equilibrium Analysis: Player-Specific Skills
Equilibrium Contest Selection
[Diagram: the skill interval [0, m] is partitioned by thresholds v_2, v_3, v_4, … into skill levels 1, 2, 3, 4, …, which are mapped to contest classes 1, 2, 3, 4, ….]
Threshold Reward
Only the K highest-reward contest classes are selected with strictly positive probability; the threshold depends on the rewards and on the number of contests of each class k.
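A toy simulation can illustrate the threshold effect qualitatively. The greedy "reward per competitor" rule below is a crude stand-in for the talk's equilibrium analysis, not its model; it merely shows how low-reward contests can end up with no participants at all.

```python
def greedy_participation(rewards, n_players):
    """Sequential better-response sketch: each player in turn joins the
    contest that currently offers the highest reward per competitor,
    R_j / (count_j + 1).  With few players relative to contests, the
    lowest-reward contests attract nobody -- a qualitative analogue of
    the threshold-reward property."""
    counts = [0] * len(rewards)
    for _ in range(n_players):
        j = max(range(len(rewards)),
                key=lambda j: rewards[j] / (counts[j] + 1))
        counts[j] += 1
    return counts
```

For example, with rewards [100, 60, 30, 5] and six players, participation concentrates on the high-reward contests and the reward-5 contest is never chosen.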
Partitioning over Skill Levels
A user of skill v is of skill level l if v lies in the l-th interval of the partition of [0, m] induced by the thresholds v_2, v_3, ….
Contest Selection
A user of skill level l, i.e., with skill in the l-th partition interval, selects a contest of class j with the equilibrium mixing probability for that level.
Participation Rates
A contest of class j is selected with an equilibrium probability that is prior-free – independent of the distribution F.
Large-System Limit
The numbers of users and contests grow large in fixed proportions given by positive constants, while K, the number of contest classes, stays finite.
Skill Levels for Large System
As in the finite system, a user of skill v is of skill level l if v lies in the l-th interval of the skill partition, now defined by the large-system thresholds.
Participation Rates for Large System
The expected number of participants in a contest of class j is again prior-free – independent of the distribution F.
Contest Selection in Large System
A user of skill level l selects a contest of class j with the corresponding limit probability. [Diagram: skill axis [0, m] partitioned into levels 1–4 mapped to contest classes 1–4, with mixing weights such as 1/3.] For large systems, what matters is which contests are selected for a given skill.
Proof Hint for Player-Specific Skills
Key property – behaviour of the equilibrium expected payoffs. [Diagram: payoff curves g_1(v), g_2(v), g_3(v), g_4(v) over the skill axis [0, m], with thresholds v_1, v_2, v_3.]
Equilibrium Analysis: Contest-Specific Skills
Contest-Specific Skills
Results are established only for the large-system limit. The same equilibrium relationship between participation and rewards holds as for player-specific skills.
Proof Hints
Limit expected payoff – a payoff property holding for each contest class in the limit
Balancing – a condition holding whenever two contest classes are both selected
The asserted relations follow from the two properties above.
Design of Contests
System Optimum Rewards
SYSTEM: maximise welfare over the rewards, subject to the relevant constraints – i.e., set the rewards so as to optimize system welfare.
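As a hypothetical instance of SYSTEM, suppose welfare is a sum of concave utilities U_j(R_j) = w_j·log(1 + R_j) under a total reward budget B; both the functional form and the budget are illustrative assumptions, not the talk's objective. The optimum then equalizes marginal utilities across contests, which a water-filling bisection on the Lagrange multiplier finds:

```python
def optimal_rewards(weights, budget):
    """Maximize sum_j w_j * log(1 + R_j) s.t. sum_j R_j <= budget, R_j >= 0
    (illustrative objective).  KKT gives R_j = max(0, w_j / lam - 1) for
    the multiplier lam that exactly spends the budget; since total spend
    is decreasing in lam, bisection finds it (water-filling)."""
    def spend(lam):
        return sum(max(0.0, w / lam - 1.0) for w in weights)
    lo, hi = 1e-12, max(weights)   # spend(lo) huge, spend(hi) == 0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if spend(mid) > budget:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    return [max(0.0, w / lam - 1.0) for w in weights]
```

With weights [4, 2, 1] and budget 4, the multiplier is 1 and the optimum is [3, 1, 0]: the lowest-weight contest gets no reward, echoing the threshold behaviour on the participation side.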
Example 1: Zero Costs (Non-Monetary Rewards)
Assume the utility functions are increasing and strictly concave. Under player-specific skills, the system-optimum rewards are given, for any c > 0, in terms of the unique solution of a first-order condition. The rewards are unique only up to a multiplicative constant – only the relative setting of rewards matters.
Example 1 (cont'd)
The same holds for large systems: with increasing, strictly concave utility functions and player-specific skills, the system-optimum rewards are again determined, for any c > 0, by the unique solution of the corresponding limit condition.
Example 2: Optimum Effort
Consider SYSTEM with: Utility – the effort exerted in each contest, weighted by the probability that the contest is attended; Cost – the cost of granting reward R_j, under a budget constraint.
Experimental Validation
Taskcn
Analysis of rewards and participation across tasks as observed on Taskcn – tasks of diverse categories: graphics, characters, miscellaneous, super challenge. We considered tasks posted in 2008.
Taskcn (cont'd)
[Scatter plots: reward vs. number of views, number of registrants, and number of submissions.]
Submissions vs. Reward
Diminishing increase of submissions with reward. [Plots for the Graphics, Characters, and Miscellaneous categories, each with a linear-regression fit.]
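A diminishing increase of submissions with reward is naturally captured by regressing submissions against log-reward. The sketch below runs ordinary least squares on x = log R; the data in the usage example are synthetic stand-ins, not the Taskcn measurements.

```python
import math

def fit_log_model(rewards, submissions):
    """Closed-form simple linear regression for the model
    submissions ~ a + b * log(reward), which exhibits a diminishing
    increase of submissions with reward whenever b > 0."""
    xs = [math.log(r) for r in rewards]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(submissions) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, submissions))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b
```

On synthetic noiseless data generated as 2 + 3·log(R), the fit recovers the intercept 2 and slope 3 exactly (up to floating-point error).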
Submissions vs. Reward for the Subcategory Logos
The more we condition on experienced users, the better the model's prediction. [Plots conditional on the rate at which users submit solutions – any rate, once a month, every fourth day, every second day – with the model's fit overlaid.]
Same for the Subcategory 2-D
[Plots conditional on user submission rate – any rate, once a month, every fourth day, every second day – with the model's fit overlaid.]
Conclusion
Crowdsourcing modelled as a system of competing contests
Equilibrium analysis of competing contests – an explicit, prior-free relationship between rewards and participation; a diminishing increase of participation with reward, suggested by both the model and the data
A framework for the design of crowdsourcing contests
Base results for further strategic modelling – e.g., strategic contest providers
More Information
Paper: ACM EC 09. Version with proofs: MSR-TR-2009-09 – http://research.microsoft.com/apps/pubs/default.aspx?id=79370