Crowdsourcing and All-Pay Auctions
Milan Vojnović, Microsoft Research
Joint work with Dominic DiPalantino
UC Berkeley, July 13, 2009
Examples of Crowdsourcing
Crowdsourcing = soliciting solutions via open calls to large-scale communities
– Term coined in a Wired article (06)
Taskcn – 530,000 solutions posted for 3,100 tasks
Innocentive – Over $3 million awarded
Odesk – Over $43 million brokered
Amazon's Mechanical Turk – Over 23,000 tasks
Examples of Crowdsourcing (contd)
Yahoo! Answers
– Launched Dec 05
– 60M users / 65M answers (as of Dec 06)
Live QnA
– Launched Aug 06 / closed May 09
– 3M questions / 750M answers
Wikipedia
Incentives for Contribution
Incentives
– Monetary $$$
– Non-monetary: social gratification and publicity, reputation points, certificates and levels
Incentives for both participation and quality
Incentives for Contribution (contd)
Ex. Taskcn
[Screenshot of a Taskcn task listing: reward range (RMB), contest duration, number of submissions, number of registrants, number of views; 100 RMB ≈ $15 (July 09)]
Incentives for Contribution (contd)
Ex. Yahoo! Answers
[Screenshot of the Yahoo! Answers points and levels scheme]
Questions of Interest
Understanding of the incentive schemes
– How do contributions relate to offered rewards?
Design of contests
– How do we best design contests?
– How do we set rewards?
– How do we best suggest contests to players and rewards to contest providers?
Strategic User Behavior
From the empirical analysis of Taskcn by Yang et al. ("User Strategies on Taskcn.com", ACM EC 08):
– (i) users respond to incentives, (ii) users learn better strategies
– Suggests a game-theoretic analysis
Outline
Model of Competing Contests
Equilibrium Analysis
– Player-Specific Skills
– Contest-Specific Skills
Design of Contests
Experimental Validation
Conclusion
Single Contest Competition
[Figure: players with unit costs c_1, c_2, c_3, c_4 compete in a contest offering reward R]
c_i = cost per unit of effort (or quality) produced
Single Contest Competition (contd)
Players submit efforts b_1, b_2, b_3, b_4; the highest effort wins reward R
Outcome (player 2 wins): payoffs −c_1 b_1, R − c_2 b_2, −c_3 b_3, −c_4 b_4
All-Pay Auction
Everyone pays their bid: players with values v_1, v_2, v_3, v_4 submit bids b_1, b_2, b_3, b_4; the highest bid wins
Outcome (player 2 wins): payoffs −b_1, v_2 − b_2, −b_3, −b_4
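The two payoff tables above can be connected in a short sketch. The numbers below are made up for illustration; the reduction (value v_i = R / c_i, payoffs scaled by 1 / c_i) is a standard change of units, not something stated explicitly on the slide.

```python
# Illustrative sketch (numbers are made up): a contest with reward R and
# per-unit-effort costs c_i is an all-pay auction in disguise — dividing
# player i's payoff by c_i gives an all-pay auction with value v_i = R / c_i.

def all_pay_payoffs(values, bids):
    # Everyone pays their bid; the highest bidder also receives their value.
    winner = max(range(len(bids)), key=lambda i: bids[i])
    return [values[i] * (i == winner) - bids[i] for i in range(len(bids))]

def contest_payoffs(R, costs, efforts):
    # The highest effort wins reward R; every player incurs cost c_i * b_i.
    winner = max(range(len(efforts)), key=lambda i: efforts[i])
    return [R * (i == winner) - costs[i] * efforts[i]
            for i in range(len(efforts))]

R = 100.0
costs = [2.0, 1.0, 4.0, 5.0]       # c_1..c_4
efforts = [10.0, 30.0, 5.0, 2.0]   # b_1..b_4; player 2 exerts the most effort

contest = contest_payoffs(R, costs, efforts)
print(contest)                     # [-20.0, 70.0, -20.0, -10.0]

# Same game after the change of units: bids are efforts, values are R / c_i,
# and payoffs are the contest payoffs scaled by 1 / c_i.
auction = all_pay_payoffs([R / c for c in costs], efforts)
assert all(abs(contest[i] / costs[i] - auction[i]) < 1e-9 for i in range(4))
```

This is why results on all-pay auctions transfer directly to the contest model.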
Competing Contests
[Figure: J contests offering rewards R_1, R_2, …, R_J; users 1, 2, …, u, …, N each select one contest]
Incomplete Information Assumption
Each user u knows:
– the total number of users N
– his own skill
– that all skills are i.i.d. draws from a distribution F
We assume F is an atomless distribution with finite support [0, m]
Assumptions on User Skill
1) Player-specific skill: one random skill per user, i.i.d. across users u (ex.: contests require similar skills, or skill is determined by the player's opportunity cost)
2) Contest-specific skill: one random skill per user–contest pair, i.i.d. across users u and contests j (ex.: contests require diverse skills)
Bayes-Nash Equilibrium
Contest class = set of contests that offer the same reward
Mixed-strategy equilibrium: each user selects a contest of highest expected profit, where the expectation is taken with respect to beliefs about the other users' skills
A strategy consists of the probability of selecting a contest of each class j and a bid
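The slide leaves the equilibrium bid implicit. As a hedged aside for intuition, the classical symmetric Bayes-Nash equilibrium of a single all-pay auction with N bidders and i.i.d. uniform[0,1] values (a textbook simplification, without the costs and competing contests of the talk's model) is β(v) = ((N−1)/N)·v^N. A quick numeric check that no type gains by mimicking another:

```python
# Textbook symmetric equilibrium of a single all-pay auction with N bidders
# and i.i.d. uniform[0,1] values — a simplification for intuition only.

N = 3

def beta(z):
    # Equilibrium bid of type z: beta(z) = (N-1)/N * z**N.
    return (N - 1) / N * z ** N

def deviation_payoff(v, z):
    # Expected payoff of a type-v bidder who bids as if of type z.
    # Winning probability is F(z)**(N-1) = z**(N-1) under uniform values.
    return v * z ** (N - 1) - beta(z)

v = 0.5
grid = [i / 1000 for i in range(1001)]
best_z = max(grid, key=lambda z: deviation_payoff(v, z))
print(best_z)   # ~0.5: bidding as one's true type is optimal
```

The grid search confirms the incentive-compatibility condition: the deviation payoff v·z^(N−1) − β(z) is maximised at z = v.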
User Expected Profit
The expected profit for a contest of class j depends on:
– the probability of selecting a contest of class j
– the distribution of user skill conditional on having selected contest class j
Outline
Model of Competing Contests
Equilibrium Analysis
– Player-Specific Skills
– Contest-Specific Skills
Design of Contests
Experimental Validation
Conclusion
Equilibrium Contest Selection
[Figure: the skill range [0, m] partitioned into skill levels by thresholds, each level mapped to a set of contest classes]
Threshold Reward
Only the K highest-reward contest classes are selected with strictly positive probability; the threshold condition involves the number of contests of each class k
Partitioning over Skill Levels
A user of skill v is of skill level l if v lies in the l-th interval of a threshold partition of [0, m]
Contest Selection
A user of skill level l, i.e. with skill in the l-th interval, selects a contest of class j with a probability determined in equilibrium
Participation Rates
A contest of class j is selected with a probability that is prior-free – independent of the distribution F
Large-System Limit
The number of users and the number of contests of each class grow large in fixed proportions (positive constants), where K is a finite number of contest classes
Skill Levels for Large System
As in the finite system, a user of skill v is of skill level l if v lies in the l-th interval of the limit threshold partition of [0, m]
Participation Rates for Large System
The expected number of participants for a contest of class j is prior-free – independent of the distribution F
Contest Selection in Large System
A user of skill level l, i.e. with skill in the l-th interval, selects a contest of class j with a probability determined in the limit
For large systems, what matters is which contests are selected for a given skill
Proof Hint for Player-Specific Skills
Key property: the equilibrium expected payoff functions g_1(v), g_2(v), g_3(v), g_4(v) over the skill range [0, m], with thresholds v_1, v_2, v_3
[Figure: payoff curves g_j(v) plotted against skill v]
Outline
Model of Competing Contests
Equilibrium Analysis
– Player-Specific Skills
– Contest-Specific Skills
Design of Contests
Experimental Validation
Conclusion
Contest-Specific Skills
Results are established only for the large-system limit
Same equilibrium relationship between participation and rewards as for player-specific skills
Proof Hints
– Limit expected payoff: characterised for each contest class
– Balancing: expected payoffs are equalised whenever two classes are both selected with positive probability
– The asserted relations follow from the above
Outline
Model of Competing Contests
Equilibrium Analysis
– Player-Specific Skills
– Contest-Specific Skills
Design of Contests
Experimental Validation
Conclusion
System Optimum Rewards
SYSTEM: set the rewards so as to maximise system welfare over the reward vector, subject to a budget constraint
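The slide's exact objective is not recoverable from the extraction, so the sketch below assumes, purely for illustration, a concave welfare w_j·log(1 + R_j) per contest class and a total reward budget B; both the functional form and the budget are hypothetical stand-ins. Under these assumptions the optimum is a standard water-filling allocation.

```python
# Hypothetical sketch of a SYSTEM-style problem: choose rewards R_1..R_J to
# maximise sum_j w_j * log(1 + R_j) subject to sum_j R_j <= B.
# The concave utilities and the budget B are illustrative assumptions,
# not the talk's actual objective.

def optimal_rewards(w, B, iters=100):
    # Water-filling via bisection on the Lagrange multiplier lam:
    # the KKT conditions give R_j = max(0, w_j / lam - 1).
    lo, hi = 1e-9, max(w)
    for _ in range(iters):
        lam = (lo + hi) / 2
        spend = sum(max(0.0, wj / lam - 1.0) for wj in w)
        if spend > B:
            lo = lam          # spending too much: raise the multiplier
        else:
            hi = lam
    return [max(0.0, wj / lam - 1.0) for wj in w]

w = [4.0, 2.0, 1.0]           # hypothetical welfare weights per contest class
R = optimal_rewards(w, B=10.0)
print(R)                      # higher-weight classes receive larger rewards
```

The qualitative takeaway matches the slides that follow: under a concave objective, only the relative sizes of the rewards are pinned down by the weights, and the budget scales them.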
Example 1: zero costs (non-monetary rewards)
Assume the welfare functions are increasing and strictly concave. Under player-specific skills, the system-optimum rewards are, for any c > 0, c times the unique solution of the optimality conditions
Rewards are unique up to a multiplicative constant – only the relative setting of rewards matters
Example 1 (contd)
The same holds for large systems: with increasing, strictly concave welfare functions and player-specific skills, the system-optimum rewards are, for any c > 0, c times the unique solution of the optimality conditions
Example 2: optimum effort
Consider SYSTEM with
– Utility: exerted effort, weighted by the probability each contest is attended
– Cost: the cost of giving reward R_j, under a budget constraint
Outline
Model of Competing Contests
Equilibrium Analysis
– Player-Specific Skills
– Contest-Specific Skills
Design of Contests
Experimental Validation
Conclusion
Taskcn
Analysis of rewards and participation across tasks as observed on Taskcn
– Tasks of diverse categories: graphics, characters, miscellaneous, super challenge
– We considered tasks posted in
Taskcn (contd)
[Scatter plots: reward vs. number of views, number of registrants, and number of submissions]
Submissions vs. Reward
Diminishing increase of submissions with reward
[Panels: Graphics, Characters, Miscellaneous; linear regression fits shown]
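A hedged sketch of the kind of fit behind "diminishing increase of submissions with reward": regress submissions on log(reward), a concave functional form consistent with the plots. The data below are synthetic stand-ins, not the Taskcn measurements from the talk.

```python
# Synthetic illustration: fit submissions ~ a + b*log(reward).
# The rewards, submission counts, and generating process below are made up;
# they only mimic the concave shape reported in the talk.
import math
import random

random.seed(0)
rewards = [50, 100, 200, 400, 800, 1600]
# Hypothetical generating process: logarithmic growth plus Gaussian noise.
subs = [2.0 + 3.0 * math.log(r) + random.gauss(0, 0.5) for r in rewards]

# Ordinary least squares for subs = a + b*log(reward), computed by hand.
x = [math.log(r) for r in rewards]
n = len(x)
xbar, ybar = sum(x) / n, sum(subs) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, subs)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar
print(a, b)   # slope b recovers roughly the true value of 3
```

Doubling the reward then adds a constant b·log 2 to the predicted submission count, which is the diminishing-returns pattern in the panels.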
Submissions vs. Reward for Subcategory Logos
Conditioning on more experienced users improves the model's prediction
[Panels, conditional on the rate at which users submit solutions: any rate, once a month, every fourth day, every second day; model fit shown]
Same for the Subcategory 2-D
[Panels: any rate, once a month, every fourth day, every second day; model fit shown]
Conclusion
Crowdsourcing as a system of competing contests
Equilibrium analysis of competing contests
– Explicit relationship between rewards and participation (prior-free)
– Diminishing increase of participation with reward, suggested by both the model and the data
Framework for the design of crowdsourcing contests
Base results for strategic modelling – ex.: strategic contest providers
More Information
Paper: ACM EC 09
Version with proofs: MSR Technical Report