Discussion: So Who Won?

Announcements

Looks like you're turning in reviews… good!
– Some of you are spending too much time on them! A review should hit the key points: what was good, what was bad, and anything you found interesting. It is not necessarily a summary of the paper, and it does not need to be long.
– "Think like a reviewer" – you have to (a) get the gist and (b) say something intelligent/critical about the paper that other reviewers may not have pointed out. The reviews are meant to force you to think.

Announcements

The class schedule is up.
– You will typically be assigned a Rank 1 or Rank 2 paper; in rare cases, a Rank 3.
– Send your presentation to Tarique 48 hours in advance for feedback.
– You don't need to do the class review for the paper you're presenting.

Today's Paper

From the early days of crowdsourced algorithms – one of the first few papers in this space.
– The motivation was derived from the design of database primitives for crowdsourcing: MAX, FILTER, SORT, …
– Framing it this way makes it easier for database reviewers to accept the paper!

Today's Paper

An algorithmic paper.
– Some hardness and #P-hardness proofs. It's totally fine if you don't get them – they're not the point. We completely understand that you may be coming in with different backgrounds (undergrad, HCI, industrial engineering, systems).
– Some heuristics to solve the problem – hopefully most of you got these!
– Plenty of experimental takeaways.

What to Look Out For in a Crowd Algorithms Paper

– Error model
– Question model
– Objectives
– Problem formulation
– Algorithms
– Experiments: realism and utility

Question Model: Crowdsourced Questions

The paper uniformly uses pairwise comparisons between items as the "unit question". Are there other question types? Pros/cons?

Question Model: Crowdsourced Questions

The paper uniformly uses pairwise comparisons between items as the "unit question". Are there other question types? Pros/cons?
– Rating: fewer questions (O(n) ratings vs. O(n^2) pairwise comparisons), but more errors, since items are judged without a direct comparison.
– Comparing and ranking multiple items at once? Again a tradeoff: workers have too many items to keep track of.

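A quick back-of-the-envelope check of that cost gap (a purely illustrative Python sketch; the function names are ours, not the paper's):

# Unit questions needed for n items under each question model.
def num_rating_questions(n):
    return n                 # one rating per item: O(n)

def num_comparison_questions(n):
    return n * (n - 1) // 2  # one vote per unordered pair: O(n^2)

for n in (10, 100, 1000):
    print(n, num_rating_questions(n), num_comparison_questions(n))
# 10 -> 45, 100 -> 4950, 1000 -> 499500 comparisons vs. n ratings.
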
Error Model: Assumptions

Were there any issues with the error model adopted by the paper?

Error Model: Assumptions

Were there any issues with the error model adopted by the paper?
– A: The paper assumes each worker has a single error rate – this need not be true.
– B: Workers are assumed to be independent.
– C: A single true ranking is assumed to exist.

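To make these assumptions concrete, here is a minimal simulation sketch of this style of error model – assuming exactly the three points above (one true ranking, independent votes, a single flip probability p per worker); the function name and setup are hypothetical:

import random

def vote(true_rank, i, j, p):
    # True means "i beats j". Each vote is independently flipped
    # with the worker's single error rate p, relative to the one
    # true ranking (lower rank value = better item).
    truth = true_rank[i] < true_rank[j]
    return truth if random.random() > p else not truth

# Five items whose true order is 0 > 1 > 2 > 3 > 4; ten votes on
# the pair (0, 4) from a worker with a 20% error rate.
true_rank = {item: pos for pos, item in enumerate(range(5))}
print([vote(true_rank, 0, 4, p=0.2) for _ in range(10)])
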
Objectives

Three-way tradeoff between:
– Cost
– Latency
– Accuracy

What do they optimize for? What do they mean by those quantities? What about other quantities?

Problem Formulation: Next Questions

Is the next-questions problem formulation reasonable? Say that, instead of a handful of additional questions, I'd like to schedule 1000 questions. Is it still reasonable?

Problem Formulation: Next Questions

Is the next-questions problem formulation reasonable? Say that, instead of a handful of additional questions, I'd like to schedule 1000 questions. Is it still reasonable?
– Not really – at that scale you may want a "decision tree" type approach, where later questions depend on earlier answers (see the sketch below).

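One way to picture that alternative: spend the budget in small adaptive batches, so each batch can depend on the answers gathered so far. A rough sketch only – pick_batch and collect_answers are hypothetical callbacks, not an API from the paper:

def adaptive_schedule(items, budget, batch_size, pick_batch, collect_answers):
    # Instead of fixing 1000 questions up front, ask batch_size at
    # a time; each call to pick_batch sees all answers so far.
    answers = []
    while budget > 0:
        batch = pick_batch(items, answers, min(batch_size, budget))
        if not batch:
            break
        answers.extend(collect_answers(batch))
        budget -= len(batch)
    return answers
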
Algorithms

Most of the algorithms were adapted from the economics/social-choice literature. Q: Were there other variants you'd have liked to see for the judgment problem?

Algorithms

Most of the algorithms were adapted from the economics/social-choice literature. Q: Were there other variants you'd have liked to see for the judgment problem?
– What about "two-hop" instead of "one-hop" (i.e., local) scoring? (See the sketch below.)
– What about combining information across these algorithms?

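For concreteness, here is what a "two-hop" extension of a local win-counting heuristic might look like – a hypothetical variant for the judgment (max) problem, not an algorithm from the paper:

from collections import defaultdict

def local_scores(votes, hops=1):
    # votes: list of (winner, loser) pairs from workers.
    # hops=1 scores each item by its direct wins; hops=2 also
    # credits the wins of the items it beat.
    wins, items = defaultdict(list), set()
    for w, l in votes:
        wins[w].append(l)
        items.update((w, l))
    score = {i: len(wins[i]) for i in items}
    if hops == 2:
        for i in items:
            score[i] += sum(len(wins[l]) for l in wins[i])
    return score

votes = [(0, 1), (0, 2), (1, 2), (2, 3), (1, 3)]
scores = local_scores(votes, hops=2)
print(max(scores, key=scores.get))  # predicted max item: 0

In this toy example, items 0 and 1 are tied on direct wins; the two-hop credit breaks the tie in favor of item 0, whose victims themselves won more comparisons.
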
Algorithms

Most of the algorithms were adapted from the economics/social-choice literature. Q: Were there other variants you'd have liked to see for the next votes problem?

Algorithms

Most of the algorithms were adapted from the economics/social-choice literature. Q: Were there other variants you'd have liked to see for the next votes problem?
– Pair, Max, Greedy, Round-Robin.
– What about something that quantifies the potential impact of each vote on the sort order? (See the sketch below.)
– What about repeatedly asking about the top pair?

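A sketch of that "impact" idea: ask next about the pair whose outcome is currently least settled. This is a hypothetical heuristic of ours, not one of the strategies listed above:

from collections import Counter
from itertools import combinations

def most_uncertain_pair(items, votes):
    # votes: list of (winner, loser) pairs. Returns the pair with
    # the smallest vote margin; never-asked pairs have margin 0,
    # so they get asked first.
    wins = Counter()
    for w, l in votes:
        wins[(w, l)] += 1
    def margin(pair):
        i, j = pair
        return abs(wins[(i, j)] - wins[(j, i)])
    return min(combinations(sorted(items), 2), key=margin)

votes = [(0, 1), (0, 1), (1, 0), (2, 3)]
print(most_uncertain_pair([0, 1, 2, 3], votes))  # (0, 2)
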
Experimental Results

Did you find any issues with the experiments?

Experimental Results

Did you find any issues with the experiments?
– Most of them were simulated.
– Real experiments often reveal interesting insights. They are always worth performing to check the realism of the models, or at least to see whether the models lead to tangible benefits.

What Else?

What else could the authors have done better?