Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers Panos Ipeirotis Stern School of Business New York University Joint work with Victor Sheng, Foster Provost, and Jing Wang

2 Motivation Many tasks rely on high-quality labels for objects: – relevance judgments for search engine results – identification of duplicate database records – image recognition – song and video categorization Labeling can be relatively inexpensive, using Mechanical Turk, the ESP game, …

Micro-Outsourcing: Mechanical Turk Requesters post micro-tasks, a few cents each

4 Motivation Labels can be used to train predictive models. But: labels obtained through such sources are noisy, and this directly affects the quality of the learned models.

5 Quality and Classification Performance As labeling quality increases, classification quality increases. [Plot: classification accuracy vs. training set size, for labeling quality Q = 0.5, 0.6, 0.8, 1.0]

6 How to Improve Labeling Quality Find better labelers – Often expensive, or beyond our control Use multiple noisy labelers: repeated-labeling – Our focus

7 Majority Voting and Label Quality Ask multiple labelers and keep the majority label as the "true" label. Quality is the probability of the majority label being correct; P is the probability of an individual labeler being correct. [Plot: majority-vote quality vs. number of labelers, for P = 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
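To make the curve on this slide concrete, here is a minimal sketch (not from the original deck) that computes majority-vote quality under the slide's assumptions: independent labelers, binary labels, each labeler correct with probability P, and an odd number of labelers so ties cannot occur.

```python
from math import comb

def majority_vote_quality(p: float, k: int) -> float:
    """Probability that the majority of k independent labelers,
    each correct with probability p, yields the correct label.
    Assumes binary labels and odd k (no ties)."""
    assert k % 2 == 1, "use an odd number of labelers to avoid ties"
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

# Example: labelers that are 70% accurate
for k in (1, 3, 5, 11):
    print(k, round(majority_vote_quality(0.7, k), 3))
# 1 0.7, 3 0.784, 5 0.837, 11 0.922
```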

8 Tradeoffs for Modeling Get more examples → improve classification. Get more labels per example → improve label quality → improve classification. [Plot: classification accuracy vs. training set size, for labeling quality Q = 0.5, 0.6, 0.8, 1.0]

9 Basic Labeling Strategies Single Labeling – Get as many data points as possible, one label each. Round-Robin Repeated Labeling – Repeatedly label data points, giving each next label to the example with the fewest labels so far (a sketch follows below).
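A small sketch of the round-robin strategy, assuming a hypothetical `get_label(example_id)` call that returns one noisy label; the heap simply keeps track of which example currently has the fewest labels.

```python
import heapq
from collections import defaultdict

def round_robin_labeling(example_ids, label_budget, get_label):
    """Round-robin repeated labeling: always send the next labeling
    request to the example with the fewest labels so far.
    `get_label(example_id)` is a stand-in for asking one noisy labeler."""
    labels = defaultdict(list)
    heap = [(0, ex) for ex in example_ids]   # (label_count, example_id)
    heapq.heapify(heap)
    for _ in range(label_budget):
        count, ex = heapq.heappop(heap)      # example with fewest labels
        labels[ex].append(get_label(ex))
        heapq.heappush(heap, (count + 1, ex))
    return labels
```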

10 Repeat-Labeling vs. Single Labeling [Plot: model accuracy vs. number of labels acquired, comparing repeated labeling (K = 5 labels/example) with single labeling, at labeling quality P = 0.8] With low noise, acquiring more singly labeled examples is better.

11 Repeat-Labeling vs. Single Labeling [Plot: model accuracy vs. number of labels acquired, comparing repeated labeling (K = 5 labels/example) with single labeling, at labeling quality P = 0.6] With high noise, repeated labeling is better.

12 Selective Repeated-Labeling We have seen: – With enough examples and noisy labels, getting multiple labels is better than single-labeling Can we do better than the basic strategies? Key observation: we have additional information to guide selection of data for repeated labeling – the current multiset of labels Example: {+,-,+,+,-,+} vs. {+,+,+,+}

13 Natural Candidate: Entropy Entropy is a natural measure of label uncertainty: E({+,+,+,+,+,+}) = 0, E({+,-,+,-,+,-}) = 1. Strategy: get more labels for high-entropy label multisets (a small sketch follows).
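A minimal sketch of the entropy score on a binary label multiset; it reproduces the two values quoted on the slide.

```python
from math import log2

def label_entropy(labels):
    """Shannon entropy (in bits) of the empirical label distribution
    in a multiset of binary labels, e.g. ['+', '-', '+']."""
    n = len(labels)
    entropy = 0.0
    for value in set(labels):
        p = labels.count(value) / n
        entropy -= p * log2(p)
    return entropy

print(label_entropy(['+'] * 6))              # 0.0
print(label_entropy(['+', '-'] * 3))         # 1.0
print(label_entropy(['+'] * 3 + ['-'] * 2))  # ~0.971, same as for 600+/400-
```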

14 What Not to Do: Use Entropy Selecting examples by entropy improves quality at first, but hurts in the long run.

15 Why Not Entropy? In the presence of noise, entropy will be high even with many labels. Entropy is scale invariant: (3+, 2-) has the same entropy as (600+, 400-).

16 Estimating Label Uncertainty (LU) Observe the +'s and –'s and compute Pr{+|obs} and Pr{–|obs}. Label uncertainty S_LU = tail of the Beta distribution. [Figure: Beta probability density function with the tail shaded]
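A sketch of the label-uncertainty score, under the assumption of a Beta(#pos + 1, #neg + 1) posterior (i.e., a uniform prior) with the tail taken on the losing side of the 0.5 decision threshold; the exact prior and threshold used in the paper may differ.

```python
from scipy.stats import beta

def label_uncertainty(n_pos: int, n_neg: int) -> float:
    """Tail of the Beta posterior over the true positive rate:
    the probability mass on the opposite side of 0.5 from the
    current majority label (smaller means more certain)."""
    posterior = beta(n_pos + 1, n_neg + 1)   # assumes a uniform Beta(1,1) prior
    tail = posterior.cdf(0.5)                # mass below the 0.5 threshold
    return min(tail, 1.0 - tail)

print(label_uncertainty(3, 2))    # 5 labels: still fairly uncertain
print(label_uncertainty(14, 6))   # 20 labels: much more certain
```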

Label Uncertainty p = 0.7, 5 labels (3+, 2-), entropy ≈ 0.97. [Figure: Beta pdf with the tail shaded; the CDF value was shown on the slide]

Label Uncertainty 10 labels (7+, 3-), entropy ≈ 0.88. [Figure: Beta pdf with the tail shaded; the p and CDF values were shown on the slide]

Label Uncertainty 20 labels (14+, 6-), entropy ≈ 0.88. [Figure: Beta pdf with the tail shaded; the p and CDF values were shown on the slide]

20 Quality Comparison [Plot: data quality vs. number of labels, comparing Label Uncertainty selection with round-robin repeated labeling (which is already better than single labeling)]

21 Model Uncertainty (MU) Learning a model of the data provides an alternative source of information about label certainty. Model uncertainty: get more labels for instances on which the model is uncertain. Intuition: – for data quality, low-certainty "regions" may be due to incorrect labeling of the corresponding instances – for modeling, why improve training-data quality where the model is already certain? [Diagram: models, examples, and the self-healing labeling process]
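A minimal illustration of a model-uncertainty score: fit a classifier to the current (e.g., majority-vote) labels and score each example by how close its predicted probability is to 0.5. The paper's version uses a cross-validated ensemble of learners; this single-model sketch with scikit-learn is only meant to convey the idea.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def model_uncertainty_scores(X, current_labels):
    """Fit a model on the current (binary) labels and return an
    uncertainty score per example: 1 when predicted P(+) is 0.5,
    0 when the model is completely certain."""
    model = LogisticRegression(max_iter=1000).fit(X, current_labels)
    p_pos = model.predict_proba(X)[:, 1]
    return 1.0 - 2.0 * np.abs(p_pos - 0.5)

# Examples with the highest scores are candidates for more labels.
```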

22 Label + Model Uncertainty Label and model uncertainty (LMU): avoid examples where either strategy is certain
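One way to combine the two scores so that an example is selected only when neither strategy is certain is a geometric mean, as in the sketch below (presented as a plausible combination consistent with the slide, not necessarily the exact formula used).

```python
import numpy as np

def lmu_scores(lu_scores, mu_scores):
    """Label-and-model uncertainty: geometric mean of the two scores,
    so an example scores high only if BOTH its label uncertainty and
    its model uncertainty are high."""
    return np.sqrt(np.asarray(lu_scores) * np.asarray(mu_scores))
```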

23 Quality Comparison [Plot: data quality vs. number of labels for Label Uncertainty, Model Uncertainty, Label + Model Uncertainty, and uniform round robin] Model Uncertainty alone also improves quality.

24 Comparison: Model Quality (I) [Plot: model quality across datasets for Label & Model Uncertainty] Across 12 domains, LMU is always better than GRR (generalized round robin). LMU is statistically significantly better than LU and MU.

25 Comparison: Model Quality (II) Across 12 domains, LMU is always better than GRR. LMU is statistically significantly better than LU and MU.

26 Summary of Results Micro-outsourcing (e.g., MTurk, RentaCoder, the ESP game) changes the landscape for data acquisition. Repeated labeling improves data quality and model quality. With noisy labels, repeated labeling can be preferable to single labeling. When labels are relatively cheap, repeated labeling can do much better than single labeling. Round-robin repeated labeling works well. Selective repeated labeling improves substantially.

Example: Build an Adult Web Site Classifier Need a large number of hand-labeled sites. Get people to look at sites and classify them as: G (general), PG (parental guidance), R (restricted), X (porn). Cost/Speed Statistics – Undergrad intern: 200 websites/hr, cost: $15/hr – MTurk: 2500 websites/hr, cost: $12/hr

Bad news: Spammers! Worker ATAMRO447HWJQ labeled X (porn) sites as G (general audience).

Solution: Repeated Labeling Probability of correctness increases with the number of workers. Probability of correctness increases with the quality of workers. 1 worker: 70% correct. 11 workers: 93% correct.

But Majority Voting Can Be Expensive Single-vote statistics – MTurk: 2500 websites/hr, cost: $12/hr – Undergrad: 200 websites/hr, cost: $15/hr. 11-vote statistics – MTurk: 227 websites/hr, cost: $12/hr – Undergrad: 200 websites/hr, cost: $15/hr.

Spammer Among 9 Workers Our "friend" ATAMRO447HWJQ mainly marked sites as G. Obviously a spammer… We can compute error rates for each worker. Error rates for ATAMRO447HWJQ: P[X → X] = 9.847%, P[X → G] = 90.153%, P[G → X] = 0.053%, P[G → G] = 99.947%.
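A sketch of how per-worker error rates like those above can be estimated, using the current consensus label (e.g., majority vote) as a stand-in for the true class. The production approach iterates this with an EM-style algorithm in the spirit of Dawid and Skene; this single pass is only illustrative, and `assignments`/`consensus` are hypothetical inputs.

```python
from collections import defaultdict

def worker_error_rates(assignments, consensus, classes=("G", "PG", "R", "X")):
    """assignments: list of (worker_id, item_id, assigned_label) tuples.
    consensus: dict item_id -> current best guess of the true label.
    Returns worker_id -> {(true, assigned): rate} confusion rates."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(lambda: defaultdict(int))
    for worker, item, label in assignments:
        true = consensus[item]
        counts[worker][(true, label)] += 1
        totals[worker][true] += 1
    rates = {}
    for worker in counts:
        rates[worker] = {
            (t, a): counts[worker][(t, a)] / totals[worker][t]
            for t in classes for a in classes if totals[worker][t] > 0
        }
    return rates
```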

Rejecting Spammers and Benefits Random answers would have an error rate of 50%. Average error rate for ATAMRO447HWJQ: 45.2% (P[X → X] = 9.847%, P[X → G] = 90.153%, P[G → X] = 0.053%, P[G → G] = 99.947%). Action: REJECT and BLOCK. Results: over time you block all spammers; spammers learn to avoid your HITs; you can decrease redundancy, as the quality of workers is higher.
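For the "average error rate" figure above, one plausible reading (the slide does not spell out the weighting) is the mean, over true classes, of the off-diagonal mass; a small sketch:

```python
def average_error_rate(confusion):
    """confusion: dict (true, assigned) -> rate, with each true class's
    rates summing to 1. Returns the average, over true classes, of the
    probability of assigning any label other than the true one."""
    true_classes = {t for t, _ in confusion}
    per_class_error = [
        sum(rate for (t, a), rate in confusion.items() if t == tc and a != tc)
        for tc in true_classes
    ]
    return sum(per_class_error) / len(per_class_error)

atamro = {("X", "X"): 0.09847, ("X", "G"): 0.90153,
          ("G", "X"): 0.00053, ("G", "G"): 0.99947}
print(average_error_rate(atamro))  # ~0.451, close to the 45.2% on the slide
```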

After rejecting spammers, quality goes up With spam: 1 worker, 70% correct; 11 workers, 93% correct. Without spam: 1 worker, 80% correct; 5 workers, 94% correct.

Correcting Biases Sometimes workers are careful but biased: this one classifies G → P and P → R. Average error rate for ATLJIK76YH1TF: 45.0%. Is ATLJIK76YH1TF a spammer?
Error rates for worker ATLJIK76YH1TF:
P[G → G] = 20.0%, P[G → P] = 80.0%, P[G → R] = 0.0%, P[G → X] = 0.0%
P[P → G] = 0.0%, P[P → P] = 0.0%, P[P → R] = 100.0%, P[P → X] = 0.0%
P[R → G] = 0.0%, P[R → P] = 0.0%, P[R → R] = 100.0%, P[R → X] = 0.0%
P[X → G] = 0.0%, P[X → P] = 0.0%, P[X → R] = 0.0%, P[X → X] = 100.0%

Correcting Biases For ATLJIK76YH1TF, we simply need to compute the "non-recoverable" error rate (technical details omitted). Non-recoverable error rate for ATLJIK76YH1TF: 9%. The condition number of the error-rate matrix [how easy it is to invert the matrix] is a good indicator of spamminess (see the error rates for ATLJIK76YH1TF above).
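A minimal sketch of the condition-number indicator using NumPy; the two matrices below are illustrative, not the exact ones from the slides. A spammer's rows are nearly identical, so the matrix is close to singular and its condition number is large, while a careful-but-biased worker's matrix remains comparatively well conditioned.

```python
import numpy as np

def spamminess_indicator(confusion_matrix):
    """Condition number of a worker's confusion matrix
    (rows = true classes, columns = assigned classes, rows sum to 1).
    Rows that ignore the true class make the matrix near-singular,
    so the condition number blows up."""
    return np.linalg.cond(np.asarray(confusion_matrix, dtype=float))

# Illustrative 2-class matrices (not taken from the slides):
spammer = [[0.90, 0.10], [0.89, 0.11]]  # answers barely depend on the true class
biased  = [[0.20, 0.80], [0.00, 1.00]]  # systematic bias, still largely invertible
print(spamminess_indicator(spammer))    # much larger
print(spamminess_indicator(biased))     # moderate
```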

Too Much Theory? Open-source implementation available at: Input: – labels from Mechanical Turk – cost of incorrect labelings (e.g., X → G costlier than G → X). Output: – corrected labels – worker error rates – ranking of workers according to their quality. Alpha version, more improvements to come! Suggestions and collaborations welcomed!

37 Many new directions… Strategies using “learning-curve gradient” Increased compensation vs. labeler quality Multiple “real” labels Truly “soft” labels Selective repeated tagging

Other Projects SQoUT project: Structured Querying over Unstructured Text; Faceted Interfaces. EconoMining project: The Economic Value of User-Generated Content.

39 SQoUT: Structured Querying over Unstructured Text Information extraction applications extract structured relations from unstructured text. Example input: "July 8, 2008: Intel Corporation and DreamWorks Animation today announced they have formed a strategic alliance aimed at revolutionizing 3-D filmmaking technology, …" An information extraction system (e.g., OpenCalais) produces tuples such as:
Date       Company1     Company2
08/06/08   BP           Veneriu
04/30/07   Omniture     Vignette
06/18/06   Microsoft    Nortel
07/08/08   Intel Corp.  DreamWorks
Alliances covered in The New York Times; alliances and strategic partnerships before 1990 are sparsely covered in databases such as SDC Platinum.

40 In an ideal world… [Diagram: text databases → extraction system(s) → output tuples] 1. Retrieve documents from a database/web/archive. 2. Process the documents. 3. Extract output tuples. Example query: SELECT Date, Company1, Company2 FROM Alliances USING OpenCalais OVER NYT_archive [WITH recall > 0.2 AND precision > 0.9] (SIGMOD'06, TODS'07, ICDE'09, TODS'09)

41 SQoUT: The Questions [Diagram: text databases → extraction system(s) → output tuples] Questions: 1. How do we retrieve the documents? (Scan all? Specific websites? Query Google?) 2. How do we configure the extraction systems? 3. What is the execution time? 4. What is the output quality? (SIGMOD'06 best paper, TODS'07, ICDE'09, TODS'09)

EconoMining Project: Show Me the Money! Basic idea: – opinion mining is an important application of information extraction – opinions of users are reflected in some economic variable (price, sales). Applications (in increasing order of difficulty): – buyer feedback and seller pricing power in online marketplaces (ACL 2007) – product reviews and product sales (KDD 2007) – importance of reviewers based on economic impact (ICEC 2007) – hotel ranking based on "bang for the buck" (WebDB 2008) – political news (MSM, blogs), prediction markets, and news importance.

Some Indicative Dollar Values [Figure: review phrases placed on a positive–negative scale by their estimated dollar impact; e.g., "good packaging": -$0.56] A natural method for extracting sentiment strength and polarity; it captures the pragmatic meaning within the given context, and captures misspellings as well. Is "good packaging" positive or negative?

Thanks! Q & A?

45 So… Multiple noisy labelers improve quality. (Sometimes) the quality of multiple noisy labelers is better than the quality of the best labeler in the set. So, should we always get multiple labels?

46 Optimal Label Allocation