Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers
Victor Sheng, Foster Provost, and Panos Ipeirotis
New York University, Stern School of Business

Presentation transcript:

Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers. Victor Sheng, Foster Provost, Panos Ipeirotis. New York University, Stern School of Business.

2 Outsourcing KDD preprocessing
Traditionally, data mining teams have invested substantial internal resources in data formulation, information extraction, cleaning, and other preprocessing.
– As Raghu put it in his Innovation Lecture: "the best you can expect are noisy labels."
Now we can outsource preprocessing tasks such as labeling, feature extraction, verifying information extraction, etc.
– using Mechanical Turk, Rent-a-Coder, etc.
– quality may be lower than expert labeling (much?)
– but low costs can allow massive scale
The ideas may also apply to focusing user-generated tagging, crowdsourcing, etc.

3 ESP Game (by Luis von Ahn)

Other free labeling schemes
Open Mind initiative
Other GWAP games
– Tag a Tune
– Verbosity (tag words)
– Matchin (image ranking)
Web 2.0 systems?
– Can/should tagging be directed?

5 Noisy labels can be problematic
Many tasks rely on high-quality labels for objects:
– learning predictive models
– searching for relevant information
– finding duplicate database records
– image recognition/labeling
– song categorization
Noisy labels can lead to degraded task performance.

6 Quality and Classification Performance
As labeling quality increases, classification quality increases. Here, labels are values for the target variable.
[Figure: learning curves for labeler quality P = 0.5, 0.6, 0.8, 1.0]

Summary of results
Repeated labeling can improve data quality and model quality (but not always).
When labels are noisy, repeated labeling can be preferable to single labeling, even when labels aren't particularly cheap.
When labels are relatively cheap, repeated labeling can do much better (omitted).
Round-robin repeated labeling does well.
Selective repeated labeling improves substantially.

8 Our Focus: Labeling using Multiple Noisy Labelers
– Repeated labeling and data quality
– Repeated labeling and classification quality
– Selective repeated labeling

9 Majority Voting and Label Quality
Ask multiple labelers; keep the majority label as the true label.
Quality is the probability of the integrated label being correct. P is the probability of an individual labeler being correct.
[Figure: majority-vote label quality vs. number of labelers, for P = 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
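
The curves on this slide follow from a simple binomial calculation. Below is a minimal sketch, assuming independent labelers who are each correct with the same probability p (the slide's setting); `majority_quality` is a hypothetical helper name, not from the paper.

```python
from math import comb

def majority_quality(p: float, n: int) -> float:
    """Probability that the majority vote of n independent labelers,
    each correct with probability p, yields the correct label.
    Ties (even n) are broken by a fair coin flip."""
    q = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))
    if n % 2 == 0:  # add half the probability mass of an exact tie
        k = n // 2
        q += 0.5 * comb(n, k) * p**k * (1 - p)**k
    return q

# Integrated quality rises with more labels when p > 0.5, falls when p < 0.5:
for p in (0.4, 0.6, 0.8, 1.0):
    print(p, [round(majority_quality(p, n), 3) for n in (1, 3, 5, 11)])
```

When p > 0.5 the integrated quality climbs toward 1 as labels are added; when p < 0.5 it degrades, which is why the P = 0.4 curve falls.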

10 Tradeoffs for Modeling
Get more labels: improve label quality, improve classification.
Get more examples: improve classification.
[Figure: learning curves for labeler quality P = 0.5, 0.6, 0.8, 1.0]

11 Basic Labeling Strategies
Single Labeling
– Get as many data points as possible, one label each.
Round-robin Repeated Labeling
– Fixed Round Robin (FRR): keep labeling the same set of points.
– Generalized Round Robin (GRR): repeatedly label data points, giving the next label to the point with the fewest labels so far (see the sketch below).
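
A minimal sketch of the GRR selection rule described above; `get_label` stands in for querying one noisy labeler and is a hypothetical callback, not an API from the paper.

```python
import heapq

def generalized_round_robin(points, budget, get_label):
    """Generalized Round Robin: spend `budget` label requests, always
    giving the next label to the point with the fewest labels so far."""
    # heap entries: (label_count, insertion_order, point_id)
    heap = [(0, i, pid) for i, pid in enumerate(points)]
    heapq.heapify(heap)
    labels = {pid: [] for pid in points}
    for _ in range(budget):
        count, order, pid = heapq.heappop(heap)
        labels[pid].append(get_label(pid))   # query one noisy labeler
        heapq.heappush(heap, (count + 1, order, pid))
    return labels
```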

12 Fixed Round Robin vs. Single Labeling
With high noise (labeling quality p = 0.6, 100 examples), repeated labeling is better than single labeling.
[Figure: learning curves, FRR (100 examples) vs. SL]

13 Fixed Round Robin vs. Single Labeling
With low noise (labeling quality p = 0.8, 50 examples), more single-labeled examples are better.
[Figure: learning curves, FRR (50 examples) vs. SL]

Gen. Round Robin vs. Single Labeling
With labeling quality P = 0.6 and k = 5 labels per example, repeated labeling is better than single labeling.
[Figure: learning curves, GRR vs. SL]

15 Tradeoffs for Modeling
Get more labels: improve label quality, improve classification.
Get more examples: improve classification.
[Figure: learning curves for labeler quality P = 0.5, 0.6, 0.8, 1.0]

16 Selective Repeated-Labeling
We have seen:
– With enough examples and noisy labels, getting multiple labels is better than single labeling.
– When we consider costly preprocessing, the benefit is magnified (omitted; see paper).
Can we do better than the basic strategies?
Key observation: we have additional information to guide the selection of data for repeated labeling, namely the current multiset of labels.
Example: {+,-,+,+,-,+} vs. {+,+,+,+}

17 Natural Candidate: Entropy
Entropy is a natural measure of label uncertainty:
E({+,+,+,+,+,+}) = 0
E({+,-,+,-,+,-}) = 1
Strategy: get more labels for examples with high-entropy label multisets.
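
The two values on this slide come straight from the Shannon entropy of the label proportions; a small sketch:

```python
from collections import Counter
from math import log2

def label_entropy(labels) -> float:
    """Shannon entropy (in bits) of a multiset of labels."""
    counts = Counter(labels)
    n = len(labels)
    return 0.0 - sum(c / n * log2(c / n) for c in counts.values())

print(label_entropy(["+"] * 6))             # 0.0  -- unanimous
print(label_entropy(["+", "-"] * 3))        # 1.0  -- maximally mixed
print(label_entropy(["+"] * 3 + ["-"] * 2)) # ~0.971, the (3+, 2-) case below
```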

18 What Not to Do: Use Entropy
Entropy-based selection improves at first, but hurts in the long run.
[Figure: data quality vs. number of labels, entropy-based selection vs. round robin]

19 Why not entropy?
In the presence of noise, entropy will be high even with many labels.
Entropy is scale invariant: (3+, 2-) has the same entropy as (600+, 400-).

20 Estimating Label Uncertainty (LU)
Observe the +'s and -'s and compute Pr{+|obs} and Pr{-|obs}.
Label uncertainty S_LU = tail of the beta distribution over the true label probability.
[Figure: beta probability density function with shaded tail S_LU]

Label Uncertainty
p = 0.7, 5 labels (3+, 2-)
Entropy ~ 0.97
CDF =

Label Uncertainty
p = 0.7, 10 labels (7+, 3-)
Entropy ~ 0.88
CDF =

Label Uncertainty
p = 0.7, 20 labels (14+, 6-)
Entropy ~ 0.88
CDF =
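
The shrinking tails in the last three slides can be reproduced with the beta CDF. The transcript's "CDF =" values are cut off; the sketch below assumes a uniform Beta(1,1) prior (the prior choice is an assumption here), under which the tails work out as in the comments.

```python
from scipy.stats import beta

def label_uncertainty(pos: int, neg: int) -> float:
    """Posterior Beta tail on the minority side of 0.5, given pos/neg
    label counts and a uniform Beta(1,1) prior (an assumption)."""
    post = beta(pos + 1, neg + 1)
    return min(post.cdf(0.5), 1 - post.cdf(0.5))

for pos, neg in [(3, 2), (7, 3), (14, 6)]:
    print((pos, neg), round(label_uncertainty(pos, neg), 3))
# (3, 2)  -> 0.344   entropy ~0.97
# (7, 3)  -> 0.113   entropy ~0.88
# (14, 6) -> 0.039   entropy ~0.88
```

This makes the previous slide's point concrete: between (7+, 3-) and (14+, 6-) the entropy plateaus at ~0.88, but the beta tail keeps shrinking as evidence accumulates.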

24 Label Uncertainty vs. Round Robin
[Figure: data quality vs. number of labels, LU vs. GRR]
Similar results across a dozen data sets.

Recall: Gen. Round Robin vs. Single Labeling
With labeling quality P = 0.6 and k = 5 labels per example, multi-labeling is better than single labeling.
[Figure: learning curves, GRR vs. SL]

26 Label Uncertainty vs. Round Robin
[Figure: data quality vs. number of labels, LU vs. GRR]
Similar results across a dozen data sets.

27 Another strategy: Model Uncertainty (MU)
Learning a model of the data provides an alternative source of information about label certainty.
Model uncertainty: get more labels for instances that cannot be modeled well.
Intuition?
– For data quality: low-certainty regions may be due to incorrect labeling of the corresponding instances.
– For modeling: why improve training-data quality where the model is already certain?
[Figure: decision surface with uncertain regions marked "?"]

28 Yet another strategy: Label & Model Uncertainty (LMU)
Label and model uncertainty (LMU): avoid examples where either strategy is certain.
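
A sketch of one way to instantiate MU and LMU scoring; the bagged-tree learner and the geometric-mean combination are illustrative assumptions, not necessarily the paper's exact choices.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def model_uncertainty(X_train, y_train, X):
    """MU sketch: fit an ensemble on the current (noisy) training data,
    then score each example by how close its predicted class
    probability is to 0.5 (higher score = more uncertain)."""
    clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25)
    clf.fit(X_train, y_train)
    p = clf.predict_proba(X)[:, 1]
    return 0.5 - np.abs(p - 0.5)   # in [0, 0.5]

def lmu_score(s_lu, s_mu):
    """LMU sketch: geometric mean of the two scores (an assumed
    combination rule); near zero if either strategy is certain."""
    return np.sqrt(np.asarray(s_lu) * np.asarray(s_mu))
```

The geometric mean matches the slide's intent: if either the label multiset or the model is confident about an example, its combined score collapses toward zero and it is skipped.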

29 Comparison
[Figure: data quality vs. number of labels for GRR, Label Uncertainty, Model Uncertainty, and Label & Model Uncertainty]
Model Uncertainty alone also improves quality.

30 Comparison: Model Quality
Across 12 domains, LMU is always better than GRR. LMU is statistically significantly better than LU and MU.
[Figure: model quality vs. number of labels, Label & Model Uncertainty on top]

Summary of results
Micro-task outsourcing (e.g., Mechanical Turk, Rent-a-Coder, the ESP game) has changed the landscape for data formulation.
Repeated labeling can improve data quality and model quality (but not always).
When labels are noisy, repeated labeling can be preferable to single labeling, even when labels aren't particularly cheap.
When labels are relatively cheap, repeated labeling can do much better (omitted).
Round-robin repeated labeling can do well.
Selective repeated labeling improves substantially.

32 Opens up many new directions…
– Strategies using learning-curve gradient
– Estimating the quality of each labeler
– Example-conditional quality
– Increased compensation vs. labeler quality
– Multiple real labels
– Truly soft labels
– Selective repeated tagging

Thanks! Q & A?

34 What if different labelers have different qualities?
(Sometimes) the quality of multiple noisy labelers is better than the quality of the best labeler in the set.
Here, 3 labelers with qualities p-d, p, p+d.
[Figure: majority-vote quality as d varies]
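
The "(Sometimes)" is easy to verify with a closed-form check for three independent labelers; `majority3` is a hypothetical helper name.

```python
def majority3(p1: float, p2: float, p3: float) -> float:
    """Probability that the majority of three independent labelers,
    correct with probabilities p1, p2, p3, yields the correct label."""
    return (p1 * p2 * p3
            + p1 * p2 * (1 - p3)
            + p1 * (1 - p2) * p3
            + (1 - p1) * p2 * p3)

p = 0.7
for d in (0.05, 0.2):
    q = majority3(p - d, p, p + d)
    print(f"d={d}: majority={q:.3f}, best labeler={p + d}")
# d=0.05: majority=0.785 > 0.75  -- voting beats the best labeler
# d=0.20: majority=0.800 < 0.90  -- the best labeler alone wins
```

With similar labelers (small d), majority voting beats the best labeler; with one far superior labeler (large d), it does not, which is exactly the slide's hedge.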

35 Mechanical Turk Example

36 Estimating Labeler Quality
(Dawid & Skene, 1979): multiple diagnoses
– Initialize assuming equal labeler qualities
– Estimate true labels for the examples
– Estimate the quality of each labeler given the true labels
– Repeat until convergence
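
A compact EM sketch in the spirit of Dawid & Skene, simplified to binary labels, a single accuracy per labeler (the original uses full confusion matrices), and a uniform class prior; these simplifications are assumptions for illustration.

```python
import numpy as np

def dawid_skene(votes: np.ndarray, n_iter: int = 50):
    """Simplified Dawid & Skene (1979) EM for binary labels.
    votes[i, j] in {0, 1} is labeler j's label for example i.
    Returns (posterior P(y_i = 1), estimated labeler accuracies)."""
    n, m = votes.shape
    mu = votes.mean(axis=1)  # initialize true labels by (soft) majority vote
    for _ in range(n_iter):
        # M-step: accuracy = expected agreement with current label estimates
        acc = (mu[:, None] * votes + (1 - mu[:, None]) * (1 - votes)).mean(axis=0)
        acc = np.clip(acc, 1e-6, 1 - 1e-6)
        # E-step: posterior over true labels given the accuracies
        log1 = (votes * np.log(acc) + (1 - votes) * np.log(1 - acc)).sum(axis=1)
        log0 = ((1 - votes) * np.log(acc) + votes * np.log(1 - acc)).sum(axis=1)
        mu = 1.0 / (1.0 + np.exp(log0 - log1))
    return mu, acc
```

The "assume equal qualities" bullet corresponds to the majority-vote initialization: with all labelers weighted equally, the first label estimates are just vote shares, and the alternation then lets accurate labelers earn more weight.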