Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers. Victor Sheng, Foster Provost, Panos Ipeirotis. KDD 2008. New York University, Stern School of Business.

Presentation transcript:

Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers. Victor Sheng, Foster Provost, Panos Ipeirotis. KDD 2008. New York University, Stern School of Business.

Outsourcing preprocessing
Traditionally, data mining teams have invested substantial internal resources in data formulation, information extraction, cleaning, and other preprocessing.
– Raghu, from his Innovation Lecture: "the best you can expect are noisy labels"
Now we can outsource preprocessing tasks such as labeling, feature extraction, and verifying information extraction:
– using Mechanical Turk, Rent-a-Coder, etc.
– quality may be lower than expert labeling (much lower?)
– but low costs can allow massive scale
The ideas may also apply to focusing user-generated tagging, crowdsourcing, etc.

ESP Game (by Luis von Ahn)

Other "free" labeling schemes
Open Mind initiative
Other GWAP games:
– Tag a Tune
– Verbosity (tag words)
– Matchin (image ranking)
Web 2.0 systems?
– Can/should tagging be directed?

Noisy labels can be problematic
Many tasks rely on high-quality labels for objects:
– learning predictive models
– searching for relevant information
– finding duplicate database records
– image recognition/labeling
– song categorization
Noisy labels can lead to degraded performance.

Quality and Classification Performance
As labeling quality (labeling accuracy P) increases, classification quality increases.
[Figure: classification accuracy curves for labeling accuracy P = 0.5, 0.6, 0.8, 1.0]

Majority Voting and Label Quality
Ask multiple "noisy" labelers and keep the majority label as the "true" label.
Given 2N+1 labelers with uniform accuracy P (P is the probability that an individual labeler is correct), the integrated quality is the probability that the majority is correct: q = Pr[Bin(2N+1, P) ≥ N+1].
[Figure: integrated quality vs. number of labelers, one curve per P = 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
Two ways to improve integrated quality: (1) remove (or avoid) very noisy labelers; (2) collect more labels per example. A computational sketch follows below.
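As a minimal sketch of the formula above (assuming SciPy is available; the function name is mine), the integrated quality can be computed directly from the binomial CDF:

```python
from scipy.stats import binom

def integrated_quality(p, n):
    """Quality of majority voting over 2n+1 labelers, each correct
    independently with probability p: Pr[Bin(2n+1, p) >= n+1]."""
    labelers = 2 * n + 1
    return 1.0 - binom.cdf(n, labelers, p)  # 1 - Pr[at most n correct votes]

# Sanity checks: with a single labeler (n = 0) quality equals p;
# with p = 0.7 and 11 labelers (n = 5) quality is roughly 0.92.
for p in (0.6, 0.7, 0.8, 0.9):
    print(p, [round(integrated_quality(p, n), 3) for n in (0, 1, 2, 5)])
```

Note that for P < 0.5 adding labelers makes integrated quality worse, which is why removing very noisy labelers matters.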

Labeling Methods
MV: majority voting.
Uncertainty-preserving labeling (soft labels):
– Multiplied Examples (ME): for each example x_i, ME takes the multiset of existing labels L_i and, for each label l_ij in L_i, creates a replica of x_i labeled l_ij with weight 1/|L_i|; these weighted replicas are fed to the classifier for training (a sketch follows below).
Another method uses Naive Bayes: "Modeling Annotator Accuracies for Supervised Learning", WSDM 2011.
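A sketch of the Multiplied Examples idea (assumptions: a NumPy feature matrix and a downstream learner that accepts per-example weights, e.g. a scikit-learn-style sample_weight):

```python
import numpy as np

def multiplied_examples(X, label_multisets):
    """For each example x_i with label multiset L_i, emit one replica of x_i
    per label l in L_i, each weighted 1/|L_i|, so every original example
    contributes total weight 1 to training."""
    rows, labels, weights = [], [], []
    for x, L in zip(X, label_multisets):
        for l in L:
            rows.append(x)
            labels.append(l)
            weights.append(1.0 / len(L))
    return np.array(rows), np.array(labels), np.array(weights)

# Hypothetical usage with a weight-aware classifier:
# Xr, yr, w = multiplied_examples(X, labels_per_example)
# clf.fit(Xr, yr, sample_weight=w)
```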

Get Another Label?
Single Labeling (SL):
– one label per example; get more examples.
Repeated labeling (with a fixed set of examples):
– Round-robin repeated labeling:
  – Fixed Round Robin (FRR): keep labeling the same set of examples.
  – Generalized Round Robin (GRR): keep labeling the same set of examples, giving highest preference to the example with the fewest labels (a GRR sketch follows below).
– Selective repeated labeling:
  – consider the label uncertainty of an example (LU)
  – consider the classification (model) uncertainty of an example (MU)
  – consider both label uncertainty and model uncertainty (LMU)
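A minimal sketch of generalized round robin (GRR) as described above; acquire_label is a hypothetical callback that queries one more (noisy) labeler for example i:

```python
def generalized_round_robin(label_multisets, budget, acquire_label):
    """label_multisets: one list of labels per example (each may start with one label).
    budget: number of additional labels to purchase.
    acquire_label(i): hypothetical callback returning a new noisy label for example i."""
    for _ in range(budget):
        # Always give the next label to the example with the fewest labels so far.
        i = min(range(len(label_multisets)), key=lambda j: len(label_multisets[j]))
        label_multisets[i].append(acquire_label(i))
    return label_multisets
```

The selective strategies (LU, MU, LMU) replace the fewest-labels rule with an uncertainty score; sketches appear later.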

Experiment Setting
– 70/30 division (70% for training, 30% for testing)
– Uniform labeling accuracy P: for each example, a correct label is given with probability P
– Classifier: C4.5 in WEKA
A simulation sketch follows below.
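A sketch of how this setting can be simulated (assumptions: binary 0/1 labels, NumPy and scikit-learn available; the helper name is mine). The original experiments used C4.5 in WEKA; any classifier can stand in its place:

```python
import numpy as np
from sklearn.model_selection import train_test_split

def noisy_copy(y, p, seed=0):
    """Return labels agreeing with the true binary labels y with probability p,
    flipped otherwise (simulating a labeler of uniform accuracy p)."""
    rng = np.random.default_rng(seed)
    flip = rng.random(len(y)) >= p
    return np.where(flip, 1 - y, y)

# X, y = ...  (some binary-labeled dataset)
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)
# y_tr_noisy = noisy_copy(np.asarray(y_tr), p=0.8)  # train on noisy labels, test on clean ones
```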

Single Labels vs. Majority Voting
– The size of the training set matters.
– When the training set is large enough, MV is better than SL.
– With low noise, more (singly labeled) examples is better.
[Figure: learning curves; legend includes MV-FRR (50 examples)]

Tradeoffs for Modeling
– Getting more labels improves label quality, which improves classification.
– Getting more examples improves classification.
[Figure: classification accuracy curves for labeling accuracy P = 0.5, 0.6, 0.8, 1.0]

Selective Repeated-Labeling
We have seen:
– with enough examples and noisy labels, getting multiple labels is better than single labeling;
– when we consider costly preprocessing, the benefit is magnified.
Can we do better than the basic strategies?
Key observation: we have additional information to guide the selection of data for repeated labeling, namely the label multisets; e.g., {+,-,+,+,-,+} vs. {+,+,+,+}.

Natural Candidate: Entropy
Entropy is a natural measure of label uncertainty:
– E({+,+,+,+,+,+}) = 0
– E({+,-,+,-,+,-}) = 1
Strategy: get more labels for examples with high-entropy label multisets (see the sketch below).
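A minimal sketch of the entropy score on a label multiset:

```python
import math
from collections import Counter

def label_entropy(labels):
    """Shannon entropy (bits) of the empirical label distribution,
    e.g. E({+,+,+,+,+,+}) = 0 and E({+,-,+,-,+,-}) = 1."""
    counts = Counter(labels)
    n = sum(counts.values())
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(label_entropy("++++++"))  # 0.0
print(label_entropy("+-+-+-"))  # 1.0
print(label_entropy("+++--"))   # ~0.971, the same for (3+, 2-) and (600+, 400-)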

What Not to Do: Use Entropy
The entropy strategy improves quality at first, but hurts in the long run.

Why Not Entropy
– In the presence of noise, entropy will be high even with many labels.
– Entropy is scale invariant: (3+, 2-) has the same entropy as (600+, 400-).

Binomial Distribution with a Uniform Prior
Let Y ~ Bin(θ, n), where θ ~ Uniform(0, 1). The Beta function B(y+1, n-y+1) is the normalization constant that turns θ^y (1-θ)^(n-y) into a beta density, so the posterior is θ | Y = y ~ Beta(y+1, n-y+1) (the derivation is spelled out below). Note that you cannot simply call the posterior a binomial distribution: you are conditioning on Y, and θ is the random variable, not the other way around.
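In LaTeX, the conjugacy step behind this slide (standard Bayes-rule algebra, added here for completeness):

```latex
\Pr(\theta \mid Y = y)
  \propto \Pr(Y = y \mid \theta)\,\Pr(\theta)
  = \binom{n}{y}\,\theta^{y}(1-\theta)^{n-y} \cdot 1
  \propto \theta^{y}(1-\theta)^{n-y},
\qquad 0 \le \theta \le 1,
```

so normalizing by B(y+1, n-y+1) gives θ | Y = y ~ Beta(y+1, n-y+1).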

Estimating Label Uncertainty (LU)
– Observe the +'s and -'s and compute Pr{+|obs} and Pr{-|obs}.
– Label uncertainty = the tail of the beta posterior beyond the decision threshold P = 0.5 (beta shape parameters alpha1 and alpha2 from the observed counts); call this score S_LU.
– For more accurate estimation we can instead use the 95% HDR, Highest Density Region (or interval).
[Figure: beta probability density function for Beta(18, 8), with 95% HDR [.51, .84]]

Label Uncertainty, example 1: p = 0.7, 5 labels (3+, 2-); entropy ≈ 0.97; beta CDF at 0.5 ≈ 0.34.

Label Uncertainty, example 2: p = 0.7, 10 labels (7+, 3-); entropy ≈ 0.88; beta CDF at 0.5 ≈ 0.11.

Label Uncertainty, example 3: p = 0.7, 20 labels (14+, 6-); entropy ≈ 0.88; beta CDF at 0.5 ≈ 0.04. (The sketch below reproduces these CDF values.)
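A sketch (assuming SciPy; the +1 shape parameters encode the uniform prior) that reproduces the CDF values on the three slides above, together with one natural reading of the S_LU score as the smaller posterior tail around 0.5:

```python
from scipy.stats import beta

def label_uncertainty(pos, neg):
    """S_LU: posterior mass on the less likely side of 0.5, assuming a
    uniform prior over the true probability of the positive label."""
    cdf_half = beta.cdf(0.5, pos + 1, neg + 1)  # Pr[theta <= 0.5 | observations]
    return min(cdf_half, 1.0 - cdf_half)

for pos, neg in [(3, 2), (7, 3), (14, 6)]:
    print((pos, neg), round(beta.cdf(0.5, pos + 1, neg + 1), 2))
# -> (3, 2) 0.34, (7, 3) 0.11, (14, 6) 0.04, matching the slides above
```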

Label Uncertainty vs. Round Robin
[Figure: label quality curves for LU vs. round robin; similar results across a dozen data sets]

Another Strategy: Model Uncertainty (MU)
Learning a model of the data provides an alternative source of information about label certainty.
Model uncertainty: get more labels for instances that cannot be modeled well.
Intuition:
– for data quality: low-certainty "regions" may be due to incorrect labeling of the corresponding instances;
– for modeling: why improve training-data quality where the model is already certain?

Yet Another Strategy: Label & Model Uncertainty (LMU)
Label and model uncertainty (LMU): avoid examples where either strategy is certain. A scoring sketch follows below.
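A sketch of MU and LMU scoring under two assumptions not spelled out on the slides: model uncertainty is taken from cross-validated predicted class probabilities (a prediction near 0.5 means the model is uncertain there), and LU and MU are combined by a geometric mean. A decision tree stands in for C4.5:

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

def model_uncertainty(X, y_current):
    """S_MU: 1 minus the cross-validated confidence of the model's predicted
    class for each training example (higher = modeled less well)."""
    proba = cross_val_predict(DecisionTreeClassifier(), X, y_current,
                              cv=5, method="predict_proba")
    return 1.0 - proba.max(axis=1)

def lmu_scores(s_lu, s_mu):
    """Combined label-and-model uncertainty (assumed geometric mean):
    high only when both scores are high."""
    return np.sqrt(np.asarray(s_lu) * np.asarray(s_mu))

# Selective repeated labeling: give the next labels to the examples
# with the highest LMU (or LU, or MU) scores.
```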

Comparison
[Figure: quality curves for GRR, Label Uncertainty, and Label & Model Uncertainty]
Model uncertainty alone also improves quality.

Comparison: Model Quality
[Figure: model quality curves, highlighting Label & Model Uncertainty]
Across 12 domains, LMU is always better than GRR. LMU is statistically significantly better than LU and MU.

Summary
– Micro-task outsourcing (e.g., Mechanical Turk, Rent-a-Coder, the ESP game) has changed the landscape for data formulation.
– Repeated labeling can improve data quality and model quality (but not always).
– When labels are noisy, repeated labeling can be preferable to single labeling even when labels aren't particularly cheap.
– When labels are relatively cheap, repeated labeling can do much better.
– Round-robin repeated labeling can do well.
– Selective repeated labeling improves substantially.