1
Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers
Victor Sheng, Foster Provost, Panos Ipeirotis
KDD 2008
New York University Stern School
2
Outsourcing preprocessing
Traditionally, data mining teams have invested substantial internal resources in data formulation, information extraction, cleaning, and other preprocessing
– Raghu, from his Innovation Lecture: “the best you can expect are noisy labels”
Now, we can outsource preprocessing tasks such as labeling, feature extraction, verifying information extraction, etc.
– using Mechanical Turk, Rent-a-Coder, etc.
– quality may be lower than expert labeling (much?)
– but low costs can allow massive scale
The ideas may also apply to focusing user-generated tagging, crowdsourcing, etc.
3
ESP Game (by Luis von Ahn)
4
Other “free” labeling schemes
Open Mind initiative (http://www.openmind.org)
Other GWAP games
– Tag a Tune
– Verbosity (tag words)
– Matchin (image ranking)
Web 2.0 systems?
– Can/should tagging be directed?
5
Noisy labels can be problematic
Many tasks rely on high-quality labels for objects:
– learning predictive models
– searching for relevant information
– finding duplicate database records
– image recognition/labeling
– song categorization
Noisy labels can lead to degraded performance
6
Quality and Classification Performance
As labeling quality (labeling accuracy P) increases, classification quality increases.
[Figure: classification quality as a function of training-set size for labeler accuracy P = 0.5, 0.6, 0.8, 1.0]
7
Majority Voting and Label Quality
Ask multiple “noisy” labelers, keep the majority label as the “true” label
Given 2N+1 labelers with uniform accuracy P (P is the probability of an individual labeler being correct), the integrated quality is Pr[Bin(2N+1, P) >= N+1], i.e., the probability that a majority of the labelers are correct
[Figure: integrated quality vs. number of labelers for P = 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
Ways to improve integrated quality: (1) remove noisy labelers, (2) collect as many labels as possible
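As a quick illustration of the integrated-quality formula above, here is a minimal Python sketch (mine, not from the original slides) that computes the probability that a majority vote of 2N+1 labelers, each independently correct with probability P, yields the correct label. It assumes SciPy is available; ties cannot occur because the number of labelers is odd.

```python
from scipy.stats import binom

def integrated_quality(p: float, num_labelers: int) -> float:
    """Probability that the majority of `num_labelers` independent labelers,
    each correct with probability `p`, produces the correct label.
    Assumes an odd number of labelers, so no ties are possible."""
    assert num_labelers % 2 == 1, "use an odd number of labelers (2N+1)"
    needed = num_labelers // 2 + 1            # N+1 correct labels out of 2N+1
    # Pr[Bin(2N+1, p) >= N+1] = 1 - CDF(N)
    return 1.0 - binom.cdf(needed - 1, num_labelers, p)

if __name__ == "__main__":
    for p in (0.6, 0.7, 0.8, 0.9):
        print(p, [round(integrated_quality(p, n), 3) for n in (1, 3, 5, 11)])
```

For P above 0.5 the integrated quality rises with more labelers; for P below 0.5 adding labelers makes it worse, which is the intuition behind removing noisy labelers.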
8
Labeling Methods
MV: majority voting
Uncertainty-preserving labeling (soft labels)
– Multiplied Examples (ME): for each example x_i, ME considers the multiset of its existing labels L_i = {l_ij}, and for each label occurrence l_ij it creates a replica of x_i with that label and weight 1/|L_i|
– These weighted replicas are fed into the classifier for training
Another method is to use Naive Bayes (see WSDM’11): Modeling Annotator Accuracies for Supervised Learning, WSDM 2011
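To make the ME idea concrete, the sketch below (my illustration, not code from the paper) expands a dataset into weighted replicas: each label occurrence l_ij of example x_i becomes a copy of x_i carrying that label and weight 1/|L_i|, which any weight-aware classifier can consume via per-instance weights.

```python
from typing import Hashable, Sequence

def multiplied_examples(X: Sequence, label_multisets: Sequence[Sequence[Hashable]]):
    """Expand (example, label-multiset) pairs into weighted replicas (ME).

    For example x_i with labels L_i = [l_i1, ..., l_ik], emit one replica of
    x_i per label occurrence, each with weight 1/|L_i|, so the replicas of x_i
    have total weight 1 and preserve the label uncertainty.
    """
    X_out, y_out, w_out = [], [], []
    for x, labels in zip(X, label_multisets):
        if not labels:
            continue                      # skip still-unlabeled examples
        w = 1.0 / len(labels)
        for label in labels:
            X_out.append(x)
            y_out.append(label)
            w_out.append(w)
    return X_out, y_out, w_out

# Example: {+,+,-} becomes two '+' replicas and one '-' replica, each weighted 1/3.
print(multiplied_examples([[0.3, 1.2]], [["+", "+", "-"]]))
```

Most learners that accept per-instance weights (for example, the sample_weight argument of scikit-learn estimators) can train directly on such replicas.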
9
Get Another Label?
Single Labeling (SL)
– One label per sample; get more samples
Repeated labeling (with a fixed set of samples)
– Round-robin repeated labeling
– Fixed Round Robin (FRR): keep labeling the same set of samples
– Generalized Round Robin (GRR): keep labeling the same set of samples, giving highest preference to the sample with the fewest labels
Selective repeated labeling
– Consider the label uncertainty of a sample (LU)
– Consider the classification (model) uncertainty of a sample (MU)
– Consider both label uncertainty and model uncertainty (LMU)
10
Experiment Setting
70/30 division (70% of the data for training, 30% for testing)
Uniform labeling accuracy P: for each sample, the correct label is given with probability P
Classifier: C4.5 in WEKA
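A rough Python sketch of this setup (a stand-in of mine, not the authors' code): it simulates labelers of uniform accuracy P by flipping each training label with probability 1-P, and uses scikit-learn's DecisionTreeClassifier as an approximation of C4.5 (the original experiments used WEKA; scikit-learn's tree is CART-based, so results will differ in detail).

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def noisy_labels(y, p):
    """Return labels where each one is correct with probability p (binary task)."""
    flip = rng.random(len(y)) > p
    return np.where(flip, 1 - y, y)

X, y = load_breast_cancer(return_X_y=True)
# 70/30 train/test split, as in the slides.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for p in (0.6, 0.8, 1.0):
    clf = DecisionTreeClassifier(random_state=0).fit(X_tr, noisy_labels(y_tr, p))
    print(f"labeler accuracy P={p}: test accuracy "
          f"{accuracy_score(y_te, clf.predict(X_te)):.3f}")
```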
11
Single Labels vs. Majority Voting
The sample size of the training data set matters
When the sample size is large enough, MV is better than SL
With low noise, more (single-labeled) examples is better
[Figure: accuracy curves comparing SL with MV-FRR (50 examples)]
12
Tradeoffs for Modeling
Get more labels → improve label quality → improve classification
Get more examples → improve classification
[Figure: learning curves for labeler accuracy P = 0.5, 0.6, 0.8, 1.0]
13
Selective Repeated-Labeling
We have seen:
– With enough examples and noisy labels, getting multiple labels is better than single-labeling
– When we consider costly preprocessing, the benefit is magnified
Can we do better than the basic strategies?
Key observation: we have additional information to guide the selection of data for repeated labeling
– the multiset of labels; e.g., {+,-,+,+,-,+} vs. {+,+,+,+}
14
Natural Candidate: Entropy
Entropy is a natural measure of label uncertainty:
– E({+,+,+,+,+,+}) = 0
– E({+,-,+,-,+,-}) = 1
Strategy: get more labels for examples with high-entropy label multisets
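For concreteness, a small Python helper (my illustration) that computes the entropy of a label multiset as used above; the two slide examples evaluate to 0 and 1 bits respectively.

```python
from collections import Counter
from math import log2

def label_entropy(labels):
    """Shannon entropy (in bits) of the empirical label distribution of a multiset."""
    counts = Counter(labels)
    n = len(labels)
    h = -sum((c / n) * log2(c / n) for c in counts.values())
    return h + 0.0  # avoid printing -0.0 for pure multisets

print(label_entropy("++++++"))   # 0.0
print(label_entropy("+-+-+-"))   # 1.0
print(label_entropy("+++--"))    # ~0.971, the (3+, 2-) multiset from the later slides
```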
15
What Not to Do: Use Entropy
Entropy-based selection improves quality at first, but hurts in the long run
16
Why Not Entropy
In the presence of noise, entropy will be high even with many labels
Entropy is scale invariant: (3+, 2-) has the same entropy as (600+, 400-)
17
Binomial Distribution with a Uniform Prior
Let Y ~ Bin(θ, n) where θ ~ Uniform(0, 1). Given an observation Y = y, the posterior density of θ is proportional to θ^y (1-θ)^(n-y); normalizing this kernel yields a Beta(y+1, n-y+1) distribution. You cannot just call the posterior a binomial distribution, because you are conditioning on Y and θ is the random variable, not the other way around.
18
Estimating Label Uncertainty (LU)
Observe the +’s and –’s and compute Pr{+|obs} and Pr{-|obs}
Label uncertainty S_LU = tail of the Beta(alpha1, alpha2) posterior beyond the decision threshold p = 0.5
– For more accurate estimation, we can instead use the 95% HDR, Highest Density Region (or interval)
[Figure: Beta(18, 8) probability density function over [0, 1], with the tail below 0.5 shaded as S_LU; 95% HDR = [0.51, 0.84]]
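A minimal Python sketch of the LU score (my own, using SciPy's beta distribution): with a uniform prior, observing pos positive and neg negative labels gives a Beta(pos+1, neg+1) posterior, and S_LU is the posterior mass on the minority side of the 0.5 threshold. The printed values reproduce the worked examples on the next slides.

```python
from scipy.stats import beta

def label_uncertainty(pos: int, neg: int) -> float:
    """S_LU: posterior probability mass on the 'wrong' side of the 0.5 threshold,
    assuming a uniform prior, i.e. a Beta(pos+1, neg+1) posterior."""
    cdf_at_half = beta.cdf(0.5, pos + 1, neg + 1)
    # If the majority is positive, the uncertainty is the mass below 0.5, and vice versa.
    return min(cdf_at_half, 1.0 - cdf_at_half)

for pos, neg in [(3, 2), (7, 3), (14, 6)]:
    print(f"({pos}+, {neg}-): S_LU = {label_uncertainty(pos, neg):.2f}")
# Expected output: 0.34, 0.11, 0.04 - matching the worked examples below.
```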
19
Label Uncertainty, p = 0.7
5 labels (3+, 2-)
Entropy ~ 0.97
Beta posterior CDF at 0.5 = 0.34
20
Label Uncertainty, p = 0.7
10 labels (7+, 3-)
Entropy ~ 0.88
Beta posterior CDF at 0.5 = 0.11
21
Label Uncertainty, p = 0.7
20 labels (14+, 6-)
Entropy ~ 0.88
Beta posterior CDF at 0.5 = 0.04
22
Label Uncertainty vs. Round Robin
Similar results across a dozen data sets
23
Another strategy: Model Uncertainty (MU)
Learning a model of the data provides an alternative source of information about label certainty
Model uncertainty: get more labels for instances that cannot be modeled well
Intuition?
– for data quality, low-certainty “regions” may be due to incorrect labeling of the corresponding instances
– for modeling: why improve training-data quality where the model is already certain?
[Figure: a 2-D feature space with clusters of + and - training points and a few uncertain “?” instances]
24
Yet another strategy: Label & Model Uncertainty (LMU)
Label and model uncertainty (LMU): avoid examples where either strategy is already certain
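A sketch of how the two scores might be combined (my assumption, not stated on this slide): a geometric mean, so that a low (certain) score from either source pulls the combined score down. The simple model-uncertainty score below, based on a classifier's predicted P(+|x), is also a simplification of mine; the paper derives MU from cross-validated models.

```python
import numpy as np

def model_uncertainty(prob_positive: np.ndarray) -> np.ndarray:
    """A simple MU score from a classifier's predicted P(+|x):
    0 when the model is certain (probability near 0 or 1), maximal near 0.5.
    (A simplification; the paper derives MU from cross-validated models.)"""
    return 0.5 - np.abs(prob_positive - 0.5)

def lmu_score(s_lu: np.ndarray, s_mu: np.ndarray) -> np.ndarray:
    """Combine label and model uncertainty with a geometric mean (assumption):
    an example scores high only if BOTH sources consider it uncertain."""
    return np.sqrt(s_lu * s_mu)

# Example: pick the next examples to relabel = those with the highest LMU scores.
s_lu = np.array([0.34, 0.11, 0.04])           # from the Beta-tail sketch above
p_hat = np.array([0.55, 0.95, 0.50])          # hypothetical model probabilities
scores = lmu_score(s_lu, model_uncertainty(p_hat))
print(np.argsort(scores)[::-1])               # indices ranked for relabeling
```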
25
Comparison
[Figure: label quality curves for GRR, Label Uncertainty (LU), and Label & Model Uncertainty (LMU)]
Model Uncertainty alone also improves quality
26
Comparison: Model Quality
[Figure: model quality curves; Label & Model Uncertainty (LMU) on top]
Across 12 domains, LMU is always better than GRR
LMU is statistically significantly better than LU and MU
27
Summary
Micro-task outsourcing (e.g., MTurk, Rent-a-Coder, the ESP game) has changed the landscape for data formulation
Repeated labeling can improve data quality and model quality (but not always)
When labels are noisy, repeated labeling can be preferable to single labeling even when labels aren’t particularly cheap
When labels are relatively cheap, repeated labeling can do much better
Round-robin repeated labeling can do well
Selective repeated labeling improves substantially