
1 Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers. Victor Sheng (Assistant Professor, University of Central Arkansas). Joint work with Foster Provost & Panos Ipeirotis, New York University Stern School.

2 Outsourcing KDD Preprocessing. Traditionally, data mining teams invest in data formulation, information extraction, cleaning, etc. – Raghu Ramakrishnan, SIGKDD Innovation Award Lecture (2008): "the best you can expect are noisy labels". Now preprocessing tasks such as labeling and feature extraction can be outsourced – using Mechanical Turk, Rent-a-Coder, etc. – quality may be lower than expert labeling – but low costs can allow massive scale.

3 Mechanical Turk Example

4 ESP Game (by Luis von Ahn)

5 Other "Free" Labeling Schemes. Open Mind initiative (www.openmind.org). Other gwap games – Tag a Tune – Verbosity (tag words) – Matchin (image ranking). Web 2.0 systems? – Can/should tagging be directed?

6 Noisy Labels Can Be Problematic. Many tasks rely on high-quality labels: – learning predictive models – searching for relevant information – duplicate detection (i.e., entity resolution) – image recognition/labeling – song categorization (e.g., Pandora) – sentiment analysis. Noisy labels can degrade task performance.

7 Noisy Labels Impact Model Quality. As labeling quality increases, classification quality increases. [Chart: learning curves for labeler quality P = 0.5, 0.6, 0.8, 1.0.] Here, labels are values for the target variable.

8 Summary of Results. Repeated labeling can improve data quality and model quality (but not always). Repeated labeling can be preferable to single labeling even when labels aren't particularly cheap. When labels are relatively cheap, repeated labeling can do much better. Round-robin repeated labeling does well; selective repeated labeling performs better.

9 Setting for This Talk (I). Some process provides data points to be labeled, drawn from some fixed probability distribution – data points, or "examples" – Y is the multiset of labels acquired for an example, e.g., {+,-,+} – each example has a correct label. A set L of labelers, L1, L2, … (potentially unbounded) – Li has "quality" pi, the probability of labeling any given example correctly – same quality for all labelers, i.e., pi = pj – some strategies will acquire k labels for each example. [Table: feature columns A1, A2, …, An (the feature portion X) and label column Y.]
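A minimal sketch of this labeling model, assuming binary labels and labelers that each return the correct label with probability p (the function names are illustrative, not from the talk):

```python
import random

def noisy_label(true_label, p):
    """Return the true label with probability p, otherwise flip it (binary +/- labels)."""
    if random.random() < p:
        return true_label
    return '-' if true_label == '+' else '+'

def acquire_labels(true_label, p, k):
    """Ask k labelers, each of quality p, for labels on the same example."""
    return [noisy_label(true_label, p) for _ in range(k)]

# e.g., 5 labels for a positive example from labelers of quality 0.7
print(acquire_labels('+', 0.7, 5))   # something like ['+', '-', '+', '+', '+']
```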

10 Setting for This Talk (II). Total acquisition cost includes C_U and C_L – C_U, the cost of acquiring the unlabeled "feature portion" of an example – C_L, the cost of acquiring one label for an example – same C_U and C_L for all examples – either ignore C_U, or work with the cost ratio ρ = C_U/C_L. A process (majority voting) generates an integrated label from a set of labels, e.g., {+,-,+} → +. We care about: 1) the quality of the integrated labels; 2) the quality of predictive models induced from the data plus integrated labels, e.g., accuracy on test data.
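A minimal sketch of the majority-voting integration step (binary labels; how ties are broken is an assumption of this sketch, since the slide does not say):

```python
from collections import Counter

def integrate(labels):
    """Majority vote over a multiset of labels, e.g. ['+', '-', '+'] -> '+'."""
    # most_common(1) picks the label with the highest count;
    # ties are broken arbitrarily in this sketch.
    return Counter(labels).most_common(1)[0][0]

print(integrate(['+', '-', '+']))   # '+'
```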

11 Repeated Labeling Improves Label Quality. Ask multiple labelers and keep the majority label as the "true" label. Quality is the probability of the integrated label being correct; P is the probability of an individual labeler being correct. [Chart: integrated quality curves for P = 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0.]
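Under the stated assumptions (independent labelers of identical quality p, binary labels, odd k), the integrated-label quality plotted above can be sketched as a binomial tail probability; the function name here is illustrative:

```python
from math import comb

def integrated_quality(p, k):
    """Probability that the majority of k independent labels, each correct
    with probability p, is correct (binary labels, odd k)."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

print(round(integrated_quality(0.7, 1), 3))    # 0.7
print(round(integrated_quality(0.7, 5), 3))    # 0.837
print(round(integrated_quality(0.7, 11), 3))   # 0.922
```

For p > 0.5 this quality rises toward 1 as k grows, while for p < 0.5 it falls, which matches the shape of the curves on the slide.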

12 Tradeoffs for Modeling. Get more labels → improve label quality → improve classification. Get more examples → improve classification. [Chart: learning curves for labeler quality P = 0.5, 0.6, 0.8, 1.0.]

13 Basic Labeling Strategies. Single Labeling – get as many examples as possible, one label each. Round-robin Repeated Labeling (give the next label to the example with the fewest labels so far) – Fixed Round Robin (FRR): keep labeling the same set of examples in some order – Generalized Round Robin (GRR): also get new examples.
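A minimal sketch of the fixed round-robin policy (the heap bookkeeping and the function name are illustrative choices; the generalized variant would additionally add newly acquired examples to the pool):

```python
import heapq

def fixed_round_robin(examples, total_labels, get_label):
    """Spend `total_labels` label acquisitions on a fixed set of examples,
    always giving the next label to an example with the fewest labels so far."""
    heap = [(0, i) for i in range(len(examples))]   # (label count, example index)
    heapq.heapify(heap)
    labels = [[] for _ in examples]
    for _ in range(total_labels):
        count, i = heapq.heappop(heap)
        labels[i].append(get_label(examples[i]))
        heapq.heappush(heap, (count + 1, i))
    return labels

# Toy usage: 3 examples, 7 labels, a labeler that always answers '+'
print(fixed_round_robin(['e1', 'e2', 'e3'], 7, lambda ex: '+'))
```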

14 Fixed Round Robin vs. Single Labeling. [Chart: FRR (100 examples) vs. SL; labeling quality p = 0.6, #examples = 100.] With high noise, repeated labeling is better than single labeling.

15 Tradeoffs for Modeling. Get more labels → improve label quality → improve classification. Get more examples → improve classification. [Chart: learning curves for labeler quality P = 0.5, 0.6, 0.8, 1.0.]

16 Fixed Round Robin vs. Single Labeling. [Chart: FRR (50 examples) vs. SL; labeling quality p = 0.8, #examples = 50.] With low noise and few examples, more (singly labeled) examples are better.

17 Tradeoffs for Modeling. Get more labels → improve label quality → improve classification. Get more examples → improve classification. [Chart: learning curves for labeler quality P = 0.5, 0.6, 0.8, 1.0.]

18 Gen. Round Robin vs. Single Labeling. ρ = C_U/C_L = 0 (i.e., C_U = 0), k = 10 (ρ: cost ratio; k: #labels per example). Use up all examples. Repeated labeling is better than single labeling when labels are not particularly cheap.
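A back-of-the-envelope sketch of the budget arithmetic behind these comparisons, using the cost model from slide 10 (the budget, the function name, and the specific numbers are purely illustrative):

```python
def examples_affordable(budget, rho, k):
    """Examples a budget (in units of C_L) buys when each example costs
    rho = C_U/C_L to acquire plus k labels at cost 1 each."""
    return budget // (rho + k)

budget = 600
print(examples_affordable(budget, rho=0, k=10))    # 60 repeatedly labeled examples
print(examples_affordable(budget, rho=0, k=1))     # 600 singly labeled examples
print(examples_affordable(budget, rho=10, k=12))   # 27 repeatedly labeled examples
print(examples_affordable(budget, rho=10, k=1))    # 54 singly labeled examples
```

As ρ grows, spending k labels per example adds relatively little on top of acquiring the example itself, which is why repeated labeling pulls further ahead in the higher-ρ settings on the following slides.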

19 Gen. Round Robin vs. Single Labeling. ρ = C_U/C_L = 3, k = 5 (ρ: cost ratio; k: #labels per example). Repeated labeling is better than single labeling when labels are relatively cheap.

20 Gen. Round Robin vs. Single Labeling. ρ = C_U/C_L = 10, k = 12 (ρ: cost ratio; k: #labels per example). Repeated labeling is better than single labeling when labels are relatively cheaper.

21 Gen. Round Robin vs. Single Labeling. ρ = C_U/C_L = 50, k = 52 (ρ: cost ratio; k: #labels per example). Repeated labeling is better than single labeling for this setting.

22 Selective Repeated-Labeling. We have seen: – with noisy labels, getting multiple labels is better than single labeling – considering costly preprocessing, the benefit is magnified. Can we do better than the basic strategies? Key observation: we have additional information to guide the selection of data for repeated labeling – the current multiset of labels for each example. Example: {+,-,+,+,-,+} vs. {+,+,+,+}.

23 Natural Candidate: Entropy. Entropy is a natural measure of label uncertainty: E({+,+,+,+,+,+}) = 0, E({+,-,+,-,+,-}) = 1. Strategy: get more labels for examples with high-entropy label multisets.
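A minimal sketch of the entropy score on a label multiset (standard Shannon entropy in bits; the function name is illustrative):

```python
from collections import Counter
from math import log2

def label_entropy(labels):
    """Shannon entropy (bits) of the empirical label distribution in a multiset."""
    counts = Counter(labels)
    n = len(labels)
    return sum((c / n) * log2(n / c) for c in counts.values())

print(label_entropy(['+'] * 6))                             # 0.0
print(label_entropy(['+', '-', '+', '-', '+', '-']))        # 1.0
print(round(label_entropy(['+', '+', '+', '-', '-']), 2))   # 0.97, as on slide 27
```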

24 What Not to Do: Use Entropy. Entropy-based selection improves quality at first, but hurts in the long run.

25 Why Not Entropy. In noisy scenarios: – label multisets that truly reflect the truth have high entropy, even with many labels – some multisets accidentally get low entropy and never receive more labels. Entropy is scale invariant: (3+, 2-) has the same entropy as (600+, 400-). Fundamental problem: entropy measures mixture, not uncertainty.

26 Estimating Label Uncertainty (LU). Uncertainty: the estimated probability that the integrated label is wrong. Model the labels as a beta distribution, based on the observed +'s and -'s. Label uncertainty S_LU = the tail probability of the beta distribution beyond 0.5. [Figure: beta probability density function over [0.0, 1.0], with the tail beyond 0.5 shaded as S_LU.]

27 Label Uncertainty: p = 0.7, 5 labels (3+, 2-). Entropy ≈ 0.97, S_LU = 0.34.

28 Label Uncertainty: p = 0.7, 10 labels (7+, 3-). Entropy ≈ 0.88, S_LU = 0.11.

29 Label Uncertainty: p = 0.7, 20 labels (14+, 6-). Entropy ≈ 0.88, S_LU = 0.04.
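A minimal sketch that reproduces these S_LU values, assuming the beta model from slide 26 is a Beta(#positives + 1, #negatives + 1) posterior and S_LU is its tail probability on the side of 0.5 that contradicts the current majority label (that prior choice and the use of SciPy are assumptions of this sketch, not stated on the slides):

```python
from scipy.stats import beta

def label_uncertainty(pos, neg):
    """Tail probability of the Beta(pos + 1, neg + 1) posterior on the side
    of 0.5 that contradicts the current majority label."""
    lower_tail = beta(pos + 1, neg + 1).cdf(0.5)
    return min(lower_tail, 1.0 - lower_tail)

print(round(label_uncertainty(3, 2), 2))    # 0.34
print(round(label_uncertainty(7, 3), 2))    # 0.11
print(round(label_uncertainty(14, 6), 2))   # 0.04
```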

30 Label Uncertainty vs. Round Robin. [Chart comparing the two strategies.] Similar results across a dozen data sets.

31 Gen. Round Robin vs. Single Labeling. ρ = C_U/C_L = 10, k = 12 (ρ: cost ratio; k: #labels per example). Repeated labeling is better than single labeling here.

32 Label Uncertainty vs. Round Robin. [Chart comparing the two strategies.] Similar results across a dozen data sets. Up to this point, LU is the best strategy.

33 Another Strategy: Model Uncertainty (MU). Learning a model of the data provides an alternative source of information about label certainty. Model uncertainty: get more labels for instances about which the model is uncertain. Intuition: – for data quality, low-certainty "regions" may be due to incorrect labeling of the corresponding instances – for modeling: why improve training data quality where the model is already certain? [Diagram: models trained on the examples flag uncertain examples for relabeling, a "self-healing process".]
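A minimal sketch of a model-uncertainty score, assuming the model exposes class-membership probabilities. The slide's diagram suggests a set of models, but this sketch trains a single scikit-learn classifier for brevity; the specific learner and the 0.5-centered score are illustrative assumptions, not necessarily what the talk used:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def model_uncertainty(X_train, y_train, X_candidates):
    """Score candidate examples by how uncertain a model trained on the
    current integrated labels is about them (1.0 = maximally uncertain)."""
    model = LogisticRegression().fit(X_train, y_train)
    p_pos = model.predict_proba(X_candidates)[:, 1]
    return 1.0 - 2.0 * np.abs(p_pos - 0.5)

# Toy usage with synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = (X[:, 0] > 0).astype(int)
print(model_uncertainty(X, y, rng.normal(size=(5, 3))).round(2))
```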

34 Yet Another Strategy: Label & Model Uncertainty (LMU). Label and model uncertainty (LMU): avoid examples where either strategy is certain.
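One simple combination consistent with "avoid examples where either strategy is certain" is the geometric mean of the two scores, which stays small whenever either score is small; whether this is exactly the combination used in the work is an assumption of this sketch:

```python
from math import sqrt

def label_and_model_uncertainty(s_lu, s_mu):
    """Combined LMU score: small if either the label-based or the model-based
    uncertainty score is small, so such examples are not selected for relabeling."""
    return sqrt(s_lu * s_mu)

print(round(label_and_model_uncertainty(0.34, 0.90), 3))   # 0.553
print(round(label_and_model_uncertainty(0.04, 0.90), 3))   # 0.19
```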

35 Comparison. [Chart: label quality for GRR, Label Uncertainty, Model Uncertainty, and Label & Model Uncertainty.] Model Uncertainty alone also improves quality.

36 Comparison: Model Quality (I). [Chart: model quality, highlighting Label & Model Uncertainty.] Across 12 domains, LMU is always better than GRR. LMU is statistically significantly better than LU and MU.

37 Comparison: Model Quality (II). Across 12 domains, LMU is always better than GRR. LMU is statistically significantly better than LU and MU.

38 Summary of Results. Micro-task outsourcing (e.g., MTurk, the ESP game) has changed the landscape for data formulation. Repeated labeling can improve data quality and model quality (but not always). Repeated labeling can be preferable to single labeling even when labels aren't particularly cheap. When labels are relatively cheap, repeated labeling can do much better. Round-robin repeated labeling does well; selective repeated labeling performs better.

39 Thanks! Q & A?

