Presentation transcript: "Text Learning," Tom M. Mitchell, Aladdin Workshop, Carnegie Mellon University, January 2003

1 Text Learning. Tom M. Mitchell, Aladdin Workshop, Carnegie Mellon University, January 2003

2 1. CoTraining: learning from labeled and unlabeled data

3 Redundantly Sufficient Features Professor Faloutsos my advisor

4 Redundantly Sufficient Features Professor Faloutsos my advisor

5 Redundantly Sufficient Features

6 Professor Faloutsos my advisor

7 CoTraining Setting. If x1 and x2 are conditionally independent given y, and f is PAC learnable from noisy labeled data, then f is PAC learnable from a weak initial classifier plus unlabeled data.
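Written out, the two conditions behind this setting (a sketch in standard co-training notation; the per-view hypotheses g1 and g2 are the same g1, g2 that appear on slide 20):

\[
x = (x_1, x_2),\quad y = f(x); \qquad
\exists\, g_1, g_2:\; g_1(x_1) = g_2(x_2) = f(x)\ \text{for all}\ x; \qquad
P(x_1, x_2 \mid y) = P(x_1 \mid y)\,P(x_2 \mid y)
\]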

8 Co-Training Rote Learner [figure: the "my advisor" example drawn as a bipartite graph linking page nodes to hyperlink nodes, with a few nodes labeled + or -]

9 Co-Training Rote Learner [figure: labels propagated to additional page and hyperlink nodes]

10 Co-Training Rote Learner [figure: further propagation of + and - labels across the graph]

11 Co-Training Rote Learner [figure: further propagation of + and - labels across the graph]

12 Co-Training Rote Learner [figure: most connected nodes now carry + or - labels]
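One way to make the rote learner in these figures concrete is the short Python sketch below. It is not code from the talk: the labeled/unlabeled data structures, the single feature value per view, and the stopping rule are illustrative assumptions; the idea, that each view memorizes feature values seen with a label and the two views bootstrap each other over unlabeled data, follows the slides.

def rote_cotrain(labeled, unlabeled, max_rounds=10):
    """Rote co-training over two views (e.g. page words vs. hyperlink words).

    labeled   : list of ((x1, x2), y) pairs, with y in {+1, -1}
    unlabeled : list of (x1, x2) pairs
    Each view memorizes which of its feature values were seen with which label.
    """
    table1, table2 = {}, {}                    # view-specific rote tables
    for (x1, x2), y in labeled:
        table1[x1] = y
        table2[x2] = y

    pool = list(unlabeled)
    for _ in range(max_rounds):
        newly_labeled, remaining = [], []
        for x1, x2 in pool:
            if x1 in table1:                   # view 1 recognizes the example
                newly_labeled.append(((x1, x2), table1[x1]))
            elif x2 in table2:                 # otherwise view 2 may recognize it
                newly_labeled.append(((x1, x2), table2[x2]))
            else:
                remaining.append((x1, x2))
        if not newly_labeled:                  # nothing propagated this round
            break
        for (x1, x2), y in newly_labeled:      # each view teaches the other
            table1.setdefault(x1, y)
            table2.setdefault(x2, y)
        pool = remaining
    return table1, table2

For the "my advisor" example, view 1 might be the words on the page and view 2 the anchor text of hyperlinks pointing to it; a value labeled in either view lets the other view pick up new (value, label) pairs on the next round, which is the propagation the figures animate.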

13 What if CoTraining Assumption Not Perfectly Satisfied? [figure: +/- labeled examples illustrating an imperfect co-training assumption]

14 [figure, continued]

15 What if CoTraining Assumption Not Perfectly Satisfied? Idea: want classifiers that produce a maximally consistent labeling of the data. If learning is an optimization problem, what function should we optimize?

16 What Objective Function? Error on labeled examples

17 What Objective Function? Error on labeled examples Disagreement over unlabeled

18 What Objective Function? Error on labeled examples Disagreement over unlabeled Misfit to estimated class priors
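One plausible way to write the objective these three slides build up (a sketch only: slide 20 names the terms E1 through E4, but the transcript does not give their functional forms, so the squared-error forms below are illustrative assumptions; L is the labeled set, U the unlabeled set, and \hat{p} the estimated class prior):

\begin{align*}
E   &= E_1 + E_2 + E_3 + E_4\\
E_1 &= \sum_{(x,y)\in L}\Big[\big(g_1(x_1)-y\big)^2 + \big(g_2(x_2)-y\big)^2\Big]
      && \text{(error on labeled examples)}\\
E_2 &= \sum_{x\in U}\big(g_1(x_1)-g_2(x_2)\big)^2
      && \text{(disagreement over unlabeled)}\\
E_3 &= \Big(\hat{p} - \tfrac{1}{|U|}\sum_{x\in U} g_1(x_1)\Big)^2,\qquad
E_4 = \Big(\hat{p} - \tfrac{1}{|U|}\sum_{x\in U} g_2(x_2)\Big)^2
      && \text{(misfit to estimated class priors)}
\end{align*}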

19 What Function Approximators?

20 Same functional form as Naïve Bayes, Max Entropy. Use gradient descent to simultaneously learn g1 and g2, directly minimizing E = E1 + E2 + E3 + E4. No word-independence assumption; uses both labeled and unlabeled data.
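A minimal numpy sketch of what this slide describes: two logistic classifiers g1 and g2 (the "same functional form as Naïve Bayes / max ent"), one per view, trained jointly by gradient descent on E = E1 + E2 + E3 + E4. The particular loss forms (squared error, as in the sketch after slide 18), the learning rate, and the data shapes are illustrative assumptions; only the overall scheme comes from the slide.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_cotrain(X1_l, X2_l, y, X1_u, X2_u, prior,
                     lr=0.05, epochs=500, w_dis=1.0, w_prior=1.0):
    """Jointly fit two logistic classifiers g1, g2 (one per view) by gradient
    descent on E = E1 + E2 + E3 + E4 (illustrative forms, not from the talk).

    X1_l, X2_l : labeled examples in view 1 / view 2 (n_l x d1, n_l x d2)
    y          : labels in {0, 1}, length n_l
    X1_u, X2_u : unlabeled examples in the two views (n_u x d1, n_u x d2)
    prior      : estimated P(y = 1), used by the class-prior terms E3, E4
    """
    rng = np.random.default_rng(0)
    w1 = rng.normal(scale=0.01, size=X1_l.shape[1])
    w2 = rng.normal(scale=0.01, size=X2_l.shape[1])
    n_u = X1_u.shape[0]

    for _ in range(epochs):
        p1_l, p2_l = sigmoid(X1_l @ w1), sigmoid(X2_l @ w2)   # labeled predictions
        p1_u, p2_u = sigmoid(X1_u @ w1), sigmoid(X2_u @ w2)   # unlabeled predictions

        # E1: squared error on the labeled examples, one term per view
        g1 = X1_l.T @ (2 * (p1_l - y) * p1_l * (1 - p1_l))
        g2 = X2_l.T @ (2 * (p2_l - y) * p2_l * (1 - p2_l))

        # E2: disagreement between the two views on the unlabeled examples
        d = p1_u - p2_u
        g1 += w_dis * (X1_u.T @ (2 * d * p1_u * (1 - p1_u)))
        g2 += w_dis * (X2_u.T @ (-2 * d * p2_u * (1 - p2_u)))

        # E3, E4: misfit between each view's mean prediction and the class prior
        g1 += w_prior * (-2 * (prior - p1_u.mean()) / n_u) * (X1_u.T @ (p1_u * (1 - p1_u)))
        g2 += w_prior * (-2 * (prior - p2_u.mean()) / n_u) * (X2_u.T @ (p2_u * (1 - p2_u)))

        w1 -= lr * g1
        w2 -= lr * g2
    return w1, w2

The weights w_dis and w_prior reflect slide 24's note that results are sensitive to the weights on error terms E3 and E4; setting w_prior to zero corresponds to the "without fitting class priors" row there.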

21 Gradient CoTraining

22 Classifying Jobs for FlipDog. X1: job title; X2: job description.

23 Gradient CoTraining. Classifying FlipDog job descriptions: SysAdmin vs. WebProgrammer. Final accuracy: labeled data alone 86%; CoTraining 96%.

24 Gradient CoTraining. Classifying upper-case sequences as person names (final accuracy):

                                                     25 labeled,       2300 labeled,
                                                     5000 unlabeled    5000 unlabeled
    Using labeled data only                              .73
    Cotraining                                           .87               .89*
    Cotraining without fitting class priors (E4)         .76*              .85*

    (* sensitive to weights of error terms E3 and E4)

25 CoTraining Summary. Key is getting the right objective function: class priors are an important term; can min-cut algorithms accommodate this? And minimizing it: gradient descent has local-minima problems; is graph partitioning possible?

26 The Problem/Opportunity. We must train the classifier to be website-independent, but many sites exhibit website-specific regularities. Question: how can a program learn website-specific regularities for millions of sites, without humans labeling the data?

27 Learn Local Regularities for Page Classification

28 Learn Local Regularities for Page Classification. 1. Label site using global classifier.

29 Learn Local Regularities for Page Classification. 1. Label site using global classifier (continuing-education page).

30 Learn Local Regularities for Page Classification 1. Label site using global classifier 2. Learn local classifiers

31 Learn Local Regularities for Page Classification. 1. Label site using global classifier. 2. Learn local classifiers, e.g.:
    CECourse(x) :- under(x, http://….CEd.html),
                   linkto(x, http://…music.html),
                   1 < inDegree(x) < 4,
                   globalConfidence(x) > 0.3
    [figure: site graph including the pages CEd.html and Music.html]
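A hedged Python rendering of this kind of local rule; the dictionary-based page representation, the key names, and the function name are illustrative stand-ins for the talk's site-graph predicates, and the thresholds simply mirror the slide.

def is_ce_course(page, pages, global_confidence):
    """Sketch of the slide's CECourse(x) rule for one site (illustrative only).

    pages             : dict page -> {"ancestors": set, "links": set, "in_degree": int}
    global_confidence : dict page -> confidence assigned by the global classifier
    """
    info = pages[page]
    return ("CEd.html" in info["ancestors"]        # under(x, .../CEd.html)
            and "music.html" in info["links"]      # linkto(x, .../music.html)
            and 1 < info["in_degree"] < 4          # 1 < inDegree(x) < 4
            and global_confidence[page] > 0.3)     # globalConfidence(x) > 0.3

The point of the restricted hypothesis language is visible here: each local rule is a short conjunction of site-layout tests, cheap to learn from the handful of pages the global classifier labels on one site.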

32 Learn Local Regularities for Page Classification. 1. Label site using global classifier. 2. Learn local classifiers. 3. Apply local classifiers to modify global labels. [figure: site graph including CEd.html and Music.html]

33 Learn Local Regularities for Page Classification. 1. Label site using global classifier. 2. Learn local classifier. 3. Apply local classifier to modify global labels. [figure: site graph including CEd.html and Music.html]

34 Results of Local Learning: Continuing-Education Course Pages. Learning global classifier only: precision .81, recall .80. Learning global classifier plus site-specific classifiers for 20 local sites: precision .82, recall .90.

35 Learning Site-Specific Regularities: Example 2 Extracting “Course-Title” from web pages

36 [image-only slide]

37 Local/Global Learning Algorithm. Train a global course-title extractor (word based). For each new university site: apply the global title extractor; then, for each page containing extracted titles, learn page-specific rules for extracting titles based on the page's layout structure, and apply the learned rules to refine the initial labeling.
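A high-level Python sketch of this loop. The function parameters global_extract, learn_layout_rules, and apply_rules are hypothetical placeholders for the talk's components; only the control flow comes from the slide.

def local_global_extract(site_pages, global_extract, learn_layout_rules, apply_rules):
    """Sketch of the local/global course-title extraction loop (names illustrative).

    site_pages         : list of pages from one university site
    global_extract     : page -> set of titles found by the global, word-based extractor
    learn_layout_rules : (page, titles) -> page-specific layout-based extraction rules
    apply_rules        : (rules, page) -> refined set of titles for that page
    """
    # 1. Apply the global, word-based title extractor to every page on the site.
    initial = {page: global_extract(page) for page in site_pages}

    # 2. For each page where the global extractor found titles, learn
    #    page-specific extraction rules from the page's layout structure,
    #    then 3. re-apply those rules to refine the initial labeling.
    refined = {}
    for page, titles in initial.items():
        if titles:
            rules = learn_layout_rules(page, titles)
            refined[page] = apply_rules(rules, page)
        else:
            refined[page] = titles
    return refined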

38 [image-only slide]

39 [image-only slide]

40 [image-only slide]

41 [image-only slide; "X X" markings]

42 Local/Global Learning Summary. Approach: learn a global extractor/classifier using content features; learn a local extractor/classifier using layout features; design a restricted hypothesis language for the local model, to accommodate sparse training data. Algorithm to process a new site: apply the global extractor/classifier to label the site; train the local extractor/classifier on this data; apply the local extractor/classifier to refine the labels.

43 Other Local Learning Approaches. Rule-covering algorithms treat each rule as a local model, but require supervised labeled data for each locality. Shrinkage-based techniques, e.g., for learning hospital-independent and hospital-specific models of medical outcomes, again require labeled data for each hospital. This approach is different: no labeled data for new sites.
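For reference, a common form of the shrinkage estimate alluded to here (a standard formulation, not taken from the talk): the hospital-specific parameter is pulled toward the global one, with a mixing weight that grows with the local sample size n_h (m is a smoothing constant).

\[
\hat{\theta}_h \;=\; \lambda_h\, \hat{\theta}_h^{\,\text{local}} \;+\; (1-\lambda_h)\, \hat{\theta}^{\,\text{global}},
\qquad \lambda_h = \frac{n_h}{n_h + m}
\]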

44 When/Why Does This Work? Local and global models use independent, redundantly sufficient features, and local models are learned within a low-dimensional hypothesis language. Related to co-training!

45 Other Uses? + Global and website-specific information extractors + Global and program-specific TV segment classifiers? + Global and environment-specific robot perception? –Global and speaker-specific speech recognition? –Global and hospital-specific medical diagnosis?

46 Summary. Cotraining: classifier learning as a minimization problem; is a graph-partitioning algorithm possible? Learning site-specific structure: important structure involves long-distance relationships; strong local graph-structure regularities are highly useful.

