Wrapper Learning: Cohen et al. 2002; Kushmerick 2000; Kushmerick & Freitag 2000 William Cohen 1/26/03
Goal: learn from a human teacher how to extract certain database records from a particular web site.
Why learning from few examples is important: at training time, only four examples are available, but one would like to generalize to future pages as well… Must generalize across time as well as across a single site.
Now, some details…
Kushmerick’s WIEN system. The earliest wrapper-learning system (published at IJCAI ’97). Special things about WIEN:
– Treats the document as a string of characters
– Learns to extract a relation directly, rather than extracting fields and then associating them together in some way
– Each training example is a completely labeled page
WIEN system: a sample wrapper
Left delimiters L1=“ ”, L2=“ ”; right delimiters R1=“ ”, R2=“ ”
WIEN system: a sample wrapper. Learning means finding L1,…,Lk and R1,…,Rk such that:
– Li must precede every instance of field i
– Ri must follow every instance of field i
– Li, Ri can’t contain data items
– There is only a limited number of possible candidates for Li, Ri
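To make the LR idea concrete, here is a minimal sketch (not Kushmerick’s code) of how a learned LR wrapper would be executed; the delimiter strings and the example page are hypothetical.

```python
# A minimal sketch of executing a learned LR wrapper; the delimiters
# and the example page below are hypothetical, not from WIEN.
def extract_lr(page, left, right):
    """Extract k-field tuples using per-field left/right delimiter strings.

    left[i] must precede and right[i] must follow every instance of field i;
    the page is treated as a flat string of characters, as in WIEN.
    """
    tuples, pos, k = [], 0, len(left)
    while True:
        record = []
        for i in range(k):
            start = page.find(left[i], pos)
            if start < 0:
                return tuples            # no more records
            start += len(left[i])
            end = page.find(right[i], start)
            if end < 0:
                return tuples
            record.append(page[start:end])
            pos = end + len(right[i])
        tuples.append(tuple(record))

page = "<b>Congo</b> <i>242</i> <b>Egypt</b> <i>20</i>"
print(extract_lr(page, left=["<b>", "<i>"], right=["</b>", "</i>"]))
# -> [('Congo', '242'), ('Egypt', '20')]
```

Learning an LR wrapper then amounts to searching the limited space of candidate delimiter strings for L1,…,Lk and R1,…,Rk that reproduce the labeled tuples on the training page.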
WIEN system: a more complex class of wrappers (HLRT). Extension: apply the Li, Ri delimiters only after a “head” (after the first occurrence of H) and before a “tail” (the first occurrence of T). H = “ ”, T = “ ”
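A sketch of the HLRT extension, reusing extract_lr from the sketch above; the “first occurrence” handling of H and T is an assumption about the simplest variant.

```python
# HLRT sketch: same LR extraction, but restricted to the span after the
# first occurrence of the head delimiter H and before the tail delimiter T,
# so header/footer text cannot be mistaken for data.
def extract_hlrt(page, head, tail, left, right):
    h = page.find(head)
    t = page.find(tail, h + len(head)) if h >= 0 else -1
    if h < 0 or t < 0:
        return []
    body = page[h + len(head):t]
    return extract_lr(body, left, right)   # reuses the LR sketch above
```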
Kushmerick: overview of various extensions to LR
Kushmerick and Freitag: Boosted wrapper induction
Review of boosting. Generalized version of AdaBoost (Schapire & Singer, 1999): allows “real-valued” predictions for each “base hypothesis”, including a value of zero (abstaining on the example).
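A minimal sketch of one round of confidence-rated boosting in this generalized setting; the helper names are illustrative. Note that a weak hypothesis outputting 0 on an example leaves that example’s weight unchanged before normalization.

```python
import math

# Sketch of one boosting round with real-valued ("confidence-rated") weak
# hypotheses: h(x) may be any real number, and h(x) = 0 means "abstain".
def boost_round(D, examples, h):
    """D: current example weights; examples: list of (x, y) with y in {-1, +1}."""
    new_D = [d * math.exp(-y * h(x)) for d, (x, y) in zip(D, examples)]
    Z = sum(new_D)                       # normalizer
    return [d / Z for d in new_D]

def final_score(weak_hypotheses, x):
    # the strong classifier predicts the sign of the summed confidences
    return sum(h(x) for h in weak_hypotheses)
```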
Learning methods: boosting rules. Weak learner: to find weak hypothesis h_t:
1. Split the data into growing and pruning sets
2. Let R_t be an empty conjunction
3. Greedily add conditions to R_t, guided by the growing set
4. Greedily remove conditions from R_t, guided by the pruning set
5. Convert R_t to a weak hypothesis that predicts c_t = (1/2)·ln(W+ / W−) on examples covered by R_t and 0 (abstain) on all others, where W+ (W−) is the smoothed total weight of the positive (negative) examples R_t covers (the hats denote the smoothing); the constraint W+ > W− must hold
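A sketch of step 5, converting a grown-and-pruned rule into a weak hypothesis. The smoothing constant is an assumption standing in for the “hatted” weights on the slide; SLIPPER’s exact smoothing term is not reproduced here.

```python
import math

# Sketch: turn the rule R_t into a SLIPPER-style weak hypothesis.
# W_plus / W_minus are the total weights of the positive / negative
# examples covered by the rule; `smooth` is an assumed smoothing constant.
def rule_to_weak_hypothesis(rule, W_plus, W_minus, smooth=0.5):
    assert W_plus > W_minus              # the constraint from the slide
    c = 0.5 * math.log((W_plus + smooth) / (W_minus + smooth))
    def h(x):
        # predict confidence c where the rule fires, abstain (0) elsewhere
        return c if rule(x) else 0.0
    return h
```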
Learning methods: boosting rules SLIPPER also produces fairly compact rule sets.
Learning methods: BWI. Boosted wrapper induction (BWI) learns to extract substrings from a document.
– Learns three concepts: firstToken(x), lastToken(x), and substringLength(k)
– Conditions are tests on the tokens before/after x, e.g., tok_{i-2}=‘from’, isNumber(tok_{i+1})
– Uses the SLIPPER weak learner, with no pruning
– Greedy search extends the “window size” by at most L in each iteration, uses lookahead L, and places no fixed limit on window size
Good results in (Kushmerick and Freitag, 2000).
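A rough sketch of how a BWI-style boundary detector might be represented and applied; the detector below mirrors the example conditions above (tok_{i-2}=‘from’, isNumber(tok_{i+1})), and all names and the toy sentence are illustrative rather than taken from the BWI implementation.

```python
# A boundary detector as a conjunction of token tests at fixed offsets
# around a candidate boundary position i (illustrative only).
def is_number(tok):
    return tok.isdigit()

# a detector is a list of (offset, test) pairs relative to position i
example_detector = [(-2, lambda t: t == "from"), (+1, is_number)]

def detector_fires(detector, tokens, i):
    for offset, test in detector:
        j = i + offset
        if j < 0 or j >= len(tokens) or not test(tokens[j]):
            return False
    return True

tokens = "depart from Pittsburgh at 9 am".split()
print([i for i in range(len(tokens)) if detector_fires(example_detector, tokens, i)])
# -> [3]  (fires two tokens after 'from' and one token before the number '9')
```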
BWI algorithm
BWI example rules
Cohen et al.
Improving A Page Classifier with Anchor Extraction and Link Analysis William W. Cohen NIPS 2002
Previous work in page classification using links: exploit hyperlinks (Slattery & Mitchell, 2000; Cohn & Hofmann, 2001; Joachims, 2001): documents pointed to by the same “hub” should have the same class. What’s new in this paper:
– Use the structure of hub pages (as well as the structure of the site graph) to find better “hubs”
– Adapt an existing “wrapper learning” system to find that structure, on the task of classifying “executive bio pages”
Intuition: links from this “hub page” are informative… especially certain of its links.
Task: train a page classifier, then use it to classify pages on a new, previously-unseen web site as executiveBio or other. Question: can index pages for executive biographies be used to improve classification? Idea: use the wrapper-learner to learn to extract links to execBio pages, smoothing the “noisy” data produced by the initial page classifier.
Background: “co-training” (Blum & Mitchell, ’98). Suppose examples are of the form (x1, x2, y), where x1 and x2 are independent (given y), each xi is sufficient for classification, and unlabeled examples are cheap.
– (E.g., x1 = bag of words, x2 = bag of links.)
Co-training algorithm:
1. Use the x1’s (on labeled data D) to train f1(x1) = y
2. Use f1 to label additional unlabeled examples U
3. Use the x2’s (on the labeled part of U + D) to train f2(x2) = y
4. Repeat…
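A minimal sketch of the co-training loop above; `train` is an assumed helper that fits a classifier on (feature, label) pairs and returns a callable classifier, not part of any particular library.

```python
# Sketch of the co-training loop (illustrative; confidence thresholds and
# view-swapping from the full algorithm are omitted).
def cotrain(labeled, unlabeled, train, rounds=4):
    """labeled: list of (x1, x2, y); unlabeled: list of (x1, x2)."""
    f1 = f2 = None
    for _ in range(rounds):
        # 1. use the x1 view of the labeled data D to train f1
        f1 = train([(x1, y) for x1, _x2, y in labeled])
        # 2. use f1 to label additional unlabeled examples U
        newly_labeled = [(x1, x2, f1(x1)) for x1, x2 in unlabeled]
        # 3. use the x2 view of the labeled part of U + D to train f2
        f2 = train([(x2, y) for _x1, x2, y in labeled + newly_labeled])
        # 4. repeat (full co-training adds only confidently-labeled examples
        #    and alternates the roles of the two views)
    return f1, f2
```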
Simple 1-step co-training for web pages: f1 is a bag-of-words page classifier, and S is a web site containing unlabeled pages.
Feature construction. Represent a page x in S as the bag of pages that link to x (its “bag of hubs”).
Learning. Learn f2 from the bag-of-hubs examples, labeled with f1.
Labeling. Use f2(x) to label pages from S.
Idea: use one round of co-training to bootstrap the bag-of-words classifier to one that uses site-specific features x2/f2.
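A sketch of this simple 1-step procedure on a single site; `pages_in`, `links`, and `train` are assumed helpers (and classifiers are treated as callables), not part of the paper’s system.

```python
# Sketch of simple 1-step co-training on one site S (illustrative only).
def one_step_cotrain(site, f1, train):
    pages = pages_in(site)
    # Feature construction: represent page x by its "bag of hubs", i.e. the
    # set of pages on the site that link to x.
    bag_of_hubs = {x: frozenset(h for h in pages if x in links(h)) for x in pages}
    # Learning: learn f2 from the bag-of-hubs examples, labeled with f1
    f2 = train([(bag_of_hubs[x], f1(x)) for x in pages])
    # Labeling: use f2 to (re)label the pages of the site
    return {x: f2(bag_of_hubs[x]) for x in pages}
```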
Improved 1-step co-training for web pages
Feature construction.
- Label an anchor a in S as positive iff it points to a positive page x (according to f1). Let D = {(x’, a): a is a positive anchor on page x’}.
- Generate many small training sets Di from D by sliding small windows over D.
- Let P be the set of all “structures” found by any builder from any subset Di.
- Say that p links to x if p extracts an anchor that points to x. Represent a page x as the bag of structures in P that link to x.
Learning and Labeling. As before.
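A sketch of the improved feature construction; `pages_in`, `anchors_on`, `target_of`, the builder interface (a builder maps a small anchor set Di to candidate structures, each with an `.extract(site)` method returning anchors), and the window size are all assumptions used only for illustration.

```python
# Sketch of structure-based feature construction (illustrative only).
def structure_features(site, f1, builders, window=5):
    pages = pages_in(site)
    # positive anchors: anchors pointing to pages that f1 labels positive
    D = [(p, a) for p in pages for a in anchors_on(p)
         if f1(target_of(a)) == "positive"]
    # generate many small training sets D_i by sliding small windows over D
    D_sets = [D[i:i + window] for i in range(max(len(D) - window + 1, 1))]
    # P: all "structures" found by any builder from any subset D_i
    P = {s for D_i in D_sets for b in builders for s in b(D_i)}
    # a structure s "links to" x if it extracts an anchor pointing to x;
    # represent page x as the bag of structures in P that link to x
    return {x: frozenset(s for s in P
                         if any(target_of(a) == x for a in s.extract(site)))
            for x in pages}
```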
Examples: builders produce extractors for the different list structures on the hub page (List1, List2, List3).
BOH representation (input to the learner):
{List1, List3, …}, PR
{List1, List2, List3, …}, PR
{List2, List3, …}, Other
{List2, List3, …}, PR
…
Experimental results: co-training sometimes hurts, or gives no improvement.
Experimental results
Summary
- “Builders” (from a wrapper learning system) let one discover and use the structure of web sites and index pages to smooth page classification results.
- Discovering good “hub structures” makes it possible to use 1-step co-training on small ( example) unlabeled datasets.
– Average error rate was reduced from 8.4% to 3.6%.
– The difference is statistically significant with a 2-tailed paired sign test or t-test.
– EM with probabilistic learners also works; see (Blei et al., UAI 2002).