Populating Ontologies with Data from Lists in Family History Books
Thomas L. Packer, David W. Embley
2013.03, RT.FHTW, BYU.CS
What’s the challenge?
What “rich data” is found in lists?
1. Lexical vs. non-lexical: Name(“Elias”) vs. Person(p1)
2. Arbitrary relationship arity: Husband-married-Wife-in-Year(p1, p2, “1702”)
3. Arbitrary ontology path lengths
4. Functional and optional constraints: Person-Birth() vs. Person-Marriage()
5. Generalization-specialization class hierarchies (with inheritance): Child(p3) implies Person(p3), Parent(p2) implies Person(p2)
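To make these distinctions concrete, here is a small illustration (not the authors' code) of how such facts can be represented; the (predicate, arguments) tuple encoding and the is_a map are assumptions made for the example.

```python
# Small illustration of the "rich data" features above: facts as
# (predicate, arguments) tuples over lexical strings and non-lexical ids.

# 1. Lexical vs. non-lexical: a literal value vs. an object identifier
name_fact   = ("Name", ("Elias",))     # lexical: the argument is the string itself
person_fact = ("Person", ("p1",))      # non-lexical: p1 identifies an object

# 2. Arbitrary relationship arity: one fact relating three things
marriage = ("Husband-married-Wife-in-Year", ("p1", "p2", "1702"))

# 5. Generalization-specialization with inheritance: every Child is a Person
is_a = {"Child": "Person", "Parent": "Person"}

def with_inherited(fact):
    """Child(p3) also yields Person(p3), via the class hierarchy."""
    pred, args = fact
    return [fact] + ([(is_a[pred], args)] if pred in is_a else [])

print(with_inherited(("Child", ("p3",))))
# [('Child', ('p3',)), ('Person', ('p3',))]
```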
What’s the value?
What’s been done already?

Wrapper Induction    General Lists  Noise Tolerant  Rich Data  Effort-Scalable
Blanco, 2010         0.5            0.0             0.5        1.0
Dalvi, 2010          0.5            0.0             0.0        0.8
Gupta, 2009          1.0            0.0             0.5        0.8
Carlson, 2008        0.0            0.0             0.0        1.0
Heidorn, 2008        0.8            0.5             0.5        0.2
Chang, 2003          0.5            0.0             0.0        0.5
Crescenzi, 2001      0.0            0.0             0.0        1.0
Lerman, 2001         0.8            0.0             0.0        0.8
Chidlovskii, 2000    0.8            0.0             0.0        0.8
Kushmerick, 2000     0.0            0.0             0.0        1.0
Lerman, 2000         0.8            0.0             0.0        0.8
Thomas, 1999         0.0            0.0             0.0        0.5
Adelberg, 1998       1.0            0.0             0.5        0.2
Kushmerick, 1997     0.5            0.0             0.5        1.0

1.0 = well-covered; 0.0 = not covered
What’s our contribution? ListReader Mappings
Formal correspondence among:
– Populated ontologies (predicates)
– Inline annotated text (labels)
– List wrappers (grammars)
– Data entry (forms)
What’s our contribution? ListReader Wrapper Induction
Low-cost wrapper induction:
– Semi-supervised + active learning
Decreasing-cost wrapper induction:
– Self-supervised + active learning
Cheap Training Data
Automatic Mapping
Child(p1) implies Person(p1)
Child-ChildNumber(p1, “1”)
Child-Name(p1, n1)
…
Semi-supervised Induction

Input list (with OCR noise):
1. Andy b. 1816
2. Becky Beth h, i818
3. Charles Conrad

1. Initialize: build a literal wrapper from the labeled first record (C = child number, FN = first name, BD = birth date):
C FN BD: \n(1)\. (Andy) b\. (1816)\n
2. Generalize into OCR-tolerant character classes:
C FN BD: \n([\dlio])[.,] (\w{4}) [bh][.,] ([\dlio]{4})\n
3. Alignment-Search over wrapper edits:
Deletion: \n([\dlio])\[.,] (\w{4}) [bh][.,] ([\dlio]{4})\n (no match)
Insertion of an unknown field: \n([\dlio])[.,] (\w{4,5}) (\S{1,10}) [bh][.,] ([\dlio]{4})\n (one match!)
Expansion, and many more …
4. Evaluate each candidate (edit similarity * match probability)
5. Active Learning: the user labels the unknown field (MN = middle name):
C FN MN BD: \n([\dlio])[.,] (\w{4,5}) (\w{4}) [bh][.,] ([\dlio]{4})\n
6. Extract the remaining records
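To make steps 1 and 2 concrete, here is a minimal runnable sketch of initializing a literal regex from the labeled record and generalizing it; the function names and the simplified test lines are illustrative assumptions, not ListReader's API.

```python
import re

# Minimal sketch of steps 1-2: build a literal regex from one labeled
# record, then generalize field literals and OCR-fragile delimiters.

def generalize_token(token):
    r"""OCR-tolerant class for a field value: digits become [\dlio]
    (OCR confuses 1/l/i and 0/o); words become \w of the same length."""
    if token.isdigit():
        return r"[\dlio]{%d}" % len(token)
    return r"\w{%d}" % len(token)

def induce_wrapper(number, name, year):
    # Step 1 (Initialize): literal regex for the record "1. Andy b. 1816"
    literal = r"(%s)\. (%s) b\. (%s)" % (number, name, year)
    # Step 2 (Generalize): loosen fields and the b/h and ./, OCR confusions
    general = r"(%s)[.,] (%s) [bh][.,] (%s)" % tuple(
        generalize_token(t) for t in (number, name, year))
    return literal, general

literal, general = induce_wrapper("1", "Andy", "1816")
for line in ["1. Andy b. 1816", "2. Becky Beth h, i818"]:
    print(bool(re.fullmatch(general, line)), line)
# The second record fails to match (extra middle-name field, 5-letter
# name), which is exactly what triggers step 3, Alignment-Search.
```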
Alignment-Search
Start State: A B C E F G; Goal State: A B C’ E F G
Candidate edits branch the search tree: Substitution @ 3 (A B C’ E F G), Insertion @ 4 (A B C’ D E F), Deletion @ 6 (A B C’ E F), Insertion @ 7 (A B C E F G H), and many more …
Branching Factor = 6 * 4 = 24
Tree Depth = 3
This search space size = ~24^3 = 13,824
Other search space sizes = ~(12 * 4)^7 = 48^7 = 587,068,342,272
A* Alignment-Search
Start State: A B C E F G; Goal State: A B C’ E F G
Branches are ordered by f(s) = g(s) + h(s), e.g. f = 4 = 1 + 3 vs. f = 3 = 1 + 2 for sibling branches such as Insertion @ 7 (A B C E F G H) and Substitution @ 3 (A B C’ E F G)
A* never traverses branches like Insertion @ 4 (A B C’ D E F) or Deletion @ 6 (A B C’ E F)
Branching Factor = 2 * 4 = 8
Tree Depth = 3
This search space size = ~10 (hard and soft constraints), instead of ~8^3 = 512 (hard constraint) or 13,824 (no constraint)
Other search space sizes = ~1,000 instead of 587,068,342,272
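Below is a toy A* over single-character edit operations showing the shape of the idea of pruning alignment-search with f(s) = g(s) + h(s). ListReader's real states are wrapper regexes and its heuristic is alignment-based, so the string states, the small alphabet, and the length-difference heuristic here are all simplifying assumptions.

```python
import heapq

# Toy A* over single-character edits (insert/delete/substitute), pruning
# with f(s) = g(s) + h(s). States are plain strings; the prime mark in
# C' is modeled as its own character, so the goal is one insertion away.

def a_star_edits(start, goal, alphabet="ABCDEFGH'"):
    def h(s):
        # Admissible heuristic: each edit changes length by at most one,
        # so at least |len(s) - len(goal)| edits remain.
        return abs(len(s) - len(goal))

    frontier = [(h(start), 0, start, [])]   # entries: (f, g, state, ops)
    best_g = {start: 0}
    while frontier:
        f, g, state, ops = heapq.heappop(frontier)
        if state == goal:
            return ops
        for i in range(len(state) + 1):
            moves = [(state[:i] + c + state[i:], "ins %s@%d" % (c, i))
                     for c in alphabet]                      # insertions
            if i < len(state):
                moves.append((state[:i] + state[i+1:], "del@%d" % i))
                moves += [(state[:i] + c + state[i+1:], "sub %s@%d" % (c, i))
                          for c in alphabet]                 # substitutions
            for nxt, op in moves:
                if best_g.get(nxt, g + 2) > g + 1:
                    best_g[nxt] = g + 1
                    heapq.heappush(frontier,
                                   (g + 1 + h(nxt), g + 1, nxt, ops + [op]))
    return None

print(a_star_edits("ABCEFG", "ABC'EFG"))   # ["ins '@3"] reaches the goal
```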
Self-supervised Induction
– No additional labeling required, or
– Limited additional labeling via active learning
Why is this approach promising?
[Charts: Semi-supervised Regex Induction vs. CRF; Self-supervised Regex Induction vs. CRF]
30 lists | 137 records | ~10 fields / list
Statistically significant at p < 0.01 using both a paired t-test and McNemar’s test
What next?
– Improve time, space, and accuracy with HMM wrappers
– Expand the class of input lists
Conclusions
– Reduces ontology population to sequence labeling
– Induces a wrapper with a single click per field
– Noise tolerant and accurate
Typical Ontology Population
Why not Apply Web Wrapper Induction to OCR Text?
Noise tolerance:
– Allowing character variations increases recall but decreases precision
They populate only the simplest ontologies
Problems with wrapper language:
– Left-right context (WIEN, Kushmerick 2000)
– Disjunction rules and FSA models to traverse landmarks along a tree structure (Stalker, SoftMealy)
– XPath (Dalvi 2009, etc.)
– CRF (Gupta 2009)
Why not use left-right context?
– Field boundaries
– Field position and character content
– Record boundaries
[Figure: example OCRed list]
Why not use XPaths?
– OCR text has no explicit XML DOM tree structure
– XPaths require HTML tags to perfectly mark field text
Why not Use (Gupta’s) CRFs?
– Their HTML lists and records are explicitly marked
– Different application: augment tables using tuples from any lists on the web
– At web scale, they can throw away harder-to-process lists
– They rely on more training data than we will
– We will compare our approach to CRFs
Page Grammars
– Conway [1993]: 2-D CFG and chart parser for page layout recognition from document images
– Can assign logical labels to blocks of text
– Manually constructed grammars
– Rely on spatial features
Semi-supervised Regex Induction
List Reading
Specialized for one kind of list:
– Printed tables of contents: Marinai 2010, Dejean 2009, Lin 2006
– Printed bibliographies: Besagni 2004, Besagni 2003, Belaid 2001
– HTML lists: Elmeleegy 2009, Gupta 2009, Tao 2009, Embley 2000, Embley 1999
These systems:
– Use specialized hand-crafted knowledge
– Rely on clean input text containing useful HTML structure or tags
– Perform NER or flat attribute extraction, so limited ontology population
– Omit one or more reading steps
Research Project
Child(child1)
Child-ChildNumber(child1, “1”)
Child-Name(child1, name1)
Name-GivenName(name1, “Sarah”)
Child-BirthDate(child1, date1)
BirthDate-Year(date1, “1797”)
Wrapper Induction for Printed Text
Adelberg 1998:
– Grammar induction for any structured text
– Not robust to OCR errors
– No empirical evaluation
Heidorn 2008:
– Wrapper induction for museum specimen labels
– Not typical lists
Both:
– Supervised, so will not scale well
– Entity-attribute extraction, so limited ontology population
Semi-supervised Wrapper Induction
Child(child1)
Child-ChildNumber(child1, “1”)
Child-Name(child1, name1)
Name-GivenName(name1, “Sarah”)
Child-BirthDate(child1, date1)
BirthDate-Year(date1, “1797”)
Construct Form, Label First Record
1. Sarah, b. 1797.
Wrapper Generalization
[Diagram: wrapper fragments such as Child.BirthDate.Year with their delimiter text (“, b/h”, “, . b \n”) aligned over the list; “??.??” marks not-yet-explained text]
1. Sarah, b. 1797.
2. Amy, h. 1799, d. i800.
3. John Erastus, b. 1836, d. 1876.
Wrapper Generalization
[Diagram: as before, with a new fragment Child.DeathDate.Year and its delimiter text (“, . d \n”) discovered for the death dates]
1. Sarah, b. 1797.
2. Amy, h. 1799, d. i800.
3. John Erastus, b. 1836, d. 1876.
Wrapper Generalization as Beam Search
1. Initialize the wrapper from the first record
2. Apply a predefined set of wrapper adjustments
3. Score the alternate wrappers with:
– “Prior” (how similar each is to known list structures)
– “Likelihood” (how well each matches the next text)
4. Add the best to the wrapper set
5. Repeat until the end of the list
A runnable sketch of this loop follows.
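In the sketch below, the adjustment set and the score function are toy stand-ins for ListReader's adjustment operators and its prior * likelihood model; the simplified records are also assumptions for the example.

```python
import re

# Schematic beam search for wrapper generalization, following the five
# steps above. Adjustments and scoring are toy stand-ins.

def adjustments(pattern):
    """Step 2: a tiny predefined set of wrapper adjustments."""
    return {pattern,
            pattern.replace(r"\.", "[.,]"),       # loosen . vs , (OCR)
            pattern.replace("b", "[bh]"),         # allow b/h confusion (OCR)
            pattern.replace(r"\d", r"[\dlio]")}   # allow digit confusions (OCR)

def score(pattern, line):
    """Step 3: stand-in for prior * likelihood; here, just whether it matches."""
    try:
        return 1.0 if re.fullmatch(pattern, line) else 0.0
    except re.error:
        return 0.0   # an adjustment may occasionally produce a bad pattern

def beam_search(seed, lines, width=4):
    beam = [seed]                                       # step 1: initialize
    for line in lines:                                  # step 5: walk the list
        candidates = {p for w in beam for p in adjustments(w)}
        beam = sorted(candidates, key=lambda p: -score(p, line))[:width]  # step 4
    return beam[0]

seed = r"\d\. \w+ b\. \d{4}"   # literal-ish wrapper from the first record
print(beam_search(seed, ["1. Sarah b. 1797", "2. Amy h, 1799"]))
# -> \d[.,] \w+ [bh][.,] \d{4}, which covers both records
```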
Mapping Sequential Labels to Predicates
Labeled record: 1. Sarah, b. 1797.
Label paths: Child.ChildNumber, Child.Name.GivenName, Child.BirthDate.Year (with their delimiter text, e.g. “, . b \n”)
Resulting predicates:
Child(child1)
Child-ChildNumber(child1, “1”)
Child-Name(child1, name1)
Name-GivenName(name1, “Sarah”)
Child-BirthDate(child1, date1)
BirthDate-Year(date1, “1797”)
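A minimal sketch of this mapping, assuming dotted label paths as above; the instance-naming scheme and the unary predicates emitted for intermediate objects are illustrative choices, not ListReader's exact output.

```python
# Sketch of mapping labeled fields to ontology predicates. Label paths
# are dotted, e.g. "Child.BirthDate.Year"; instance names are illustrative.

def labels_to_predicates(fields):
    """fields: list of (label_path, extracted_text) for one record."""
    counts, ids, preds = {}, {}, []
    for path, text in fields:
        parts = path.split(".")
        for obj in parts[:-1]:                     # non-lexical objects
            if obj not in ids:
                counts[obj] = counts.get(obj, 0) + 1
                ids[obj] = "%s%d" % (obj.lower(), counts[obj])
                preds.append("%s(%s)" % (obj, ids[obj]))
        for parent, child in zip(parts[:-2], parts[1:-1]):
            pred = "%s-%s(%s, %s)" % (parent, child, ids[parent], ids[child])
            if pred not in preds:                  # connect objects once
                preds.append(pred)
        # the last path element is lexical: attach the extracted text
        preds.append('%s-%s(%s, "%s")' % (parts[-2], parts[-1],
                                          ids[parts[-2]], text))
    return preds

record = [("Child.ChildNumber", "1"),
          ("Child.Name.GivenName", "Sarah"),
          ("Child.BirthDate.Year", "1797")]
print("\n".join(labels_to_predicates(record)))
```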
Weakly Supervised Wrapper Induction
1. Apply wrappers and ontologies
2. Spot lists by repeated patterns
3. Find the best ontology fragments for the best-labeled record
4. Generalize the wrapper
– Both above and below
– Active learning without human input
Knowledge from Previously Wrapped Lists
[Diagram: repository of wrapper fragments pairing label paths with delimiter text, e.g. Child.ChildNumber, Child.Name.GivenName, Child.BirthDate.Year, Child.DeathDate.Year, Child.Spouse.Name.GivenName, Child.Spouse.Name.Surname]
List Spotting
1. Sarah, b. 1797.
2. Amy, h. 1799, d. i800.
3. John Erastus, b. 1836.
[Diagram: the repeated Child.ChildNumber and Child.Name.GivenName pattern matched at the start of each line marks these lines as a list]
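A rough sketch of spotting a list by a repeated line-start pattern; the OCR-tolerant record-start regex and the run-detection logic are guesses at the idea, not ListReader's actual detector.

```python
import re

# Rough sketch of list spotting: flag lines that begin with a repeated,
# record-like token (an OCR-tolerant child number such as "1." or "i,")
# and report runs of consecutive flagged lines as candidate lists.

RECORD_START = re.compile(r"[\dlio]{1,2}[.,]\s")

def spot_lists(lines, min_run=2):
    runs, start = [], None
    for i, line in enumerate(lines + [""]):        # sentinel flushes the last run
        if RECORD_START.match(line):
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_run:
                runs.append((start, i))            # lines [start, i) form a list
            start = None
    return runs

page = ["Some narrative text before the list.",
        "1. Sarah, b. 1797.",
        "2. Amy, h. 1799, d. i800.",
        "3. John Erastus, b. 1836.",
        "They had a good year."]
print(spot_lists(page))   # [(1, 4)]
```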
Select Ontology Fragments and Label the Starting Record
1. Sarah, b. 1797.
2. Amy, h. 1799, d. i800.
3. John Erastus, b. 1836.
[Diagram: fragments Child.ChildNumber and Child.BirthDate.Year, with their delimiters, selected from the repository and matched to the first record]
Merge Ontology and Wrapper Fragments
Generalize the Wrapper and Learn New Fields without the User
1. Sarah, b. 1797.
2. Amy, h. 1799, d. i800.
3. John Erastus, b. 1836.
[Diagram: the new field Child.DeathDate.Year learned from the repeated “d.” pattern]
Thesis Statement
It is possible to populate an ontology semi-automatically, with better than state-of-the-art accuracy and cost, by inducing information extraction wrappers to extract the stated facts in the lists of an OCRed document, firstly relying only on a single user-provided field label for each field in each list, and secondly relying on less ongoing user involvement by leveraging the wrappers induced and facts extracted previously from other lists.
Four Hypotheses
1. Is a single labeling of each field sufficient?
2. Is fully automatic induction possible?
3. Does ListReader perform increasingly better?
4. Are induced wrappers better than the best?
Hypothesis 1
– Single user labeling of each field per list
– Evaluate detecting new optional fields
– Evaluate semi-supervised wrapper induction
Hypothesis 2
– No user input required, given imperfect recognizers
– Find the required level of noisy-recognizer precision and recall
Hypothesis 3
– Increasing repository knowledge decreases the cost
– Show the repository can produce recognizers at the required precision and recall levels
– Evaluate the number of user-provided labels over time
Hypothesis 4
– ListReader performs better than a representative state-of-the-art information extraction system
– Compare ListReader with the supervised CRF in Mallet
Evaluation Metrics
– Precision
– Recall
– F-measure
– Accuracy
– Number of user-provided labels
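For reference, the standard definitions of the first three metrics over extraction counts; evaluating at the field level (true/false positives and false negatives per extracted field) is an assumption of this sketch.

```python
# Standard precision / recall / F-measure over extraction counts:
# tp = correct extractions, fp = spurious extractions, fn = missed fields.

def precision(tp, fp):
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if tp + fn else 0.0

def f_measure(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r) if p + r else 0.0

print(f_measure(tp=90, fp=10, fn=20))   # 0.857...
```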
Work and Results Thus Far
– Large, diverse corpus of OCRed documents
– Semi-supervised regex and HMM induction
– Both beat a CRF trained on three times the data
– Designed the label-to-predicate mapping
– Implemented a preliminary mapping
– 85% accuracy of word-level list spotting
Questions & Answers
What Does that Mean?
Populating Ontologies
– A machine-readable and mathematically specified conceptualization of a collection of facts
Semi-automatically Inducing
– Pushing more work to the machine
Information Extraction Wrappers
– Specialized processes exposing data in documents
Lists in OCRed Documents
– Data-rich, with variable format and noisy content
Who Cares?
Populating Ontologies
– Versatile, expressive, structured, digital information is queryable, linkable, and editable
Semi-automatically Inducing
– Lowers the cost of data
Information Extraction Wrappers
– Accurate, by specializing for each document format
Lists in OCRed Documents
– Lots of data useful for family history, marketing, personal finance, etc., but challenging to extract
Reading Steps
1. List spotting
2. Record segmentation
3. Field segmentation
4. Field labeling
5. Nested list recognition
Example OCRed list, with its OCR errors preserved (a field-segmentation sketch follows):
Members of the football team:
Captain: Donald Bakken.................Right Half Back
LeRoy "sonny' Johnson.........,........Lcft Half Back
Orley "Dude" Bakken......,.......,......Quarter Back
Roger Jay Myhrum.........................Full Back
Bill "Snoz" Krohg,...........................Center
They had a good year.
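As a toy illustration of steps 3 and 4 on the roster above, the dotted leaders (including commas OCR'd into the dots) can serve as field delimiters; the field names are assumptions made for the example, not from the talk.

```python
import re

# Toy field segmentation for the roster above: split each record on the
# dotted leader, tolerating commas OCR'd into the dots.

LEADER = re.compile(r"[.,]{3,}")         # a run of 3+ dots/commas

def segment(record):
    name, position = LEADER.split(record, maxsplit=1)
    return {"Name": name.strip(), "Position": position.strip()}

print(segment('LeRoy "sonny\' Johnson.........,........Lcft Half Back'))
# {'Name': 'LeRoy "sonny\' Johnson', 'Position': 'Lcft Half Back'}
```

Note that the OCR errors ("Lcft", the mismatched quotes) survive into the extracted fields; tolerating them is the wrapper's job, per the noise-tolerance discussion earlier in the deck.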
Special Labels Resolve Ambiguity
1. Sarah, b. 1797.
2. Amy, h. 1799, d. i800.
3. John Erastus, b. 1836, d. 1876.
Label paths: Child.ChildNumber, Child.Name.GivenName, Child.BirthDate.Year (with their delimiter text)
Predicates:
Child(child1)
Child-ChildNumber(child1, “1”)
Child-Name(child1, name1)
Name-GivenName(name1, “Sarah”)
Child-BirthDate(child1, date1)
BirthDate-Year(date1, “1797”)