Information Extraction
PengBo, Dec 2, 2010
Topics of today:
- IE: Information Extraction techniques
- Wrapper Induction
- Sliding Windows
- From FST to HMM
What is IE?
Example: The Problem. Martin Baker (a person) is looking for a Genomics job; employers advertise openings through a job posting form.
Example: A Solution
Extracting Job Openings from the Web. Example record (foodscience.com-Job2):
- JobTitle: Ice Cream Guru
- Employer: foodscience.com
- JobCategory: Travel/Hospitality
- JobFunction: Food Services
- JobLocation: Upper Midwest
- Contact Phone: 800-488-2611
- DateExtracted: January 8, 2001
- Source: www.foodscience.com/jobs_midwest.html
- OtherCompanyJobs: foodscience.com-Job1
Job Openings query: Category = Food Services, Keyword = Baker, Location = Continental U.S.
Data Mining the Extracted Job Information
Two ways to manage information: retrieval, where queries are answered directly over raw text, and inference, where a query such as X: advisor(wc,Y) & affil(X,lti) is answered over extracted facts like advisor(wc,vc), affil(vc,lti), fn(wc,"William"). IE is the bridge from raw text to such structured facts.
What is Information Extraction? Recovering structured data from formatted text:
- Identifying fields (e.g. named entity recognition)
- Understanding relations between fields (e.g. record association)
- Normalization and deduplication
Today, focus mostly on field identification & a little on record association.
Applications
IE from Research Papers
IE from Chinese Documents regarding Weather (Chinese Academy of Sciences). 200k+ documents, several centuries old:
- Qing Dynasty Archives
- memos
- newspaper articles
- diaries
Wrapper Induction
"Wrappers": if we think of things from the database point of view, we want to be able to pose database-style queries, but we have the data in some horrid textual form / content management system that doesn't allow such querying. We need to "wrap" the data in a component that understands database-style querying: hence the term "wrappers".
Title: Schulz and Peanuts: A Biography. Author: David Michaelis. List Price: $34.95
Wrappers: Simple Extraction Patterns. Specify an item to extract for a slot using a regular expression pattern, e.g. a price pattern: \b\$\d+(\.\d{2})?\b. A rule may also require a preceding (pre-filler) pattern and a succeeding (post-filler) pattern to identify the start and end of the filler. Amazon list price:
- Pre-filler pattern: "List Price:"
- Filler pattern: \b\$\d+(\.\d{2})?\b
- Post-filler pattern: " "
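As a sketch of how a pre-filler pattern and a filler pattern work together, here is a minimal Python example; the page snippet is invented for illustration, and the filler regex follows the slide's price pattern:

```python
import re

# Invented page snippet for illustration.
page = 'Schulz and Peanuts: A Biography by David Michaelis. List Price: $34.95 In Stock.'

# The pre-filler context anchors the match; the filler (the price itself,
# per the slide's pattern \$\d+(\.\d{2})?) is captured in group 1.
price_re = re.compile(r'List Price:\s*(\$\d+(?:\.\d{2})?)')

m = price_re.search(page)
price = m.group(1) if m else None
print(price)  # $34.95
```

A post-filler pattern would simply be appended after the capturing group to pin down where the filler ends.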
Wrapper toolkits: specialized programming environments for writing & debugging wrappers by hand. Some resources: Wrapper Development Tools; LAPIS.
Wrapper Induction. Problem description: hand-writing extraction rules is tedious, error-prone, and time-consuming. Task: learn extraction rules from labeled examples. Learning wrappers in this way is wrapper induction.
Induction Learning. Rule induction: formal rules are extracted from a set of observations; the extracted rules may represent a full scientific model of the data, or merely local patterns in it.
INPUT: labeled examples (training & testing data); admissible rules (the hypothesis space); a search strategy.
Desired OUTPUT: a rule that performs well on both training and testing data.
Wrapper induction assumes highly regular source documents, relatively simple extraction patterns, and an efficient learning algorithm. Build a training set of documents paired with human-produced filled extraction templates, then learn extraction patterns for each slot using an appropriate machine learning algorithm.
Goal: learn from a human teacher how to extract certain database records from a particular web site.
Learner: the user gives the first K positive examples, and thus many implicit negative examples.
Kushmerick's WIEN system: the earliest wrapper-learning system (published at IJCAI '97). Special things about WIEN:
- Treats the document as a string of characters
- Learns to extract a relation directly, rather than extracting fields and then associating them together in some way
- Each example is a completely labeled page
WIEN system: a sample wrapper
Learning LR wrappers: from labeled pages, learn the wrapper, i.e. the delimiter strings l1, r1, …, lK, rK. Example: for a "Some Country Codes" page listing Congo 242, Egypt 20, Belize 501, Spain 34, find the 4 strings l1, r1, l2, r2.
LR wrapper: left delimiters l1 and l2, right delimiters r1 and r2. In the country-codes example these would be the HTML tags around each country name and code, e.g. l1 = "<B>", r1 = "</B>", l2 = "<I>", r2 = "</I>".
LR: Finding r1. On the labeled pages (Congo 242, Egypt 20, Belize 501, Spain 34), r1 can be any prefix of the text that follows each labeled country name.
LR: Finding l1, l2 and r2. Similarly, r2 can be any prefix of the text following each code, l2 can be any suffix of the text preceding each code, and l1 can be any suffix of the text preceding each country name.
WIEN system: assumes items are always in a fixed, known order, e.g. "… Name: J. Doe; Address: 1 Main; Phone: 111-1111. Name: E. Poe; Address: 10 Pico; Phone: 777-1111. …" Introduces several types of wrappers, of which LR is the simplest.
Learning LR extraction rules. Admissible rules: prefixes & suffixes of the items of interest. Search strategy: start with the shortest prefix & suffix, and expand until correct.
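The delimiter search can be sketched in Python. This is a simplified illustration, not Kushmerick's actual algorithm: instead of expanding from the shortest prefix/suffix, it takes the longest common suffix/prefix of the labeled contexts; the page string and the labeled character spans are invented for the example:

```python
import re

def common_suffix(strings):
    # Shrink the first string from the left until it is a suffix of all.
    s = strings[0]
    while not all(x.endswith(s) for x in strings):
        s = s[1:]
    return s

def common_prefix(strings):
    # Shrink the first string from the right until it is a prefix of all.
    s = strings[0]
    while not all(x.startswith(s) for x in strings):
        s = s[:-1]
    return s

def learn_lr(page, spans):
    """spans: (start, end) character offsets of the labeled items, in page order."""
    l = common_suffix([page[:s] for s, _ in spans])
    r = common_prefix([page[e:] for _, e in spans])
    return l, r

def extract(page, l, r):
    # Apply the learned delimiters: everything between l and r.
    return re.findall(re.escape(l) + '(.*?)' + re.escape(r), page)

# Invented labeled page in the style of the country-codes example.
page = '<B>Congo</B> <I>242</I><BR><B>Belize</B> <I>501</I><BR>'
spans = [(3, 8), (30, 36)]          # character spans of 'Congo' and 'Belize'
l1, r1 = learn_lr(page, spans)
countries = extract(page, l1, r1)
print(l1, repr(r1), countries)
```

Here the learned left delimiter is "<B>" and the learned right delimiter is "</B> <I>", which correctly pulls out both country names.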
Summary of WIEN. Advantages: fast to learn & extract. Drawbacks: cannot handle permutations and missing items; must label entire pages; requires a large number of examples.
Sliding Windows
Extraction by Sliding Window GRAND CHALLENGES FOR MACHINE LEARNING Jaime Carbonell School of Computer Science Carnegie Mellon University 3:30 pm 7500 Wean Hall Machine learning has evolved from obscurity in the 1970s into a vibrant and popular discipline in artificial intelligence during the 1980s and 1990s. As a result of its success and growth, machine learning is evolving into a collection of related disciplines: inductive concept acquisition, analytic learning in problem solving (e.g. analogy, explanation-based learning), learning theory (e.g. PAC learning), genetic algorithms, connectionist learning, hybrid systems, and so on. CMU UseNet Seminar Announcement
A "Naïve Bayes" Sliding Window Model [Freitag 1997]. Example text: "… 00 : pm Place : Wean Hall Rm 5409 Speaker : Sebastian Thrun …". A candidate window is split into a prefix (w_{t-m} … w_{t-1}), contents (w_t … w_{t+n}), and suffix (w_{t+n+1} … w_{t+n+m}). Estimate Pr(LOCATION | window) using Bayes' rule; try all "reasonable" windows (varying length and position); assume independence of the length, prefix words, suffix words, and content words; estimate from data quantities like Pr("Place" in prefix | LOCATION). If P("Wean Hall Rm 5409" = LOCATION) is above some threshold, extract it.
A "Naïve Bayes" Sliding Window Model [Freitag 1997], as a procedure:
1. Create a dataset of examples like: +(prefix00, …, prefixColon, contentWean, contentHall, …, suffixSpeaker, …) and -(prefixColon, …, prefixWean, contentHall, …, contentSpeaker, suffixColon, …).
2. Train a Naive Bayes classifier.
3. If Pr(class=+ | prefix, contents, suffix) > threshold, predict that the content window is a location.
To think about: what if the extracted entities aren't consistent, e.g. if the location overlaps with the speaker?
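Step 1, turning a candidate window into feature indicators, can be sketched as follows; the prefix/content/suffix feature naming mirrors the slide, while the context width of 2 is an arbitrary choice for the example:

```python
def window_features(tokens, start, length, context=2):
    """Indicator features for one candidate window: prefixW, contentW, suffixW."""
    prefix = tokens[max(0, start - context):start]
    contents = tokens[start:start + length]
    suffix = tokens[start + length:start + length + context]
    return (['prefix' + w for w in prefix]
            + ['content' + w for w in contents]
            + ['suffix' + w for w in suffix])

tokens = '00 : pm Place : Wean Hall Rm 5409 Speaker : Sebastian'.split()
feats = window_features(tokens, start=5, length=4)
print(feats)
# ['prefixPlace', 'prefix:', 'contentWean', 'contentHall',
#  'contentRm', 'content5409', 'suffixSpeaker', 'suffix:']
```

In the full model, every "reasonable" (start, length) pair is featurized this way and scored by the trained Naive Bayes classifier against the threshold.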
"Naïve Bayes" Sliding Window Results. Domain: CMU UseNet Seminar Announcements. F1 by field: Person Name 30%, Location 61%, Start Time 98%.
Finite State Transducers
Finite State Transducers for IE: a basic method for extracting relevant information. IE systems generally use a collection of specialized FSTs:
- Company Name detection
- Person Name detection
- Relationship detection
Finite State Transducers for IE. Input: "Frodo Baggins works for Hobbit Factory, Inc." Text analyzer output: Frodo: ProperName; Baggins: ProperName; works: Verb; for: Prep; Hobbit: UnknownCap; Factory: NounCap; Inc: CompAbbr.
Finite State Transducers for IE. A regular expression for finding company names in "Frodo Baggins works for Hobbit Factory, Inc.": "some capitalized words, maybe a comma, then a company abbreviation indicator", i.e. CompanyName = (ProperName | SomeCap)+ Comma? CompAbbr.
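One way to run such a pattern over the analyzer's output is to encode each token's tag as a single character and apply an ordinary regex to the tag string, as in this sketch (the tag names and their one-character codes are assumptions for the example):

```python
import re

# Token/tag pairs as the hypothetical text analyzer might produce them.
tagged = [('Frodo', 'PN'), ('Baggins', 'PN'), ('works', 'Verb'),
          ('for', 'Prep'), ('Hobbit', 'CAP'), ('Factory', 'CAP'),
          (',', 'Comma'), ('Inc', 'CAB')]

# One character per tag, so (ProperName | SomeCap)+ Comma? CompAbbr
# becomes the ordinary regex [PC]+,?A over the tag string.
code = {'PN': 'P', 'CAP': 'C', 'Comma': ',', 'CAB': 'A'}
tagstr = ''.join(code.get(tag, 'w') for _, tag in tagged)

companies = [' '.join(w for w, _ in tagged[m.start():m.end()])
             for m in re.finditer(r'[PC]+,?A', tagstr)]
print(companies)  # ['Hobbit Factory , Inc']
```

Note that "Frodo Baggins" does not match, because it is not followed by a company abbreviation indicator.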
Company Name Detection FSA (figure): states 1-4 with transitions labeled word, (CAP | PN), comma, and CAB. Legend: CAP = SomeCap, CAB = CompAbbr, PN = ProperName, ε = the empty string.
Company Name Detection FST (figure): the same automaton with output labels on each transition (e.g. word/word, CAB/CN). Legend: CAP = SomeCap, CAB = CompAbbr, PN = ProperName, ε = the empty string, CN = CompanyName.
Note that this Company Name Detection FST is non-deterministic.
Finite State Transducers for IE: several FSTs, or a more complex FST, can be used to find one type of information (e.g. company names). FSTs are often compiled from regular expressions; probabilistic (weighted) FSTs are also used.
FSTs mean different things to different researchers in IE: some are based on lexical items (words), some on statistical language models, and some on deep syntactic/semantic analysis.
Example: FASTUS, the Finite State Automaton Text Understanding System (SRI International). Cascading FSTs:
- Recognize names
- Recognize noun groups, verb groups, etc.
- Construct complex noun/verb groups
- Identify patterns of interest
- Identify and merge event structures
Hidden Markov Models
Hidden Markov Models formalism. An HMM consists of: states s1, s2, … (with a special start state s1 and a special end state sn); a token alphabet a1, a2, …; state transition probabilities P(si | sj); and token emission probabilities P(ai | sj). HMMs are widely used in many language processing tasks, e.g., speech recognition [Lee, 1989], POS tagging [Kupiec, 1992], topic detection [Yamron et al., 1998]. An HMM is a probabilistic FSA.
Applying HMMs to IE: the document is treated as generated by a stochastic process modelled by an HMM. A token is a word; a state is the "reason/explanation" for a given token:
- a 'Background' state emits tokens like 'the', 'said', …
- a 'Money' state emits tokens like 'million', 'euro', …
- an 'Organization' state emits tokens like 'university', 'company', …
Extraction: via the Viterbi algorithm, a dynamic programming technique for efficiently computing the most likely sequence of states that generated a document.
HMM for research papers: transitions [Seymore et al., 99]
HMM for research papers: emissions [Seymore et al., 99]. Trained on 2 million words of BibTeX data from the Web; states include author, title, institution, and note. Sample emissions by state: title: "stochastic optimization …", "reinforcement learning …", "model building mobile robot …"; institution: "carnegie mellon university …", "university of california", "dartmouth college"; note: "ICML 1997 …", "submission to …", "to appear in …", "supported in part …", "copyright …".
What is an HMM? Graphical model representation: variables unrolled over time; circles indicate states; arrows indicate probabilistic dependencies between states.
What is an HMM? Green circles are hidden states, each dependent only on the previous state (a Markov process): "The past is independent of the future given the present."
What is an HMM? Purple nodes are observed states, each dependent only on its corresponding hidden state.
HMM Formalism {S, K, Π, A, B}: S = {s1 … sN} are the values for the hidden states; K = {k1 … kM} are the values for the observations.
HMM Formalism {S, K, Π, A, B}, continued: Π = {πi} are the initial state probabilities; A = {aij} are the state transition probabilities; B = {bik} are the observation (emission) probabilities.
Need to provide the structure of the HMM & the vocabulary. Training the model: the Baum-Welch algorithm. Efficient dynamic programming algorithms exist for finding Pr(K), and for the highest-probability path S that maximizes Pr(K, S) (Viterbi). (Figure: an example HMM over bibliographic fields Title, Author, Journal, Year, with transition probabilities on the edges and an emission probability table per state.)
Using the HMM to segment: find the highest-probability path through the HMM; Viterbi is a quadratic dynamic programming algorithm. (Figure: segmenting the address "115 Grant street Mumbai 400070" into House, Road, City, and Pin states.)
Most Likely Path for a Given Sequence: the probability that a path is taken and the sequence is generated is the product of the transition probabilities and the emission probabilities along that path.
Example (figure): a small HMM with begin and end states and four emitting states over the alphabet {A, C, G, T}, each with its own emission distribution (e.g. A 0.1, C 0.4, G 0.4, T 0.1 for one state; A 0.4, C 0.1, G 0.1, T 0.4 for another) and transition probabilities on the edges.
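The path probability can be computed directly as that product; the toy model below uses made-up numbers in the spirit of the figure (the exact figure values did not survive transcription):

```python
# Toy two-state HMM over {A, C, G, T}; all numbers are illustrative.
trans = {('begin', 's1'): 0.5, ('s1', 's1'): 0.2, ('s1', 's2'): 0.8,
         ('s2', 's2'): 0.6, ('s2', 'end'): 0.4}
emit = {'s1': {'A': 0.1, 'C': 0.4, 'G': 0.4, 'T': 0.1},
        's2': {'A': 0.4, 'C': 0.1, 'G': 0.1, 'T': 0.4}}

def joint_prob(path, seq):
    """P(path, seq): product of transition and emission probabilities."""
    p = trans[('begin', path[0])]
    for state, symbol in zip(path, seq):
        p *= emit[state][symbol]          # emission at each step
    for a, b in zip(path, path[1:]):
        p *= trans[(a, b)]                # transitions along the path
    p *= trans[(path[-1], 'end')]
    return p

p = joint_prob(['s1', 's2'], 'CT')
print(p)  # 0.5 * 0.4 * 0.4 * 0.8 * 0.4 = 0.0256 (up to float rounding)
```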
Finding the most probable path: find the state sequence that best explains the observations o1 … oT. The Viterbi algorithm (1967).
Viterbi Algorithm: define δ_j(t) as the probability of the state sequence x1 … x_{t-1} which maximizes the probability of seeing the observations up to time t-1, landing in state j, and seeing the observation at time t: δ_j(t) = max over x1 … x_{t-1} of P(x1 … x_{t-1}, o1 … o_{t-1}, x_t = j, o_t).
Viterbi Algorithm, recursive computation: δ_j(t+1) = max over i of δ_i(t) · a_{ij} · b_{j, o_{t+1}}, with backpointers ψ_j(t+1) = argmax over i of δ_i(t) · a_{ij}.
Viterbi: Dynamic Programming. (Figure: the trellis over states House, Road, City, Pin for the token sequence "115 Grant street … 400070", with one column per token.)
Viterbi Algorithm: compute the most likely state sequence by working backwards: x̂_T = argmax over i of δ_i(T), and x̂_t = ψ_{x̂_{t+1}}(t+1) for t = T-1 … 1.
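Putting the recursion and the backtrace together, a compact Viterbi implementation might look like this; the address-style states and all probabilities below are invented for illustration:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for obs (delta = best score, psi = backpointer)."""
    delta = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    psi = [{}]
    for t in range(1, len(obs)):
        delta.append({})
        psi.append({})
        for j in states:
            # Recursion: delta_j(t) = max_i delta_i(t-1) * a_ij * b_j(o_t)
            best = max(states, key=lambda i: delta[t-1][i] * trans_p[i][j])
            delta[t][j] = delta[t-1][best] * trans_p[best][j] * emit_p[j][obs[t]]
            psi[t][j] = best
    # Backtrace from the best final state.
    last = max(states, key=lambda s: delta[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(psi[t][path[-1]])
    return list(reversed(path))

# Toy segmenter: does a token look like digits (house number) or a word (road)?
states = ['House', 'Road']
start_p = {'House': 0.9, 'Road': 0.1}
trans_p = {'House': {'House': 0.1, 'Road': 0.9},
           'Road':  {'House': 0.1, 'Road': 0.9}}
emit_p = {'House': {'digits': 0.9, 'word': 0.1},
          'Road':  {'digits': 0.1, 'word': 0.9}}

path = viterbi(['digits', 'word', 'word'], states, start_p, trans_p, emit_p)
print(path)  # ['House', 'Road', 'Road']
```

For long documents, implementations work in log space to avoid underflow; that detail is omitted here for clarity.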
Hidden Markov Models Summary. A popular technique to detect and classify a linear sequence of information in text; the main disadvantage is the need for large amounts of training data. Related work:
- a system for extraction of gene names and locations from scientific abstracts (Leek, 1997)
- NERC (Bikel et al., 1997)
- McCallum et al. (1999) extracted document segments that occur in a fixed or partially fixed order (title, author, journal)
- Ray and Craven (2001): extraction of proteins, locations, genes and disorders and their relationships
IE Technique Landscape
IE with Symbolic Techniques.
- Conceptual Dependency Theory (Schank, 1972; Schank, 1975): mainly aimed to extract semantic information about individual events from sentences at a conceptual level (i.e., the actor and an action).
- Frame Theory (Minsky, 1975): a frame stores the properties or characteristics of an entity, action, or event; it typically consists of a number of slots that refer to the properties named by the frame.
- Berkeley FrameNet project (Baker, 1998; Fillmore and Baker, 2001): an online lexical resource for English, based on frame semantics and supported by corpus evidence.
- FASTUS, the Finite State Automaton Text Understanding System (Hobbs, 1996): uses a cascade of FSAs in a frame-based information extraction approach.
IE with Machine Learning Techniques. Training data: documents marked up with ground truth. In contrast to text classification, local features are crucial: features of the contents, of the text just before the item, of the text just after the item, and of the begin/end boundaries.
Good Features for Information Extraction. Example word features: identity of the word; is in all caps; ends in "-ski"; is part of a noun phrase; is in a list of city names; is under node X in WordNet or Cyc; is in bold font; is in hyperlink anchor. Features of past & future: last person name was female; next two words are "and Associates". Example layout/orthographic features: begins-with-number, begins-with-ordinal, begins-with-punctuation, begins-with-question-word, begins-with-subject, blank, contains-alphanum, contains-bracketed-number, contains-http, contains-non-space, contains-number, contains-pipe, contains-question-mark, contains-question-word, ends-with-question-mark, first-alpha-is-capitalized, indented, indented-1-to-4, indented-5-to-10, more-than-one-third-space, only-punctuation, prev-is-blank, prev-begins-with-ordinal, shorter-than-30. Creativity and domain knowledge required!
Good Features for Information Extraction (continued). Word features: Is Capitalized, Is Mixed Caps, Is All Caps, Initial Cap, Contains Digit, All lowercase, Is Initial, Punctuation, Period, Comma, Apostrophe, Dash, Preceded by HTML tag; a character n-gram classifier says the string is a person name (80% accurate); in stopword list (the, of, their, etc.); in honorific list (Mr, Mrs, Dr, Sen, etc.); in person suffix list (Jr, Sr, PhD, etc.); in name particle list (de, la, van, der, etc.); in Census lastname list, segmented by P(name); in Census firstname list, segmented by P(name); in locations lists (states, cities, countries); in company name list ("J. C. Penny"); in list of company suffixes (Inc, & Associates, Foundation); lists of job titles, lists of prefixes, lists of suffixes; 350 informative phrases. HTML/formatting features: {begin, end, in} × {…} × {lengths 1, 2, 3, 4, or longer}; {begin, end} of line. Creativity and domain knowledge required!
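A handful of these features can be computed by a simple function like the one below; the tiny stopword and honorific lists are illustrative stand-ins for the real resources the slides mention:

```python
def token_features(token, prev_token=None):
    """A few of the word features from the slides, as booleans."""
    honorifics = {'Mr', 'Mrs', 'Dr', 'Sen'}        # stand-in honorific list
    stopwords = {'the', 'of', 'their'}             # stand-in stopword list
    return {
        'is_capitalized': token[:1].isupper(),
        'is_all_caps': token.isupper(),
        'contains_digit': any(c.isdigit() for c in token),
        'all_lowercase': token.islower(),
        'ends_in_ski': token.lower().endswith('ski'),
        'in_stopword_list': token.lower() in stopwords,
        'prev_in_honorific_list': prev_token in honorifics,
    }

feats = token_features('Kowalski', prev_token='Dr')
print(feats['ends_in_ski'], feats['prev_in_honorific_list'])  # True True
```

In practice these boolean features are fed, together with the layout and list-membership features above, into whatever classifier or sequence model is being trained.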
Landscape of ML Techniques for IE (any of these models can capture words, formatting, or both), illustrated on "Abraham Lincoln was born in Kentucky." with target field PersonName:
- Classify candidates: a classifier assigns a class to each candidate phrase.
- Sliding window: a classifier is run over windows, trying alternate window sizes.
- Boundary models: classifiers detect BEGIN and END boundaries.
- Finite state machines: find the most likely state sequence.
- Wrapper induction: learn and apply a pattern for a website.
IE History. Pre-Web: mostly news articles. De Jong's FRUMP [1982]: a hand-built system to fill Schank-style "scripts" from the news wire. Message Understanding Conference (MUC): DARPA ['87-'95], TIPSTER ['92-'96]. Most early work was dominated by hand-built models, e.g. SRI's FASTUS with hand-built FSMs; but by the 1990s, some machine learning: Lehnert, Cardie, Grishman, and then HMMs: Elkan [Leek '97], BBN [Bikel et al. '98]. Web: the AAAI '94 Spring Symposium on "Software Agents" saw much discussion of ML applied to the Web (Maes, Mitchell, Etzioni); Tom Mitchell's WebKB ('96) built KBs from the Web. Wrapper induction: initially hand-built, then ML: [Soderland '96], [Kushmerick '97], …
Summary. Information Extraction techniques covered: Wrapper Induction (wrapper toolkits, the LR wrapper); Sliding Windows (a classifier over candidate windows of alternate sizes); and from FST (Finite State Transducer) to HMM (finding the most likely state sequence).
Readings. [1] I. Muslea, S. Minton, and C. Knoblock, "A hierarchical approach to wrapper induction," in Proceedings of the Third Annual Conference on Autonomous Agents, Seattle, Washington, United States: ACM, 1999.
Thank You! Q&A