Lecture 13 Information Extraction
CSCE Natural Language Processing
Lecture 13: Information Extraction
Topics: Named Entity Recognition, Relation Detection, Temporal and Event Processing, Template Filling
Readings: Chapter 22
February 27, 2013
Overview
Last Time: Dialogues and human conversations; slides from Lecture 24 on dialogue systems; dialogue manager design (finite-state, frame-based; initiative: user, system, mixed); VoiceXML
Today: Information Extraction
Readings: Chapter 24, Chapter 22
Information extraction
Information extraction turns unstructured information buried in texts into structured data. Its main tasks:
Named entity recognition – extracting proper nouns
Reference resolution – linking named entity mentions and pronoun references
Relation detection and classification
Event detection and classification
Temporal analysis
Template filling
Template Filling Example template for “airfare raise”
Figure 22.1 List of Named Entity Types
Figure 22.2 Examples of Named Entity Types
Figure 22.3 Categorical Ambiguities
Figure 22.4 Categorical Ambiguity
Figure 22.5 Chunk Parser for Named Entities
Figure 22.6 Features used in Training NER
Gazetteers – lists of place names
Figure 22.7 Selected Shape Features
Figure 22.8 Feature encoding for NER
Figure 22.9 NER as sequence labeling
Figure 22.10 Statistical Seq. Labeling
Evaluation of Named Entity Recognition Systems
Recall the terms from information retrieval:
Recall = # correctly labeled / total # that should be labeled
Precision = # correctly labeled / total # labeled
F-measure: F_β = (β² + 1)·P·R / (β²·P + R), where β weights the preference: β = 1 is balanced, β > 1 favors recall, β < 1 favors precision
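To make these definitions concrete, here is a minimal Python sketch (my own illustration, not from the chapter) that scores predicted entity mentions against a gold standard; it assumes both are given as sets of (start, end, type) tuples.

```python
def ner_prf(gold, predicted, beta=1.0):
    """Entity-level precision, recall, and F-beta.

    gold, predicted: sets of (start, end, entity_type) tuples.
    """
    correct = len(gold & predicted)                 # spans whose boundaries and type both match
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f = (beta ** 2 + 1) * precision * recall / (beta ** 2 * precision + recall)
    return precision, recall, f

# beta = 1 is balanced, beta > 1 favors recall, beta < 1 favors precision
gold = {(0, 2, "ORG"), (5, 6, "TIME"), (17, 18, "PERSON")}
pred = {(0, 2, "ORG"), (5, 6, "LOC")}
print(ner_prf(gold, pred))                          # (0.5, 0.333..., 0.4)
```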
NER Performance revisited
Recall, precision, F: high-performance systems reach F ≈ .92 for PERSON and LOCATION and ≈ .84 for ORGANIZATION.
Practical NER often makes several passes over the text:
Start with the highest-precision rules (perhaps at the expense of recall) to make sure what you get is right
Search for substring matches of previously detected names using probabilistic string-matching metrics (Chapter 19)
Apply name lists focused on the domain
Apply probabilistic sequence labeling techniques that use the previously assigned tags as features
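As a rough sketch of this multi-pass idea (an illustration under my own assumptions, not the book's system), the code below runs a high-precision gazetteer pass and then a substring pass that re-finds shorter mentions of names already detected; the gazetteer contents, function names, and overlap handling are all hypothetical.

```python
import re

def gazetteer_pass(text, gazetteer):
    """Pass 1: exact matches against a domain-focused name list (high precision, lower recall)."""
    entities = []
    for name, etype in gazetteer.items():
        for m in re.finditer(re.escape(name), text):
            entities.append((m.start(), m.end(), etype))
    return entities

def substring_pass(text, entities):
    """Pass 2: re-find shorter mentions (e.g. 'United') of names already detected ('United Airlines')."""
    found = list(entities)
    for start, end, etype in entities:
        head = text[start:end].split()[0]           # crude substring candidate
        for m in re.finditer(r"\b" + re.escape(head) + r"\b", text):
            span = (m.start(), m.end(), etype)
            if span not in found:
                found.append(span)
    # A real system would merge overlapping spans and follow with a sequence labeler pass.
    return found

gazetteer = {"United Airlines": "ORG", "Tim Wagner": "PERSON", "Chicago": "LOC"}
text = "United Airlines said Friday ... United matched the move, Tim Wagner said."
print(substring_pass(text, gazetteer_pass(text, gazetteer)))
```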
Relation Detection and classification
Consider the sample text: Citing high fuel prices, [ORG United Airlines] said [TIME Friday] it has increased fares by [MONEY $6] per round trip on flights to some cities also served by lower-cost carriers. [ORG American Airlines], a unit of [ORG AMR Corp.], immediately matched the move, spokesman [PERSON Tim Wagner] said. [ORG United Airlines], a unit of [ORG UAL Corp.], said the increase took effect [TIME Thursday] and applies to most routes where it competes against discount carriers, such as [LOC Chicago] to [LOC Dallas] and [LOC Denver] to [LOC San Francisco].
After identifying the named entities, what else can we extract? Relations.
Fig 22.11 Example semantic relations
Figure 22.12 Example Extraction
Figure 22.13 Supervised Learning Approaches to Relation Analysis
Algorithm: a two-step process. First, a classifier decides whether a pair of named entities is related; then a second classifier is trained to label the relation.
Factors used in Classifying
Features of the named entities: the named entity types of the two arguments, the concatenation of the two entity types, the headwords of the arguments, and a bag of words from each of the arguments
Words in the text: bag-of-words and bag-of-bigrams features, stemmed versions, and the distance between the named entities (in words or in intervening named entities)
Syntactic structure: parse-related structures
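A small sketch of how such features might be assembled for one candidate pair; the function name and the dict representation of mentions are my own assumptions, not a prescribed API.

```python
def relation_features(e1, e2, words_between):
    """Assemble features for one candidate entity pair.

    e1, e2: dicts like {"type": "ORG", "head": "Airlines", "tokens": ["American", "Airlines"]}
    words_between: list of tokens appearing between the two mentions.
    """
    feats = {
        "type1": e1["type"],
        "type2": e2["type"],
        "type_concat": e1["type"] + "_" + e2["type"],   # concatenation of the two entity types
        "head1": e1["head"],
        "head2": e2["head"],
        "distance": len(words_between),                 # distance between the named entities
    }
    for w in e1["tokens"] + e2["tokens"]:
        feats["arg_bow_" + w.lower()] = 1               # bag of words from the arguments
    for w in words_between:
        feats["between_bow_" + w.lower()] = 1           # bag of words in the intervening text
    for a, b in zip(words_between, words_between[1:]):
        feats["between_bigram_" + a.lower() + "_" + b.lower()] = 1
    return feats

e1 = {"type": "ORG", "head": "Airlines", "tokens": ["American", "Airlines"]}
e2 = {"type": "ORG", "head": "Corp.", "tokens": ["AMR", "Corp."]}
print(relation_features(e1, e2, ["a", "unit", "of"]))
```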
Figure 22.14 a-part-of relation
Figure 22.15 Sample features Extracted
Bootstrapping Example “Has a hub at”
Consider the pattern / * has a hub at * / and a Google search:
(22.4) Milwaukee-based Midwest has a hub at KCI
(22.5) Delta has a hub at LaGuardia …
Two ways to fail:
False positive: e.g., a star topology has a hub at its center
False negative: we simply miss cases such as (22.11) No-frills rival easyJet, which has established a hub at Liverpool
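A minimal sketch of applying this lexical pattern with a regular expression; the sentences are the slide's examples, and the 40-character context windows are an arbitrary choice of mine.

```python
import re

# The slide's lexical pattern: / * has a hub at * /
HUB = re.compile(r"(.{0,40}?)\s+has a hub at\s+(.{0,40})")

sentences = [
    "Milwaukee-based Midwest has a hub at KCI.",                            # (22.4)
    "Delta has a hub at LaGuardia.",                                        # (22.5)
    "A star topology has a hub at its center.",                             # false positive
    "No-frills rival easyJet, which has established a hub at Liverpool.",   # false negative: no match
]
for s in sentences:
    m = HUB.search(s)
    if m:
        print((m.group(1).strip(), m.group(2).strip()))
```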
Figure 22.16 Bootstrapping Relation Extraction
Using Features to restrict patterns
(22.13) Budget airline Ryanair, which uses Charleroi as a hub, scrapped all weekend flights
Pattern: / [ORG], which uses [LOC] as a hub /
Semantic Drift
Note that it can be difficult (or impossible) to get annotated material for training, so the accuracy of the process is heavily dependent on the initial seeds.
Semantic drift occurs when erroneous patterns (or seeds) lead to the introduction of erroneous tuples.
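Below is a rough sketch of a bootstrapping loop in the spirit of Figure 22.16, with a minimum-support threshold as one crude guard against semantic drift; the helper names, the capitalized-token stand-in for named entity restrictions, and the thresholds are all assumptions for illustration.

```python
import re

def bootstrap(seed_tuples, corpus, rounds=2, min_support=2):
    """Grow a set of (airline, airport) tuples from seeds via induced lexical patterns.

    Patterns supported by fewer than min_support distinct known tuples are dropped,
    a crude guard against semantic drift from erroneous patterns or seeds.
    """
    tuples = set(seed_tuples)
    for _ in range(rounds):
        # 1. Induce patterns from sentences mentioning a known tuple.
        support = {}
        for x, y in tuples:
            for sent in corpus:
                if x in sent and y in sent and sent.find(x) < sent.find(y):
                    pattern = sent.split(x, 1)[1].split(y, 1)[0].strip()
                    support.setdefault(pattern, set()).add((x, y))
        patterns = [p for p, tups in support.items() if len(tups) >= min_support]
        # 2. Apply surviving patterns; capitalized token runs stand in for NE-type restrictions.
        for p in patterns:
            regex = re.compile(r"([A-Z][\w.-]*(?: [A-Z][\w.-]*)*) " + re.escape(p) + r" ([A-Z][\w.-]*)")
            for sent in corpus:
                m = regex.search(sent)
                if m:
                    tuples.add((m.group(1), m.group(2)))
    return tuples

seeds = {("Midwest", "KCI"), ("Delta", "LaGuardia")}
corpus = ["Milwaukee-based Midwest has a hub at KCI .",
          "Delta has a hub at LaGuardia .",
          "Ryanair has a hub at Charleroi ."]
print(bootstrap(seeds, corpus))
```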
Fig 22.17 Temporal and Durational Expressions
Absolute temporal expressions Relative temporal expressions
Fig 22.18 Temporal lexical triggers
Fig 22.19 MITRE's TempEx tagger (Perl)
Fig 22.20 Features used to train IOB
Figure 22.21 TimeML temporal markup
Temporal Normalization
ISO standard for encoding temporal values: YYYY-MM-DD
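A small normalization sketch (my own illustration, not MITRE's TempEx) that maps a few expression types to ISO 8601 YYYY-MM-DD values; it assumes the document's creation date is available to anchor relative expressions such as "yesterday".

```python
import re
from datetime import date, timedelta

MONTHS = {"january": 1, "february": 2, "march": 3, "april": 4, "may": 5, "june": 6,
          "july": 7, "august": 8, "september": 9, "october": 10, "november": 11, "december": 12}

def normalize(expr, doc_date):
    """Map a temporal expression to an ISO 8601 YYYY-MM-DD value, anchored at doc_date."""
    expr = expr.lower().strip()
    if expr == "today":
        return doc_date.isoformat()
    if expr == "yesterday":
        return (doc_date - timedelta(days=1)).isoformat()
    if expr == "tomorrow":
        return (doc_date + timedelta(days=1)).isoformat()
    m = re.match(r"(\w+) (\d{1,2}), (\d{4})$", expr)        # e.g. "February 27, 2013"
    if m and m.group(1) in MONTHS:
        return date(int(m.group(3)), MONTHS[m.group(1)], int(m.group(2))).isoformat()
    return None                                             # other cases (e.g. bare "Thursday") need more context

print(normalize("February 27, 2013", date(2013, 2, 27)))    # 2013-02-27
print(normalize("yesterday", date(2013, 2, 27)))            # 2013-02-26
```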
Figure 22.22 Sample ISO Patterns
Event Detection and Analysis
Event Detection and classification
Fig 22.23 Features for Event Detection
Features used in rule-based and statistical techniques
Fig 22.24 Allen’s 13 temporal Relations
Fig 22.24 Allen's 13 temporal relations (continued)
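Allen's thirteen relations can be computed directly from interval endpoints. The following sketch (my own, assuming numeric start/end times with start < end) returns the relation holding between intervals A and B.

```python
def allen_relation(a_start, a_end, b_start, b_end):
    """Return which of Allen's 13 relations holds between interval A and interval B."""
    if a_end < b_start:
        return "before"
    if b_end < a_start:
        return "after"
    if a_end == b_start:
        return "meets"
    if b_end == a_start:
        return "met-by"
    if a_start == b_start and a_end == b_end:
        return "equals"
    if a_start == b_start:
        return "starts" if a_end < b_end else "started-by"
    if a_end == b_end:
        return "finishes" if a_start > b_start else "finished-by"
    if b_start < a_start and a_end < b_end:
        return "during"
    if a_start < b_start and b_end < a_end:
        return "contains"
    return "overlaps" if a_start < b_start else "overlapped-by"

print(allen_relation(1, 3, 3, 5))   # meets
print(allen_relation(1, 6, 2, 4))   # contains
print(allen_relation(1, 4, 2, 6))   # overlaps
```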
Figure 22.25 Example from the TimeBank Corpus
Template Filling
Figure 22.26 Templates produced by FASTUS (1997)
Figure 22.27 Levels of processing in FASTUS
Figure 22.28 FASTUS Stage 2
Figure 22.29 The 5 Partial Templates of FASTUS
Figure 22.30 Articles in PubMed
Figure 22.31 biomedical classes of named entities