1
Supporting Annotation Layers for Natural Language Processing
Marti Hearst, Preslav Nakov, Ariel Schwartz, Brian Wolf, Rowena Luk (UC Berkeley)
Stanford InfoSeminar, March 17, 2006
Supported by NSF DBI and a gift from Genentech
2
Outline
- Motivation: NLP tasks
- System Description
  - Annotation architecture
  - Sample queries
- Database Design and Evaluation
- Related Work
- Future Work
3
Double Exponential Growth in Bioscience Journal Articles
From Hunter & Cohen, Molecular Cell 21, 2006
4
BioText Project Goals
Provide flexible, intelligent access to information for use in bioscience applications.
- Focus on textual information from journal articles
- Tightly integrated with other resources: ontologies, record-based databases
5
Project Team
Project Leaders: PI: Marti Hearst; Co-PI: Adam Arkin
Computational Linguistics and Databases: Preslav Nakov, Ariel Schwartz, Brian Wolf, Barbara Rosario (alum), Gaurav Bhalotia (alum)
User Interface / IR: Rowena Luk, Dr. Emilia Stoica
Bioscience: Janice Hamerja, Dr. TingTing Zhang (alum)
6
BioText Architecture
Sophisticated Text Analysis → Annotations in Database → Improved Search Interface
7
Sample Sentence “Recent research, in proliferating cells, has demonstrated that interaction of E2F1 with the p53 pathway could involve transcriptional up-regulation of E2F1 target genes such as p14/p19ARF, which affect p53 accumulation [67,68], E2F1-induced phosphorylation of p53 [69], or direct E2F1-p53 complex formation [70].”
8
Motivation
Most natural language processing (NLP) algorithms make use of the results of previous processing steps:
- Tokenizer
- Part-of-speech tagger
- Phrase boundary recognizer
- Syntactic parser
- Semantic tagger
There is no standard way to represent, store, and retrieve text annotations efficiently. MEDLINE has close to 13 million abstracts, and full text has started to become available as well.
9
System Overview
A system for flexible querying of text that has been annotated with the results of NLP processing.
- Supports self-overlapping and parallel layers, integration of syntactic and ontological hierarchies, and tight integration with SQL.
- Designed to scale to very large corpora: most NLP annotation systems assume in-memory usage; we have evaluated indexing architectures.
10
Text Annotation Framework
- Annotations are stored independently of text in an RDBMS.
- Declarative query language for annotation retrieval.
- Indexing structure designed for efficient query processing.
11
Key Contributions Support for hierarchical and overlapping layers of annotation. Querying multiple levels of annotations simultaneously. First to evaluate different physical database designs for NLP annotation architecture.
12
Layers of Annotations
- Each annotation represents an interval spanning a sequence of characters (absolute start and end positions).
- Each layer corresponds to a conceptually different kind of annotation (protein, MeSH label, noun phrase).
- Layers can be:
  - Sequential
  - Overlapping: two multiple-word concepts sharing a word
  - Hierarchical, in two different ways: spanning, when the intervals are nested as in a parse tree, or ontological, when the token itself is drawn from a hierarchical ontology
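The interval model above can be sketched in a few lines of Python. This is only an illustration: the sample text, spans, and the MeSH identifiers are invented, while the layer names follow the talk.

```python
# Illustrative sketch (not the BioText schema): stand-off annotations as
# (layer, start, end, tag) character intervals over a shared text.
from typing import NamedTuple

class Ann(NamedTuple):
    layer: str
    start: int  # absolute start character position
    end: int    # absolute end character position (exclusive)
    tag: str

text = "protein kinase C"
anns = [
    Ann("pos", 0, 7, "noun"),
    Ann("pos", 8, 14, "noun"),
    Ann("pos", 15, 16, "noun"),
    Ann("shallow_parse", 0, 16, "NP"),
    Ann("mesh", 0, 14, "M1"),   # hypothetical concept "protein kinase"
    Ann("mesh", 8, 16, "M2"),   # hypothetical "kinase C" -- shares "kinase"
]

def overlaps(a: Ann, b: Ann) -> bool:
    """Two intervals overlap if neither ends before the other starts."""
    return a.start < b.end and b.start < a.end

def spans(a: Ann, b: Ann) -> bool:
    """a hierarchically spans b when b's interval is nested inside a's."""
    return a.start <= b.start and b.end <= a.end

np_ann = anns[3]
assert all(spans(np_ann, p) for p in anns if p.layer == "pos")  # spanning hierarchy
assert overlaps(anns[4], anns[5]) and not spans(anns[4], anns[5])  # overlapping concepts
```

The two predicates correspond to the two layer relationships on this slide: nesting (parse-tree style) and overlap (two multi-word concepts sharing a token).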
13
Layers of Annotations
14
Layers of Annotations
15
Layers of Annotations
16
Layers of Annotations Full parse, sentence and section layers are not shown.
17
Example: Query for Noun Compound Extraction
Goal: find noun phrases consisting ONLY of 3 nouns.
  "plastic water bottle"      (matches)
  "blue water bottle"         (does not match)
  "big plastic water bottle"  (does not match)

FROM [layer='shallow_parse' && tag_name='NP'
       ^ [layer='pos' && tag_name="noun"]
         [layer='pos' && tag_name="noun"]
         [layer='pos' && tag_name="noun"] $
     ] AS compound
SELECT compound.content
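What the query computes can be mimicked over toy data; this sketch is not LQL, just a Python stand-in where each NP is a list of (word, POS) pairs and the filter mirrors the ^ … $ anchors (the NP must be exactly three nouns).

```python
# Toy POS-tagged noun phrases; tags are simplified ("noun"/"adj").
nps = [
    [("plastic", "noun"), ("water", "noun"), ("bottle", "noun")],
    [("blue", "adj"), ("water", "noun"), ("bottle", "noun")],
    [("big", "adj"), ("plastic", "noun"), ("water", "noun"), ("bottle", "noun")],
]

def three_noun_compound(np):
    # ^ ... $ anchoring: the NP consists of exactly three nouns, nothing else.
    return len(np) == 3 and all(pos == "noun" for _, pos in np)

compounds = [" ".join(w for w, _ in np) for np in nps if three_noun_compound(np)]
assert compounds == ["plastic water bottle"]
```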
18
Query for Noun Compound Extraction (SQL wrapping)
SELECT LOWER(compound.content) AS compound, COUNT(*) AS freq
FROM (
  BEGIN_LQL
    [layer='shallow_parse' && tag_name='NP'
      ^ [layer='pos' && tag_name="noun"]
        [layer='pos' && tag_name="noun"]
        [layer='pos' && tag_name="noun"] $
    ] AS compound
    SELECT compound.content
  END_LQL
) AS lql
GROUP BY LOWER(compound.content)
ORDER BY freq DESC
19
Query for Noun Compound Extraction (using artificial layers)
Goal: find noun phrases which have EXACTLY two nouns at the end, but no nouns before those two.
  "big blue water bottle"  (matches)
  "plastic water bottle"   (does not match)

FROM [layer='shallow_parse' && tag_name='NP'
       ^ ( { ALLOW GAPS }
           ![layer='pos' && tag_name="noun"]
           ( [layer='pos' && tag_name="noun"]
             [layer='pos' && tag_name="noun"] ) $
         ) $
     ] AS compound
SELECT compound.content
20
Example: Paraphrases
Want to find phrases with certain variations:
  immunodeficiency virus(?es) in ?the human(?s)
    immunodeficiency virus in humans
    immunodeficiency viruses in humans
    immunodeficiency virus in the human
    immunodeficiency virus in a human
21
Query for Paraphrases (optional layers and disjunction)
[layer='sentence'
  [layer='pos' && tag_name="noun" && content="immunodeficiency"]
  [layer='pos' && content IN ("virus","viruses")]
  [layer='pos' && tag_name='IN'] AS prep
  ?[layer='pos' && tag_name='DT' && content IN ("the","a","an")]
  [layer='pos' && content IN ("human","humans")]
]
SELECT prep.content
22
Example: Protein-Protein Interactions
Find all sentences that consist of:
- an NP containing a gene, followed by
- a morphological variant of the verb "activate", "inhibit", or "bind", followed by
- another NP containing a gene.

Sentence pattern: [protein] activate(d/ing) | inhibit(ed/ing) | bind(s/ing) [protein]
23
Query for Protein-Protein Interactions
SELECT p1_text, verb_content, p2_text, COUNT(*) AS cnt
FROM (
  BEGIN_LQL
    [layer='sentence' { ALLOW GAPS }
      [layer='shallow_parse' && tag_name='NP'
        [layer='gene'] $ ] AS p1
      [layer='pos' && tag_name="verb" &&
        (content ~ "activate%" || content ~ "inhibit%" || content ~ "bind%")
      ] AS verb
      [layer='shallow_parse' && tag_name='NP'
        [layer='gene'] $ ] AS p2
    ]
    SELECT p1.text AS p1_text, verb.content AS verb_content, p2.text AS p2_text
  END_LQL
) lql
GROUP BY p1_text, verb_content, p2_text
ORDER BY cnt DESC
24
Protein-Protein Interactions Sample Output
PROTEIN 1                 VERB       PROTEIN 2                FREQUENCY
Ca2                       activates  protein kinase           312
Cln3                      activate                            234
TAP                       binds      transcription factor     192
TNF                                  protein tyrosine kinase  133
serine/threonine kinase   binding    RhoA GTPase              132
Phospholamban             inhibits   ATPase                   114
PRL                       activated                           108
Interleukin 2                                                 84
Prolactin                            AMPA                     78
Nerve growth factor
LPS                       inhibited  MHC class II             75
Heat shock protein        binding    p59                      72
EPO                                  STAT5                    63
EGF                                  PP2A                     60
cis                                  Sp1                      50
25
Example: Chemical-Disease Interactions
"A new approach to the respiratory problems of cystic fibrosis is dornase alpha, a mucolytic enzyme given by inhalation."
Goal: extract the relation that dornase alpha (potentially) prevents cystic fibrosis.
The MeSH C subtree contains diseases (cystic fibrosis falls under pancreatic diseases, C06.689). MeSH supplementary concepts represent chemicals.
26
Query on Disease-Chemical Interactions
27
Query on Disease-Chemical Interactions
[layer='sentence' { NO ORDER, ALLOW GAPS }
  [layer='shallow_parse' && tag_name='NP'
    [layer='chemicals'] AS chemical $ ]
  [layer='shallow_parse' && tag_name='NP'
    [layer='mesh' && tree_number BELOW 'C06.689%'] AS disease $ ]
] AS sent
SELECT chemical.text, disease.text, sent.text
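The ontological BELOW operator reduces to a prefix test on the MeSH tree-number string, which is why it can be pushed into SQL as a LIKE 'C06.689%' pattern. A minimal sketch (the descendant tree numbers here are illustrative):

```python
def below(tree_number: str, ancestor: str) -> bool:
    # Mirrors LIKE 'ancestor%'; a stricter check would also require a '.'
    # boundary so that a sibling like 'C06.6891' is not treated as a child.
    return tree_number.startswith(ancestor)

assert below("C06.689", "C06.689")          # the node itself
assert below("C06.689.202", "C06.689")      # a descendant
assert not below("C06.405.117", "C06.689")  # a different subtree
```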
28
Results: Chemical-Disease
29
Query Translation
30
Database Design & Evaluation
31
Database Design
Evaluated 5 different logical and physical database designs. The basic model is similar to that of TIPSTER (Grishman, 1996): each annotation is stored as a record in a relation.
Architecture 1 contains the following columns:
- docid: document ID
- section: title, abstract, or body text
- layer_id: a unique identifier of the annotation layer
- start_char_pos: starting character position, relative to the given section and docid
- end_char_pos: ending character position, relative to the given section and docid
- tag_type: a layer-specific unique token identifier
A separate table maps token IDs to entities (the string in the case of a word, the MeSH label(s) in the case of a MeSH term, etc.).
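A minimal mock-up of Architecture 1 in SQLite, assuming nothing beyond the column list above (the table names and the tag-to-entity mapping table are invented; the sample values follow the "Kinase inhibits RAG-1." example from the data-layout slide):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE annotation (
    docid INTEGER, section TEXT, layer_id INTEGER,
    start_char_pos INTEGER, end_char_pos INTEGER, tag_type INTEGER);
CREATE TABLE tag_entity (layer_id INTEGER, tag_type INTEGER, entity TEXT);
""")
con.executemany("INSERT INTO annotation VALUES (?,?,?,?,?,?)", [
    (3345, "b", 0, 34, 40, 59571),   # word layer: "Kinase"
    (3345, "b", 1, 34, 40, 27),      # POS layer: NN over the same span
])
con.executemany("INSERT INTO tag_entity VALUES (?,?,?)", [
    (0, 59571, "Kinase"), (1, 27, "NN"),
])
# Join the word and POS layers on identical character spans:
rows = con.execute("""
    SELECT w.entity, p.entity
    FROM annotation a JOIN annotation b
      ON a.docid = b.docid AND a.section = b.section
     AND a.start_char_pos = b.start_char_pos
     AND a.end_char_pos = b.end_char_pos
    JOIN tag_entity w ON w.layer_id = a.layer_id AND w.tag_type = a.tag_type
    JOIN tag_entity p ON p.layer_id = b.layer_id AND p.tag_type = b.tag_type
    WHERE a.layer_id = 0 AND b.layer_id = 1
""").fetchall()
assert rows == [("Kinase", "NN")]
```

The cross-layer join on character spans is the basic operation every LQL query compiles down to under this design.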
32
Database Design (cont.)
Architecture 2 introduces one additional column, sequence_pos, defining an ordering within each layer. This simplifies some SQL queries: there is no need for the "NOT EXISTS" self-joins that Architecture 1 requires when tokens from the same layer must immediately follow each other.
Architecture 3 adds sentence_id, the number of the current sentence, and redefines sequence_pos as relative to both layer_id and sentence_id. This simplifies most queries, since they are often limited to a single sentence.
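The point about adjacency can be made concrete with a small SQLite sketch (schema and values invented for illustration): with sequence_pos, "B immediately follows A" is a simple equality join; without it, one must prove via NOT EXISTS that no same-layer token lies in between.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE ann (layer_id INT, start_char_pos INT,
                                 end_char_pos INT, sequence_pos INT, tag TEXT)""")
con.executemany("INSERT INTO ann VALUES (?,?,?,?,?)", [
    (1, 0, 6, 1, "noun"), (1, 7, 15, 2, "verb"), (1, 16, 21, 3, "noun"),
])

# Architecture 2 style: adjacency via consecutive sequence positions.
seq = con.execute("""SELECT a.start_char_pos, b.start_char_pos FROM ann a
                     JOIN ann b ON b.sequence_pos = a.sequence_pos + 1
                     WHERE a.tag='verb' AND b.tag='noun'""").fetchall()

# Architecture 1 style: adjacency by proving no token intervenes.
pos = con.execute("""SELECT a.start_char_pos, b.start_char_pos FROM ann a
                     JOIN ann b ON b.start_char_pos > a.end_char_pos
                     WHERE a.tag='verb' AND b.tag='noun'
                       AND NOT EXISTS (SELECT 1 FROM ann c
                            WHERE c.start_char_pos > a.end_char_pos
                              AND c.end_char_pos < b.start_char_pos)""").fetchall()
assert seq == pos == [(7, 16)]
```

Both formulations find the same verb-noun pair, but the first is cheaper and far easier to generate automatically from LQL.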
33
Database Design (cont.)
Architecture 4 merges the word and POS layers and adds word_id, assuming a one-to-one correspondence between the two. This reduces the number of stored annotations and the number of joins in queries with both word and POS constraints.
Architecture 5 replaces sequence_pos with first_word_pos and last_word_pos, which correspond to the sequence_pos of the first/last word covered by the annotation. It requires all annotation boundaries to coincide with word boundaries, copes naturally with adjacency constraints between different layers, and allows for a simpler indexing structure.
34
Data Layout for all 5 Architectures
Example: "Kinase inhibits RAG-1."

PMID  SEC  LAYER        START  END  TAG        SEQ  SENT  WORD_ID  FIRST/LAST WORD POS
3345  b    0 (word)     34     40   59571      1    2     59571    1 / 2
3345  b    0            41     49   55608      2    2     55608    2 / 3
3345  b    0            50     55   89985      3    2     89985    3 / 4
3345  b    1 (POS)      34     40   27 (NN)    1    2     59571    1 / 2
3345  b    1            41     49   53 (VB)    2    2     55608    2 / 3
3345  b    1            50     55   27         3    2     89985    3 / 4
3345  b    3 (s.parse)  34     40   31 (NP)    1    2              1 / 2
3345  b    3            41     49   59 (VP)    2    2              2 / 3
3345  b    3            50     55   31         3    2              3 / 4
3345  b    5 (gene)     34     40   39 (prt)   1    2              1 / 2
3345  b    5            50     55   39         2    2              3 / 4
3345  b    6 (mesh)     34     40   10770      1    2              1 / 2
3345  b    6            50     55   16654      2    2              3 / 4

Columns PMID through TAG: basic architecture. SEQ: added in Architecture 2. SENT: added in Architecture 3. WORD_ID: added in Architecture 4. FIRST/LAST WORD POS: added in Architecture 5.
35
Indexing Structure
- Two types of composite indexes: forward and inverted.
- An index lookup can be performed on any column combination that corresponds to an index prefix.
- The forward indexes support lookup based on position in a given document.
- The inverted indexes support lookup based on annotation values (i.e., tag type and word ID).
- Most query plans involve both forward and inverted indexes.
- Detailed statistics are essential: join statistics would have been useful, and the standard statistics in DB2 are insufficient.
- Records are clustered on their primary key.
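The index-prefix rule can be illustrated with a sorted list of key tuples: any leading subset of the columns supports a range scan, just as in a B-tree composite index (the key values below are invented).

```python
from bisect import bisect_left, bisect_right

# Inverted-index keys: (layer_id, tag_type, docid), kept sorted.
keys = sorted([(1, 27, 3345), (1, 27, 4001), (1, 53, 3345), (5, 39, 3345)])

def prefix_range(keys, prefix):
    """All keys whose leading columns equal `prefix` (an index-prefix scan)."""
    lo = bisect_left(keys, prefix)
    # Pad the prefix with +infinity so bisect_right lands past every match.
    hi = bisect_right(keys, prefix + (float("inf"),) * (3 - len(prefix)))
    return keys[lo:hi]

assert prefix_range(keys, (1,)) == [(1, 27, 3345), (1, 27, 4001), (1, 53, 3345)]
assert prefix_range(keys, (1, 27)) == [(1, 27, 3345), (1, 27, 4001)]
```

A lookup on (tag_type, docid) without layer_id would not be a prefix of this key and would force a full scan, which is why both forward and inverted column orders are maintained.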
36
Indexing Structure (cont.)
Arch 1-4:
  F: *DOCID + SECTION + LAYER_ID + START_CHAR_POS + END_CHAR_POS + TAG_TYPE
  I: LAYER_ID + TAG_TYPE + DOCID + SECTION + START_CHAR_POS + END_CHAR_POS
Arch 2:
  F: DOCID + SECTION + LAYER_ID + SEQUENCE_POS + TAG_TYPE + START_CHAR_POS + END_CHAR_POS
  I: LAYER_ID + TAG_TYPE + DOCID + SECTION + SEQUENCE_POS + START_CHAR_POS + END_CHAR_POS
Arch 3-4:
  F: DOCID + SECTION + LAYER_ID + SENTENCE + SEQUENCE_POS + TAG_TYPE + START_CHAR_POS + END_CHAR_POS
  I: LAYER_ID + TAG_TYPE + DOCID + SECTION + SENTENCE + SEQUENCE_POS + START_CHAR_POS + END_CHAR_POS
Arch 4:
  I: WORD_ID + LAYER_ID + TAG_TYPE + DOCID + SECTION + START_CHAR_POS + END_CHAR_POS + SENTENCE + SEQUENCE_POS
Arch 5:
  F: *DOCID + SECTION + LAYER_ID + SENTENCE + FIRST_WORD_POS + LAST_WORD_POS + TAG_TYPE
  I: LAYER_ID + TAG_TYPE + DOCID + SECTION + SENTENCE + FIRST_WORD_POS + LAST_WORD_POS
  I: WORD_ID + LAYER_ID + TAG_TYPE + DOCID + SECTION + SENTENCE + FIRST_WORD_POS

(F = forward index, I = inverted index)
37
Experimental Setup
- Annotated 13,504 MEDLINE abstracts.
- Stanford Lexicalized Parser (Klein and Manning, 2003) for sentence splitting, word tokenization, POS tagging, and parsing.
- We wrote a shallow parser and tools for gene and MeSH term recognition.
- This resulted in 10,910,243 records, stored in an IBM DB2 Universal Database server.
- Defined 4 workloads based on variants of the queries above.
38
Experimental Setup: 4 Workloads
(a) Protein-Protein Interaction (Blaschke et al., 1999):
[layer='sentence' {ALLOW GAPS}
  [layer='gene'] AS gene1
  [layer='pos' && tag_name="verb" && content="binds"] AS verb
  [layer='gene'] AS gene2 ]
SELECT gene1.content, verb.content, gene2.content

(b) Protein-Protein Interaction (Thomas et al., 2000):
[layer='sentence'
  [layer='shallow_parse' && tag_name="NP"] AS np1
  [layer='pos' && tag_name="verb" && content='binds'] AS verb
  [layer='pos' && tag_name="prep" && content='to']
  [layer='shallow_parse' && tag_name="NP"] AS np2 ]
SELECT np1.content, verb.content, np2.content

(c) Descent of Hierarchy (Rosario et al., 2002):
[layer='shallow_parse' && tag_name="NP"
  [layer='pos' && tag_name="noun"
    ^ [layer='mesh' && tree_number BELOW "G07.553"] AS m1 $ ]
  [layer='pos' && tag_name="noun"
    ^ [layer='mesh' && tree_number BELOW "D"] AS m2 $ ] ]
SELECT m1.content, m2.content
Example: A01 A07 -> limb:vein, shoulder:artery

(d) Acronym-Meaning Extraction (Pustejovsky et al., 2001):
[layer='shallow_parse' && tag_name="NP"] AS np1
[layer='pos' && content='(']
[layer='shallow_parse' && tag_name="NP"] AS np2
[layer='pos' && content=')']
39
Results

Workload         (a)    (b)    (c)    (d)
#Queries          54     11     50      1
#Results/query  303.4   77.5    1.6  16,701
LQL lines          8      6      5      4

Query time per architecture (sec):
Architecture       1       2       3       4       5
(a)             3.98    4.35    3.59    1.69    1.94
(b)             3.88    5.68    5.41    3.85    3.55
(c)             17.9    23.42   21.49   30.07   4.06
(d)             1,879   1,700   2,182   1,682   1,582

SQL lines per architecture: (a)/(b): 37, 34, 29, 91, 77, 75, 65, 50; (c)/(d): 45, 38, 39, 41, 59, 50, 53, 35
#Joins: (a)/(b): 6, 12, 11, 9, 7; (c)/(d): 7, 6
40
Results

Storage (MB)        1        2        3        4       5
Data              168.5    168.5    168.5    132.5   136.5
Index             617.0  1,397.0  1,441.0  1,182.0   673.5
Total             785.5  1,565.5  1,609.5  1,314.5   810.0

- Architecture 5 performs well (if not best) on all query types, while each of the other architectures performs poorly on at least one query type.
- The storage requirement of Architecture 5 is comparable to that of Architecture 1.
- Architecture 5 results in much simpler queries.
- Conclusion: we recommend Architecture 5 in most cases, or Architecture 1 if an atomic annotation layer cannot be defined.
41
Scalability Analysis
- Combined workload of 3 query types
- Varying buffer pool sizes
42
Scalability Analysis

Buffer Pool Size (MB)  Elapsed Time (ms)  Buffer Read Time (ms)
1000                   2300               1050
100                    2900               1670
10                     4600               3340
1                      8300               6250

- Query execution time grows sub-linearly as the available memory shrinks.
- We believe a similar ratio will be observed when increasing the database size while keeping the memory size fixed.
- Parallel query execution can be enabled after partitioning the annotations on document ID.
43
Study on a larger dataset
Annotated 1.4 million MEDLINE abstracts: 10 million sentences, 320 million annotations, 70 GB total database size.

Workload          (a)      (b)    (c)     (d)      Random (a,b,c)
#Queries           54       11     50       1       115
#Results/query  32,295    5,420    48   113,483   15,686
Time/query        0:50    55:44   1:35   3:33:57    6:26
44
Related Work
- Annotation graphs (AG): directed acyclic graph; nodes can have time stamps or are constrained via paths to labeled parents and children (Bird and Liberman, 2001). Example, find arcs labeled as words whose phonetic transcription starts with "hv":
    SELECT I WHERE X.[id:I].Y <- db/wrd X.[:hv].[]*.Y <- db/phn;
- Emu system: sequential levels of annotation; hierarchical relations may exist between different levels, but must be explicitly defined for each pair (Cassidy & Harrington, 2001). Example, find sequences of phonetic "A" followed by "p", both dominated by an "S" syllable:
    [[Phonetic=A -> Phonetic=p] ^ Syllable=S]
- The Q4M query language for MATE: directed graph; constraints and ordering of the annotated components; stored in XML (McKelvie et al., 2001). Example, find nouns followed by the word "lesser":
    ($a word) ($b word); ($a pos ~ "NN") && ($a <> $b) && ($b # ~ "lesser")
- TIQL (TIMS system): queries consist of manipulating intervals of text, indicated by XML tags; supports set operations (Nenadic et al., 2002). Example, find sentences containing the noun phrase "COUP-TF II" and the verb "inhibit":
    (<SENTENCE> <TERM nf='COUP TF II'>) <V lemma='inhibit'>
45
What about XQuery/XPath?
46
Main Advantages of LQL System
- Stand-off annotation: flexible and modular; multi-layered, including overlaps
- LQL: simple yet powerful; support for hierarchies; optimized for cross-layer queries; much more expressive than standard text search engines
- Seamless integration with SQL and RDBMS: easy integration with additional data sources; simple parallelism
- Full text support: caption search; formatting-aware queries; flexible support for document structure
47
On the Horizon
- Full-text document support (really complex in bioscience text): caption search; formatting-aware annotation layers; flexible support for document structure
- Query simplification: shorthand syntax; GUI helper
48
Syntax-Helper Interface
49
Thank you! biotext.berkeley.edu/lql
50
Overlap Example
51
Meta-data Tables: BIOTEXT_ANNOTATION_LAYER

LAYER_ID  LAYER_NAME     OWNER   LAST_UPDATED
1         pos            hearst  6/12/2005
2         full_parse
3         shallow_parse
4         sentence
5         gene
6         mesh
7         chemicals
52
Meta-data Tables: BIOTEXT_ANNOTATION_ATTRIBUTES

LAYER_ID  ATTRIBUTE    ATTRIBUTE_FIELD  TABLE_NAME
-1        layer        layer_id         biotext_annotation_layers (layer_name)
          tag_name     tag_type         biotext_annotation_tag_types (tag_type_id, tag_group)
1         content      word_id          biotext_annotation_word (word, word_lower)
5         name                          locuslink_aliases (locus_id)
6         tree_number                   biotext_annotation_mesh_tree (descriptor_ui)
          mesh_term                     biotext_annotation_mesh_terms (mesh_term_lower)
53
Meta-data Tables: BIOTEXT_ANNOTATION_TAG_TYPES

TAG_TYPE_ID  TAG_NAME  TAG_GROUP
1019         IN
1020         INTJ
1021         JJ        adjective
1022         JJR
1023         JJS
1025         LS
1069         LST
1026         MD
1070         NAC
1027         NN        noun
1028         NNP
1029         NNPS
1030         NNS
1031         NP
1032         NX
54
Meta-data Tables: BIOTEXT_ANNOTATION_WORD

WORD_ID  WORD                WORD_LOWER
1        BCl                 bcl
2        2,2'-disulfonic
3
4        Premkumar           premkumar
5        329:
6        EVPROC              evproc
7        fascinae
8        fascines
9        Cox-Stuart          cox-stuart
10       epidydimo-orchitis
11       10-20-min
12       ng/ml
13       1.016x
14       Goldberg-Lindblom   goldberg-lindblom
15       Lundborg            lundborg
16       graft-loss
55
References
- Steven Bird and Mark Liberman. 2001. A formal framework for linguistic annotation. Speech Communication, 33(1-2):23-60.
- Steve Cassidy and Jonathan Harrington. 2001. Multi-level annotation in the Emu speech database management system. Speech Communication, 33(1-2):61-77.
- David McKelvie, Amy Isard, Andreas Mengel, Morten B. Moller, Michael Grosse, and Marion Klein. 2001. The MATE workbench: an annotation tool for XML coded speech corpora. Speech Communication, 33(1-2):97-112.
- Goran Nenadic, Hideki Mima, Irena Spasic, Sophia Ananiadou, and Jun-ichi Tsujii. 2002. Terminology-driven literature mining and knowledge acquisition in biomedicine. International Journal of Medical Informatics, 67:33-48.
- Ralph Grishman. 1996. Building an architecture: a CAWG saga. In Advances in Text Processing: Tipster Program Phase II. Morgan Kaufmann.
- Steve Cassidy. 1999. Compiling multi-tiered speech databases into the relational model: experiments with the Emu system. In Proceedings of Eurospeech '99, 2127-2130, Budapest, Hungary.
- Xiaoyi Ma, Haejoong Lee, Steven Bird, and Kazuaki Maeda. 2002. Models and tools for collaborative annotation. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC), 2066-2073.
56
Acquiring Labeled Data using Citances
57
A discovery is made … A paper is written …
58
That paper is cited … and cited … and cited … … as the evidence for some fact(s) F.
59
Each of these, in turn, is cited for some fact(s) …
… until it is the case that all important facts in the field can be found in citation sentences alone!
60
Citances
- Nearly every statement in a bioscience journal article is backed up with a citation.
- It is quite common for papers to be cited many times.
- The text around the citation tends to state biological facts. (Call these citances.)
- Different citances will state the same facts in different ways …
- … so can we use them to create models of language expressing semantic relations?